EVOLUTION OF A.I. USING DARWINIAN CONCEPTS
In the field of computer science, modern algorithms are evolving every day, and we should thank Darwin for that! Surprised? You shouldn’t be.
Nature has been instrumental in the evolution of mankind. So, it is only natural that we draw inspiration from the biological world around us. One of the new trends in Artificial Intelligence (A.I.) is the evolution of A.I. using Darwinian concepts.
Before diving further, let us first revisit Darwin’s concept of evolution. According to Darwin, an organism undergoes random mutations that change its characteristics over the course of generations. Each new characteristic is exposed to the natural environment.
If the new characteristic helps the organism survive in that environment, the organism lives longer and passes this advantageous trait on to future generations. This is evolution. If, on the other hand, the mutation is a disadvantage, the organism will probably die early and leave no offspring, or at least fewer of them.
In that way the mutation tends to disappear over future generations. Over time, only beneficial mutations are judged ‘fittest’ by the environment (this is also known as natural selection) and endure for generations.
NATURAL SELECTION DEFINES EVOLUTION OF A.I.
The same logic can be used in Computer Science. Let’s say we have an algorithm to recognize a given pattern. We run it on a test batch and see how many patterns it identifies correctly. As in nature, we then make small changes (mutations) in different copies of the algorithm and test each copy on the same batch.
If a ‘mutated’ algorithm achieves better results than the old one, it will “live”; otherwise it will “die” (be deleted).
Of course, it is necessary to have a function that evaluates which algorithm is the most suitable (the equivalent of natural selection). The survivors are then mutated and evaluated again, and this process repeats for a fixed number of iterations or until the results are satisfactory.
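The mutate–evaluate–select loop described above can be sketched in a few lines of Python. As a stand-in for “recognizing a pattern”, this toy example evolves a bit string toward a hypothetical target pattern; the target, population size, and mutation rate are all illustrative choices, not anything prescribed by the technique itself.

```python
import random

random.seed(0)

# Hypothetical target pattern standing in for the task we want solved.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    """The evaluation function (our 'natural selection'):
    how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability (the 'mutation')."""
    return [1 - b if random.random() < rate else b for b in candidate]

# Start with a population of random candidates.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    # Selection: the fittest half "lives", the rest "dies".
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]

best = max(population, key=fitness)
```

Because the survivors are carried over unchanged (so-called elitism), the best fitness in the population can never decrease from one generation to the next.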
It is important to note that this process will not necessarily lead to an optimal solution, only a near-optimal one: the evolved algorithm may not recognize the pattern perfectly. But, on the other hand, it will certainly recognize it better than the initial one.
This kind of strategy is often used together with neural networks (where the mutations act on the weights of each neuron) or in the optimization of delivery routes (where a mutation changes the next delivery point).
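To make the neural-network case concrete, here is a minimal sketch of evolving weights instead of training them by gradient descent. It assumes the smallest possible “network”, a single linear neuron, and a made-up regression task (learning y = 2x + 1); real neuroevolution applies the same loop to the full weight vector of a network.

```python
import random

random.seed(1)

# Hypothetical training data for a toy task: learn y = 2*x + 1.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def predict(weights, x):
    w, b = weights
    return w * x + b  # a single linear "neuron"

def fitness(weights):
    # Higher is better: negative sum of squared errors on the data.
    return -sum((predict(weights, x) - y) ** 2 for x, y in DATA)

def mutate(weights, sigma=0.1):
    # The mutation perturbs each weight with small Gaussian noise.
    return [w + random.gauss(0, sigma) for w in weights]

population = [[random.uniform(-1, 1), random.uniform(-1, 1)]
              for _ in range(30)]

for _ in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Each survivor contributes two mutated offspring.
    population = survivors + [mutate(s) for s in survivors for _ in range(2)]

best = max(population, key=fitness)
```

Note that no gradients are computed anywhere: the loop only needs to run the network and compare scores, which is exactly what makes the approach so generally applicable.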
But this technique is not new. In fact, it has been in use for several years now.
A GENERIC ALGORITHM TO TRAIN OTHER ALGORITHMS
OpenAI, which is backed by Microsoft, is trying to go one step further. Instead of training one algorithm to perform a specific task, such as pattern recognition, the idea is to have a generic “master” algorithm train other algorithms (the “workers”) to do unknown tasks.
That way, machines can learn more efficiently even in a completely unknown situation. For example, the workers play Atari games; the ones with the best results are selected by the master and then mutated, and the process repeats again and again. On the other hand, this process has its limitations.
In the Atari example, the workers report only their final score, so the master cannot identify any good moves made during play. Doing it any other way would require a great deal of computing power, which makes it impractical and inefficient. And even if that computing power were available, a genetic algorithm would surely not be the best way to solve the problem.
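The score-only setup can be sketched as a simple evolution strategy: the master keeps one parameter vector, each worker evaluates a noisy copy of it and reports back a single number, and the master nudges the parameters toward the noise directions that scored well. The reward function below is a made-up stand-in for an Atari episode (highest when the parameters reach a hypothetical target), and the hyperparameters are illustrative.

```python
import random

random.seed(2)

def episode_score(params):
    """Stand-in for one Atari episode: the worker reports only a
    final score, never which individual moves were good."""
    # Toy reward, highest at params == [3, -1] (hypothetical target).
    return -((params[0] - 3) ** 2 + (params[1] + 1) ** 2)

theta = [0.0, 0.0]               # the master's current parameters
sigma, alpha, pop = 0.1, 0.02, 50

for _ in range(200):
    noises, scores = [], []
    for _ in range(pop):                      # each worker...
        eps = [random.gauss(0, 1) for _ in theta]
        noises.append(eps)
        # ...plays with a mutated copy and reports only its score.
        trial = [t + sigma * e for t, e in zip(theta, eps)]
        scores.append(episode_score(trial))
    # Master update: move toward the noise directions that scored
    # above average (a crude, score-only gradient estimate).
    mean = sum(scores) / pop
    for i in range(len(theta)):
        grad = sum((s - mean) * eps[i]
                   for s, eps in zip(scores, noises)) / (pop * sigma)
        theta[i] += alpha * grad
```

The appeal of this scheme is that the workers are embarrassingly parallel and only exchange a few numbers; its weakness, as noted above, is that all the information inside an episode is compressed into one scalar score.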
With companies such as Google and Microsoft investing heavily in this topic, we can expect great advances in this field in the coming years. This will contribute greatly to the future of AI applications and may solve many optimization problems that, today, would require an enormous amount of computing power.