Can an AI Create Another AI?

Artificial intelligence (AI) can be described as a computer program that can learn and improve, unlike other programs that are limited to executing the programmer’s instructions.

Yes, AI can create another AI. Since 2017, there have been published instances of AI creating other AIs or “Child AIs.” This process is not independent. AI that can make other AI is usually built for that purpose and taught how to do it. 

As with many aspects of AI technology, the question of whether AI can create other AI is not a simple yes/no matter. Read on to understand AI’s ability to create other AIs, or “child AIs,” both as it stands today and as it is projected to develop in the near future.

Could AI Learn To Create Itself? 

To answer the question “Could AI learn to create itself?” accurately, it helps to note that there are generally two types of learning:

  • Supervised learning: a student learns what a teacher is teaching. The teacher has a predetermined goal, and learning is successful when that goal is met.
  • Unsupervised learning: learning happens without direction from a third party. There is no predetermined definition of success; any learning that occurs counts as success.

This analogy corresponds to the two types of machine learning used in AI:

  • AI Supervised Learning, where the learning outcome is predetermined. 
  • AI Unsupervised Learning, where the result of learning is unknown. 

Through supervised learning, AI has already learned to create other AI. Unsupervised learning, by contrast, is still a young field, and no published AI has learned to create other AIs of its own accord.
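To make the distinction concrete, here is a toy sketch in pure Python. The data and both mini-algorithms are invented for illustration and are not taken from any real AI system: a one-nearest-neighbour classifier stands in for supervised learning, and a crude two-means clustering stands in for unsupervised learning.

```python
# Toy illustration in pure Python; data and algorithms are invented
# for this example, not taken from any real AI system.
points = [0.0, 0.1, 1.0, 1.1]

# Supervised: a "teacher" supplies the right answers (labels), and
# success means predicting them correctly.
labels = {0.0: "a", 0.1: "a", 1.0: "b", 1.1: "b"}

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest known point.
    nearest = min(labels, key=lambda p: abs(p - x))
    return labels[nearest]

print(predict(0.2))   # -> "a" (closest to the 0.0/0.1 group)

# Unsupervised: no labels at all; a crude two-means clustering finds
# structure in the data on its own.
c1, c2 = min(points), max(points)          # initial cluster centres
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(g1, g2)   # the two groups it discovered by itself
```

The supervised half can only ever reproduce the teacher’s labels; the unsupervised half discovers the same two groups without ever being told they exist.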

Researchers Have Already Built AI That Can Create Other AI

Google DeepMind – Open-Ended Learning Leads to Generally Capable Agents

It is now possible to create an AI capable of building other AIs, which is more about algorithms than sentient beings. To understand how this works, we must first be clear on what an AI is. 

An AI can be a sentient super-program capable of insane feats of intelligence, consciousness, morality, and so on, as seen in science fiction movies and possibly in the future. However, the sort of AI in the real world is not as fancy yet. 

Modern AIs are advanced and well-trained algorithms that are highly efficient in specialized tasks. Not surprisingly, any AI they create will also be an algorithm suited for one function. 

And that’s what AutoML from Google is. 

Google’s AutoML – Child AIs

Google has built an AI that can make other AIs: AutoML

Before AutoML, Google had a major problem: They used a lot of time and manpower to build machine learning algorithms. 

So right from the start, they had a clear goal: to create an AI that could help them build other AIs (aka machine learning (ML) models).

And they were able to achieve their goal. AutoML can develop machine learning algorithms (AIs) that are just as effective as ML algorithms created by humans. 

Better still, these “Child AIs” are more accurate in many instances. And since part of the work is being done by the parent AI instead of programmers, it is a less labor-intensive process. 

AutoML in Action: NASNet

In 2016, Barret Zoph and Quoc V. Le proposed neural architecture search, and a year later, they proposed NASNet-A.

A perfect example of AutoML in action is NASNet, a child AI created explicitly for object recognition. NASNet was 1.2% more accurate at that job than any other existing system.

While success stories like AutoML are impressive, the resulting AI is limited to specific, highly specialized jobs. And while child AIs like NASNet have high utility, they don’t come close to the objective of matching or eclipsing human intelligence yet. 

The Google AutoML approach is promising and is on the market today to train custom machine-learning models. 

What is a Child AI?

Google released AutoML in May 2017 and NASNet that November, aiming to automate the design of machine learning models and reduce the amount of human data-scientist labor needed to build artificial intelligence (AI) algorithms.

These would eventually include Computer Vision, Labeled Data, and Natural Language Processing (NLP), among other areas.

Google said in 2017 that they had created a controller neural net that can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task.

“That information is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
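The propose-evaluate-feedback loop Google describes can be caricatured in a few lines of Python. Everything here is a stand-in: the architecture is reduced to two made-up numbers (depth and width), the scoring function replaces hours of GPU training, and simple random mutation replaces the learned controller network.

```python
# Heavily simplified sketch of the AutoML/NAS loop: propose a child
# architecture, score it, and let the score steer the next proposal.
# All names and numbers are invented stand-ins for the real system.
import random

random.seed(0)

def evaluate(arch):
    # Stand-in for "train the child network and measure its accuracy";
    # a real system would spend GPU-hours here.
    depth, width = arch
    return -abs(depth - 6) - abs(width - 32) / 8   # best possible: (6, 32)

best = (2, 8)                       # the first proposed child architecture
best_score = evaluate(best)
for _ in range(200):                # "thousands of times" in Google's loop
    # Propose a new child by nudging the current best (the "controller").
    depth = max(1, best[0] + random.choice([-1, 0, 1]))
    width = max(8, best[1] + random.choice([-8, 0, 8]))
    score = evaluate((depth, width))
    if score > best_score:          # feedback improves the next proposal
        best, best_score = (depth, width), score

print("best architecture found:", best)
```

Even this crude hill-climbing version shows the key property: each round’s result informs the next proposal, so the search only ever keeps improvements.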

There have been breakthroughs ever since, and more will follow now that Google has open-sourced NASNet for use in real-world applications.

Could AI Create Other AI Without Human Direction?

Uber’s POET Algorithm – Box2D Bipedal Walker in OpenAI Gym

AutoML was explicitly made to automate building machine learning algorithms in the Google example above. AutoML is directed by humans to create AI and taught how to do it. 

An equally exciting scenario is if an AI independently creates a child AI on its own.

But is that even possible? 

As far as unsupervised learning goes, AI researchers have made promising progress, indicating that it’s theoretically possible for AI to learn to create other AI independently.

An example of progress in this area is the paired open-ended trailblazer (POET). 

What is Open-Ended AI?

Uber agents traversing unknown obstacles in Paired Open-Ended Trailblazer (POET) training.

Earlier in this article, we explained the difference between supervised and unsupervised learning. 

Google’s AutoML is a classic case of supervised learning. Google wanted AutoML to learn how to create machine-learning algorithms. That was the only measure of success. 

In unsupervised machine learning, the goal is not to learn a specific thing but rather to learn anything. This is perfectly exemplified by the optimization of agents in Uber’s Paired Open-Ended Trailblazer (POET) algorithm.

POET is a system built by the AI division at Uber. The system continually generates new environments and obstacles for bots to overcome. 

It is open-ended in that there’s no specific goal. The Agents learn to solve the problems posed by obstacle courses. Once a problem is solved, a new one is created. 

The system could run forever, endlessly generating new problems. That means endless solutions will be developed as the agents learn how to solve the issues, which allows for the element of surprise.
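In outline, that generate-and-solve loop looks something like the toy Python sketch below. The “environments” and the agent’s “training” are invented placeholders for POET’s real obstacle courses and optimization steps.

```python
# Toy sketch of POET's open-ended loop (all numbers invented):
# keep generating harder environments and keep "training" the agent
# until it solves each one -- there is no final goal.

def make_environment(difficulty):
    return {"obstacle_height": difficulty}

def agent_solves(skill, env):
    return skill >= env["obstacle_height"]

skill, solved = 0, []
for round_no in range(20):          # the real system could run forever
    env = make_environment(difficulty=round_no)
    while not agent_solves(skill, env):
        skill += 1                  # stand-in for an optimization step
    solved.append(round_no)

print(len(solved), "environments solved; final skill:", skill)
```

The loop never declares victory; every solved environment just triggers a harder one, which is exactly what “open-ended” means here.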

Agents in such situations have surprised researchers by coming up with solutions humans did not consider. With AI-generating algorithms, who knows? Maybe one of those solutions will be the creation of an AI!

To objectively assess the possibility of that happening, we need to address whether it fulfills one of the critical prerequisites for any form of intelligence (whether human or artificial) to create AI: the ability to write code. AI is essentially code. So for AI to develop other AI, it needs to be able to write code.

But can AI do that? Or is it already doing it?

AI Can Code

Researchers have developed AI that can do a lot, from identifying signs of cancer to running self-driving cars. One of the impressive capabilities AI currently has is writing code. 

Codex is an AI that can both write code and understand natural language, so it can take instructions in English and write the program in any of a dozen programming languages.

For example, you can tell Codex to create a simple website for your dog and ask it to include a circular profile photo. It will write the code for that website according to your instructions, and you will have your website.

Codex learned to do this by analyzing the code of numerous computer programs and multiple natural language texts.
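The idea can be caricatured in a few lines of Python. A real code model learns the mapping from language to code from millions of example programs; the hand-written template table below is only a stand-in for that learned mapping, and the HTML snippets are invented for the dog-website example above.

```python
# A caricature of what a Codex-style model does: turn a natural-language
# request into code. This hand-written template table stands in for the
# mapping Codex actually *learned* from a huge corpus of programs.
templates = {
    "circular profile photo":
        '<img src="dog.jpg" style="border-radius: 50%;" alt="my dog">',
    "page title": "<h1>My Dog's Website</h1>",
}

def generate(request):
    # Emit every template whose trigger phrase appears in the request.
    return "\n".join(code for phrase, code in templates.items()
                     if phrase in request.lower())

print(generate("Add a page title and a circular profile photo"))
```

The real model, of course, is not a lookup table: it generalizes to requests it has never seen, which is what makes it genuinely useful.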

Theoretically, and most probably, Codex would write code for AI creation if someone instructed it to do so.

But what would be even more interesting is if Codex decided, of its own volition, to write code and create an AI. However, that’s unlikely to happen because Codex is built to execute instructions and can’t act independently. 

Still, the Codex example shows that AI can write code.

Consequently, if the agents in a system like POET had the Codex-like ability to write code, it is conceivable that one of their surprise solutions could be the creation of AI to solve a specific problem. 

If that were to happen, AI would have learned how to create AI without supervision, which would be impressive. 

Why is There a Lack of Computational Power and AI Training Issues?

AI energy consumption graph showing that demand is on pace to outrun the power supply.

One of the limitations of advancing Artificial Intelligence is the data and power required to train machine learning models. Teaching AI algorithms involves analyzing immense amounts of data that require enormous computational and electrical power.

According to OpenAI, the computing power used to train the largest AI models has doubled every 3.4 months since 2012, compared with Moore’s law, under which computing power doubled roughly every two years.
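The gap between those two growth rates compounds dramatically, as a quick back-of-the-envelope calculation shows (the six-year window is arbitrary, chosen only to illustrate the compounding):

```python
# Back-of-the-envelope comparison of the two doubling rates quoted above,
# compounded over an arbitrary six-year window.
months = 6 * 12
ai_growth = 2 ** (months / 3.4)      # compute for largest models (OpenAI)
moore_growth = 2 ** (months / 24)    # Moore's law: doubling every ~2 years
print(f"AI training compute: ~{ai_growth:,.0f}x; Moore's law: ~{moore_growth:.0f}x")
```

Over six years, Moore’s law yields an 8x increase, while the 3.4-month doubling rate yields an increase of over two million times.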

The computational power currently available to process zettabytes, and eventually yottabytes, of stored digital data will not suffice for the complex requirements of AI in daily life. A yottabyte is so big that it would take approximately a billion supercomputers to store that much data.

This reality is especially true with the Internet of Things (IoT) permeation in our vehicles, appliances, and gadgets.

This scenario is like asking the question, “Will the Internet Ever Run Out of Space?”

AI Energy Consumption During Training

The computational and data demands of training deep learning neural networks are considerable. An AI model’s energy usage scales with the size and complexity of the network and of the task it is trained for.

The energy AI training requires is already straining supply, and constant economic and population growth adds further stress to our current power grids.

Why does AI training consume so much power? Today it can take more than half a million kilowatt-hours (kWh) to train a single AI model, and training is often necessary more than once.

Most current AI, whether trained with supervised or unsupervised learning, requires retraining when presented with new information. The pursuit of new unsupervised, open-ended AI, to the point of AIs creating other AIs that can build upon themselves, will require enormous energy unless alternatives can be found.

We are already having issues with our aging power grids in the United States and abroad, which will only worsen in the short term and increasingly affect AI training. We discuss aging power grid issues in our articles “What Happens if the American Power Grid Goes Down?” and “What Is an Internet Apocalypse?”

Issue of AI’s Energy Carbon Footprint and Effect on the Planet

There is also the issue of AI’s carbon footprint and its effect on the environment and the planet. 

In a 2019 study titled “Energy and Policy Considerations for Deep Learning in NLP,” researchers at the University of Massachusetts Amherst performed a life-cycle assessment of training several AI models.

They found that the process can emit more than 626,000 pounds of carbon dioxide, as shown in the tables below. That is equivalent to over 55 American homes’ electricity use for one year, or 34 million smartphone charges.

Familiar consumption                      CO₂e (lbs)
Air travel, 1 passenger, NY↔SF                 1,984
Human life, avg, 1 year                       11,023
American life, avg, 1 year                    36,156
Car, avg incl. fuel, 1 lifetime              126,000

Training one AI model (GPU)               CO₂e (lbs)
NLP pipeline (parsing, SRL)                       39
  w/ tuning & experimentation                 78,468
Transformer model (big)                          192
  w/ neural architecture search              626,155

Tables: Estimated CO₂ emissions from training common NLP models, compared to familiar consumption. Source: “Energy and Policy Considerations for Deep Learning in NLP,” Emma Strubell, Ananya Ganesh, and Andrew McCallum, College of Information and Computer Sciences, University of Massachusetts Amherst.
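A quick sanity check of the ratios implied by the study’s numbers, taken directly from the tables above:

```python
# Sanity-checking the ratios implied by the UMass study's figures.
nas_training = 626_155   # lbs CO2e: Transformer + neural architecture search
car_lifetime = 126_000   # lbs CO2e: average car, incl. fuel, whole lifetime
ny_sf_flight = 1_984     # lbs CO2e: one passenger, New York <-> San Francisco
print(round(nas_training / car_lifetime, 1), "car lifetimes")
print(round(nas_training / ny_sf_flight), "NY-SF flights")
```

Training one large model with neural architecture search emits roughly as much CO₂ as five average cars over their entire lifetimes, or about 316 cross-country flights.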

What is the Relationship Between Quantum Computing and AI?

Marissa Giustina, an American physicist and senior research scientist at the Google Quantum Artificial Intelligence Lab.

The building of quantum computers is new, although the technology has been brewing for years. It is possible to build and buy a quantum computer, though not for the average person. Quantum computers are still very much in the research and development stages.

There is already a relationship between Artificial Intelligence and quantum computing. In an article by IBM titled “Quantum Kernels Can Solve Machine Learning Problems That Are Hard For All Classical Methods,” IBM researchers found mathematical proof of a potential quantum advantage for quantum machine learning.

Quantum computers are complex and demanding machines owned by the likes of IBM, D-Wave, and Google. Quantum Computing may address the AI processing speed problem in the future but has not yet.

Quantum computing has the potential to solve many of the issues with AI training mentioned in this article, including energy and computational ability. Quantum computing with AI is also the most promising path to Artificial General Intelligence (AGI).

How Long Will It Be Before AI Independently Creates Other AI?

The feat of AI independently creating other AI seems to require human-like intelligence, but maybe it doesn’t.

If an AI can encounter a problem and decide to create a child AI to help solve the problem, that AI would have reasoned like a human and might be considered an artificial general intelligence. 

The trouble is that the scientific field does not yet understand how the human brain works and doesn’t seem close to creating artificial general intelligence.

If the goal of human-like intelligence is to be achieved, it’s unlikely to be through the AutoML approach. The POET approach is more likely to accomplish that feat. 

Conclusion

AI can already create child AIs that solve specific problems, as systems like Google’s AutoML and Uber’s Paired Open-Ended Trailblazer (POET) show. Currently, however, that only happens under the direction of programmers.

Theoretically speaking, though, an AI could stumble upon the ability to create another AI.

Many issues remain for the future of AI training and development, most of them related to energy and the continued growth of AI technology. Power grids and future AI energy demands will require ever more computing resources, which may be developed in conjunction with quantum computing.

AI cannot yet independently create its own AI to solve problems. Still, AI’s evolution may one day progress to creating other AI as a solution to specific issues and to discovering entirely new capabilities, ones that may have nothing to do with human intelligence and could be far superior.

Finally, if you are interested in further investigation and training, check this great link from O’Reilly: Open-endedness: The last grand challenge you’ve never heard of. “While open-endedness could be a force for discovering intelligence, it could also be a component of AI itself.”

O’Reilly – Open-endedness: The last grand challenge you’ve never heard of.

John Mortensen

I am a project manager, tech writer, and science enthusiast who loves to study the latest technology, such as AI, comedy, quantum computers, smartphones, headphones, and software.
