All technological advancements spark debate and speculation. This is even more apparent in the case of Artificial Intelligence (AI). The technology has been at the center of speculation and science fiction for decades, cementing its image as a thing of the future and a potential threat to the general public.
With Microsoft and Google releasing artificial intelligence such as ChatGPT-4 and Bard onto the internet, a child AI might be one of the scariest prospects on the planet.
A current child AI is an algorithm created by a parent AI at human direction: researchers build the parent AI specifically so that it will produce another artificial intelligence algorithm, the child AI. This implies that Artificial Intelligence has learned how to “procreate” by writing code and determining the parameters the new algorithm needs in order to operate.
Taken to its furthest extent, AI could eventually be in a position to make decisions on its own, without human intervention. The short answer to the question “What is a child AI?” is an AI built by another AI, proof that Artificial Intelligence can indeed learn how to create new Artificial Intelligence.
This realization also poses a set of other questions about the implications of child AI. These include Artificial Intelligence’s limitations, capabilities, and even morality. In this article, we are going to discuss these questions to contribute to the debate surrounding AI.
AI Can Learn How to Create Another AI
Nowadays, it is no longer a question of whether an Artificial Intelligence system can create another AI. There have already been instances of AIs that can build other AI systems. In fact, child AIs are so effective that they can outperform the human-built AIs that are created for the same purpose.
Right now, the most important question is whether an AI can learn how to build another AI on its own. Until now, Artificial Intelligence has relied on human guidance to perform this task: a parent system is created for the express purpose of building a new AI. This is how the first child AIs were produced.
Yet, researchers are trying to implement unsupervised learning that will eliminate the need for human direction. Every artificial intelligence discovery brings us one step closer to this goal, even though unsupervised learning brings various new challenges that researchers need to overcome.
Examples of Child AIs Today
When Google’s Brain Team announced its breakthrough in 2017, it became evident it would have a huge impact on society. This project was none other than AutoML, an AI that pushed the envelope of automated machine learning to produce a new Google AI child.
The researchers working on this team used a reinforcement learning approach: a controller proposes a child AI, which is then trained and evaluated, and the evaluation result feeds back to improve the controller’s next proposal. The results were more than impressive, since the new AIs outperformed already well-established ones.
A few months later, AutoML (automated machine learning) created its famous child AI, called NASNet. This AI was designed for object classification in images and was tested on both the ImageNet image classification and COCO object detection datasets. The results showed that NASNet performed better than comparable human-designed models on both datasets.
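The propose-train-evaluate loop described above can be sketched in a few lines. The search space, the proxy scoring function, and the bandit-style controller below are deliberately simplified illustrations of the idea, not Google’s actual method:

```python
import random

# Toy architecture-search loop in the spirit of AutoML: a "parent"
# controller samples child architectures, each child is scored, and
# the controller shifts probability mass toward better choices.

SEARCH_SPACE = [1, 2, 3, 4]  # candidate depths for the child network

def evaluate(depth: int) -> float:
    """Proxy score standing in for 'train the child, measure accuracy'."""
    return 1.0 - abs(depth - 3) * 0.2  # pretend depth 3 is optimal

def search(steps: int = 200, seed: int = 0) -> int:
    rng = random.Random(seed)
    # Controller state: one preference weight per candidate architecture.
    prefs = {d: 1.0 for d in SEARCH_SPACE}
    for _ in range(steps):
        # Sample a child architecture proportionally to preference.
        total = sum(prefs.values())
        r, running = rng.uniform(0, total), 0.0
        for d, w in prefs.items():
            running += w
            if r <= running:
                depth = d
                break
        # Reinforce the controller with the child's evaluation score.
        prefs[depth] += evaluate(depth)
    # Report the architecture the controller came to favour.
    return max(prefs, key=prefs.get)

print(search())  # prints the depth the controller favours after 200 steps
```

The real system replaces the preference table with a recurrent controller network and the proxy score with genuine training runs, but the feedback structure is the same.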
Another instance worth mentioning is the use of hypernetworks. Researchers have designed a hypernetwork that can predict the parameters of another, untrained neural network.
GHN-2 is another artificial intelligence that builds AI; it was trained on roughly one million architectures. After this training, GHN-2 can predict the parameters of an unseen AI system. The researchers tested untrained systems using the parameters GHN-2 predicted, with great success.
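The core idea can be illustrated with a deliberately tiny sketch: a parent function reads a description of the child’s architecture and emits a full set of weights, so the child runs without a single training step. The hand-written mapping below is a stand-in for the learned hypernetwork, not GHN-2 itself:

```python
# Minimal hypernetwork sketch in the spirit of GHN-2: instead of
# training a child network directly, a parent function maps the
# child's architecture description to a complete set of weights.

def hypernetwork(layer_sizes):
    """Predict weights for an untrained child MLP from its shape alone.
    Real graph hypernetworks learn this mapping from ~1M architectures;
    here we emit a simple deterministic fan-in-scaled initialisation."""
    weights = []
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        scale = 1.0 / n_in  # naive fan-in scaling
        weights.append([[scale] * n_in for _ in range(n_out)])
    return weights

def child_forward(weights, x):
    """Run the child network using only the predicted weights."""
    for layer in weights:
        x = [sum(w * xi for w, xi in zip(row, x)) for row in layer]
    return x

# The child never sees a training step: its parameters come
# entirely from the parent. Output is approximately [2.5].
w = hypernetwork([4, 3, 1])
print(child_forward(w, [1.0, 2.0, 3.0, 4.0]))
```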
What Are the Limitations of Existing Child AIs?
As we have already seen, Artificial Intelligence algorithms can create other AIs with amazing results. However, it is important to highlight that there are some limitations.
First of all, the systems existing today need the supervision of a human. This means that the parent AI needs to be created for the purpose of producing another AI system. Of course, researchers are constantly pushing the boundaries of modern technologies, and they have already proven in theory that an AI can create a child AI on its own.
Another limitation to keep in mind is that machine learning requires large amounts of computational power, which is expensive and not sustainable. Quantum computers are one avenue that may change all of that, but for now, they remain a technology of the future.
One more issue that arises with hypernetworks is the complexity of the system. When we use an AI algorithm to predict the parameters of another AI, we may not be in a position to trace where a mistake happened if one occurs. This opacity is sometimes called black-box AI.
Please see our article Is Black Box AI Dangerous? if you are interested in learning more about the subject.
Is It Possible for an AI to Create an AI without Human Intervention?
One of the most prevalent questions nowadays is undoubtedly whether an AI can create a child AI without human interference. This is where technology really starts to resemble science fiction, and this might not be too far off in the future.
The short answer is: not yet. We discuss this question in depth in our article “Can an AI Create Another AI?”
While GHN-2 can predict parameters that we can later use in an untrained system, there is already an AI that can write code outright. OpenAI’s Codex was created to understand commands in natural language and convert them into working code, even a fully functional website, in any of a dozen programming languages.
Of course, Codex currently comes with limitations that make it unable to create an AI on its own. Nevertheless, it proves the potential to develop into something more.
However, one technology direction that could lead to an AI creating a child AI without human intervention is Paired Open-Ended Trailblazer (POET). The algorithm creates various environmental challenges and obstacle courses that virtual bots have to overcome. The design of the obstacles, as well as the assessment of the bots’ capabilities, is generated entirely by the algorithm.
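POET’s core loop can be caricatured in a few lines: the algorithm mutates an agent against the current challenge and raises the challenge only when the agent can still meet it. The one-dimensional “environment” and “skill” numbers below are stand-ins for POET’s real obstacle courses and walking bots:

```python
import random

# Toy open-ended loop in the spirit of POET: the algorithm itself
# invents progressively harder environments (here, just a difficulty
# threshold) and improves an agent (a single skill number) against
# them. No human designs the curriculum at any point.

def poet_loop(generations: int = 50, seed: int = 0):
    rng = random.Random(seed)
    env, agent = 0.1, 0.0  # initial challenge level and agent skill
    for _ in range(generations):
        # Optimise the agent: keep mutations that solve the current env.
        candidate = agent + rng.uniform(0.0, 0.2)
        if candidate >= env:
            agent = candidate
        # Generate a new environment: harder, but still solvable,
        # as judged by the algorithm itself rather than a human.
        harder = env + rng.uniform(0.0, 0.1)
        if agent >= harder:
            env = harder
    return env, agent

env, agent = poet_loop()
print(f"final challenge {env:.2f}, agent skill {agent:.2f}")
```

The interesting property, visible even in this caricature, is co-evolution: the challenges and the solver improve together without external direction.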
Both Codex and POET are distinct technologies with different applications. However, there might be a way to combine these two approaches to create a new AI algorithm. This way, the AI would have the ability to assess the environment and write the code to create a child AI without needing humans to provide instructions.
Artificial General Intelligence – A Thing of the Future?
At the moment, Artificial Intelligence seems to have reached a stage we had previously only encountered in speculative fiction. Every recent technological advancement poses the question: is Artificial General Intelligence (AGI) within our reach? If such a thing is possible, an AI able to produce a child AI would potentially be a step in that direction.
But what really is Artificial General Intelligence? The term refers to a system with the ability to understand, learn, and perform any task that humans can do. This means that the algorithm possesses the cognitive ability to analyze any given task, even when it has unknown parameters.
All of the AI systems developed to create new AIs touch on this subject. The OpenAI Codex can already understand natural language, and POET can already analyze and design an environment on its own. However, we are still far from the point where these algorithms approach the human-level intelligence that AGI describes.
A very interesting paper called “When Will AI Exceed Human Performance?” surveys researchers on when AI will outperform humans in specific areas. According to this study, researchers estimate a 50% chance that we will reach high-level machine intelligence (HLMI) within around 45 years.
AI and Humanity – Ethical Questions
When we are discussing AI that has the cognitive ability to create another AI on its own, we also need to talk about the ethical implications.
Artificial Intelligence has already taken leaps that bring more accuracy and speed than ever before. However, what would it mean for human beings if AIs had the ability to create new AIs without human supervision?
Every human decision is based on several factors, from scientific facts to morality. For many, the errors that AIs have already made in decision-making are proof that they shouldn’t be in a position to act without human oversight.
This is also the reason why AI alignment has become so important with the passing of time. Aligning the algorithms with what humans want and mean might solve several problems that arise with child AIs. Now, more than ever, there is a need for an AI ethics committee that will work as a safeguard against these concerns.
How Is the Future Looking for Child AIs?
There is no denying that technology is only going to move forward. Artificial Intelligence will continue to find new applications in our daily lives, which will only boost its significance.
Algorithms such as AutoML will lead to breakthroughs in computer vision, natural language, structured tabular data, and many other applications. The immediate future of child AIs is to deliver solutions to specific problems, with researchers using AIs to produce new algorithms that work better than existing ones.
At the same time, the amount of research into unsupervised learning and open-ended artificial intelligence is only going to increase. It is only a matter of time before an AI with the ability to produce a child AI without human direction or intervention is created.
Artificial Intelligence is continuously advancing, bringing new facets into the limelight. An AI algorithm that can create a child AI is not only an exciting technological advancement but also a possible step toward self-replicating algorithms.
Apart from the knowledge and expertise needed for a system like this to exist, we also need to take into consideration all ethical concerns around it. This is the only way to embrace Artificial General Intelligence and use it for the greater good of human society.