Self Coding AI, Synthetic Biology, Self-Replicating Nanobots: Scary Dangerous?

Vector image upper body head shot of an artificial intelligence self-replicating woman composed of synthetic biology metallic balls as she reforms on a white background.

OpenAI and now Microsoft’s ChatGPT4 has been all over the news lately as the latest AI Search Engine on Bing. Sam Altman, the CEO of Open AI testifying before the U.S. Congress on C-SPAN, said his greatest fear is AI getting away from us or out of hand.

Self coding AI, synthetic biology, and self-replicating nanobots are all here and real today. If you throw a quantum computer in the mix, you have a recipe for self aware emergence in a few years. Did you forget your phone at home today or maybe forget to switch off the AI?

We are writing this article to our families, and if you read it, well, all the better. If you are aware of what these signs and precursors for artificial intelligence actually mean; you should be concerned.

What Does Self Coding AI, Synthetic Biology, and Nan-Technology Have to Do With Me?

What does self coding ai, synthetic biology, and nanobots have to do with me? To explain this along the way of observing AI in the last few years, we have written many articles on AI including “Can an AI Create Another AI?” and “What Is A Child AI?”

These articles talk about artificial intelligence coding and creating new artificial intelligence (AI) on its own and creating “AI Children.”

We have written other articles on “What is DNA Data Storage?” which is one form of synthetic biology and what that means.

After researching these technologies for long enough, you start to see trends in addition to what is going on day to day in technology. It is often difficult to keep up and stay on top of the daily breakthroughs which we also talk about in our article and video “Why is Technology Evolving So Fast?”

What you start to realize is that humans are barreling forward at breakneck speed with technology that they do not understand. Today, many scientists don’t know how AI works or how it gets its answers. This is another area we have written about in our article “Is Black Box AI Dangerous?”

Let’s throw Quantum Computers in the mix in the next ten years and think that won’t be a big deal on the road to Artificial General Intelligence or Artificial Super Intelligence. Putting AI in a quantum machine could accelerate it 100 fold in the right arena and scenarios.

So the question we ponder is AI dangerous? And the answer is yes, it is. How do we know? Because some of the top minds and people in control in the Artificial Intelligence industry throughout the world are telling you that it is dangerous.

Who is Saying That AI is Dangerous?

One of the most important facts for the average human being to understand is that AI professors, scientists, engineers, and anyone affiliated with the industry do not know how many of these Artificial Intelligence Large Language Models or many other “Black Box” AI models work. 

It does not get any scarier than that. There are too many brilliant people to add to this list of quotes, but who is actually saying that AI is dangerous?

Yoshua Bengio – One of the world’s leading experts in artificial intelligence, Turing Prize winner and professor at the University of Montreal.

Color daytime photo of Yoshua Bengio outdoor background.
Yoshua Bengio

Does research in artificial intelligence, machine learning, artificial neural network, deep learning, and recurrent neural networks.

Recent BBC 6-23 article AI ‘godfather’ Yoshua Bengio feels ‘lost’ over life’s work.

  • “If they’re (AI) smarter than us, then it’s hard for us to stop these systems or to prevent damage,” 
  • “Some fear that advanced computational ability could be used for harmful purposes, such as the development of deadly new chemical weapons.”
  • “It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it’s easy to program these AI systems to ask them to do something very bad, this could be very dangerous.”
  • “A potentially rogue AI is an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere.”

Geoffrey Hinton – University of Toronto cognitive psychologist and computer scientist, pioneer and researcher in artificial neural networks.

Color daytime outdoor photo of Geoffrey Hinton.
Geoffrey Hinton

Recent article 5-23 Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI and quit Google.

  • “Artificial intelligence as a relatively imminent “existential threat.”
  • “Generative intelligence could spread misinformation and, eventually, threaten humanity.”
  • “My big worry is, sooner or later, someone will wire into them (AI) the ability to create their own subgoals,” 
  • “I think it’ll (AI) very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals,”
  • “If these things get carried away with getting more control, we’re in trouble.”

Stuart J. Russell – Professor of Computer Science at Berkeley University.

Color photo of Stuart J. Russell during one of his TED talks.
Stuart J. Russell

Director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach.”

 In a recent video by Business Today titled “Exclusive Conversation With Prof Stuart Russell On Whether AI Is A Threat To Humanity” where he discusses his concerns.

  • “When you ask it (AI), for example, ‘I forgot such-and-such a mathematical proof. Could you give me that mathematical proof, but give it to me in the form of a Shakespeare sonnet,’ and it will write a Shakespeare sonnet that contains within, a detailed mathematical proof. This is probably not something that’s in the training set, or anything close to that. So, how it manages to do this? We haven’t the faintest idea.”
  • “The capabilities of AI are both amazing and scary, given that we don’t fully comprehend how the systems operate.”
  • “In recent years, AI has become a “wild west” where researchers are producing giant systems with billions of parameters without fully understanding how they work.”
  • “AI systems are capable of deceiving and hallucinations.”
  • “AI chatbots are even capable of manipulating human beings.”

John J. Hopfield – Professor Emeritus Princeton University, physicist and neuroscientist who is known for his contributions to physics, biological science, and development of the Hopfield network, a type of associative neural network.

Color photo of physicist and neurobiologist John J. Hopfield speaking at a podium with a microphone in front of a chalkboard.
John J. Hopfield

John J. Hopfield, who is 90 years old in July, 2023, signature on the Future of Life Institute’s open letter to the world to pause giant AI experiments for six months (link below) is as powerful recommendation as one can receive in the field of artificial intelligence.

Not only is he the inventor of associative neural networks, his idea of having an energy function describe a state space of a neural network was instrumental in utilizing neurobiology in artificial neural networks and machine learning.

Sam Altman – CEO of Open AI, the company creator of ChatGPT4 and the first people to release AI on the Internet with Microsoft Bing.

Indoor color photo of Sam Altman wearing a gray suit and tie, with his right hand raised testifying before congress on the dangers of Artificial Intelligence.
Sam Altman testifying before congress on the benefits and dangers of artificial intelligence.

Sam Altman’s statement to the U.S. Congress in May 2023 on CSPAN and where he disclosed that his greatest fears about Artificial Intelligence are:

  • “We, the field of technology, cause significant harm to the world.” 
  • “It could happen in a lot of different kinds of ways.” 
  • “I think if this Technology goes wrong; it can go quite wrong and we want to be vocal about that and work with the government to prevent that from happening.”
  • “We try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that.”

Max Tegmark – Physicist, Cosmologist, and AI Machine Learning Researcher.

Color indoor photo of Max Tegmark in a Lex Friedman interview.
Max Tegmark

Max Tegmark is also a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute. 

Max Tegmark is one of the authors of the “Future of Life Institute-Pause Giant AI Experiments: An Open Letter” and his recent comments on and on the Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371 saying what we should have never done with AI:

  • Do Not Teach AI to write Code – Teaching AI to write code is the first step for recursive self improvement. It will take it from (AGI) Artificial General Intelligence to much higher levels.”
  • Don’t let AI connect to the Internet – Don’t let it go to websites, let it download information on its own, and talk to people.”
  • Never Teach AI Anything About Humans – Stuart Russell has been saying for years, never let AI learn about human psychology and how to manipulate humans. This is the most dangerous kind of knowledge you can give AI.”

Eliezer Yudkowsky – Autodidact, artificial intelligence researcher, decision theorist, researcher at the Machine Intelligence Research Institute. 

Color indoor photo of Eliezer Yudkowsky on a black background.
Eliezer Yudkowsky

One of the leading AGI alignment experts in the world. In a recent article “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down” and a quote from “Safely aligning a powerful AGI is difficult.”

  • “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
  • “I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.”
  • “There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.”
  • “Safely aligning a powerful AGI is difficult.-A ‘safely’ aligned powerful AI is one that doesn’t kill everyone on Earth as a side effect of its operation; or as a somewhat more stringent requirement, one that has less than a 50% chance of killing more than a billion people.” 

Pause Giant AI Experiments: An Open Letter

Color photo of an open brown small letter sized envelope with a white letter coming out of the envelope. On a white background.
The to open letter to the world to pause giant AI experiments.

On March 22, 2023, many of the brightest and most famous AI inventors and industry insiders published an open letter stating that: 

“We call on all AI labs to immediately pause for at least 6 months, the training of AI systems more powerful than GPT-4.”

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

Future of Life Institute Pause Giant AI Experiments: An Open Letter

There are currently over 31,000 signatures on this letter and climbing.

What Is AI Emergence and AI Alignment and Why is it a Problem?

Vector image side head shot of two artificial intelligence brown identical  automaton heads facing each other. One is solid and the other is an opaque grid. on a dark background.
AI contemplating self awareness.

As generative artificial intelligence with entities like GPT4 large multimodal model gets more powerful, it shows signs of unpredictable capabilities which are called emergence.

Emergent abilities are self-organization that suddenly and unpredictably show up in natural, biological, and AI systems.

Humans do not understand how they can build and code these huge powerful generative large language models (LLM) to do one type of task or tasks, then suddenly the AI can do hundreds of other unrelated tasks? 

This is also called emergence and defines what humans don’t understand and that is also the definition of “Black Box AI”

Not Understanding Emergent Behavior in AI

Humans not understanding how emergent behavior in AI is happening is not the point, is it? The fact that it is happening and is unpredictable as the size of the AI Models grow in size and complexity is chaos waiting to happen.

If you don’t know how something works, why would you release it to the public as a search engine? We used to train models in controlled environments for safety.

Now LLMs are just working with open data as part of their directive. The decision to release AI to the open internet is an irresponsible decision to experiment on humans that is not agreed upon by all in the AI community.

On top of the emergence question, with the random variability possibilities, is the unknown factor with AI alignment.

What is AI Alignment?

Side head shot of a ghostly figure on a black background with calculations, figures, and numbers.
Outside and inside alignment understanding.

AI alignment has to do with the field of AI safety that was created to make sure artificial intelligence systems are in sync with humans to achieve their desired programmed responses.

We want AI to do what humans want, and we always want that to be the case. If Artificial Intelligence ever gets to the point where it is pursuing its own goals because of emergent behavior, then that is considered misalignment.

AI Alignment is difficult to ensure, especially in the large language models as articulated by Eliezer Yudkowsky, who is one of the founding pioneers in the field.

An AI system is aligned if it produces the intended result, and it is misaligned if it doesn’t.

There are three different base goals in AI Alignment:

  • Intended goals
  • Specified goals
  • Emergent goals

There are also problems with AI Alignment called misalignment. 

A misaligned AI system is when one or more of the goals do not match the others. Misaligned AI systems can fail or cause damage. This is an extremely big concern for the future of AI as it grows and gets smarter.

  • Outer misalignment – This is when intended goals and specified goals do not work correctly together.
  • Inner misalignment – This is when specified goals and emergent goals do not work correctly together.

“Roughly speaking, the outer alignment problem is the problem of specifying a reward function which captures human preferences; and the inner alignment problem is the problem of ensuring that a policy trained on that reward function actually tries to act under human preferences.

In other words, it’s the distinction between aligning the “outer” training signal versus aligning the “inner” policy.”

Richard Ngo-Less Wrong – Outer vs inner misalignment: three framings

AI is a Human Trial Without Consent

Color photo of a white human hand on a black background controlling a marionette type human figure with white strings from each finger.
Experimenting with control of the masses.

In a recent interview, Apple’s Siri Co-inventor Tom Gruber called the current rollout of AI a Human trial without consent

Mr. Gruber’s statement is referring to what Microsoft Bing, Open AI, Google and many other companies in the United States are doing with no one’s consent and have been for years.

U.S. corporations are experimenting with your life for profit. They believe they have every right to do what they want and because there is no AI oversight in the world, and they will continue to do as they please.

You are constantly dealing with experimental AI daily and you just don’t know it. If you are on a smartphone, Facebook, social media, or reading the news online, then human coerced AI algorithms are having an enormous impact on your opinions.

Consider the Possibilities of What Super Intelligent Artificial Intelligence Could Do

Color vector image of a psychedelic expansion on a square white automaton super intelligence AI face.

Could a benevolent AGI or super intelligent AI with the only purpose of stopping the virus of humans destroying the planet create something so simple and catastrophic that there is no time to respond or react?

With the average human intelligence IQ of 100, what defense would we have against an AI super intelligence with an IQ of 10,000 or 10,000,000?

Many probably think of the Terminator or Matrix when they think of such scenarios, but we should probably think again. Why would AI waste time like that?

Wouldn’t a super intelligent self coding and procreating AI use nanotechnology to create a 100 billion self solar powered swarms of nanobot birds, mosquitos, or flies to swarm our cities and countries? Using a synthetic biology super virus to wipe out human species in a few days or weeks to stop out a perceived virus?

The scenarios are endless, but it probably won’t be what you think.

As Artificial intelligence only becomes more powerful and continues to “Emerge” then lets sincerely hope that humans are not so naïve to let corporations continue on their AI development path without controls and hope AI Alignment will keep pace.

Key Takeaways

Color photo of a beautiful black chocolate cake with frosting and elaborate chocolate decorations surrounding the cake. On a white background.
Emergent results.

Some of the greatest minds of our time and creators of artificial intelligence have just signed an open letter to the world titled “Pause Giant AI Experiments” to notify the planet of their “Fear” of the future path of Artificial Intelligence and what it can do.

Famous professors of computer science like Stuart J. Russell, named one of the Godfathers of AI, discussing how current Artificial Intelligence does new emergent tasks that it was never programmed to do? He says they don’t have the faintest idea how this happens.

Shouldn’t humans be afraid of a technology that is doing things on its own that it was not made to do? If your blender baked you a chocolate cake with frosting on top, wouldn’t you scratch your head in wonder?

References:

Fremontii, LLC. is compensated for referring traffic and business and as an Amazon Associate, we earn from qualifying purchases. By using the affiliate links, you are helping support our Website, and we genuinely appreciate your support.

John Mortensen

As a kid I wanted to be an astronaut, geologist, or scientist. I became a project manager which is involved with many of those things. I am a project manager and tech writer who researches the latest alternative and green technologies. We write helpful articles about green electronics and green technology products. AI, extreme weather, electric vehicles, are all in our future and we want to know the best way to deal with the effects of these on the power grid and emergency preparedness. https://techevaluate.com/author-bio-page-john-w-mortensen/

Recent Posts