New AI warning: ‘Risk of extinction’

Top AI scientists, leaders and others sign the Statement on AI Risk

This morning, 350 of the world’s most prominent business executives, researchers and scientists signed a statement saying that artificial intelligence (AI) poses a “risk of extinction” on par with pandemics and nuclear war. 

Let me say it another way … The same people who created AI say that AI could wipe out humanity.

The list of signatories is a who’s who of AI pioneers: Sam Altman (CEO of OpenAI, the company that developed ChatGPT), Microsoft Chief Technology Officer Kevin Scott, Google DeepMind CEO Demis Hassabis and Turing Award-winning AI researcher Geoffrey Hinton. (Elon Musk, for the record, signed an earlier open letter calling for a pause on AI development, not this statement.)

What you need to know

When talking about AI with your family and friends, know this: No one on this list is concerned about AI giving fake citations in a legal brief. Something called artificial general intelligence (AGI) is the central issue. 

AGI is the point at which machines can handle virtually any task a human can, set their own goals and rewrite their own programming without any human involvement.

The concern is that without controls, we’ll be up against a superintelligent machine, or a network of them, with no compassion or empathy.

They could program themselves to be far superior to the human race and decide they don’t need us. Yeah, it’s Hollywood movie material coming to life. No joke: James Cameron has put the next “Terminator” script on hold until he knows more about AGI’s future.

The fear is justified

Let’s face it. We’re not getting along very well. We have wars, crime, disease, food shortages and weapons. We have good drugs and street drugs. We don’t take care of the planet. In general, as humans, we’re doing a piss-poor job of living in harmony with each other and the resources we share.

It would be hard to make a compelling argument for keeping us around (although I could make a case for keeping me, a compassionate and kind person with an eye for great puns …).

It’s not a happy thought

I don’t want to think about it, either. But AGI is not going away. We need to take this seriously and prepare for a nightmare scenario. How long do we have? My guess is 40 to 50 years, but maybe less if we don’t act now.

Now you can explain why AI is such a big deal at your next gathering. You don’t even need to give me credit.

Tags: AGI (Artificial General Intelligence), AI (artificial intelligence), Elon Musk, Google, network, OpenAI, Terminator, warning