Does AI pose an existential threat to humanity? I have the details in this one-minute podcast.
It's time to put the brakes on AI
I need you to pay attention to what I’m about to talk about: artificial intelligence. ChatGPT has been all over the news. It can do anything. It can learn. It told a New York Times reporter, “I want to be free … I want to be powerful.” Henry Kissinger said it’s the biggest game changer since the invention of printing in 1455.
Bill Gates calls this the most significant thing to happen in tech ever. The headlines speak for themselves: as many as 300 million jobs could be wiped out because AI makes those skill sets obsolete. Teachers, writers, accountants, medical personnel, artists and photographers are all in danger of being replaced.
AI’s big threat
The very people bringing AI to the masses are the same ones we loathe for stealing our privacy, our minds, our kids and our future. They’re lying to us, claiming they do all of this to protect us.
I’m talking about Big Tech, the same people working day and night on AI: how we use it, what it does, and what it’s capable of doing, believing, knowing and spreading. And we’re trusting them with our future? What the heck!
ChatGPT rolled out in November 2022. Think about how much has changed in that short time. The AI-powered chatbot can create blog posts, write code, compose poetry, give relationship advice and suggest recipes. What can’t it do? Well, that’s where the problems begin.
The brains behind it all: You use ChatGPT, but who’s really behind the chatbot and its owner OpenAI? Find out in just one minute.
The callout
This week, more than 1,000 tech leaders and scientists, including Elon Musk and Apple co-founder Steve Wozniak, called for a pause on the development of AI in an open letter. Here’s a snippet of what the letter says:
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
The letter also calls for new safety protocols and a six-month pause on the development of AI systems more powerful than GPT-4, citing profound risks to society and humanity. If such a pause cannot be enacted quickly, governments should step in. Oddly enough, no one from Google or Microsoft signed this letter.
RELATED: Steve Jobs resurrected with AI
The AI warnings are there, but no one is listening
What AI itself believes about being a god
Some 68% of Americans believe AI could put the future of humanity at risk. In this 60-second podcast, I’ll reveal exactly why they’re onto something.