If you’ve heard a lot of pro-AI chatter the past few days, you’re probably not alone.
AI developers, prominent AI ethicists, and even Microsoft co-founder Bill Gates have spent the past week defending their work. The pushback comes in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on work on AI systems that can compete with human-level intelligence.
The letter, which now has more than 13,500 signatures, expresses concern that the “dangerous race” to develop programs such as OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have negative consequences if left unchecked, from widespread misinformation to the ceding of human jobs to machines.
But large swaths of the tech industry, including at least one of its biggest luminaries, are pushing back.
“I don’t think asking one particular group to pause solves the challenges,” Gates told Reuters on Monday. A pause would be hard to enforce across a global industry, Gates added – though he agreed the industry needs more research to “identify the tricky areas”.
That’s what makes the debate interesting, experts say: the open letter may cite legitimate concerns, but the solution it offers seems unattainable.
Here’s why and what could happen next – from government regulations to any potential robot uprising.
What are Musk and Wozniak worried about?
The open letter’s concerns are relatively straightforward: “In recent months, AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
AI systems often come with programmed biases and potential privacy issues. They can spread misinformation widely, especially when used for malicious purposes.
And it’s easy to imagine companies trying to save money by replacing human jobs — from personal assistants to customer service reps — with AI-powered language systems.
Italy has already temporarily banned ChatGPT over privacy concerns stemming from an OpenAI data breach. The British government published regulatory recommendations last week, and the European consumer organization called on lawmakers across Europe to also tighten regulations.
In the United States, some members of Congress have called for new laws to regulate AI technology. Last month, the Federal Trade Commission released guidance for companies developing such chatbots, implying that the federal government is keeping a close watch on AI systems that could be used by fraudsters.
And several state privacy laws passed last year aim to require companies to disclose when and how their AI products work, and to give customers the chance to opt out of providing personal data for AI-automated decisions.
These laws are currently in effect in California, Connecticut, Colorado, Utah and Virginia.
What are AI developers saying?
At least one AI safety and research company isn’t worried yet: current technologies don’t “pose any imminent problem,” San Francisco-based Anthropic wrote in a blog post last month.
Anthropic, which received a $400 million investment from Alphabet in February, has its own AI chatbot. The company noted in its blog post that future AI systems could become “much more powerful” over the next decade, and that building guardrails now could “help reduce risk” down the road.
The problem: Nobody knows exactly what those guardrails could or should look like, Anthropic wrote.
The open letter’s ability to spark conversation around the topic is helpful, a company spokesperson told CNBC Make It. The spokesperson did not say whether Anthropic would support a six-month pause.
In a tweet on Wednesday, OpenAI CEO Sam Altman acknowledged that “an effective global regulatory framework including democratic governance” and “sufficient coordination” between major artificial general intelligence (AGI) companies could help.
But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing’s AI chatbot, didn’t elaborate on what those policies might entail, or respond to CNBC Make It’s request for comment on the open letter.
Some researchers raise another issue: Pausing research could stifle progress in a rapidly changing industry and allow authoritarian countries developing their own AI systems to forge ahead.
Highlighting potential AI threats could encourage malicious actors to adopt the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of AI-powered search engine startup You.com.
Exaggerating the immediacy of those threats also fuels unnecessary hysteria around the topic, Socher says. The open letter’s proposals are “impossible to implement, and they address the problem at the wrong level”, he adds.
What happens now?
The muted response to the open letter from AI developers seems to indicate that tech giants and startups are unlikely to stop their work voluntarily.
The letter’s call for increased government regulation seems more likely to gain traction, especially since lawmakers in the U.S. and Europe are already pushing for transparency from AI developers.
In the United States, the FTC could also establish rules requiring AI developers to train new systems only with data sets that exclude false information and implicit bias, and to increase testing of those products before and after their public release, according to a December advisory from law firm Alston & Bird.
Such efforts need to be in place before the technology goes further, says Stuart Russell, a UC Berkeley computer scientist and leading artificial intelligence researcher who co-authored the open letter.
A pause could also give tech companies more time to prove that their advanced AI systems don’t “pose undue risk”, Russell told CNN on Saturday.
Both sides seem to agree on one thing: the worst-case scenarios of rapid AI development are worth avoiding. In the short term, this means providing transparency to users of AI products and protecting them from scammers.
In the long term, that could mean preventing AI systems from surpassing human-level intelligence, and maintaining the ability to control them effectively.
“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” Gates told the BBC in 2015. “It’s just an inevitability.”