Bill Gates, AI developers push back against Musk, Wozniak’s open letter


If you’ve heard a lot of pro-AI chatter in the past few days, you’re probably not alone.

AI developers, prominent AI experts and even Microsoft co-founder Bill Gates spent this past week defending their work. That’s in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on work on AI systems that can compete with human-level intelligence.

The letter, which now has more than 13,500 signatures, expressed fears that the “dangerous race” to develop programs such as OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have negative consequences if left unchecked, from widespread misinformation to the ceding of human jobs to machines.

But large parts of the tech industry, including at least one of its biggest lights, are pushing back.

“I don’t think asking one particular group to pause solves the challenges,” Gates told Reuters on Monday. Enforcing a pause across a global industry would be difficult, Gates said – though he agreed that the industry needs more research to “identify the tricky areas.”

That’s what makes the debate interesting, experts say: The open letter may point to some legitimate concerns, but its proposed solution seems impossible to achieve.

Here’s why, and what could happen instead – from government regulations to a potential robot revolution.

What are Musk and Wozniak worried about?

What do AI developers say?

At least one AI safety and research company isn’t worried yet: Current technologies “do not pose a near-term concern,” San Francisco-based Anthropic wrote in a blog post last month.

Anthropic, which received a $400 million investment from Alphabet in February, has its own AI chatbot. It noted in its blog post that future AI systems could become “much more powerful” over the next decade, and that building guardrails now could “help reduce risks” down the road.

The problem: No one is sure what those guardrails would or should look like, Anthropic wrote.

The open letter’s ability to spark conversation about the topic is useful, a company spokesperson tells CNBC Make It. The spokesperson did not specify whether Anthropic would support a six-month pause.

In a tweet on Wednesday, OpenAI CEO Sam Altman acknowledged that “an effective global governance framework including democratic governance” and “sufficient coordination” among major artificial general intelligence (AGI) companies could help.

But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing’s AI chatbot, did not specify what those policies might be, or respond to CNBC Make It’s request for comment on the open letter.

Some researchers raise another issue: Pausing research could stall progress in a fast-moving industry, and allow authoritarian countries developing their own AI systems to pull ahead.

Highlighting AI’s potential dangers could also encourage bad actors to adopt the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of AI-powered search engine startup You.com.

Exaggerating the immediacy of those threats also feeds unnecessary hysteria around the topic, Socher says. The open letter’s recommendations are “impossible to implement, and it addresses the problem at the wrong level,” he adds.

What happens now?

The muted response to the open letter from AI developers seems to indicate that tech giants and startups are unlikely to stop their work voluntarily.

The letter is more likely to spur calls for increased government regulation, especially since lawmakers in the U.S. and Europe are already pushing for transparency from AI developers.

In the U.S., the FTC could also establish rules requiring AI developers to train new systems only with data sets that don’t include misinformation or implicit bias, and to test those systems before and after they’re released to the public, according to a December advisory from the law firm Alston & Bird.

Such efforts need to be in place before the technology advances further, says Stuart Russell, a computer scientist at the University of California, Berkeley, and a leading AI researcher who signed the open letter.

A halt could also give tech companies more time to prove their advanced AI systems “do not pose an unacceptable risk,” Russell told CNN on Saturday.

Both sides seem to agree on one thing: The worst-case scenarios of rapid AI development are worth preventing. In the short term, that means providing transparency to AI product users, and protecting them from scammers.

In the long term, that could mean keeping AI systems from surpassing human-level intelligence, and maintaining the ability to control them effectively.

“Once you start making machines that compete with and surpass humans in intelligence, it becomes very difficult for us to survive,” Gates told the BBC back in 2015. “It’s just inevitable.”
