
500 Top Technologists and Elon Musk Demand Immediate Pause of Advanced AI Systems


Muda69


https://gizmodo.com/open-letter-musk-demand-pause-on-advanced-ai-1850275951

Quote

A wide-ranging coalition of more than 500 technologists, engineers, and AI ethicists have signed an open letter calling on AI labs to immediately pause all training of any AI systems more powerful than OpenAI’s recently released GPT-4 for at least six months. The signatories, who include Apple co-founder Steve Wozniak and “based AI” developer Elon Musk, warn these advanced new AI models could pose “profound risks to society and humanity” if allowed to advance without sufficient safeguards. If companies refuse to pause development, the letter says governments should whip out the big guns and institute a mandatory moratorium.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

The letter was released by the Future of Life Institute, an organization self-described as focused on steering technologies away from perceived large-scale risks to humanity. Its primary risk areas include AI, biotechnology, nuclear weapons, and climate change. The group’s concerns over AI systems rest on the assumption that those systems “are now becoming human-competitive at general tasks.” That level of sophistication, the letter argues, could lead to a near future where bad actors use AI to flood the internet with propaganda, make once-stable jobs redundant, and develop “nonhuman minds” that could out-compete or “replace” humans.

Emerging AI systems, the letter argues, currently lack meaningful safeguards or controls that ensure they are safe “beyond a reasonable doubt.” To solve that problem, the letter says AI labs should use the pause to agree on and implement a shared set of safety protocols and ensure systems are audited by independent outside experts. One of the prominent signatories told Gizmodo the details of what that review actually looks like in practice are still “very much a matter of discussion.” The pause and added safeguards notably wouldn’t apply to all AI development. Instead, they would focus on “black-box models with emergent capabilities” deemed more powerful than OpenAI’s GPT-4. Crucially, that includes OpenAI’s in-development GPT-5.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter reads.

AI skeptics are divided on the scale of the threat

Gizmodo spoke with Stuart Russell, a professor of computer science at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach. Russell, who is also one of the letter’s more prominent signatories, said concerns about threats to democracy and weaponized disinformation already apply to GPT-4, Google’s Bard, and other available large language models. The more concerning AI nightmares, he said, are the ones that could emerge from the next generation of tools.

“The most important concern comes from what seems to be an unfettered race among the tech companies, who are already saying that they will not stop developing more and more powerful systems, regardless of the risk,” Russell told Gizmodo in an interview. “And let’s be clear: the risk they are referring to here is the loss of human control over the world and our own future, much as gorillas have lost control over their own future because of humans.”

Russell claims that neither we nor the creators of the AI tools themselves have any idea how they actually work. Though other prominent AI researchers have disputed this description, Russell says the models are basically a “blank slate of a trillion parameters.”

“That’s all we know,” Russell said. “We don’t know, for example, if they have developed their own internal goals and the ability to pursue them through extended planning.” Russell pointed to a recent research paper from Microsoft researchers which claimed OpenAI’s recently released GPT-4 exhibited “sparks of artificial general intelligence.”

Other AI experts speaking with Gizmodo who didn’t add their names to the Future of Life Institute’s open letter were far more conservative with their criticisms. The experts shared concerns over potential AI misuse but recoiled at increasingly common attempts to compare AI systems to human intelligence. Talk of artificial general intelligence, they noted, could be counterproductive. OpenAI’s ChatGPT, which has passed business school exams and a major medical licensing exam, simultaneously struggles with basic arithmetic.

“I think a lot of people are concerned about the capabilities of AI, understandably so, and if we want these systems to be accountable to the public we’ll need to regulate the major players involved,” AI Now Institute Managing Director Sarah Myers West told Gizmodo. “But here’s what’s key to understand about ChatGPT and other similar large language models: they’re not in any way actually reflecting the depth of understanding of human language—they’re mimicking its form.”

Though Myers West shares concerns about AI misuse, she worries the tech’s current hype train and the exaggeration of its capabilities could distract from more pressing concerns.

Russell acknowledged some of these criticisms but said the unknowns of what new models could do were reason enough for alarm.

“Do LLMs create internal goals so as to better imitate humans?” Russell asked. “If so, what are they? We have no idea. We have no idea. We just hope that hasn’t happened yet.”

One doesn’t necessarily need to believe in an imminent real-world version of The Terminator to harbor real worries about AI. Multiple AI researchers Gizmodo spoke with expressed genuine concerns over the lack of laws or meaningful regulation in the space, particularly given the tech’s reliance on vast swaths of data and its breakneck pace of development. Prominent large language models like GPT-4 currently lack meaningful transparency around the training data used to build them, making independent audits challenging. Biases related to gender and race, already widely felt in less advanced AI models, risk being amplified even further.

There’s also the pesky problem of LLMs lying through their teeth, a phenomenon some have referred to as “AI hallucinations.” Right now, those hallucinations are mostly funny punchlines, but that could change as more and more users turn to the technology for search and other forms of information gathering. The tech’s perceived objectivity means users could be all the more likely to assume AI responses are statements of fact when they are really closer to well-informed guesses. That disregard for truth or reality, at scale, could make an already cluttered information ecosystem even harder to navigate.

“These are programs for creating (quickly and, at present for the end users at least, cheaply) text that sounds plausible but has no grounding in any commitment to truth,” University of Washington Professor of Linguistics Emily M. Bender told Gizmodo. “This means that our information ecosystem could quickly become flooded with non-information, making it harder to find trustworthy information sources and harder to trust them.”

And despite all the hype surrounding it, the general public still seems uncertain at best about AI’s current course. Just 9% of US adults surveyed in a recent Monmouth University poll said they believed AI would do more good than harm to society. Another 56% said they believed a world inundated with advanced AI would hurt humans’ overall quality of life.

“It seems as if some people view AI not just as a technological and economic concern, but also as a public health matter,” Monmouth University Polling Institute polling director Patrick Murray said in a statement.

Experts and alarmists united on calls for regulation

One thing the Future of Life Institute signatories and the more cautious AI skeptics did agree on was the urgent need for lawmakers to devise new rules for AI. The letter called on policymakers to “dramatically accelerate the development of robust AI governance systems” that include regulators focused specifically on AI, as well as oversight and tracking of powerful AI tools. Additionally, the letter called for watermarking tools to help users quickly distinguish between synthetic and real content.

“Unless we have policy intervention, we’re facing a world where the trajectory for AI will be unaccountable to the public, and determined by the handful of companies that have the resources to develop these tools and experiment with them in the wild,” West of the AI Now Institute told Gizmodo.

Interesting times we live in.  What do the other GID members believe is the hope, or threat, of AI?

 
