Elon Musk joins call for pause in creation of giant AI ‘digital minds’

More than 1,000 professionals, academics, and supporters in the field of artificial intelligence have endorsed a call for an immediate halt to the development of “giant” AIs for at least six months, so that the capabilities and risks of systems like GPT-4 can be properly studied and mitigated.
The demand comes in an open letter signed by major AI figures, including Elon Musk, who co-founded OpenAI, the research lab behind ChatGPT and GPT-4; Emad Mostaque, founder of London-based Stability AI; and Apple co-founder Steve Wozniak.
They are joined as signatories by engineers from Amazon, DeepMind, Google, Meta, and Microsoft, as well as academics such as the cognitive scientist Gary Marcus.
According to the letter, recent months have seen AI labs locked in an uncontrolled race to build and deploy ever more potent digital minds that no one, not even their designers, can comprehend, foresee, or reliably control. Powerful AI systems, it argues, should be created only once we are certain that they will have beneficial outcomes and pose minimal hazards.
The authors, convened by the “longtermist” thinktank the Future of Life Institute, cite Sam Altman, the co-founder of OpenAI, in support of their case.
“At some point, it may be vital to acquire independent evaluation before starting to train future systems,” Altman wrote in a post in February, “and for the most advanced attempts to agree to limit the pace of expansion of the compute used to develop new models.”
“We concur. That time has come,” the letter concludes.
The authors of the letter argue that “governments should step in” if researchers do not willingly stop developing “giant” AI models that are more potent than GPT-4.
“This does not suggest a pause on AI research in general,” they write, describing the proposal as simply a step back from the risky rush towards ever-larger, unpredictable black-box models with emergent capabilities.
Since the release of GPT-4, OpenAI has been extending the system’s functionality through “plugins,” enabling it to look up information on the internet, plan holidays, and even order groceries. But the company must contend with “capability overhang”: the problem of its own systems being more powerful than it realizes at the time of release.
As researchers continue to work with GPT-4 over the coming weeks and months, they are sure to discover fresh approaches to “prompting” that improve its ability to handle difficult problems.
One recent finding showed that the AI answers questions considerably more accurately if it is initially instructed to do so “in the style of a knowledgeable expert.”
UK Government AI Regulation White Paper:
In stark contrast to that call for stringent control, the UK government’s flagship AI regulation white paper, released on Wednesday, proposes no new powers at all.
Instead, the government argues, the emphasis should be on coordinating existing regulators, including the Health and Safety Executive and the Competition and Markets Authority, and giving them five “principles” to guide their approach to AI.
Science, innovation, and technology secretary Michelle Donelan stated that “our new approach is founded on strong values so that people can trust entrepreneurs to unleash this technology of tomorrow.”
The announcement drew criticism from, among others, the Ada Lovelace Institute. The UK’s approach, according to Michael Birtwistle, who leads data and AI law and policy at the institute, “has severe gaps, which could leave harms unresolved, and is underpowered relative to the urgency and complexity of the situation.”
“The government’s timetable of a year or more for implementation will leave dangers unaddressed at a time when AI systems, from search engines to office software, are being swiftly integrated into our daily lives,” he added.
Lucy Powell, the shadow culture minister for Labour, added her voice to the criticism, charging that the government had “let down their side of the agreement.”
“The implementation of this regulation will take several months, if not years,” she said. “Meanwhile, programs like ChatGPT, Google’s Bard, and many others are integrating AI into daily life.”
“At the same time, by eroding the underpinnings of our current regulatory structure with their upcoming data bill, the government risks reinforcing holes in those frameworks and making them extremely difficult for businesses and citizens to navigate.”