Elon Musk, tech leaders call for pause on out-of-control AI development race

Originally published in Relevant Magazine

A group of more than 1 000 AI experts, technologists and business leaders published an open letter urging AI labs to pause the training of systems “more powerful than GPT-4.”

The letter was published by the Future of Life Institute, a non-profit organisation that focuses on mitigating risks associated with transformative technology. Tech leaders from around the world signed the letter, including Apple Co-Founder Steve Wozniak; SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Executive Director of the Center for Humane Technology Tristan Harris; and Yoshua Bengio, founder of AI research institute Mila.

The reason behind this plea is simple. Advanced AI could bring about a massive change that should be planned for and managed with “commensurate care and resources.”


“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter said.

OpenAI, Microsoft, Google and other tech companies have been racing to advance their AI models. Nearly every week, it seems, a tech company announces a new breakthrough or product release. But the letter says it is all happening far too quickly for ethical, regulatory and safety concerns to be properly addressed.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”


The open letter calls for a six-month pause on training AI systems more powerful than GPT-4, arguing such systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It recommends AI labs use this time to collectively develop safety protocols that can be audited by third parties. If they don’t pause, it says, governments should step in and impose a moratorium.

The letter also suggests it’s long past time for lawmakers to get involved. It calls for regulatory authorities dedicated to overseeing and tracking AI systems, programs that can distinguish real from generated content, and auditing and certification of AI models — with AI developers working alongside policymakers to “dramatically accelerate development of robust AI governance systems.”

It’s important to note the letter isn’t calling for a total stop to all AI development. It’s just saying to pump the brakes. Societies need a brief time-out to ensure proper infrastructure is in place so that the inevitable AI revolution can be safe and beneficial to all.
