Tech Leaders Ask for a Pause on Giant AI Experiments. But will the world listen?

As AI becomes more powerful and pervasive, concerns about its impact on society continue to mount. In recent months, we have seen remarkable advances like GPT-4, OpenAI's new version of the language model behind ChatGPT, which can respond quickly with high-quality answers that are useful in many contexts. But at the same time, it has raised many concerns about our civilization's future.

Last week, an "open letter" signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and representatives from a wide range of fields such as robotics, machine learning, and computer science urged a 6-month pause on "giant AI experiments," saying they represent a risk to humanity.

Since then, I've been following specialists' opinions, and I invite you to join me in reflecting on this scenario.

The open letter

The "Pause Giant AI Experiments: An Open Letter," which currently has almost 6,000 signatures, urgently asks artificial intelligence laboratories to pause some of their projects. "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," says the highlight in the header.

It warns that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

It also raises questions about an "apocalyptic" future: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

What is the real “weight” of this letter?

At first, it's easy to sympathize with the cause, but let's reflect on the full global context involved.

Although the letter is endorsed by a long list of leading technology authorities, including engineers from Google and Meta, it has generated serious controversy around prominent signatories whose own practices seem inconsistent with the safety limits they are now calling for, such as Elon Musk. Musk himself fired his "Ethical AI" team last year, as reported by Wired, Futurism, and many other news outlets at the time.

It's worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked the company on Twitter with scathing criticisms of ChatGPT's advances.

Sam Altman, co-founder of OpenAI, in a conversation with podcaster Lex Fridman, asserts that concerns around AGI experiments are legitimate and acknowledges that risks, such as misinformation, are real.

And in an interview with the WSJ, Altman says the company has long been concerned about the safety of its technologies and that it spent more than six months testing the tool before its release.

What are its practical effects?

Andrew Ng, Founder and CEO of Landing AI, Founder of DeepLearning.AI, and Managing General Partner of AI Fund, said on LinkedIn: "The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Let's balance the huge value AI is creating vs. realistic risks."

He also said “There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.”

Like Ng, many other technology specialists disagree with the letter's central request for a pause on the experiments. In their opinion, such a pause could hold back major advances in science and health, such as AI-assisted breast cancer detection, as reported in the NY Times last month.

AI ethics and regulation: a real need

While a real race is underway among tech giants to bring increasingly intelligent LLM solutions to market, the fact is that little progress has been made on regulation and other precautions that were needed "yesterday." We would not even have to focus on the long-term, "apocalyptic" scenarios mentioned in the letter to confirm the urgency; the very real problems already generated by misinformation would suffice.

On that front, we have recently seen how AI can create "truths" with convincing image montages, like the viral picture of the Pope wearing a puffer coat that dominated the web over the last few days, along with many other fake video productions using celebrities' voices and faces.

In this sense, AI laboratories, including OpenAI, have been working to ensure that AI-generated content (texts, images, videos, etc.) can be easily identified, as shown in this article from What's New in Publishing (WNIP) about watermarking.
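For readers curious about how text watermarking can work in principle, here is a minimal, hypothetical sketch in Python. The "green list" scheme, the secret key, and the function names are illustrative assumptions for this article, not OpenAI's actual method: the idea is that a generator is nudged toward a secret subset of tokens, and a detector later checks whether an unusually large share of a text's tokens fall into that subset.

```python
import hashlib

SECRET_KEY = "demo-key"  # illustrative secret shared by generator and detector


def is_green(token: str, key: str = SECRET_KEY) -> bool:
    """Deterministically assign each token to a 'green' or 'red' list
    using a keyed hash (a toy stand-in for a real watermarking key)."""
    digest = hashlib.sha256((key + token.lower()).encode()).hexdigest()
    return int(digest, 16) % 2 == 0  # roughly half of all tokens are 'green'


def green_fraction(text: str) -> float:
    """Fraction of whitespace-separated tokens on the green list.
    Ordinary text should sit near 0.5; text from a generator that was
    biased toward green tokens would score noticeably higher."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t) for t in tokens) / len(tokens)


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")
```

Real watermarking schemes are far more sophisticated, but the principle is the same: a statistical signature hidden in the output that a detector can measure without the reader ever noticing it.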

Conclusion

Just like the privacy policies implemented on the websites we browse, which preserve our power of choice (whether or not we agree to share our information), I still believe it's possible to imagine a future where artificial intelligence works safely to generate new advances for our society.

Do you want to continue to be updated with Marketing best practices? I strongly suggest that you subscribe to The Beat, Rock Content’s interactive newsletter. There, you’ll find all the trends that matter in the Digital Marketing landscape. See you there!
