
The AI Pause and Backlash

April 2023

This piece was originally written and translated into Korean as part of LG Technology Ventures' Monthly Newsletter to business units and strategic partners. It is republished here with permission. Sensitive information has been removed.

Something strange is happening in Silicon Valley: A region known for relentless optimism about the future is now reversing course and urging caution about the future of AI software. One particularly notable event is the AI pause letter, an open letter signed by over 3,500 AI researchers and industry figures calling for a six-month halt to the training of AI systems more powerful than GPT-4. The letter highlights concerns about the societal risks of increasingly capable AI, from flooding information channels with propaganda, to automating away jobs, to losing control of systems that their own creators cannot reliably understand or steer.


However, some experts have questioned the motives behind the AI pause letter, speculating that large companies may be using the pause as an opportunity to "catch up" and develop their own AI technologies. For example, Tesla CEO Elon Musk, himself a signatory, announced a new project called "TruthGPT" shortly after the letter was released, billing it as a "maximum truth-seeking" language model meant to counter what he describes as deliberate political bias in existing AI systems. We view a true pause on AI technology as highly unlikely, even as new applications like AutoGPT rapidly scale in capability, for good and for ill.


In addition to the AI pause letter, there have been bans on the use of AI models like ChatGPT in certain contexts. For instance, Samsung recently banned the use of ChatGPT after employees leaked sensitive internal data while trying to be more productive. Italy has banned both ChatGPT and Replika.ai over their handling of user data, while Germany, Canada, France, and Sweden have voiced similar concerns and opened investigations. Other countries and organizations may follow suit in investigating, regulating, or restricting the use of AI models for various purposes.


Furthermore, there have been lawsuits filed against AI developers over the use of training data. For instance, Getty Images sued Stability.ai, the maker of Stable Diffusion, for using its copyrighted images to train an AI model without authorization. Likewise, Twitter has threatened to sue Microsoft over its use of Twitter data to train AI models. These cases highlight the legal challenges and concerns arising from training AI on large scraped datasets, including potential violations of intellectual property rights and ethical questions around consent. The lack of legal and regulatory clarity in this space has led many technologists to call for the government to step in and regulate, an unusual posture in Silicon Valley, which typically sees regulation as a massive damper on innovation.

Whatever pauses, regulations, or bans governments choose to impose may not actually be effective. Research may simply move to other jurisdictions, which creates geopolitical concerns for such a disruptive technology. Even if every government in the world banned AI use and development outright, many capable models are already open source and small enough to be shared over torrents and even fine-tuned on high-end consumer hardware. The genie is out of the bottle, and, after all, information (and AI models) want to be free.
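To make the consumer-hardware point concrete, here is a minimal sketch of parameter-efficient fine-tuning with LoRA adapters, using the Hugging Face transformers and peft libraries. The specific model name and adapter settings are illustrative assumptions on our part, not details of any project discussed above; the takeaway is that only a tiny fraction of the model's weights needs to be trained, which is what puts fine-tuning within reach of a single consumer GPU.

```python
# Illustrative sketch: LoRA fine-tuning setup for a small open-source model.
# Model choice and hyperparameters are hypothetical, chosen only for demonstration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-neo-1.3B"  # hypothetical open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into the attention projections.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here the adapted model can be trained with an ordinary transformers Trainer loop; because the base weights stay frozen, memory requirements remain far below those of full fine-tuning.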
