Testifying before a US Senate subcommittee on privacy, technology, and the law, OpenAI chief executive Sam Altman urged American lawmakers to move quickly to regulate artificial intelligence (AI).
Citing the danger of "interactive disinformation" ahead of the forthcoming US elections, Altman said safeguards were urgently needed.
Altman's appearance before the subcommittee on Tuesday focused on AI's impact on democratic processes. As head of the company behind the AI chatbot ChatGPT, Altman acknowledged the need to regulate the technology, calling for independent audits, a licensing regime, and warning systems akin to nutritional labels on food products.
During the hearing, senators also pressed Altman on AI's capacity to predict and influence public opinion, particularly in the context of the upcoming election. The questioning reflected policymakers' growing unease about the exploitation of AI for political gain and the manipulation of voters.
Altman's testimony carried a note of urgency for lawmakers and the public alike, underlining the case for regulations that balance innovation against the protection of democratic processes. As the US elections draw near, the role of AI in shaping public discourse and the need to prevent the spread of disinformation have become increasingly pressing concerns.
“The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation . . . given that we’re going to face an election next year and these models are getting better. I think this is a significant area of concern,” he said.
Altman asked lawmakers to set clear guidelines and disclosure expectations for companies providing such technology. While emphasizing its potential, he predicted that the general public would quickly come to grasp its capabilities.
“When Photoshop came on to the scene a long time ago, for a while people were really quite fooled by Photoshopped images and then pretty quickly developed an understanding that images might be Photoshopped. This will be like that, but on steroids.”
The hearing took place against a backdrop of intensifying scrutiny: regulators and governments worldwide are examining AI technology, including the efforts of Silicon Valley giants such as Google and Microsoft, amid mounting concerns about its potential for abuse.
EU lawmakers recently reached an agreement on stringent regulations governing AI usage, including constraints on chatbots such as ChatGPT. Similarly, the US Federal Trade Commission and the UK competition watchdog recently issued stern warnings to the industry. The FTC expressed its intense focus on how companies employ AI technology, while the UK’s Competition and Markets Authority announced plans to launch a comprehensive review of the AI market.
The US Congress is also deliberating on how to regulate the technology and intends to consult more industry sources in the coming months. During Tuesday's hearing, Richard Blumenthal, the Democratic senator from Connecticut and chair of the privacy subcommittee, proposed restricting or even banning AI in the highest-risk cases, particularly where it invades people's privacy for profit or affects their livelihoods.
In contrast to the often confrontational exchanges witnessed during previous appearances of tech executives before Congress, the interactions with lawmakers during this hearing remained respectful and amicable. Blumenthal acknowledged Altman’s evident concern for potential AI risks and described it as a deep and intense commitment.
Altman emphasized the need for collaboration between the industry and lawmakers to formulate effective regulations, and said OpenAI would engage actively with the government to keep the technology from going awry: "I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."
Altman acknowledged that advances like GPT-4, the underlying technology powering ChatGPT, would fully automate some jobs, but argued that new and better employment opportunities would emerge in their place, a view he said OpenAI firmly holds.
Blumenthal voiced his own fears that the coming technological revolution could displace millions of workers. Pointing to Congress's failure to regulate social media in time, he said he wanted to avoid repeating those mistakes.