Five key takeaways from OpenAI CEO Sam Altman’s Senate hearing | TechnologyNews

Sam Altman, the chief executive of OpenAI, the company behind ChatGPT, testified before members of a Senate subcommittee on Tuesday about the need to regulate the increasingly powerful artificial intelligence technology being created inside his company and others like Google and Microsoft.

The three-hour-long hearing touched on several aspects of the risks that generative AI could pose to society, how it could affect the jobs market and why regulation by governments would be needed.

Tuesday’s hearing was the first in a series to come as lawmakers grapple with drafting regulations around AI to address its ethical, legal and national security concerns.

Here are five key takeaways from the hearing:

1. Hearing opened with a deep fake

Senator Richard Blumenthal from Connecticut opened the proceedings with an AI-generated audio recording that sounded just like him.

“Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation and the deepening of social inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want,” the voice said.

Blumenthal, who is the chairman of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, revealed that he did not write or speak the remarks but let the AI chatbot ChatGPT generate them.

A deep fake is a type of synthetic media, trained on existing recordings of a real person, that convincingly mimics that person.

2. AI could cause significant harm

Sam Altman used his appearance on Tuesday to urge Congress to impose new rules on Big Tech, despite deep political divisions that for years have blocked legislation aimed at regulating the internet.

Altman shared his biggest fears about artificial intelligence. He said: “My worst fears are that we cause, we the field, the technology, the industry, cause significant harm to the world.

“I think if this technology goes wrong, it can go quite wrong.”

3. AI regulation needed

Altman described AI’s current boom as a potential “printing press moment”, but one that required safeguards.

“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” Altman said.

Also testifying on Tuesday were Christina Montgomery, IBM’s vice president and chief privacy and trust officer, and Gary Marcus, a former New York University professor.

Montgomery urged Congress to “adopt a precision regulation approach to AI. This means establishing the rules to govern the deployment of AI in specific use cases, not regulating the technology itself.”

Marcus urged the subcommittee to consider a new federal agency that would review AI programs before they were released to the public.

“There are more genies to come from more bottles,” Marcus said. “If you are going to introduce something to 100 million people, somebody has to have their eyeballs on it.”

4. Job displacement remains unresolved

Both Altman and Montgomery said AI may eliminate some jobs but create new ones in their place.

“There will be an impact on jobs,” Altman said. “We try to be very clear about that, and I think it’ll require a partnership between industry and government, but mostly action by the government, to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be,” he added.

Montgomery said the “most important thing we need to do is prepare the workforce for AI-related skills” through training and education.


5. Misinformation and the upcoming US elections

When asked how generative AI might sway voters, Altman said the potential for AI to be used to manipulate voters and target disinformation is among “my areas of greatest concern”, especially because “we’re going to face an election next year and these models are getting better”.

Altman said OpenAI has adopted policies to address these risks, which include barring the use of ChatGPT for “generating high volumes of campaign materials”.