
AI CEO Testifies Before Congress
Updated May 16, 2023
- OpenAI CEO Sam Altman testified at a Senate hearing today, telling Congress that government intervention is "critical to mitigat[ing] the risks of increasingly powerful" AI technology.
- Altman proposed to the committee the creation of a U.S. or international agency that would license powerful AI systems and have the authority to "ensure compliance with safety standards." He said:
"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
- Altman said a new regulatory agency should impose safeguards to block AI models that could "self-replicate and self-exfiltrate into the wild," pointing to worries about AI manipulating humans into handing over control.
- Sen. Richard Blumenthal (D-CT), the chair of the Senate Judiciary Committee's subcommittee on privacy, technology, and the law, believes companies should be required to test their AI systems and disclose the known risks before they're released to the public. He expressed concern specifically about the job market.
What's the story?
- Political leaders are beginning to confront the risks of artificial intelligence as they meet with experts and Silicon Valley chief executives to discuss limitations and regulations.
- Congress and the White House are working to catch up with the rapidly advancing technology and face the public's rising questions and concerns. President Joe Biden, Vice President Kamala Harris, and other political leaders held the first White House meeting about AI since the release of ChatGPT, which has created much public discourse.
- Congress is also taking action in the House and Senate, with a strong push coming from Senate Majority Leader Chuck Schumer, who is gauging the interest of both parties on newly proposed AI legislation.
What are their concerns?
- Earlier this month, the White House expressed concerns directly to the leaders of Google, Microsoft, OpenAI, and Anthropic, urging them to limit the power of the technology. For many, the meeting signified just how much pressure is being put on political leaders to protect the public from the risks of AI. Critics fear the systems are too powerful and could impact economies, geopolitics, and criminal activity. Harris said:
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products. And every company must comply with existing laws to protect the American people."
- Sen. Josh Hawley's (R-MO) primary worry is AI's role in upcoming elections. Hawley, the top Republican on a Senate Judiciary Committee subpanel that will examine AI oversight options during a Tuesday hearing, said:
"For me right now, the power of AI to influence elections is a huge concern. So I think we've got to figure out what is the threat level there, and then what can we reasonably do about it?"
- Sam Altman, CEO of OpenAI, the company behind ChatGPT, will testify before Congress for the first time during Tuesday's hearing.
- In the House, Rep. Ted Lieu (D-CA) is co-leading a bipartisan dinner hosting Altman on Monday. Earlier this year, Lieu introduced the first piece of federal legislation written by an AI system arguing for AI regulation. Just before introducing the legislation, Lieu said:
"You have all sorts of harms in the future we don't know about, and so I think Congress should step up and look at ways to regulate."
Congress and technology
- Despite the bipartisan worries, Schumer shared concerns about Congress' ability to pass legislation on AI. He admitted to facing significant challenges when discussing the technology with other members of Congress. He said:
"It's a very difficult issue, AI, because a) it's moving so quickly and b) because it's so vast and changing so quickly."
- Congress has a history of struggling to regulate emerging technologies, as seen with the internet and social media. Experts say lawmakers missed a critical window for installing guardrails on those technologies and could flounder just the same with AI.
- Law professor Ifeoma Ajunwa, a co-founder of an AI research program at the University of North Carolina, said:
"AI, or automated decision-making technologies, are advancing at breakneck speed. There is this race…yet…the regulations are not keeping pace."
- Ajunwa also pointed to the lack of computer science experts on Capitol Hill, which makes AI lawmaking all the more challenging.
What do you think will happen if AI is not regulated soon?
-Jamie Epstein