
AI CEO Testifies Before Congress
What do you think will happen if AI is not regulated soon?
Updated May 16, 2023
- OpenAI CEO Sam Altman testified at a Senate hearing today, telling Congress that government intervention is "critical to mitigat[ing] the risks of increasingly powerful" AI technology.
- Altman proposed to the committee the creation of a U.S. or international agency that would license powerful AI systems and have the authority to "ensure compliance with safety standards." He said:
"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
- Altman said a new regulatory agency should impose safeguards to block AI models that could "self-replicate and self-exfiltrate into the wild," pointing to worries about AI manipulating humans into handing over control.
- Sen. Richard Blumenthal (D-CT), the chair of the Senate Judiciary Committee's subcommittee on privacy, technology, and the law, believes companies should be required to test their AI systems and disclose the known risks before they're released to the public. He expressed concern specifically about the job market.
What's the story?
- Political leaders are beginning to confront the risks of artificial intelligence as they meet with experts and Silicon Valley chief executives to discuss limitations and regulations.
- Congress and the White House are working to catch up with the rapidly advancing technology and face the public's rising questions and concerns. President Joe Biden, Vice President Kamala Harris, and other political leaders held the first White House meeting about AI since the release of ChatGPT, which has created much public discourse.
- Congress is also taking action in the House and Senate, with a strong push coming from Senate Majority Leader Chuck Schumer, who is gauging the interest of both parties on newly proposed AI legislation.
What are their concerns?
- Earlier this month, the White House expressed concerns directly to the leaders of Google, Microsoft, OpenAI, and Anthropic, urging them to limit the power of the technology. For many, the meeting signified just how much pressure is being put on political leaders to protect the public from the risks of AI. Critics fear the systems are too powerful and could impact economies, geopolitics, and criminal activity. Harris said:
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products. And every company must comply with existing laws to protect the American people."
- Sen. Josh Hawley's (R-MO) primary worry is AI's role in upcoming elections. Hawley, the top Republican on a Senate Judiciary Committee subpanel that will examine AI oversight options during a Tuesday hearing, said:
"For me right now, the power of AI to influence elections is a huge concern. So I think we've got to figure out what is the threat level there, and then what can we reasonably do about it?"
- Sam Altman, CEO of OpenAI, the company behind ChatGPT, will testify before Congress for the first time during Tuesday's hearing.
- In the House, Rep. Ted Lieu (D-CA) is co-leading a bipartisan dinner hosting Altman on Monday. Earlier this year, Lieu introduced the first piece of federal legislation written by an AI system arguing for AI regulation. Just before introducing the legislation, Lieu said:
"You have all sorts of harms in the future we don't know about, and so I think Congress should step up and look at ways to regulate."
Congress and technology
- Despite the bipartisan worries, Schumer shared concerns about Congress' ability to pass legislation on AI. He admitted to facing significant challenges when discussing the technology with other members of Congress. He said:
"It's a very difficult issue, AI, because a) it's moving so quickly and b) because it's so vast and changing so quickly."
- Congress has a history of struggling to regulate emerging technologies, as seen with the internet and social media. Experts say lawmakers missed a critical window for installing guardrails on those two technologies and could flounder just the same with AI.
- Law professor Ifeoma Ajunwa, a co-founder of an AI research program at the University of North Carolina, said:
"AI, or automated decision-making technologies, are advancing at breakneck speed. There is this race…yet…the regulations are not keeping pace."
- Ajunwa also pointed to the lack of computer science experts on Capitol Hill, which makes AI lawmaking all the more challenging.
What do you think will happen if AI is not regulated soon?
-Jamie Epstein
We really have no way to predict what could happen when we develop intelligent machines that surpass human intelligence. While this sounds like SciFi, we already have machines that can beat human champions of chess, Jeopardy, Go, and poker. What happens when they create machines that can kill more efficiently for war, or can perform all our jobs? What happens to people?
Even before we reach that advanced state, we have a variety of AI applications that make regulation complex because:
(1) AI can be not only a standalone product but also embedded in other products.
(2) Bias in algorithms, such as automated credit card approvals that discriminate against a population (e.g., women or young people), is amplified beyond human decision-making by automation's nationwide or global reach, inviting class action lawsuits.
(3) Trust and scope of use matter. AI used for photo focusing is of less concern than AI used for legal or medical decision-making, where failures have far more serious outcomes.
(4) Scale of geography and markets matters for nationwide or global applications. If you're developing local applications for COVID restrictions, weather, or product prices and discounts, the local situation may differ vastly from a national or global average.
(5) Compliance spans regulations (local, state, national, international) and organizations (businesses, non-profits, governmental, etc.), such as consumer protection rules.
(6) Transparency is needed so results can receive human review, decision-making, and modification. Systems need to explain their rationale, risks and benefits, trade-offs, and lessons learned.
(7) Will continuous learning be allowed, and how frequently should the changes it produces be reviewed?
(8) Is collecting biometrics (fingerprints, voiceprints, facial recognition, etc.) an invasion of privacy, and how do we make sure it's not used for cyberattacks, identity theft, and the like?
(9) Should applications be validated to make sure they do what they say they're doing? How do we handle applications from other countries, especially adversarial countries that may embed code or firmware that performs hostile actions?
https://www.causes.com/comments/79459
https://www.causes.com/comments/79617
https://www.causes.com/comments/79939
https://www.causes.com/comments/80662
https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/?amp
https://www.brookings.edu/research/ai-needs-more-regulation-not-less/?amp
https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/
https://hbr.org/2021/09/ai-regulation-is-coming
https://issues.org/perspective-artificial-intelligence-regulated/
Artificial intelligence will cause disruptions and deaths whether it is regulated or not. Its failures will be nearly impossible to predict. And our political leaders are the last people I would trust to deal with this technological problem.
A bit of the crazy defining the 'state' of using and exploiting AI...
"The State of AI and the AI State | Frontline News"
https://frontline.news/post/the-state-of-ai-and-the-ai-state
And while the US is far from 'ANY' action, the EU has been working on SOME regulation/thought, and will probably set some basic rules, as even US AI innovators aren't going to want to write 'different' code for each country!!
"The European Union’s Artificial Intelligence Act, explained"
https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/
I'm pretty sure all this is above ANY of my reps' limited intelligence, but maybe their staff can help⁉️‼️🤣🤣‼️😂 . And who did you think I was talking to?!
Politicians are clueless about technology and would only make things worse.
Congress (especially the Senate) is known for moving slowly on evolving technologies and topics, and still hasn't found good solutions for social media decades after it started.
I hope they can do a much better job with AI by bringing together true experts in the field, asking sharp and pertinent questions, and proposing forward-thinking and clear legislation to prevent the abuse and misuse of AI before it's too late to put the horse back in the barn.
Knowing how the Senate typically operates, I have little hope.
A. I. Needs a Blue Ribbon Commission.
Like others here and on other platforms, I do not want Congress to wade into and get lost in the field of AI. Frankly, many lack not only the needed academic and experiential backgrounds but the intelligence as well.
I realized that, if possible given the political acrimony that exists, the best we can ask for is a Blue Ribbon Commission made up of PhDs in the relevant areas of Computer Science, Economics, and Philosophy. I would be open to having members from other countries.
We need to ask deeply rational questions, as it stands to reason that AI will impact numerous if not all areas of our lives.
It might start with defining and implementing ethical systems, but it needs to go further. Recently, an AI leader was reported to have asked what we do when AI becomes smarter than us. That one is easy: we deal with it daily, often without realizing it. I ask, what do we do once AI achieves self-awareness and demands self-determination?
We need to regulate AI, a new technology for the 21st century, to move our country forward.
It's already being used within the government to cause anxiety and distress to Americans.
When you take away freedom of speech and make laws that news broadcasts have to be approved and governed by you, then introduce AI with that freedom, you have us simply and completely controlled. Not only will this not be 'The Land of the Free,' but you will make us sister wives with North and South Korea and China. You are taking this country back in time. This is not progress. You're setting this country up to fail even more!!!
We joke about what we go through trying to talk to a human when we call a corporation or large business now. These answering systems can only do what they are programmed to do, and their aim is to handle your call without it going to a person! Ask yourself how many times you've found yourself repeating "Representative" over and over! AI is a machine, and machines break down! If we give AI more power, will it begin to think it's better than people? This concept has been proposed in movies and stories for years. In "2001: A Space Odyssey," HAL 9000, an AI, takes over, since it was programmed to do too much! In its thinking, it found humans lacking compared to itself. No machine, no matter how well it's programmed, should exist without regulations! Even humans can't handle all situations without periodically getting angry. Why would we expect an AI to be better, when it's programmed by a human? It must be regulated for our own protection!
It could take us all down.
Get going on regulations as you are off to a LATE start.
We definitely do not need any more DISINFORMATION!!
Can't let it get out of control.
"Regulated soon"...a bit too late. If there wren't parameters and barriers as PART of the creation of A.I. right from the start, it's already beyond actual control. If one is planning on raising livestock, for instance, setting up fences and housing comes BEFORE acquiring said livestock, not AFTER. Most "fears" of A.I. comes from science fiction, and isn't necessarily logical or reasonable in the long run, but creating A.I. programs designed to do harm in this digital age is already here, if that is what's been given its "purpose". I would hope, though, that any coder designing an artificial inteligence program to gather information and "learn" from it, eventually come to the same philosophical conclusions the wisest of humans have developed---that the overall health and wellbeing to the planet, and all on it, INCLUDING them"selves", is the highest of values. Even if humans just become useful servants....
AI is totally dependent upon the individual(s) that input the data. It will be impossible to gain unbiased information. Allowing artificial intelligence is not a wise decision.
AI is a boogeyman according to some in government, but I think the problem is the government having a means to politicize AI and abuse it, or neglecting security and cyber theft. As long as it is we the people who control AI, I think the government can provide oversight of the private sector and maintain its distance......
I've sorta been following AI/Machine Learning/ChatGPT for a while. Will Terminator or Skynet take over? No.
The worrying issue is that countries like Iran, North Korea, Russia, and China would utilize these amazing technologies to spread lies, mimic famous personalities, etc.
There are four other troubling issues: consumer privacy, biased programming, danger to humans, and unclear legal regulation. These can happen in any country.
Programmers need to set up guardrails in all the programmed AI models so we all can use them comfortably. Ergo, my dear Congressmen, do your duty and ensure that this variety of AI is safe to use.
We are battling absurd lies now; it will only get far worse. It's going to be used for evil. We need to protect ourselves.
with consequences!
Right now AI can make a video of anyone doing or saying anything.
We won't be able to trust video soon.
Sam Altman, during his testimony yesterday, told the Senate that AI needs to be regulated.
I sure hope they listen and take this seriously.
The creators of AI have already told us it can be used to spread disinformation, lies, etc. It needs to be regulated to prevent its use as a political weapon. We've already seen what happens when social media is allowed to run amok; just imagine what will happen if AI is allowed to go in that same direction, and it will unless guardrails are put in place now.
I'm predicting the kind of confusion that has, historically, accompanied the popularization of any sort of new tech.
M, I responded to your comment.
Also you may want to do a search for Blue Ribbon Committees / Commissions / etc.
In no case can the committees enact anything.
Our govt has always been way behind the 8-ball in regulating the tech industry, and we all see the mess that has created.
We kind of already know what will happen. It will be used to try to manipulate and, in some cases, cause harm to society. So far the worst issue I've encountered was ChaosGPT, which was literally tasked with wiping out humanity by some really inconsiderate individual, and it actually tried to find a way to do it. Here are my sources: https://futurism.com/ai-destroy-humanity-tried-its-best; https://www.foxnews.com/tech/ai-bot-chaosgpt-plans-destroy-humanity-we-must-eliminate-them. What is worse, to my knowledge no one knows who actually did this. You would think this would be something we would be prepared to handle, but it appears we really were not. Let's hope the government finds the individual who did this soon and holds them legally responsible for what they did.
AI needs to be regulated. Women's bodies and the LGBT population and books should not.
The development and use of AI is far too much, far too fast. It needs to be heavily regulated before it gets even more out of control. Nobody involved can be trusted to self-regulate, as has been the case with any other venture of this sort.