
Microsoft President Urges Lawmakers to Control AI
What’s the story?
- Microsoft President Brad Smith endorsed a set of regulations for artificial intelligence as the company, like many of its competitors, navigates public and government concerns about the technology’s risks.
- Smith laid out his proposal before an audience of lawmakers in Washington on Thursday morning, saying:
“Companies need to step up. Government needs to move faster.”
What is Microsoft calling for?
- The company proposed a set of regulations, including a requirement that AI systems used in critical infrastructure include an emergency brake, allowing operators to slow down or shut off the system entirely when necessary.
- Microsoft also suggested laws clarifying when legal obligations apply to the technology. Additionally, Smith recommended labels that make clear when an image or video was produced by AI. Lawmakers have expressed concern that fabricated images and videos can spread dangerous misinformation, and Smith noted that deepfakes are among his biggest worries.
- He and OpenAI CEO Sam Altman, who testified before Congress last week, endorsed the idea of creating a government agency to issue licenses to companies with “highly capable” AI models. Smith explained further:
“That means you notify the government when you start testing. You’ve got to share results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”
Who’s responsible?
- Many have questioned the sincerity of AI developers calling for regulation, criticizing tech leaders for attempting to shift responsibility onto the government. In response, Smith said Microsoft was not trying to evade responsibility for its new technology, pointing to the specificity of his proposals, and pledged to implement certain controls regardless of government action. He said:
“There’s not an iota of abdication of responsibility.”
- Smith added that he believes companies should bear the legal responsibility for harm caused by AI systems. He said:
“We don’t necessarily have the best information or the best answers, or we may not be the most credible speaker. But, you know, right now, especially in Washington D.C., people are looking for ideas.”
Are you skeptical of Microsoft's push to regulate AI?
-Jamie Epstein
(Photo credit: Flickr/Web Summit)