
Microsoft President Urges Lawmakers to Control AI
Are you skeptical of Microsoft's push to regulate AI?
What’s the story?
- Microsoft President Brad Smith endorsed a set of regulations for artificial intelligence as the company, like many of its competitors, navigates public and government concerns about the technology's risks.
- Smith laid out his proposal before an audience of lawmakers in Washington on Thursday morning, saying:
“Companies need to step up. Government needs to move faster.”
What is Microsoft calling for?
- The company proposed a set of regulations, including a requirement for an emergency braking system on AI used in critical infrastructure, so that an operator could slow down or shut off the AI system entirely when necessary (a minimal sketch of the idea appears at the end of this section).
- Microsoft also suggested laws clarifying when legal obligations apply to the technology, and Smith recommended labels that make it clear when an image or video was produced by AI. Lawmakers have expressed worries about the latter, saying that fake images and videos can spread dangerous misinformation, and Smith likewise named deepfakes as one of his biggest concerns.
- He and Sam Altman — CEO of OpenAI who testified before Congress last week — endorsed the idea that a government agency should be created to issue licenses to companies with “highly capable” AI models. Smith explained further:
“That means you notify the government when you start testing. You’ve got to share results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”
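To make the proposed "emergency brake" a bit more concrete, here is a minimal, purely illustrative Python sketch. Every name in it (SafetyBrake, ai_decide, apply_action) is hypothetical and comes from neither Microsoft's proposal nor any real system; the point is only that a human-held switch sits between the AI's decisions and the infrastructure it controls, and can throttle or disengage the AI at any time.

```python
# Illustrative only: a human-operated "brake" wrapped around an AI control loop.
import time
from enum import Enum


class BrakeState(Enum):
    RUN = "run"    # AI operates normally
    SLOW = "slow"  # AI actions are throttled so humans can review them
    STOP = "stop"  # AI is disengaged entirely


class SafetyBrake:
    """A switch an operator can flip at any time, independent of the AI."""

    def __init__(self):
        self.state = BrakeState.RUN

    def engage(self, state: BrakeState):
        self.state = state


def control_loop(ai_decide, apply_action, brake: SafetyBrake, throttle_s: float = 5.0):
    """Run the AI controller, honoring the brake on every cycle."""
    while brake.state is not BrakeState.STOP:
        action = ai_decide()        # the AI proposes an action
        apply_action(action)        # the system carries it out
        if brake.state is BrakeState.SLOW:
            time.sleep(throttle_s)  # throttled mode: give humans time to intervene
    print("Brake engaged: AI disengaged, control returned to human operators.")


# Example: an operator (say, from another thread or a console) could call
#   brake.engage(BrakeState.SLOW)  to slow the system down, or
#   brake.engage(BrakeState.STOP)  to shut the AI off entirely.
```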
Who’s responsible?
- Many have questioned the sincerity of AI developers calling for regulation, criticizing the tech leaders for attempting to shift responsibility onto the government. Smith responded that Microsoft was not trying to dodge responsibility for its new technology, pointing to the specificity of his regulatory proposals, and pledged to carry out certain controls regardless of government action. He said:
“There’s not an iota of abdication of responsibility.”
- Smith added that he believes companies should bear the legal responsibility for harm caused by AI systems. He said:
“We don’t necessarily have the best information or the best answers, or we may not be the most credible speaker. But, you know, right now, especially in Washington D.C., people are looking for ideas.”
Are you skeptical of Microsoft's push to regulate AI?
-Jamie Epstein
(Photo credit: Flickr/Web Summit)
AI is also being used in medicine. Feels like there's some good here.
"A new antibiotic, discovered with artificial intelligence, may defeat a dangerous superbug"
https://www.cnn.com/2023/05/25/health/antibiotic-artificial-intelligence-superbug/index.html
From the article ...
"Using artificial intelligence, researchers say, they’ve found a new type of antibiotic that works against a particularly menacing drug-resistant bacteria.
When they tested the antibiotic on the skin of mice that were experimentally infected with the superbug, it controlled the growth of the bacteria, suggesting that the method could be used to create antibiotics tailored to fight other drug-resistant pathogens.
The researchers also tested the antibiotic against 41 different strains of antibiotic-resistant Acinetobacter baumannii. The drug worked on all of them, though it would need to be further refined and tested in human clinical trials before it could be used in patients."
I've never known a tech company that didn't prioritize speed to market and revenue over everything else, like accuracy, ethics, and quality, and AI is no different from any other high-tech product: the belief is that first to market captures market share. How Google and Microsoft have handled the release of their AI products so far is a perfect example of how all AI product releases will go, no different from any other product release.
"In A.I. Race, Microsoft and Google Choose Speed Over Caution"
"In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements."
"Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society."
"The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard."
"When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first,” he wrote. “Sometimes the difference is measured in weeks."
"The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an “absolutely fatal error in this moment to worry about things that can be fixed later.”
https://www.forbes.com/sites/richardnieva/2023/02/08/google-openai-chatgpt-microsoft-bing-ai/?sh=6dc006d24de4
https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html
What really needs to be controlled is the Republican Party. Just take a look at Iowa Gov. Kim Reynolds, a Republican who just passed legislation allowing kids as young as 15 to work in bars serving alcohol. I wonder how this idiot would feel about one of her granddaughters working at a Hooters at the age of 15. This woman is a completely vapid pig, because to her it's not gonna be her kids, it's gonna be kids whose families don't have enough to eat. This woman is a piece of work. What in God's name has happened to the Republican Party? Come on, Governor, would you want your little niece to work at a Hooters? You idiot!
I wish "Maybe" had been an option.
But honestly, all industries need guardrails. I'm sure Microsoft doesn't want to be known as "the company that destroyed civilization." It's a lot easier to navigate new technology if you have a map.
CROSS POST
We MUST Watch what they DO, NOT WHAT THEY SAY!
As AI booms, tech firms are laying off their ethicists
Google, Twitch and Microsoft are among the technology companies that are cutting their ethical AI teams. Will these cuts pay off?
By Gerrit De Vynck and Will Oremus
https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/
Microsoft will write the guidelines and regulations and turn them over to Congress. Microsoft will understand the loopholes and take full advantage.
This is like the fox guarding the hen house.
Here's an attempt to use an available AI for the good of a community:
Delaware taps artificial intelligence to evacuate crowded beaches when floods hit
Delaware's low elevation mixed with crowded beaches and limited exit routes make the state particularly vulnerable to massive...
https://apnews.com/article/4023aeebc0b8bf897fd2896fb18ed104
I don't think Microsoft or Bill Gates can be trusted and AI will prove detrimental.
Doubt what any companies say
I trust No company to self-regulate AI or any other system. All they care about is making MORE and MORE money.
Federal regulations are absolutely NECESSARY !
Need more research before regulations.
I really don't know enough about AI, therefore I am skeptical of a huge company pushing for it. I don't think we know enough about what to expect.
There are too many evil ways it can go, totally allowing for fraud. It would be really dangerous not to regulate it.
I've used their products. What does that tell you?
Regulate it for their beo
Need to control some aspects of AI
Why should people be concerned about Microsoft's concern about AI... shouldn't we all be concerned about this slippery slope and stay aware of what's happening in this new field?
No. AI needs to be regulated or we risk going the same route as the damn social media crap has gone: the dark side. Yes, I know social media can be good, but it can be and has been used for negative purposes, and AI will go the same way if not regulated. You can't trust a corporation to self-regulate when $ is involved. Corporations want to maximize profits and generally only look at the short term, not the long term. Profits are the holy grail of corporations, and they tend not to care how they make those profits.