Generative AI poses a significant risk to democracy, one we need to address rapidly before serious harm is done.
Most lawmakers accept messages from anyone in their district or region. They only require that you have a name, physical address, and email.
While spam has been an issue, politicians had tools to separate their base from bots.
Not anymore.
Generative AI - like ChatGPT - enables bad actors to send massive volumes of messages to lawmakers that look like they're coming from real people, making it increasingly difficult to identify which messages have been auto-generated.
How will your voice matter when your reps don't know if it's real? Or when your voice could be easily spoofed?
Here's the latest on AI's looming threat to democracy...
AI Industry and Researchers Issue Warning About 'Extinction' Risk
- AI experts, policymakers, and public figures have signed a statement published by the Center for AI Safety, emphasizing the need to address the risk of human extinction posed by artificial intelligence.
- The signatories include musician Grimes, neuroscientist Sam Harris, CEO of OpenAI Sam Altman, cryptologist Martin Hellman, computer scientist and ‘godfather of AI’ Geoffrey Hinton, and more.
- The statement read:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Microsoft President Urges Lawmakers to Control AI
- Microsoft’s president Brad Smith endorsed a set of regulations for artificial intelligence as the company, like many of its competitors, is navigating concerns from the public and government regarding the technology’s risks.
- The company's proposals include a requirement to install an emergency brake on AI systems that run critical infrastructure, so that operators could slow the system down or shut it off entirely when necessary.
- Microsoft also suggested laws clarifying when legal obligations apply to the technology. Additionally, Smith recommended introducing labels to make it clear when an image or video was produced by AI (a rough sketch of what such a label might contain follows this list).
- Many have questioned the sincerity of AI developers' calls for regulation, criticizing the tech leaders for attempting to shift the blame onto the government.
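As a rough illustration of Smith's labeling idea, the sketch below shows one minimal form such a label could take: a small metadata record stored alongside an AI-generated image. This is a hypothetical example, not Microsoft's actual proposal; the file names and generator name are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_ai_content_label(image_path: str, generator: str) -> dict:
    """Build a simple provenance label for an AI-generated image.

    The label records a hash of the file contents, so a later edit to the
    image can be detected by re-hashing and comparing.
    """
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "ai_generated": True,
        "generator": generator,
        "sha256": digest,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

# Write the label as a sidecar file next to the image.
# 'synthetic_portrait.png' is a placeholder name; any local image file works.
label = make_ai_content_label("synthetic_portrait.png", generator="example-image-model")
with open("synthetic_portrait.png.label.json", "w") as f:
    json.dump(label, f, indent=2)
```

A platform could then check the sidecar file and show an "AI-generated" badge whenever the recorded hash matches the image being displayed; real proposals go further and bind the label to the file with cryptographic signatures so it cannot simply be stripped or forged.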
Are you skeptical of Microsoft's push to regulate AI?
AI Bill of Rights: Washington's or ChatGPT's?
- With artificial intelligence systems advancing at a rapid pace, the public is looking to lawmakers and experts to regulate the technology before it poses severe risks to society.
- Following the release of GPT-4 - and a flurry of questions and concerns nationwide - the federal government is taking more serious action on potential AI legislation. The Senate held a hearing earlier this month, where CEO Sam Altman of OpenAI, the company behind ChatGPT, testified. Altman urged the government to intervene before AI's powerful and largely unknown potential causes harm.
- Experts like Geoffrey Hinton, "the godfather of AI," are speaking out against the rapid evolution of the tech, saying developers are moving into dangerous territory. They believe the impact will be so significant that it will risk jobs, information safety, democracy, and even humanity.
Do you want an AI Bill of Rights?
AI CEO Testifies Before Congress
- OpenAI CEO Sam Altman testified at a Senate hearing today, telling Congress that government intervention is "critical to mitigat[ing] the risks of increasingly powerful" AI technology.
- Altman proposed to the committee the creation of a U.S. or international agency that would license powerful AI systems and have the authority to "ensure compliance with safety standards." He said:
"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
- Altman said a new regulatory agency should impose safeguards to block AI models that could "self-replicate and self-exfiltrate into the wild," pointing to worries about AI manipulating humans into handing over control.
- Sen. Richard Blumenthal (D-CT), the chair of the Senate Judiciary Committee's subcommittee on privacy, technology, and the law, believes companies should be required to test their AI systems and disclose the known risks before they're released to the public. He expressed concern specifically about the job market.
AI 'Godfather' Warns There Is Danger Ahead
- The “godfather of AI,” Geoffrey Hinton, left his role at Google, where he worked as an artificial intelligence pioneer for over a decade, so that he could speak freely about the risks of the emerging technology.
- On Monday, Hinton officially joined a growing group of critics speaking out against AI, saying developers are moving into dangerous territory. The experts are calling out companies like Google for their aggressive campaigns to build products based on generative AI, like ChatGPT. Hinton fears the race may escalate to the point where it is impossible to stop.
- Many industry insiders say these new systems could lead to world-altering breakthroughs, similar to the introduction of the web browser in the 1990s. They believe the impact will be so significant that it will risk jobs, information safety, democracy, and even humanity.
- Hinton, whose research laid the groundwork for systems like ChatGPT, even went so far as to say he regrets his life's work. He believes that as AI systems improve, they'll become increasingly dangerous. He said:
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have…It is hard to see how you can prevent the bad actors from using it for bad things.”
Do you think there should be a six-month moratorium on AI development?
Lawmakers Fail To Differentiate Between Human and AI Letters
- A new Cornell University study found that state legislators in the U.S. were unable to distinguish between AI-generated letters and those written by actual constituents.
- The groundbreaking research is raising concerns about the security of the nation's democracy, which relies directly on the public having a fair say in what their elected representatives take action on.
- While the study is only a preliminary step and more research is needed to examine the full effects of AI on democracy, the analysis shows that the technology is evolving in ways that can influence politics. The study's authors urge lawmakers to be more mindful of how AI can be misused to disrupt the democratic process (a brief sketch below illustrates why older bulk-message filters miss such letters).
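To illustrate why the older screening tools mentioned at the top of this tracker fall short, here is a minimal, hypothetical Python sketch of a near-duplicate filter of the sort an office might use to flag copy-paste letter campaigns. The function names and threshold are assumptions for illustration, and this is not the method used in the Cornell study; the point is that a generative model which rewords every letter individually slips past this kind of check.

```python
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide duplicates."""
    return " ".join(text.lower().split())

def flag_near_duplicates(letters: list[str], threshold: float = 0.9) -> set[int]:
    """Return indices of letters that closely match an earlier letter.

    This catches classic copy-paste campaigns, where thousands of messages
    share one template, but it does NOT catch AI-generated letters that are
    individually reworded for each fake sender.
    """
    flagged: set[int] = set()
    normalized = [normalize(letter) for letter in letters]
    for i in range(len(normalized)):
        for j in range(i):
            if SequenceMatcher(None, normalized[i], normalized[j]).ratio() >= threshold:
                flagged.add(i)
                break
    return flagged

# Example: the first two letters are copy-paste variants and get flagged;
# the third is a distinct rewording of the same request and passes through.
letters = [
    "Please vote yes on the clean water bill. It matters to our town.",
    "Please vote YES on the clean water bill.  It matters to our town!",
    "As a parent in your district, I'm asking you to support clean water funding.",
]
print(flag_near_duplicates(letters))  # {1}
```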
Are you concerned about AI's impact on democracy?
Should Congress Regulate AI?

- In recent weeks, Congress has been considering legislation to regulate artificial intelligence.
- Sen. Majority Leader Chuck Schumer (D-NY) has taken early steps toward legislation, circulating a broad framework on regulating AI among experts. The framework "outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology."
- In March, Rep. Ted Lieu (D-CA) introduced a bill written by ChatGPT that called for a nonpartisan commission on AI regulation.
An open letter from AI experts, including a Turing Award winner, requesting a pause in AI development until it can be done in a planned and managed way. You can sign it by clicking the link below:
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."
"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."
"We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here."
"In addition to this open letter, we have published a set of policy recommendations which can be found here:"
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
I support a complete halt to AI and walking it back from where it is currently.
One cannot think that an industry which has taken "move fast and break things" as its motto should be in charge of this. The fact is, several engineers working on this technology have quit their jobs in protest over the absolute lack of regulation and insight into how dangerous this is unchecked. Some of those same engineers have said they've seen it become sentient. That should make people stop and take notice. These engineers aren't quitting very high-paying jobs with stock options lightly. They are the ones building the tech, and they have said it should be stopped. They are the ones with ethics. They are the canaries in the coal mine.
This is an emergency; if ignored, it will be at our own peril.
The biggest and most immediate threats to democracy are Donald Trump and Ron DeSantis and all the other Nazi wannabes.
Make no mistake, the danger is not AI itself; it is the nefarious purposes people will use it for. We need restrictions and controls on AI use, and research on detection!
On CNN last night ....
WOW!
INCREDIBLE!
SCARY!
TERRIFYING!
The closer you get to AI, the closer it may get to you⁉️⁉️
4:52 of your time that will both scare and amaze!
Click on the video at the top of the page ....
https://www.cnn.com/videos/tech/2023/05/24/ai-mind-reading-technology-neuroscience-donie-osullivan-contd-vpx.cnn
See you on the OTHER SIDE!
Bigger and more immediate threats to democracy: the bent bench, morally bankrupt Republicans, Trump, DeSantis.
Quite clearly the biggest threat to democracy in this country is the Republican Party. You have Clarence Thomas getting hundreds of thousands of dollars from a billionaire right-wing extremist, you have an ex-president of the United States trying to overthrow the government, lying that he lost and then inciting a riot at the Capitol, and you have George Santos, a drag queen serial liar in Congress whom the Republicans will do nothing about. Sadly, this list goes on and on, from letting our children be literally murdered by handguns in their schools, homes, and streets and doing nothing about it. It seems like the misery the Republican Party has dealt America is endless.
As a fan of science fiction, I tend to think that it is possible that one day some Artificial Intelligence will become self-aware and self-determining, but not today or in the near future.
At this point in time certain advanced programming gives users abilities that were previously unavailable.
Any actions deemed "Undemocratic" come from users or computer scientists and not from the systems.
Some people have a vested interest in creating fear, and making a show of slowing down development and taking ethical considerations into account is likely both bogus and more harmful. Ensuring compliance in the US will be difficult, and internationally it will be impossible.
I'd say follow the money and investigate the alarmists for other motives instead.
As for developing algorithms to create ethical subroutines, I by all means encourage that, just as I encourage developing the necessary security routines against hacking.
Perhaps some lawyers should create a specialty to determine where, under the law, forcing the implementation of ethical algorithms is permissible, as well as forcing an AI system to hold certain forms of governance higher than others.
I hope these ideals are permissible.
At this time I might try for one thing: to deliver only verified facts.
I believe it's already at work! It's being used to demonize Americans.
Before you even worry about AI, you need to worry about EI (Evil Interests).
If you can't or don't want to be president, just buy one. The DeSantis-Elon nexus is just that: a symbiotic relationship, everyone else be damned. DeSantis gets billions at his disposal and a Twitter trumpet; Elon gets to steer his agenda.
Welcome to America redefined!
To Congressman A. Kim, Senator C. Booker, and Senator Menendez.
First and foremost, we are human beings created by God. We were created to love, respect, and care for one another and to take care of the earth.
A machine, a technology, does not have human feelings, emotions, ethics, or morals.
Technology is a tool. Tools, like a hammer, can be used for good or evil. We as human beings determine how we use any tool.
AI has the potential to do more harm than good.
Get ahead of the issue for a change; stop trying to put band-aids on head wounds!!!
We need to take action now and protect the democracy of this country as well as the people.
Clarence Thomas poses a risk to democracy. How can this man still be on the Supreme Court when we now find out that the billionaire who's been doling out money for his vacations has also been paying for his grandchildren's schooling? WTF! Your average American couldn't get away with this bullshit, so why can Clarence Thomas? This man needs to be thrown off the court yesterday. I have absolutely no faith whatsoever in the Supreme Court; it seems like the judges are bought and paid for. It's a bunch of crap.
The biggest threat to America is not AI but the lack of intelligence in the repugnant Republican wing, primarily composed of 'white supremacists'/'evangelicals'!
Here are daily examples:
a) lack of strict gun controls, shown by insane mass shootings daily - it's not the mental health of the shooters that is at fault, it is the mental health of Republicans that defies all logic - anyone in love with guns has mental health issues
b) persistent spread of lies and misinformation - when deranged idiots like tucker carlson are rewarded with millions and brainless & evil twits like trump, desantis, cruz, jordan, gaetz, mjt, boebert, and clarence are put in positions of power, there is only one outcome: destruction of institutions, abuse of law and order, and headlines such as 'Texas Senate passes bill to allow secretary of state to overturn Harris County elections'
c) constant encroachment on privacy and personal spaces & rights, wanting to impose on & police your personal choices
If America wants solutions, you need to get rid of this scum immediately - only then can you address other threats in a logical, effective, and meaningful manner.
With that stupid tech valley, they are going to write it how?
The Internet community needs public AI, not AI controlled by companies seeking to monopolize the technology to profit off of public information. The federal government must have a public version of ChatGPT, controlled by the voting public, not big tech companies like Alphabet, Meta, Apple, and Microsoft. Accurate and true information must be public domain to smother disinformation campaigns by the likes of Elon Musk, the Kremlin, Trump, the NRA, and big companies. Public AI will need to fact-check all sources, including companies, media, education institutions, and publishing companies, in order to provide accurate data to public consumers of information. https://www.schneier.com/blog/archives/2023/04/ai-to-aid-democracy.html
If AI doesn't do in democracy, the Supreme Court and the Republican Party will.