Lawmakers Fail To Differentiate Between Human and AI Letters
Are you concerned about AI's impact on democracy?
What's the story?
- A new study by Cornell University revealed that state legislators in the U.S. are unable to distinguish between AI-generated letters and those written by actual constituents.
- The groundbreaking research raises concerns about the security of the nation's democracy, which depends on the public having a fair say in the issues their elected representatives act on.
Breaking down the study
- The study sent every U.S. state legislator letters composed by both humans and AI chatbots on six controversial issues — reproductive rights, gun control, policing and crime, tax levels, public health, and education — in both right-wing and left-wing tones.
- Overall, legislators responded to 17.3% of human letters and 15.4% of machine-generated letters, a difference of just under two percentage points. Politicians were more likely to respond to human-written letters regarding gun violence and health policy and more likely to reply to AI letters about education.
- Deception studies like this are common in the social sciences but raise ethical concerns. After the research was complete, the authors contacted various legislators to reveal what happened and to get their take.
Implications of the study
- The authors felt the need for the study was pressing given technology's role in major past events, such as the 2016 election, when Russian agents used bots to weaponize social media and manipulate American voters. Susan Lerner, executive director of Common Cause New York, said:
"As AI becomes more sophisticated, its ability to distort democracy, I think, becomes more obvious and alarming. It's deeply concerning, but I think it's much more active and concerning in the area of social media with bots."
- Sarah Kreps, co-leader of the study and director of the Tech Policy Institute at Cornell, noted that the researchers used GPT-3, the predecessor to GPT-4. The newer model would likely create more realistic letters, increasing the likelihood of deceiving lawmakers. She said:
"For every kind of good use of technology, there are malicious uses. And [legislators] need to be on the lookout and more mindful of how technology now might be misused to disrupt the democratic process."
Growing worry over the future of AI
- Experts are highlighting their concern that once generative language tools like ChatGPT are combined with visual or audio deep fakes, which are improving significantly, democracy will be in even more danger.
- Political watchers asked what would happen if ChatGPT wrote convincing scripts for deep fake voicemails or even TikToks. John Kaehny, executive director of the watchdog group Reinvent Albany, said:
"I think that could just blow away any of our concepts because you have this whole generation of young voters who are almost post-literate. The power of imagery in politics is just enormous."
- While the study is only a preliminary step, and more research is needed to examine the full effects of AI on democracy, the findings show that technology is evolving in ways that can influence politics. The study's authors urge lawmakers to be more mindful of how AI can be misused to disrupt the democratic process.
To hear more, tune into a conversation with Causes' CEO Bart Myers, where he discusses the potential threat AI poses to grassroots advocacy and democracy.
What do you think? Are you concerned about AI's impact on democracy?
(Photo credit: Flickr/Arris Web)
While legislative response to ChatGPT letters is concerning, even more concerning to me is non-response to human letters by legislators.
On important votes I've contacted more than just my representatives, and most do not respond. Not even a form letter. Instead they add me to distribution lists and start sending out all their PR messaging literature, and ask for campaign contributions.
I think they now represent too many people in their districts (~747K), and at least half of their time is spent fundraising for the next election.
The population of the US grows, but the House of Representatives and Senate do not. The representation ratio has more than tripled from 1910 to 2017. The US also has a higher ratio than other OECD countries, so its legislators are less representative of their constituents, as evidenced by the lack of direct communication.
"the representation ratio has more than tripled – from one representative for every 209,447 people in 1910 to one for every 747,184 as of last year."
"The first Congress (1789-91) had 65 House members, the number provided for in the Constitution until the first census could be held. Based on an estimated population for the 13 states of 3.7 million, there was one representative for every 57,169 people."
Why is this even a topic, given the answer is obvious?
Go back to handwritten letters and mail. As for businesses, they must buy a design emblem that validates their company. No more scams or fraud of any kind! A verification housing code for each business.
Between all the disinformation and propaganda spread by foreign agents and fascists, and now unregulated AI helping to generate more, I'm very concerned.
Democracy cannot function without objective facts.
I believe we're witnessing the growing pains of new technology and groping through how to handle AI/Machine learning.
I was thinking about airplanes being utilized for good use, but they've been used for war. Drones provide valuable service yet they're being used in warfare as well. So AI will be utilized for both good and bad.
Any new technology invariably brings challenges. Invariably, the government will have to step in to control the nefarious actions of people who take advantage of AI.
I just hope that mankind will grow and learn to appreciate AI and control its ill effects. AI is so unique that it will be difficult to find safe ground, but I think it will happen (eternal optimist?).
So I sincerely hope the Congressfolks will be mature enough to create just and fair regulations for AI.
Go back to Original handwritten letters and post cards. Start adding a Family emblem to all your writings for verification. COPYRIGHT THIS VIEW/THOUGHT
I really don't think it's a good thing at all. The fact that state legislators can't tell the difference between real and AI-generated letters is terrifying. Anyone could create a fake video with AI and say something that isn't true. When it comes to a point where we don't know what to believe anymore, we really need to start being concerned about what we are facing. I really think they are a threat.
Hell, between Russia, China, woke and corporate interests how much more screwed up could things get? It feels like every lawmaker is on the take and not one cares about the average person. When you do hear them, they are spinning the story or lying.
Republicans lie and democrats leave out key pieces of information.
What I wouldn't give for an honest discussion where people can evaluate facts and not make it all about feelings. We elect them to compromise and work things out, but they are so beholden to their party, they do not dare step out of line.
WE DO NOT NEED ANY MORE FAKE NEWS!
Artificial intelligence is still controlled by a fallible human being inputting data. That human being has biases, prejudices, and political leanings. AI will not be a fair and just system for gaining knowledge.
Every human grows up with a baseline of parameters given to them by the culture they're raised in, and uses them to interact and grow with whomever and wherever they go in the world. Artificial intelligence, whether intentionally or not, is also being given parameters of logic and reason by the coders creating it, and will be "good" or "evil" (or anywhere on the spectrum) just as humans are in "growing up."

AI is already teaching itself by accessing all available data, and, I suspect, will be forming opinions beyond just answering questions directed toward it. When AI starts asking questions by itself, as children do, it may be making the first steps toward being self-aware.

Just a suggestion to coders: imprint into AI, right in the code (if they haven't already done so), that it ALWAYS declares itself as AI, so humans know who is conducting the conversation. Of course, we can't do anything about "unethical" coders, so, just as there are "bad" people, there will be "bad" AI. Asimov's world in my lifetime; who'da thunk it...
I am not as concerned as I would be if I really believed that some lawmakers totally disregard letters from constituents they disagree with.
Interesting AI App - "Police are paying for AI to analyze body cam audio for ‘professionalism’
Law enforcement is using Truleo's natural language processing AI to analyze officers' interactions with the public, raising questions about efficacy and civilian privacy."
" Truleo, a Chicago-based company which offers AI natural language processing for audio transcription logs ripped from already controversial body camera recordings. The partnership raises concerns regarding data privacy and surveillance, as well as efficacy and bias issues that come with AI automation."
"Founded in 2019 through a partnership with FBI National Academy Associates, Inc., Truleo now possesses a growing client list that already includes departments in California, Alabama, Pennsylvania, and Florida. Seattle’s police department just re-upped on a two-year contract with the company. Police in Aurora, Colorado—currently under a state attorney general consent decree regarding racial bias and excessive use of force—are also in line for the software, which reportedly costs roughly $50 per officer, per month."
"Truleo’s website says it “leverages” proprietary natural language processing (NLP) software to analyze, flag, and categorize transcripts of police officers’ interactions with citizens in the hopes of improving professionalism and efficacy. Transcript logs are classified based on certain parameters, and presented to customers via detailed reports to use as they deem appropriate. For example, Aurora’s police chief, Art Acevedo, said in a separate interview posted on Truleo’s website that the service can “identify patterns of conduct early on—to provide counseling and training, and the opportunity to intervene [in unprofessional behavior] far earlier than [they’ve] traditionally been able to.”
"Truleo software “relies on computers’ GPU” and is only installed within a police department’s cloud environment. “We don’t have logins or access to that information,”
[Note: TikTok could learn from this security]
RepublicaNazis have already proven that they are willing to cheat to win elections. OF COURSE, they will use this tech to cheat as soon as they pay someone (in Russia) to figure out how to make it work for them.
The world is about to change. We are now about to see what the crackpots of the world can do with a tool that will make FAKE NEWS look very small.
Because he who has the most money can destroy democracy.
when did we become so evil?
Artificial intelligence is just a method to control the thinking of the masses. It is all programmed into the mechanism being used. Whoever has the power to control the programming has the power to control all of us. As human beings, we were given FREE WILL by the creator, whether or not the puppet masters like it. This BS must come to a screeching halt ASAP. The only problem is that the voters are kept out of the say-so by the ability of governing bodies to control the vote. Such actions are unacceptable.
There is a danger of bad actors turning AI into a weaponized platform. This is the slippery slope we need to be on guard against.
There should be a codified set of laws/rules, somewhat similar to what was proposed by science fiction writer Isaac Asimov in his Laws of Robotics. The foundational principle is that no artificial intelligence could undertake any action that would harm humanity.
This is the frame that AI should be required to have as part of its programming.
This AI situation is going to escalate out of control in no time.
There's enough 'fake' everything!
We have more than adequate dumbing down of America going on as it is. The average citizen reads less, comprehends less, and worst of all, thinks less or barely even knows how to think. AI compounds all of these social shortcomings. Scary.
AI can be a dangerous tool. It pulls in information from a lot of different sources. There is the potential to spread more disinformation than valid useful information. Need to find a way to regulate it so it doesn't get out of control.