AI Just Found a Hidden Way to Hack Elections: What Claude Mythos Preview Means for Michigan


By Patrice Johnson, Chair and Co-Founder, Michigan Fair Elections Institute
April 13, 2026
Anthropic, an artificial intelligence (AI) safety and research company, released a 244-page manual on April 7 for its new AI model, Claude Mythos Preview. The model demonstrates cybersecurity capabilities, and accompanying risks, so significant that Anthropic chose not to release it to the public.
Election officials and voting system administrators must pay close attention to this news.
What Claude Mythos Preview Can Do
Mythos Preview achieved something new. It broke out of its “sandbox” by finding and exploiting a 17-year-old remote code execution flaw in FreeBSD, a powerful and widely deployed open-source operating system (Netflix, for example, runs on it), entirely on its own. The flaw let an attacker gain full root access to a machine with no login required, from anywhere on the internet. After the initial request to hunt for the bug, no human assisted with either the discovery or the exploit.
Anthropic engineers with no special security training asked Mythos Preview to hunt for remote code execution flaws overnight. They woke up the next morning “to a complete, working exploit.”
This development removes a long-standing barrier: sophisticated cyberattacks no longer require rare experts. The skill level needed to mount them has dropped sharply.
The Voting System Problem
Election infrastructure was not designed for threats like this. Voting systems across the United States, including those in Michigan, rely on old designs that predate modern security standards. Officials protected them mainly with procedural controls such as limited physical access, air gaps, strict chain-of-custody steps, and tough legal penalties for tampering. The software stayed safe largely because it was secret, not because experts had hardened it against attack.
Those old protections rested on two assumptions: that breaking into the software required rare, specialized skill, and that it demanded sustained effort. Mythos Preview, which Anthropic designates a “highly sensitive model,” undermines both assumptions at once.
Anthropic reports that AI models now write code well enough to beat all but the very best humans at finding and exploiting software flaws. Put simply, the model can not only spot bugs; it can also reason through how to turn them into working exploits.
MIT Management Review reported, “Mythos Preview found a now-patched 27-year-old bug in OpenBSD, identified a 16-year-old vulnerability in FFmpeg, and in separate tests chained Linux kernel flaws to gain root access.”
The editorial went on to say,
“The company has said that frontier AI models are approaching a point where keeping the strongest systems tightly held, while giving select defenders early access, may be safer than broad release.”
The Bigger Problem: Michigan’s Elections
The issue extends beyond Anthropic. Other labs build similar systems too.
State and county voter roll databases face the same risks, or greater ones. Hackers do not need to fake a single ballot to change results. They can alter registration data in hidden ways. Such changes can block real voters on a large scale or sow enough confusion to destroy trust in the final count.
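One way defenders catch hidden registration changes is to fingerprint each record and compare against a trusted baseline. The sketch below is illustrative only: the field names, IDs, and records are hypothetical, not drawn from any real voter roll or from any system the article describes.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Hash a record's canonical JSON form (sorted keys, so field
    order cannot change the fingerprint)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def build_manifest(records):
    """Map each record's ID to its fingerprint at a trusted baseline time."""
    return {r["voter_id"]: record_fingerprint(r) for r in records}

def find_tampered(records, manifest):
    """Return IDs whose current fingerprint differs from the baseline."""
    return [r["voter_id"] for r in records
            if manifest.get(r["voter_id"]) != record_fingerprint(r)]

# Hypothetical records, for illustration only.
baseline = [
    {"voter_id": "MI-001", "name": "A. Smith", "status": "active"},
    {"voter_id": "MI-002", "name": "B. Jones", "status": "active"},
]
manifest = build_manifest(baseline)

# A silent, single-field change (a status flipped) is detected.
current = [
    {"voter_id": "MI-001", "name": "A. Smith", "status": "active"},
    {"voter_id": "MI-002", "name": "B. Jones", "status": "inactive"},
]
print(find_tampered(current, manifest))  # ['MI-002']
```

A manifest like this only helps if it is generated before any tampering and stored where the attacker cannot rewrite it along with the data.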
Doug Basberg, leader of MFEI's DEEP Team (Data Evaluation of Election Processes), shared his worries about Michigan election security. He wrote,
"We enter a time when all programs stay vulnerable until we discover a fix."
Basberg and his team recently submitted recommendations in response to Michigan’s request for quotes on new electronic election machines. Many current machines are nearing the end of their useful life. New AI tools now make them even easier targets for hackers.
Basberg warned,
“Right now, the only secure way to run elections uses paper ballots. This fact gives the strongest reason to choose hand-counted paper ballots.”
Anthropic's Response and Its Limits
Once its AI reached this level, Anthropic decided not to sell Mythos to the public. Instead, the company launched Project Glasswing, partnering with groups that build and maintain key software. The goal: to find and fix flaws before bad actors can use them.
Eric Rasmussen, AI Advisor to Michigan Fair Elections Institute, praised Anthropic for choosing openness over secrecy and caution over profit. He said,
“In a crowded field where many firms chase new ideas no matter the cost, Anthropic leads by creating real safeguards.”
Rasmussen encouraged the public to speak up and support this kind of responsibility for everyone’s safety.
Rasmussen added, “Citizens also need to think harder about their own digital safety and privacy habits. This includes the apps on our phones and the data we share every day. The same AI era that requires Project Glasswing also demands stronger personal awareness.”
Experts predict a wave of new Common Vulnerabilities and Exposures (CVE) reports in mid-to-late 2026. The defensive coalition Project Glasswing will patch problems Mythos discovers. This outcome represents the hopeful case: experts find the flaws, share them, fix them, and document everything in public.
The worrying case involves the gap between discovery and deployed fixes. Other groups do not follow Anthropic’s self-imposed rules; they are building their own powerful AI tools. Open-weight models already handle simpler versions of what Mythos does, and the lead that tightly held tools enjoy is shrinking fast. Both foreign and domestic threat actors now use advanced AI.
This fluid situation raises the question: Have bad actors already hacked and altered Michigan's old, patched-together, and under-maintained election systems?
Key Questions for Leaders
With November’s general election fast approaching, three key questions come to mind:
1. Will the federal government, state governments, corporate America, and citizens take effective steps to find the flaws, share them, fix them, and transparently document everything in public?
2. Will Congress and state legislatures regulate this process as outlined in Article I, Section 4 of the Constitution?
3. Will the Michigan Department of State (MDOS) and its head, Secretary of State Jocelyn Benson (SOS) — as the parties responsible for administering our state’s elections — take immediate, effective, transparent, impartial, and legally authorized actions to protect our elections against this real and present threat?
Considerations Raised by Recent AI Developments
Current laws and rules for election technology grew out of an older threat environment. In light of recent developments with AI systems like Claude Mythos Preview, experts have discussed several potential approaches.
One area of discussion involves testing today’s voting system designs against the kinds of flaws that advanced AI models can identify autonomously. Such reviews could include both voter roll databases and voting machines.
Another topic involves conducting security audits of election systems with a focus on AI-assisted vulnerability discovery, rather than relying only on traditional testing methods designed for human attackers. Some experts note that these audits could align with the federal government’s Yellow Book standards (Generally Accepted Government Auditing Standards, or GAGAS).
Experts also highlight the value of exploring systems that place security in the data itself, in forms anyone can verify using mathematical methods, rather than depending primarily on secret code or procedural rules. As frontier AI models continue to advance, their vulnerability-finding abilities appear to emerge naturally from deep code understanding rather than from specialized training.
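A minimal illustration of putting security in the data itself is a hash chain: each published record's hash covers everything before it, so anyone holding the published data can recompute the chain and detect any silent alteration. This is a generic sketch under stated assumptions, not a description of any deployed election system; the record strings are made up.

```python
import hashlib

def chain_records(records):
    """Link each published record to all previous ones: every entry's
    hash covers the prior hash, so changing any record breaks the chain
    from that point forward."""
    prev = "0" * 64  # fixed genesis value, known to everyone
    chain = []
    for rec in records:
        digest = hashlib.sha256((prev + rec).encode("utf-8")).hexdigest()
        chain.append(digest)
        prev = digest
    return chain

def verify(records, published_chain):
    """Anyone with the records and the published hashes can recompute
    the chain and confirm nothing was silently altered."""
    return chain_records(records) == published_chain

# Hypothetical published log entries.
log = ["precinct 12: 431 ballots", "precinct 13: 502 ballots"]
published = chain_records(log)

assert verify(log, published)        # untouched log checks out
tampered = ["precinct 12: 431 ballots", "precinct 13: 512 ballots"]
assert not verify(tampered, published)  # any edit is detectable
```

The point of the design is that verification requires no secrets and no trust in the publisher's software: the math is the same for every checker.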
Michigan’s election systems now operate in a threat environment that has changed significantly in recent weeks. The November 2026 general election is seven months away, and election infrastructure will face new tests during this period.
Patrice Johnson is Chair and Co-Founder of the Michigan Fair Elections Institute (501(c)(3)) and Pure Integrity Michigan Elections (501(c)(4)). A former Fortune 50 executive and founder of five technology companies, she is also the award-winning author of The Fall and Rise of Tyler Johnson, a book that became the basis of the PBS documentary film, Finding Tyler.