AI-Enhanced Terrorism

Mar 30, 2023 by Andrew Lee

With the breakneck pace at which publicly available AI technology has been released in the past year, there have been a lot of questions about ethics. How do these technologies enable bad actors? How will society handle the potentially rapid displacement of jobs? Should more powerful AIs continue to be developed at the pace we’ve seen? A recent petition, signed by a lengthening list of industry giants, says no. Despite the weight of the signatures, I question the influence it will have (as I do, frankly, for the majority of well-meaning petitions). Regardless, it’s clearly a signal that a lot of smart people are concerned about AI’s effect on the future. Misinformation, propaganda, and scams, to name just a few techniques bad actors might employ, seem poised to be super-powered by AI in the very near future, if they aren’t already.

I think there are reasonable arguments for why AI could make the world a better place, but in this short post I want to bring up an aspect of that future that scares me, one we may already be glimpsing today: AI-enhanced terrorism.

Coming on the heels of the tragedy of the Nashville school shooting, yesterday there were widespread hoax school-shooting calls across the country. Here are articles from Pennsylvania, Massachusetts, and Utah (my home state). The hoax 911 calls had the sounds of gunshots in the background and whispered pleas into the phone: the caller is scared, kids are being shot, please help, “I’m at blank high school,” with specific details about victims, then a hang-up. This hits very close to home for me, both geographically and because my wife was one of the 911 dispatchers receiving these calls and sending the officers. It was a very tough day for her, even as strong as she is. Groups of officers rushed to schools, schools went into lockdown, and children hid fearing for their lives, texting their parents that they loved them. In the aftermath at Spanish Fork High, they held a student roll call with armed officers on the bleachers and parents anxiously waiting to hear their child’s name and embrace them. Here is a harrowing video of the scene. Thankfully, these were all hoax calls and the students were safely accounted for. However, parents, students, teachers, and officers alike were shaken. There are ongoing investigations into the source of the calls, but we do know they have been traced to outside the US. Whether a cybercrime group, a nation-state actor, or simply an evil individual, this is a disgusting act of terrorism exploiting the grief from a real shooting.

When I heard about the content of the calls, it seemed very likely they could have been AI-generated. Would another country hire child or teenage actors to read scripts in convincing accents, with the custom names of dozens of schools? Take my opinion with a grain of salt, but I doubt it. Rather, I have seen enough deepfake voice videos on YouTube to know that voice generation technology may currently be capable of producing these kinds of recordings. The impressive power of the startup ElevenLabs has been unveiled publicly, but what might be possible in the private, well-funded labs of nation-states, for example? I want to clarify that I’m not claiming that these calls were, in fact, AI-generated or that any company or country was involved. There are ways to conduct attacks like this without AI (voice modulation, for instance), but the key point is that their scale is limited by capital and willing workers. I am instead claiming that the technologies we are seeing unfold today make attacks like this much more feasible at scale, increasing potential terrorist capabilities at every socioeconomic level.

Where might this go next? I wrote out a list of a few terrifying possibilities, but after editing, I’ve decided to cut them. I hesitate to even put ideas like them on the internet. I’m sure you can think of sufficiently chilling examples yourself; there are plenty of ways voice generation and mimicry could cause serious problems in society. Let it be an AI ethics exercise left for the reader, I suppose.

My purpose in writing this has been to mention a specific, real instance of cyber-terrorism that may have been enhanced, or even made possible, by AI. It has affected thousands of people, and it’s possible that more attacks of this kind lie ahead. Whether or not AI was involved in this case, the “AI arms race,” so to speak, has made me and many people I’ve talked to feel simultaneously excited and horrified, like we are careening into some unknown future, either utopian or catastrophic, and somehow maybe both. Call me a drama queen, I know. I get that I may be over-internalizing the current hype, but from all my understanding, I certainly would not bet against AI in the long term. And if I take the bet on AI’s capability to do good, I must also accept its intimately linked capability to do evil. It will take monumental effort to untangle the two, then empower the good and limit the bad. I’m no Luddite, and I think this can be done, but our chance of success seems much lower if we rush headlong into a world we don’t have time to understand until it’s too late.

~ Andrew