Why Data Center Opposition Is Getting Violent

A spate of headline-grabbing attacks motivated by anxiety over artificial intelligence has rattled nerves across the U.S.
On Friday, I wrote a story about whether developers should be worried about violence after a shooting in Indiana targeted a city councilman who had voted in favor of a local data center. Almost as soon as the story published, news broke that an attacker had attempted to firebomb OpenAI CEO Sam Altman’s house. On Monday, the Justice Department filed charges against a 20-year-old from Texas for allegedly throwing a Molotov cocktail at the AI executive’s house. The Houston Chronicle reported that the individual charged had a Substack where they posted several anti-AI screeds; while I have reviewed the blog and can verify it exists, I cannot confirm the author’s connection to the individual charged.
As if that wasn’t enough, just days after the alleged firebombing, two people shot at Altman’s house.
To attempt to make sense of such chaotic brutality, I spoke with Mauro Lubrano, a lecturer at the University of Bath in the United Kingdom and author of the new book Stop the Machines: The Rise of Anti-Tech Extremism. Lubrano has for much of his career studied the rise of a global decentralized movement against tech infrastructure, including energy and transportation systems. Last year, for example, he published a detailed examination of the spate of attacks against Tesla vehicles, dealerships, and factories, calling them “insurrectionary anarchism” rooted in “anti-tech extremism” that “spans multiple ideologies — from eco-extremism to eco-fascism.”
Lubrano and I discussed how a prevailing pessimism about the future, combined with AI acceleration and climate anxiety, is making people more likely to launch physical attacks on devices representing a perceived techno-apocalypse. Lubrano said we should expect more people to attack things linked to electricity itself, and that the solution to the violence is not eco-modernism or optimistic thinking, but rather society finally working through the hard questions raised by AI, climate change, economic inequality, and the other ills vexing so many today.
The following conversation was lightly edited and condensed for clarity.
We’ve seen these movements against tech infrastructure — attacks, threats — for a while. The concept goes back a long time. For a lot of folks in the U.S., there are analogues here, ranging from the assassination of the UnitedHealthcare CEO to ecoterrorism attacks on pipelines and other forms of energy infrastructure. How would you characterize the forces driving these recent attacks on executives and politicians supporting AI data centers?
When we look at anti-technology violence, we tend to see two main patterns of violence: attacks on tech executives, personalities, and so on; and attacks on critical infrastructure. This is related to a worldview that technology is not a collection of individual devices, but part of an interconnected system. Some anti-tech extremists will refer to the “mega-machine,” one that has three main manifestations. There’s an ideological one — the general idea that progress is inherently good. There’s the material manifestation, which is the technologies we interact with every day. And there’s the human component. People become cogs. So by targeting cogs in the machine, you contribute to the collapse of the machine itself.
There’s a propaganda element to all of this, too: targeting individuals who for one reason or another are prominent sends shockwaves through the tech community, to make some people change their minds or join the anti-tech fight, or to simply deter people from pursuing research on technology.
Then there’s also critical infrastructure. It comes back to this vision of the mega-machine, where instead of targeting individual technologies you target those critical for the machine to function. They want to strike those first because they will create a domino effect, where the damage cascades to all the other technologies and brings about the collapse of the system. You will find the attacks tend to cluster around specific targets.
How do you define technology here? Do you mean any kind of tech application? I’m hearing what you’re saying and thinking this may apply to more than AI.
Oh, of course. It’s not just AI. When these people think of technology, they are not just thinking of devices but also of know-how, the ideology of progress, the social forces shaping how society works and how labor is organized. Technology is a complex entity, in a way.
In the early 2010s, for example, you saw attacks on facilities after the Fukushima Daiichi disaster. More recently, you had attacks on companies making semiconductors and microchips, so if you take out microchips you cripple the system. And data centers have been discussed for quite some time — I wouldn’t be surprised if we see something happen there, as well. It’s about identifying technologies that all other tech depends on.
There’s an argument some of them make that there’s only one technology all the others depend on, which is electricity. That’s why we’ve seen attacks on power plants, on different targets related to power.
Are you speaking about organized groups? Discussions and forums? I’m sure you’re referencing people you know of, but help us get a better understanding.
When we look at the violent side of the coin we need to acknowledge first that these networks, these movements, reflect trends we’ve seen in political violence over the last few decades, trends that show us we’re in a post-organizational era of political violence. We have names, we have acronyms, but these names are not as important as they used to be. These are decentralized networks, often leaderless, that operate without solid hierarchies or chains of control. We’re not talking about organizations like Al-Qaeda or the Irish Republican Army. We’re talking about networks in which militants often do not know each other because they interact online.
One of the networks that has been involved in these kinds of attacks is the Informal Anarchist Federation. It formed in 2003 in Italy and became a global entity around 2011. There’s the Conspiracy of Fire Nuclei, which emerged in Greece and then became international. And then there’s a series of ad hoc groups that have emerged over the decades, groups sometimes known only because they release a communique after an attack. Like Vulkan Group, which has carried out a series of attacks on Tesla factories in Germany. Or Individualists Tending to the Wild.
Affiliation with a network is not motivated by gaining material support or leadership. It’s almost an identity factor, because when these individuals carry out attacks on their own, they don’t rely on existing networks for support. They might also only be around for one or two attacks, because it’s not the group that matters — it’s the network.
Is it just the rise of modern technology driving this violence? Are there other factors at play inciting events, creating this current wave of attacks?
One of the remarkable qualities of anti-tech extremism is that it’s quite flexible. The way this decentralized system works, especially on the anarchist or eco-extremist side, is that one cell will carry out an attack and then, in a communique published online, make a call for similar attacks on similar targets. Whether or not attacks occur is up to others in the network. If a campaign is considered not really appealing, it might not take place. If instead it’s deemed appealing, you’ll see more attacks.
Last year there was a campaign a French group started called Welcome Spring, Burn a Tesla, which resulted in a lot of Tesla dealerships across Europe being torched. There was some confusion because there was also a campaign against Elon Musk and Tesla, but that one wasn’t carried out by people motivated by anti-tech ideology, but rather by Musk’s role in the U.S. government.
There can also be things people say that incite. In this case, there was an interview recently where Sam Altman basically said if AI is going to steal all the jobs, then maybe those jobs weren’t “real” in the first place. That type of statement is likely to make a few people annoyed. It’s hard to predict what type of development might serve as a catalyst for violence.
I’m struck by the way you’re describing this movement and the rhetoric and signals. I think about Alex Jones and, for example, the idea that 5G is going to brainwash people on behalf of globalists. Do you see anything in global politics providing kindling to this fire?
This is an interesting question, because conspiracy thinking is widespread amongst these groups — the idea that there’s this obscure force at work determining outcomes. But on the other hand, it depends. In certain groups, there’s such a rejection of anything conventional that you’d find disagreement between those people and the political figures. In others, you might argue that the rumors influencers or politicians spread about COVID vaccines or 5G do resonate. For example, I don’t see anarchists paying attention to what a politician says, because the politician is part of the problem to begin with.
What can be done to counterbalance this? Is there an oppositional force against this rising tide of anti-tech violence? I’ve been stunned to see the absence of any widespread outrage online at what’s transpired so far. Almost all the commentary has been “good, I’m glad this is happening.”
I’m not surprised you’re saying this about the commentary. I’ve been researching violence for years now, but this is the first time I’ve seen the narratives of extremists reflecting some objective concerns amongst people. It doesn’t mean all those other people are participating in the violence themselves, but concerns about AI are real. People are afraid and scared of these developments they don’t understand. But what they do understand is that it’ll have impacts on their lives, to the extent they’re able to comprehend it.
I think demonizing these concerns driving the violence would be a very foolish thing to do. It’ll confirm narratives of surveillance and control.
Right. I mean, some of these are valid concerns. Water, electricity, job loss, surveillance. All of that. But if demonizing this isn’t the right call, what can be done?
Short term, don’t securitize these concerns, but do something to limit the violent manifestations. Most of the solutions will be long term. That’s not what people want. People want solutions with immediate effect.
You can divide the solutions into two groups. The first one is that stakeholders and those who develop technologies have to be responsibilized. Going back to that Altman interview, those kinds of comments are not doing us a favor in trying to solve the violence — not to mention other stakeholders can be even more incendiary. You can also limit the problem in how the technologies are used. If we see AI used to monitor people at protests and demonstrations, or to acquire targets and execute attacks in warfare, it can only get worse from here. These applications of AI don’t do us a favor.
Then on a philosophical level, we all need to change the way we relate to technology. We need to go from a position where we think, “What does this allow me to do?” We need to instead think, “Within those activities, let’s select those that will further our connections with one another and with nature.”
What about eco-modernism? Techno-optimism? Are those ideologies solutions or antidotes? Or are they inadequate to address the sheer degree of pessimism and anxiety driving this violence?
From what I can see, doomerism and pessimism are now so widespread that I don’t think those ideologies can work. A lot of people in younger generations believe we are doomed. They believe climate change is going to ruin our lives. There are wars, geopolitical conflicts. We’re stuck with dystopian visions of the future. This isn’t confined to anti-tech stuff, so optimism has very limited effects.
What gives you hope?
That’s funny because I’m working on a project that concludes there’s no hope.
I didn’t think that was going to be a hard question.
There’s a growing acknowledgement that people may be too dependent on technology, and an awareness that AI has tremendous environmental impacts. Hopefully we’ll manage to be less dependent on technology and more conscious of what it’s doing to us.
Acknowledgement is where you need to start. That’s the little hope I have.
