When the wicked rule, the people groan.
Where there is no vision, the people perish. …
– Book of Proverbs 29:2 and 18 (written before 700 BCE)
In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all.
– Karl Marx, The Communist Manifesto (1848)
I have decided to use neither footnotes in this work nor page numbers for works cited. The exponential growth of AI-led publications, largely the effect of platformisation and overproduction, not only leads to quick fashions and diminished reader attention but also enforces changes in the style of citation. No reasonably full ‘coverage’ of the hundreds of thousands of items on AI is possible. The pages for the citations can easily be found by those who really want to check them. I offer only a very partial list of items I found accessible and most useful; I have no pretence to ‘coverage’ in the academic sense. The translations from other languages where a translator is not mentioned are mine.
I’m much indebted (in chronological order) to the work and critiques of Hito Steyerl, the French Terra-HN site, and the critical comments of Patricia MacManus, Ginevra Petroni, and Lazar Atanaskov.
0. Introductory
I am afraid Artificial Intelligence powers (hereinafter AI) will be a key tool in the physical death or profound lesion of millions of people (for the clearest case, see Section 3 below on war) and in an immense dumbing down of all our public life, including schooling. AI is hyped beyond recognition, though it still has significant problems, as Gary Marcus argues; yet its possible massive introduction within military, political, and economic suppressions and surveillance may be a major danger. AI’s huge potentialities may be useful in already normative domains such as the natural sciences, but they may also lead to the collapse of all sense of history and culture – though not in the silly sense of a superior machine intelligence to either save or damn us.
I’m not an AI expert and have had to make a forced march through the avalanche of facts and writings about it. I’m neither a fan nor an enemy of this real if ambiguous novum; I’m a concerned citizen and cohabitant, understanding as I can the contours of ‘the giant mass to come’ (Shakespeare). However, I’ve been struck by the frequent lack of context for the AI descending upon us, especially in the obedient media. I propose to talk about some root historical, societal, cultural, and political contexts and fallouts of approaching AI.
1. On the Road to AI
There’s little doubt the irruption of AI is one of the two or three major events of 2022-24, comparable only to the advent of hot wars of more than regional significance and to the refusal to face capitalist climate destruction while redeploying profitable finances to massive rearmament. Hundreds of billions of US dollars have been poured into AI research: in 2023, funding for ‘generative AI’ surged nearly eightfold over 2022, to reach $25.2 billion. ‘Artificial intelligence’ is already present in everyday human affairs, for example in all our emails, and it will no doubt increase in the coming years. Yet survey data from 2023 showed widespread unease: Ipsos found 52% of respondents nervous about AI products and services, and Pew found 52% more concerned than excited about AI. So, what is AI’s determining context?
The world in which we live is – as I have argued in ‘Words and Lesions’ – determined by ongoing violent lesions of billions, top-down events initiated by the ruling classes in their own interests. They function in vicious feedback with the new hegemony’s cognitive degradation, which is easily channelled into nationalism, racism, and finally fascism. Oxfam reported in January 2024 that, over the previous three years, the richest five men in the world had doubled their fortunes. During the same period, almost five billion people the world over became poorer: hardship and hunger determine their daily reality. A huge concentration of global corporate and monopoly power has taken hold, squeezing workers, dodging taxes, privatising all levels of the state, and foolishly spurring climate breakdown. Of the world capital invested in stock-market shares by 2023, 80% was controlled by 2% of holders. Plainly, the big fish have eaten the little fish and become ever more uncontrollable sharks. They are both an economic power and a Siamese twin controlling state power, well on the way to a new rentier or caste system. By the inescapable logic of violence, economic centralisation must be guaranteed by further political and military centralisation, with a hot World War looming.
In the ‘globalisation’ epoch, say 1973-2008, the production of physical commodities and the exploitation of the labour force grew by leaps and bounds. The psychophysically fragmented workers became, together with the resentfully sinking petty bourgeoisie, the main force co-optable by extreme right-wing organisations, which exercised, where possible, a ‘soft’ dictatorship – but liable whenever necessary, from Pinochet to Putin and Trump, to turn very hard. The cultural and highly political aspect of the new upper-class dictatorship is a colonisation of minds by the overwhelming production of fake needs and consumption, an unending deification of commodity fetishism of low use-value, and ideologies of hatred against the stranger. ‘This leads to a de-cognitising of billions of people, very often led and pushed into mindless (demented) reactions. Aggressive reterritorialisation desperately reacts against the vertiginous and chaotic turn of their deterritorialisation….’ (Berardi Bifo, Breathing) It is at work in the punctilious ostracising of the Left’s ideas of justice, from the EU banks’ squelching of Syriza in Greece to the dethroning of Corbyn in the UK Labour Party.
A deeper process at work here is that the composition of capital relies increasingly on cognitive and semiotic work. The cognitive workers, embedded or free-floating intellectuals of various kinds, had, from the end of the 19th century, an impelling function in the accumulation of capital. However, after the containment and then regress of the Russian 1917 revolution and the following ones, this ‘cognitive proletariat’ (Berardi Bifo, Ultimi) has mostly failed to organise for its own and general human interests. These working people grew up during the full defeat of even defensive trade-unionism, with privatisation of the public sphere – e.g. health services – and shameless expansion of precarious work with no existential guarantees. One way to carry on working was to meld innovation with a new Latin to keep the masses ignorant.
An important, as it were genetic, step toward AI was the development of the internet into Web 2.0 (a term invented in 2003). Web 2.0 is taken to mean a system characterised by applications that permit a high level of site-user interaction, by blogs, forums, chats, and services such as Wikipedia, YouTube, Facebook, Gmail, WordPress, Tripadvisor, and so on. I’m in sympathy with the potentialities for personal and democratic expression that have been opened up in comparison to the static Web up to the 1990s, in which the user could only navigate between pages and avail herself of email and search engines. The main novum here was twofold. Not only did the data volume, variety, and velocity of production increase and become more accessible to anybody with a computer, where not expressly censored (as in work for the military). Web 2.0 users also functioned as both message senders and receivers, approaching a construction of societal sense similar to face-to-face dialogue. A potential democracy of users did and does exist; in part, it is represented by the collective projects of Wikipedia, by petitions and viral hashtags, or by millions of free music pieces now present on platforms. Also potentially enriching was a passage from mainly verbal to multimodal signs, e.g. emojis, GIFs, photographs, cartoons, movies, music, voices, and colours.
However, a possible cognitive and art-loving democracy was torpedoed by the new gatekeepers based on the hegemony of financialisation, for which Google could serve as the key example; its psychological companion and obverse is infotainment, me-only petty showmanship and group hatred. There is a terrible degeneration from the fair and clear, often pointed debates within the rising bourgeois class to capitalocene ‘communication’, mainly used for obfuscation and often for discrimination and segregation of the poor and powerless. The organisational forms used in digital communication – social media platforms, messaging apps, video conferencing tools, etc. – were slanted toward idiotic brevity in the form of brief reactions (likes, emojis) where thoughtfulness was sacrificed to rapid interaction. For example, Facebook supplied only ‘information’ corresponding to what its boss-owners and algorithms guessed to be the proper majority taste, opinions, and values. This created a closed space institutionalising a lack of fact-checking; it also led to the present flood of ‘fake news’ and much hysteria and psychic lesion among users. Internet work is nowadays as a rule done on huge capitalist digital platforms, mainly US (Facebook, Instagram, Messenger, Twitch, LinkedIn…) or Chinese (TikTok, WeChat…). Overt ideologies, which could be checked as to believability, gave way to propaganda systems built into the architecture of platforms, ruling out strategic diversity and fostering hysterical personal power affirmation as well as conformity with the ruling interests of big corporations and states, often in aggressive and brutal pseudo-dialogues (see Chun).
Thus, alongside checkable mini-bits of information, Web 2.0 and most ‘social media’ are often used to harass people and inundate media with spam: not so much in-forming as con-forming. The infosphere of decaying capitalism has also led to scandalous interferences in many persons’ private concerns and to the rise of a new power of self-proclaimed media demagogues. Campaigns made up of bots, fakery, and trolls can now be coordinated by small groups of the rich and powerful classes to give the illusion of large-scale consensus. Some ruling powers use political bots to silence opponents, push official state messaging, and sway the vote during elections, while all of them defame critics. The new ‘normality’ grew largely manipulative and discriminatory (see Baudot; Woolley and Howard). When Google monitors my Web searches, my email, and my location, that makes for better predictions of what I might be pressured into paying for. However, as a big player on the stock-market, it will have quite different aims from me or from some federated, publicly run set of services that could reach data-sharing agreements free from monitoring by intelligence agencies. Quite analogously to the rise of centralising politics and economics in states and corporations, the rise of huge international web firms on the lines of Amazon and Twitter meant new regulation and censorship. The advent of AI, whose use is often said to herald Web 3.0, marks the present phase.
Hito Steyerl points out that all such uses ‘normalize a siloed production environment in which users constantly have to pay rent to some cloud system, not only to be able to perform but even to access the tools and results of their own labour’. And she cites Dwayne Monroe’s naming of this oligopoly (also an ideologically-closed monopoly) as a ‘super rentier structure’, in which digital corporations privatise users’ data and sell the products back to them: ‘The tech industry has hijacked a variety of commons and then rent us access to what should be open’ (‘Mean Images’). In other words: the content of personal computers is by now, as it were, pre-emptively hijacked, and if you discontinued using Microsoft, your work would (unless copied before discontinuation) be accessible to you, its author, only upon payment – a crass form of platform rentier feudalism, threatening a new debt slavery. Steyerl is thus speaking about the new dominant of Big Data and ‘raw’ AI, to which I now turn.
2. How (Not) To Use AI: ‘Raw AI’
2.1. Karl Marx defined the machine as ‘a means for producing surplus-value’, and, already in The Communist Manifesto, presciently observed that the capitalists’ extensive use of machinery turns a worker into ‘an appendage of the machine’; he kept following machine production as a new thing under the sun. In his Notebook VII from 1858 he judged that ‘direct labour and its quantity disappear as the determinant principle of production – of creating use values – and is reduced both quantitatively, to a smaller proportion, and qualitatively, as an indispensable but subordinate moment, compared to general scientific labour, technological application of natural sciences, on one side, and to the general productive force arising from social combination [Gliederung] in total production on the other side’. He optimistically concluded that such huge new productive forces meant that exploitation of human labour – and class society in general – was no longer unavoidable.
The reverse of any optimism is, as the great pioneer of cybernetics Norbert Wiener warned, that there are problems ‘caused by the simultaneous action of the machine and the human being in a joint enterprise’; they include both time horizons (people operate slower) and accountability (the creation of less safe systems). More complex machines ‘can and do transcend some of the limitations of their designers, and in doing so they may be more effective and dangerous’ (‘Some Moral and Technical Consequences of Automation’). He called these problems moral, but they are both pragmatic and epistemological: ‘The result of a programming technique of automatization is to remove from the mind of the designer and operator an effective understanding of many of the stages by which the machine comes to its conclusions and of what the real tactical intentions of many of its operations may be’ (ibidem; see also his ‘Men, Machines, and the World About’).
This problem is growing acute and decisive within AI. Let me begin by considering the usage of its strategic unit: algorithms.
An algorithm in digital programming has been explained as the description of the method operating on data and computational structures by which a task is to be accomplished by means of sequences of ordered steps or instructions. The effect of such code sequences is to treat, classify, and analyse the exponentially growing digital data. An algorithm has an autonomous existence independent of its embodiment in a particular programming language for a particular machine architecture. It can vary in complexity from simple rules in natural language to the most complex mathematical formulae involving all kinds of variables (I have adapted this definition from Goffey and Terranova). Algorithms work by reproducing the statistically dominant patterns of understanding and behaviour in politics and economics.
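To make this definition concrete, here is a minimal illustration of my own (not Goffey’s or Terranova’s): the classic binary search, a short sequence of ordered steps for locating a value in sorted data, written here in Python though the algorithm itself is indifferent to the language or machine that embodies it.

```python
def binary_search(sorted_data, target):
    """Locate target in sorted_data by repeatedly halving the search interval."""
    low, high = 0, len(sorted_data) - 1
    while low <= high:
        mid = (low + high) // 2           # step 1: pick the midpoint
        if sorted_data[mid] == target:    # step 2: compare
            return mid                    # task accomplished
        elif sorted_data[mid] < target:   # step 3: discard the half that
            low = mid + 1                 #         cannot contain the target
        else:
            high = mid - 1
    return -1                             # target absent from the data

# The same ordered steps, on any hardware and in any language, yield the same result.
print(binary_search([2, 3, 5, 7, 11, 13], 7))  # -> 3
```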
Two central problems here are expropriation of earlier authors and criteria for configuring algorithms. As to expropriation, AI chatbots are prone to falsehoods and lack of attribution. Thus OpenAI – backed by Microsoft money and computing power – is being sued for ingesting authored writings to train its chatbots without permission or compensation for the original authors. Many other companies have been challenged by writers, visual artists, music producers, and other creators who see ‘generative’ (?) AI companies profiting from confiscation (see O’Brien). As Marx remarked about mass land expropriation from peasants, this was a ‘primitive fact of conquest under the cloak of “Natural Right”’, exactly as claimed by big AI today.
As to the criteria, we know far too little about the operations used in configuring the algorithms needed to process Big Data for machine learning (ML), most clearly in the classifying system chosen – or in general about the ‘social life of standards’ (Graham et al.), including the statistics of ethnic, gender, and other discrimination tacitly involved. Terranova rightly highlights three points. First, algorithms are by now central to the hugely spreading ‘information and communication technologies, stretching from production to circulation, from industrial logistics to financial speculation, from urban planning and design to social communication’. For example, most users of the Internet are daily subjected to algorithms: Google’s PageRank sorts the results of our search queries, while Facebook’s EdgeRank automatically decides in which order we should get our news on our feed. These and many other less known algorithms modulate our relationship with data and each other.
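Google’s production systems are proprietary and vastly more complex, but the published textbook core of PageRank – a page matters if pages that matter link to it – can be sketched in a few lines; the four-page ‘web’ below is invented purely for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: rank flows along links; the damping factor models a
    reader who occasionally jumps to a random page instead of clicking."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented example: three pages link to 'hub', so 'hub' ranks highest.
web = {'a': ['hub'], 'b': ['hub'], 'c': ['hub', 'a'], 'hub': ['a']}
for page, score in sorted(pagerank(web).items(), key=lambda item: -item[1]):
    print(page, round(score, 3))
```

The point of the sketch is how unremarkable the mechanism is: a handful of arithmetic steps, iterated, quietly decides what billions of users see first.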
Second,
algorithms can exist only as part of assemblages that include hardware, data, data structures such as lists, databases, memory, but also human behaviour … on its outside. Furthermore, as contemporary algorithms become increasingly exposed to larger and larger data sets (and in general to a growing entropy in … Big Data), they are becoming something more than mere sets of instructions … infinite amounts of information interfere with and re-program algorithmic procedures … and data produce alien rules. It seems clear … that algorithms are neither homogeneous … nor do they guarantee the infallible execution of automated order and control. (Terranova)
Third, in capitalism algorithms are financially valuable only inasmuch as they can ‘convert … knowledge into exchange-value – monetization – and its exponentially increasing accumulation’ – hence the titanic oligopolies on the Internet. Insofar as they ‘constitute fixed capital, algorithms appear “as a presupposition against which the value-creating power of the individual labour capacity is an infinitesimal, vanishing magnitude”’ (Terranova; the internal quote is from Marx’s ‘Fragment on Machines’ in the Grundrisse). To the contrary, this method of extracting surplus from activities not heretofore understood as labour might possibly – and most usefully – be applied to turn algorithms into use-value:
[F]eeding populations, constructing shelter and adequate housing, learning and researching, caring for the children, the sick and the elderly requires the mobilization of social invention and cooperation. The many … [should] redefine the meaning of what is necessary and valuable, while inventing new ways of achieving it. This corresponds in a way to the notion of ‘commonfare’… [implying] the socialization of investment and money and … modes of management and organisation which allow for an authentic democratic reappropriation of the institutions of Welfare … and the ecologic re-structuring of our systems of production. (Terranova)
2.2. I propose now to introduce the concept of ‘raw AI’ for the ensemble of AI algorithms used without added precautions to their operating mode. Centrally, raw AI doesn’t relate to any human or natural world but only to its own sources, the Big Data of words (and then of images, whose stylisation doesn’t radically change this embeddedness). Such AI is constitutively utterly indifferent to any destruction it might inflict on the world, small or big, because for it the world doesn’t exist.
AI possesses predefined parameters necessary to undertake any action. But it is ‘raw’ in comparison with a state where all such parameters would mandatorily include something analogous to Isaac Asimov’s First Law of – alas fictional – Robotics from 1942, a time of antifascist World War: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm’.
There are abundant examples of AI systems exhibiting aberrant behaviours. A neat one is that Google’s ‘AI Overviews’ feature had to be revamped – we don’t know how – after counselling users to put glue on pizza and to eat rocks, possibly from ‘data’ in a satirical website (Rosalsky). Perhaps we might set aside purely technical constraints or limitations in the design process or computer tools, assuming that with proper scientific methods, financing, and democratic control they can be rectified. Yet inbuilt technical and cognitive biases which threaten all uses of AI based on large language models (LLMs) include at least the risk of access bias, algorithmic bias, and what I shall call preconception bias (though all of them are sedimented in the algorithm used). ‘Access bias [deals with] who has access to the technologies and tools needed for documenting events. Algorithmic bias [is] embedded in the design of algorithms and their use, often due to already-biased training data; [it] can impact what results users see in a search and the order in which results are presented’: class, gender, and ethnic prejudices have been discovered. In other words, access determines where and from what kind of source the digital information is found, and to my mind it should prominently include political and financial ease vs. blocking of access; algorithmic bias shapes the analysis and filtering of information gathered; and preconception bias leads to systematic errors in the gathering or interpretation of information: ‘This might include, for example, where an attempt is made to encode nuanced human experiences or concepts into computer systems’ (both quotes from McDermott et al.). Even the Google DeepMind AI research group organises its 64-page report on LLM risks (Weidinger et al.) into six areas: ‘Discrimination, Exclusion, and Toxicity’, ‘Information Hazards’, ‘Misinformation Harms’, ‘Malicious Uses’, ‘Human-Computer Interaction Harms’, and ‘Automation, Access, and Environmental Harms’.
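How ‘already-biased training data’ becomes algorithmic bias can be caricatured in a few lines of Python – a deliberately crude sketch with invented numbers, not any real system: a model that merely compresses the statistics of its training set reproduces, as ‘objective’ output, whatever discrimination is sedimented in that set.

```python
import random
random.seed(0)

# Invented, deliberately skewed 'historical' data: past loan decisions in
# which applicants from group B were approved far less often than group A.
training = [('A', random.random() < 0.8) for _ in range(1000)] + \
           [('B', random.random() < 0.3) for _ in range(1000)]

def fit(data):
    """'Learn' one number per group: the historical approval rate. At bottom
    this is all the model does -- compress the statistical regularities of
    its training set, prejudice included."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [approved for g, approved in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit(training)
print(model)  # roughly {'A': 0.8, 'B': 0.3}: the old bias, now machine output
```

Real ML models are vastly more elaborate, but the mechanism of inheritance is the same.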
What is lacking here is a politico-economic stocktaking of access, when AI infrastructure needs huge capital investments that determine where, how, and from what kind of source the digital information is used. Only major corporations and states can afford to deploy such AI models at scale, with the USA and China controlling more than two-thirds of the world’s computing. The subservience to private profit in cahoots with violent power (often by military use) is an omnipresent, usually goal-setting and limiting factor of AI uses. They are changing everyday life in the world context of equilibria between geopolitical power blocs, with mounting social disempowerment of large groups – a majority of people – through automation, reorganisation of urban space, and radicalisation of class hierarchy.
In sum, no data are fully neutral.
Thus, it is mandatory to delve into the anti-human use of AI so far, that is, into how (not) to use AI. There are probably many extrinsic arguments against raw AI – that is, any not strictly monitored and rule-bound use of AI – similar to the cases of nuclear weapons or poison gases, but I shall here deal only with its multiplication of easy killing and maiming in present wars.
3. AI and Warfare (with Surveillance)
Any AI model is programmed under certain assumptions and trained on selected data sets. Most evidently and urgently, this is also true of AI-enabled wargames and decision-support systems tasked with setting up courses of action in wars.
Silvia Federici has rightly stressed how ‘computerization has increased the military capacity of the capitalist class and its surveillance of our work and lives, [so that facing these] developments the benefits we can draw from the use of personal computers pale’. What are we then to say about the huge acceleration and applicability of AI – seemingly a hundred millionfold (Lovely) – in wars and in peacetime surveillance of people? The two are intimately linked, as clearly shown in the Gaza massacres: not by chance but by design, the eager use of AI by all big armies is accelerating on the heels of its much-practised use in internal surveillance. The USA in particular – but perhaps it is simply more open? – is planning to deploy thousands of autonomous weapon systems within two years. The wars in Ukraine and Palestine are ideal testing grounds for armies. As a cynical anonymous article in The Economist saw it, both America and China see AI as the key to military superiority: ‘The results are most visible in the advance of intelligent killing machines’ (‘AI Will Transform’).
The clearest case of AI use for a qualitative jump in mainly civilian killings comes from the present Gaza war, both because of the enormous number of victims and because of the courageous Israeli-cum-Palestinian informants (see Abraham et al.) who published information from inside the Israeli intelligence and army. Targeting in this war seems to be based on facial identifications of Palestinians obtained from a mixture of previous intelligence and extensive use of AI on surveillance data, Google photos, drone footage, intercepted communications, and the monitoring of the movements and behaviour patterns of individuals and large groups. It is probable this has killed a higher proportion of Hamas members than in previous Israeli onslaughts, though US intelligence estimated in February 2024 it was not close to ‘eradicating’ Hamas, while the Israeli military has been warning for months this was unachievable (Salvage). However, this approach has resulted in record Palestinian casualties in Gaza: as of August 16, 2024, ca. 40,000 dead and ca. 92,700 wounded were reported, in all ca. 133,000 people. Many more Palestinians in Gaza are dying of raging illnesses such as hepatitis, of thirst, hunger, and bodily traumas amid bombings and shellings. In early July, a letter to The Lancet from three scientists urged a ‘conservative estimate of four indirect deaths per one direct death reported’, so that it is plausible the number of dead from Israeli actions in Gaza is ca. 200,000 or more, and counting; more than half of the reported deaths are of women and children. We should not forget the civilian and military victims of the Hamas Oct. 7-8 attack in Israel: ca. 1,150 dead, 5,400 wounded, and 251 taken captive; nor the 500 Palestinians killed in the stepped-up racist attacks by Israeli settlers in the occupied West Bank since the war began, with the number of wounded and arrested unknown.
The Israeli army has developed an AI-based program called ‘Lavender’ that quickly generates multiple ‘kill lists’. It was designed to mark as assassination targets all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad, including low-ranking ones. This AI output was accepted as a guide and central factor in the unprecedented bombing of Palestinians in Gaza. Despite awareness that the AI system is in error ‘for approximately 10% of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups or no connection at all’, there was no thorough checking by humans of why Lavender made those choices, nor was ‘the raw intelligence data’ on which they were based (mainly, it seems, inference from contacts with other people) further checked, except for a brief assurance that the target was not female. With callous disregard, the Israeli army systematically attacked the targeted individuals in their family homes at night. Additional automated systems, including one known cynically as ‘Where’s Daddy?’, were used to track the targeted individuals and carry out bombings after they had entered their residences, often around 5AM. The main reason for using unguided missiles, which destroy entire buildings, burying their occupants, was to save on the expensive ‘smart’ bombs.
The article in +972 Magazine claims that ‘for every junior Hamas operative that Lavender marked, it was [as different from past Israeli operations] permissible to kill up to 15 or 20 civilians; [and if] the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander’ – overwhelmingly women, children, and elderly. In what one hopes is an extreme example, the ratio was 1:300: ‘In order to assassinate Ayman Nofal, the commander of Hamas’ Central Gaza Brigade, a source said the army authorized the killing of approximately 300 civilians’, destroying several multi-storey buildings in airstrikes on 17 October 2023. After the first two weeks of the Gaza war, Israeli intelligence checked the accuracy of a random sample of several hundred targets selected by the AI system and found that Lavender’s results were 90% accurate in identifying an individual’s affiliation with Hamas. At that point, the army authorised the sweeping use of the system for several weeks (possibly still ongoing).
In sum, Lavender ‘clocked as many as 37,000 Palestinians as suspected militants’ and 37,000 apartments as possible bombing targets. It is possible that a majority of the 133,000 victims (and counting) resulted from such bombings. If we – modestly – assume that 80,000 Palestinian casualties came from these AI uses, we must first subtract 10% (8,000 people) for errors hitting ‘innocents’; then, dividing the 72,000 victims left into Militant + Family (the latter usually larger than the Western nuclear one of ca. 3-4 people) in a 1:7 ratio, we come to the extremely hypothetical number of 9,000 senior plus junior ‘military’ killed or wounded and 71,000 ‘civilians’ killed or wounded (the 63,000 family members plus the 8,000 hit in error). This doesn’t include cases of senior military where Israeli killings used the ‘1 militant to up to 100 collaterals’ ratio, which would lower the number of militants killed or wounded. Thus, these fantasy calculations (better than having none) indicate the victims were perhaps something like 9,000 Palestinian militants and 124,000 Palestinian civilians (a ratio of ca. 1 to 14). The great majority of these 133,000 deaths or woundings, plus all those dying from bombing-induced devastations and stress, would normally be called war crimes. Even if my calculations were to be halved, this would be the collective punishment of a whole population.
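Set out step by step, these admittedly hypothetical assumptions run:

$$
\begin{aligned}
80{,}000 - \underbrace{10\% \times 80{,}000}_{\text{errors: } 8{,}000} &= 72{,}000 \\
72{,}000 \div (1 + 7) &= 9{,}000 \text{ militants}, \qquad 7 \times 9{,}000 = 63{,}000 \text{ family members} \\
63{,}000 + 8{,}000 &= 71{,}000 \text{ civilians among the assumed } 80{,}000 \\
133{,}000 - 9{,}000 &= 124{,}000 \text{ civilians overall} \quad (\text{ca. } 1:14)
\end{aligned}
$$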
As ‘B., the senior intelligence source, said that in retrospect, he believes this “disproportionate” policy of killing Palestinians in Gaza also endangers Israelis…’ (Abraham et al.). First, the Israeli army assumes there are 30-40,000 militants in Gaza (see Davies et al.), so this humongous destruction may have accounted for 20-30% of them, while the rest are in tunnels or otherwise unfindable. Second, just imagine what the surviving youngsters from Gaza will feel about Israel for the rest of their lives…
We have much less news about AI in the Russo-Ukrainian war but, in October 2023, Ukraine is reported to have made the first battlefield use of lethal autonomous weapons (LAWs) or killer robots; the United States, China, and Israel are developing their own LAWs. Also, the mass use of drones for spotting and attacking targets has been vital to both sides: AI can both pre-plan every tank-size target and do away with the need for a source-to-drone link or ‘kill chain’, so that far larger numbers of low-cost munitions, eventually self-directing swarms, can be used. ‘[AI’s] astonishing feats of object recognition and higher-order problem solving’ (‘AI Will Transform’) are here to stay. It has been reported that the US Defense Department has a new Generative AI Task Force, which is or will soon be the case for most major armed forces.
Warring states are in a spiral of widening mutual killing and destruction, but Israel, the US, and Russia agree on opposing any international law on autonomous weapons or on AI control of nuclear weapons. ‘The paradox is that even as AI gives a clearer sense of the battlefield … there will be less time [for the people who fight it] to stop and think … Armies will fear that if they do not give [the AI models and] advisers a longer leash, they will be defeated by an adversary who does. Faster combat and fewer pauses will make it harder to negotiate truces or halt escalation’. Also, ‘AI-infused’ war favours armies with ‘mass and industrial heft … if software can pick out tens of thousands of targets, armies will need tens of thousands of weapons to strike them … The digital systems that mesh the battlefield together will be fiendishly expensive … [taking] huge investments in cloud servers able to handle secret data’. (‘AI Will Transform’) The Economist’s article concludes:
AI systems told to maximise military advantage will need to be encoded with values and restraints that human commanders take for granted. These include placing an implicit value on human life — how many civilians is it acceptable to kill in pursuing a high-value target? — and avoiding certain destabilising strikes, such as on nuclear early-warning satellites.
If not, all of us are likely to be victims.
AI-based war crimes by all sides, in proportion to their combat strength, are just beginning to be discussed; while a full appraisal of their scale is perhaps not yet possible, the evidence from the mass increase in human suffering points to a qualitative jump due to lax and vengeful use of ‘raw AI’. The term ‘intelligent killing machines’ is misleading; we should rather say ‘mass and instantaneous automatic killing machines’. Of course, as always, tested war technologies and methods from Ukraine and especially Israel will be exported into all new wars.
I must here mention, albeit cursorily, the peacetime surveillance use of AI. This happens largely through facial recognition, which has for years now been the mainstay of mass police interventions against both criminals and protesters (see Aïm and the bibliography in Clarke). Digitalisation has been easily and prominently integrated into State capitalism’s turn towards profoundly liberticidal and privacy-denying global programs of surveillance (Valluy), offering rulers both a cheaper and a less visible deterrent power than police or military patrols. Probably most major cities, which means most people on this globe, are by now covered by AI surveillance, exported worldwide by Chinese and US mega-corporations. ‘Civil’ surveillance and military uses of AI are in a constant feedback loop, held together by the violence of the ruling classes like crazed particles in the Large Hadron Collider, so that it’s difficult to say which is the chicken and which the egg.
The semantic parallel of Big Data to, say, Big Pharma and other billionaire ventures should rip apart the sanctifying awe that covers their regimes of exploitation and manipulation. Big Data is of a piece with universal capillary surveillance, from the outset consubstantial with employing AI. The false consciousness of technology as ‘natural’ was already countered by Henri Lefebvre and Herbert Marcuse in the 1960s, for it ‘shapes the entire universe of discourse and action … swallows up or repels all alternatives’ (Marcuse). The regimes of exploitation transfer profit from extra labour-time to huge programs that extract and accumulate wealth by wholesale expropriation of the valuable assets (labour and biodata) of plebeian participants. But this broad enterprise easily segues from exploitation to manipulation. Marx found the English ‘enclosures’ of common land ‘dripping blood’ inasmuch as they caused huge hordes of landless paupers to die prematurely; these blood-soaked enclosures were both root and emblem of the nascent capitalism. Going the enclosures one better, regimes of manipulation refer to huge programs for influencing the decisions and actions of ordinary people. We are all ‘datafied and assetised’ – literally turned into things and commodities – in the interest of profiteer rulers (see Van Dijck). Politically, this amounts to a new and global colonising enslavement targeting the modern equivalents of both working and middle classes (see Couldry and Mejias). The horizon of the ‘metaverse’, a term appropriated by the likes of Mr. Zuckerberg from dystopian SF writer Neal Stephenson and the Matrix movies, entails that big companies and big States financially subsume the spaces and interactions – national, professional, familial, cultural… – of the users (see Bratton). It is a titanic ‘colonisation of social relationships, based on their expropriation followed by their appropriation … which neutralises all choice arising from exceptional needs, singular desires or non-standard values’ (Balibar).
4. A Conclusion on Capitalist Realism
It might be most useful to situate this incomplete conclusion within Karl Polanyi’s view of the limits to market expansion in what he called the three ‘fictitious commodities’: labour, land (or nature in general), and money. These goods are, of course, real and vital, yet they cannot be commodified fully but only partially and contradictorily, on pain of being destroyed for human use (Polanyi); Streeck found that, in the wake of the excessive commodification of money evident in the 2008 crash, market expansion in these three domains needs to abandon the regime of limitless increase. We must, on the evidence of these last few years, add that limitless commodification of people’s biodata by AI – from facial appearance to reading and interest habits – invents a final and most explosive ‘fictitious commodity’. It brings to an immediately threatening head the impossibility that dominant patterns of capitalist rule, such as far too high energy consumption, could be extended to planetary scale without destroying the planet’s human life. This could be approached under many headings; I have above done so for warfare, and elsewhere for ecology and time horizons. I continue here with what is immediately most horrifying for any thinking human being: cognition, the massive dumbing down in human affairs.
The present AI myth, glorified by all available means and moneys in this unfettered and decaying capitalism, is the cutting edge of a worrisome trend. The 1890-to-1970 epoch was one of great discoveries, from the Theory of Imperialism and Theory of Relativity through Quantum Theory to DNA, not to mention painkillers, anaesthetics, and laser surgery. After it, we mainly see much improved mass killings and surveillance, as well as humongous companies using the humongous quantities of data in their computers to destroy privacy and radical opposition.
For we are amid badly understood but potentially horrendous problems:
There is no known way of preventing Large Language Models from … weaving untruths and absurdities into their output, in ways that can be hard to spot unless one has already done the relevant work oneself. Where people don’t perform this function, or where the rulers are interested in propagating disorientation, the hallucinations will propagate unchecked … Much as physical waste is shipped to the Global South for disposal, digital effluent is being dumped on the global poor: low-quality machine translations of low-quality English language content already dominate the web. This … risks poisoning one of the major wells from which generative AI models have hitherto been drinking, raising the spectre of a degenerative loop … machine learning turning into its opposite. (Lucas)
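Lucas’s ‘degenerative loop’ can be caricatured in a toy simulation – a unigram sampler with invented parameters, nothing like a real LLM: a ‘model’ retrained each generation on its own output loses diversity monotonically, rare ‘phrases’ dropping out and never returning.

```python
import random
random.seed(1)

corpus = list(range(1000))  # stand-in for 1,000 distinct human-written 'phrases'

def retrain_on_own_output(corpus, sample_size=1000, generations=10):
    """Each generation the 'model' trains only on what the previous one
    produced; sampling with replacement lets rare items vanish for good."""
    for generation in range(1, generations + 1):
        corpus = [random.choice(corpus) for _ in range(sample_size)]
        print(f'generation {generation}: {len(set(corpus))} distinct phrases left')
    return corpus

retrain_on_own_output(corpus)  # diversity shrinks generation by generation
```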
As a warning symptom, the book market is already being attacked by auto-generated rubbish: as of February 2024, there is a boom in AI-written e-books on Amazon (AI and Amazon seem to perfectly fit each other).
In Plato’s terms, Big Data would be the doxa, the prevailing and unexamined common sense that Socrates systematically doubted and tore down by pointing out its inherent unexamined presuppositions. As Wendy Chun’s important book puts it, what I here call ‘raw’ AI encodes segregation, eugenics, and identity politics through its default assumptions. The very use of ‘raw’ Big Data and of the correlation method (see section 2.2) that grounds its predictive potential is shaped by the telos of differential targeting to train users into predictable sameness, with strong grouping into ‘agitated clusters of comforting rage’. This use seeks to disrupt the future by making true disruption impossible.
Furthermore, both the education and healthcare systems in the richer countries are nearing collapse, wilfully starved of caring financing in favour of private profits. This provides excellent excuses for introducing new large AI-based shake-ups, which also cut down on employing now officially unnecessary people. These disasters are already upon us – just like indiscriminate war killings and financial dispossessions. I conclude with Steyerl on finances: ‘Private property rights, within digital capitalism and beyond, are relevant only when it comes to rich proprietors. Anyone else can be routinely stolen from’ (‘Mean Images’).
AI is a huge boost to the massive use of intelligence or cognition as violent power against personal freedom. The end-result of AI and Big Data, used by a financial capitalism sapping at the root both normal climate and the peaceful coexistence of States, is a ‘probably unsurvivable result … prescribed as a norm’ (‘Mean Images’). As recounted in detail by Lovely, whom I’m paraphrasing here, Google cofounder Larry Page (worth about US$120 billion) holds that superintelligent AI is ‘just the next step in evolution’, so that efforts to prevent AI-driven extinction and protect human consciousness are sentimental nonsense; former Google DeepMind senior scientist Richard Sutton agrees. Opposed to this crass Social Darwinism are two of the leading AI ‘deep learning’ scientists, Geoffrey Hinton and Yoshua Bengio, whose 2023 position paper warns that ‘no one currently knows how to reliably align AI behavior with complex values’ and points out the risk of ‘an irreversible loss of human control over autonomous AI systems’; they were joined by Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari. In spite of all this uncertainty, AI companies continue racing to make these systems as powerful as they can, cutting corners on safety. Nobody can now do large-scale work upon AI without an investment of millions of dollars that promises billions of profits (and/or a crash). Sam Altman, the cofounder of the leading OpenAI – into which Microsoft has since 2019 invested $13 billion – opined that ‘AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies’.
Yet even the company name here is a typically misleading one, since it hints vaguely at ‘open’ in the sense of ‘free’ or ‘non-paying’ while its reason for existing is to make lots of money, no holds barred. Thus, the diversity and believability or reliability of statements and positions in such AI is imperilled. The algorithms usable for quick profit are often based on existing stereotypes and always on a fixed value system moulded by capitalist circulation.
Lovely’s upshot is that ‘Employers are already using AI to surveil, control, and exploit workers. But the real dream is to cut humans out of the loop’. This apocalyptic fear may be extreme, but there’s little doubt that the near future of AI ‘looks more like intensified racial discrimination in incarceration and loan decisions, the Amazon warehousification [sic] of workplaces, attacks on the working poor, and a further entrenched and enriched techno-elite’.
I do not deny the large possibilities of saving time and energy by applying AI to, say, health sciences, engineering or geosciences – though, even there, the dogma and doxa of profits above all will often stymie mass welfare, as in the case of Covid-19 vaccines largely denied to the poor South, leading to probably over 10 million avoidable deaths.
The existing, mainly raw, AI seems today a dance on the edge of the abyss for civilisation. In a capitalism dispensing ever more with human work and human safeguards, we have very much to fear from it, though gains in profits and power for a few may be immense. As argued in Section 2, raw AI in warfare and culture – the ensemble of algorithms without safety additions to their operating mode – doesn’t relate to any human or natural world but only to its own ‘deeply learned’ sources. Humanity is being subjected to a financialisation of most nooks and crannies of existence, a capture or subjection of human bodies, times, and language. AI is therefore potentially as dangerous as climate destruction and the imperial state wars.
I have always regretted the split between the sciences and humanities and in my work striven to deny it. This is now exacerbated into a split between ‘analogue’ words and ‘digital’ numbers, whereas we need both. AI machines ‘can, at last, interpret natural-language instructions fairly accurately, and fluently turn out text and images in response’ (Lucas). So far, so good. But anything presupposing Other Possible Worlds that is not yet expressly found in the database cannot be understood by AI algorithms; philosopher Bernard Stiegler called this an always mandatory distinction between ‘authentic thinking’ and ‘computational cognitivism’. He rightly feared that the capitalist technocrats, our new barbarian rulers, are leading to a historically unprecedented impoverishment of the human mind, a denial of utopian hope by feudal enclosure of the mind and enslavement. Human intelligence works by feedback with malleable memory and, furthermore, by melding ideation and emotion. None of this is replicable by AI, which can only congeal ideational norms into an eternal present. This misuse of technology is now the spearpoint of the ‘shock and awe’ recipe of a capitalism that no longer fears communism, so well brought to our attention by Naomi Klein. It begins, as does her book, with reality being dumbed down: ‘blank is beautiful’.
This is also – most revealingly – a loss of irony, parody, sarcasm, and satire, and even of any non-AI (that is, more or less value-based) contextual readings: wired-in technology enforces upon the young that discourse must be in bite-sized fragments, the thought-span of a tweet. For example, all literary and cultural studies in the wake of our Great Ancestors, such as Shklovsky, Bakhtin, Benjamin, Brecht, Gramsci, Auerbach, Williams, Jameson, and all their deviant companions or debaters, couldn’t be understood by raw AI. Universities using AI for teaching would abolish all serious cultural studies or humanities, and this wave is already crashing down: I witnessed, by pure chance, the firing of almost 100 teachers of humanities at Brighton University, UK, in May 2023. The most efficient and humanising approach to teaching, the face-to-face interaction of the taught and the teacher, would also be lost.
The physicist and philosopher Ragnar Fjelland has pointed out that human intelligence is partly transmitted in ways that are not taught but are communicated tacitly and cannot be duplicated using algorithms or code: much of our knowledge is ‘tacit’ (also posited by Michael Polanyi and Hubert Dreyfus). I do worry about a ‘digital intelligence’ fenced in to churn out texts and images that enforce a very specific behaviour, fully breaking with incarnated, causal intelligence and its needful time-rhythms. This disincarnation refuses all checks by pragmatic needs and dangers, most of all the Pleasure Principle, including erotics and humanised desire, and it is therefore contrary to the preservation of the species Homo (see Pasquinelli); or, to put it politically, to freedom and solidarity between humans.
This is humanity’s dark horizon. The AI being wrought today is no new saviour but a huge multiplication of powers within the existing inhuman value system, a ready tool of blind ruling classes the world over, primarily leading to material and cognitive destruction. In already operating practice, AI is being massively used against personal freedom. The most important decisions about human life and death are increasingly reached by privileging raw AI input; this increases the prospect of a rogue reality, already upon us in the form of polluted air, water, and climate leading to more and ‘better’ wars, finally out of anybody’s control. It multiplies the forces of a Behemoth fusion of State and monopoly capitalism, best discussed as fascism by Franz Neumann, Herbert Marcuse, and now Alberto Toscano. Already the empowerment of capitalist finances has produced ‘economic stagnation combined with oligarchic redistribution’ (Streeck) – that is, much illegal subterfuge and the stratification of society into a few super-rich and a huge mass of super-poor comparable only to the worst class societies, breeding hunger and war. Here my findings converge with and are strengthened by a properly political assessment of ‘the three coming catastrophes’ being ‘Climate, War, and Metaverse’ (Balibar).
Furthermore, newer research is going beyond the purely random probability (or competitive capitalism) model of Big Data toward a rules-based abstract logical representation of the world. With ongoing high financing by our rulers, in a few years they might get there. If the new rules for modelling the world embedded the capitalocene framework of values and norms into the algorithms, we would get a fascist caste system of new slavery.
A technically similar possibility might be based on cooperative associations of producers without capitalists. Therefore, capitalist realism should be countered by utopian realism, but I cannot pursue that here. However, safe AI operation can only be had by upgrading and applying civic and democratic (citoyen) values, ensconced in permanent safeguards superadded at a constitutive point before its application. It is therefore imperative we should reverse the trend, inbuilt into capitalism from its origins, to plunder the public domain, which in this case means a large mobilisation against using raw AI. Of possibly great importance might be the first major anti-AI strike, by the Writers Guild of America in 2023, lasting 148 days, against the Alliance of Motion Picture and Television Producers, demanding that chatbots not be used to write source material. The WGA succeeded in somewhat reining in AI use but not in banning it; some compensation crumbs may be thrown to WGA members, but film and TV studios can still generate entire shows and casts by AI (see Billet).
I would support the grassroots movement to pause ‘frontier’ AI – the development of the most highly capable general-purpose models – at its current state, so that citizens and governments can develop comprehensive regulation for both present and future, possibly through an International AI Agency as proposed by UN Secretary-General António Guterres. Rigorous pre-deployment risk assessments by companies, like those for dangerous biological research, should be made mandatory. Doing so would restrain tech companies from exacerbating existing harms and introducing new ones: it would be a chance to stop the digital industry from doing irreparable harm (see Kelly). Diametrically opposed to ‘austerity’ applied to people’s use of needed goods, only an austere redirection of AI towards actual citizen survival and enrichment can prevent utter breakdown. This needs legislation and an informed and committed citizenry, both well funded, and it depends upon who holds financial and political power over images and writings. The only way out would be to radically redistribute the power of billionaires and corporations back to ordinary working people and their associations.
Some Works Referred To
The subdivision is cumulative: items from a Section may also be referred to in later Sections.
Sections 0-1
Berardi ‘Bifo’, Franco. Precarious Rhapsody. Minor Compositions, 2009.
— Breathing: Chaos and Poetry. Semiotext(e), 2018.
— Ultimi bagliori del Moderno. Lavoro, tecnica e movimento nel laboratorio di Potere Operaio. Ombre Corte, 2023.
Marcus, Gary. ‘Open AI’s new text-to-video app…’ Bulletin of the Atomic Scientists, 22 February 2024, https://thebulletin.org/2024/02/openais-new-text-to-video-app-is-impressive-at-first-sight-but-those-physics-glitches/ (Accessed 29 February 2024)
Oxfam International. ‘Inequality Inc’. [Report for 2023], 15 January 2024, www.oxfam.org/en/research/inequality-inc (Accessed 1 February 2024)
Steyerl, Hito. ‘Mean Images’. New Left Review 140/141 (2023): 81-97.
Suvin, Darko. ‘Words and Lesions: Epistemological Reflections on Violence, the 1968 Moment, and Revolution…’ Critical Q 62.1 (2020): 83-122.
Woolley, Samuel C., and Philip N. Howard, eds. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press, 2019.
Section 2
Baudot, Pierre-Yves. Gouverner par les données? Pour une sociologie politique du numérique. ENS Éditions, 2023.
Federici, Silvia. Re-enchanting the World: Feminism and the Politics of the Commons. PM Press, 2019.
Goffey, Andrew. ‘Algorithm’. https://sidoli.w.waseda.jp/Goffey_2006_Algorithm.pdf (Accessed 6 July 2024)
Graham, Janice E., Christina Holmes, Fiona McDonald, and Regna Darnell (eds.) The Social Life of Standards: Ethnographic Methods for Local Engagement. UBC Press, 2021.
Marx, Karl. Notebook VII (Feb.-June 1858), https://thenewobjectivity.com/pdf/marx.pdf (Accessed 9 May 2024)
McDermott, Yvonne, Alexa Koenig, and Daragh Murray. ‘Open Source Information’s Blind Spot’. Journal of International Criminal Justice 19 (2021): 85–105.
O’Brien, Matt. ‘Two 80-something Journalists Tried ChatGPT. Then, They Sued to Protect the “Written Word”’. Associated Press, 11 July 2024, https://apnews.com/article/writers-chatgpt-copyright-lawsuit-nick-gage-basbanes-openai-microsoft-9e92d20327c63460209279c1c2e38238 (Accessed 4 September 2024)
Rosalsky, Greg. ‘10 Reasons Why AI May Be Overrated’. Planet Money 6 August 2024. www.npr.org/sections/planet-money/2024/08/06/g-s1-15245/10-reasons-why-ai-may-be-overrated-artificial-intelligence (Accessed 11 August 2024)
Steyerl, Hito. ‘Common Sensing?’ New Left Review 144 (2023), https://newleftreview.org/issues/ii144/articles/hito-steyerl-common-sensing (Accessed 30 January 2024)
Terranova, Tiziana. ‘Red Stack Attack!: Algorithms, Capital and the Automation of the Common’, in #Accelerate: The Accelerationist Reader, ed. Robin Mackay and Armen Avanessian. MIT Press, 2014, pp. 377-97.
Weidinger, Laura, and 22 co-authors. ‘Ethical and Social Risks of Harm from Language Models’. DeepMind, 8 December 2021, https://arxiv.org/pdf/2112.04359 (Accessed 1 March 2024)
Wiener, Norbert. ‘Men, Machines, and the World About’, in Medicine and Science, ed. I. Galdston. New York Academy of Medicine and Science, 1954, pp. 13-28, http://21stcenturywiener.org/wp-content/uploads/2013/11/Men-Machines-and-the-World-About-by-N.-Wiener.pdf (Accessed 13 June 2024)
— ‘Some Moral and Technical Consequences of Automation’. Science 131.3410 (1960): 1355–58, https://nissenbaum.tech.cornell.edu/papers/Wiener.pdf (Accessed 13 June 2024)
Section 3
Abraham, Yuval, and Local Call. ‘“Lavender”: The AI machine directing Israel’s bombing spree in Gaza’. +972 Magazine, 3 April 2024, www.972mag.com/lavender-ai-israeli-army-gaza/ (Accessed 10 April 2024)
‘AI Will Transform the Character of Warfare’. The Economist, 20 June 2024, www.economist.com/leaders/2024/06/20/war-and-ai (Accessed 21 June 2024)
Aïm, Olivier. ‘Surveillance, images numériques et persistances photographiques’. Recueil Alexandries, mai 2024, www.reseau-terra.eu/article1482.html (Accessed 21 May 2024)
Balibar, Étienne. ‘Sur la catastrophe informatique: une fin de l’historicité.’ Les Temps qui restent, no. 1 (2024), https://lestempsquirestent.org/fr/numeros/numero-1/sur-la-catastrophe-informatique-une-fin-de-l-historicite (Accessed 31 July 2024)
Bratton, Benjamin. The Stack: On Software and Sovereignty. MIT Press, 2015.
Clarke, Roger. Dataveillance and Information Privacy Home-Page. (Accessed 6 July 2024)
Couldry, Nick, and Ulises A. Mejias. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press, 2019.
Davies, Harry, Bethan McKernan, and Dan Sabbagh. ‘“The Gospel”: How Israel Uses AI To Select Bombing Targets in Gaza’. The Guardian, 1 December 2023, www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets (Accessed 6 May 2024)
Lovely, Garrison. ‘Can Humanity Survive AI?’ Jacobin, 2 January 2024, https://jacobin.com/2024/01/can-humanity-survive-ai (Accessed 19 April 2024)
Marcuse, Herbert. One-Dimensional Man. Routledge & Kegan Paul, 1964.
Salvage Editorial Collective. ‘From Apartheid to Genocide’. Salvage Perspectives 14 (2024): 2-18, https://salvage.zone/from-apartheid-to-genocide-salvage-perspectives-14/ (Accessed 4 August 2024)
Valluy, Jérôme. Humanité et numérique(s) – De l’histoire de l’informatique en expansion sociétale… au capitalisme de surveillance et d’influence (1890-2023). TERRA-HN-éditions, 2023, www.reseau-terra.eu/article1347 (Accessed 22 February 2024)
Van Dijck, José. ‘Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology’. Surveillance & Society 12.2 (2014): 197-208.
Section 4
Billet, Alexander. ‘In and against the Dream Factory’. Salvage 14 (2024): 177-96.
Chun, Wendy Hui Kyong. Discriminating Data: Correlation, Neighbourhoods, and the New Politics of Recognition. MIT Press, 2021.
Fisher, Mark. Capitalist Realism: Is There No Alternative? Zer0 Books, 2009.
Fjelland, Ragnar. ‘Why General Artificial Intelligence Will Not Be Realized’. www.researchgate.net/publication/342235141_Why_general_artificial_intelligence_will_not_be_realized (Accessed 29 March 2024)
Kelly, Jack. ‘Three Key Misconceptions in the Debate about AI and Existential Risk’. Bulletin of the Atomic Scientists, 15 July 2024, https://thebulletin.org/2024/07/three-key-misconceptions-in-the-debate-about-ai-and-existential-risk/ (Accessed 15 July 2024)
Lucas, Rob. ‘Unlearning Machines’. NLR Sidecar, 2 February 2024, https://newleftreview.org/sidecar/posts/unlearning-machines (Accessed 29 February 2024)
Pasquinelli, Matteo. The Eye of the Master: A Social History of Artificial Intelligence. Verso, 2023.
Polanyi, Karl. The Great Transformation. Beacon Press, 1957 [1944].
Stiegler, Bernard. The Age of Disruption, trans. D. Ross. Polity, 2019.
Streeck, Wolfgang. ‘How Will Capitalism End?’ New Left Review 87 (2014): 35-64.