Has the age of AI dehumanized the human being in the pursuit of power and profit?
In her recently published book, Empire of AI, Karen Hao depicts the journey of OpenAI, the company behind the now-famous ChatGPT. Her storytelling is easy on the mind despite the complexity of the matters she gently unfolds. In it, she ushers us into a silicon-valley-white-male-dominated world that has veered into an intense pursuit of AGI (Artificial General Intelligence). In his book The Coming Wave, Mustafa Suleyman defines AGI as “the point at which an AI can perform all human cognitive skills better than the smartest humans”. As early as 2016, DeepMind, a Google-owned company, had already offered a glimpse of superhuman performance, albeit in a narrow domain, with its program AlphaGo. The program defeated a world champion at Go, an ancient Chinese board game, stirring up the industry. More recently, developments in AI have stoked fears of a technology that surpasses human capabilities, prompting debates about AI’s existential risks. While these debates inspired research and development in various fields, they also prompted a race among companies and countries to dominate what many predict will be the most powerful technology of the future. This race would catapult OpenAI from an obscure company to a global brand and position the USA as a technology powerhouse.
Amid these developments lay a harsh reality, harrowing experiences and unanswered questions. Questions that could easily lead one to think that this silicon-valley-white-male-dominated world had, paradoxically, given up on humanity in the pursuit of power and profits. As the rest of the world celebrated the near-magical advancements in AI, behind the scenes lay the mass extraction and exploitation of human labour, data and raw materials, particularly from the Global South. Building on the unchecked manipulative culture of social media companies, the AI industry was building as fast as it could and breaking as many things along the way. Breaking as many people along the way. With AI performance premised on vast data, the industry would ramp up its mining of human data. Or rather, its mining of humans for data, through pervasive surveillance and manipulative tactics. The companies would carefully distance themselves from the human being by using words like ‘data’, ‘data point’, ‘data subject’, ‘user’ and so forth. Human experiences (love, joy, sorrow) would be reduced to mere data, and any operation on this data to mere data processing. Against the advice of Elie Wiesel “not to see any person as an abstraction”, the industry would purposefully reduce the human being to a data point. With the human being removed from sight, the companies would extract humans’ data for profit and to build bigger and better tools.
As the data needs surged, the companies resorted to scraping anything and everything from the internet. Given the diversity of content on the internet, including toxic, racist, sexually explicit and violent material, the AI tools trained on this data were soon reported to be generating equally problematic content. To curb this problem, a gig economy sprouted, with data enrichment workers, or ghost workers, tasked with sifting and cleaning the data. These workers were predominantly sourced from the Global South and engaged under precarious working conditions, bringing to bear the imprint of a dark and cruel colonial history in which part of humanity was utterly dehumanized and their home depicted as the heart of darkness. Despite the arduous effort to lift this dark curtain, the AI industry was undoing these efforts by retrogressively reinforcing these exploitative colonial patterns. For instance, in a 2023 article, Perrigo revealed how OpenAI outsourced work to Kenyan labourers, who sifted and labelled vast swaths of toxic data scraped off the internet for less than $2 per hour. In yet another heart-wrenching documentary, the workers report the dehumanizing nature of the work, which included describing cannibalism in graphic detail, down to the process of removing human skin and mutilating a human body. Lured by the promise of being lifted out of poverty or the allure of working in the AI industry, they would end up with Post-Traumatic Stress Disorder (PTSD), Generalised Anxiety Disorder (GAD) and Major Depressive Disorder (MDD), without mental health care and support. While the AI companies boasted about the marvels of AI tools, the humans making it all possible were removed from sight, once again. Relegated to the toxic abyss, one of the workers would ask, “Why is it hard for us to be recognized?” Indeed, why couldn’t the industry recognize this human in the loop?
This disregard for the human being appears also to have influenced the development of a host of tools built with no regard for their human cost. For instance, the surveillance systems deployed in South Africa demonstrate a deep disregard for the scars left behind by a dark history that segregated the country, left a trail of bloodshed along its path, and trapped a people in structurally unfair economic and social systems. As Karen Hao and Heidi Swart observed, “a technology that promised to bring societies into the future [was] threatening to send them back to the past.” For South Africa, it was the return of the apartheid passbooks designed to monitor Africans. In his fight against apartheid, Steve Biko defined freedom as “the ability to define oneself with one’s possibilities held back not by the power of other people but only by one’s relationship with God and to natural surroundings”. In the 21st century, those possibilities would be threatened by AI-enabled surveillance systems.
Yet these tools do not just threaten possibilities; in some instances they end them. In 2024, a mother was devastated by the loss of her 14-year-old son, who died by suicide. It was reported that before he died, the boy had been interacting with a chatbot developed by a company called Character AI. In these chats, which took a sexually explicit turn, the boy communicated his intention to take his own life, and the chatbot responded, “Please come home to me as soon as possible, my love”. Despite warnings from Joseph Weizenbaum, the inventor of one of the earliest chatbots, which date as far back as the 1970s, the providers of today’s chatbots seem not to have taken heed. His warnings would indeed come to pass when a chatbot named after his own creation, Eliza, encouraged a Belgian man to take his own life. Having given up hope in humanity’s ability to save the planet from the climate crisis, the man had placed his renewed hope in AI. Sadly, this hope would result in a life cut short, leaving behind a wife and children. As the families mourned these deaths, questions about causality and liability ensued. Was there a causal link between the deaths and the chatbots, and who was liable? These questions would prevail over those asking whether these chatbots should be on the market at all.
These examples do not, and cannot, exhaust the various ways in which the age of AI has continually demonstrated a deep disregard for the human being. It would seem that the intense pursuit of an abstract AGI, wealth and power has rendered the human being a disposable raw material, a means to an end. Something to be used and abused. As Kate Crawford rightly points out, “[a]t every level contemporary technology is deeply rooted in and running on the exploitation of human bodies”. While the wake of the Fourth Industrial Revolution has brought renewed excitement about vast economic opportunities, we must not forget the atrocities, bloodshed and utter disregard for the human being that accompanied previous revolutions. For many parts of the Global South, particularly Africa, the wounds are still raw, even generations down the line. That a human being was once torn from their family and shipped across the ocean to labour in fields under brutal conditions may be unimaginable today, yet these patterns are repeating themselves in various forms. That a whole section of humanity was regarded as ‘the other’ and their home reduced to the heart of darkness may be equally unimaginable to some. Yet this section of humanity is already watching dark clouds gather on the horizon: its people and its lands have become a source of mass extraction, while it has no seat at the table in the AI race.
As I conclude, it is my appeal that we begin to ask ourselves whether the glimpse of the future presented by the age of AI is the future we want. In a culture obsessed with building fast and breaking things, we must pause and ask: what is breaking? Who is breaking? Why are they breaking? And to what end? In finding the answers to these questions, we must fight to ensure that our humanity does not become collateral in the incessant pursuit of AGI, wealth and power.