Burning Down The House.
Reality:
One might argue that the potential risks of artificial intelligence (AI) currently dominate the public discourse about it. According to the majority of popular narratives, AI is about to decide humanity’s fate by increasingly taking over people’s tasks and replacing human relationships with autonomous agents for both professional and personal interactions.
The news is dominated by stories ranging from AI girlfriends and boyfriends to virtual influencers on social media and pornographic deepfake images and videos. The AI landscape currently resembles the Wild West – a chaotic and unpredictable environment in which tech giants cut corners and ignore copyright laws and corporate policies in their desperate hunt for new data to train their AI tools. Meanwhile, as Western societies nearly drown in the vague but powerful agency of algorithms and the myths surrounding black boxes, data-poor countries are widely neglected and exploited through data colonialism, resource extraction, bias, and environmental degradation, all to the benefit of data-rich societies.
Authoritarian uses of AI for state propaganda, surveillance, and censorship are certainly not just an afterthought in all of this. These doomsday scenarios are deeply embedded in our imagination of the technology, and they are mirrored in the design of its rules and regulations, for instance, in the various risk levels of the EU AI Act. The pervasive fear of potentially ‘burning down the house’ shapes not only public perception but also the foundational principles guiding the development and deployment of AI.
Check:
As is often the case with significant aspects of human culture, technology is influenced by several factors, including ideologies. Once we strip away these dominant cultural narratives and begin to explain the abstract, formal models of algorithms and the concept of machine learning, we uncover the realities of AI.
The technology has been embedded in our daily tasks even before the mass popularity of generative AI tools, from email spam filters and voice assistants on smartphones to recommendations on streaming platforms and personalization of content on social media.
For years, automation has been part of our communication through automated email responses, chatbots in customer service, and smart devices enabling voice-activated commands. However, as generative AI tools work with vast amounts of existing and sometimes incomplete and imbalanced data to create new media objects, the limitations and inaccuracies of the generated output become apparent. These include shortcomings in applying context and nuance, ethical and moral reasoning, emotional intelligence, sociocultural sensitivity, and creative and abstract thinking.
To progress beyond using AI for mere automation and ordinary, mundane, and conventional tasks, society needs to democratize the technology, free from ideologies or agendas imposed by others. This means involving everyone directly or indirectly affected by new AI developments and regulations, including teachers, students, nurses, social workers, journalists, human rights advocates, and environmental activists, to name a few.
Industries must critically reflect on their use of AI and adhere to the highest ethical standards. That way, we might save the house from burning down. Without such collaborative efforts, however, we still risk losing things in the fire – things we will never get back.
Written by: Dr. Christian Stiegler