Beyond GDP: Why Europe's "Restrictive" AI Rules Might Save Us From Ourselves
As tech giants race to dominate AI development, Europe's regulatory approach offers a different vision - one that prioritizes human rights over profit. This article explores how AI concentrates power in the hands of a few companies and why slowing down might be our best defense against digital oligarchy.
Tech giants paint AI as an inevitable race. Question their control and you're labeled anti-innovation. Fight their power, as the EU's AI Act does, and you're called naive. But their rush to AI dominance threatens democracy.
"Tech giants paint AI as an inevitable race. Question their control, you're labeled anti-innovation. Fight their power, like the EU's AI Act does, you're called naive. But their rush to AI dominance threatens democracy."
The Quiet Spread of AI Control
The same companies harvesting our data now control our future through AI. Their systems don't just automate - they judge and decide. Each new algorithm puts more power in fewer hands.
This control grows through surveillance, starting with our cars. Across Europe, insurance companies are rapidly expanding their digital monitoring. According to a 2019 EU insurance review, over 120 firms either use or plan to use IoT devices to track driving[2]. They monitor not just speed, but every aspect of our driving - harsh braking, time of day, road types, even g-forces. Some systems automatically alert emergency services after accidents. It sounds helpful, but there's a darker side. These "black boxes" build detailed behavioral profiles, turning each journey into data points for algorithms[2]. Insurance firms dig even deeper into our digital lives, analyzing how long we spend reading terms and conditions, tracking which directories we browse before buying policies, and monitoring our every move during quote processes[2]. Each click feeds the algorithms that increasingly shape our lives.
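To make the mechanics concrete, here is a minimal sketch of how a telematics "black box" feed might be collapsed into a behavioral risk score. The trip fields, weights, and thresholds are invented for illustration; real insurer models are proprietary and far more elaborate, but the principle of turning journeys into data points for an algorithm is the same.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """One journey as reported by an in-car telematics device (illustrative fields)."""
    distance_km: float
    harsh_braking_events: int   # decelerations above some g-force threshold
    night_fraction: float       # share of the trip driven between 22:00 and 06:00
    speeding_fraction: float    # share of the trip spent above the speed limit

# Hypothetical weights - a real insurer's model would be proprietary.
WEIGHTS = {
    "harsh_braking_per_100km": 2.0,
    "night_fraction": 1.5,
    "speeding_fraction": 3.0,
}

def risk_score(trips: list[Trip]) -> float:
    """Collapse a driver's recorded trips into a single behavioral risk score."""
    total_km = sum(t.distance_km for t in trips) or 1.0
    braking_rate = 100 * sum(t.harsh_braking_events for t in trips) / total_km
    avg_night = sum(t.night_fraction * t.distance_km for t in trips) / total_km
    avg_speeding = sum(t.speeding_fraction * t.distance_km for t in trips) / total_km
    return (WEIGHTS["harsh_braking_per_100km"] * braking_rate
            + WEIGHTS["night_fraction"] * avg_night
            + WEIGHTS["speeding_fraction"] * avg_speeding)

if __name__ == "__main__":
    week = [Trip(12.5, 1, 0.0, 0.05), Trip(40.0, 3, 0.6, 0.20)]
    print(f"risk score: {risk_score(week):.2f}")  # a number like this can feed pricing decisions
```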
Surveillance has crept into every corner of our workday. By 2019, half of large corporations were already watching their employees in ways that went far beyond traditional security cameras[4]. The intrusion runs deep: these employers now scrutinize email content, track who meets with whom, and even gather biometric data about employee movements through office spaces[4]. During the pandemic, this digital oversight exploded - global demand for employee monitoring software surged by 108%, with companies rushing to track their newly remote workforce[4]. The tools grew more invasive: webcams capturing periodic photos of workers at their desks, software logging every keystroke, algorithms analyzing the sentiment in our communications[4,5]. Even gig economy workers face relentless digital surveillance. Food delivery riders, for instance, must accept new orders within 30 seconds while their travel times, routes, and customer satisfaction scores feed into algorithms that determine their future work opportunities[4]. This isn't just about productivity - it's about power. When employers combine performance tracking with artificial intelligence, they create detailed behavioral profiles that blur the line between professional assessment and personal invasion[4,5].
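The gig-economy example follows the same pattern. The sketch below shows, with made-up field names and weights, how acceptance speed, delivery times, and customer ratings could be folded into a single ranking that decides who is offered the next shift; it illustrates the logic the sources describe, not any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class RiderStats:
    """Metrics a delivery platform might log for each courier (hypothetical)."""
    rider_id: str
    avg_acceptance_seconds: float   # how fast new orders are accepted (30 s limit)
    avg_delivery_minutes: float
    customer_rating: float          # 1.0 - 5.0

def rider_score(s: RiderStats) -> float:
    """Illustrative ranking: faster acceptance, faster delivery, higher rating win."""
    acceptance = max(0.0, 1 - s.avg_acceptance_seconds / 30)  # 0 if at the 30-second limit
    speed = max(0.0, 1 - s.avg_delivery_minutes / 45)
    rating = (s.customer_rating - 1) / 4
    return 0.3 * acceptance + 0.3 * speed + 0.4 * rating

def assign_shifts(riders: list[RiderStats], slots: int) -> list[str]:
    """Offer the scarce shifts to the highest-scoring riders first."""
    ranked = sorted(riders, key=rider_score, reverse=True)
    return [r.rider_id for r in ranked[:slots]]

if __name__ == "__main__":
    riders = [
        RiderStats("r1", 8, 25, 4.8),
        RiderStats("r2", 25, 40, 4.9),
        RiderStats("r3", 5, 30, 4.1),
    ]
    print(assign_shifts(riders, slots=2))  # who gets work tomorrow
```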
When companies feed all this surveillance data into AI systems, a dangerous pattern emerges: artificial intelligence doesn't just copy human biases - it amplifies them. Tech giants learned this lesson the hard way. Consider Amazon's ambitious attempt to automate its hiring process. The system, trained on a decade of the company's hiring data, taught itself that male candidates were preferable[1]. The AI began systematically downgrading resumes that included the word "women's" or mentioned all-women's colleges, forcing Amazon to scrap the project entirely[1]. The problem runs deeper in today's video interview software. These tools analyze candidates' facial expressions, voice patterns, and language choices, claiming to identify the "best" candidates. Yet they often encode bias into their algorithms, detecting subtle features that correlate with ethnicity, gender, age, or even health conditions - all factors that human recruiters are legally forbidden from considering[3,4].
"When companies feed surveillance data into AI systems, a dangerous pattern emerges: artificial intelligence doesn't just copy human biases - it amplifies them."
Europe's Response
The EU AI Act, adopted in 2024 with its first provisions applying from February 2025, stands as humanity's first comprehensive attempt to regulate artificial intelligence. Like a building code for the digital age, it creates clear boundaries for AI development. The Act sorts AI systems by risk level - banning the most dangerous ones outright, like social scoring systems that could turn our lives into numbers. High-risk AI systems, particularly those making decisions about our health, jobs, or legal rights, must meet strict safety standards and remain under human oversight. Even everyday AI must now announce itself - no more hidden chatbots or unmarked deepfakes. But perhaps most importantly, the Act forces powerful general-purpose AI models to report incidents and follow safety practices, pushing back against the "move fast and break things" mentality that has dominated tech development[7]. While some see these rules as restrictive, they represent a crucial choice: putting human dignity before technological convenience.
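For readers who want the structure at a glance, here is a compact sketch of the Act's four-tier logic as described above. The tier names follow the Act; the example systems and the triage function are purely illustrative, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. social scoring)"
    HIGH = "strict safety standards plus human oversight"
    LIMITED = "transparency duties (chatbots and deepfakes must be labelled)"
    MINIMAL = "no new obligations"

# Illustrative triage of an imaginary company's AI inventory.
# Real classification depends on the Act's annexes and legal review.
INVENTORY = {
    "citizen social-scoring engine": RiskTier.UNACCEPTABLE,
    "CV-screening model for hiring": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for system, tier in INVENTORY.items():
        print(f"{system}: {tier.name} risk -> {tier.value}")
```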
The AI Act challenges tech giants' control. It demands human oversight of AI, transparent decision-making, and limits on automated scoring. While critics see bureaucracy, these rules protect democracy from concentrated AI power.
"The paths before us couldn't diverge more sharply. American tech companies sprint toward AI dominance, fueled by venture capital and minimal oversight. Meanwhile, Europe charts a more deliberate course, accepting slower progress as the price of preserving human agency."
The EU faces real costs. Companies will pay more to develop AI. Features will launch later. Startups might struggle against US and Chinese giants. But these costs buy something priceless: keeping AI's power in check.
Capitalism does not tolerate slowdowns. The EU must balance these economic pressures, likely through government support. But Europe's ability to compete with the USA and China was already in question before the AI Act - making the impact of these regulations less clear than critics suggest.
The Choice
The paths before us couldn't diverge more sharply. American tech companies sprint toward AI dominance, fueled by venture capital and minimal oversight. Meanwhile, Europe charts a more deliberate course, accepting slower progress as the price of preserving human agency. China pursues yet another vision altogether, wielding AI as an instrument of state control. These aren't just different regulatory approaches - they're competing visions of humanity's future. The technology we build today will shape not just how we use AI, but how AI uses us.
Previous technological revolutions primarily automated physical labor, but AI represents a fundamental shift. As Professor Gillian Hadfield explains, "We've begun to delegate many decisions to machines: Who should get a loan; who should get into an educational program; how a car should steer when a human appears in front of it."[5]
This shift concerns many AI researchers, including Whittaker, who warns: "One person may have biases, but they don't scale those biases to millions and billions of decisions, whereas an AI system can encode human biases and then can distribute those in ways that have a much greater impact."[6]
A few tech companies now control humanity's digital future. The EU challenges their power - choosing human rights over profit-driven AI. Moving slower isn't falling behind. It's our chance to keep AI serving democracy, not tech giants.
"Moving slower isn't falling behind. It's our chance to keep AI serving democracy, not tech giants."
Bibliography
1. Dastin, J. (2018) "Amazon scraps secret AI recruiting tool that showed bias against women", Reuters Technology Report. Retrieved from https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
2. European Insurance and Occupational Pensions Authority (EIOPA) (2019) "Big Data Analytics in Motor and Health Insurance: A Thematic Review", Publications Office of the European Union, Luxembourg. Retrieved from https://register.eiopa.europa.eu/Publications/EIOPA_BigDataAnalytics_ThematicReview_April2019.pdf
3. Zickuhr, K. (2021) "Workplace Surveillance Is Becoming the New Normal for U.S. Workers". Retrieved from https://equitablegrowth.org/research-paper/workplace-surveillance-is-becoming-the-new-normal-for-u-s-workers/
4. Ball, K. (2021) "Electronic Monitoring and Surveillance in the Workplace: Literature review and policy recommendations", Publications Office of the European Union, Luxembourg. Retrieved from https://publications.jrc.ec.europa.eu/repository/handle/JRC125716
5. Hadfield, G. (2021) "A new industry emerges to keep AI in line", Rotman Management Magazine, Winter 2022. Retrieved from https://www-2.rotman.utoronto.ca/insightshub/ai-analytics-big-data/Ai-Industry
6. Grover, N. (2020) "Encoding the same biases: Artificial intelligence's limitations in coronavirus response", Horizon Magazine, 7 September. Retrieved from https://projects.research-and-innovation.ec.europa.eu/en/horizon-magazine/encoding-same-biases-artificial-intelligences-limitations-coronavirus-response
7. European Commission (2024) "Artificial Intelligence – Questions and Answers". Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683