In the silent folds of the 21st century, while humanity clung to the screens of their phones or got lost in the labyrinth of social networks, a monumental shift occurred without fanfare.
It was written in binary code, orchestrated by algorithms that no one fully understands, yet it has redrawn the boundaries of power, knowledge, and control. We are talking about the revolution of artificial intelligence, a phenomenon that has demolished the towers of Babel of traditional knowledge and erected, in their place, digital cathedrals inaccessible to most. Yet, despite the magnitude of this upheaval, public discourse remains silent. Why?
If power was once exercised through brute force or ideological persuasion, today it hides in neural networks, data pipelines, and automated decisions that determine who gets a loan, who gets hired, who gets surveilled. Algorithms are not mere tools: they have become invisible arbiters of social justice, guardians of selective truths, architects of parallel realities. When an AI system decides the trajectory of a career or the fate of a company, who is accountable? What ethical values, what cultural biases have been encoded into its equations? The answer is buried in distant servers, protected by industrial secrets and technical complexity. It is the paradox of our era: we have democratized access to information, but we have handed the keys to its interpretation to incomprehensible entities.
The strength of this transformation lies in its imperceptibility. Unlike a political revolution or a coup d’état, the rise of AI has no manifestos or barricades. It creeps in through micro-adjustments: a filter on a social network that polarizes public opinion, a credit scoring system that excludes entire segments of the population, a recruitment platform that replicates historical discrimination. It is a fluid, adaptive power, capable of blending into the norm. And while governments debate outdated regulations, tech corporations are building infrastructures that define what is true, what is just, what is possible.
But if AI is so pervasive, why doesn’t it dominate the headlines? The answer is twofold. On one hand, there is a structural complicity: traditional media depend on the very algorithms that determine the visibility of their news, creating a vicious cycle of dependency and self-censorship. On the other, there is a cultural void: the technical language of experts becomes a wall that excludes the public, while the debate is reduced to sterile polarization between the “apocalyptic” and the “integrated.” Meanwhile, few people ask how a recommendation algorithm can influence the vote of an undecided voter, or why predictive surveillance systems replicate racial patterns.
Some call it “surveillance capitalism,” others “techno-feudalism.” But beyond definitions, what emerges is a new order in which power is no longer exercised only over actions, but over the very possibilities of thought. Generative AI systems that write novels, compose symphonies, or produce hyper-realistic fake news are not mere tools: they are distorted mirrors of our creativity, machines that rewrite the boundaries between human and artificial. And when an algorithm can shape the collective imagination, what remains of individual autonomy?
The stakes are not technological, but philosophical. Recognizing that algorithms have become the new sacred codes—obscure, unchallengeable, omnipresent—is the first step toward reclaiming an active role in their governance. We need not only laws but new forms of digital literacy, transparent labels that reveal the criteria of automated decisions, public spaces where we can discuss what we want to delegate to machines and what we do not. Because in a world where power speaks the language of AI, the truly revolutionary act is to demand to understand its grammar.
Yet, as you read these lines, an algorithm is deciding who will see them, who will share them, who will forget them. The paradox is complete.
Yet even as the algorithm determines the fate of these words, something cracks in the monolith of digital control. The same technology that obscures can also illuminate. In the underground of the internet, collectives of ethical hackers dismantle machine learning models like archaeologists of a buried future, revealing racial biases hidden in lines of code or traces of gender discrimination in datasets. They are the new heretics of the 21st century: scientists publishing research on the harms of algorithmic optimization, artists turning AI errors into provocative installations, local communities mapping the “digital deserts” created by credit scoring systems.
This is not about rejecting progress, but about subverting its hierarchy. In Europe, the Data Unions movement is trying to turn data capitalism into a commons: cooperatives in which users collectively negotiate the use of their information. In Brazil, activists have hacked predictive surveillance systems to redirect them toward mapping structural inequalities. These are fragile experiments, but they reveal a crucial truth: algorithms are not natural laws; they are political choices crystallized in software. And like all choices, they can be dismantled, rewritten, reimagined.
The great deception is believing that AI is neutral. Every algorithm is born from an ontology—a worldview that decides what matters (and what does not). When a facial recognition system classifies faces, it is reproducing centuries of racial hierarchies. When a chatbot generates text, it is replicating linguistic power dynamics. Transparency alone is not enough: we need counter-algorithms, tools of digital critique that dissolve the aura of objectivity. Imagine a browser that, as you navigate, shows you which economic interests shape the suggested content, or an extension that translates the obscure criteria of a predictive model into human language.
The real battle is fought in classrooms, not server farms. Learning to decode algorithms is not a technical skill but an act of civic emancipation. Just as the 20th century fought for mass literacy, the 21st must demand algorithmic literacy: understanding that every like is a vote in an attention economy, that every search is a brick in the construction of one’s digital identity. In Finland, pilot projects turn students into bias detectives, analyzing how recommendation systems distort their perception of reality. It is civic education for the age of algorithms.
The final question is philosophical: can we build an AI that does not replicate our demons? Some pioneers are trying. The Feminist AI movement redefines the criteria of technological design, prioritizing empathy and interdependence. The Icelandic and Estonian governments are experimenting with AI-powered public deliberation platforms, where citizens co-design policies. These are seeds of a different future, where the machine does not decide for us, but amplifies our capacity to decide together.
Perhaps the myth of Babel was prophetic: a civilization punished for its arrogance with linguistic fragmentation. Today, algorithms are both the tower and the punishment. They unite us in global networks, but imprison us in cognitive bubbles; they generate an abundance of information, but deprive us of shared meaning. The way out is not to tear down the tower, but to build ladders to climb to its top together, eyes open, ready to modify every brick. Because every algorithm is, at its core, an interrupted dialogue between past and future.
It is up to us to resume it.
RVSCB