HC Editorial Staff
09/06/2023
In a world increasingly shaped by artificial intelligence, a sobering warning has emerged from one of the highest authorities in global human rights. Michelle Bachelet, then UN High Commissioner for Human Rights, issued a firm call to halt or regulate AI systems that risk infringing on fundamental freedoms. Her statement, rooted in a 2021 OHCHR report, urges governments and corporations to rethink how AI is deployed in everyday life—from welfare programs to border surveillance—before its harms become irreversible.
This is not a call against technology, but against its blind, opaque, and unaccountable use. The UN’s message is clear: AI’s integration into key systems has outpaced human rights protections. Without urgent action, the world risks allowing algorithms to make decisions with life-altering consequences—without oversight, consent, or transparency.
Artificial intelligence now plays a decisive role in determining access to services, policing, and even freedom of movement. But as the OHCHR report reveals, this integration is happening faster than regulators can respond.
At the heart of the concern lies AI’s reliance on large datasets—many of which are biased, incomplete, or collected without informed consent. These systems often operate as black boxes, making critical decisions without offering any explanation. According to Bachelet, “We cannot afford to continue playing catch-up regarding AI—dealing with human rights consequences after the fact.”
The OHCHR emphasizes how AI technologies routinely gather and analyze personal data with minimal transparency. Automated systems merge information across platforms, creating detailed digital profiles that individuals are often unaware of. Even supposedly anonymized data may be reverse-engineered to identify individuals, raising serious long-term risks about misuse and surveillance.
AI has already contributed to concrete harms. In documented cases, welfare applicants were unjustly denied assistance due to algorithmic profiling. Facial recognition tools have misidentified individuals, leading to wrongful arrests. These errors are not random—they reflect the structural bias encoded into datasets, disproportionately affecting vulnerable populations such as ethnic minorities, migrants, and low-income communities.
Perhaps the most alarming development is the spread of real-time biometric surveillance. Facial recognition systems deployed at public events, transit hubs, and border checkpoints present a direct threat to freedoms of movement, assembly, and expression. The OHCHR recommends an immediate moratorium on AI-powered surveillance until proper legal safeguards are in place.
Bachelet's report warns that the expansion of these technologies has created a reality in which individuals may be monitored, flagged, or restricted on the basis of an algorithm's opaque assessment, without the chance to challenge or even understand that process.
One of the most urgent concerns highlighted by the report is the lack of enforceable legal standards governing AI. While some regions (such as the European Union) have begun proposing legislation like the AI Act, many countries operate in a legal vacuum. Existing frameworks often lack teeth, relying on voluntary guidelines rather than binding rules.
This regulatory lag has created what experts call “policy catch-up”—a condition where AI tools are widely deployed before lawmakers fully understand their implications. The result? Governments and societies are left reacting to harm after it occurs, rather than preventing it.
The UN proposes several measures to reverse course:
- a moratorium on AI-powered biometric surveillance until adequate legal safeguards are in place;
- binding, enforceable legal standards in place of voluntary guidelines;
- transparency requirements so that individuals can understand, and challenge, the automated decisions that affect them.
These proposals are not just theoretical—they’re grounded in real-world cases where unchecked AI caused harm. The report stresses the need for enforceable protections, especially for populations least able to challenge unfair outcomes.
Beyond regulatory reform, the OHCHR emphasizes the need to shift how societies think about digital life. The report argues that privacy, freedom of expression, and protection from discrimination must be considered non-negotiable rights in any digital system—not optional features or technological afterthoughts.
This recognition forms the foundation for a growing global movement that includes civil society organizations, legal experts, and digital rights advocates. From court challenges in the UK to facial recognition protests in Latin America, a new wave of activism is taking root.
As artificial intelligence becomes more embedded in daily life, the risks to human rights are no longer theoretical—they are present, measurable, and escalating. The UN's call for action is a critical reminder: just because AI can do something does not mean it should.
Protecting human rights in the digital age requires more than innovation—it requires restraint, accountability, and above all, justice. Governments, corporations, and civil society must now choose whether they will build AI systems that reinforce dignity and equality—or allow invisible code to erode the very freedoms we depend on.
Topics of interest: Technology

Reference: Bachelet M. Artificial intelligence risks to privacy demand urgent action – Bachelet. OHCHR [Internet]. 2021 Sep 15. Available from: https://www.ohchr.org/en/press-releases/2021/09/artificial-intelligence-risks-privacy-demand-urgent-action-bachelet