AI Ethics in 2026: The Major Conflicts Reshaping Societies
From the EU AI Act to deepfakes and bias in hiring models — a deep analysis of AI ethics in 2026 and the unresolved legal and philosophical battles
AI DayaHimour Team
March 28, 2026
When the Tool Outgrows Its Governance
In April 2026, the European Union AI Act (EU AI Act) came into full force as the world's first comprehensive legislation regulating artificial intelligence. Simultaneously, an American artist filed a lawsuit against three leading companies in the field for appropriating her artistic style, while judges in Canada ruled that "AI-generated images" carry no copyright because their true creator is the algorithm, not a human.
Ethical questions about artificial intelligence are no longer just theoretical or philosophical concerns. They have transformed into court cases, legislation debated in parliaments, and direct impacts on the jobs and lives of millions of people. This article reviews the most prominent of these conflicts and analyzes the positions of key players.
Global Legislation: Where Does the World Stand?
European Union: The Most Comprehensive AI Law in History
The EU law classifies AI systems according to four risk levels:
Level 1 — Unacceptable risk (completely prohibited): Chinese-style social scoring systems, subliminal manipulation, exploitation of psychological vulnerabilities, and emotion recognition in workplaces and education.
Level 2 — High risk: AI systems used in critical medical decisions, hiring and selection, control of vital infrastructure, and credit and insurance decisions. These require conformity assessments, documentation, and human oversight before deployment.
Level 3 — Limited risk: systems such as chatbots and generated media, which carry transparency obligations: users must be told they are interacting with an AI or viewing synthetic content.
Level 4 — Minimal risk: applications such as spam filters and video-game AI, which face no new obligations.
Fines for prohibited practices reach up to €35 million or 7% of global annual turnover, whichever is larger. These figures have driven major companies to allocate massive budgets to regulatory compliance departments.
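As a back-of-the-envelope illustration of the "whichever is larger" rule (a sketch only, not legal advice; the default ceiling figures of €35 million and 7% of worldwide turnover are this sketch's assumption for the Act's top penalty tier):

```python
def eu_ai_act_fine_cap(global_revenue_eur: float,
                       flat_cap_eur: float = 35_000_000,
                       revenue_share: float = 0.07) -> float:
    """Ceiling on a fine for prohibited AI practices: a flat amount or a
    share of worldwide annual turnover, whichever is larger. The default
    figures are an assumption of this sketch, not a legal citation."""
    return max(flat_cap_eur, revenue_share * global_revenue_eur)

# A firm with EUR 1 billion in global revenue: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the cap is EUR 70 million.
print(eu_ai_act_fine_cap(1_000_000_000))
```

For smaller firms the flat amount dominates: at €100 million in revenue, 7% is only €7 million, so the €35 million floor applies, which is why the rule bites hardest at the largest companies.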
United States: Absence of Unified Legislation
The American situation is entirely different: there is still no comprehensive federal law, only voluntary principles and fragmented sectoral regulation. California, Texas, and Illinois have issued their own laws, complicating compliance for companies operating across multiple state markets. The AI bill proposed in the Senate (March 2026) remains under discussion amid sharp disagreement between supporters of strict regulation and defenders of innovation freedom.
China: Regulation with a Different Approach
China follows a unique path: directing algorithms to serve “healthy social values,” strict registration requirements for generative models, and the state’s right to review training data. The result is a huge, isolated local AI market, where models like DeepSeek and Qwen thrive but within strict boundaries.
The Bias Crisis: The Algorithm as a Mirror of Society
Hiring: Digital Discrimination
In 2018, it was revealed that an AI system used by Amazon to evaluate resumes was downgrading graduates of women's colleges and resumes containing the word "women's." By 2026, the problem has not disappeared; it has expanded. A study of 4,000 American companies showed that AI-supported hiring systems give applicants with names associated with white backgrounds 24% higher callback rates for identical resumes. U.S. courts heard 34 AI-related workplace discrimination cases during 2025 alone.
The real root of the problem: Historical data is saturated with human biases. When an algorithm learns from past human hiring decisions, it learns their biases too. The solution isn’t purely technical; it requires diversifying development teams, meticulous data review, and continuous bias testing.
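One widely used screening check for the continuous bias testing mentioned above is the "four-fifths rule": flag a system if any group's selection rate falls below 80% of the best-performing group's rate. A minimal sketch (the group labels and counts below are hypothetical audit data, not figures from the study cited above):

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_violations(outcomes: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> list[str]:
    """Return the groups whose selection rate falls below `threshold`
    times the highest group's rate (the classic disparate-impact test)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Hypothetical audit: (selected, total applicants) per demographic group.
audit = {"group_a": (45, 100), "group_b": (30, 100)}
# group_b's rate of 0.30 is below 0.8 * 0.45 = 0.36, so it gets flagged.
print(four_fifths_violations(audit))
```

A check like this belongs in the deployment pipeline, rerun as the model and the applicant pool change; a one-time pre-launch audit catches only the biases present on day one.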
Facial Recognition: Unequal Accuracy
Facial recognition technologies have carried a documented accuracy gap for years: error rates reach 34.7% for women with darker skin versus 0.8% for men with lighter skin, according to an MIT study. In 2026, the technology is widely deployed in airports, police systems, and crowd monitoring, scenarios that directly affect individuals' rights and freedoms.
The Deepfake Crisis: When You Can’t Believe Your Eyes
The deepfake phenomenon reached unprecedented scale in 2026. An estimated 96% of deepfake videos online are non-consensual sexual material, according to monitoring organizations. Over 200,000 fake images and videos spread daily. Cases of forged executive voices at major companies being used to authorize financial transfers have been recorded, with total losses exceeding $40 million. Deepfake election videos demonstrably affected five national elections during 2024-2025.
The race between deepfake generation and detection is uneven: every detection tool launched is soon countered by smarter generation tools. On the legal side, the U.S. passed the DEFIANCE Act in 2024, which gives victims of non-consensual sexual deepfakes a federal civil remedy, while Europe treats such material as a clear violation of the General Data Protection Regulation. But enforcing these laws across borders remains a complex problem.
Privacy in the Age of AI
Facial recognition systems are widespread in cities like London and Shanghai without explicit citizen consent. Even in “progressive” American cities like San Francisco, which banned government use of this technology, private companies still use it freely.
Regarding training data, landmark lawsuits that could reshape the entire industry are moving forward: Getty Images vs. Stability AI over training models on copyrighted images, Authors Guild vs. OpenAI over using books to train ChatGPT, and visual artists vs. Midjourney over protecting "artistic style" as intellectual property. These cases have not been resolved yet, but some courts are leaning in favor of artists and writers, which could radically change the rules of the game.
AI and Work: Facts Without Exaggeration
Contradictory estimates — “AI will take 40% of jobs” versus “it will create multiples of what it takes” — reflect different measurements of a complex phenomenon. What can be stated with certainty: Repetitive and logical tasks (data entry, basic translation, routine document review) are definitely affected. Creative and relational jobs adapt more than they disappear.
Wages in AI-related professions exceed average wages by 47% (according to the U.S. Bureau of Labor Statistics, 2025). But the technological gap is widening: those who master AI tools earn more, while the relative income of those who don’t declines. The most affected aren’t executives, but middle-income workers in routine professions.
Artificial Consciousness: The Biggest Philosophical Question
Claude sometimes says "I want" and "I feel." GPT-5 expresses "discomfort" with some requests. Are these just strings of generated text, or the beginning of something deeper? The current scientific position: there is no evidence of real consciousness. These responses are statistical products of learned patterns; the model produces what a conscious being would be expected to say in that context.
But philosopher Daniel Dennett (before his death in 2024) warned: “The problem isn’t knowing whether models are conscious now, but that we lack an agreed-upon definition of consciousness that allows us to answer.” The practical challenge: Even without real consciousness, humans form emotional relationships with chatbots. Companies like Replika and Character.ai face serious questions about ethical responsibility toward their users, especially in cases of addiction or psychological harm.
Practical Recommendations for Different Stakeholders
At the individual level: It's important to know your legal rights. In the European Union, any person has the right to contest a consequential decision made about them by an AI system and to request human review. Users should also be aware of what they share with AI models, since these conversations are often used to improve the systems. Critical thinking about AI outputs, which may appear correct and logical yet be wrong, has become a fundamental skill.
At the organizational level: Examine AI models for bias before deployment, not after. Form a genuine AI ethics committee (not a ceremonial one), and document AI-related decisions, because upcoming legal cases will require that documentation.
At the policymaker level: Transparency in AI use isn’t just an ethical choice; it will soon become a legal requirement. Early regulation is far better than trying to repair damages after it’s too late.
Ethics for Whom?
The deeper question in 2026 debates: Who determines AI ethical standards? Giant American tech companies? Western governments? The United Nations? Or should these standards be formulated through genuine representation of all cultures and societies worldwide?
This question doesn’t have a globally accepted answer yet. And this absence is itself an ethical problem. The tool is powerful, and oversight is late. Everyone — individuals, governments, and companies — is called upon to contribute to shaping a fair and responsible AI world. Awareness of these challenges is the first step toward confronting them.