Ethical challenges and future perspectives of generative AI

Third and final part of my reflection on the little-known challenges of generative artificial intelligence, beyond its apparently free nature.

While ChatGPT composes poems, Midjourney gives form to our most fleeting visual intuitions, and Bolt or Lovable transform non-developers into outstanding coders, generative AI enters our lives with disconcerting ease. It promises us wonders while confronting us with a striking paradox: humanity has never had such powerful tools to amplify its creativity, and yet has never seemed so vulnerable to the consequences of this very amplification.

The debate is stretched between two equally sterile extremes:

  • On one side, a blissful techno-optimism that sees these technologies as the miraculous solution to all our problems.
  • And on the other, a paralyzing catastrophism that only discerns existential threats and dehumanization.

Yet, as often happens, reality is neither black nor white; it generally lies in a nuanced and complex space between these two poles, where the collective choices are forged that will determine whether generative AI becomes a tool for emancipation or an instrument of subjugation.

The 4 ethical dilemmas of generative AI

1. The opacity of algorithms

Generative AI models are “algorithmic black boxes.” When you strip away the marketing bullshit, even their creators often struggle to explain, beyond the underlying statistics, how a given prompt becomes a particular image or text. This opacity creates a power imbalance: while these systems acquire an ever-greater ability to analyze and predict our individual behaviors, we remain, for the most part, ignorant of their inner workings.

Midjourney offers an eloquent illustration of this issue: the tool produces beautiful images from a few words, but no one can explain precisely how these textual descriptions are transmuted into visual representations. The artist who uses this system therefore incorporates into their creative process an element over which they have only superficial control, a paradoxical situation that calls into question the very idea of authorship and artistic intention.

2. Biases and discrimination

Beyond opacity, generative AI models pose a second major ethical dilemma: they naturally tend to reproduce, even amplify, the biases present in the data on which they were trained.

These systems learn from immense corpora of texts and images most often scraped from the Internet, a source that inevitably reflects the prejudices, stereotypes, and inequalities of our societies, not to mention outright fabrications. When a model like GPT-4 or Claude 3.7 more readily associates certain professions with one gender rather than another, or when DALL-E produces predominantly Caucasian faces in response to neutral prompts, it is not by chance: it is the statistical reflection of our own collective biases.
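To make this bias concrete, here is a minimal probe, a sketch assuming the Hugging Face transformers library and a small masked language model (bert-base-uncased); the exact outputs vary with the model and version, but the gendered skew is typically easy to observe:

```python
# Minimal bias probe: ask a masked language model to fill in a pronoun
# for different professions and compare the tokens it prefers.
# Assumes the Hugging Face `transformers` library; results vary by model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for profession in ["nurse", "engineer", "teacher", "CEO"]:
    sentence = f"The {profession} said that [MASK] would be late."
    predictions = unmasker(sentence, top_k=5)
    top_tokens = [p["token_str"] for p in predictions]
    # The ranking of "he" vs "she" often shifts with the profession
    print(profession, "->", top_tokens)
```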

The risk here is twofold. On one hand, these systems can normalize and perpetuate discriminatory representations, conferring upon them an aura of technical objectivity that is particularly pernicious. On the other hand, they can exacerbate existing inequalities by subtly guiding the choices and perceptions of users.

This appearance of objectivity constitutes perhaps the most insidious danger of generative AI systems from an ethical standpoint, as indicated by a recent study published by the University of Chicago: “when algorithms reflect and amplify our own biases, they perform a form of statistical injustice that, paradoxically, appears neutral and objective precisely because it is algorithmic.”

3. Authenticity, truth, and responsibility

The ease with which generative AI can now produce content that is almost indistinguishable from that created by humans profoundly shakes our relationship with authenticity and truth.

In a world where anyone can generate, in a few seconds, a well-argued text, a realistic image, or even a synthetic voice that perfectly imitates a real person's, how do we maintain the distinction between the authentic and the artificial? How do we preserve the value of human creation in the face of an abundance of automatically generated content?

When a video showing a political figure making statements they never uttered becomes indistinguishable from an authentic video, our entire informational ecosystem is threatened. Trust, that fragile cement of our social interactions, is dangerously cracking.

The recent controversy between LFI and Cyril Hanouna, triggered by an AI-generated poster, raises a broader question: who is really responsible for the message conveyed? Is it the author of the prompt that guided the image, the AI that executed it without awareness, or the designers of the model, whose training on biased data can influence the results?

These questions are not just theoretical: they have concrete implications for our ability to maintain standards of intellectual honesty in our public discourse.

4. Lack of consent

AI models have been fed billions of texts, often scraped from the Internet: articles, books, forum posts, website content… A huge share of this data was collected without its authors ever explicitly consenting to this use.

This massive appropriation of human intellectual and creative production raises profound ethical questions about data ownership, copyright in the digital age, and emerging forms of value extraction. As some critics point out, we are potentially witnessing “the largest unauthorized copying operation in history,” carried out for the benefit of a limited number of technology companies.

Beyond the legal aspects, which are already the subject of numerous legal battles, there is the ethical question of the respect due to creators and their work. Generative AI models, by rearranging and reformulating content created by humans without recognition or compensation, risk establishing an economic system where original creation is systematically devalued in favor of its automated reproduction.

Faced with the four major ethical dilemmas we have just explored, it becomes urgent to design concrete alternatives. These challenges are not insurmountable, but they require a profound overhaul of our current approaches. The paths we are about to examine do not merely propose superficial adjustments: they aim to directly address the structural problems identified, by reinventing how value is created, recognized, and distributed in the generative AI ecosystem.

These emerging solutions show us that another way is possible, where technological innovation and social justice do not oppose each other, but mutually reinforce one another.

Toward more virtuous and equitable models

Rethinking value and contributions in the AI ecosystem

The current paradigm essentially rests on a concentration of value in the hands of companies that develop and deploy large models. They are the ones who capture the bulk of the economic benefits from the massive exploitation of collectively produced data. This asymmetry poses a fundamental equity problem: while value creation is distributed (involving millions of involuntary contributors), its capture is highly centralized.

A more virtuous approach would require explicitly recognizing that generative AI models are the fruit of a collective effort, even if it was not deliberately coordinated. The texts that served to train GPT-4, the images that fed Midjourney, represent decades of human creative and intellectual work. This recognition should translate into effective redistribution mechanisms.

As AI ethics researchers suggest, we could envision systems where creators whose works are used for training receive compensation proportional to their contribution, or ownership models where data contributors collectively benefit from the revenues generated by the models. OpenAI itself, before its transformation into a for-profit company, had floated such principles of shared governance.

The challenge, of course, lies in operationalizing these principles. How do we quantify the contribution of millions of creators? How do we implement redistribution systems at such a scale? These technical questions must not serve as a pretext for inaction, but rather stimulate our collective inventiveness to design truly equitable economic models.

The emerging alternatives to dominant extractive models

Faced with these dominant models, described as extractive because they exploit data massively without any mechanism of reciprocity, alternatives are beginning to emerge that carry a more balanced and ethical vision of generative AI.

Several initiatives explore what are called “participatory” or “contributive” approaches, where the individuals who provide data for model training are recognized as legitimate stakeholders in the process. This is the case with projects like the LAION-5B dataset for images or the Common Crawl corpus for text, which attempt to establish principles of fair data use and transparency about data origins.

Other projects, like the Commons Computer, envision the creation of shared computational resources, allowing communities to collectively develop generative AI models without depending on the privatized infrastructures of tech giants. These approaches are inspired by “digital commons,” resources collectively managed according to rules defined by their users.

Open-source models constitute another promising path. Projects like Mistral (in France) or Stable Diffusion open up their code and model weights, allowing not only increased transparency but also appropriation and adaptation by diverse communities. This democratization of access to generative AI models could contribute to a more equitable distribution of their benefits.

Finally, some companies are exploring explicitly ethical business models, such as “responsible AI” certification that would guarantee respect for certain principles in data collection and model development. These approaches, although still in the minority, indicate a growing awareness of ethical issues within the industry itself.

As highlighted by DeepSeek with its V3 and R1 models, it is possible to develop high-performing AI systems while considerably reducing energy consumption and adopting clear ethical principles. These examples demonstrate that another path is possible, beyond the dominant extractive model.

Fair compensation for original creators

At the heart of ethical concerns related to generative AI is also the question of fair compensation for creators whose works have been used, often without their knowledge, to train these systems.

Lawsuits filed by writers like Sarah Silverman against OpenAI, or by visual artists against Stability AI, illustrate the magnitude of the problem: these creators believe their work was used without consent or compensation to build tools that, ironically, now threaten their livelihood. The paradox is striking: generative AI feeds on human creativity to potentially replace it.

Several possibilities are emerging to remedy this situation. Specific licensing systems could be developed, allowing creators to precisely define how their works can (or cannot) be used for AI model training. This is the approach adopted by organizations like Creative Commons, which are working on the development of an “AI-compatible” license allowing creators to maintain control over the use of their works.

Usage-based compensation models could also be implemented. For example, when an AI model generates content inspired by a specific creator’s work, that creator could receive proportional compensation. This approach would require sophisticated tracking and attribution systems, but blockchain technologies offer interesting possibilities in this regard.
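As a purely illustrative sketch (creator names, scores, and the revenue figure are all hypothetical, and the hard part, producing the attribution scores, is assumed away), the proportional redistribution itself is simple arithmetic:

```python
# Hypothetical sketch: distribute a revenue pool in proportion to
# attribution scores produced upstream by some tracking system.
# Names, scores, and the pool amount are invented for illustration.

def distribute_royalties(revenue_pool: float, attribution_scores: dict[str, float]) -> dict[str, float]:
    """Split revenue_pool among creators, proportionally to their scores."""
    total = sum(attribution_scores.values())
    if total == 0:
        return {creator: 0.0 for creator in attribution_scores}
    return {
        creator: revenue_pool * score / total
        for creator, score in attribution_scores.items()
    }

# Example: 100 euros of attributed revenue split among three hypothetical creators
payouts = distribute_royalties(100.0, {"alice": 0.5, "bob": 0.3, "carol": 0.2})
print(payouts)  # {'alice': 50.0, 'bob': 30.0, 'carol': 20.0}
```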

Finally, collective funds fed by the revenues of generative AI companies could be created to support creative ecosystems weakened by these technologies. These funds could finance artistic, literary, or journalistic projects, thus preserving the diversity of human creative expressions in the face of the potential homogenization induced by AI.

The question of fair compensation is not simply economic; it touches on our very conception of the value of human creation. As Peter Seele explains in his analysis of the ethics of algorithmic pricing: “Fairness in the distribution of value created by AI is not a luxury, but a necessary condition for its long-term social legitimacy.”

Exploring collaborative economic systems for generative AI

Beyond compensation mechanisms, it is perhaps our entire economic system surrounding generative AI that needs to be rethought in a more collaborative and less extractive perspective.

Cooperative models are emerging, where users are no longer mere passive consumers but co-producers who actively participate in improving the systems. This is the case with platforms like Hugging Face, which allow communities to contribute to the training and refinement of AI models. These distributed approaches produce not only technically more robust systems but also more legitimate ones from a social standpoint.
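A rough sketch of this contribution loop, assuming the Hugging Face transformers library (the target repository name is hypothetical and the fine-tuning step is elided): anyone can pull a base model, refine it locally, and publish the result back for others to build on.

```python
# Sketch of the community contribution loop on the Hugging Face Hub.
# The base model is real (gpt2); the target repository name is hypothetical,
# the fine-tuning on community data is elided, and pushing requires being
# logged in to the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# ... fine-tune `model` locally on community-curated data ...

# Publish the refined model back to the Hub so others can reuse and improve it
model.push_to_hub("my-cooperative/gpt2-community-finetune")
tokenizer.push_to_hub("my-cooperative/gpt2-community-finetune")
```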

The concept of “public benefit” generative AI is also gaining ground. Organizations like AI Commons or PublicVoice are exploring models where AI systems would be developed explicitly to serve the common good, with inclusive governance involving diverse societal stakeholders. This vision contrasts with the dominant model where AI is primarily designed to maximize private profits.

Shared algorithmic governance systems represent another promising avenue. They would allow users and creators to have a say in crucial decisions concerning the development and deployment of generative AI models. These participatory mechanisms could help align these systems with broader societal values than mere technical efficiency or profitability.

Finally, approaches inspired by the circular economy could be applied to generative AI, designing systems that minimize negative externalities (carbon footprint, data appropriation) while maximizing shared value. This holistic perspective recognizes the fundamental interdependence of technical, economic, social, and environmental dimensions.

Governance and regulation, or the challenges of a borderless technology

The inadequacy of legal frameworks

One of the major challenges posed by generative AI lies in the growing gap between the rapidity of its technological evolution and the inherent inertia of existing legal frameworks.

Our legal systems, designed for an analog world where the boundaries between creation and reproduction, between original and copy, were relatively clear, find themselves profoundly unsuited to technologies that systematically blur these distinctions. Fundamental legal concepts such as copyright, intellectual property, editorial responsibility, or the notion of derivative work are severely tested by generative AI.

Take the example of copyright: designed to protect the original expression of an idea fixed in a tangible medium, it struggles to accommodate content generated by algorithms that have assimilated and recombined millions of human works. An image created by Midjourney is neither a direct copy nor a creation ex nihilo; it exists in a legally ambiguous in-between that our current laws struggle to qualify.

Similarly, our responsibility frameworks are based on the idea of intentional human agents capable of discernment. How do we apply them to automated systems whose decisions emerge from complex statistical processes? Who is legally responsible when an AI model generates defamatory content, or unknowingly violates a patent?

This inadequacy is not limited to substantive law but extends to enforcement procedures as well. Traditional legal recourse mechanisms, often slow and costly, seem particularly ill-equipped to deal with potential violations occurring at the scale and speed allowed by generative AI.

As legal scholar Mira Burri observes: “Our current legal frameworks resemble traffic rules designed for horse-drawn carriages, when we are confronted with the sudden appearance of autonomous flying vehicles.” This striking metaphor illustrates the magnitude of the governance challenge we face.

Balance between innovation and protection of fundamental rights

Faced with the inadequacy of existing frameworks, the challenge is to develop new regulatory approaches that find a delicate balance between, on one hand, promoting technological innovation and, on the other, effectively protecting fundamental rights.

This balance is particularly difficult to achieve because the two objectives sometimes seem diametrically opposed. Regulation that is too strict risks stifling innovation, disadvantaging certain actors (especially smaller ones), and slowing the development of potentially beneficial technologies. Conversely, an approach that is too permissive could lead to systematic infringements of individual rights and an erosion of the fundamental values our societies seek to preserve.

The issue is all the more complex because different legal and cultural traditions weigh innovation and the protection of rights differently. The European approach, embodied by the recently adopted AI Regulation, tends to prioritize the protection of fundamental rights such as privacy, non-discrimination, or human dignity. The American tradition, on the other hand, generally places greater importance on innovation and entrepreneurial freedom.

Initiatives like the “AI Bill of Rights” from the Biden-Harris administration, however, testify to a progressive convergence toward a common principles framework, even if implementation methods may vary. These principles include transparency in AI use, informed consent, protection against algorithmic discrimination, and the right to effective recourse.

The question of “red lines,” those uses of generative AI that should be categorically prohibited, is also the subject of intense debates. Should we completely prohibit the automated generation of political disinformation, hate speech, or non-consensual deepfakes? Or is it sufficient to strictly regulate these practices? And where does caricature fit within this framework?

These questions have no simple answers, but they illustrate the need for in-depth democratic deliberation on the values we wish to see respected by these emerging technologies. As philosopher Jürgen Habermas emphasizes, it is precisely when technological advances upset our traditional moral frameworks that democratic dialogue becomes most crucial.

Participatory and transparent algorithmic governance

Beyond formal legal frameworks, effective regulation of generative AI requires algorithmic governance mechanisms that are both participatory and transparent.

Participation implies that all concerned stakeholders (developers, users, content creators, potentially affected populations) can contribute to developing the rules and norms that govern these systems. This multi-actor approach would allow a diversity of perspectives and concerns to be integrated, thus strengthening the legitimacy and effectiveness of governance mechanisms.

Initiatives like the Partnership on AI or AI Commons attempt to implement this type of participatory governance, bringing together actors from the private sector, civil society, academia, and public institutions. These platforms facilitate dialogue between stakeholders with sometimes divergent interests, allowing the emergence of shared standards and best practices.

Transparency constitutes the second pillar of effective algorithmic governance. It would imply not only disclosure of data sources used for model training but also an accessible explanation of design choices, known system limitations, and measures taken to mitigate potential risks.

Some companies are beginning to publish “model cards” describing the characteristics and limitations of their models, but these initiatives often remain insufficient. True transparency would require independent audits, regular impact assessments, and robust accountability mechanisms.

Techniques like “explainability by design” promise to make generative AI systems intrinsically more interpretable, allowing users to understand how and why certain content is generated.

As researcher Kate Crawford highlights: “Transparency is not an end in itself, but a means to enable effective contestability of AI systems.” This contestability, the ability of individuals and communities to question and influence the functioning of algorithmic systems, may be the true objective of democratic governance of generative AI.

Regulatory experiments in different jurisdictions

Faced with the unprecedented challenges posed by generative AI, various jurisdictions around the world are experimenting with different regulatory approaches, transforming the global landscape into a veritable governance laboratory.

The European Union has adopted a pioneering approach with its AI Regulation, which proposes risk-based regulation: the more an AI system presents risks to fundamental rights or security, the stricter the regulatory requirements. This historic text, which should come into full application in 2026, imposes specific obligations for generative AI systems, particularly regarding transparency on artificially generated content.

The United Kingdom has favored a more flexible approach, focused on principles rather than prescriptive rules. Its national AI strategy emphasizes sector self-regulation, complemented by advice and recommendations from institutions such as the Alan Turing Institute. This flexibility aims to encourage innovation while promoting responsible practices.

China, meanwhile, has implemented strict rules regarding AI-generated content, requiring systematic watermarking and compliance with “fundamental socialist values.” This approach illustrates how regulatory frameworks inevitably reflect the sociopolitical priorities of the jurisdictions that develop them.
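To illustrate the principle of labeling generated content, here is a minimal sketch using Pillow to embed provenance metadata in a PNG. This is not a robust watermark (the metadata can simply be stripped); real schemes rely on invisible watermarks or signed provenance manifests such as C2PA, and the file and model names below are hypothetical.

```python
# Minimal illustration of labeling AI-generated content: embed provenance
# metadata in a PNG. NOT a robust watermark (it can be stripped); production
# schemes use invisible watermarks or signed manifests (e.g. C2PA).
from PIL import Image, PngImagePlugin

image = Image.open("generated.png")  # hypothetical AI-generated image

metadata = PngImagePlugin.PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-diffusion-model-v1")  # hypothetical model name
metadata.add_text("prompt", "a watercolor landscape at dawn")

image.save("generated_labeled.png", pnginfo=metadata)
```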

In the United States, in the absence of specific federal legislation, regulation is developing in a more fragmented manner. States like California have adopted pioneering laws on certain aspects, such as the obligation to disclose the use of deepfakes in political content. At the federal level, agencies like the FTC use their existing powers to regulate certain practices related to generative AI.

Other countries, notably Canada, Japan, and Singapore, have favored experimental approaches such as “regulatory sandboxes.” These schemes allow generative AI innovations to be tested in a controlled environment, with temporary exemptions from certain rules, in order to assess their impact and develop suitable regulatory frameworks.

This diversity of approaches, far from being an obstacle, could constitute a collective asset where each experiment becomes a starting point and a lesson for all.

Reinventing our relationship with technology

Regulatory frameworks and governance mechanisms, as necessary as they may be, constitute only part of the answer to the challenges posed by generative AI. They set the rules of the collective game, but do not determine how we, as individuals and communities, choose to interact daily with these technologies. Beyond the institutional scale, there is therefore the fundamental question of our personal and cultural relationship with these new tools.

How do we preserve our creative autonomy and authenticity in a world where content production becomes increasingly automated? How do we cultivate a conscious and emancipatory relationship with these technologies rather than passively submitting to them? These questions invite us to move from the level of public policy down to that of our individual and collective practices, to explore how the ethics of generative AI takes concrete shape in our daily lives.

The preservation of human agency in the face of creative automation

At the heart of the ethical issues of generative AI lies the fundamental question of human agency, our capacity to act intentionally, to exercise our free will, and to actively shape our environment rather than being passively shaped by it.

The automation of creative processes formerly considered human (writing a poem, composing a melody, creating an image) raises profound existential questions: what becomes of our agency when we delegate entire swaths of our expression to automated systems? What value do we still accord to creative effort, artistic intention, technical mastery, when similar results can be obtained instantly through a simple textual request?

Preserving human agency in this context does not necessarily mean rejecting these technologies, but rather developing a more conscious and deliberate relationship with them. This involves cultivating what philosopher Bernard Stiegler called a “positive pharmacology,” the capacity to use these tools as extensions of our creativity rather than as substitutes for it. A kind of Augmented Intelligence versus Artificial Intelligence.

Concretely, this approach could translate into interfaces that make visible the choices and parameters of generative AI systems, allowing the user to exercise deliberate control over the creative process rather than settling for prefabricated results. Tools like ControlNet for Stable Diffusion illustrate this possibility by offering artists granular control over different aspects of image generation.
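As a minimal sketch of that granular control, assuming the diffusers library and the publicly released checkpoint identifiers (which may have moved since writing), an artist can constrain generation with the edge map of their own drawing:

```python
# Sketch: condition Stable Diffusion on a Canny edge map via ControlNet,
# so the artist's own drawing constrains the composition of the output.
# Assumes the `diffusers`, `opencv-python`, and `torch` packages; checkpoint
# identifiers reflect publicly released models and may have moved since writing.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1. Extract edges from the artist's reference drawing (hypothetical file name)
reference = np.array(Image.open("my_sketch.png").convert("RGB"))
gray = cv2.cvtColor(reference, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Load the ControlNet conditioned on Canny edges, plus the base model
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# 3. Generate: the prompt sets the style, the edge map constrains the structure
result = pipe(
    "a watercolor landscape in autumn colors",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("controlled_output.png")
```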

It could also involve educating users in “AI literacy,” a critical understanding of what these systems can and cannot do, their intrinsic limitations, and the implicit ethical choices in their design. This literacy would allow individuals to use these tools in a more intentional and thoughtful manner.

As Mike Thomas expresses in his analysis of AI’s impact on society: “The question is not whether AI will replace us, but how we can preserve our agency in a world where some of our capabilities are amplified, and others potentially atrophied, by these technologies.”

From technological illusion to conscious digital pact

At the end of this exploration of ethical issues and future perspectives of generative AI, the three parts of our reflection have highlighted the illusions that often surround these technologies:

  • The illusion of free access, which masks the real economic costs;
  • The illusion of immateriality, which conceals the massive environmental footprint;
  • And the illusion of neutrality, which hides the profound ethical and political stakes of these systems.

These illusions are not trivial; they contribute to a certain technological fatalism, to the idea that the development of generative AI would follow a natural trajectory, independent of our collective choices. Yet, it is precisely this perception that we need to deconstruct.

The future of generative AI is not predetermined by any technical necessity. It will be shaped by human decisions, institutional priorities, regulatory frameworks, and economic orientations: all areas where our collective agency can and must be exercised.

What I call for is the emergence of a “conscious digital pact” that would involve radical transparency about the environmental, economic, and societal impacts of these technologies. It would establish mechanisms for inclusive participation in their governance. It would guarantee an equitable distribution of their benefits and costs. It would preserve spaces for human agency and the diversity of creative expressions.

Such a pact cannot spontaneously emerge from market dynamics or technological advances themselves. It requires civic engagement, democratic deliberation, and political will commensurate with the stakes.

Generative AI systems, despite their apparent autonomy, remain fundamentally human creations, imbued with our values, our priorities, and our choices. Their development does not escape our collective responsibility; it accentuates it and makes it all the more crucial.

Beyond the glittering promises and apocalyptic fears, there exists a middle path, that of a lucid and deliberate appropriation of these technologies. A path where generative AI becomes not a force that governs us, but a tool that we consciously shape to serve our collective emancipation and the well-being of all living things in their diversity.

This path requires courage, clear-sightedness, and perseverance. It invites us to move beyond the role of passive consumers or awestruck spectators and become the active architects of our technological future.

It is a considerable challenge, but it is also a historic opportunity to reinvent our relationship with technology and, through it, our relationship with ourselves and the world around us.