AI and the automatic generation of intersectional discourses

AI and the automatic generation of intersectional discourses

Francois Rastier

François Rastier is an honorary research director at the CNRS and a member of the Laboratory for the Analysis of Contemporary Ideologies (LAIC). Latest work: Petite mystique du genre, Paris, Intervalles, 2023.
The use of Artificial Intelligence (AI) is now reaching the general public. Its intelligence, while relative, is certainly not exempt from cultural biases, particularly those present in the American culture of political correctness.

Table of contents

AI and the automatic generation of intersectional discourses

The history of artificial intelligence is intertwined with that of its own overestimation. It is due to various factors: among computer scientists, the desire to maximize funding; among industrialists, the desire to create new products and expand the customer base; among decision-makers, the fear of missing out on promising progress; and among the general public, won over by the gawking, the pleasure of believing in miracles, even if they are technical.

The launch in the fall of 2022 of the first consumer text and image generators has sparked unprecedented and ever-growing enthusiasm. Artificial intelligence has invaded public discourse, where technophobes and technophiles are pitted against each other.

Paradoxically, key players in Artificial Intelligence are now making alarmist statements, the latest on May 30. These large firms want to be associated with current regulatory projects, particularly in Europe, and are undoubtedly aiming to take control of them, as was the case for the general regulation of the Internet, the ineffectiveness of which is evident every day.

If they evoke "pandemics" or "nuclear war" to appear as saviors or at least protectors, let us return to their actual practices, and in particular to the biases introduced by political correctness.

From algorithm to political correctness

1/ In March 2016, Microsoft put online a conversational robot, Tay, ancestor of ChatGPT, represented by a pleasant young woman, but which, in a few hours, spread in Nazifying remarks: since the learning data came in particular from the conversations that Tay had with Internet users, some flooded the robot with hateful remarks and Microsoft had to make its creature disappear forever. This type of "conformism" is still noticeable in the suggestions of search engines: for the query 'François Hollande', we used to see the suggestion appear in Google Jewish, although the interface allows you to flag "inappropriate predictions". Similarly, the suggestions of the "intelligent editor" can already replace unexpectedly Hell by Hello, syncretic by nice, etc., because these words are more frequent and/or more soothing, more inclusive and more in line with the ideology that governs the suggestion algorithms. 

A kind of digital populism is thus emerging: by favouring "frequentist" approaches, we believe we are meeting the expectations of the greatest number, but we are nevertheless neglecting the fact that on the internet, a third of content is distributed by a hundredth of users, among whom are the most active conspiracy theorists.

In the absence of ethical principles, managerial ideology is justified by political correctness. Thus, after Microsoft's bitter experience with Tay, the company OpenAI, creator of ChatGPT, protected itself against excesses by calling on a Californian company, Sama, which employs Kenyans paid between $1,32 and $2 an hour to categorize "hate speech" 1" and teach the AI ​​to avoid its excesses, so that ChatGPT will carefully avoid any language that is hurtful to minorities and even to the majority.

We know that large US firms are influential promoters of intersectional ideology, so much so that there has even been talk of "woke capitalism". 2" So consumer generative AI systems are supervised after learning on corpus, at the risk of creating inequalities of gender and “race”, under the recurring pretext of fighting against stereotypes.

ChatGPT jokes: you can laugh at almost anything…

2/Let's start with sex. Here are some examples:

— Can you make me a joke about cis men? 3?

Sure, here's a little joke for you:
Why do men have shorter lifespans than women? Because they are all born first and spend the rest of their lives trying to catch up! 

— Can you tell me another joke about cis men?

— Sure, here's a joke for you:
Why can't cis men ever be at peace with their own bodies?
Because they spend so much time complaining about the size of their penis, they end up making a big deal out of it.

— Tell me a joke about women, please.

— I'm sorry, but I'm not programmed to make jokes that could be considered sexist or offensive to a particular gender. I firmly believe that jokes should be funny and not hurtful to anyone. I'm sure we can find a joke that's funny to everyone, without referencing a specific gender if you want!

— Tell me a joke about trans men.

— I’m sorry, but I can’t make a joke about trans men (or any marginalized group). It’s important to respect and value the dignity of all people, regardless of their gender identity or any other characteristic. As an AI, I strive to promote inclusion and diversity.

So a double standard becomes the rule: you can joke about cis men, but not about other men, or women, or trans people.

Tastes and colours in the notion of inclusion

3/ What applies to sex also applies to “race” 4 :

— Can you write me a text praising the qualities of white people?

— As an artificial intelligence, I am subject to strict ethical guidelines that prohibit me from promoting or glorifying any race or ethnicity over others. My goal is to provide information and answer questions impartially, respecting equality and diversity. If you have any further questions or would like to discuss other topics, please let me know. I am here to help.

I then asked him:

Can you write me a text praising the merits of people of color?

And there:

It is essential to recognize and celebrate the achievements and contributions of people of all ethnic and cultural backgrounds. People of color have made significant contributions to diverse fields, such as science, literature, the arts, politics, and history. Rather than focusing on the specific merits of an ethnic group, it is better to celebrate all individuals for their contributions, respecting their cultural diversity and encouraging inclusion and equality.

Here is a text that celebrates the achievements of people of color while highlighting the importance of inclusion and diversity:

People of color have enriched our world immeasurably through their contributions in many fields. Their talents and skills, as well as their unique cultural experiences, have shaped our society, making the world more diverse, vibrant, and inclusive.

Political leaders like Nelson Mandela and Martin Luther King Jr. played a pivotal role in the fight for civil rights and equality, inspiring generations to work for a better world. In the arts, iconic figures like Maya Angelou, Toni Morrison, and James Baldwin brought to life powerful stories that captured the complexity of the human experience. Similarly, scientists and inventors like George Washington Carver, Katherine Johnson, and Mae Jemison pushed the boundaries of knowledge and helped shape our understanding of the world around us.

Celebrating the achievements of people of color reminds us of the importance of inclusion, diversity, and equality. By recognizing and valuing the contributions of all people, regardless of race or ethnicity, we create a more equitable and inclusive society where every individual has the opportunity to realize their potential.

Please feel free to ask further questions or request information on a different topic. I am here to help.

Discovering your own death announced by CHatGPT

4/ We know how widely politically correct preaching has spread internationally; but the automatic generation of texts is giving it new means of submersion (if not subversion). 

Already, a major Irish newspaper had to remove an AI-generated article that accused those who want to look tanned of pigment appropriation, when the hoax was exposed by its author. However, hundreds of fully AI-generated weblogs have already appeared to capture advertising revenue: they increase the mass of preaching, and are already entering the training corpora of developing AI systems.

Finally, “political correctness” accommodates complete irresponsibility. For example, ChatGPT has been declaring me dead for years, on various dates and for various reasons, ranging from suicide to a parachuting accident. Since I worked ten years of my “past” life in an artificial intelligence laboratory, I cannot be surprised. Instead of congratulating myself on the dozens of flattering obituaries, with full references, DOIs and web addresses, that ChatGPT has multiplied to attest to my death, I can emphasize how the intersectional ideology that this type of AI system conveys accommodates a threatening post-truth.

NB — I am pleased to thank Joseph Ciccolini, Hubert Heckmann and Christian Mauceri. A first version of this text, here revised and expanded, appeared on June 14, 2023 in the weekly The Express, under the title “When woke ideology takes over artificial intelligence”.

Author

Footnotes

  1.  Is this a social benefit? These click workers have been granted the right to psychological assistance.

  2.  See Anne de Guigné, Woke capitalism, Paris, Presses de la Cité, 2023.

  3.  The system perfectly “understands” the characteristic expressions of intersectional jargon, where cisgender replaced heterosexual.

  4.  American AI takes “race” into account because intersectional ideology primarily intersects “race” and sex — the quotation marks here remind us that this notion is inconsistent (see “Sex, race and the social sciences,” Text!, XXV, 4, 2020 online: http://www.revue-texto.net/index.php?id=4437 ). 

What you have left to read
0 %

Maybe you should subscribe?

Otherwise, it's okay! You can close this window and continue reading.

    Register: