Tr-AI-wreck: The Privilege Problem in AI Development

“We don’t see things as they are; we see them as we are.” – Anaïs Nin

The Woodstock ’99 festival attempted to revive the hippie spirit of the iconic 1969 festival, but instead, it descended into chaos, with tragic reports of sexual assaults and three deaths. The event earned the label “trainwreck”. This week’s Thursday Thought draws an unlikely parallel between Woodstock ’99 and the burgeoning issue of AI’s privilege problem, warning of an oncoming “Tr-AI-wreck”.

The Ivory Tower

In contrast to the genuine and harmonious atmosphere of the first two Woodstock festivals, Woodstock ’99 took a different turn. The organisers seemed more focused on monetary gains, as evidenced by exorbitant prices for essentials like water and food, while the acts and promoters indulged in comparative luxury within the VIP tent. Inadequate provisions like showers and sanitary facilities added to the attendees’ misery, leading to health issues like “trench mouth”, caused by sewage-contaminated drinking water. In an article in The New York Times, Rage Against the Machine’s Tom Morello wrote that the organisers were “greedy promoters who wrung every cent out of thirsty concertgoers.” The festival became a convergence of heat-exhausted, disillusioned youths feeling the sting of exploitation.

The price gouging and under-investment in basics were only part of the problem.

In the must-see Netflix documentary “Trainwreck: Woodstock ’99”, promoter John Scher admitted he was unfamiliar with the bands in the festival’s lineup. On the surface, that may not seem like a problem, but the schedule consisted of aggressive nu-metal and hard-rock acts like Limp Bizkit, Rage Against the Machine, Metallica and Korn. The energy these bands conjure is a far cry from the peace-promoting vibes of the first two Woodstock festivals. In one scene in the documentary, a young volunteer voices his concern about the acts chosen for the event, but the promoters quickly silence him.

Some commentators even argued that Limp Bizkit incited the riots, with lead singer Fred Durst whipping the crowd into a rage. Undoubtedly, the music fueled the fire (in every sense of the word), but the documentary shows that the anger was well underway before the acts took the stage.

So, what does this have to do with developing AI? Well, think of the Woodstock ’99 promoters as those who create AI without considering multiple worldviews. In their case, the differences were extreme: profits prioritised over people, ignorance of the music choices, and the split between the VIP section and the slums outside. AI development tells a similar tale of privilege and disconnection. This disconnect will undoubtedly lead to many problems. It already has, as we will soon find out. But before we continue, let’s shed some light on the term ‘privilege’.

The Privilege Problem

Privilege is invisible to those who have it. It’s an unearned advantage that a specific group of people enjoy due to their social identity, race, gender, religion, or class. Let me share an innocent story that highlights the situation.

One of my acquaintances has a private jet and generously offers “a lift” to those in need. On several occasions, he provided lifts to a family whose young son needed specialist medical attention in the US. This boy had never travelled by plane before. In a way, he was “born into” an elite worldview of transport.

Thankfully, months later, he recovered, but an amusing phenomenon unfolded. When he embarked on his first-ever family holiday abroad, he went to the airport as before, but to a different departure gate. This time, he waited longer than usual, or so it seemed to him. And then he boarded the plane. Oh dear, it wasn’t pretty!

Amidst a flood of tears, he sobbed, “Mommy, Daddy… why are all these people… on our plane!”

Was he a bad kid, or did he not know any better?

So many of us are “born into” such situations, but the challenge with privilege is that, like many of our biases, it’s invisible. Privilege doesn’t mean people in a dominant group don’t face challenges, but that the nature of their challenges is different from the non-dominant group.

Regarding AI development, privilege is a narrow perspective that accounts only for a privileged worldview. Think of privilege as a blinker, restricting the field of vision when designing AI systems and making them less inclusive and less capable of serving a global audience. The result is unconsciously biased systems that perpetuate societal inequalities on a digital platform. Furthermore, this blinkered view widens the gap between AI and under-represented groups, making the technology less accessible to those who may benefit from it most.


Take, for example, the recent cases of mortgage applicants being denied because they were Black. A 2021 investigation by The Markup found that Black applicants were 80% more likely to be rejected than comparable white applicants, Native American applicants 70% more likely, and Latino applicants 40% more likely. This is a significant problem given that about half of US banks now offer online or app-based loan applications. As organisations of all kinds invest more in AI and algorithmic lending becomes the norm, machine bias must be monitored, but by whom (or what)? Another algorithm? I have to add that even when organisations have the best intentions, the algorithms are modelled on existing data. That data is already biased because history is awash with inequality. The curation of machine training data can make a model’s predictions susceptible to bias. When building models, we must be aware of human biases manifesting in training data so that we can take proactive steps to mitigate their effects.
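
To make that monitoring concrete, here is a minimal sketch in Python of the kind of approval-rate audit a lender could run on its own decisions. The dataset, column names and numbers below are all hypothetical, and the 0.8 cut-off is the familiar “four-fifths rule” heuristic borrowed from employment law, used here purely as an illustration rather than as anything the cases above relied on:

```python
import pandas as pd

# Hypothetical loan decisions: one row per applicant, with a group
# label and the model's approve (1) / deny (0) outcome.
applications = pd.DataFrame({
    "group":    ["White", "White", "White", "White", "Black", "Black",
                 "Black", "Latino", "Latino", "Latino"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 1, 1, 0],
})

# Approval rate per group, and each group's rate relative to the
# best-treated group (the "disparate impact" ratio).
rates = applications.groupby("group")["approved"].mean()
ratios = rates / rates.max()

# The four-fifths rule flags any group whose ratio falls below 0.8.
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rates[group]:.0%}, ratio {ratio:.2f} [{flag}]")
```

A check like this only surfaces the symptom, of course; as the next example shows, the deeper problem usually sits in the data the model learned from.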

I was the MC for the tech stage at the Fifteen Seconds festival in Graz, Austria, where we ran a track called the “Innovation Show Live”. One of our guests was Phaedra Boinodiris, who shared examples of AI coding gone askew even when the companies involved had the best intentions. One example was the COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), a tool used by US courts to estimate a defendant’s risk of reoffending. Analysis by ProPublica revealed that Black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of reoffending, while white defendants were more likely than Black defendants to be incorrectly labelled low-risk.
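
The disparity ProPublica documented is, at its core, a difference in error rates between groups. As a hedged illustration, using invented toy data rather than ProPublica’s, the Python sketch below computes the false positive rate (labelled high-risk but did not reoffend) separately for each group; a large gap between groups is exactly the kind of imbalance that analysis surfaced:

```python
import pandas as pd

# Toy risk-score audit data (invented for illustration):
# predicted_high_risk = the model's label, reoffended = the observed outcome.
records = pd.DataFrame({
    "group":               ["Black"] * 5 + ["White"] * 5,
    "predicted_high_risk": [1, 1, 1, 0, 0, 1, 0, 0, 0, 1],
    "reoffended":          [1, 0, 0, 0, 0, 1, 0, 0, 0, 1],
})

for group, g in records.groupby("group"):
    # False positives: people labelled high-risk who did not reoffend.
    non_reoffenders = g[g["reoffended"] == 0]
    fpr = non_reoffenders["predicted_high_risk"].mean()
    print(f"{group}: false positive rate {fpr:.0%}")
```

It is worth noting that equalising error rates like these across groups is itself a design choice, and one that is known to trade off against other definitions of fairness.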

The data that feeds algorithms and AI is already imbalanced and will perpetuate the issues:

White households have a median net worth 10 times higher than Black households.

Black and Hispanic Americans are underrepresented in leadership positions in the tech industry.

Black and Hispanic workers make up only 5% of the tech workforce.

Only 3% of venture capital-backed companies are led by women.

White job applicants receive more callbacks than equally qualified Black job applicants.

Only 3% of executives in the tech industry are Black.

More than 80% of AI professors are men.

Over 70% of AI researchers are men.

This problem must be nipped in the bud before algorithms grow too complex to disentangle from the biased data they were trained on. AI algorithms can perpetuate and amplify societal biases, or they can help us even them out.

Next week, I will begin adding features for the paid tier. To those who have levelled up, thank you for your kind support. I am going to release Thursday Thought audio versions, narrated over alpha-wave music for focus and concentration. It is an experiment in the true spirit of innovation.
