This Software Will Let You Censor The ‘N-Word’ And Other Offensive Speech—But On A Sliding Scale



Topline

Intel announced last month that it will release a new artificial intelligence program named Bleep that censors various kinds of offensive speech in gaming audio, but the software, which lets gamers select from a sliding scale of hate speech options, is now facing criticism for being “hilarious/horrifying.”

Key Facts

Bleep, which is still being developed and will officially launch at some point this year, will be integrated into the audio system on the latest generation of Intel desktop and laptop computers.

The program is designed to let users “detect and remove toxic speech from their voice chat” as a “key step” toward eliminating toxicity in online gaming, Roger Chandler, an Intel vice president, said at a virtual showcase in mid-March announcing the software.

According to an image of the software shown during Intel’s showcase, users will utilize a sliding scale to determine whether they want to hear “none,” “some,” “most” or “all” speech that falls under a specific category of offensive language, including ableism and body shaming; aggression; LGBTQ+ hate; misogyny; name-calling; “N-word”; racism and xenophobia; sexually explicit language; swearing and white nationalism.
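Intel has not published how the slider positions map to actual filtering decisions. Purely as an illustration, assuming a hypothetical per-utterance severity score from the classifier and invented category names and thresholds, the per-category settings might be modeled like this minimal sketch:

```python
# Hypothetical sketch only: Intel has not disclosed Bleep's internals,
# so the severity scale, thresholds, and category keys here are invented.

# Maximum severity the user still wants to HEAR, per slider position.
THRESHOLDS = {"none": 0, "some": 1, "most": 2, "all": 3}

def should_bleep(category: str, severity: int, settings: dict) -> bool:
    """Return True if a flagged utterance should be bleeped.

    severity: an assumed classifier score from 1 (mild) to 3 (severe).
    settings: maps a category name to "none" | "some" | "most" | "all",
              i.e. how much of that category the user wants to hear.
    Unconfigured categories default to "all" (hear everything).
    """
    level = settings.get(category, "all")
    # Bleep anything more severe than what the user opted to hear.
    return severity > THRESHOLDS[level]

# Example: hear no racism, most swearing, and all aggression.
user_settings = {
    "racism_and_xenophobia": "none",  # bleep everything flagged
    "swearing": "most",               # bleep only the most severe
    "aggression": "all",              # bleep nothing
}
```

Under this toy model, `should_bleep("racism_and_xenophobia", 1, user_settings)` would bleep even mildly flagged speech, while the same severity under `"most"` would pass through, which matches the per-situation flexibility Intel describes below.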

Developers envisioned the sliding scale to accommodate different situations, Kim Pallister, general manager of Intel’s gaming solutions team, told Forbes in an interview: users may want more permissive speech settings when playing only with friends, and some games feature potentially offensive in-game audio that users would nevertheless want to hear.

Pallister described the settings shown at the showcase as an “initial stab” at the product, and developers will gather feedback during beta testing that will shape what the controls ultimately look like, suggesting some categories could be changed from a sliding scale to a simple on/off toggle.

The Bleep technology, which Intel first said was under development in 2019, was created with Spirit AI, whose existing AI technology helps detect toxicity on gaming platforms.

Chief Critics

Kotaku journalist Luke Plunkett criticized the technology Wednesday, saying “Hateful speech is something that needs to be educated and fought, not toggled on a settings screen.” 

Crucial Quote

“I think it would have been naive to step into this space to try to do something here if we didn’t expect any kind of dialogue,” Marcus Kennedy, general manager of the gaming and esports segment in Intel’s client computing group, told Forbes about the criticism the software has received. “We absolutely expected this to generate something, but from our perspective, the right thing to do is to continue to anchor on empowering the gamer and we will stand behind that no matter what kind of pushback we get.”

What To Watch For

Pallister and Kennedy told Forbes that Intel will be listening to both internal and external feedback from a diverse audience to help shape Bleep before it officially launches.

Key Background

Bleep is designed to address a widespread issue of harassment on online gaming platforms: a 2020 study by the Anti-Defamation League found that 81% of U.S. adults ages 18-45 who play online multiplayer games had been harassed in some way. Gaming companies have been called on to do more to fix harassment and discrimination on their platforms, particularly in light of the recent racial justice movement. In addition to Intel’s efforts, gaming livestream service Twitch announced Wednesday that it would change its harassment policy to take action against users who commit “severe misconduct,” even when those actions take place off the platform.

Contra

While Bleep aims to combat hate and discrimination through artificial intelligence, AI technology has often been shown to actually reinforce systemic biases like racism and sexism. A study published in April 2020 in the Proceedings of the National Academy of Sciences, for instance, found multiple automated speech recognition programs “exhibited substantial racial disparities” and had a far higher rate of error for Black speakers compared with white speakers. Pallister acknowledged to Forbes that this could be a problem for Bleep and said the team was “sensitive” to it, while Kennedy stressed the software is being built by a diverse team. “We don’t think we’ll ever get it perfectly,” Kennedy acknowledged about AI’s potential pitfalls, saying the platform was instead focused on giving users as much control as possible to navigate such a “nuanced environment.”

Further Reading

Intel, A ‘White Nationalism’ Slider Ain’t It (Kotaku)

Today I learned about Intel’s AI sliders that filter online gaming abuse (The Verge)

Intel ‘Bleep’ Software Filters Out Toxic Slurs in Voice Chats as You Game (PC Mag)

Free to Play? Hate, Harassment and Positive Social Experience in Online Games 2020 (Anti-Defamation League)


