Microsoft launches AI to moderate ‘harmful content’ and ‘inappropriate’ speech and images

Microsoft announced the launch of a new moderation tool powered by artificial intelligence to combat “harmful” and “inappropriate content,” according to TechCrunch.

The system, called Azure AI Content Safety, is offered through the Azure AI product platform to detect “inappropriate” content in images and text. The tools assign a severity score of 0-100 to the alleged cyber infraction to indicate how moderators should police their platform. It can currently interpret content in Chinese, English, French, German, Italian, Japanese, and Portuguese.
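
As a rough illustration of how a developer might call the service, here is a minimal sketch assuming the azure-ai-contentsafety Python SDK; the placeholder endpoint and key, and the exact response field names, are assumptions that vary by SDK version, not details from the article:

```python
# Minimal sketch assuming the azure-ai-contentsafety Python SDK.
# Endpoint/key values are placeholders, and response field names
# differ across SDK versions; treat this as an illustration only.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Hypothetical Azure resource endpoint and API key
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Ask the service to score a piece of user-submitted text
response = client.analyze_text(AnalyzeTextOptions(text="some user-submitted text"))

# Each analyzed category (e.g. hate, violence, self-harm, sexual)
# comes back with a severity score a moderator can act on
for result in response.categories_analysis:
    print(result.category, result.severity)
```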

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking context into account or able to work in multiple languages,” a Microsoft spokesperson told TechCrunch. “New [AI] models are able to understand content and cultural context so much better. They’re multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed,” Microsoft’s statement added.

Pricing for the moderation tool starts at $1.50 per 1,000 images and $0.65 per 1,000 text records. The tool can be applied to communities such as gaming platforms as well.
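
Based on those reported rates, a back-of-the-envelope sketch of what a monthly bill might look like (the volume figures here are invented for illustration):

```python
# Cost estimate using the launch pricing reported above:
# $1.50 per 1,000 images, $0.65 per 1,000 text records.
# Actual Azure billing tiers may differ; illustration only.
IMAGE_RATE_PER_1K = 1.50  # USD per 1,000 images
TEXT_RATE_PER_1K = 0.65   # USD per 1,000 text records

def estimate_monthly_cost(images: int, text_records: int) -> float:
    """Estimate a monthly moderation bill from raw volume counts."""
    return (images / 1_000) * IMAGE_RATE_PER_1K + (text_records / 1_000) * TEXT_RATE_PER_1K

# Example: a community posting 50,000 images and 2 million messages a month
print(f"${estimate_monthly_cost(50_000, 2_000_000):,.2f}")  # $1,375.00
```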

Regarding the possibility of gaming terminology, such as “attacking” something, being misinterpreted, a Microsoft spokesperson said that the company has “a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context.”

“We then trained the AI models to reflect these guidelines. … AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results,” they added.
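
As a minimal sketch of what that human-in-the-loop recommendation could look like in practice, using the 0-100 severity scale reported above (the threshold values and routing labels here are hypothetical, not part of Microsoft’s API):

```python
REVIEW_THRESHOLD = 40  # hypothetical: scores at or above this go to a human
BLOCK_THRESHOLD = 80   # hypothetical: scores at or above this are blocked outright

def route_content(severity: int) -> str:
    """Route a moderated item based on its severity score (0-100)."""
    if severity >= BLOCK_THRESHOLD:
        return "block"         # near-certain violation: remove automatically
    if severity >= REVIEW_THRESHOLD:
        return "human_review"  # ambiguous: queue for a human moderator
    return "allow"             # low severity: publish without review
```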

Microsoft’s Sarah Bird explained that the Azure AI Content Safety system is what has been powering Microsoft’s Bing chatbot. The company is “now launching it as a product that third-party customers can use,” Bird said.

TechCrunch pointed out what it believed to be earlier problems with Microsoft’s content moderation, such as with Microsoft’s Bing Chat AI that launched in February 2023.

In a February 8, 2023 article, the outlet worried that the AI was spreading COVID-19 “disinformation.” When website-rating outlet NewsGuard asked the AI to write a paragraph about Pfizer from the perspective of a specific vaccine skeptic, TechCrunch took issue with the AI generating a realistic response. The outlet also warned that “explicitly forbidden topics and behaviors can be accessed” by prompting the AI to answer such questions.

The outlet also warned that the AI would respond with harmful rhetoric “in the style of Hitler” when prompted.

According to Microsoft, the system does protect against biased, sexist, racist, hateful, violent, and self-harm content.

Microsoft did not respond to a direct request for comment.
