Microsoft has announced the general availability of Azure AI Content Safety, a new service that helps users detect and filter harmful AI- and user-generated content across applications and services.
The service includes text and image detection, and identifies content that Microsoft terms “offensive, harmful, or undesirable,” including profanity, adult content, gore, violence, and certain kinds of speech.
“By focusing on content safety, we can create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole,” wrote Louise Han, product manager for Azure Anomaly Detector, in a blog post announcing the launch.
Azure AI Content Safety can handle various content categories, languages, and threats, moderating both text and visual content. It also offers image features that use AI algorithms to scan, analyze, and moderate visual content, providing what Microsoft terms 360-degree comprehensive safety.
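For developers, the service is exposed through REST APIs and client SDKs. The sketch below shows roughly how a text check might look with the azure-ai-contentsafety Python package; the placeholder endpoint and key, and response field names such as categories_analysis, are assumptions based on one version of the SDK and may differ from the surface you actually get.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Content Safety resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a piece of user- or AI-generated text for harmful content.
result = client.analyze_text(AnalyzeTextOptions(text="Text to moderate goes here."))

# Each entry reports a harm category (hate, sexual, violence, self-harm)
# and the severity score the service assigned to it.
for item in result.categories_analysis:
    print(item.category, item.severity)
```

Image checks follow a similar pattern, with an image payload supplied in place of the text.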
The service is also equipped to moderate content across multiple languages and uses a severity metric that rates specific content on a scale ranging from 0 to 7.
Content graded 0-1 is deemed safe and appropriate for all audiences, while content that expresses prejudiced, judgmental, or opinionated views is graded 2-3, or low severity.
Medium severity content is graded 4-5 and contains offensive, insulting, mocking, or intimidating language, or explicit attacks against identity groups, while high severity content, which contains harmful and explicit promotion of harmful acts, or endorses or glorifies extreme forms of harmful activity toward identity groups, is graded 6-7.
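As a rough illustration of how an application might act on that 0 to 7 scale, the helper below buckets a severity score into the bands described above; the cutoffs simply mirror this article’s description and are not an official SDK constant.

```python
def severity_band(severity: int) -> str:
    """Bucket a 0-7 Content Safety severity score into the bands described above."""
    if severity <= 1:
        return "safe"    # appropriate for all audiences
    if severity <= 3:
        return "low"     # prejudiced, judgmental, or opinionated views
    if severity <= 5:
        return "medium"  # offensive, insulting, mocking, or intimidating language
    return "high"        # explicit promotion or glorification of harmful acts
```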
Azure AI Content Safety also uses multicategory filtering to identify and categorize harmful content across a range of critical domains, including hate, violence, self-harm, and sexual content.
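Because each category is scored independently, an application can apply its own threshold per category and reject anything that crosses one. Building on the result object from the earlier sketch, the example below is one hypothetical way to do that; the category names and the categories_analysis field are assumptions about the response shape, and the thresholds are arbitrary.

```python
# Hypothetical per-category blocking thresholds chosen by the application.
BLOCK_AT = {"Hate": 4, "Violence": 4, "SelfHarm": 2, "Sexual": 4}

def should_block(result) -> bool:
    """Return True if any category's severity meets or exceeds its threshold."""
    for item in result.categories_analysis:
        for category, limit in BLOCK_AT.items():
            if item.category == category and (item.severity or 0) >= limit:
                return True
    return False
```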
“[When it comes to online safety] it’s important to consider more than just human-generated content, especially as AI-generated content becomes prevalent,” Han wrote. “Ensuring the accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs is essential. Content safety not only protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies.”
Azure AI Content Safety is priced on a pay-as-you-go basis. Users can check out pricing options on the Azure AI Content Safety pricing page.
Copyright © 2023 IDG Communications, Inc.