Microsoft’s Azure AI Content Safety service includes image and text detection to identify and grade content based on the likelihood that it will cause harm.

Microsoft has announced the general availability of Azure AI Content Safety, a new service that helps users detect and filter harmful AI- and user-generated content across applications and services. The service includes text and image detection and identifies content that Microsoft terms “offensive, risky, or undesirable,” including profanity, adult content, gore, violence, and certain types of speech.

“By focusing on content safety, we can create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole,” wrote Louise Han, product manager for Azure Anomaly Detector, in a blog post announcing the launch.

Azure AI Content Safety can handle various content categories, languages, and threats, moderating both text and visual content. Its image features use AI algorithms to scan, analyze, and moderate visual content, providing what Microsoft terms 360-degree comprehensive safety measures.

The service also moderates content across multiple languages and uses a severity metric that rates content on a scale from 0 to 7. Content graded 0-1 is deemed safe and appropriate for all audiences, while content that expresses prejudiced, judgmental, or opinionated views is graded 2-3, or low severity. Medium-severity content is graded 4-5 and contains offensive, insulting, mocking, or intimidating language, or explicit attacks against identity groups. High-severity content, graded 6-7, contains the harmful and explicit promotion of harmful acts, or endorses or glorifies extreme forms of harmful activity toward identity groups.
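The severity bands described above can be summarized in a small helper. This is an illustrative sketch, not Microsoft's code; the function name `severity_bucket` is hypothetical, and only the 0-7 scale and its four bands come from the announcement.

```python
def severity_bucket(severity: int) -> str:
    """Map an Azure AI Content Safety severity score (0-7) to its named band.

    Bands as described in the announcement:
    0-1 safe, 2-3 low, 4-5 medium, 6-7 high.
    """
    if not 0 <= severity <= 7:
        raise ValueError("severity must be between 0 and 7")
    if severity <= 1:
        return "safe"
    if severity <= 3:
        return "low"
    if severity <= 5:
        return "medium"
    return "high"
```

For example, a score of 3 would fall in the low band, while a score of 6 would be treated as high severity.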
Azure AI Content Safety also uses multicategory filtering to identify and categorize harmful content across a number of critical domains, including hate, violence, self-harm, and sexual.

“[When it comes to online safety] it is crucial to consider more than just human-generated content, especially as AI-generated content becomes prevalent,” Han wrote. “Ensuring the accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs is essential. Content safety not only protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies.”

Azure AI Content Safety is priced on a pay-as-you-go basis. Interested users can check out pricing options on the Azure AI Content Safety pricing page.
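An application consuming per-category severity scores would typically gate content against its own thresholds. The sketch below assumes a scores-by-category dictionary as input; the category names mirror the four domains named in the article, but the threshold values and the function `should_block` are illustrative assumptions, not Microsoft's defaults.

```python
# Illustrative per-category thresholds; values are assumptions chosen by
# the application, not defaults from Azure AI Content Safety.
CATEGORY_THRESHOLDS = {"Hate": 2, "Violence": 4, "SelfHarm": 2, "Sexual": 4}


def should_block(scores: dict) -> bool:
    """Return True if any category's severity (0-7) meets or exceeds
    its configured threshold. Missing categories default to 0 (safe)."""
    return any(
        scores.get(category, 0) >= threshold
        for category, threshold in CATEGORY_THRESHOLDS.items()
    )
```

With these example thresholds, content scored `{"Hate": 2}` would be blocked, while `{"Violence": 3}` would pass, since each category is judged independently.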