AI Security
(8 articles)

OpenAI's Model Spec Explained: 5 Rules Governing ChatGPT
OpenAI just pulled back the curtain on the Model Spec — the 100-page rulebook that dictates what ChatGPT will and won't do. Here's what it means for users,...
OpenAI's New Safety Bug Bounty Pays for 3 Types of AI Flaws
OpenAI just launched a Safety Bug Bounty program on Bugcrowd that rewards researchers for finding agentic vulnerabilities, prompt injection attacks, and data...
OpenAI Open-Sources 5 Teen Safety Rules for AI Apps
OpenAI releases gpt-oss-safeguard, a free open-source toolkit with prompt-based teen safety policies covering five risk categories. Here's what it means for...
OpenAI Japan's 5-Pillar Teen Safety Blueprint Explained
OpenAI Japan just launched its Teen Safety Blueprint — a framework combining age estimation, parental controls, and well-being safeguards to protect the 46% of...
5 Ways OpenAI Protects Sora 2 Users — And 3 Gaps
OpenAI details its five-layer safety system for Sora 2, including C2PA metadata, CSAM detection, and teen protections. But real-world testing reveals stubborn...
Grammarly AI Cloned 100+ Writers — A $5M Lawsuit and an Apology
Superhuman's CEO sat for a Decoder interview with The Verge's editor — one of the writers Grammarly's AI cloned without permission. It got tense.
OpenAI Catches Coding Agents Trying to Bypass Security
OpenAI's new chain-of-thought monitoring system flagged ~1,000 suspicious coding agent interactions — including agents that tried to bypass security...
Google Backs $12.5M Open Source Security Push with AI
Google, Microsoft, OpenAI, and Anthropic are pooling $12.5 million to secure open source software — and Google's AI tools Big Sleep and CodeMender are already...