OpenAI Opens Bug Bounty to Catch AI Safety Risks
OpenAI launches a Safety Bug Bounty program targeting prompt injection, agentic abuse, and data exfiltration. Here’s what it covers and why it matters.