
Business News

OpenAI rolls out AI model to detect, redact sensitive data in real time

Upstox

2 min read | Updated on April 23, 2026, 09:51 IST

SUMMARY

OpenAI has launched “Privacy Filter,” an open-weight AI model designed to detect and redact personally identifiable information (PII) in text, addressing growing data privacy concerns.


The model can run locally, allowing sensitive data to be processed without leaving a user’s system, reducing exposure risks.

OpenAI on Thursday released “Privacy Filter”, an open-weight artificial intelligence model designed to detect and redact personally identifiable information (PII) in text, as companies face rising pressure to strengthen data privacy safeguards.


The model is built for developers and enterprises seeking to integrate privacy protections directly into AI systems, including workflows such as data training, logging and content review.

“Today we’re releasing OpenAI Privacy Filter, an open-weight model for detecting and redacting personally identifiable information (PII) in text,” the company said in a statement.

OpenAI said the model can run locally on devices, allowing sensitive data to be filtered without being sent to external servers, reducing exposure risks.

“It can run locally, which means that PII can be masked or redacted without leaving your machine,” the company said.

Unlike traditional rule-based systems that rely on fixed patterns such as email or phone number formats, Privacy Filter uses context-aware language understanding to identify a broader range of personal data in unstructured text.
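To illustrate the limitation the article describes, here is a minimal sketch of a traditional rule-based redactor. The patterns and helper names are illustrative, not part of OpenAI's release: fixed regexes catch well-formed identifiers such as emails and phone numbers, but pass over personal information that only context reveals.

```python
import re

# Rule-based PII detection relies on fixed patterns. These catch
# well-formed identifiers but miss context-dependent personal data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

def redact_rule_based(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Reach Priya at priya.s@example.com or +91 98765 43210."
print(redact_rule_based(sample))
# A phrase like "my neighbour, the surgeon at City Hospital" would pass
# through untouched: identifying it requires context, not patterns.
```

A context-aware model, by contrast, can flag descriptive references to a person even when no fixed pattern matches.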

“Privacy protection in modern AI systems depends on more than pattern matching,” OpenAI said, adding that such systems “often miss more subtle personal information and struggle with context.”

The model processes long documents in a single pass and supports up to 128,000 tokens, enabling use in large-scale, high-throughput environments. It classifies sensitive information across categories including names, addresses, contact details, account numbers and secrets such as passwords or API keys.
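A detector that classifies sensitive spans by category, as described above, pairs naturally with a downstream step that applies the redactions. The sketch below assumes a hypothetical span format of (start, end, category) character offsets; this is an illustration of the workflow, not OpenAI's published interface.

```python
# Hypothetical helper: apply category-labeled spans (as a PII detector
# might emit) to the source text. The (start, end, category) format is
# an assumption for illustration, not OpenAI's API.
def apply_redactions(text: str, spans: list[tuple[int, int, str]]) -> str:
    # Replace spans right to left so earlier offsets stay valid.
    for start, end, category in sorted(spans, reverse=True):
        text = text[:start] + f"[{category}]" + text[end:]
    return text

doc = "Card 4111 1111 1111 1111 belongs to Ana Morales."
spans = [(5, 24, "ACCOUNT_NUMBER"), (36, 47, "NAME")]
print(apply_redactions(doc, spans))
```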

OpenAI said Privacy Filter achieved a 96% F1 score on a standard benchmark for PII masking, rising to about 97% after adjustments for dataset issues identified during evaluation.
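For readers unfamiliar with the metric: F1 is the harmonic mean of precision (the share of flagged spans that are truly PII) and recall (the share of true PII spans that were flagged). The sketch below uses illustrative counts, not OpenAI's evaluation data.

```python
# F1 combines precision and recall into a single score.
# The counts here are illustrative, not from OpenAI's benchmark.
def f1_score(true_positives: int, false_positives: int,
             false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# E.g. 96 correct detections, 4 false alarms, 4 misses:
print(round(f1_score(96, 4, 4), 2))  # → 0.96
```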

The company added the model can be fine-tuned for specific use cases with relatively small datasets, improving accuracy in domain-specific applications.

“With this release, developers can run Privacy Filter in their own environments, fine tune it to their own use cases, and build stronger privacy protections,” OpenAI said.

Privacy Filter is being released under an Apache 2.0 license on platforms including GitHub and Hugging Face.

OpenAI cautioned the tool is not a replacement for compliance or human oversight, particularly in sensitive sectors.

“Privacy Filter is not an anonymization tool, a compliance certification, or a substitute for policy review in high-stakes settings,” the company said.

About The Author

Upstox
Upstox News Desk is a team of journalists who passionately cover stock markets, economy, commodities, latest business trends, and personal finance.
