Local PII Redaction (April 2026)
OpenAI released the Privacy Filter, an open-weight, 1.5B-parameter model designed to detect and redact Personally Identifiable Information (PII) before data leaves your infrastructure.
Enterprise Architecture
- User submits raw text containing sensitive data.
- Local Privacy Filter scans and replaces PII with tokens (e.g., [NAME_1], [CREDIT_CARD]).
- Sanitized text is sent to the OpenAI API for processing.
- API returns results. Local system maps tokens back to original PII.
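The redact → call → restore contract above can be sketched in a few lines of Python. This is illustrative only: the real Privacy Filter is a model, not a set of regexes, and the pattern names and helper functions below (`redact`, `restore`, `PII_PATTERNS`) are assumptions for the sketch, not part of any published interface.

```python
import re

# Hypothetical stand-in for the Privacy Filter's detection step.
# Real detection would come from the model; regexes here keep the
# example self-contained.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
}

def redact(text):
    """Replace each PII match with a numbered token; return (sanitized, mapping)."""
    mapping = {}
    counters = {}
    for label, pattern in PII_PATTERNS.items():
        def _sub(match, label=label):
            counters[label] = counters.get(label, 0) + 1
            token = f"[{label}_{counters[label]}]"
            mapping[token] = match.group(0)  # remember original locally
            return token
        text = pattern.sub(_sub, text)
    return text, mapping

def restore(text, mapping):
    """Map tokens in the API response back to the original PII."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

raw = "Email jane@example.com, card 4111 1111 1111 1111"
sanitized, mapping = redact(raw)
# sanitized now reads: Email [EMAIL_1], card [CREDIT_CARD_1]
# Only `sanitized` would be sent to the API; `mapping` never leaves
# the local system, and restore() is applied to the API's response.
```

The key design point is that the token-to-PII mapping exists only in local memory, so the remote service sees structure (a name, a card number) without the values themselves.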
Data Retention Policies
| Plan | Data Used for Training? | Retention |
| --- | --- | --- |
| API (default) | No | 30 days for abuse monitoring |
| API (zero retention) | No | 0 days — nothing stored |
| ChatGPT Free | Yes (opt-out available) | Varies |
| ChatGPT Enterprise | No | Configurable |
Compliance Certifications
- SOC 2 Type II: Enterprise security controls verified
- GDPR: EU data processing agreements available
- HIPAA: BAA available for healthcare customers
🔒 Zero-Trust Pattern: combining the Privacy Filter with a Zero Retention API means sensitive data never reaches OpenAI's servers in readable form, which can help satisfy even strict compliance requirements.