AI Readiness & Interoperability
Security and Privacy: What Not to Expose
AI access must respect security boundaries and avoid exposing sensitive data.
TL;DR
- AI must never broaden a user's access beyond existing row-level security (RLS) policies.
- Sensitive fields require strict controls: mask or exclude them before they reach the AI.
The problem (in plain terms)
- An AI assistant that queries your data can surface sensitive records unless it is explicitly constrained.
- Security rules are often enforced inconsistently across tools, so every new access path is a new risk.
Why it matters
- Data leaks are high-risk: they carry legal, financial, and reputational consequences.
- Compliance depends on enforcing the same access rules across every access path, including AI.
Symptoms
- AI answers include data the requesting user is not allowed to see.
- Different tools show the same user different access scopes.
Root causes
- Security rules are not applied to AI-generated queries.
- No data classification exists, so nobody knows which fields are sensitive.
What good looks like
- AI queries run under the same RLS policies as the requesting user.
- Sensitive fields are masked or excluded before they reach the AI.
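Masking before exposure can be sketched as a small helper that consults a classification map and defaults to redaction for any unclassified field. The field names, labels, and helper are illustrative assumptions, not part of any specific library:

```python
# Illustrative classification map: field name -> sensitivity level.
CLASSIFICATION = {
    "email": "restricted",     # mask before the AI sees it
    "ssn": "restricted",
    "name": "internal",        # allowed through
    "order_total": "public",
}

def mask_for_ai(row: dict, classification: dict) -> dict:
    """Mask restricted fields in a row before it enters an AI context.

    Unclassified fields are treated as restricted (default-deny), so a
    newly added column cannot leak simply because nobody labeled it.
    """
    safe = {}
    for field, value in row.items():
        level = classification.get(field, "restricted")
        safe[field] = "***REDACTED***" if level == "restricted" else value
    return safe
```

The default-deny branch is the important design choice: a schema change should fail closed, not open.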
How to fix (steps)
- Define a data classification (e.g. public / internal / restricted) and the access rules for each level.
- Run AI queries in the requesting user's security context so RLS applies consistently.
- Audit AI responses for leakage of restricted data.
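The audit step can be sketched as a scan of each AI response for patterns tied to restricted classes. The patterns and function below are illustrative; a real audit would derive its checks from your own data classification rather than a hard-coded list:

```python
import re

# Illustrative leakage patterns, keyed by classification label.
LEAK_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_response(text: str) -> list[str]:
    """Return the classification labels whose patterns appear in `text`.

    An empty list means no known restricted pattern was detected; it is
    evidence, not proof, that nothing leaked.
    """
    return [label for label, pattern in LEAK_PATTERNS.items()
            if pattern.search(text)]
```

Flagged responses can then be blocked, redacted, or routed to review before they reach the user.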
Pitfalls
- Assuming RLS is enforced automatically: a privileged service account typically bypasses it.
- Exposing raw rows when only aggregates are needed.
Checklist
- Data classification complete.
- RLS applied to AI queries.
- Audit process in place.