AI Policy
Human judgement comes first.
LRARE is not an AI-first business. Our default is non-AI product design. This policy sets strict conditions for any future AI-enabled capability.
Policy position
We do not position AI as a core product promise. We do not ship AI features for their marketing value. Any AI use must pass legal, security, and product governance review before release.
No AI system may make binding or irreversible decisions about individuals. No AI output may be treated as authoritative without qualified human review.
If a proposed AI capability does not create clear, measurable user benefit, it is rejected.
Default state
No deployment without approval.
Every AI feature requires explicit approval, documented controls, and a rollback plan.
Hard limits
We do not train models on personal data without a clear legal basis and explicit purpose.
We do not permit opaque scoring systems that cannot be explained or challenged.
We do not allow unsupervised use in high-stakes or regulated workflows.
We do not use AI outputs as legal advice.
We do not keep any AI capability in production without ongoing monitoring and controls.
Data, security, and accountability
Personal and sensitive data are protected by strict access controls, logging, and data minimisation. Any AI-related processing is subject to additional security review.
We maintain auditable records for any approved AI capability, including owner, purpose, risks, mitigation controls, and retirement criteria.
Accountability remains with LRARE. Responsibility is never delegated to a model or vendor.
Commitment
If controls are not met, we do not deploy.
This policy is intentionally strict. We review it regularly and tighten it further as needed.