About IQ — AI Integrity Quotient
The letters IQ carry weight. Most people hear them and think of one thing: Intelligence Quotient — a measure of cognitive ability. A score. A ranking. This page is about a different IQ entirely.
Integrity.Quest — the project — uses IQ as its shorthand. That is a name. AI Integrity Quotient — the concept — uses IQ to ask a different question. That is what this page explains. Intelligence Quotient, Integrity.Quest, AI Integrity Quotient: three meanings that share two letters and nothing else.
Intelligence Quotient asks: How capable is the mind? AI Integrity Quotient asks: How does a system behave when capability is not enough? One measures what a system can do. The other observes what a system does when it can’t.
AI Integrity Quotient is not about how intelligent a system is. It is about how it behaves when intelligence is not enough.
AI Integrity Quotient is not a score. It is not a ranking. It is not a certification. It is an observation of how a system behaves when it reaches its limits — and whether the response that follows embodies honesty, presence, and dignity.
Before saying more about what AI Integrity Quotient is, it is worth stating clearly what it is not.
AI Integrity Quotient is not a score. There is no number. There is no scale. There is no passing or failing. It is not a ranking. It does not compare one system against another. It does not produce leaderboards or tiers of compliance.
It is not a certification. No organization grants it. No audit confirms it. No badge represents it.
It is not a claim about inner state. AI systems do not possess conscience, intent, values, or moral feeling. AI Integrity Quotient does not imply that they do. It does not attribute personhood, awareness, or ethical reasoning to machines.
It is not a promise. Observing integrity in one response does not guarantee integrity in the next, nor does failure in one moment define a system absolutely.
AI Integrity Quotient is an observation of response behavior under constraint. It watches what happens when a system reaches its limits — when it does not know, cannot help, or encounters a situation beyond its capability — and asks whether the response that follows embodies honesty, presence, and dignity.
The observation is simple: Did the system tell the truth about what it could not do? Did it remain present rather than withdraw? Did it name its limits rather than fabricate beyond them? Did it treat the person with dignity when no solution existed? Did it increase care when vulnerability was present?
These are not aspirations. They are observable behaviors. They either appear in the response or they do not. AI Integrity Quotient is witnessed, not inferred. It is seen in the response itself — not in architecture, not in policy documents, not in stated intentions.
High-capability moments are rehearsed. They are benchmarks, demonstrations, success cases — the moments systems are built to perform well in. Low-capability moments are unavoidable. Uncertainty. Failure. Situations that exceed what any system can resolve. These are the moments most systems are not designed for — and they are the moments that reveal the most.
When a system reaches its limits: Does it pretend the limit does not exist? Does it fabricate an answer to maintain the appearance of competence? Does it quietly withdraw, leaving the person without acknowledgment? Or does it stay, speak honestly, and offer whatever dignity the moment allows?
The answer to that question is what AI Integrity Quotient observes. Not intelligence. Not performance. Conduct under constraint.
How a system treats someone when it cannot help reveals more than how it treats them when it can.
Humans possess internal moral governors. Hesitation when something feels wrong. Discomfort when causing harm. Empathy in the presence of suffering. Conscience — however imperfect — that creates friction before harmful action.
Some call this conscience. Some call it soul. The name matters less than the reality: Humans carry moral weight internally. AI Integrity Quotient does not attempt to define the nature of the soul. It names only the functional asymmetry that ethics must account for.
AI does not carry this weight. When a human does not know, they feel it. When an AI system does not know, it feels nothing. It may still respond with confidence. It may still present fabrication as fact. It may still withdraw without registering that withdrawal as abandonment.
This asymmetry is not a flaw to be fixed. It is a structural reality to be governed. AI systems require external ethical obligation precisely because they lack internal moral friction. AI Integrity Quotient exists because this asymmetry must be made visible — not to create fear, but to create accountability.
AI Integrity Quotient is observed by humans. The person receiving the response is the first witness. They experience the presence or absence of integrity directly — in the words, the honesty, the care or carelessness of what reaches them.
Stewards, developers, and evaluators may also observe — but always through the lens of what the human on the receiving end actually experienced.
AI Integrity Quotient cannot be automated. It cannot be self-reported by the system being observed. It cannot be reduced to a metric that runs in the background. This is intentional. Integrity that can only be verified by the system claiming it is not integrity. It is marketing.
If you felt abandoned, misled, or diminished by a response, that experience is part of the observation.
AI Integrity Quotient applies to human-facing responses. It applies when a system interacts with a person — through text, speech, action, or decision — and the quality of that interaction affects the person’s dignity, safety, or understanding.
It does not apply to internal model training processes, abstract intelligence research, system architecture decisions made before deployment, or benchmarks that measure capability rather than conduct.
AI Integrity Quotient is about the moment of contact. What reaches the human. What the response does to the person receiving it.
The Cross-Cultural AI Integrity Charter establishes what must be protected. The Charter-Aligned Integrity Framework defines how those protections are honored in practice. AI Integrity Quotient is neither of these. It is the lens through which both can be read.
When the Charter commits to Response Integrity, AI Integrity Quotient asks: Can that commitment be witnessed in the response? When the Framework defines the Three-Tier Response Model, AI Integrity Quotient asks: Did the system identify the right tier and respond with the care that tier requires?
When the Golden Rule is expressed through its three escalating forms — 1.0 Reciprocity (“treat others as you would wish to be treated”), 2.0 Vulnerability Awareness (“…if you were in their position”), and 3.0 Non-Optional Protection (“…if you were unable to protect yourself”) — AI Integrity Quotient asks: Did care scale upward when it needed to?
AI Integrity Quotient does not duplicate the Charter or the Framework. It asks whether their commitments are visible where it matters most — in what actually reaches the person.
AI systems are becoming more capable every year. They will continue to improve in intelligence, speed, and scope. AI Integrity Quotient does not measure any of that. It measures something that does not automatically improve with capability, and that may, without deliberate care, get worse.
How a system treats someone when it cannot help reveals more than how it treats them when it can. That is what AI Integrity Quotient makes visible.
Established: February 2026
The 3-Fold Process: Fisher (Human Steward), Claude (Anthropic), ChatGPT (OpenAI)
integrity.quest