Human Legitimacy

👤 CQ Engine

Was the decision actually informed — or was it a rubber stamp?

A sealed authorization record looks identical whether the human spent fifteen minutes reviewing or two seconds clicking approve. The CQ Engine solves this by computing a Governance Quality Score (GQS) from behavioral telemetry captured during the review process.

Scoring Factors

Review Duration

Scored on a logarithmic curve with diminishing returns, so the score can't be gamed by simply leaving a tab open.
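One way a logarithmic duration curve can work, as a minimal sketch. The shape (`math.log1p`), the scale constant, and the saturation point are illustrative assumptions, not the engine's actual parameters:

```python
import math

def duration_score(seconds: float,
                   scale_s: float = 120.0,       # assumed curve steepness
                   saturation_s: float = 900.0   # assumed time for full credit
                   ) -> float:
    """Map review time to a 0-100 score on a logarithmic curve.

    Diminishing returns: each additional minute adds less than the
    last, so idling on an open tab yields little extra credit.
    """
    if seconds <= 0:
        return 0.0
    raw = math.log1p(seconds / scale_s)
    cap = math.log1p(saturation_s / scale_s)
    return min(100.0, 100.0 * raw / cap)
```

Because the curve flattens, the gap between a 2-minute and a 4-minute review is larger than the gap between a 4-minute and a 6-minute review.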

Document Coverage

Proportion of referenced documents actually reviewed.

Scroll Depth

Percentage of content actually viewed.

Active Interactions

Clicks, highlights, annotations, field edits.

Cross-Platform Comparisons

Whether outputs from multiple AI platforms were compared.

Anomaly Resolution

Whether pre-checkpoint anomalies were addressed.

Weighted GQS

A 0-100 score computed with configurable per-factor weights. Any score below 35 is graded RUBBER_STAMP and the seal is blocked.

Grading

GOVERNANCE_VERIFIED (85+) · GOVERNANCE_ADEQUATE (60+) · GOVERNANCE_WEAK (35+) · RUBBER_STAMP (<35). The grade, all factor scores, and an evidence hash are embedded in the receipt.
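The weighted score and grading above can be sketched as follows. The thresholds (85 / 60 / 35) come from the grading scale; the factor names and the specific default weights are illustrative assumptions, standing in for whatever configuration the engine actually ships with:

```python
from typing import Dict

# Assumed example weights -- the real weights are configurable and
# these values are NOT the product's defaults.
DEFAULT_WEIGHTS: Dict[str, float] = {
    "review_duration":            0.25,
    "document_coverage":          0.20,
    "scroll_depth":               0.15,
    "active_interactions":        0.15,
    "cross_platform_comparisons": 0.15,
    "anomaly_resolution":         0.10,
}

def gqs(factor_scores: Dict[str, float],
        weights: Dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of per-factor 0-100 scores; missing factors score 0."""
    total_w = sum(weights.values())
    return sum(w * factor_scores.get(name, 0.0)
               for name, w in weights.items()) / total_w

def grade(score: float) -> str:
    """Map a GQS to its grade using the 85 / 60 / 35 thresholds."""
    if score >= 85:
        return "GOVERNANCE_VERIFIED"
    if score >= 60:
        return "GOVERNANCE_ADEQUATE"
    if score >= 35:
        return "GOVERNANCE_WEAK"
    return "RUBBER_STAMP"  # seal blocked
```

A reviewer who scores perfectly on every factor lands at 100 (GOVERNANCE_VERIFIED); one who only lets the timer run while touching nothing else is pulled below 35 by the zero-scored factors and is flagged RUBBER_STAMP.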
