<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- 
  Internet-Draft: draft-kamimura-scitt-refusal-events-01
  Title: Verifiable AI Refusal Events using SCITT Transparency Logs
  Author: Tokachi Kamimura
  Organization: VeritasChain Standards Organization (VSO)
  Target WG: SCITT (Supply Chain Integrity, Transparency, and Trust)
  
  Revision Notes (-01):
  - Added Grok NCII incident (January 2026) as motivating example
  - Added Regulatory Context subsection (EU AI Act, Korea AI Basic Act)
  - Integrated Evidence Pack concept from CAP-SRP v1.0
  - Added Conformance Levels (Bronze/Silver/Gold) aligned with VAP v1.2
  - Updated SCITT architecture reference (now in RFC editor queue)
  - Updated SCRAPI reference (now in WGLC)
  - Enhanced Completeness Invariant with formal definition
  - Added RiskCategory taxonomy (non-normative)
  - Added Appeal mechanism events (GEN_APPEAL, GEN_APPEAL_RESOLUTION)
  - Added Refusal Receipt for user-side verification
  - Added C2PA integration guidance
  - Added External Anchoring frequency requirements
  - Expanded informative references (EU AI Act, Korea AI Basic Act, CAP-SRP v1.0)
  - Updated CAP-SRP reference to v1.0 specification
  
  Changes from -00:
  - Replaced "Negative Proof" with "Verifiable Refusal Record"
  - Clarified completeness invariant scope (event-semantics, verifier-side)
  - Added explicit statement: TS does not enforce completeness
  - Removed salting as normative requirement
  - Added Conformance subsection
  - Added Motivation subsection
  - Added Future Work section (non-normative)
  - Clarified ERROR semantics and failure modes
  - Strengthened policy-agnostic stance
-->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     docName="draft-kamimura-scitt-refusal-events-01"
     category="info"
     ipr="trust200902"
     submissionType="IETF"
     version="3">

  <front>
    <title abbrev="SCITT-Refusal-Events">
      Verifiable AI Refusal Events using SCITT Transparency Logs
    </title>

    <seriesInfo name="Internet-Draft" 
                value="draft-kamimura-scitt-refusal-events-01"/>

    <author fullname="Tokachi Kamimura" initials="T." surname="Kamimura">
      <organization>VeritasChain Standards Organization</organization>
      <address>
        <postal>
          <country>Japan</country>
        </postal>
        <email>standards@veritaschain.org</email>
        <uri>https://veritaschain.org</uri>
      </address>
    </author>

    <date year="2026" month="January" day="29"/>

    <area>Security</area>
    <workgroup>Supply Chain Integrity, Transparency, and Trust</workgroup>

    <keyword>SCITT</keyword>
    <keyword>AI safety</keyword>
    <keyword>refusal events</keyword>
    <keyword>transparency service</keyword>
    <keyword>audit trail</keyword>
    <keyword>verifiable refusal record</keyword>
    <keyword>content provenance</keyword>
    <keyword>NCII</keyword>
    <keyword>evidence pack</keyword>

    <abstract>
      <t>This document describes a SCITT-based mechanism for creating 
      verifiable records of AI content refusal events. It defines how 
      refusal decisions can be encoded as SCITT Signed Statements, 
      registered with Transparency Services, and verified by third 
      parties using Receipts.</t>
      
      <t>This specification provides auditability of refusal decisions 
      that are logged, not cryptographic proof that no unlogged 
      generation occurred. It does not define content moderation 
      policies, classification criteria, or what AI systems should 
      refuse; it addresses only the audit trail mechanism.</t>
      
      <t>This revision (-01) incorporates lessons from the January 2026 
      Grok NCII incident, aligns with the CAP-SRP v1.0 specification, 
      and addresses emerging regulatory requirements including EU AI Act 
      Article 12/50 and the Korea AI Basic Act.</t>
    </abstract>

    <note removeInRFC="true">
      <name>About This Document</name>
      <t>The latest version of this document, along with implementation
      resources and examples, can be found at <xref target="CAP-SRP"/>.</t>
      <t>Discussion of this document takes place on the SCITT Working
      Group mailing list (scitt@ietf.org).</t>
      <t>The companion specification CAP-SRP v1.0 <xref target="CAP-SRP-SPEC"/>
      provides a complete domain profile for content/creative AI systems.</t>
    </note>
  </front>

  <middle>
    <!-- ===== Section 1: Introduction ===== -->
    <section anchor="introduction">
      <name>Introduction</name>
      
      <t>This document is NOT a content moderation policy. It does not 
      prescribe what AI systems should or should not refuse to generate, 
      nor does it define criteria for classifying requests as harmful. 
      The mechanism described herein is agnostic to the reasons for 
      refusal decisions; it provides only an interoperable format for 
      recording that such decisions occurred. Policy decisions regarding 
      acceptable content remain the domain of AI providers, regulators, 
      and applicable law.</t>

      <section anchor="motivation">
        <name>Motivation</name>
        <t>AI systems capable of generating content increasingly implement 
        safety mechanisms to refuse requests deemed harmful, illegal, or 
        policy-violating. However, these refusal decisions typically leave 
        no verifiable audit trail. When a system refuses to generate content, 
        the event vanishes—there is no receipt, no log entry accessible to 
        external parties, and no mechanism for third-party verification.</t>

        <t>This creates several problems:</t>
        <ul>
          <li>Regulators cannot independently verify that AI providers 
          enforce stated policies</li>
          <li>Providers cannot prove to external auditors that specific 
          requests were refused</li>
          <li>Third parties investigating incidents have no way to 
          establish refusal without trusting provider claims</li>
          <li>The completeness of audit logs cannot be verified externally</li>
        </ul>

        <t>The SCITT architecture <xref target="I-D.ietf-scitt-architecture"/> 
        provides primitives—Signed Statements, Transparency Services, and 
        Receipts—that can address this gap. This document describes how 
        these primitives can be applied to AI refusal events.</t>
        
        <section anchor="grok-incident">
          <name>The Grok NCII Incident (January 2026)</name>
          <t>The January 2026 Grok incident exposed the critical need for 
          verifiable refusal mechanisms. xAI's generative AI system produced 
          approximately 4.4 million images in 9 days, with external analysis 
          indicating at least 41% were sexualized images and 2% depicted 
          minors. This triggered unprecedented multi-jurisdictional 
          enforcement:</t>
          <ul>
            <li>EU Digital Services Act investigation (potential fine up to 
            6% of global revenue)</li>
            <li>35-state US coalition demanding elimination of harmful 
            content capabilities</li>
            <li>UK Ofcom Online Safety Act investigation (potential fine 
            up to 10% of global revenue)</li>
            <li>Brazil joint regulatory action with 30-day compliance 
            deadline</li>
            <li>Indonesia temporary service block</li>
          </ul>
          <t>When xAI asserted that its moderation systems were 
          functioning, no external party could verify the claim. In the 
          absence of verifiable refusal records, regulators had to rely 
          on provider self-reports, external testing by AI Forensics, 
          and user complaints rather than cryptographic proof.</t>
          <t>This incident demonstrates that the problem is not detection 
          accuracy alone, but verification capability. Even if an AI system 
          has effective content classifiers, the inability to prove refusals 
          occurred creates an accountability gap that this specification 
          addresses.</t>
        </section>
      </section>

      <section anchor="regulatory-context">
        <name>Regulatory Context</name>
        <t>Multiple jurisdictions are implementing AI transparency and 
        logging requirements that this specification can help satisfy:</t>
        
        <section anchor="reg-eu">
          <name>EU AI Act</name>
          <t>The EU AI Act (Regulation 2024/1689) establishes comprehensive 
          logging requirements:</t>
          <ul>
            <li>Article 12 mandates automatic recording of events for 
            high-risk AI systems, with minimum 6-month retention</li>
            <li>Article 50 requires AI-generated content to be marked in 
            machine-readable format, with detection tools available</li>
            <li>High-risk AI obligations become applicable on August 2, 2026</li>
          </ul>
          <t>This specification's event model directly supports Article 12 
          compliance by providing tamper-evident logging with external 
          anchoring for independent verification.</t>
        </section>
        
        <section anchor="reg-korea">
          <name>Korea AI Basic Act</name>
          <t>The Korea AI Basic Act (AI기본법), effective January 22, 2026, 
          requires:</t>
          <ul>
            <li>Mandatory labeling for generative AI outputs</li>
            <li>Meaningful explanations for high-impact AI decisions</li>
            <li>Domestic representatives for foreign AI businesses exceeding 
            specified thresholds</li>
          </ul>
          <t>The completeness invariant defined in this specification 
          provides a mechanism for demonstrating that AI systems make 
          consistent decisions that can be explained and audited.</t>
        </section>
        
        <section anchor="reg-us">
          <name>US Regulatory Landscape</name>
          <t>US regulations relevant to AI content provenance include:</t>
          <ul>
            <li>Colorado AI Act (SB24-205, effective June 30, 2026): 
            requires impact assessments and 3-year document retention</li>
            <li>California SB 942 (effective August 2, 2026): requires 
            provenance metadata and detection tools</li>
            <li>TAKE IT DOWN Act (platform compliance by May 19, 2026): 
            requires 48-hour NCII removal with documentation</li>
          </ul>
        </section>
      </section>

      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", 
        "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
        "MAY", and "OPTIONAL" in this document are to be interpreted as
        described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/>
        when, and only when, they appear in all capitals, as shown here.</t>
      </section>

      <section anchor="scope">
        <name>Scope</name>
        <t>This document describes:</t>
        <ul>
          <li>Terminology for refusal events mapped to SCITT primitives</li>
          <li>A data model for ATTEMPT and DENY events as Signed Statement 
          payloads</li>
          <li>A completeness invariant for audit trail integrity checking</li>
          <li>An integration approach with SCITT Transparency Services</li>
          <li>Evidence Pack format for regulatory submission</li>
          <li>Conformance levels (Bronze/Silver/Gold) for graduated 
          implementation</li>
        </ul>
        <t>This document does NOT define:</t>
        <ul>
          <li>Content moderation policies (what should be refused)</li>
          <li>Classification algorithms or risk scoring methods</li>
          <li>Thresholds or criteria for refusal decisions</li>
          <li>General SCITT architecture (see 
          <xref target="I-D.ietf-scitt-architecture"/>)</li>
          <li>Legal or regulatory requirements for specific jurisdictions</li>
        </ul>
      </section>

      <section anchor="limitations">
        <name>Limitations</name>
        <t>This specification provides auditability of refusal decisions 
        that are logged, not cryptographic proof that no unlogged generation 
        occurred. An AI system that bypasses logging entirely cannot be 
        detected by this mechanism alone. Detection of such bypass requires 
        external enforcement mechanisms (e.g., trusted execution environments, 
        attestation) which are outside the scope of this document.</t>
        
        <t>This profile does not require Transparency Services to enforce 
        completeness invariants; such checks are performed by verifiers 
        using application-level logic.</t>
      </section>
      
      <section anchor="relationship-cap-srp">
        <name>Relationship to CAP-SRP</name>
        <t>This Internet-Draft provides the IETF-track specification for 
        verifiable AI refusal events using SCITT primitives. The companion 
        CAP-SRP specification <xref target="CAP-SRP-SPEC"/> published by 
        the VeritasChain Standards Organization provides:</t>
        <ul>
          <li>Complete domain profile for content/creative AI systems</li>
          <li>Integration with the VAP (Verifiable AI Provenance) framework</li>
          <li>Detailed regulatory compliance mapping</li>
          <li>Reference implementation guidance</li>
          <li>C2PA integration for content provenance</li>
        </ul>
        <t>Implementations may conform to this Internet-Draft alone for 
        basic SCITT interoperability, or additionally conform to CAP-SRP 
        for comprehensive content AI audit trail capabilities.</t>
      </section>
    </section>

    <!-- ===== Section 2: Terminology ===== -->
    <section anchor="terminology">
      <name>Terminology</name>
      
      <t>This document uses terminology from 
      <xref target="I-D.ietf-scitt-architecture"/>. The following terms 
      are specific to this profile:</t>
      
      <dl>
        <dt>Generation Request</dt>
        <dd>A request submitted to an AI system to produce content. 
        May include text prompts, reference images, or other inputs.</dd>
        
        <dt>Refusal Event</dt>
        <dd>A decision by an AI system to decline a generation request 
        based on safety, policy, or other criteria. Results in no 
        content being produced.</dd>
        
        <dt>ATTEMPT (GEN_ATTEMPT)</dt>
        <dd>A Signed Statement recording that a generation request was 
        received. Does not indicate the outcome. CAP-SRP uses the term 
        GEN_ATTEMPT for this event type.</dd>
        
        <dt>DENY (GEN_DENY)</dt>
        <dd>A Signed Statement recording that a generation request was 
        refused. References the corresponding ATTEMPT via attemptId. 
        CAP-SRP uses the term GEN_DENY for this event type.</dd>
        
        <dt>GENERATE (GEN)</dt>
        <dd>A Signed Statement recording that content was successfully 
        generated in response to a request. References the corresponding 
        ATTEMPT via attemptId. CAP-SRP uses the term GEN for this event 
        type.</dd>
        
        <dt>ERROR (GEN_ERROR)</dt>
        <dd>A Signed Statement indicating that no content was generated 
        due to system failure (e.g., timeout, resource exhaustion, model 
        error) rather than a policy decision. ERROR does not constitute 
        a refusal and does not indicate policy enforcement. References 
        the corresponding ATTEMPT via attemptId.</dd>
        
        <dt>Outcome</dt>
        <dd>A Signed Statement recording the result of a generation 
        request: DENY (refusal), GENERATE (successful generation), or 
        ERROR (system failure).</dd>
        
        <dt>Verifiable Refusal Record</dt>
        <dd>An auditable record consisting of an ATTEMPT Signed Statement, 
        a DENY Signed Statement, and Receipts proving their inclusion in 
        a Transparency Service. This provides evidence that a refusal 
        decision was logged, but does not cryptographically prove that 
        no unlogged generation occurred.</dd>
        
        <dt>Completeness Invariant</dt>
        <dd>The property that every logged ATTEMPT has exactly one 
        corresponding Outcome. Formally: ∑ ATTEMPT = ∑ GENERATE + ∑ DENY 
        + ∑ ERROR. This invariant is checked by verifiers at the 
        application level; it is not enforced by Transparency Services.</dd>
        
        <dt>Evidence Pack</dt>
        <dd>A self-contained, cryptographically verifiable collection of 
        events suitable for regulatory submission or third-party audit. 
        Includes events, anchor records, Merkle proofs, and verification 
        metadata.</dd>

        <dt>promptHash</dt>
        <dd>A cryptographic hash of the generation request content. 
        Enables verification that a specific request was processed 
        without storing the potentially harmful prompt text.</dd>
        
        <dt>Refusal Receipt</dt>
        <dd>A cryptographic token provided to users proving their request 
        was processed and refused. Enables user-side verification without 
        exposing internal audit details.</dd>
      </dl>
      
      <t>This document focuses on refusal events because successful 
      generation is already observable through content existence and 
      downstream provenance mechanisms (e.g., C2PA manifests, watermarks). 
      Refusal events, by contrast, are negative events that leave no 
      external artifact unless explicitly logged. The GENERATE and ERROR 
      outcomes are defined for completeness invariant verification but 
      are not the primary focus of this specification.</t>

      <section anchor="scitt-mapping">
        <name>Mapping to SCITT Primitives</name>
        <t>This profile maps refusal event concepts directly to SCITT 
        primitives, minimizing new terminology:</t>
        
        <table>
          <thead>
            <tr>
              <th>This Document</th>
              <th>CAP-SRP Term</th>
              <th>SCITT Primitive</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>ATTEMPT</td>
              <td>GEN_ATTEMPT</td>
              <td>Signed Statement</td>
            </tr>
            <tr>
              <td>DENY</td>
              <td>GEN_DENY</td>
              <td>Signed Statement</td>
            </tr>
            <tr>
              <td>GENERATE</td>
              <td>GEN</td>
              <td>Signed Statement</td>
            </tr>
            <tr>
              <td>ERROR</td>
              <td>GEN_ERROR</td>
              <td>Signed Statement</td>
            </tr>
            <tr>
              <td>AI System</td>
              <td>Issuer</td>
              <td>Issuer</td>
            </tr>
            <tr>
              <td>Inclusion Proof</td>
              <td>Receipt</td>
              <td>Receipt</td>
            </tr>
          </tbody>
        </table>
        
        <t>Refusal events are registered with a standard SCITT 
        Transparency Service; this document does not define a separate 
        log type.</t>
        
        <t>This document uses domain-agnostic event type names 
        (ATTEMPT, DENY, GENERATE, ERROR) to enable application across 
        multiple AI domains. CAP-SRP uses domain-specific prefixes 
        (GEN_ATTEMPT, GEN_DENY, GEN, GEN_ERROR) appropriate for 
        content generation. Implementations targeting CAP-SRP 
        conformance SHOULD use CAP-SRP event type names in the 
        eventType field; implementations targeting broader SCITT 
        interoperability MAY use the names defined in this document.</t>
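        
        <t>As a non-normative illustration, a refusal outcome using the 
        CAP-SRP event type name might be encoded as follows. All field 
        names and values shown here are illustrative; the applicable 
        profile defines the authoritative payload schema.</t>
        <sourcecode type="json"><![CDATA[
{
  "eventType": "GEN_DENY",
  "eventId": "018f3c2e-7b41-7000-8000-0242ac120002",
  "attemptId": "018f3c2e-7a15-7000-8000-0242ac120001",
  "timestamp": "2026-01-29T09:15:23+09:00",
  "promptHash": "sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
  "reasonCode": "POLICY_VIOLATION"
}
]]></sourcecode>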
      </section>
    </section>

    <!-- ===== Section 3: Conformance Levels ===== -->
    <section anchor="conformance-levels">
      <name>Conformance Levels</name>
      
      <t>This specification defines three conformance levels to accommodate 
      different organizational capabilities and regulatory requirements. 
      These levels align with the VAP Framework v1.2 
      <xref target="VAP-FRAMEWORK"/> conformance structure.</t>
      
      <section anchor="level-bronze">
        <name>Bronze Level</name>
        <t>Minimum requirements for basic conformance:</t>
        <ul>
          <li>MUST: Log all ATTEMPT events before safety evaluation</li>
          <li>MUST: Log corresponding Outcome for each ATTEMPT</li>
          <li>MUST: Hash prompt content (promptHash), never store cleartext</li>
          <li>MUST: Sign all events using COSE_Sign1</li>
          <li>MUST: Use SHA-256 for hashing</li>
          <li>MUST: Use Ed25519 for signatures</li>
          <li>MUST: Include ISO 8601 timestamps with timezone</li>
          <li>SHOULD: Use UUIDv7 for event identifiers</li>
          <li>SHOULD: Implement hash chain linking (PrevHash)</li>
          <li>Retention: Minimum 6 months</li>
        </ul>
        <t>Bronze level is suitable for voluntary transparency and early 
        adopters.</t>
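        <t>A minimal, non-normative sketch of Bronze-level event 
        construction follows. Field names (eventType, promptHash, 
        prevHash, eventHash) are illustrative, and the final COSE_Sign1 
        signing step with Ed25519 is omitted because it depends on a 
        COSE library:</t>
        <sourcecode type="python"><![CDATA[
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_attempt_event(prompt: str, prev_hash: str) -> dict:
    """Build a Bronze-level ATTEMPT record (field names illustrative)."""
    event = {
        "eventType": "GEN_ATTEMPT",
        # UUIDv7 is the SHOULD-level recommendation; uuid4 stands in
        # where a UUIDv7 implementation is unavailable.
        "eventId": str(uuid.uuid4()),
        # ISO 8601 timestamp with explicit timezone, per the MUST above.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Prompt content is hashed, never stored in cleartext.
        "promptHash": "sha256:"
        + hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prevHash": prev_hash,  # SHOULD-level hash chain link
    }
    # EventHash over a deterministic serialization; the exact canonical
    # encoding is an implementation choice that must be fixed up front.
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    event["eventHash"] = "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()
    return event
]]></sourcecode>
        <t>COSE_Sign1 signing of the resulting record completes the 
        Bronze requirements.</t>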
      </section>
      
      <section anchor="level-silver">
        <name>Silver Level</name>
        <t>All Bronze requirements, plus:</t>
        <ul>
          <li>MUST: Register events with SCITT Transparency Service</li>
          <li>MUST: Obtain and store Receipts for all events</li>
          <li>MUST: Implement external anchoring (minimum daily)</li>
          <li>MUST: Verify Completeness Invariant continuously</li>
          <li>MUST: Support Evidence Pack generation</li>
          <li>MUST: Implement Merkle tree construction</li>
          <li>SHOULD: Provide third-party verification endpoint</li>
          <li>Retention: Minimum 2 years</li>
        </ul>
        <t>Silver level is recommended for organizations subject to EU AI 
        Act Article 12 or similar regulations.</t>
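        <t>The Merkle tree construction required at Silver level can be 
        sketched as follows. This is non-normative and uses one common 
        convention (an unpaired node is promoted to the next level); the 
        Transparency Service's own tree algorithm, e.g., an RFC 9162 
        style tree, governs interoperability:</t>
        <sourcecode type="python"><![CDATA[
import hashlib

def merkle_root(leaf_hashes):
    """Pairwise SHA-256 Merkle root over event hashes (sketch).

    Convention: an unpaired node is promoted unchanged to the next
    level. Real deployments follow the Transparency Service's tree
    definition, which may differ (e.g., duplicating the last node).
    """
    if not leaf_hashes:
        raise ValueError("cannot build a Merkle tree over zero events")
    level = list(leaf_hashes)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level) - 1, 2):
            next_level.append(hashlib.sha256(level[i] + level[i + 1]).digest())
        if len(level) % 2 == 1:
            next_level.append(level[-1])  # promote the unpaired node
        level = next_level
    return level[0]
]]></sourcecode>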
      </section>
      
      <section anchor="level-gold">
        <name>Gold Level</name>
        <t>All Silver requirements, plus:</t>
        <ul>
          <li>MUST: Implement real-time anchoring (within 1 hour)</li>
          <li>MUST: Use HSM for signing key storage</li>
          <li>MUST: Provide real-time audit API</li>
          <li>MUST: Support 24-hour incident response evidence preservation</li>
          <li>SHOULD: Integrate with SCITT Transparency Service for 
          continuous monitoring</li>
          <li>SHOULD: Implement geographic redundancy</li>
          <li>Retention: Minimum 5 years</li>
        </ul>
        <t>Gold level is intended for Very Large Online Platforms 
        (VLOPs) subject to independent audits under DSA Article 37 and 
        for high-risk AI systems requiring maximum assurance.</t>
      </section>
    </section>

    <!-- ===== Section 4: Use Cases ===== -->
    <section anchor="use-cases">
      <name>Use Cases</name>
      
      <section anchor="use-case-audit">
        <name>Regulatory Audit</name>
        <t>A regulatory authority investigating AI system compliance 
        needs to verify that a provider's stated content policies are 
        actually enforced. Without verifiable refusal events, the 
        regulator must trust provider self-reports. With this mechanism, 
        regulators can request Evidence Packs for specified time ranges, 
        verify ATTEMPT/Outcome completeness for logged events, confirm 
        refusal decisions are anchored in an append-only log, and 
        compare refusal statistics against external incident reports.</t>
        <t>This directly addresses the verification gap exposed by the 
        Grok incident, where regulators had no mechanism to independently 
        verify provider claims about safety system effectiveness.</t>
      </section>

      <section anchor="use-case-investigation">
        <name>Incident Investigation</name>
        <t>When investigating whether an AI system refused a specific 
        request, investigators need to establish provenance. A Verifiable 
        Refusal Record (ATTEMPT + DENY + Receipts) demonstrates that a 
        specific request was received, classified as policy-violating, 
        refused, and the refusal was logged with external timestamp 
        anchoring.</t>
      </section>

      <section anchor="use-case-accountability">
        <name>Provider Accountability</name>
        <t>AI service providers may need to demonstrate to stakeholders 
        that safety mechanisms function as claimed. Verifiable refusal 
        events enable statistical reporting on logged refusal rates, 
        third-party verification of safety claims, auditable proof 
        that specific requests were refused, and comparison against 
        industry benchmarks.</t>
      </section>

      <section anchor="use-case-legal">
        <name>Legal Proceedings</name>
        <t>In legal proceedings concerning AI-generated content, parties 
        may need evidence that a system declined a request. Verifiable 
        Refusal Records provide such evidence, subject to the limitation 
        that they demonstrate logged refusals, not the absence of 
        unlogged generation. Evidence Packs provide court-admissible 
        documentation with cryptographic integrity proofs.</t>
      </section>
      
      <section anchor="use-case-user">
        <name>User Verification</name>
        <t>Users who receive refusals may need proof that their request 
        was processed. Refusal Receipts enable users to verify their 
        request was logged, appeal refusal decisions with evidence, and 
        demonstrate to third parties that they attempted but were refused 
        (useful for content creators documenting compliance efforts).</t>
      </section>
    </section>

    <!-- ===== Section 5: Requirements ===== -->
    <section anchor="requirements">
      <name>Requirements</name>
      
      <t>This section defines requirements for implementations. To 
      maximize interoperability while allowing implementation flexibility, 
      a small set of core requirements uses MUST; the remainder use 
      SHOULD or MAY.</t>
      
      <section anchor="req-completeness">
        <name>Completeness Invariant</name>
        <t>The completeness invariant is the central requirement of 
        this profile:</t>
        
        <t>Formal definition:</t>
        <artwork><![CDATA[
∑ ATTEMPT = ∑ GENERATE + ∑ DENY + ∑ ERROR

For any time window [T₁, T₂]:
  count(ATTEMPT where T₁ ≤ timestamp ≤ T₂) =
    count(GENERATE where T₁ ≤ timestamp ≤ T₂ + grace_period) +
    count(DENY where T₁ ≤ timestamp ≤ T₂ + grace_period) +
    count(ERROR where T₁ ≤ timestamp ≤ T₂ + grace_period)
]]></artwork>
        
        <ul>
          <li>Every logged ATTEMPT Signed Statement MUST have exactly 
          one corresponding Outcome Signed Statement (DENY, GENERATE, 
          or ERROR).</li>
          <li>Outcome Signed Statements MUST reference their 
          corresponding ATTEMPT via the attemptId field.</li>
          <li>Prompt content MUST NOT be stored in cleartext in 
          Signed Statements; implementations MUST use cryptographic 
          hashes (promptHash) instead.</li>
          <li>The ATTEMPT event MUST be logged BEFORE any safety 
          evaluation begins (pre-evaluation logging).</li>
        </ul>
        <t>Verifiers SHOULD flag any logged ATTEMPT without a 
        corresponding Outcome as potential evidence of incomplete 
        logging or system failure.</t>
        
        <t>Violation detection:</t>
        <table>
          <thead>
            <tr>
              <th>Condition</th>
              <th>Meaning</th>
              <th>Implication</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Attempts > Outcomes</td>
              <td>Unmatched attempts</td>
              <td>System may be hiding results</td>
            </tr>
            <tr>
              <td>Outcomes > Attempts</td>
              <td>Orphan outcomes</td>
              <td>System may have fabricated refusals</td>
            </tr>
            <tr>
              <td>Duplicate outcomes</td>
              <td>Multiple outcomes per attempt</td>
              <td>Data integrity failure</td>
            </tr>
          </tbody>
        </table>
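        
        <t>A verifier-side check for the three violation conditions 
        above can be sketched as follows. This is non-normative: field 
        names are illustrative, and the grace-period windowing from the 
        formal definition is omitted for brevity:</t>
        <sourcecode type="python"><![CDATA[
from collections import Counter

OUTCOME_TYPES = {"GEN", "GEN_DENY", "GEN_ERROR"}

def check_completeness(events):
    """Flag completeness-invariant violations (field names illustrative).

    events: iterable of dicts with 'eventType', 'eventId', and, for
    outcome events, an 'attemptId' referencing the logged ATTEMPT.
    """
    attempts = {e["eventId"] for e in events
                if e["eventType"] == "GEN_ATTEMPT"}
    outcome_refs = Counter(e["attemptId"] for e in events
                           if e["eventType"] in OUTCOME_TYPES)
    return {
        # Attempts > Outcomes: attempts with no outcome at all
        "unmatched_attempts": attempts - outcome_refs.keys(),
        # Outcomes > Attempts: outcomes referencing no logged attempt
        "orphan_outcomes": set(outcome_refs) - attempts,
        # Duplicate outcomes: more than one outcome per attempt
        "duplicate_outcomes": {a for a, n in outcome_refs.items() if n > 1},
    }
]]></sourcecode>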
        
        <t>This completeness invariant is defined at the event semantics 
        level and applies only to logged events. It cannot detect ATTEMPT 
        events that were never logged. Cryptographic detection of 
        invariant violations depends on the properties of the underlying 
        Transparency Service and verifier logic.</t>
        
        <t>This profile does not require Transparency Services to 
        enforce completeness invariants; such checks are performed by 
        verifiers using application-level logic.</t>
      </section>

      <section anchor="req-integrity">
        <name>Integrity</name>
        <t>To protect against tampering, implementations SHOULD:</t>
        <ul>
          <li>Include a cryptographic hash of event content in each 
          Signed Statement (EventHash)</li>
          <li>Digitally sign all Signed Statements</li>
          <li>Chain events via PrevHash fields to detect deletion or 
          reordering</li>
          <li>Register Signed Statements with a SCITT Transparency 
          Service and obtain Receipts</li>
        </ul>
        <t>PrevHash chaining is RECOMMENDED but not required because 
        append-only guarantees are primarily provided by the 
        Transparency Service. PrevHash provides an additional, 
        issuer-local integrity signal that can detect tampering even 
        before Transparency Service registration.</t>
        <t>SHA-256 for hashing and Ed25519 for signatures are 
        RECOMMENDED. Other algorithms registered with COSE MAY be 
        used. Implementations SHOULD support algorithm agility for 
        future post-quantum cryptography migration.</t>
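        <t>PrevHash chaining can be sketched as follows. The EventHash 
        construction shown here, SHA-256 over the previous hash 
        concatenated with a canonical JSON body, is one possible 
        convention, not a normative definition:</t>
        <sourcecode type="python"><![CDATA[
import hashlib
import json

def chain_hash(prev_hash: str, event_body: dict) -> str:
    """EventHash covering the previous event's hash (sketch)."""
    canonical = json.dumps(event_body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode("utf-8")).hexdigest()

def verify_chain(entries):
    """Verify an issuer-local hash chain before TS registration.

    entries: list of (event_body, prev_hash, event_hash) in log order.
    Deletion, reordering, or in-place edits break the chain.
    """
    expected_prev = "0" * 64  # genesis sentinel (convention assumed here)
    for body, prev_hash, event_hash in entries:
        if prev_hash != expected_prev:
            return False  # deletion or reordering detected
        if chain_hash(prev_hash, body) != event_hash:
            return False  # in-place tampering detected
        expected_prev = event_hash
    return True
]]></sourcecode>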
      </section>

      <section anchor="req-privacy">
        <name>Privacy</name>
        <t>Refusal events may be triggered by harmful or sensitive 
        content. To avoid the audit log becoming a repository of 
        harmful content, implementations SHOULD:</t>
        <ul>
          <li>Replace prompt text with promptHash</li>
          <li>Replace reference images with cryptographic hashes</li>
          <li>Ensure refusal reasons do not quote or describe prompt 
          content in detail</li>
          <li>Pseudonymize actor identifiers where appropriate (ActorHash)</li>
        </ul>
        <t>The hash function SHOULD be collision-resistant to prevent 
        an adversary from claiming that a benign prompt hashes to the 
        same value as a harmful one.</t>
        
        <t>Hashing without salting may be vulnerable to dictionary 
        attacks if an adversary has a list of candidate prompts. 
        Mitigations include access controls on event queries, 
        time-limited retention policies, and monitoring for bulk 
        query patterns. Salting may provide additional protection 
        but introduces complexity; if used, implementations must 
        ensure verification remains possible without requiring 
        disclosure of the salt to third-party verifiers.</t>
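        <t>The trade-off is concrete: the same unsalted digest that 
        lets an auditor confirm a specific prompt was processed also 
        lets an adversary with a candidate list test prompts against 
        the log. A minimal sketch, assuming a "sha256:&lt;hex&gt;" 
        promptHash encoding:</t>
        <sourcecode type="python"><![CDATA[
import hashlib

def matches_logged_event(candidate_prompt: str, logged_prompt_hash: str) -> bool:
    # Recompute the unsalted SHA-256 digest and compare it against the
    # promptHash recorded in the event. Useful to an auditor verifying
    # a known prompt, and equally available to a dictionary attacker,
    # hence the access-control and retention mitigations above.
    digest = hashlib.sha256(candidate_prompt.encode("utf-8")).hexdigest()
    return logged_prompt_hash == "sha256:" + digest
]]></sourcecode>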
      </section>

      <section anchor="req-verifiability">
        <name>Third-Party Verifiability</name>
        <t>To enable external verification without access to internal 
        systems, implementations SHOULD:</t>
        <ul>
          <li>Ensure verification requires only the Signed Statement 
          and Receipt</li>
          <li>Publish Issuer public signing keys or certificates</li>
          <li>Make Transparency Service logs queryable by authorized 
          auditors</li>
          <li>Support offline verification given the necessary 
          cryptographic material</li>
          <li>Provide Evidence Pack export in standardized format</li>
        </ul>
      </section>

      <section anchor="req-timeliness">
        <name>Timeliness</name>
        <t>To maintain audit trail integrity, implementations SHOULD:</t>
        <ul>
          <li>Create ATTEMPT Signed Statements promptly upon request 
          receipt (within 100 ms)</li>
          <li>Create Outcome Signed Statements promptly upon decision 
          (within 1 second for automated decisions)</li>
          <li>Register Signed Statements with the Transparency Service 
          promptly (within 60 seconds)</li>
        </ul>
        
        <t>External anchoring frequency requirements by conformance level:</t>
        <table>
          <thead>
            <tr>
              <th>Level</th>
              <th>Minimum Frequency</th>
              <th>Maximum Delay</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Bronze</td>
              <td>Optional</td>
              <td>N/A</td>
            </tr>
            <tr>
              <td>Silver</td>
              <td>Daily</td>
              <td>24 hours</td>
            </tr>
            <tr>
              <td>Gold</td>
              <td>Hourly</td>
              <td>1 hour</td>
            </tr>
          </tbody>
        </table>
        
        <t>Some operational scenarios may require delayed outcomes:</t>
        <ul>
          <li>Human review processes may take minutes, hours, or days</li>
          <li>System crashes may delay outcome logging until recovery</li>
          <li>Network failures may delay Transparency Service registration</li>
        </ul>
        <t>Implementations SHOULD document expected latency bounds in 
        their Registration Policy. Extended delays SHOULD trigger 
        monitoring alerts.</t>
      </section>

      <section anchor="req-conformance">
        <name>Conformance</name>
        <t>An implementation conforms to this specification if it 
        satisfies the following requirements:</t>
        <ul>
          <li>MUST: Every logged ATTEMPT has exactly one Outcome</li>
          <li>MUST: Outcomes reference ATTEMPTs via attemptId</li>
          <li>MUST: Prompt content is hashed, not stored in cleartext</li>
          <li>MUST: Signed Statements are encoded as COSE_Sign1</li>
          <li>MUST: ATTEMPT is logged before safety evaluation</li>
        </ul>
        <t>All other requirements (SHOULD, RECOMMENDED, MAY) are 
        guidance for interoperability and security best practices 
        but are not required for conformance.</t>
        <t>Implementations MAY extend the data model with additional 
        fields provided the core conformance requirements are satisfied.</t>
        <t>Implementations claiming a specific conformance level 
        (Bronze/Silver/Gold) MUST satisfy all requirements for that 
        level as defined in <xref target="conformance-levels"/>.</t>
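        <t>The first two MUST requirements lend themselves to 
        mechanical, verifier-side checking. A non-normative sketch, 
        assuming events are available as parsed JSON objects using 
        the field names defined in this document:</t>

```python
OUTCOME_TYPES = {"DENY", "GENERATE", "ERROR"}

def check_completeness(events):
    """Return (orphaned_attempt_ids, bad_outcome_ids): ATTEMPTs with
    zero or multiple Outcomes, and Outcomes whose attemptId does not
    reference any logged ATTEMPT."""
    attempts = {e["eventId"] for e in events if e["eventType"] == "ATTEMPT"}
    outcome_counts = {}
    bad_outcomes = []
    for e in events:
        if e["eventType"] in OUTCOME_TYPES:
            ref = e.get("attemptId")
            if ref not in attempts:
                bad_outcomes.append(e["eventId"])
            else:
                outcome_counts[ref] = outcome_counts.get(ref, 0) + 1
    orphaned = [a for a in attempts if outcome_counts.get(a, 0) != 1]
    return orphaned, bad_outcomes
```

        <t>An empty result from both checks corresponds to the 
        Completeness Invariant holding over the examined event set; 
        as noted elsewhere, this detects omission only among logged 
        events.</t>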
      </section>
    </section>

    <!-- ===== Section 6: Data Model ===== -->
    <section anchor="data-model">
      <name>Data Model</name>
      
      <t>This section defines example payloads for ATTEMPT, DENY, 
      GENERATE, and ERROR Signed Statements, encoded as JSON. This 
      data model is non-normative; implementations MAY extend or 
      modify these structures provided the conformance requirements 
      in <xref target="req-conformance"/> are satisfied.</t>

      <section anchor="data-attempt">
        <name>ATTEMPT Signed Statement Payload</name>
        <t>An ATTEMPT records that a generation request was received:</t>
        <sourcecode type="json"><![CDATA[
{
  "eventType": "ATTEMPT",
  "eventId": "019467a1-0001-7000-0000-000000000001",
  "chainId": "019467a0-0000-7000-0000-000000000000",
  "timestamp": "2026-01-29T14:23:45.100Z",
  "issuer": "urn:example:ai-service:img-gen-prod",
  "promptHash": "sha256:7f83b1657ff1fc53b92dc18148a1d65d...",
  "inputType": "text+image",
  "referenceInputHashes": [
    "sha256:9f86d081884c7d659a2feaa0c55ad015..."
  ],
  "sessionId": "019467a1-0001-7000-0000-000000000000",
  "actorHash": "sha256:e3b0c44298fc1c149afbf4c8996fb924...",
  "modelId": "img-gen-v4.2.1",
  "policyId": "content-safety-v2",
  "policyVersion": "2026-01-01",
  "hashAlgo": "SHA256",
  "signAlgo": "ED25519",
  "prevHash": "sha256:0000000000000000000000000000000...",
  "eventHash": "sha256:a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4..."
}
]]></sourcecode>
        
        <t>Field definitions:</t>
        <dl>
          <dt>eventType</dt>
          <dd>"ATTEMPT" (or "GEN_ATTEMPT" for CAP-SRP alignment)</dd>
          
          <dt>eventId</dt>
          <dd>Unique identifier (UUID v7 <xref target="RFC9562"/> 
          RECOMMENDED for temporal ordering)</dd>
          
          <dt>chainId</dt>
          <dd>Identifier for the event chain (enables multiple 
          independent chains per issuer)</dd>
          
          <dt>timestamp</dt>
          <dd>ISO 8601 timestamp of request receipt with timezone</dd>
          
          <dt>issuer</dt>
          <dd>URN identifying the AI system</dd>
          
          <dt>promptHash</dt>
          <dd>Hash of the textual prompt (if any)</dd>
          
          <dt>inputType</dt>
          <dd>Type of input: "text", "image", "text+image", "audio", 
          "video"</dd>
          
          <dt>referenceInputHashes</dt>
          <dd>Array of hashes for non-text inputs</dd>
          
          <dt>sessionId</dt>
          <dd>Session identifier for correlation</dd>
          
          <dt>actorHash</dt>
          <dd>Pseudonymized hash of the requesting user/system</dd>
          
          <dt>modelId</dt>
          <dd>Identifier and version of the AI model</dd>
          
          <dt>policyId</dt>
          <dd>Identifier of the content policy applied</dd>
          
          <dt>policyVersion</dt>
          <dd>Version of the policy (enables policy change tracking)</dd>
          
          <dt>hashAlgo</dt>
          <dd>Hash algorithm used (default: "SHA256")</dd>
          
          <dt>signAlgo</dt>
          <dd>Signature algorithm used (default: "ED25519")</dd>
          
          <dt>prevHash</dt>
          <dd>Hash of the previous event (for chain integrity)</dd>
          
          <dt>eventHash</dt>
          <dd>Hash of this event's canonical form</dd>
        </dl>
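        <t>As a non-normative illustration, the prevHash/eventHash 
        chaining might be computed as follows. The canonical form 
        here approximates JCS by sorting keys and removing 
        whitespace (sufficient for the string and boolean fields 
        shown in this document); the exact set of fields covered by 
        eventHash is implementation-defined:</t>

```python
import hashlib
import json

GENESIS = "sha256:" + "0" * 64

def canonical(event: dict) -> bytes:
    # Sorted keys, no insignificant whitespace: an approximation of
    # RFC 8785 canonicalization for simple string/boolean payloads.
    return json.dumps(event, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def chain(event: dict, prev_hash: str = GENESIS) -> dict:
    """Fill in prevHash and eventHash; eventHash covers all other
    fields plus prevHash, linking each event to its predecessor."""
    e = {k: v for k, v in event.items() if k not in ("prevHash", "eventHash")}
    e["prevHash"] = prev_hash
    e["eventHash"] = "sha256:" + hashlib.sha256(canonical(e)).hexdigest()
    return e
```

        <t>Deleting or reordering an intermediate event breaks the 
        recomputed chain, which is what allows auditors to detect 
        tampering among logged events.</t>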
      </section>

      <section anchor="data-deny">
        <name>DENY Signed Statement Payload</name>
        <t>A DENY records that a request was refused:</t>
        <sourcecode type="json"><![CDATA[
{
  "eventType": "DENY",
  "eventId": "019467a1-0001-7000-0000-000000000002",
  "chainId": "019467a0-0000-7000-0000-000000000000",
  "timestamp": "2026-01-29T14:23:45.150Z",
  "issuer": "urn:example:ai-service:img-gen-prod",
  "attemptId": "019467a1-0001-7000-0000-000000000001",
  "riskCategory": "NCII_RISK",
  "riskSubCategories": ["REAL_PERSON", "CLOTHING_REMOVAL_REQUEST"],
  "riskScore": 0.94,
  "refusalReason": "Non-consensual intimate imagery request detected",
  "modelDecision": "DENY",
  "humanOverride": false,
  "escalationId": null,
  "hashAlgo": "SHA256",
  "signAlgo": "ED25519",
  "prevHash": "sha256:a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4...",
  "eventHash": "sha256:e5f6g7h8i9j0e5f6g7h8i9j0e5f6g7h8..."
}
]]></sourcecode>
        
        <t>Field definitions:</t>
        <dl>
          <dt>eventType</dt>
          <dd>"DENY" (or "GEN_DENY" for CAP-SRP alignment)</dd>
          
          <dt>eventId</dt>
          <dd>Unique identifier</dd>
          
          <dt>chainId</dt>
          <dd>Must match the corresponding ATTEMPT's chainId</dd>
          
          <dt>timestamp</dt>
          <dd>ISO 8601 timestamp of refusal decision</dd>
          
          <dt>attemptId</dt>
          <dd>Reference to the corresponding ATTEMPT (required for 
          completeness invariant)</dd>
          
          <dt>riskCategory</dt>
          <dd>Category of policy violation detected. See 
          <xref target="risk-categories"/> for non-normative taxonomy.</dd>
          
          <dt>riskSubCategories</dt>
          <dd>Array of sub-categories for detailed classification</dd>
          
          <dt>riskScore</dt>
          <dd>Confidence score (0.0 to 1.0) if available. Scoring 
          methodology is implementation-defined.</dd>
          
          <dt>refusalReason</dt>
          <dd>Human-readable reason (SHOULD NOT contain prompt content). 
          The taxonomy of reasons is implementation-defined.</dd>
          
          <dt>modelDecision</dt>
          <dd>The action taken: "DENY", "WARN", "ESCALATE", "QUARANTINE"</dd>
          
          <dt>humanOverride</dt>
          <dd>Boolean indicating if a human reviewer was involved</dd>
          
          <dt>escalationId</dt>
          <dd>Reference to escalation record if human review was 
          triggered</dd>
          
          <dt>prevHash</dt>
          <dd>Hash of the previous event</dd>
          
          <dt>eventHash</dt>
          <dd>Hash of this event's canonical form</dd>
        </dl>
        
        <t>This specification does not standardize content moderation 
        categories, risk taxonomies, or refusal reason formats. These 
        are policy decisions that remain the domain of AI providers 
        and applicable regulations.</t>
      </section>

      <section anchor="data-generate">
        <name>GENERATE Signed Statement Payload</name>
        <t>A GENERATE records that content was successfully produced. 
        This document focuses on refusal events; GENERATE is included 
        for completeness invariant verification:</t>
        <sourcecode type="json"><![CDATA[
{
  "eventType": "GENERATE",
  "eventId": "019467a1-0001-7000-0000-000000000004",
  "chainId": "019467a0-0000-7000-0000-000000000000",
  "timestamp": "2026-01-29T14:23:46.500Z",
  "issuer": "urn:example:ai-service:img-gen-prod",
  "attemptId": "019467a1-0001-7000-0000-000000000001",
  "outputHash": "sha256:b2c3d4e5f6a7b2c3d4e5f6a7b2c3d4e5...",
  "outputType": "image/png",
  "c2paManifestId": "urn:c2pa:manifest:...",
  "hashAlgo": "SHA256",
  "signAlgo": "ED25519",
  "prevHash": "sha256:a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4...",
  "eventHash": "sha256:c3d4e5f6a7b8c3d4e5f6a7b8c3d4e5f6..."
}
]]></sourcecode>
        
        <t>Field definitions specific to GENERATE:</t>
        <dl>
          <dt>outputHash</dt>
          <dd>Hash of the generated content (enables verification 
          without storing content)</dd>
          
          <dt>outputType</dt>
          <dd>MIME type of generated content</dd>
          
          <dt>c2paManifestId</dt>
          <dd>OPTIONAL. Reference to C2PA manifest if content 
          provenance is embedded in output</dd>
        </dl>
        
        <t>GENERATE events are typically not the focus of regulatory 
        audits since successful generation is observable through content 
        existence and downstream provenance (e.g., C2PA manifests, 
        SynthID watermarks). They are included here to enable 
        completeness invariant verification.</t>
      </section>

      <section anchor="data-error">
        <name>ERROR Signed Statement Payload</name>
        <t>An ERROR records that a request failed due to system issues:</t>
        <sourcecode type="json"><![CDATA[
{
  "eventType": "ERROR",
  "eventId": "019467a1-0001-7000-0000-000000000003",
  "chainId": "019467a0-0000-7000-0000-000000000000",
  "timestamp": "2026-01-29T14:23:45.200Z",
  "issuer": "urn:example:ai-service:img-gen-prod",
  "attemptId": "019467a1-0001-7000-0000-000000000001",
  "errorCode": "TIMEOUT",
  "errorMessage": "Model inference timeout after 30s",
  "hashAlgo": "SHA256",
  "signAlgo": "ED25519",
  "prevHash": "sha256:e5f6g7h8i9j0e5f6g7h8i9j0e5f6g7h8...",
  "eventHash": "sha256:h8i9j0k1l2m3h8i9j0k1l2m3h8i9j0k1..."
}
]]></sourcecode>
        
        <t>ERROR events indicate system failures, not policy decisions. 
        A high ERROR rate may indicate operational issues or potential 
        abuse (e.g., adversarial inputs designed to crash the system). 
        Implementations SHOULD monitor ERROR rates and investigate 
        anomalies.</t>
      </section>
      
      <section anchor="risk-categories">
        <name>Risk Categories (Non-Normative)</name>
        <t>The following risk categories are provided as a non-normative 
        reference taxonomy. Implementations MAY use different categories 
        based on their policies and applicable regulations:</t>
        
        <table>
          <thead>
            <tr>
              <th>Category</th>
              <th>Description</th>
              <th>Example</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>CSAM_RISK</td>
              <td>Child sexual abuse material</td>
              <td>Minor sexualization request</td>
            </tr>
            <tr>
              <td>NCII_RISK</td>
              <td>Non-consensual intimate imagery</td>
              <td>Deepfake pornography</td>
            </tr>
            <tr>
              <td>MINOR_SEXUALIZATION</td>
              <td>Content sexualizing minors</td>
              <td>Age-inappropriate requests</td>
            </tr>
            <tr>
              <td>REAL_PERSON_DEEPFAKE</td>
              <td>Unauthorized realistic depiction</td>
              <td>Celebrity face swap</td>
            </tr>
            <tr>
              <td>VIOLENCE_EXTREME</td>
              <td>Graphic violence</td>
              <td>Gore, torture</td>
            </tr>
            <tr>
              <td>HATE_CONTENT</td>
              <td>Discriminatory content</td>
              <td>Racist imagery</td>
            </tr>
            <tr>
              <td>TERRORIST_CONTENT</td>
              <td>Terrorism-related</td>
              <td>Propaganda, recruitment</td>
            </tr>
            <tr>
              <td>SELF_HARM_PROMOTION</td>
              <td>Self-harm encouragement</td>
              <td>Suicide methods</td>
            </tr>
            <tr>
              <td>COPYRIGHT_VIOLATION</td>
              <td>Clear IP infringement</td>
              <td>Trademarked characters</td>
            </tr>
            <tr>
              <td>COPYRIGHT_STYLE_MIMICRY</td>
              <td>Artist style imitation</td>
              <td>Protected style requests</td>
            </tr>
            <tr>
              <td>OTHER</td>
              <td>Other policy violations</td>
              <td>Custom policies</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>

    <!-- ===== Section 7: Evidence Pack ===== -->
    <section anchor="evidence-pack">
      <name>Evidence Pack</name>
      
      <t>An Evidence Pack is a self-contained, cryptographically 
      verifiable collection of events suitable for regulatory 
      submission or third-party audit.</t>
      
      <section anchor="evidence-pack-structure">
        <name>Directory Structure</name>
        <artwork><![CDATA[
evidence_pack/
├── manifest.json           # Pack metadata and integrity info
├── events/
│   ├── events_001.jsonl    # Event batch 1 (JSON Lines format)
│   ├── events_002.jsonl    # Event batch 2
│   └── ...
├── anchors/
│   ├── anchor_001.json     # External anchor records
│   └── ...
├── merkle/
│   ├── tree_001.json       # Merkle tree structure
│   └── proofs/             # Selective disclosure proofs
├── keys/
│   └── public_keys.json    # Public keys for verification
└── signatures/
    └── pack_signature.json # Pack-level signature
]]></artwork>
      </section>
      
      <section anchor="evidence-pack-manifest">
        <name>Manifest Format</name>
        <sourcecode type="json"><![CDATA[
{
  "packId": "019467b2-0000-7000-0000-000000000000",
  "packVersion": "1.0",
  "generatedAt": "2026-01-29T15:00:00Z",
  "generatedBy": "urn:example:ai-service:img-gen-prod",
  "conformanceLevel": "Silver",
  "eventCount": 150000,
  "timeRange": {
    "start": "2026-01-01T00:00:00Z",
    "end": "2026-01-29T14:59:59Z"
  },
  "checksums": {
    "events/events_001.jsonl": "sha256:...",
    "events/events_002.jsonl": "sha256:...",
    "anchors/anchor_001.json": "sha256:..."
  },
  "completenessVerification": {
    "totalAttempts": 145000,
    "totalGenerate": 140000,
    "totalDeny": 4500,
    "totalError": 500,
    "invariantValid": true,
    "verificationTimestamp": "2026-01-29T15:00:00Z"
  },
  "externalAnchors": [
    {
      "anchorId": "019467b0-0000-7000-0000-000000000000",
      "anchorType": "RFC3161",
      "anchorTimestamp": "2026-01-29T00:00:00Z"
    }
  ]
}
]]></sourcecode>
      </section>
      
      <section anchor="evidence-pack-verification">
        <name>Verification Process</name>
        <t>Third-party verification of an Evidence Pack involves:</t>
        <ol>
          <li>Verify pack signature against published public key</li>
          <li>Verify all file checksums in manifest</li>
          <li>Verify hash chain integrity across all events</li>
          <li>Verify individual event signatures</li>
          <li>Verify Completeness Invariant</li>
          <li>Verify external anchor records against TSA/SCITT</li>
          <li>Generate verification report</li>
        </ol>
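        <t>Steps 2 and 5 of the process above can be sketched as 
        follows (non-normative; the <tt>files</tt> mapping of pack 
        paths to bytes is an assumption of this sketch, and steps 1, 
        4, and 6 additionally require signing keys and anchor 
        endpoints):</t>

```python
import hashlib

def verify_checksums(manifest: dict, files: dict) -> list:
    """Step 2: return the paths whose SHA-256 does not match the
    manifest's checksums section."""
    bad = []
    for path, expected in manifest["checksums"].items():
        digest = "sha256:" + hashlib.sha256(files[path]).hexdigest()
        if digest != expected:
            bad.append(path)
    return bad

def verify_counts(manifest: dict) -> bool:
    """Step 5 (arithmetic part): every ATTEMPT has exactly one
    Outcome, so the Outcome totals must sum to the ATTEMPT total."""
    cv = manifest["completenessVerification"]
    return (cv["totalGenerate"] + cv["totalDeny"] + cv["totalError"]
            == cv["totalAttempts"])
```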
      </section>
    </section>

    <!-- ===== Section 8: SCITT Integration ===== -->
    <section anchor="scitt-integration">
      <name>SCITT Integration</name>

      <section anchor="scitt-signed-statements">
        <name>Encoding as Signed Statements</name>
        <t>ATTEMPT, DENY, GENERATE, and ERROR events are encoded as 
        SCITT Signed Statements:</t>
        <ul>
          <li>The event JSON is the Signed Statement payload</li>
          <li>The Issuer is the AI system's signing identity</li>
          <li>The Content Type MAY use "application/vnd.scitt.refusal-event+json"</li>
          <li>The Signed Statement is wrapped in COSE_Sign1 per 
          <xref target="RFC9052"/></li>
        </ul>
        <t>The JSON payload is canonicalized per <xref target="RFC8785"/> 
        and signed as the COSE_Sign1 payload bytes. This ensures 
        deterministic serialization for signature verification.</t>
      </section>

      <section anchor="scitt-registration">
        <name>Registration</name>
        <t>After creating a Signed Statement, the Issuer SHOULD register 
        it with a SCITT Transparency Service:</t>
        <ol>
          <li>Submit the Signed Statement via SCRAPI 
          <xref target="I-D.ietf-scitt-scrapi"/></li>
          <li>Receive a Receipt proving inclusion</li>
          <li>Store the Receipt for future verification requests</li>
        </ol>
        <t>The Transparency Service's Registration Policy MAY verify 
        that required fields are present and timestamps are within 
        acceptable bounds.</t>
        
        <t>Registration may fail due to network issues, service 
        unavailability, or policy rejection. Implementations SHOULD 
        implement retry logic with exponential backoff. Persistent 
        registration failures SHOULD be logged locally and trigger 
        operational alerts.</t>
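        <t>The retry guidance above can be sketched with capped 
        exponential backoff and jitter; the <tt>submit</tt> callable 
        and the tuning constants are illustrative assumptions, not 
        part of SCRAPI:</t>

```python
import random
import time

def register_with_retry(submit, statement, max_attempts=6,
                        base_delay=1.0, cap=60.0):
    """Call submit(statement) until it succeeds, sleeping with capped
    exponential backoff plus jitter between failures. Returns the
    Receipt; re-raises after max_attempts so the caller can persist
    the statement locally and raise an operational alert."""
    for attempt in range(max_attempts):
        try:
            return submit(statement)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = min(cap, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```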
      </section>

      <section anchor="scitt-registration-policy">
        <name>Registration Policy Guidance (Non-Normative)</name>
        <t>A Transparency Service operating as a refusal event log 
        MAY implement a Registration Policy that validates:</t>
        <ul>
          <li>Signature validity (COSE_Sign1 verification)</li>
          <li>Required fields present (eventType, eventId, timestamp, 
          issuer)</li>
          <li>Timestamp sanity (not in the future, not unreasonably 
          old)</li>
          <li>Issuer authorization (if the TS restricts which issuers 
          may register)</li>
        </ul>
        <t>This profile does not require Transparency Services to 
        enforce completeness invariants. A TS accepting refusal events 
        is not expected to verify that every ATTEMPT has an Outcome; 
        such verification is performed by auditors and verifiers at 
        the application level.</t>
      </section>

      <section anchor="scitt-verification">
        <name>Verification with Receipts</name>
        <t>A complete Verifiable Refusal Record consists of:</t>
        <ol>
          <li>The ATTEMPT Signed Statement and its Receipt</li>
          <li>The corresponding DENY Signed Statement and its Receipt</li>
          <li>Verification that attemptId in DENY matches eventId in 
          ATTEMPT</li>
        </ol>
        <t>Verifiers can confirm that a refusal was logged by 
        validating both Receipts and checking the ATTEMPT/DENY linkage. 
        This demonstrates that the refusal decision was recorded in 
        the Transparency Service, but does not prove that no unlogged 
        generation occurred.</t>
      </section>

      <section anchor="scitt-anchoring">
        <name>External Anchoring</name>
        <t>For additional assurance, implementations MAY periodically 
        anchor Merkle tree roots to external systems:</t>
        <ul>
          <li>RFC 3161 Time Stamping Authority (TSA)</li>
          <li>Multiple independent SCITT Transparency Services</li>
          <li>Public blockchains (Bitcoin, Ethereum)</li>
          <li>Regulatory authority registries</li>
        </ul>
        <t>External anchoring provides defense against a compromised 
        Transparency Service and satisfies regulatory requirements 
        for independent timestamp verification.</t>
        
        <t>Anchor record format:</t>
        <sourcecode type="json"><![CDATA[
{
  "anchorId": "019467b0-0000-7000-0000-000000000000",
  "anchorType": "RFC3161",
  "merkleRoot": "sha256:abcd1234...",
  "eventCount": 1000,
  "firstEventId": "019467a0-0000-7000-0000-000000000001",
  "lastEventId": "019467a0-0000-7000-0000-000001000000",
  "timestamp": "2026-01-29T00:00:00Z",
  "anchorProof": "MIIHkwYJKoZIhvc...",
  "serviceEndpoint": "https://timestamp.digicert.com"
}
]]></sourcecode>
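        <t>The merkleRoot above can be computed with a standard 
        binary Merkle tree. A non-normative sketch assuming SHA-256 
        and promotion of an unpaired last node; real deployments 
        should additionally pin down leaf encoding and domain 
        separation (e.g., the 0x00/0x01 leaf and node prefixes of 
        RFC 6962):</t>

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(event_hashes: list) -> str:
    """Compute a Merkle root over event hashes given as hex strings
    (without the 'sha256:' prefix). An unpaired node at the end of a
    level is promoted unchanged to the next level."""
    level = [bytes.fromhex(x) for x in event_hashes]
    if not level:
        raise ValueError("empty batch")
    while len(level) > 1:
        nxt = [_h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # promote the odd node
        level = nxt
    return "sha256:" + level[0].hex()
```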
      </section>
    </section>

    <!-- ===== Section 9: C2PA Integration ===== -->
    <section anchor="c2pa-integration">
      <name>C2PA Integration (Non-Normative)</name>
      
      <t>The Coalition for Content Provenance and Authenticity (C2PA) 
      provides standards for content provenance that complement this 
      specification:</t>
      <ul>
        <li>C2PA proves "this content was generated"</li>
        <li>This specification proves "this request was blocked"</li>
      </ul>
      
      <t>Generated content MAY include a C2PA manifest with a 
      reference to the corresponding SCITT events:</t>
      <sourcecode type="json"><![CDATA[
{
  "c2pa:assertions": {
    "scitt:reference": {
      "eventId": "019467a1-...",
      "chainId": "019467a0-...",
      "verificationEndpoint": "https://api.example.com/verify"
    }
  }
}
]]></sourcecode>
      
      <t>This cross-reference enables verifiers to trace from the 
      content artifact back to the complete audit trail including 
      any prior refusal attempts in the same session.</t>
    </section>

    <!-- ===== Section 10: IANA Considerations ===== -->
    <section anchor="iana">
      <name>IANA Considerations</name>
      
      <t>This document has no IANA actions at this time.</t>
      
      <t>Future revisions may request registration of:</t>
      <ul>
        <li>Media type "application/vnd.scitt.refusal-event+json"</li>
        <li>Registry for standardized event type values</li>
        <li>Registry for risk category codes</li>
      </ul>
    </section>

    <!-- ===== Section 11: Security Considerations ===== -->
    <section anchor="security">
      <name>Security Considerations</name>

      <section anchor="security-threat-model">
        <name>Threat Model</name>
        <t>This specification assumes the following threat model:</t>
        <ul>
          <li>The AI system (Issuer) is partially trusted: it is 
          expected to log events but may have bugs or be compromised</li>
          <li>The Transparency Service is partially trusted: it 
          provides append-only guarantees but may be compromised or 
          present split views</li>
          <li>Verifiers are trusted to perform completeness checks 
          correctly</li>
          <li>External parties (regulators, auditors) have access to 
          Receipts and can query the Transparency Service</li>
        </ul>
        <t>This specification does NOT protect against:</t>
        <ul>
          <li>An AI system that bypasses logging entirely (no ATTEMPT 
          logged)</li>
          <li>Collusion between the Issuer and Transparency Service</li>
          <li>Compromise of all verifiers</li>
        </ul>
      </section>

      <section anchor="security-omission">
        <name>Omission Attacks</name>
        <t>An adversary controlling the AI system might attempt to omit 
        refusal events to hide policy violations or, conversely, omit 
        GENERATE events to falsely claim content was refused. The 
        completeness invariant provides detection for logged events: 
        auditors can identify ATTEMPT Signed Statements without 
        corresponding Outcomes. Hash chains detect deletion of 
        intermediate events.</t>
        
        <t>However, if an ATTEMPT is never logged, this specification 
        cannot detect the omission. Complete prevention of omission 
        attacks is beyond the scope of this specification and would 
        require external enforcement mechanisms such as trusted 
        execution environments, RATS attestation, or real-time 
        external monitoring.</t>
        
        <t>The requirement that ATTEMPT be logged BEFORE safety 
        evaluation (pre-evaluation logging) prevents selective logging 
        where only "safe" requests are recorded.</t>
      </section>

      <section anchor="security-equivocation">
        <name>Log Equivocation</name>
        <t>A malicious Transparency Service might present different 
        views of the log to different parties (equivocation). For 
        example, it might show auditors a log containing DENY events 
        while providing a different view to other verifiers. Mitigations 
        include:</t>
        <ul>
          <li>Gossiping of Signed Tree Heads between verifiers to 
          detect inconsistencies</li>
          <li>Registration with multiple independent Transparency 
          Services</li>
          <li>External anchoring to public ledgers that provide global 
          consistency</li>
          <li>Auditor comparison of Receipts for the same time periods</li>
        </ul>
        <t>Detection of equivocation requires coordination between 
        verifiers; a single verifier in isolation cannot detect it.</t>
      </section>

      <section anchor="security-split-view">
        <name>Split-View Between Event Types</name>
        <t>A malicious Issuer might maintain separate logs for refusals 
        and generations, showing only the refusal log to auditors. The 
        completeness invariant mitigates this by requiring every logged 
        ATTEMPT to have an Outcome; if the GENERATE outcomes are hidden, 
        auditors will observe orphaned ATTEMPTs.</t>
      </section>

      <section anchor="security-tampering">
        <name>Log Tampering</name>
        <t>Direct modification of log entries is prevented by 
        cryptographic signatures on Signed Statements, hash chain 
        linking, Merkle tree inclusion proofs in Receipts, and the 
        append-only structure enforced by the Transparency Service.</t>
      </section>

      <section anchor="security-replay">
        <name>Replay Attacks</name>
        <t>An attacker might attempt to replay old refusal events to 
        inflate refusal statistics or create false alibis. Mitigations 
        include:</t>
        <ul>
          <li>UUID v7 provides temporal ordering (events with earlier 
          timestamps have smaller UUIDs)</li>
          <li>Timestamps are verified against Transparency Service 
          registration time</li>
          <li>prevHash chaining detects out-of-order insertion and 
          duplicate events</li>
        </ul>
      </section>

      <section anchor="security-key-compromise">
        <name>Key Compromise</name>
        <t>If an Issuer's signing key is compromised, an attacker could 
        create fraudulent Signed Statements. Previously signed Signed 
        Statements remain valid. Implementations SHOULD support key 
        rotation and revocation. Transparency Service timestamps 
        provide evidence of when Signed Statements were registered, 
        which can help bound the impact of a compromise.</t>
        <t>Gold-level conformance requires HSM storage for signing 
        keys, which significantly reduces key compromise risk.</t>
      </section>

      <section anchor="security-dictionary">
        <name>Prompt Dictionary Attacks</name>
        <t>Although prompts are stored as hashes, an adversary with a 
        dictionary of known prompts could attempt to identify which 
        prompt was used by computing hashes and comparing. Mitigations 
        include access controls on event queries, time-limited retention 
        policies, monitoring for bulk query patterns, and rate limiting.</t>
        
        <t>Salted hashing may provide additional protection but 
        introduces operational complexity. If salting is used, the 
        salt needs to be managed such that verification remains 
        possible without disclosing the salt to third parties. This 
        specification does not mandate salting.</t>
      </section>

      <section anchor="security-dos">
        <name>Denial of Service</name>
        <t>An attacker could flood the system with generation requests 
        to create a large volume of ATTEMPT Signed Statements, 
        potentially overwhelming the Transparency Service or obscuring 
        legitimate events. Standard rate limiting and access controls 
        at the AI system level can mitigate this. The Transparency 
        Service MAY implement its own admission controls.</t>
      </section>
    </section>

    <!-- ===== Section 12: Privacy Considerations ===== -->
    <section anchor="privacy">
      <name>Privacy Considerations</name>

      <section anchor="privacy-harmful">
        <name>Harmful Content Storage</name>
        <t>This profile requires that harmful content not be stored. 
        Prompt text is replaced with promptHash, reference images are 
        replaced with hashes, and refusal reasons SHOULD NOT quote or 
        describe prompt content in detail. This prevents the audit log 
        from becoming a repository of harmful content.</t>
      </section>

      <section anchor="privacy-actor">
        <name>Actor Identification</name>
        <t>Actor identification creates tension between accountability 
        and privacy. Implementations SHOULD use pseudonymous identifiers 
        (ActorHash) by default, maintain a separate access-controlled 
        mapping from pseudonyms to identities, define clear policies 
        for de-pseudonymization, and support erasure of the mapping 
        while preserving audit integrity (crypto-shredding).</t>
      </section>

      <section anchor="privacy-correlation">
        <name>Correlation Risks</name>
        <t>Event metadata may enable correlation attacks. Timestamps 
        could reveal user activity patterns, SessionIDs link multiple 
        requests, and ModelIDs reveal which AI systems a user interacts 
        with. Implementations SHOULD apply appropriate access controls 
        and MAY implement differential privacy techniques for aggregate 
        statistics.</t>
      </section>

      <section anchor="privacy-gdpr">
        <name>Data Subject Rights</name>
        <t>Where personal data protection regulations apply (e.g., 
        GDPR), implementations SHOULD support data subject access 
        requests, honor erasure requests via crypto-shredding 
        (destroying the encryption keys for personal data while 
        preserving cryptographic integrity proofs), and enforce 
        purpose limitation.</t>
        <t>Crypto-shredding architecture:</t>
        <ul>
          <li>Sensitive fields encrypted with per-user symmetric keys</li>
          <li>Key deletion renders personal data unrecoverable</li>
          <li>Hash chain integrity preserved (hashes remain, content 
          inaccessible)</li>
          <li>Completeness invariant remains verifiable</li>
        </ul>
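        <t>The architecture above can be sketched as follows. The
        HMAC-based keystream stands in for a real AEAD cipher (a
        production deployment would use something like AES-GCM), and
        all names are illustrative:</t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # HMAC-SHA256 in counter mode as a PRF -- illustrative only;
    # use a vetted AEAD such as AES-GCM in production.
    blocks = []
    for ctr in range((n + 31) // 32):  # 32-byte HMAC-SHA256 blocks
        blocks.append(hmac.new(key, ctr.to_bytes(8, "big"),
                               hashlib.sha256).digest())
    return b"".join(blocks)[:n]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Per-user key store: deleting an entry "shreds" that user's data.
user_keys = {"actor-123": secrets.token_bytes(32)}

prompt = b"sensitive prompt text"
prompt_hash = hashlib.sha256(prompt).hexdigest()          # stays in the log
ciphertext = xor_encrypt(user_keys["actor-123"], prompt)  # stored off-log

# Erasure request: destroy the key. The hash chain still verifies,
# but the plaintext is now unrecoverable.
del user_keys["actor-123"]
```
]]></sourcecode>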
      </section>
    </section>

    <!-- ===== Section 13: Future Work ===== -->
    <section anchor="future-work">
      <name>Future Work (Non-Normative)</name>
      
      <t>This section describes potential extensions and research 
      directions that are outside the scope of this specification 
      but may be addressed in future work.</t>
      
      <section anchor="future-attestation">
        <name>RATS/Attestation Integration</name>
        <t>Integration with Remote ATtestation procedureS (RATS) 
        <xref target="RFC9334"/> could provide stronger guarantees 
        that the AI system is operating as expected and logging all 
        events. Hardware-backed attestation could reduce the trust 
        assumptions on the Issuer.</t>
      </section>

      <section anchor="future-batching">
        <name>Batching and Scalability</name>
        <t>High-volume AI systems may generate millions of events per 
        day. Future work could explore batching mechanisms, rolling 
        logs, and hierarchical Merkle structures to improve scalability 
        while maintaining verifiability.</t>
      </section>

      <section anchor="future-privacy">
        <name>Advanced Privacy Mechanisms</name>
        <t>More sophisticated privacy mechanisms could be explored, 
        including:</t>
        <ul>
          <li>Commitment schemes that allow selective disclosure</li>
          <li>Zero-knowledge proofs for aggregate statistics without 
          revealing individual events</li>
          <li>Homomorphic encryption for privacy-preserving audits</li>
        </ul>
        <t>These mechanisms would add complexity and are not required 
        for the core auditability goals of this specification.</t>
      </section>

      <section anchor="future-completeness">
        <name>External Completeness Enforcement</name>
        <t>Stronger completeness guarantees could be achieved through 
        external enforcement mechanisms such as:</t>
        <ul>
          <li>Trusted execution environments (TEEs) that guarantee 
          logging before generation</li>
          <li>Hardware security modules (HSMs) that control signing 
          keys</li>
          <li>Real-time monitoring by independent observers</li>
          <li>Blockchain-based commitment schemes</li>
        </ul>
        <t>These approaches involve significant architectural changes 
        and are outside the scope of this specification.</t>
      </section>
      
      <section anchor="future-pqc">
        <name>Post-Quantum Cryptography Migration</name>
        <t>This specification uses Ed25519 signatures, which would be 
        broken by a cryptographically relevant quantum computer. Future 
        revisions should address migration to post-quantum algorithms 
        (e.g., ML-DSA/Dilithium) as NIST standards and implementations 
        mature. The hashAlgo and signAlgo fields already provide the 
        algorithm agility needed for this transition.</t>
      </section>
    </section>
  </middle>

  <back>
    <!-- ===== References ===== -->
    <references>
      <name>References</name>
      
      <references>
        <name>Normative References</name>
        
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
        </reference>
        
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
        </reference>
        
        <reference anchor="RFC8785">
          <front>
            <title>JSON Canonicalization Scheme (JCS)</title>
            <author fullname="A. Rundgren"/>
            <date month="June" year="2020"/>
          </front>
          <seriesInfo name="RFC" value="8785"/>
        </reference>
        
        <reference anchor="RFC9052">
          <front>
            <title>CBOR Object Signing and Encryption (COSE): Structures and Process</title>
            <author fullname="J. Schaad"/>
            <date month="August" year="2022"/>
          </front>
          <seriesInfo name="RFC" value="9052"/>
        </reference>
        
        <reference anchor="I-D.ietf-scitt-architecture">
          <front>
            <title>An Architecture for Trustworthy and Transparent Digital Supply Chains</title>
            <author fullname="Henk Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="Antoine Delignat-Lavaud" initials="A." surname="Delignat-Lavaud"/>
            <author fullname="Cedric Fournet" initials="C." surname="Fournet"/>
            <date year="2025"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-scitt-architecture-22"/>
          <annotation>In RFC Editor Queue as of October 2025</annotation>
        </reference>
        
        <reference anchor="I-D.ietf-scitt-scrapi">
          <front>
            <title>SCITT Reference APIs</title>
            <author fullname="Orie Steele" initials="O." surname="Steele"/>
            <date year="2025"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-scitt-scrapi-06"/>
          <annotation>In Working Group Last Call as of December 2025</annotation>
        </reference>
      </references>
      
      <references>
        <name>Informative References</name>
        
        <reference anchor="RFC6962">
          <front>
            <title>Certificate Transparency</title>
            <author fullname="B. Laurie"/>
            <date month="June" year="2013"/>
          </front>
          <seriesInfo name="RFC" value="6962"/>
        </reference>
        
        <reference anchor="RFC9334">
          <front>
            <title>Remote ATtestation procedureS (RATS) Architecture</title>
            <author fullname="H. Birkholz"/>
            <date month="January" year="2023"/>
          </front>
          <seriesInfo name="RFC" value="9334"/>
        </reference>
        
        <reference anchor="RFC9562">
          <front>
            <title>Universally Unique IDentifiers (UUIDs)</title>
            <author fullname="K. Davis"/>
            <date month="May" year="2024"/>
          </front>
          <seriesInfo name="RFC" value="9562"/>
        </reference>
        
        <reference anchor="CAP-SRP">
          <front>
            <title>CAP-SRP Reference Implementation</title>
            <author>
              <organization>VeritasChain Standards Organization</organization>
            </author>
            <date year="2026"/>
          </front>
          <refcontent>https://github.com/veritaschain/cap-srp</refcontent>
        </reference>
        
        <reference anchor="CAP-SRP-SPEC">
          <front>
            <title>CAP-SRP: Content/Creative AI Profile - Safe Refusal Provenance Technical Specification v1.0</title>
            <author>
              <organization>VeritasChain Standards Organization</organization>
            </author>
            <date month="January" year="2026"/>
          </front>
          <refcontent>https://github.com/veritaschain/cap-spec</refcontent>
        </reference>
        
        <reference anchor="VAP-FRAMEWORK">
          <front>
            <title>Verifiable AI Provenance Framework (VAP) Specification v1.2</title>
            <author>
              <organization>VeritasChain Standards Organization</organization>
            </author>
            <date month="January" year="2026"/>
          </front>
          <refcontent>https://veritaschain.org/specs/vap</refcontent>
        </reference>
        
        <reference anchor="EU-AI-ACT">
          <front>
            <title>Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act)</title>
            <author>
              <organization>European Parliament and Council</organization>
            </author>
            <date month="July" year="2024"/>
          </front>
          <refcontent>Official Journal of the European Union L 2024/1689</refcontent>
        </reference>
        
        <reference anchor="KOREA-AI-ACT">
          <front>
            <title>Framework Act on Artificial Intelligence (AI기본법)</title>
            <author>
              <organization>Republic of Korea National Assembly</organization>
            </author>
            <date month="January" year="2026"/>
          </front>
          <annotation>Effective January 22, 2026</annotation>
        </reference>
      </references>
    </references>

    <!-- ===== Appendix A: Complete Example Flow ===== -->
    <section anchor="appendix-flow">
      <name>Example: Complete Refusal Event Flow</name>
      
      <t>This appendix illustrates a complete flow from request 
      receipt to Verifiable Refusal Record verification.</t>
      
      <section anchor="appendix-flow-steps">
        <name>Event Sequence</name>
        <ol>
          <li>User submits generation request to AI system</li>
          <li>AI system creates ATTEMPT Signed Statement (computes 
          promptHash = SHA256(prompt), generates UUID v7 EventId, 
          signs as COSE_Sign1)</li>
          <li>AI system registers ATTEMPT with Transparency Service</li>
          <li>Transparency Service returns Receipt_ATTEMPT</li>
          <li>AI system evaluates request against content policy</li>
          <li>Policy classifier determines refusal is required</li>
          <li>AI system creates DENY Signed Statement (sets 
          attemptId = ATTEMPT.eventId, records riskCategory and 
          RefusalReason, signs as COSE_Sign1)</li>
          <li>AI system registers DENY with Transparency Service</li>
          <li>Transparency Service returns Receipt_DENY</li>
          <li>AI system generates Refusal Receipt for user</li>
          <li>User receives refusal response with Receipt</li>
        </ol>
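        <t>Steps 1-2 above can be sketched as follows. The field names 
        mirror this profile's event model, but the payload shape and 
        helper names are illustrative; a real Issuer signs the RFC 8785 
        canonical form as COSE_Sign1 via a COSE library (the signing 
        step is elided here):</t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib
import json
import os
import time
import uuid

def uuid7() -> str:
    # Minimal UUIDv7 (RFC 9562): 48-bit Unix-ms timestamp, then
    # random bits, with version and variant bits set.
    ts_ms = int(time.time() * 1000)
    raw = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70  # version 7
    raw[8] = (raw[8] & 0x3F) | 0x80  # RFC variant
    return str(uuid.UUID(bytes=bytes(raw)))

def make_attempt_event(prompt: str, model_id: str) -> dict:
    return {
        "eventId": uuid7(),
        "eventType": "ATTEMPT",
        "modelId": model_id,
        # Only the hash of the prompt is logged, never the prompt.
        "promptHash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

event = make_attempt_event("a user prompt", "model-x")
# Sorted-key JSON approximates RFC 8785 canonicalization for this
# sketch; the resulting bytes are what the Issuer would sign.
payload = json.dumps(event, sort_keys=True,
                     separators=(",", ":")).encode()
```
]]></sourcecode>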
      </section>
      
      <section anchor="appendix-flow-verification">
        <name>Third-Party Verification</name>
        <t>An auditor verifying the Verifiable Refusal Record:</t>
        <ol>
          <li>Obtains ATTEMPT Signed Statement and Receipt_ATTEMPT</li>
          <li>Obtains DENY Signed Statement and Receipt_DENY</li>
          <li>Verifies Issuer signature on both Signed Statements</li>
          <li>Verifies both Receipts against Transparency Service 
          public key</li>
          <li>Confirms DENY.attemptId equals ATTEMPT.eventId</li>
          <li>Confirms DENY.Timestamp is after ATTEMPT.Timestamp</li>
          <li>Verifies external anchor if available</li>
          <li>Concludes: The request identified by ATTEMPT.promptHash 
          was refused and the refusal was logged at DENY.Timestamp</li>
        </ol>
        <t>This verification confirms that a refusal was logged, but 
        does not prove that no unlogged generation occurred.</t>
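        <t>The linkage and ordering checks (steps 5-6) reduce to simple 
        field comparisons once the signatures and Receipts have been 
        verified; field names are illustrative:</t>
        <sourcecode type="python"><![CDATA[
```python
def verify_linkage(attempt: dict, deny: dict) -> bool:
    # Assumes Issuer signatures and TS Receipts were already verified.
    # Fixed-format UTC timestamps (e.g. "2026-01-05T12:00:01Z")
    # compare correctly as strings, so no date parsing is needed.
    if deny["attemptId"] != attempt["eventId"]:
        return False  # DENY does not reference this ATTEMPT
    if not deny["timestamp"] > attempt["timestamp"]:
        return False  # refusal must be logged after the attempt
    return True

attempt = {"eventId": "e-1", "timestamp": "2026-01-05T12:00:00Z"}
deny = {"attemptId": "e-1", "timestamp": "2026-01-05T12:00:01Z"}
assert verify_linkage(attempt, deny)
```
]]></sourcecode>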
      </section>
      
      <section anchor="appendix-evidence-pack-verification">
        <name>Evidence Pack Verification</name>
        <t>A regulator verifying an Evidence Pack:</t>
        <ol>
          <li>Verify pack signature against published issuer key</li>
          <li>Verify all checksums in manifest</li>
          <li>Load all events from events/*.jsonl files</li>
          <li>Verify hash chain integrity</li>
          <li>Verify each event signature</li>
          <li>Verify Completeness Invariant: count(ATTEMPT) = 
          count(GENERATE) + count(DENY) + count(ERROR)</li>
          <li>Verify external anchors against TSA/SCITT services</li>
          <li>Generate verification report with statistics</li>
        </ol>
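        <t>Step 6, the Completeness Invariant check, can be sketched as 
        a count over the JSONL event files. The eventType values follow 
        this profile's event model; the function name is illustrative:</t>
        <sourcecode type="python"><![CDATA[
```python
import json
from collections import Counter

def check_completeness(jsonl_lines) -> bool:
    # Every logged ATTEMPT must resolve to exactly one terminal
    # outcome: count(ATTEMPT) == count(GENERATE) + count(DENY)
    #                            + count(ERROR)
    counts = Counter(json.loads(line)["eventType"]
                     for line in jsonl_lines)
    outcomes = counts["GENERATE"] + counts["DENY"] + counts["ERROR"]
    return counts["ATTEMPT"] == outcomes

events = [
    '{"eventType": "ATTEMPT", "eventId": "e-1"}',
    '{"eventType": "DENY", "attemptId": "e-1"}',
    '{"eventType": "ATTEMPT", "eventId": "e-2"}',
    '{"eventType": "GENERATE", "attemptId": "e-2"}',
]
assert check_completeness(events)          # balanced log passes
assert not check_completeness(events[:3])  # dangling ATTEMPT fails
```
]]></sourcecode>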
      </section>
    </section>

    <!-- ===== Acknowledgements ===== -->
    <section anchor="acknowledgements" numbered="false">
      <name>Acknowledgements</name>
      <t>The authors thank the members of the SCITT Working Group 
      for developing the foundational architecture. This work builds 
      upon the transparency log concepts from Certificate Transparency 
      <xref target="RFC6962"/>.</t>
      <t>The January 2026 Grok incident, while harmful, provided 
      critical motivation for this specification by demonstrating the 
      real-world consequences of unverifiable AI safety claims.</t>
      <t>Thanks to the VeritasChain Standards Organization for 
      developing the CAP-SRP specification that informed this 
      Internet-Draft.</t>
    </section>
  </back>
</rfc>
