<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     category="info"
     docName="draft-kamimura-rats-behavioral-evidence-01"
     ipr="trust200902"
     obsoletes=""
     updates=""
     submissionType="IETF"
     xml:lang="en"
     version="3">

  <front>
    <title abbrev="RATS and Behavioral Evidence">
      On the Relationship Between Remote Attestation and Behavioral
      Evidence Recording
    </title>

    <seriesInfo name="Internet-Draft" value="draft-kamimura-rats-behavioral-evidence-01"/>

    <author fullname="Tokachi Kamimura" initials="T." surname="Kamimura">
      <organization>VeritasChain Standards Organization (VSO)</organization>
      <address>
        <email>kamimura@veritaschain.org</email>
        <uri>https://veritaschain.org</uri>
      </address>
    </author>

    <date year="2026" month="January" day="10"/>

    <area>Security</area>
    <workgroup>Remote ATtestation ProcedureS</workgroup>

    <keyword>attestation</keyword>
    <keyword>behavioral evidence</keyword>
    <keyword>verifiable</keyword>
    <keyword>accountability</keyword>
    <keyword>evidence recording</keyword>

    <abstract>
      <t>
        This document provides an informational discussion of the conceptual
        relationship between remote attestation, as defined in RFC 9334
        (RATS Architecture), and behavioral evidence recording mechanisms.
        It observes that these two verification capabilities address
        fundamentally different questions: attestation asks "Is this
        system in a trustworthy state?", whereas behavioral evidence
        asks "What did the system actually do?". The document then
        discusses how the two could conceptually complement each other
        in accountability frameworks.
        This document is purely descriptive: it does not propose any
        modifications to RATS architecture, define new mechanisms or
        protocols, or establish normative requirements. It explicitly
        does not define any cryptographic binding between attestation
        and behavioral evidence.
      </t>
    </abstract>

    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>
        Discussion of this document takes place on the Remote ATtestation
        ProcedureS (RATS) Working Group mailing list (rats@ietf.org),
        which is archived at
        <eref target="https://mailarchive.ietf.org/arch/browse/rats/"/>.
      </t>
      <t>
        Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/veritaschain/draft-kamimura-rats-behavioral-evidence"/>.
      </t>
    </note>

  </front>

  <middle>

    <!-- ===== Section 1: Introduction ===== -->
    <section anchor="introduction">
      <name>Introduction</name>

      <t>
        The IETF RATS (Remote ATtestation ProcedureS) Working Group has
        developed a comprehensive architecture for remote attestation
        <xref target="RFC9334"/>, enabling Relying Parties to assess the
        trustworthiness of remote systems through cryptographic evidence
        about their state. This attestation capability addresses a
        fundamental question in distributed systems: "Is this system in
        a trustworthy state?"
      </t>

      <t>
        A related but distinct verification need exists in many operational
        contexts: the ability to verify, after the fact, what actions a
        system has actually performed. This question, "What did the system
        actually do?", is addressed by behavioral evidence recording
        mechanisms, which create tamper-evident records of system actions
        and decisions.
      </t>

      <t>
        This document observes that these two verification capabilities
        address different aspects of system accountability and discusses
        their conceptual relationship. The document does not propose any
        technical integration, protocol, or cryptographic binding between
        these mechanisms. Any discussion of "complementary" use is purely
        conceptual and does not imply a composed security property.
      </t>

      <section anchor="document-status">
        <name>Document Status</name>

        <t>This document is purely INFORMATIONAL and NON-NORMATIVE. It:</t>

        <ul>
          <li>Does NOT propose any new RATS mechanisms or architecture changes</li>
          <li>Does NOT modify or extend the RATS architecture as defined in
              <xref target="RFC9334"/></li>
          <li>Does NOT define new protocols, claims, tokens, or data formats</li>
          <li>Does NOT establish normative requirements for any implementation</li>
          <li>Does NOT define any cryptographic binding or security composition
              between attestation and behavioral evidence</li>
          <li>Remains fully compatible with the existing RATS architecture
              and its design philosophy</li>
        </ul>

        <t>
          This document uses descriptive terms ("may", "could", "can")
          exclusively to indicate possibilities and observations. It does
          not use the normative key words of BCP 14 (MUST, SHOULD, SHALL),
          as no mandatory behaviors or requirements are being specified;
          capitalized words such as NOT elsewhere in this document are
          emphasis only.
        </t>

        <t>
          This document treats behavioral evidence recording systems in
          general terms, using VeritasChain Protocol (VCP)
          <xref target="VCP-SPEC"/> as one illustrative example among
          various possible approaches. Other systems such as Certificate
          Transparency <xref target="RFC6962"/> and general append-only
          log architectures employ similar cryptographic techniques for
          different purposes.
        </t>
      </section>

      <section anchor="motivation">
        <name>Motivation</name>

        <t>
          This document is motivated by an observation that attestation
          and behavioral evidence recording, while both contributing to
          system accountability, answer fundamentally different questions.
          Understanding this distinction could help system architects
          avoid conflating these mechanisms or assuming one substitutes
          for the other.
        </t>

        <section anchor="what-attestation-establishes">
          <name>What Attestation Alone Establishes (X)</name>

          <t>
            Remote attestation, as defined in <xref target="RFC9334"/>,
            enables a Relying Party to assess whether an Attester is in
            a trustworthy state at the time of attestation. When attestation
            succeeds, the Relying Party gains assurance that:
          </t>

          <ul>
            <li>The Attester's software configuration matches expected
                Reference Values</li>
            <li>The Attester is running on genuine, uncompromised hardware
                (where hardware roots of trust are used)</li>
            <li>The Attester's identity can be cryptographically verified</li>
          </ul>

          <t>
            However, attestation alone does NOT establish:
          </t>

          <ul>
            <li>What specific actions the Attester will take after attestation</li>
            <li>Whether the Attester's future behavior will conform to any
                particular policy</li>
            <li>A verifiable record of what the Attester actually did during
                operation</li>
          </ul>
        </section>

        <section anchor="what-behavioral-evidence-establishes">
          <name>What Behavioral Evidence Alone Establishes (Y)</name>

          <t>
            Behavioral evidence recording mechanisms create tamper-evident
            records of system actions and decisions. When properly implemented,
            such mechanisms could provide:
          </t>

          <ul>
            <li>A chronological record of what actions the system performed</li>
            <li>Cryptographic integrity protection for those records</li>
            <li>Evidence for after-the-fact examination of system behavior</li>
          </ul>

          <t>
            However, behavioral evidence recording alone does NOT establish:
          </t>

          <ul>
            <li>That the system recording the evidence was in a trustworthy
                state when it generated the records</li>
            <li>That the system's software, configuration, or hardware was
                uncompromised</li>
            <li>That the records accurately reflect what the system actually
                did (a compromised system could generate false records)</li>
          </ul>
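          <t>
            As a non-normative illustration of the tamper-evidence property
            described above, the following Python sketch (with hypothetical
            record fields and function names) links each entry to the hash
            of its predecessor, so that altering any past entry invalidates
            every later link:
          </t>
          <sourcecode type="python"><![CDATA[
```python
import hashlib
import json

def append_event(chain, event):
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "entry_hash": entry_hash})

def verify_chain(chain):
    """Recompute every link; in-place tampering breaks verification."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```
]]></sourcecode>
          <t>
            A successful verification shows only the internal consistency
            of the records; per the limitations above, it says nothing
            about the trustworthiness of the system that produced them.
          </t>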
        </section>

        <section anchor="what-both-together-enable">
          <name>What Considering Both Together Could Enable (X+Y)</name>

          <t>
            When an observer has access to both valid Attestation Results
            for a system AND a verifiable behavioral evidence trail from
            that system, the observer could potentially reason:
          </t>

          <ul>
            <li>"At time T1, attestation confirmed the system was in a
                known-good state"</li>
            <li>"The behavioral evidence records actions from T1 to T2"</li>
            <li>"If the system remained in the attested state throughout
                this period, these behavioral records could be considered
                more trustworthy"</li>
          </ul>

          <t>
            <strong>Critical Limitation:</strong> This reasoning is purely
            conceptual and informal. This document explicitly does NOT
            claim that considering attestation and behavioral evidence
            together creates any composed security property. Significant
            trust gaps remain (see <xref target="trust-gaps"/>).
          </t>
        </section>

        <section anchor="trust-gaps">
          <name>Remaining Trust Gaps</name>

          <t>
            Even when both attestation and behavioral evidence are available,
            significant trust gaps remain that this document does not address:
          </t>

          <ul>
            <li><strong>Time-of-check vs. time-of-use:</strong> Attestation
                confirms state at a point in time; the system could be
                compromised immediately after attestation</li>
            <li><strong>Attestation-to-logging binding:</strong> No
                cryptographic mechanism is defined to bind Attestation
                Results to specific behavioral evidence entries</li>
            <li><strong>Logging fidelity:</strong> Even a trustworthy system
                could have bugs in its logging implementation</li>
            <li><strong>Selective omission:</strong> A system could pass
                attestation yet selectively omit events from its behavioral
                records</li>
            <li><strong>Compromised logging infrastructure:</strong> The
                logging infrastructure itself could be compromised
                independently of the attested system</li>
          </ul>

          <t>
            These gaps would need to be addressed by specific technical
            mechanisms not defined in this document. Deployments considering
            both attestation and behavioral evidence should carefully analyze
            their threat model and not assume that informal complementarity
            provides strong security guarantees.
          </t>
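          <t>
            The selective-omission gap in particular can be made concrete
            with a small Python sketch (all event names hypothetical): a
            recorder that silently drops an event before committing still
            produces an internally consistent commitment:
          </t>
          <sourcecode type="python"><![CDATA[
```python
import hashlib

def chain_digest(events):
    """Fold events into a running hash (a minimal tamper-evidence sketch)."""
    digest = b"\x00" * 32
    for e in events:
        digest = hashlib.sha256(digest + e).digest()
    return digest

# A faithful recorder commits to everything it did...
all_actions = [b"read config", b"open position", b"disable safeguard"]
honest_digest = chain_digest(all_actions)

# ...while a recorder that drops an event before committing produces a
# chain that is just as internally consistent and verifies cleanly.
omitted = [e for e in all_actions if e != b"disable safeguard"]
omitting_digest = chain_digest(omitted)
```
]]></sourcecode>
          <t>
            Both digests are valid commitments to the events they cover;
            nothing in the structure itself reveals that an event was
            never recorded.
          </t>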
        </section>

        <section anchor="domain-examples">
          <name>Domain Examples</name>

          <t>
            The following examples illustrate domains where both capabilities
            could be relevant. These examples are illustrative only and do
            not constitute normative guidance:
          </t>

          <ul>
            <li><strong>Financial Services:</strong> Algorithmic trading
                systems may face regulatory requirements for both system
                integrity verification and decision audit trails</li>
            <li><strong>Artificial Intelligence:</strong> AI governance
                frameworks increasingly distinguish between system
                certification and operational logging</li>
            <li><strong>Critical Infrastructure:</strong> Systems controlling
                physical processes may benefit from attestation of their
                configuration alongside records of their actions</li>
          </ul>
        </section>
      </section>
    </section>

    <!-- ===== Section 2: Terminology ===== -->
    <section anchor="terminology">
      <name>Terminology</name>

      <section anchor="rats-terminology">
        <name>RATS Terminology (Reused Without Modification)</name>

        <t>
          This document reuses terminology from the RATS Architecture
          <xref target="RFC9334"/> without modification or extension.
          The following terms are used exactly as defined in that document:
        </t>

        <dl>
          <dt>Attester</dt>
          <dd>A role performed by an entity that creates Evidence about
              itself and conveys it to a Verifier.
              (RFC 9334, Section 2)</dd>

          <dt>Verifier</dt>
          <dd>A role performed by an entity that appraises Evidence
              about an Attester.
              (RFC 9334, Section 2)</dd>

          <dt>Relying Party</dt>
          <dd>A role performed by an entity that uses Attestation
              Results to make authorization decisions.
              (RFC 9334, Section 2)</dd>

          <dt>Evidence</dt>
          <dd>Information about an Attester's state, generated by the
              Attester.
              (RFC 9334, Section 2)</dd>

          <dt>Attestation Results</dt>
          <dd>Output produced by a Verifier, indicating the
              trustworthiness of an Attester.
              (RFC 9334, Section 2)</dd>

          <dt>Claims</dt>
          <dd>Assertions about characteristics of an Attester.
              (RFC 9334, Section 2)</dd>

          <dt>Reference Values</dt>
          <dd>Expected values against which Evidence is compared.
              (RFC 9334, Section 2)</dd>
        </dl>
      </section>

      <section anchor="behavioral-evidence-terminology">
        <name>Behavioral Evidence Terminology</name>

        <t>
          The following terms are used in this document to describe
          behavioral evidence concepts. These terms are grounded in
          general systems and security literature rather than being
          newly defined by this document.
        </t>

        <t>
          <strong>Note on "Audit" Terminology:</strong> The term "audit"
          in this document follows common systems engineering usage
          (e.g., "audit log", "audit trail") referring to chronological
          records of system events maintained for post-hoc examination.
          This usage is consistent with standard security terminology
          as found in sources such as NIST SP 800-92 (Guide to Computer
          Security Log Management) <xref target="NIST-SP800-92"/> and
          general operating systems literature. It does not imply
          regulatory auditing, financial auditing, or compliance
          certification in any jurisdiction-specific sense.
        </t>

        <dl>
          <dt>Audit Event</dt>
          <dd>A recorded occurrence representing a discrete action,
              decision, or state change in a system. This term is used
              in general systems security contexts and should not be
              confused with RATS Evidence, which describes system state
              rather than behavior.</dd>

          <dt>Audit Trail</dt>
          <dd>A chronologically ordered sequence of Audit Events that
              can be examined to reconstruct system behavior. When
              cryptographic integrity mechanisms are applied, such trails
              may be "verifiable" in the sense that tampering can be
              detected.</dd>

          <dt>Behavioral Evidence</dt>
          <dd>In this document, "behavioral evidence" refers to records
              of what a system has done (its actions and decisions), as
              distinct from RATS Evidence, which describes what state a
              system is in (its configuration and properties). This term
              is used specifically to avoid confusion with the precise
              RATS definition of "Evidence".</dd>
        </dl>
      </section>
    </section>

    <!-- ===== Section 3: Conceptual Layering ===== -->
    <section anchor="conceptual-layering">
      <name>Conceptual Layering</name>

      <t>
        This section describes an observational framework for understanding
        how attestation and behavioral evidence recording address different
        verification needs. This framework is purely conceptual and does not
        define any technical integration or protocol.
      </t>

      <section anchor="attestation-layer">
        <name>Attestation Layer (RATS)</name>

        <t>
          The RATS architecture <xref target="RFC9334"/> addresses
          trustworthiness assessment through remote attestation. At its
          core, attestation answers questions about system state:
        </t>

        <ul>
          <li>Is the system's software configuration as expected?</li>
          <li>Is the system running on genuine, uncompromised hardware?</li>
          <li>Does the system's current state match known-good Reference Values?</li>
          <li>Can the system's identity and configuration be cryptographically
              verified?</li>
        </ul>

        <t>
          These questions are fundamentally about the properties and
          characteristics of a system at a point in time or across a
          measurement period. The RATS architecture provides mechanisms for
          generating, conveying, and appraising Evidence that enables Relying
          Parties to make trust decisions about Attesters.
        </t>

        <t>Key characteristics of attestation as defined by RATS:</t>

        <ul>
          <li>Focuses on system state and configuration</li>
          <li>Enables trust decisions before or during interactions</li>
          <li>Produces Attestation Results for Relying Party consumption</li>
          <li>May rely on hardware roots of trust</li>
          <li>Addresses the question: "Can I trust this system's current state?"</li>
        </ul>
      </section>

      <section anchor="behavioral-evidence-layer">
        <name>Behavioral Evidence Layer</name>

        <t>
          Behavioral evidence recording mechanisms address a different
          category of verification need. Rather than assessing system state,
          they record what a system has done:
        </t>

        <ul>
          <li>What decisions did an algorithm make?</li>
          <li>What actions did the system execute?</li>
          <li>What inputs led to what outputs?</li>
          <li>What was the sequence and timing of operations?</li>
        </ul>

        <t>
          These questions are fundamentally about system behavior over time.
          Verifiable behavioral evidence mechanisms could provide ways to
          record, preserve, and verify the integrity of behavioral records,
          enabling after-the-fact examination of system actions.
        </t>

        <t>Key characteristics of behavioral evidence systems (in general terms):</t>

        <ul>
          <li>Focus on system behavior and decisions</li>
          <li>Enable verification after events have occurred</li>
          <li>Produce verifiable records for post-hoc examination</li>
          <li>May rely on cryptographic structures such as hash chains,
              Merkle trees, or append-only logs</li>
          <li>Address the question: "What did this system do?"</li>
        </ul>

        <t>
          As an illustrative example, VCP <xref target="VCP-SPEC"/> defines
          audit trails using three integrity layers: event integrity (hashing),
          structural integrity (Merkle trees), and external verifiability
          (digital signatures and anchoring). Certificate Transparency
          <xref target="RFC6962"/> uses similar cryptographic techniques
          for a different purpose (public logging of certificates). Other
          behavioral evidence systems could employ different mechanisms.
        </t>
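        <t>
          As a rough, non-normative sketch of such layering (and not a
          rendering of VCP's actual formats), the following Python fragment
          hashes individual events and commits to a batch with a Merkle
          root; the third layer, external verifiability, would sign or
          anchor that root and is omitted here to keep the sketch
          self-contained:
        </t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over a list of leaf hashes."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Layer 1, event integrity: each recorded event is hashed individually.
events = [b"order placed", b"order filled", b"position closed"]
leaf_hashes = [h(e) for e in events]

# Layer 2, structural integrity: one root commits to the whole batch,
# so changing or reordering any event changes the root.
root = merkle_root(leaf_hashes)
```
]]></sourcecode>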
      </section>

      <section anchor="separation-of-concerns">
        <name>Separation of Concerns</name>

        <t>
          The distinction between attestation and behavioral evidence can be
          understood as a separation of concerns:
        </t>

        <table>
          <name>Conceptual Comparison of Attestation and Behavioral Evidence</name>
          <thead>
            <tr>
              <th>Aspect</th>
              <th>Attestation (RATS)</th>
              <th>Behavioral Evidence</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Primary Question</td>
              <td>Is this system trustworthy?</td>
              <td>What did this system do?</td>
            </tr>
            <tr>
              <td>Focus</td>
              <td>System state</td>
              <td>System behavior</td>
            </tr>
            <tr>
              <td>Temporal Scope</td>
              <td>Point-in-time or measurement period</td>
              <td>Historical record of actions</td>
            </tr>
            <tr>
              <td>Primary Use Case</td>
              <td>Trust decision before/during interaction</td>
              <td>Post-hoc examination and accountability</td>
            </tr>
            <tr>
              <td>Trust Anchor</td>
              <td>Hardware/software roots of trust</td>
              <td>Logging infrastructure integrity</td>
            </tr>
          </tbody>
        </table>

        <t>
          This separation suggests that attestation and behavioral evidence
          address different needs. This document observes that neither
          mechanism fully substitutes for the other, but explicitly does
          not claim that using both together creates a composed security
          property (see <xref target="security-considerations"/>).
        </t>
      </section>
    </section>

    <!-- ===== Section 4: Relationship ===== -->
    <section anchor="relationship">
      <name>Conceptual Relationship</name>

      <t>
        This section discusses the conceptual relationship between attestation
        and behavioral evidence. All discussion in this section is observational
        and does not define any protocol, binding, or security composition.
      </t>

      <section anchor="different-questions">
        <name>Different Questions, Different Answers</name>

        <t>
          A key observation is that attestation and behavioral evidence
          answer different questions:
        </t>

        <dl>
          <dt>Attestation answers:</dt>
          <dd>"At time T, was this system in a trustworthy state?"</dd>

          <dt>Behavioral evidence answers:</dt>
          <dd>"Between times T1 and T2, what actions did this system perform?"</dd>
        </dl>

        <t>Neither question's answer implies the other's:</t>

        <ul>
          <li>A system could pass attestation at time T but subsequently
              perform unexpected actions (visible only through behavioral
              records, if recorded honestly)</li>
          <li>A system could have complete behavioral records while being
              compromised in ways that attestation might have detected
              (if attestation had been performed)</li>
        </ul>
      </section>

      <section anchor="temporal-considerations">
        <name>Temporal Considerations</name>

        <t>
          Attestation and behavioral evidence may operate on different
          temporal rhythms:
        </t>

        <dl>
          <dt>Attestation Patterns:</dt>
          <dd>
            <ul>
              <li>At system boot or initialization</li>
              <li>Periodically during operation</li>
              <li>On-demand when a Relying Party requests</li>
            </ul>
          </dd>

          <dt>Behavioral Evidence Patterns:</dt>
          <dd>
            <ul>
              <li>Continuously, as events occur</li>
              <li>At varying granularities depending on event significance</li>
              <li>With periodic integrity commitments</li>
            </ul>
          </dd>
        </dl>

        <t>
          One conceptual model involves attestation confirming system state
          at discrete moments, while behavioral evidence records actions
          between those moments. However, this document explicitly notes
          that no cryptographic mechanism is defined to bind these two
          types of evidence together. The "gap" between attestation events
          represents a period during which system state could change
          without detection.
        </t>
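        <t>
          The gap can be made concrete with a small Python sketch (all
          field shapes hypothetical): given attestation validity windows
          and timestamped behavioral records, any record falling outside
          every window remains uncorroborated. Even an empty result is
          only a temporal correlation, since nothing binds the records to
          the attested system:
        </t>
        <sourcecode type="python"><![CDATA[
```python
def uncorroborated(records, attestations):
    """Return records whose timestamps no attestation window covers.

    records:      list of (timestamp, description) pairs
    attestations: list of (valid_from, valid_until) pairs
    """
    gaps = []
    for ts, desc in records:
        covered = any(start <= ts <= end for start, end in attestations)
        if not covered:
            gaps.append((ts, desc))
    return gaps
```
]]></sourcecode>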
      </section>

      <section anchor="no-cryptographic-binding">
        <name>No Cryptographic Binding Defined</name>

        <t>
          This document explicitly does NOT define any cryptographic
          binding between Attestation Results and behavioral evidence
          records. Such a binding would require:
        </t>

        <ul>
          <li>A protocol for cryptographically linking Attestation Results
              to specific behavioral evidence entries</li>
          <li>Mechanisms to prevent replay, substitution, or mismatch
              attacks</li>
          <li>Clear semantics for what such a binding would prove</li>
          <li>Analysis of the composed security properties</li>
        </ul>

        <t>
          None of these are provided by this document. Any deployment
          considering both mechanisms should not assume that informal
          correlation provides the security properties that a formal
          cryptographic binding might offer.
        </t>
      </section>
    </section>

    <!-- ===== Section 5: Example Flow ===== -->
    <section anchor="example-flow">
      <name>Illustrative Example</name>

      <t>
        This section provides a purely illustrative, non-normative example
        of how attestation and behavioral evidence could conceptually
        relate in a hypothetical scenario. This example:
      </t>

      <ul>
        <li>Does NOT define any protocol or establish any requirements</li>
        <li>Is NOT exhaustive of possible deployment patterns</li>
        <li>Does NOT imply that this is the only or best way to use
            these mechanisms</li>
        <li>Is intended solely to illustrate the conceptual distinctions
            discussed in this document</li>
      </ul>

      <t>Consider a hypothetical automated decision-making system:</t>

      <dl>
        <dt>Phase 1: System Startup</dt>
        <dd>
          The system boots and undergoes remote attestation. A Verifier
          confirms that the system's software, configuration, and
          hardware environment match expected Reference Values. A Relying
          Party receives Attestation Results indicating the system is
          in a trustworthy state.
        </dd>

        <dt>Phase 2: Operational Period</dt>
        <dd>
          During operation, the system generates behavioral evidence
          for significant actions. These records are maintained in a
          tamper-evident structure. Note: Nothing cryptographically
          binds these records to the earlier attestation.
        </dd>

        <dt>Phase 3: Post-Hoc Examination</dt>
        <dd>
          An examiner later wishes to understand the system's behavior.
          The examiner could: (a) check that Attestation Results existed
          for relevant times, and (b) examine the behavioral evidence
          for those times. However, the examiner should understand that
          the informal correlation between these does not constitute
          a cryptographic proof (see <xref target="trust-gaps"/>).
        </dd>
      </dl>
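      <t>
        The examiner's two checks can be sketched as follows (a
        hypothetical Python fragment with deliberately qualified
        verdicts): the attestation check and the evidence-integrity check
        are performed independently, and even when both succeed the
        outcome is labeled a correlation rather than a proof:
      </t>
      <sourcecode type="python"><![CDATA[
```python
def examine(attestation_times, evidence_ok, period):
    """Post-hoc examination sketch: two independent checks, no binding.

    attestation_times: timestamps at which Attestation Results existed
    evidence_ok:       separately obtained outcome of behavioral-evidence
                       integrity verification
    period:            (start, end) window under examination
    """
    start, end = period
    attested = any(start <= t <= end for t in attestation_times)
    if attested and evidence_ok:
        # Informal correlation only: nothing cryptographically binds
        # the attestation to these behavioral records.
        return "correlated (not a composed proof)"
    if attested:
        return "attested, but evidence integrity failed"
    if evidence_ok:
        return "evidence intact, but no attestation in period"
    return "neither check passed"
```
]]></sourcecode>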

      <t>
        This example is purely conceptual. Actual deployments would require
        careful security analysis specific to their threat model.
      </t>
    </section>

    <!-- ===== Section 6: Non-Goals ===== -->
    <section anchor="non-goals">
      <name>Non-Goals and Explicit Out-of-Scope Items</name>

      <t>
        To maintain clarity about this document's limited scope, the
        following items are explicitly out of scope and are NOT addressed:
      </t>

      <section anchor="no-rats-changes">
        <name>No Changes to RATS</name>

        <t>This document does NOT:</t>

        <ul>
          <li>Propose any modifications to the RATS architecture</li>
          <li>Define any new attestation mechanisms, Evidence formats, or
              Attestation Result structures</li>
          <li>Suggest that RATS should incorporate behavioral evidence
              capabilities</li>
          <li>Propose work items for the RATS Working Group</li>
        </ul>
      </section>

      <section anchor="no-protocol">
        <name>No Protocol or Format Definition</name>

        <t>This document does NOT:</t>

        <ul>
          <li>Specify any protocol for connecting attestation and
              behavioral evidence</li>
          <li>Define any new claims, tokens, or data structures</li>
          <li>Mandate any particular behavioral evidence format or
              mechanism</li>
          <li>Define interoperability requirements between attestation
              and behavioral evidence systems</li>
        </ul>
      </section>

      <section anchor="no-requirements">
        <name>No Normative Requirements</name>

        <t>This document does NOT:</t>

        <ul>
          <li>Establish normative requirements for either attestation or
              behavioral evidence implementations</li>
          <li>Specify trust relationships or delegation models</li>
          <li>Define any cryptographic binding or security composition</li>
        </ul>

        <t>
          The sole purpose of this document is to observe and explain the
          conceptual relationship between attestation and behavioral evidence
          as distinct mechanisms addressing different verification questions.
        </t>
      </section>
    </section>

    <!-- ===== Section 7: Security Considerations ===== -->
    <section anchor="security-considerations">
      <name>Security Considerations</name>

      <t>
        This document is purely informational and does not define any
        protocols or mechanisms. However, because it discusses the
        conceptual relationship between two security-relevant mechanisms,
        the following security considerations are important.
      </t>

      <section anchor="no-security-composition">
        <name>No Security Composition Claimed</name>

        <t>
          This document explicitly does NOT claim that considering
          attestation and behavioral evidence together creates any
          composed security property. In particular:
        </t>

        <ul>
          <li>No cryptographic binding is defined between Attestation
              Results and behavioral evidence records</li>
          <li>The informal observation that both mechanisms "could be
              used together" does not imply any formal security guarantee</li>
          <li>Each mechanism's security properties must be analyzed
              independently</li>
          <li>The combination does not automatically inherit the
              security properties of either mechanism</li>
        </ul>
      </section>

      <section anchor="false-sense-of-security">
        <name>Warning Against False Sense of Security</name>

        <t>
          Readers are cautioned against assuming that deploying both
          attestation and behavioral evidence provides comprehensive
          security. Specifically:
        </t>

        <ul>
          <li><strong>Attestation does not guarantee future behavior:</strong>
              A system that passes attestation at time T could be
              compromised at time T+1</li>
          <li><strong>Behavioral evidence does not guarantee system integrity:</strong>
              A compromised system could generate false or incomplete
              behavioral records</li>
          <li><strong>Informal correlation is not cryptographic proof:</strong>
              Observing that a system had valid Attestation Results and
              behavioral evidence for the same time period does not
              constitute a cryptographic proof of correct behavior</li>
        </ul>
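As a non-normative illustration of the third point above, the following sketch (with hypothetical, simplified field names; this is not the RFC 9334 Attestation Result format) shows why temporal overlap between an Attestation Result and a behavioral evidence record is mere correlation: a naive check that ignores identity accepts a record from an entirely different system.

```python
from dataclasses import dataclass

@dataclass
class AttestationResult:
    # Hypothetical, simplified fields for illustration only.
    system_id: str
    valid_from: float
    valid_until: float

@dataclass
class EvidenceRecord:
    # Hypothetical behavioral evidence record.
    system_id: str
    timestamp: float

def informally_correlated(ar: AttestationResult, rec: EvidenceRecord) -> bool:
    """Naive temporal overlap check: the record's timestamp falls inside
    the Attestation Result's validity window. Nothing here constitutes
    a cryptographic binding between the two artifacts."""
    return ar.valid_from <= rec.timestamp <= ar.valid_until

# A record from a *different* system still "correlates", because the
# check never links the two identities -- the diversion risk in miniature.
ar = AttestationResult("system-A", valid_from=100.0, valid_until=200.0)
rec = EvidenceRecord("system-B", timestamp=150.0)
print(informally_correlated(ar, rec))  # True: overlap alone proves nothing
```

The sketch is deliberately broken by design; fixing it would require exactly the kind of binding that this document does not define.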
      </section>

      <section anchor="conceptual-attack-considerations">
        <name>Conceptual Attack Considerations</name>

        <t>
          At a conceptual level (without defining any specific protocol),
          deployments considering both attestation and behavioral evidence
          should be aware of risks including:
        </t>

        <dl>
          <dt>Replay:</dt>
          <dd>
            Without appropriate binding, an attacker could potentially
            present valid attestation from one context alongside behavioral
            evidence from another context.
          </dd>

          <dt>Diversion:</dt>
          <dd>
            An attacker could potentially direct a Relying Party to
            behavioral evidence from a different (legitimately attested)
            system than the one actually performing operations.
          </dd>

          <dt>Relay:</dt>
          <dd>
            An intermediary could potentially relay attestation challenges
            to a legitimate system while recording behavioral evidence
            from a compromised system.
          </dd>

          <dt>Selective omission:</dt>
          <dd>
            A system could pass attestation yet selectively omit events
            from its behavioral evidence records, particularly if the
            logging mechanism is not included in attestation measurements.
          </dd>

          <dt>Time-of-check to time-of-use (TOCTOU):</dt>
          <dd>
            A system's state at attestation time may differ from its state
            when generating behavioral evidence, particularly if
            re-attestation is infrequent.
          </dd>
        </dl>
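The selective-omission risk listed above can be shown concretely. In the purely illustrative sketch below (not a mechanism defined by this document; HMAC is used only for brevity, and the hard-coded key stands in for out-of-scope key management), every record in a log is authenticated individually. Dropping a record leaves all surviving records verifiable, so per-record checks alone cannot detect the gap.

```python
import hashlib
import hmac

KEY = b"demo-key"  # illustrative only; real key management is out of scope

def sign(record: bytes) -> bytes:
    # Each record is authenticated independently of its neighbors.
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify_log(log: list[tuple[bytes, bytes]]) -> bool:
    # Per-record verification carries no notion of completeness.
    return all(hmac.compare_digest(sign(rec), tag) for rec, tag in log)

records = [b"open session", b"transfer funds", b"close session"]
log = [(r, sign(r)) for r in records]

# An attacker omits the middle record; every surviving record still
# verifies, so the omission is invisible to per-record checks.
tampered = [log[0], log[2]]
print(verify_log(log), verify_log(tampered))  # True True
```

Detecting such omission requires some linking structure across records (for example, sequence counters or hash chaining as in Certificate Transparency-style logs), which is precisely the kind of mechanism left out of scope here.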

        <t>
          These considerations are presented at a conceptual level to
          inform threat modeling. This document does not define mechanisms
          to address these risks.
        </t>
      </section>

      <section anchor="independent-security-analysis">
        <name>Independent Security Analysis Required</name>

        <t>The following security considerations apply independently:</t>

        <dl>
          <dt>Attestation Security:</dt>
          <dd>
            Security considerations for remote attestation are thoroughly
            addressed in <xref target="RFC9334"/>. These considerations
            are unchanged by this document.
          </dd>

          <dt>Behavioral Evidence Security:</dt>
          <dd>
            <t>Behavioral evidence recording mechanisms have their own
            security considerations, including:</t>
            <ul>
              <li>Key management for signing records</li>
              <li>Protection against selective omission</li>
              <li>Integrity of external anchoring mechanisms</li>
              <li>Privacy considerations for sensitive data</li>
              <li>Trustworthiness of the logging infrastructure itself</li>
            </ul>
          </dd>
        </dl>
      </section>

      <section anchor="threat-model-unchanged">
        <name>RATS Threat Model Unchanged</name>

        <t>
          This document does not alter the RATS threat model as defined
          in <xref target="RFC9334"/>. It introduces no new attack surfaces
          to the RATS architecture. Any deployment-specific threat analysis
          should consider attestation and behavioral evidence as separate
          mechanisms with independent trust assumptions and failure modes.
        </t>
      </section>
    </section>

    <!-- ===== Section 8: IANA Considerations ===== -->
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>

      <t>This document has no IANA actions.</t>
    </section>

  </middle>

  <back>

    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>

        <reference anchor="RFC9334" target="https://www.rfc-editor.org/info/rfc9334">
          <front>
            <title>Remote ATtestation procedureS (RATS) Architecture</title>
            <author initials="H." surname="Birkholz" fullname="Henk Birkholz"/>
            <author initials="D." surname="Thaler" fullname="Dave Thaler"/>
            <author initials="M." surname="Richardson" fullname="Michael Richardson"/>
            <author initials="N." surname="Smith" fullname="Ned Smith"/>
            <author initials="W." surname="Pan" fullname="Wei Pan"/>
            <date month="January" year="2023"/>
          </front>
          <seriesInfo name="RFC" value="9334"/>
          <seriesInfo name="DOI" value="10.17487/RFC9334"/>
        </reference>

      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="RFC6962" target="https://www.rfc-editor.org/info/rfc6962">
          <front>
            <title>Certificate Transparency</title>
            <author initials="B." surname="Laurie" fullname="Ben Laurie"/>
            <author initials="A." surname="Langley" fullname="Adam Langley"/>
            <author initials="E." surname="Kasper" fullname="Emilia Kasper"/>
            <date month="June" year="2013"/>
          </front>
          <seriesInfo name="RFC" value="6962"/>
          <seriesInfo name="DOI" value="10.17487/RFC6962"/>
        </reference>

        <reference anchor="NIST-SP800-92" target="https://csrc.nist.gov/publications/detail/sp/800-92/final">
          <front>
            <title>Guide to Computer Security Log Management</title>
            <author>
              <organization>National Institute of Standards and Technology</organization>
            </author>
            <date month="September" year="2006"/>
          </front>
          <seriesInfo name="NIST Special Publication" value="800-92"/>
        </reference>

        <reference anchor="VCP-SPEC" target="https://veritaschain.org/spec">
          <front>
            <title>VeritasChain Protocol (VCP) Specification Version 1.1</title>
            <author>
              <organization>VeritasChain Standards Organization</organization>
            </author>
            <date year="2025"/>
          </front>
        </reference>

      </references>

    </references>

    <section anchor="acknowledgments" numbered="false">
      <name>Acknowledgments</name>

      <t>
        The author thanks the RATS Working Group for developing the
        comprehensive attestation architecture documented in RFC 9334.
        This document builds upon and respects the careful design work
        reflected in that architecture. The author also thanks reviewers
        who provided feedback emphasizing the importance of clearly
        distinguishing conceptual observations from security claims.
      </t>
    </section>

    <section anchor="changes" numbered="false">
      <name>Changes from -00</name>

      <ul>
        <li>Strengthened Motivation section with explicit X/Y/X+Y reasoning</li>
        <li>Added explicit "Remaining Trust Gaps" subsection</li>
        <li>Significantly expanded Security Considerations with warnings
            against false sense of security and conceptual attack
            considerations</li>
        <li>Added NIST SP 800-92 reference to ground "audit" terminology</li>
        <li>Added explicit "No Cryptographic Binding Defined" section</li>
        <li>Clarified that the illustrative example is non-normative and
            non-exhaustive</li>
        <li>Restructured Non-Goals into categorized subsections</li>
        <li>Added RFC 9334 section references to terminology definitions</li>
        <li>Removed unused references (SCITT architecture,
            reference-interaction-models, RFC 9162)</li>
      </ul>
    </section>

  </back>

</rfc>
