<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc [
  <!ENTITY RFC2119 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
  <!ENTITY RFC8174 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml">
  <!ENTITY RFC8439 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8439.xml">
  <!ENTITY RFC5869 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5869.xml">
  <!ENTITY RFC8610 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8610.xml">
  <!ENTITY RFC8949 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8949.xml">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     category="exp"
     docName="draft-myclerk-vfs-00"
     ipr="trust200902"
     obsoletes=""
     updates=""
     submissionType="independent"
     xml:lang="en"
     version="3">

  <front>
    <title abbrev="MyClerk VFS">The MyClerk Virtual File System: Distributed Storage for Family Networks</title>

    <seriesInfo name="Internet-Draft" value="draft-myclerk-vfs-00"/>

    <author fullname="Michael J. Arcan" initials="M.J." surname="Arcan">
      <organization>Arcan Consulting</organization>
      <address>
        <email>rfc@arcan-consulting.de</email>
        <uri>https://myclerk.eu</uri>
      </address>
    </author>

    <date year="2025" month="December"/>

    <area>Applications</area>
    <workgroup>Independent Submission</workgroup>

    <keyword>file system</keyword>
    <keyword>distributed</keyword>
    <keyword>erasure coding</keyword>
    <keyword>encryption</keyword>
    <keyword>family</keyword>

    <abstract>
      <t>
        This document specifies the MyClerk Virtual File System (VFS), a
        distributed storage system designed for family networks. The VFS
        provides end-to-end encrypted, redundant storage across heterogeneous
        nodes using adaptive chunking and Reed-Solomon erasure coding
        <xref target="REED-SOLOMON"/>.
        Metadata synchronization uses Conflict-free Replicated Data Types
        (CRDTs) <xref target="CRDT"/> with a hybrid LWW-Register and OR-Set
        approach.
      </t>
      <t>
        The system supports tiered storage, automatic health monitoring with
        self-healing capabilities, and federation between separate family
        installations. It is designed to be accessible to non-technical users
        while providing enterprise-grade data durability.
      </t>
    </abstract>

  </front>

  <middle>

    <!-- ============================================================ -->
    <section anchor="introduction">
      <name>Introduction</name>

      <t>
        Modern families have storage devices distributed across multiple
        locations: a primary home with NAS, a vacation house with a mini-PC,
        grandparents with a Raspberry Pi, and mobile devices requiring remote
        access. The MyClerk VFS unifies these resources into a coherent,
        redundant file system.
      </t>

      <t>
        The VFS operates on top of the MyClerk Protocol
        (draft-myclerk-protocol) for all communication. This document specifies
        the storage layer, including data organization, redundancy mechanisms,
        and metadata synchronization.
      </t>

      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>
          The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
          "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
          "OPTIONAL" in this document are to be interpreted as described in
          BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and
          only when, they appear in all capitals, as shown here.
        </t>
      </section>

      <section anchor="terminology">
        <name>Terminology</name>

        <dl>
          <dt>Chunk</dt>
          <dd>A fixed-size segment of file data, typically 4-16 MB, that forms
              the unit of storage, encryption, and distribution.</dd>

          <dt>Fragment</dt>
          <dd>One piece of an erasure-coded chunk. A chunk with k=3, m=2
              produces 5 fragments.</dd>

          <dt>Core Node</dt>
          <dd>A storage node with high availability (&gt;95% uptime over 30
              days) that participates in guaranteed fragment storage.</dd>

          <dt>Bonus Node</dt>
          <dd>A storage node with variable availability that provides
              additional redundancy when online.</dd>

          <dt>CRDT</dt>
          <dd>Conflict-free Replicated Data Type. A data structure that can
              be replicated across multiple nodes and merged without
              conflicts.</dd>

          <dt>LWW-Register</dt>
          <dd>Last-Writer-Wins Register. A CRDT where concurrent writes are
              resolved by timestamp.</dd>

          <dt>OR-Set</dt>
          <dd>Observed-Remove Set. A CRDT that supports add and remove
              operations without conflicts.</dd>
        </dl>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="architecture">
      <name>Architecture Overview</name>

      <t>
        The VFS consists of four layers:
      </t>

      <ol>
        <li><strong>Virtual Namespace:</strong> CRDT-based directory structure
            visible to users.</li>
        <li><strong>Content-Addressed Chunk Store:</strong> Encryption,
            chunking, and hashing layer.</li>
        <li><strong>Distributed Fragment Storage:</strong> Erasure coding and
            distribution across nodes.</li>
        <li><strong>Federation Layer:</strong> Sharing between separate VFS
            installations.</li>
      </ol>

      <section anchor="design-principles">
        <name>Design Principles</name>

        <ul>
          <li><strong>Zero Data Loss:</strong> Erasure coding ensures data
              survives multiple node failures.</li>
          <li><strong>End-to-End Encryption:</strong> Only authorized users
              can read data; storage nodes see only ciphertext.</li>
          <li><strong>Self-Healing:</strong> Automatic fragment redistribution
              when nodes fail.</li>
          <li><strong>User-Friendly:</strong> Complex internals hidden behind
              simple folder interface.</li>
        </ul>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="chunking">
      <name>Adaptive Chunking</name>

      <t>
        The VFS uses adaptive chunk sizes based on file size to optimize
        storage efficiency:
      </t>

      <table anchor="chunk-sizes">
        <name>Adaptive Chunk Sizes</name>
        <thead>
          <tr>
            <th>File Size</th>
            <th>Chunk Size</th>
            <th>Rationale</th>
          </tr>
        </thead>
        <tbody>
          <tr><td>&lt; 1 MB</td><td>No chunking</td><td>Avoid overhead</td></tr>
          <tr><td>1-100 MB</td><td>4 MB</td><td>Good balance</td></tr>
          <tr><td>&gt; 100 MB</td><td>16 MB</td><td>Fewer fragments</td></tr>
        </tbody>
      </table>

      <t>
        Files smaller than 1 MB are stored as a single unit. This avoids the
        overhead of chunk management for small configuration files and
        documents.
      </t>

    </section>

    <!-- ============================================================ -->
    <section anchor="erasure-coding">
      <name>Erasure Coding</name>

      <t>
        The VFS uses Reed-Solomon erasure coding to provide redundancy. Each
        chunk is encoded into k data fragments plus m parity fragments. Any k
        fragments are sufficient to reconstruct the original chunk.
      </t>

      <section anchor="erasure-profiles">
        <name>Erasure Coding Profiles</name>

        <table anchor="ec-profiles">
          <name>Erasure Coding Profiles</name>
          <thead>
            <tr>
              <th>Profile</th>
              <th>k</th>
              <th>m</th>
              <th>Overhead</th>
              <th>Tolerance</th>
              <th>Use Case</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>Economy</td><td>4</td><td>1</td><td>25%</td><td>1 node</td><td>Non-critical data</td></tr>
            <tr><td>Standard</td><td>3</td><td>2</td><td>67%</td><td>2 nodes</td><td>Recommended default</td></tr>
            <tr><td>Critical</td><td>4</td><td>4</td><td>100%</td><td>4 nodes</td><td>Important documents</td></tr>
            <tr><td>Paranoid</td><td>4</td><td>5+</td><td>&gt;125%</td><td>5+ nodes</td><td>Maximum security</td></tr>
          </tbody>
        </table>

        <t>
          The Standard profile with k=3, m=2 is RECOMMENDED for most use cases.
          It provides tolerance for two simultaneous node failures with
          reasonable storage overhead.
        </t>
      </section>

      <section anchor="fragment-distribution">
        <name>Fragment Distribution</name>

        <t>
          Fragments MUST be distributed across different nodes to maximize
          fault tolerance. The distribution algorithm SHOULD:
        </t>

        <ul>
          <li>Place at least k fragments on Core Nodes</li>
          <li>Distribute fragments across different physical locations when
              possible</li>
          <li>Use Bonus Nodes for additional copies</li>
          <li>Avoid placing multiple fragments on the same node</li>
        </ul>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="health-monitor">
      <name>Fragment Health Monitor</name>

      <t>
        The VFS continuously monitors fragment availability and takes
        corrective action when necessary.
      </t>

      <section anchor="health-levels">
        <name>Health Levels</name>

        <table anchor="health-status">
          <name>Fragment Health Status</name>
          <thead>
            <tr>
              <th>Status</th>
              <th>Condition</th>
              <th>Wait Time</th>
              <th>Action</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>GREEN</td><td>&gt;= k+2 online</td><td>-</td><td>No action</td></tr>
            <tr><td>YELLOW</td><td>k+1 online</td><td>6 hours</td><td>Warning, then redistribute</td></tr>
            <tr><td>ORANGE</td><td>k online</td><td>30 minutes</td><td>Immediate redistribution</td></tr>
            <tr><td>RED</td><td>&lt; k online</td><td>0</td><td>Emergency recovery</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="node-failure-timeline">
        <name>Node Failure Escalation</name>

        <t>
          When a node becomes unreachable, the following escalation timeline
          applies:
        </t>

        <dl>
          <dt>0-30 minutes</dt>
          <dd>Ping retry with exponential backoff (10s, 30s, 90s). No status
              change or alerts.</dd>

          <dt>30 minutes - 1 hour</dt>
          <dd>Alert sent to family administrator: "Node XY unreachable".</dd>

          <dt>1-6 hours</dt>
          <dd>YELLOW status. System evaluates redistribution options and
              checks capacity on other nodes.</dd>

          <dt>6-24 hours</dt>
          <dd>Redistribution begins. Fragments are copied to available nodes.
              Priority: Critical &gt; Standard &gt; Economy. Bandwidth limit:
              50% of available capacity.</dd>

          <dt>&gt;72 hours</dt>
          <dd>Node downgrade: Core Node becomes Bonus Node. Fragments are no
              longer primarily placed on this node.</dd>
        </dl>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="crdt-metadata">
      <name>CRDT-Based Metadata</name>

      <t>
        The VFS uses Conflict-free Replicated Data Types (CRDTs) for metadata
        synchronization across nodes. This ensures eventual consistency
        without coordination.
      </t>

      <section anchor="hybrid-crdt">
        <name>Hybrid CRDT Approach</name>

        <t>
          The VFS employs a hybrid CRDT strategy:
        </t>

        <ul>
          <li><strong>LWW-Register:</strong> Used for file content references,
              file attributes (size, mtime), and single-value fields.</li>
          <li><strong>OR-Set:</strong> Used for directory entries, tag lists,
              and permission sets.</li>
        </ul>

        <t>
          This combination provides optimal semantics for file system
          operations: LWW-Register ensures the latest file version wins,
          while OR-Set correctly handles concurrent file creation and
          deletion in directories.
        </t>
      </section>

      <section anchor="lww-register">
        <name>LWW-Register Specification</name>

        <t>
          The following CDDL <xref target="RFC8610"/> schema defines the
          LWW-Register structure. Values are encoded using CBOR
          <xref target="RFC8949"/>.
        </t>

        <sourcecode type="cddl"><![CDATA[
lww-register<T> = {
    value: T,
    timestamp: uint,          ; Unix microseconds
    node-id: bstr .size 16,   ; Writer node ID
}

; Merge rule: Higher timestamp wins
; Tie-breaker: Lexicographically higher node-id wins
]]></sourcecode>
      </section>

      <section anchor="or-set">
        <name>OR-Set Specification</name>

        <sourcecode type="cddl"><![CDATA[
or-set<T> = {
    elements: [* or-set-element<T>],
}

or-set-element<T> = {
    value: T,
    add-id: bstr .size 16,    ; Unique ID for this add operation
    removed: bool,            ; Tombstone flag
}

; Add: Create new element with unique add-id
; Remove: Set removed=true for all elements with matching value
; Merge: Union of all elements, removed flags propagate
]]></sourcecode>
      </section>

      <section anchor="file-metadata">
        <name>File Metadata Structure</name>

        <sourcecode type="cddl"><![CDATA[
file-metadata = {
    id: bstr .size 16,                    ; Unique file ID
    name: lww-register<tstr>,             ; File name
    parent: lww-register<bstr .size 16>,  ; Parent directory ID
    content-hash: lww-register<bstr .size 32>, ; BLAKE3 hash
    size: lww-register<uint>,             ; File size in bytes
    mtime: lww-register<uint>,            ; Modification time
    chunks: lww-register<[* chunk-ref]>,  ; Chunk references
    tags: or-set<tstr>,                   ; User tags
    permissions: or-set<permission>,      ; Access permissions
}

chunk-ref = {
    hash: bstr .size 32,      ; BLAKE3 hash of chunk
    offset: uint,             ; Offset in file
    size: uint,               ; Chunk size
    ec-profile: tstr,         ; Erasure coding profile
}

permission = {
    principal: bstr .size 16, ; User or group ID
    access: uint,             ; Access flags (read, write, etc.)
}
]]></sourcecode>
      </section>

      <section anchor="directory-metadata">
        <name>Directory Metadata Structure</name>

        <sourcecode type="cddl"><![CDATA[
directory-metadata = {
    id: bstr .size 16,                    ; Unique directory ID
    name: lww-register<tstr>,             ; Directory name
    parent: lww-register<bstr .size 16>,  ; Parent directory ID
    children: or-set<bstr .size 16>,      ; Child file/directory IDs
    mtime: lww-register<uint>,            ; Modification time
    permissions: or-set<permission>,      ; Access permissions
}
]]></sourcecode>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="encryption">
      <name>Encryption</name>

      <t>
        All data is encrypted end-to-end using ChaCha20-Poly1305
        <xref target="RFC8439"/>. Storage nodes never see plaintext.
      </t>

      <section anchor="key-hierarchy">
        <name>Key Hierarchy</name>

        <t>
          Keys are derived hierarchically using HKDF <xref target="RFC5869"/>:
        </t>

        <artwork><![CDATA[
Family Master Key (FMK)
    |
    +-- Folder Key: /family/photos/
    |       |
    |       +-- File Key: photo.jpg
    |               |
    |               +-- Chunk Key: a1b2c3...
    |
    +-- Folder Key: /family/documents/
    |
    +-- Session Keys (for Federation)
]]></artwork>
      </section>

      <section anchor="key-derivation">
        <name>Key Derivation</name>

        <sourcecode><![CDATA[
folder_key = HKDF-SHA256(
    IKM  = family_master_key,
    salt = folder_id,
    info = "myclerk-vfs-folder-v0",
    L    = 32
)

file_key = HKDF-SHA256(
    IKM  = folder_key,
    salt = file_id,
    info = "myclerk-vfs-file-v0",
    L    = 32
)

chunk_key = HKDF-SHA256(
    IKM  = file_key,
    salt = chunk_index (4 bytes, big-endian),
    info = "myclerk-vfs-chunk-v0",
    L    = 32
)
]]></sourcecode>
      </section>

      <section anchor="nonce-construction">
        <name>Nonce Construction</name>

        <t>
          Each encryption operation requires a unique 96-bit nonce:
        </t>

        <artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Timestamp (32 bits)                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Random (32 bits)                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Counter (32 bits)                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>

        <t>
          This construction provides collision probability less than 2^-80
          per year at 1 million operations per second.
        </t>
      </section>

      <section anchor="content-hashing">
        <name>Content Hashing</name>

        <t>
          All content is hashed using BLAKE3 <xref target="BLAKE3"/> for
          content addressing:
        </t>

        <ul>
          <li>Chunks: BLAKE3 hash of plaintext content</li>
          <li>Files: Merkle tree of chunk hashes</li>
        </ul>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="node-classification">
      <name>Node Classification</name>

      <t>
        Nodes are classified based on their reliability:
      </t>

      <table anchor="node-types">
        <name>Node Classification</name>
        <thead>
          <tr>
            <th>Type</th>
            <th>Criteria</th>
            <th>Role</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Core Node</td>
            <td>&gt;95% uptime (30 days), &lt;100ms latency</td>
            <td>Guaranteed fragment storage (k fragments)</td>
          </tr>
          <tr>
            <td>Bonus Node</td>
            <td>Variable availability</td>
            <td>Additional redundancy when online</td>
          </tr>
        </tbody>
      </table>

      <t>
        Classification is automatic based on observed metrics. A node that
        has been offline for more than 72 hours is automatically downgraded
        from Core to Bonus.
      </t>

    </section>

    <!-- ============================================================ -->
    <section anchor="tiered-storage">
      <name>Tiered Storage</name>

      <t>
        Data is automatically migrated between storage tiers based on access
        patterns:
      </t>

      <table anchor="storage-tiers">
        <name>Storage Tiers</name>
        <thead>
          <tr>
            <th>Tier</th>
            <th>Storage Type</th>
            <th>Latency</th>
            <th>Content</th>
          </tr>
        </thead>
        <tbody>
          <tr><td>Hot</td><td>NVMe/SSD</td><td>&lt;100ms</td><td>Last 7 days</td></tr>
          <tr><td>Warm</td><td>SSD/HDD</td><td>&lt;500ms</td><td>7-30 days</td></tr>
          <tr><td>Cold</td><td>HDD/Archive</td><td>&gt;1s</td><td>&gt;30 days</td></tr>
        </tbody>
      </table>

    </section>

    <!-- ============================================================ -->
    <section anchor="caching">
      <name>Caching</name>

      <t>
        Each client maintains a three-level cache:
      </t>

      <dl>
        <dt>L1 (RAM)</dt>
        <dd>Hot metadata and small files. Configurable size (default 512 MB).
            Latency &lt;10ms.</dd>

        <dt>L2 (SSD)</dt>
        <dd>Frequently accessed chunks. Configurable size (default 50 GB).
            Latency &lt;100ms. LRU eviction.</dd>

        <dt>L3 (Remote)</dt>
        <dd>On-demand retrieval from storage nodes. Multi-source streaming
            for large files.</dd>
      </dl>

    </section>

    <!-- ============================================================ -->
    <section anchor="operations">
      <name>VFS Operations</name>

      <t>
        VFS operations use operation codes in the range 0x0500-0x05CF of the
        MyClerk Protocol.
      </t>

      <section anchor="basic-ops">
        <name>Basic Operations (0x0500-0x050F)</name>

        <table>
          <name>Basic VFS Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x0500</td><td>VFS_MOUNT</td><td>Mount VFS</td></tr>
            <tr><td>0x0501</td><td>VFS_UNMOUNT</td><td>Unmount VFS</td></tr>
            <tr><td>0x0502</td><td>VFS_STAT</td><td>Get file/directory info</td></tr>
            <tr><td>0x0503</td><td>VFS_LIST</td><td>List directory contents</td></tr>
            <tr><td>0x0504</td><td>VFS_READ</td><td>Read file data</td></tr>
            <tr><td>0x0505</td><td>VFS_WRITE</td><td>Write file data</td></tr>
            <tr><td>0x0506</td><td>VFS_CREATE</td><td>Create file/directory</td></tr>
            <tr><td>0x0507</td><td>VFS_DELETE</td><td>Delete file/directory</td></tr>
            <tr><td>0x0508</td><td>VFS_RENAME</td><td>Rename/move</td></tr>
            <tr><td>0x0509</td><td>VFS_SYNC</td><td>Force sync</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="chunk-ops">
        <name>Chunk Operations (0x0510-0x051F)</name>

        <table>
          <name>Chunk Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x0510</td><td>CHUNK_STORE</td><td>Store chunk</td></tr>
            <tr><td>0x0511</td><td>CHUNK_RETRIEVE</td><td>Retrieve chunk</td></tr>
            <tr><td>0x0512</td><td>CHUNK_VERIFY</td><td>Verify integrity</td></tr>
            <tr><td>0x0513</td><td>CHUNK_DELETE</td><td>Delete chunk</td></tr>
            <tr><td>0x0514</td><td>CHUNK_LOCATE</td><td>Find chunk locations</td></tr>
            <tr><td>0x0515</td><td>CHUNK_REPLICATE</td><td>Replicate chunk</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="fragment-ops">
        <name>Fragment Operations (0x0520-0x052F)</name>

        <table>
          <name>Fragment Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x0520</td><td>FRAGMENT_STORE</td><td>Store fragment</td></tr>
            <tr><td>0x0521</td><td>FRAGMENT_RETRIEVE</td><td>Retrieve fragment</td></tr>
            <tr><td>0x0522</td><td>FRAGMENT_STATUS</td><td>Get fragment status</td></tr>
            <tr><td>0x0523</td><td>FRAGMENT_REDISTRIBUTE</td><td>Redistribute</td></tr>
            <tr><td>0x0524</td><td>FRAGMENT_HEALTH_REPORT</td><td>Health report</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="metadata-ops">
        <name>Metadata Operations (0x0530-0x053F)</name>

        <table>
          <name>Metadata Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x0530</td><td>META_SYNC_REQUEST</td><td>Request metadata sync</td></tr>
            <tr><td>0x0531</td><td>META_SYNC_DIFF</td><td>Send diff</td></tr>
            <tr><td>0x0532</td><td>META_CONFLICT_RESOLVE</td><td>Resolve conflict</td></tr>
            <tr><td>0x0533</td><td>META_SNAPSHOT</td><td>Create snapshot</td></tr>
            <tr><td>0x0534</td><td>META_RESTORE</td><td>Restore from snapshot</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="share-ops">
        <name>Sharing Operations (0x0550-0x055F)</name>

        <table>
          <name>Sharing Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x0550</td><td>SHARE_CREATE</td><td>Create share</td></tr>
            <tr><td>0x0551</td><td>SHARE_ACCEPT</td><td>Accept share</td></tr>
            <tr><td>0x0552</td><td>SHARE_REVOKE</td><td>Revoke share</td></tr>
            <tr><td>0x0553</td><td>SHARE_LIST</td><td>List shares</td></tr>
            <tr><td>0x0554</td><td>SHARE_UPDATE</td><td>Update permissions</td></tr>
            <tr><td>0x0555</td><td>COMMON_FOLDER_CREATE</td><td>Create common folder</td></tr>
            <tr><td>0x0556</td><td>COMMON_FOLDER_JOIN</td><td>Join common folder</td></tr>
            <tr><td>0x0557</td><td>COMMON_FOLDER_LEAVE</td><td>Leave common folder</td></tr>
            <tr><td>0x0558</td><td>COMMON_FOLDER_SYNC</td><td>Force sync</td></tr>
          </tbody>
        </table>
      </section>

      <section anchor="emergency-ops">
        <name>Emergency Operations (0x05C0-0x05CF)</name>

        <table>
          <name>Emergency Operations</name>
          <thead>
            <tr><th>Code</th><th>Name</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>0x05C0</td><td>EMERGENCY_REVOKE</td><td>Immediate revocation</td></tr>
            <tr><td>0x05C1</td><td>EMERGENCY_KEY_INVALIDATE</td><td>Invalidate all keys</td></tr>
            <tr><td>0x05C2</td><td>EMERGENCY_CACHE_WIPE</td><td>Request cache wipe</td></tr>
            <tr><td>0x05C3</td><td>EMERGENCY_CONFIRM</td><td>Client confirmation</td></tr>
            <tr><td>0x05C4</td><td>EMERGENCY_STATUS</td><td>Query cut-off status</td></tr>
            <tr><td>0x05C5</td><td>EMERGENCY_RESTORE</td><td>Restore access</td></tr>
          </tbody>
        </table>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="federation">
      <name>Federation</name>

      <t>
        The VFS supports sharing between separate installations (families).
        Two sharing modes exist:
      </t>

      <dl>
        <dt>Simple Share</dt>
        <dd>Owner retains data on their Core Nodes. Recipient gets cache
            access only. Connection loss = no access.</dd>

        <dt>Common Folder</dt>
        <dd>Both parties have full copies. Both can upload. Connection loss
            does not affect access.</dd>
      </dl>

      <section anchor="cache-security">
        <name>Cache Security</name>

        <t>
          Shared access uses time-limited tokens:
        </t>

        <ul>
          <li>Validity check interval: 10 minutes</li>
          <li>Offline tolerance: 1 hour</li>
          <li>Key rotation interval: 24 hours</li>
        </ul>

        <t>
          Emergency Cut-Off immediately invalidates all tokens and keys.
        </t>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="security-considerations">
      <name>Security Considerations</name>

      <section anchor="encryption-security">
        <name>Encryption</name>
        <t>
          All data is encrypted with ChaCha20-Poly1305 before leaving the
          client. Storage nodes never see plaintext. Key compromise affects
          only the specific folder/file/chunk level in the hierarchy.
        </t>
      </section>

      <section anchor="nonce-security">
        <name>Nonce Handling</name>
        <t>
          Nonce reuse with the same key completely compromises security.
          The timestamp+random+counter construction ensures collision
          probability below 2^-80 per year.
        </t>
      </section>

      <section anchor="crdt-security">
        <name>CRDT Security</name>
        <t>
          CRDT operations are authenticated using the MyClerk Protocol's
          session keys. Unauthorized nodes cannot inject or modify metadata.
        </t>
      </section>

      <section anchor="federation-security">
        <name>Federation Security</name>
        <t>
          Federation uses separate session keys. Emergency Cut-Off can
          immediately revoke all access. Cached data becomes cryptographically
          inaccessible when keys are rotated.
        </t>
      </section>

    </section>

    <!-- ============================================================ -->
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>

      <t>
        This document has no IANA actions.
      </t>
    </section>

  </middle>

  <back>

    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>
        &RFC2119;
        &RFC8174;
        &RFC8439;
        &RFC5869;
        &RFC8610;
        &RFC8949;
      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="BLAKE3" target="https://github.com/BLAKE3-team/BLAKE3-specs/blob/master/blake3.pdf">
          <front>
            <title>BLAKE3: One function, fast everywhere</title>
            <author initials="J." surname="O'Connor" fullname="Jack O'Connor"/>
            <date year="2020"/>
          </front>
        </reference>

        <reference anchor="CRDT" target="https://hal.inria.fr/inria-00555588">
          <front>
            <title>A comprehensive study of Convergent and Commutative Replicated Data Types</title>
            <author initials="M." surname="Shapiro" fullname="Marc Shapiro"/>
            <date year="2011"/>
          </front>
        </reference>

        <reference anchor="REED-SOLOMON" target="https://ieeexplore.ieee.org/document/1057464">
          <front>
            <title>Polynomial Codes Over Certain Finite Fields</title>
            <author initials="I.S." surname="Reed" fullname="Irving S. Reed"/>
            <author initials="G." surname="Solomon" fullname="Gustave Solomon"/>
            <date year="1960"/>
          </front>
        </reference>

      </references>

    </references>

    <section anchor="acknowledgements" numbered="false">
      <name>Acknowledgements</name>
      <t>
        This specification is part of the MyClerk project, a privacy-first
        family orchestration platform currently in development. For more
        information, visit https://myclerk.eu.
      </t>
    </section>

  </back>

</rfc>
