<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.36 (Ruby 3.2.2) -->
<?rfc tocindent="yes"?>
<?rfc strict="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-moq-requirements-01" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.17.4 -->
  <front>
    <title abbrev="MoQ Use Cases and Requirements">Media Over QUIC - Use Cases and Requirements for Media Transport Protocol Design</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-moq-requirements-01"/>
    <author initials="J." surname="Gruessing" fullname="James Gruessing">
      <organization>Nederlandse Publieke Omroep</organization>
      <address>
        <postal>
          <country>Netherlands</country>
        </postal>
        <email>james.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="S." surname="Dawkins" fullname="Spencer Dawkins">
      <organization>Tencent America LLC</organization>
      <address>
        <postal>
          <country>United States</country>
        </postal>
        <email>spencerdawkins.ietf@gmail.com</email>
      </address>
    </author>
    <date year="2023" month="July" day="10"/>
    <area>applications</area>
    <workgroup>MOQ Mailing List</workgroup>
    <keyword>Internet-Draft QUIC</keyword>
    <abstract>
      <?line 72?>

<t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution, using either the QUIC protocol or WebTransport as transport protocols.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 76?>

<t><em>RFC Editor: please remove this section before publication</em></t>
      <t>Source code and issues for this draft can be found at
<eref target="https://github.com/moq-wg/moq-requirements">https://github.com/moq-wg/moq-requirements</eref>.</t>
      <t>Discussion of this draft should take place on the IETF Media Over QUIC (MoQ)
mailing list, at <eref target="https://www.ietf.org/mailman/listinfo/moq">https://www.ietf.org/mailman/listinfo/moq</eref>.</t>
    </note>
  </front>
  <middle>
    <?line 86?>

<section anchor="intro">
      <name>Introduction</name>
      <t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution <xref target="MOQ-charter"/>, using either the QUIC protocol <xref target="RFC9000"/> or WebTransport <xref target="WebTrans-charter"/> as transport protocols.</t>
      <section anchor="note-for-moq-working-group-participants">
        <name>Note for MOQ Working Group participants</name>
        <t>When adopted, this document is intended to capture use cases that are in scope for work on the MOQ protocol <xref target="MOQ-charter"/>, and requirements that arise from these use cases.</t>
        <t>As of this writing, the authors do not plan to request publication of this document, based on our understanding of the IESG statement on "Support Documents in IETF Working Groups" <xref target="IESG-sdwg"/>, which says (among other things):</t>
        <ul empty="true">
          <li>
            <t>While writing down such things as requirements and use cases help to get a common understanding (and often common language) between participants in the working group, support documentation doesn’t always have a high archival value. Under most circumstances, the IESG encourages the community to consider alternate mechanisms for publishing this content, such as on a working group wiki, in an informational appendix of a solution document, or simply as an expired draft.</t>
          </li>
        </ul>
        <t>It seems reasonable for the working group to improve this document, and then consider whether the result justifies publication as a part of the RFC archival document series.</t>
      </section>
    </section>
    <section anchor="term">
      <name>Terminology</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

<section anchor="distinguishing-between-interactive-and-live-streaming-use-cases">
        <name>Distinguishing between Interactive and Live Streaming Use Cases</name>
        <t>The MOQ charter <xref target="MOQ-charter"/> lists three use cases as being in scope of the MOQ protocol</t>
        <ul empty="true">
          <li>
            <t>use cases including live streaming, gaming, and media conferencing</t>
          </li>
        </ul>
        <t>but does not include (directly or by reference) a definition of "live streaming" or "interactive" (a term that has been used to describe gaming and media conferencing, as distinct from "live streaming"). It seems useful to describe these two terms, as classes of use cases, before we describe individual use cases in more detail.</t>
        <t>MOQ participants have discussed making this distinction based on quantitative measures such as latency, but since MOQ use cases can include an arbitrary number of relays, we offer a distinction based on how users experience a session. If two users are able to interact in a way that seems interactive, as described in the proposed definitions, the use case is interactive; if two users are unable to interact in that way, the use case is live streaming.</t>
        <t>We propose these definitions:</t>
        <dl>
          <dt><strong>Interactive</strong>:</dt>
          <dd>
            <t>a use case with coupled bidirectional media flows</t>
          </dd>
        </dl>
        <t>Interactive use cases have bidirectional media flows that are sufficiently coupled that media from one sender can cause the receiver to reply by sending its own media back to the original sender.</t>
        <t>For instance, a speaker in a conferencing application might make a statement, and then ask, "but what do you folks think?" If one of the listeners is able to answer in a timeframe that seems natural, without waiting for the current speaker to explicitly "hand over" control of the conversation, this would qualify as "Interactive".</t>
        <dl>
          <dt><strong>Live Streaming</strong>:</dt>
          <dd>
            <t>a use case with unidirectional media flows, or uncoupled bidirectional flows</t>
          </dd>
        </dl>
        <t>Live Streaming use cases allow consumers of media to "watch together", without having a sense that one consumer is experiencing the media before another consumer. This does not require the delivery of live media to be strictly synchronized between media consumers, but only that from the viewpoint of individual consumers, media delivery <strong>appears to be</strong> synchronized.</t>
        <t>It is common for live streaming use cases to send media in one direction, and "something else" in the other direction - for instance, a video receiver might return requests that the sender change the media encoding or media rate in use, or reorient a camera. This type of feedback does not qualify as "bidirectional media".</t>
        <t>If two sender/receivers are each sending media to the other, but what's being carried in one direction has no relationship with what's being carried in the other direction, this would not qualify as "Interactive".</t>
        <t><strong>Note: these descriptions are a starting point. Feedback and pushback are both welcomed.</strong></t>
      </section>
    </section>
    <section anchor="overallusecases">
      <name>Use Cases Informing This Proposal</name>
      <t>Our goal in this section is to understand the range of use cases that are in scope for "Media Over QUIC" <xref target="MOQ-charter"/>.</t>
      <t>For each use case in this section, we also describe</t>
      <ul spacing="compact">
        <li>the number of senders or receivers in a given session transmitting distinct streams,</li>
        <li>whether a session has bi-directional flows of media between senders and receivers, which may also include timely non-media data such as haptics or timed events.</li>
      </ul>
      <t>It is likely that we should add other characteristics, as we come to understand them.</t>
      <section anchor="interact">
        <name>Interactive Media</name>
        <t>The use cases described in this section have one particular attribute in common - they target the lowest latency that can be achieved, trading this off against data loss and complexity. For example,</t>
        <ul spacing="compact">
          <li>It may make sense to use FEC <xref target="RFC6363"/> and codec-level packet loss concealment <xref target="RFC6716"/>, rather than selectively retransmitting only lost packets. These mechanisms use more bytes, but do not require multiple round trips in order to recover from packet loss.</li>
          <li>It is generally infeasible to use congestion control schemes like BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/> in many deployments, since BBR includes probing mechanisms that rely on temporarily inducing delay, although these mechanisms can amortize the consequences of that induced delay over multiple RTTs.</li>
        </ul>
        <t>This may help to explain why interactive use cases have typically relied on protocols such as RTP <xref target="RFC3550"/>, which provide low-level control of packetization and transmission, with additional support for retransmission as an optional extension.</t>
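        <t>As a rough illustration of the trade-off above, the following sketch compares the recovery latency of retransmission against the byte overhead of FEC for a single lost packet. The numbers and function names are assumptions for discussion, not measurements or protocol fields.</t>
        <sourcecode type="python"><![CDATA[
```python
# Illustrative only: recovery cost of retransmission vs. FEC.
# All parameter values below are assumptions, not measurements.

def retransmission_recovery_ms(rtt_ms: float, loss_detection_ms: float) -> float:
    """A lost packet must first be detected, then re-requested and
    resent: roughly one detection delay plus one extra round trip."""
    return loss_detection_ms + rtt_ms

def fec_overhead_bytes(payload_bytes: int, fec_ratio: float) -> int:
    """FEC repairs loss with no extra round trips, at the cost of
    sending fec_ratio additional repair bytes per payload byte."""
    return int(payload_bytes * fec_ratio)

# Example: 50 ms RTT, 20 ms loss detection, 20% FEC overhead.
print(retransmission_recovery_ms(50, 20))  # -> 70 (ms of added latency)
print(fec_overhead_bytes(100_000, 0.2))    # -> 20000 (extra bytes, 0 ms added)
```
]]></sourcecode>
        <t>For interactive use cases, the extra round trip dominates the cost; for live streaming, the extra bytes may matter more.</t>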
        <section anchor="gaming">
          <name>Gaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>In this use case, the computation for running a video game (single or
multiplayer) is performed externally on a hosted service, with user inputs from
input devices sent to the server, and media, usually video and audio of gameplay,
returned. This may also include the client receiving other types of signaling,
such as triggers for haptic feedback, as well as the client sending media such
as microphone audio for in-game chat with other players. Latency is especially
important in this use case, as updates to video occur in response to user
input, with certain genres of games having high requirements for
responsiveness and/or a high frequency of user input.</t>
        </section>
        <section anchor="remdesk">
          <name>Remote Desktop</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>This use case is similar to the gaming use case in many requirements, but here a user wishes to
observe or control the graphical user interface of another computer through
local user interfaces. Latency requirements are slightly different from the
gaming use case, as users may tolerate greater input latency. This use case
may also include a need to support signalling and/or transmission of files or
devices connected to the user's computer.</t>
        </section>
        <section anchor="vidconf">
          <name>Video Conferencing/Telephony</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">Many to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>In the Video Conferencing/Telephony use case, media is both sent and received.
This use case typically includes audio and video media, and may also include one or more additional media types, such as "screen sharing" and other content such as slides, documents, or video presentations.
This may be done
as client/server, or peer-to-peer, with a many-to-many relationship of both
senders and receivers. The target for latency may be as large as 200ms or more for
some media types such as audio, but other media types in this use case have much
more stringent latency targets.</t>
        </section>
      </section>
      <section anchor="lm-media">
        <name>Live Media</name>
        <t>The use cases in this section, like those in <xref target="interact"/>, set some expectations to minimise high and/or highly variable latency. Their key difference is that they are seldom bi-directional, as they centre on the mass consumption of media, or on its contribution into a platform for syndication or distribution. Loss is more noticeable than latency, so these use cases may accept slightly more latency in exchange for a greater assurance of delivery.</t>
        <section anchor="lmingest">
          <name>Live Media Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>In this use case, media is received from a source for onward handling into a distribution
platform. The media may comprise multiple audio and/or video sources.
Bitrates may either be static or set dynamically, based on connection
information (bandwidth, latency) signalled by the receiver, and the
media may go through additional steps of transcoding or transformation before
being distributed.</t>
        </section>
        <section anchor="lmsynd">
          <name>Live Media Syndication</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>In this use case, media is sent onwards to another platform for further distribution, and is not
directly used for presentation to an audience, although it may be monitored by
operational systems and/or people. The media may be compressed to a bitrate
lower than the source, but higher than the final distribution output. Streams may be
redundant, with failover mechanisms in place.</t>
        </section>
        <section anchor="lmstream">
          <name>Live Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>In this use case, media is received from a live broadcast or stream, either as a broadcast
of fixed duration or as ongoing 24/7 output. The number of receivers may vary
depending on the type of content; breaking news events may see sharp, sudden
spikes, whereas sporting and entertainment events may see a more gradual ramp
up to a higher sustained peak, with some changes around match breaks or
interludes.</t>
          <t>Such broadcasts may comprise multiple audio or video outputs with different
codecs or bitrates, and may also include other types of media essence such as
subtitles or timing signalling information (e.g. markers to indicate a change of
behaviour in the client, such as advertisement breaks). A "live rewind"
capability may make a window of media between the live edge and the trailing
edge available for clients to play back, either because the local player falls behind the leading
edge or because the viewer wishes to play back from a point in the past.</t>
        </section>
      </section>
      <section anchor="hybrid-interactive-and-live-media">
        <name>Hybrid Interactive and Live Media</name>
        <t>For the video conferencing/telephony use case, there can be additional scenarios
where the audience greatly outnumbers the concurrent active participants, but
any member of the audience could participate.
This use case can have an audience as large as Live Media Streaming as described in <xref target="lmstream"/>, but also relies on the interactivity and bi-directionality of conferencing as described in <xref target="vidconf"/>. For this reason, this type of use case can be considered a "hybrid". There can also be additional functionality that overlaps between the two, such as "live rewind" or recording abilities.</t>
        <t>Another consideration is the limit of "human bandwidth" - as the number of
sources included in a given session increases, the amount of media that a
single person can usefully understand diminishes. To put it more simply: too
many people talking at once is much more difficult to understand than one person
speaking at a time, though this varies with the audience and circumstance.
Consequently, this sets some limits on the number of concurrent or
semi-concurrent bidirectional communications that can occur.</t>
      </section>
    </section>
    <section anchor="req-sec">
      <name>Requirements for Protocol Work</name>
      <t>Our goal in this section is to understand the requirements that result from the use cases described in <xref target="overallusecases"/>.</t>
      <section anchor="notes-to-the-reader">
        <name>Notes to the Reader</name>
        <ul spacing="compact">
          <li>Note: the requirements in this document are intended to be useful for MOQ working group participants, to recognize constraints, and for readers outside the MOQ working group, to understand the high-level functionality of the MOQ protocol as they consider implementation and deployment of systems that rely on it.</li>
        </ul>
      </section>
      <section anchor="proto-cons">
        <name>Specific Protocol Considerations</name>
        <t>In order to support the various topologies and patterns of media flows with the protocol, the protocol <bcp14>MUST</bcp14> support both sending and receiving of media streams, as separate actions or concurrently in a given connection.</t>
        <section anchor="delivery-assurance-vs-delay">
          <name>Delivery Assurance vs. Delay</name>
          <t>Different use cases have varying requirements with respect to the tradeoff between guarantee of delivery and delay - in some (such as telephony) it may be acceptable to drop some or all of the media as a result of changes in network connectivity, throughput, or congestion, whereas in other scenarios all media must arrive at the receiving end, even if delayed. There <bcp14>SHOULD</bcp14> be support for some means for a connection to signal which media may be abandoned, and the behaviours of both senders and receivers when delay or loss occurs should be defined. Where multiple variants of media are sent, this <bcp14>SHOULD</bcp14> be done in a way that provides pipelining, so that each media stream can be processed in parallel.</t>
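          <t>A minimal sketch of what such signalling might look like follows; the policy fields ("reliable", "max_delay_ms") are invented for illustration and are not MOQ protocol fields.</t>
          <sourcecode type="python"><![CDATA[
```python
# Hypothetical per-stream delivery policy: which media may be
# abandoned, and when. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeliveryPolicy:
    reliable: bool               # must every object arrive, even late?
    max_delay_ms: Optional[int]  # abandon objects older than this

def may_abandon(policy: DeliveryPolicy, age_ms: int) -> bool:
    """A sender or relay may drop media that has exceeded its delay
    budget, but only on streams marked as abandonable."""
    if policy.reliable:
        return False
    return policy.max_delay_ms is not None and age_ms > policy.max_delay_ms

# Telephony audio: abandon anything more than 150 ms late.
audio = DeliveryPolicy(reliable=False, max_delay_ms=150)
# Archive-quality ingest: everything must arrive; delay is tolerated.
ingest = DeliveryPolicy(reliable=True, max_delay_ms=None)

print(may_abandon(audio, 200))   # -> True
print(may_abandon(ingest, 200))  # -> False
```
]]></sourcecode>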
        </section>
        <section anchor="support-webtransportraw-quic-as-media-transport">
          <name>Support Webtransport/Raw QUIC as media transport</name>
          <t>There should be a degree of decoupling between the underlying transport protocols and MoQ itself, despite the "Q" in the name, in particular to provide future agility and to prevent potential ossification from being tied to specific version(s) of dependent protocols.</t>
          <t>Many of the use cases will be deployed in contexts where web browsers are the common application runtime; thus the use of existing protocols and APIs is desirable for implementations. Support for WebTransport <xref target="I-D.draft-ietf-webtrans-overview"/> will be defined, although implementations or deployments running outside browsers will not need to use WebTransport; thus support for the protocol running directly atop QUIC should also be provided.</t>
          <t>Considerations should be made clear with respect to modes where WebTransport "falls back" to using HTTP/2 or another future non-QUIC-based protocol.</t>
        </section>
        <section anchor="MOQ-negotiation">
          <name>Media Negotiation &amp; Agility</name>
          <t>All entities which directly process media will have support for a variety of media codecs, both codecs which exist now and codecs that will be defined in the future. Consequently the protocol will provide the capability for sender and receiver to negotiate which media codecs will be used in a given session.</t>
          <t>The protocol <bcp14>SHOULD</bcp14> remain codec agnostic as much as possible, and should allow for new media formats and codecs to be supported without change in specification.</t>
          <t>The working group should consider whether a minimal, suggested set of codecs should be supported for the purposes of interop; however, this <bcp14>SHOULD</bcp14> avoid being strict, to simplify use cases and deployments that do not require certain capabilities, e.g. telephony, which may not require video codecs.</t>
        </section>
      </section>
      <section anchor="media-data-model">
        <name>Media Data Model</name>
        <t>As the protocol will handle many different media types, classifications, and variations, a model should be defined that represents how all entities describe media, with a clear addressing scheme. This model should cover at least the following, while allowing for future types:</t>
        <dl>
          <dt>Media Types</dt>
          <dd>
            <t>Video, audio, subtitles, ancillary data</t>
          </dd>
          <dt>Classifications</dt>
          <dd>
            <t>Codec, language, layers</t>
          </dd>
          <dt>Variations</dt>
          <dd>
            <t>For each stream, the resolution(s), bitrate(s). Each variant should be uniquely identifiable and addressable.</t>
          </dd>
        </dl>
        <t>Consideration should be given to the addressing of individual audio/video frames as opposed to groups, and to how the model incorporates signalling of prioritisation, media dependency, and cacheability to all entities.</t>
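        <t>To make the model above concrete, here is a sketch using illustrative names; the addressing scheme shown is an assumption for discussion, not a proposal.</t>
        <sourcecode type="python"><![CDATA[
```python
# Sketch of the media data model: types, classifications, variants,
# and a unique address per variant. All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Variant:
    variant_id: str        # unique within its track
    bitrate_kbps: int
    resolution: str = ""   # empty for audio/subtitles/ancillary data

@dataclass
class Track:
    media_type: str        # "video", "audio", "subtitles", "ancillary"
    codec: str             # classification: codec
    language: str          # classification: language
    variants: List[Variant] = field(default_factory=list)

def address(origin: str, track: Track, variant: Variant) -> str:
    """Each variant is uniquely identifiable and addressable,
    scoped to an origin/domain."""
    return f"{origin}/{track.media_type}/{track.codec}/{variant.variant_id}"

cam = Track("video", "av1", "und",
            [Variant("hd", 5000, "1920x1080"), Variant("sd", 1200, "960x540")])
print(address("example.com/match1", cam, cam.variants[0]))
# -> example.com/match1/video/av1/hd
```
]]></sourcecode>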
      </section>
      <section anchor="pub-media">
        <name>Publishing Media</name>
        <t>Many of the use cases have bi-directional flows of media, with clients both sending and receiving media concurrently; thus the protocol should take a unified approach to connection negotiation and signalling for sending and receiving media, both at the start of a session and throughout its lifetime, including describing when a flow of media is unsupported (e.g. a live media server signalling that it does not support receiving from a given client).</t>
        <t>In the initiation of a session both client and server must perform negotiation in order to agree upon a variety of details before media can move in any direction:</t>
        <ul spacing="compact">
          <li>Is the client authenticated and subsequently authorised to initiate a connection?</li>
          <li>What media is available, and for each, what are its parameters, such as codec, bitrate, and resolution?</li>
          <li>Can media move bi-directionally, or is it unidirectional only?</li>
        </ul>
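        <t>The negotiation steps above can be sketched as follows; the message shapes and field names are invented for illustration and do not correspond to any MOQ wire format.</t>
        <sourcecode type="python"><![CDATA[
```python
# Hypothetical session-negotiation check covering the three questions
# above: authentication, available media, and directionality.
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class SetupRequest:
    auth_token: str
    wants_to_send: bool

@dataclass
class SetupResponse:
    accepted: bool
    catalog: List[Dict]    # available media and their parameters
    bidirectional: bool    # may this client also publish media?

def negotiate(req: SetupRequest, valid_tokens: Set[str],
              catalog: List[Dict], allow_publish: bool) -> SetupResponse:
    # 1. Is the client authenticated and subsequently authorised?
    if req.auth_token not in valid_tokens:
        return SetupResponse(False, [], False)
    # 2. Advertise available media (codec, bitrate, resolution, ...).
    # 3. State whether media may flow bi-directionally.
    return SetupResponse(True, catalog,
                         allow_publish and req.wants_to_send)

catalog = [{"name": "main", "codec": "av1", "bitrate_kbps": 5000}]
resp = negotiate(SetupRequest("token123", True), {"token123"}, catalog, True)
print(resp.accepted, resp.bidirectional)  # -> True True
```
]]></sourcecode>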
      </section>
      <section anchor="naming">
        <name>Naming and Addressing Media Resources</name>
        <t>As multiple streams of media may be available for concurrent sending, such as multiple camera views or audio tracks, the protocol should include a means of identifying both the technical properties of each resource (codec, bitrate, etc.) and a useful identification for playback. A base level of optional metadata, e.g. the known language of an audio track or the name of a participant's camera, should be supported, but further extended metadata describing the contents of the media or its ontology should not be supported.</t>
        <section anchor="scoped-to-an-origindomain-application-specific">
          <name>Scoped to an Origin/Domain, Application specific.</name>
        </section>
        <section anchor="allows-subscribing-or-requesting-for-the-data-matching-the-name-by-the-consumers">
          <name>Allows subscribing or requesting for the data matching the name by the consumers</name>
        </section>
      </section>
      <section anchor="Packaging">
        <name>Packaging Media</name>
        <t>Packaging of media describes how raw media will be encapsulated. There are at a high level two approaches to this:</t>
        <ul spacing="compact">
          <li>Within the protocol itself, where the protocol defines the ancillary data required to decode each media type the protocol supports.</li>
          <li>A common encapsulation format: there are advantages to using an existing, generic media packaging format (such as CMAF <xref target="CMAF"/> or other ISOBMFF <xref target="ISOBMFF"/> subsets) which defines a generic method for all media and handles ancillary decode information.</li>
        </ul>
        <t>The working group must agree on which approach should be taken to the packaging of media, taking into consideration the various technical trade offs that each approach provides.</t>
        <ul spacing="compact">
          <li>If the working group decides to describe media encapsulation as part of the MOQ protocol, adding or changing an encapsulation will require a new version of the MOQ protocol, in order to signal to the receiver that a new media encapsulation format may be present.</li>
          <li>If the working group decides to use a common encapsulation format, the mechanisms within the protocol <bcp14>SHOULD</bcp14> allow for new encapsulation formats to be used. Without encapsulation agility, adding or changing the way media is encapsulated will also require a new version of the MOQ protocol, to signal the receiver that a new media encapsulation format may be present.</li>
        </ul>
        <t>MOQ protocol specifications will provide details on the supported media encapsulation(s).</t>
      </section>
      <section anchor="med-consumption">
        <name>Media Consumption</name>
        <t>Receivers <bcp14>SHOULD</bcp14> be able, as part of the negotiation of a session (<xref target="MOQ-negotiation"/>), to specify which media to receive, not just with respect to the media format and codec, but also the variant thereof, such as resolution or bitrate.</t>
      </section>
      <section anchor="MOQ-network-entities">
        <name>Relays, Caches, and other MOQ Network Elements</name>
        <section anchor="pull-push">
          <name>Pull &amp; Push</name>
          <t>To enable use cases where receivers may wish to request media from a particular point in time, in addition to having the most recently produced media available, both "pull" and "push" of media <bcp14>SHOULD</bcp14> be supported, with the consideration that producers and intermediaries <bcp14>SHOULD</bcp14> also signal what media is available (commonly referred to as a "DVR window"). Behaviours around cache durations for each MoQ entity should be defined.</t>
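          <t>A trivial sketch of the "DVR window" check a relay or cache might perform follows; the parameter names are assumptions, not protocol fields.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative "DVR window" availability check for a relay/cache.
def in_dvr_window(requested_s: float, live_edge_s: float,
                  window_s: float) -> bool:
    """A pull for a particular point in time can be served only if it
    falls between the trailing edge and the live edge."""
    return (live_edge_s - window_s) <= requested_s <= live_edge_s

# A 2-hour (7200 s) window on a stream whose live edge is at t = 10000 s.
print(in_dvr_window(9000, 10000, 7200))  # -> True: within the window
print(in_dvr_window(2000, 10000, 7200))  # -> False: already evicted
```
]]></sourcecode>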
        </section>
      </section>
      <section anchor="MOQ-security">
        <name>Security</name>
        <section anchor="authentication-authorisation">
          <name>Authentication &amp; Authorisation</name>
          <t>While QUIC, and by extension TLS, supports mutual authentication through client and server presenting certificates and performing validation, this is infeasible in many use cases where provisioning of client TLS certificates is unsupported or impractical. Thus, support for a basic method of authentication between MoQ entities <bcp14>SHOULD</bcp14> be included, noting that implementations and deployments should determine which authorisation model, if any, is applicable.</t>
        </section>
        <section anchor="MOQ-media-encryption">
          <name>Media Encryption</name>
          <t>End-to-end security describes the use of encryption of the media stream(s) to provide confidentiality in the presence of unauthorized intermediaries or observers, and to prevent or restrict the ability to decrypt the media without authorization. Generally, there are three aspects of end-to-end media security:</t>
          <ul spacing="compact">
            <li>Digital Rights Management, which refers to the authorization of receivers to decode a media stream.</li>
            <li>Sender-to-Receiver Media Security, which refers to the ability of media senders and receivers to transfer media while protected from authorized intermediaries and observers, and</li>
            <li>Node-to-node Media Security, which refers to security when authorized intermediaries are needed to transform media into a form acceptable to authorized receivers. For example, this might refer to a video transcoder between the media sender and receiver.</li>
          </ul>
          <t><strong>Note:</strong> "Node-to-node" refers to a path segment connecting two MOQ nodes, which makes up part of the end-to-end path between the MOQ sender and the ultimate MOQ receiver.</t>
          <t>Support for encrypted media <bcp14>SHOULD</bcp14> be available in the protocol to support the above use cases, with key exchange and decryption authorisation handled externally. The protocol <bcp14>SHOULD</bcp14> carry the metadata that entities which process media need in order to perform key exchange and decryption.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no requests of IANA.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>As this document is intended to guide discussion and consensus, it introduces
no security considerations of its own.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references>
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references>
        <name>Informative References</name>
        <reference anchor="RFC3550">
          <front>
            <title>RTP: A Transport Protocol for Real-Time Applications</title>
            <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne"/>
            <author fullname="S. Casner" initials="S." surname="Casner"/>
            <author fullname="R. Frederick" initials="R." surname="Frederick"/>
            <author fullname="V. Jacobson" initials="V." surname="Jacobson"/>
            <date month="July" year="2003"/>
            <abstract>
              <t>This memorandum describes RTP, the real-time transport protocol. RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. RTP does not address resource reservation and does not guarantee quality-of- service for real-time services. The data transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality. RTP and RTCP are designed to be independent of the underlying transport and network layers. The protocol supports the use of RTP-level translators and mixers. Most of the text in this memorandum is identical to RFC 1889 which it obsoletes. There are no changes in the packet formats on the wire, only changes to the rules and algorithms governing how the protocol is used. The biggest change is an enhancement to the scalable timer algorithm for calculating when to send RTCP packets in order to minimize transmission in excess of the intended rate when many participants join a session simultaneously. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="64"/>
          <seriesInfo name="RFC" value="3550"/>
          <seriesInfo name="DOI" value="10.17487/RFC3550"/>
        </reference>
        <reference anchor="RFC6363">
          <front>
            <title>Forward Error Correction (FEC) Framework</title>
            <author fullname="M. Watson" initials="M." surname="Watson"/>
            <author fullname="A. Begen" initials="A." surname="Begen"/>
            <author fullname="V. Roca" initials="V." surname="Roca"/>
            <date month="October" year="2011"/>
            <abstract>
              <t>This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss. The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media. This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows. Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6363"/>
          <seriesInfo name="DOI" value="10.17487/RFC6363"/>
        </reference>
        <reference anchor="RFC6716">
          <front>
            <title>Definition of the Opus Audio Codec</title>
            <author fullname="JM. Valin" initials="JM." surname="Valin"/>
            <author fullname="K. Vos" initials="K." surname="Vos"/>
            <author fullname="T. Terriberry" initials="T." surname="Terriberry"/>
            <date month="September" year="2012"/>
            <abstract>
              <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances. It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s. Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6716"/>
          <seriesInfo name="DOI" value="10.17487/RFC6716"/>
        </reference>
        <reference anchor="RFC9000">
          <front>
            <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9000"/>
          <seriesInfo name="DOI" value="10.17487/RFC9000"/>
        </reference>
        <reference anchor="I-D.draft-cardwell-iccrg-bbr-congestion-control">
          <front>
            <title>BBR Congestion Control</title>
            <author fullname="Neal Cardwell" initials="N." surname="Cardwell">
              <organization>Google</organization>
            </author>
            <author fullname="Yuchung Cheng" initials="Y." surname="Cheng">
              <organization>Google</organization>
            </author>
            <author fullname="Soheil Hassas Yeganeh" initials="S. H." surname="Yeganeh">
              <organization>Google</organization>
            </author>
            <author fullname="Ian Swett" initials="I." surname="Swett">
              <organization>Google</organization>
            </author>
            <author fullname="Van Jacobson" initials="V." surname="Jacobson">
              <organization>Google</organization>
            </author>
            <date day="7" month="March" year="2022"/>
            <abstract>
              <t>   This document specifies the BBR congestion control algorithm.  BBR
   ("Bottleneck Bandwidth and Round-trip propagation time") uses recent
   measurements of a transport connection's delivery rate, round-trip
   time, and packet loss rate to build an explicit model of the network
   path.  BBR then uses this model to control both how fast it sends
   data and the maximum volume of data it allows in flight in the
   network at any time.  Relative to loss-based congestion control
   algorithms such as Reno [RFC5681] or CUBIC [RFC8312], BBR offers
   substantially higher throughput for bottlenecks with shallow buffers
   or random losses, and substantially lower queueing delays for
   bottlenecks with deep buffers (avoiding "bufferbloat").  BBR can be
   implemented in any transport protocol that supports packet-delivery
   acknowledgment.  Thus far, open source implementations are available
   for TCP [RFC793] and QUIC [RFC9000].  This document specifies version
   2 of the BBR algorithm, also sometimes referred to as BBRv2 or bbr2.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-cardwell-iccrg-bbr-congestion-control-02"/>
        </reference>
        <reference anchor="I-D.draft-kpugin-rush">
          <front>
            <title>RUSH - Reliable (unreliable) streaming protocol</title>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Facebook</organization>
            </author>
            <author fullname="Alan Frindell" initials="A." surname="Frindell">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jorge Cenzano Ferret" initials="J. C." surname="Ferret">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jake Weissman" initials="J." surname="Weissman">
              <organization>Facebook</organization>
            </author>
            <date day="10" month="May" year="2023"/>
            <abstract>
              <t>RUSH is an application-level protocol for ingesting live video. This document describes the protocol and how it maps onto QUIC.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-kpugin-rush-02"/>
        </reference>
        <reference anchor="I-D.draft-lcurley-warp">
          <front>
            <title>Warp - Live Media Transport over QUIC</title>
            <author fullname="Luke Curley" initials="L." surname="Curley">
              <organization>Twitch</organization>
            </author>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Meta</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Victor Vasiliev" initials="V." surname="Vasiliev">
              <organization>Google</organization>
            </author>
            <date day="13" month="March" year="2023"/>
            <abstract>
              <t>   This document defines the core behavior for Warp, a live media
   transport protocol over QUIC.  Media is split into objects based on
   the underlying media encoding and transmitted independently over QUIC
   streams.  QUIC streams are prioritized based on the delivery order,
   allowing less important objects to be starved or dropped during
   congestion.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lcurley-warp-04"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-arch">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Fluffy Jennings" initials="C. F." surname="Jennings">
              <organization>cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>   This specification outlines the design for a media delivery protocol
   over QUIC.  It aims at supporting multiple application classes with
   varying latency requirements including ultra low latency applications
   such as interactive communication and gaming.  It is based on a
   publish/subscribe metaphor where entities publish and subscribe to
   data that is sent through, and received from, relays in the cloud.
   The information subscribed to is named such that this forms an
   overlay information centric network.  The relays allow for efficient
   large scale deployments.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-arch-01"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-proto">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Fluffy Jennings" initials="C. F." surname="Jennings">
              <organization>cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Christian Huitema" initials="C." surname="Huitema">
              <organization>Private Octopus Inc.</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>   Recently new use cases have emerged requiring higher scalability of
   media delivery for interactive realtime applications and much lower
   latency for streaming applications and a combination thereof.

   draft-jennings-moq-arch specifies architectural aspects of QuicR, a
   media delivery protocol based on publish/subscribe metaphor and Relay
   based delivery tree, that enables a wide range of realtime
   applications with different resiliency and latency needs.

   This specification defines the protocol aspects of the QuicR media
   delivery architecture.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-proto-01"/>
        </reference>
        <reference anchor="I-D.draft-ietf-webtrans-overview">
          <front>
            <title>The WebTransport Protocol Framework</title>
            <author fullname="Victor Vasiliev" initials="V." surname="Vasiliev">
              <organization>Google</organization>
            </author>
            <date day="24" month="January" year="2023"/>
            <abstract>
              <t>   The WebTransport Protocol Framework enables clients constrained by
   the Web security model to communicate with a remote server using a
   secure multiplexed transport.  It consists of a set of individual
   protocols that are safe to expose to untrusted applications, combined
   with a model that allows them to be used interchangeably.

   This document defines the overall requirements on the protocols used
   in WebTransport, as well as the common features of the protocols,
   support for some of which may be optional.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-webtrans-overview-05"/>
        </reference>
        <reference anchor="CMAF">
          <front>
            <title>Information technology — Multimedia application format (MPEG-A) — Part 19: Common media application format (CMAF) for segmented media</title>
            <author>
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="ISOBMFF">
          <front>
            <title>Information Technology - Coding Of Audio-Visual Objects - Part 12: ISO Base Media File Format</title>
            <author>
              <organization/>
            </author>
            <date year="2022"/>
          </front>
        </reference>
        <reference anchor="IESG-sdwg" target="https://www.ietf.org/about/groups/iesg/statements/support-documents/">
          <front>
            <title>Support Documents in IETF Working Groups</title>
            <author>
              <organization/>
            </author>
            <date year="2016" month="November"/>
          </front>
        </reference>
        <reference anchor="MOQ-charter" target="https://datatracker.ietf.org/wg/moq/about/">
          <front>
            <title>Media Over QUIC (moq)</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="September"/>
          </front>
        </reference>
        <reference anchor="WebTrans-charter" target="https://datatracker.ietf.org/wg/webtrans/about/">
          <front>
            <title>WebTransport (webtrans)</title>
            <author>
              <organization/>
            </author>
            <date year="2021" month="March"/>
          </front>
        </reference>
        <reference anchor="Prog-MOQ" target="https://datatracker.ietf.org/meeting/interim-2022-moq-01/materials/slides-interim-2022-moq-01-sessa-moq-use-cases-and-requirements-individual-draft-working-group-draft-00">
          <front>
            <title>Progressing MOQ</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="October"/>
          </front>
        </reference>
      </references>
    </references>
    <?line 405?>

<section anchor="acknowledgements">
      <name>Acknowledgements</name>
      <t>The authors would like to thank the authors of several individual drafts that fed into the "Media Over QUIC" charter process:</t>
      <ul spacing="compact">
        <li>Kirill Pugin, Alan Frindell, Jordi Cenzano, and Jake Weissman (<xref target="I-D.draft-kpugin-rush"/>),</li>
        <li>Luke Curley (<xref target="I-D.draft-lcurley-warp"/>), and</li>
        <li>Cullen Jennings and Suhas Nandakumar (<xref target="I-D.draft-jennings-moq-quicr-arch"/>), together with Christian Huitema (<xref target="I-D.draft-jennings-moq-quicr-proto"/>).</li>
      </ul>
      <t>We would also like to thank Suhas Nandakumar for his presentation, "Progressing MOQ" <xref target="Prog-MOQ"/>, at the October 2022 MOQ virtual interim meeting. We used his outline as a starting point for the Requirements section (<xref target="req-sec"/>).</t>
      <t>We would also like to thank Cullen Jennings for suggesting that we distinguish
between interactive and live streaming use cases based on the users' perception,
rather than quantitative measurements. In addition we would also like to thank
Lucas Pardue, Alan Frindell, and Bernard Aboba for their reviews of the
document.</t>
      <t>James Gruessing would also like to thank Francesco Illy and Nicholas Book for
their part in providing the needed motivation.</t>
    </section>
  </back>
  <!-- ##markdown-source:
TUUUPW6aaz5jKrOxu3G1GtyYQhUXvCEgexPyY/8jqf8jqf9fSOr/BqqDtAAm
aAAA

-->

</rfc>
