<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.43 (Ruby 3.2.2) -->
<?rfc tocindent="yes"?>
<?rfc strict="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-moq-requirements-02" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.18.0 -->
  <front>
    <title abbrev="MoQ Use Cases and Requirements">Media Over QUIC - Use Cases and Requirements for Media Transport Protocol Design</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-moq-requirements-02"/>
    <author initials="J." surname="Gruessing" fullname="James Gruessing">
      <organization>Nederlandse Publieke Omroep</organization>
      <address>
        <postal>
          <country>Netherlands</country>
        </postal>
        <email>james.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="S." surname="Dawkins" fullname="Spencer Dawkins">
      <organization>Tencent America LLC</organization>
      <address>
        <postal>
          <country>United States</country>
        </postal>
        <email>spencerdawkins.ietf@gmail.com</email>
      </address>
    </author>
    <date year="2023" month="September" day="29"/>
    <area>applications</area>
    <workgroup>MOQ Mailing List</workgroup>
    <keyword>Internet-Draft QUIC</keyword>
    <abstract>
      <?line 77?>

<t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution, using either the QUIC protocol or WebTransport as transport protocols.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 81?>

<t><em>RFC Editor: please remove this section before publication</em></t>
      <t>Source code and issues for this draft can be found at
<eref target="https://github.com/moq-wg/moq-requirements">https://github.com/moq-wg/moq-requirements</eref>.</t>
      <t>Discussion of this draft should take place on the IETF Media Over QUIC (MoQ)
mailing list, at <eref target="https://www.ietf.org/mailman/listinfo/moq">https://www.ietf.org/mailman/listinfo/moq</eref>.</t>
    </note>
  </front>
  <middle>
    <?line 91?>

<section anchor="intro">
      <name>Introduction</name>
      <t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution <xref target="MOQ-charter"/>, using either the QUIC protocol <xref target="RFC9000"/> or WebTransport <xref target="WebTrans-charter"/> as transport protocols.</t>
      <section anchor="note-for-moq-working-group-participants">
        <name>Note for MOQ Working Group participants</name>
        <t>When adopted, this document is intended to capture use cases that are in scope for work on the MOQ protocol <xref target="MOQ-charter"/>, and requirements that arise from these use cases.</t>
        <t>As of this writing, the authors have not planned to request publication of this document, based on our understanding of the IESG statement on "Support Documents in IETF Working Groups" <xref target="IESG-sdwg"/>, which says (among other things):</t>
        <ul empty="true">
          <li>
            <t>While writing down such things as requirements and use cases help to get a common understanding (and often common language) between participants in the working group, support documentation doesn’t always have a high archival value. Under most circumstances, the IESG encourages the community to consider alternate mechanisms for publishing this content, such as on a working group wiki, in an informational appendix of a solution document, or simply as an expired draft.</t>
          </li>
        </ul>
        <t>It seems reasonable for the working group to improve this document, and then consider whether the result justifies publication as a part of the RFC archival document series.</t>
      </section>
    </section>
    <section anchor="term">
      <name>Terminology</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

<section anchor="distinguishing-between-interactive-and-live-streaming-use-cases">
        <name>Distinguishing between Interactive and Live Streaming Use Cases</name>
        <t>The MOQ charter <xref target="MOQ-charter"/> lists three use cases as being in scope of the MOQ protocol</t>
        <ul empty="true">
          <li>
            <t>use cases including live streaming, gaming, and media conferencing</t>
          </li>
        </ul>
        <t>but does not include (directly or by reference) a definition of "live streaming" or "interactive" (a term that has been used to describe gaming and media conferencing, as distinct from "live streaming"). It seems useful to describe these two terms, as classes of use cases, before we describe individual use cases in more detail.</t>
        <t>MOQ participants have discussed making this distinction based on quantitative measures such as latency, but since MOQ use cases can include an arbitrary number of relays, we offer a distinction based on how users experience the difference. If two users are able to interact in a way that seems interactive, as described in the proposed definitions, the use case is interactive; if they are unable to interact in that way, the use case is live streaming.</t>
        <t>We propose these definitions:</t>
        <dl>
          <dt><strong>Interactive</strong>:</dt>
          <dd>
            <t>a use case with coupled bidirectional media flows</t>
          </dd>
        </dl>
        <t>Interactive use cases have bidirectional media flows that are sufficiently coupled that media from one sender can cause the receiver to reply by sending its own media back to the original sender.</t>
        <t>For instance, a speaker in a conferencing application might make a statement, and then ask, "but what do you folks think?" If one of the listeners is able to answer in a timeframe that seems natural, without waiting for the current speaker to explicitly "hand over" control of the conversation, this would qualify as "Interactive".</t>
        <dl>
          <dt><strong>Live Streaming</strong>:</dt>
          <dd>
            <t>a use case with unidirectional media flows, or uncoupled bidirectional flows</t>
          </dd>
        </dl>
        <t>Live Streaming use cases allow consumers of media to "watch together", without having a sense that one consumer is experiencing the media before another consumer. This does not require the delivery of live media to be strictly synchronized between media consumers, but only that from the viewpoint of individual consumers, media delivery <strong>appears to be</strong> synchronized.</t>
        <t>It is common for live streaming use cases to send media in one direction, and "something else" in the other direction - for instance, a video receiver might return requests that the sender change the media encoding or media rate in use, or reorient a camera. This type of feedback does not qualify as "bidirectional media".</t>
        <t>If two sender/receivers are each sending media to the other, but what's being carried in one direction has no relationship with what's being carried in the other direction, this would not qualify as "Interactive".</t>
        <t><strong>Note: these descriptions are a starting point. Feedback and pushback are both welcomed.</strong></t>
      </section>
      <section anchor="draft-alignment">
        <name>Alignment with terminology in related drafts</name>
        <t>The MOQ working group has used a wide variety of terms, with some, but not enough, effort to reconcile terms in use in various drafts. As these drafts are being adopted by the MOQ working group, it seems right to make every effort to align terminology for ease of reading and implementation.</t>
        <t>This draft does not yet, but will, align with the terminology defined in <xref target="I-D.draft-ietf-moq-transport"/>, as a starting point. We note that -00 of that draft observes that the working group hasn't converged on terminology and definitions, and if MOQ terminology changes, the terminology in this draft will change accordingly.</t>
      </section>
    </section>
    <section anchor="overallusecases">
      <name>Use Cases Informing This Proposal</name>
      <t>Our goal in this section is to understand the range of use cases that are in scope for "Media Over QUIC" <xref target="MOQ-charter"/>.</t>
      <t>For each use case in this section, we also describe</t>
      <ul spacing="compact">
        <li>the number of senders or receivers in a given session transmitting distinct streams,</li>
        <li>whether a session has bi-directional flows of media between senders and receivers, which may also include timely non-media data such as haptics or timed events.</li>
      </ul>
      <t>It is likely that we should add other characteristics, as we come to understand them.</t>
      <section anchor="interact">
        <name>Interactive Media</name>
        <t>The use cases described in this section have one particular attribute in common: they target the lowest latency that can be achieved, trading off data loss and complexity to attain it. For example,</t>
        <ul spacing="compact">
          <li>It may make sense to use FEC <xref target="RFC6363"/> and codec-level packet loss concealment <xref target="RFC6716"/>, rather than selectively retransmitting only lost packets. These mechanisms use more bytes, but do not require multiple round trips in order to recover from packet loss.</li>
          <li>It's generally infeasible to use congestion control schemes like BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/> in many deployments, since BBR's probing mechanisms rely on temporarily inducing delay, and then amortize the consequences of that induced delay over multiple RTTs.</li>
        </ul>
        <t>This may help to explain why interactive use cases have typically relied on protocols such as RTP <xref target="RFC3550"/>, which provide low-level control of packetization and transmission, with additional support for retransmission as an optional extension.</t>
        <t>To provide an overview of interactive use cases, we can consider a conferencing session comprised of:</t>
        <ul spacing="compact">
          <li>Multiple emitters, publishing on multiple tracks (audio, video tracks and at different qualities)</li>
          <li>A media switch, sourcing tracks that represent a subset of the tracks from across all
the emitters. Such a subset might comprise tracks for the top 5 speakers at
higher qualities, plus tracks for the rest of the emitters at lower qualities.</li>
          <li>Multiple receivers, with varied receiving capacity (bandwidth limited), each subscribing to a subset of the tracks</li>
        </ul>
        <artwork><![CDATA[
                                   SFU:t1, E1:t2, E3:t6
 .───.  E1: t1,t2,t3,t4                          .───.
( E1  )─────┐                           ┌────▶ ( R1  )
 `───'      │                           │       `───'
            │                           │
            └───────▶─────────┐         │
                     │         │────────┘
 .───.  E2: t1,t2    │   SFU   │   SFU:t1,E1:t2 .───.
( E2  )─────────────▶│         │──────────────▶( R2  )
 `───'               │         │                `───'
           ┌────────▶└─────────┴─────────┐
           │                             │
           │                             │
           │                             │
           │                             │
 .───.     │                             │       .───.
( E3  )────┘                             └─────▶( R3  )
 `───'   E3: t1,t2,t3,t4,t5,t6          E3: t2,  `───'
                                        E1: t2,
                                        E2: t2,
                                        SFU: t1
]]></artwork>
        <t>This setup relies on the following functionalities:</t>
        <ul spacing="compact">
          <li>Media Switches source new tracks but retain the media payload from
the original emitters. This implies publishing new Track IDs
sourced from the SFU, with object payload unchanged from the
original emitters.</li>
          <li>Media Switches propagate a subset of tracks as-is from the emitters
to the subscribers. This implies that Track IDs remain unchanged between
the emitters and the receivers.</li>
          <li>Subscribers explicitly request one or more media tracks in appropriate qualities and
dynamically move between qualities during the course of the session.</li>
        </ul>
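        <t>The media-switch behaviour above can be sketched as follows. This is a hypothetical illustration, not a definition of the MOQ protocol; the names Track and media_switch, and the active-speaker policy, are invented for this example. The switch propagates a subset of emitter tracks with their Track IDs unchanged, and sources new tracks under its own ID while leaving the object payloads untouched:</t>
        <sourcecode type="python"><![CDATA[
from dataclasses import dataclass

@dataclass(frozen=True)
class Track:
    emitter: str    # e.g. "E1", or "SFU" for switch-sourced tracks
    name: str       # e.g. "t2"
    payload: bytes  # media objects; never modified by the switch

def media_switch(tracks, active_speakers, sourced_name="t1"):
    # Propagate active speakers' tracks as-is: Track IDs unchanged
    # between the emitters and the receivers.
    forwarded = [t for t in tracks if t.emitter in active_speakers]
    # Source one new track: a new Track ID scoped to the SFU, with
    # the object payload unchanged from the original emitter.
    sourced = [Track("SFU", sourced_name, t.payload)
               for t in tracks if t.name == sourced_name][:1]
    return sourced + forwarded

tracks = [Track("E1", "t1", b"a"), Track("E1", "t2", b"b"),
          Track("E3", "t6", b"c")]
out = media_switch(tracks, active_speakers={"E1", "E3"})
ids = {(t.emitter, t.name) for t in out}
# The SFU-sourced track carries the original payload under a new ID;
# forwarded tracks keep their original (emitter, name) identifiers.
assert ("SFU", "t1") in ids and ("E1", "t2") in ids and ("E3", "t6") in ids
]]></sourcecode>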
        <t>Another topology for the conferencing use case is to use
multiple distribution networks for delivering the media, with
media switching functionality running across distribution
networks, and with these media functions forming part of the
core distribution network, as shown below.</t>
        <artwork><![CDATA[
                   Distribution Network A
 E1: t1,t2,t3,t4
                                    SFU:t1, E1:t2, E3:t6
    .───.        ┌────────┐      ┌────────┐      .───.
   ( E1  )───────│ Relay  │──────│ Relay  ├───▶ ( R1  )
    `───'        └─────┬──┘      └──┬─────┘      `───'
                       │ ┌────────┐ │
   E2: t1,t2           └─┤ Relay  │─┘
             ┌──────────▶└────┬───┘         SFU:t1,E1:t2
    .───.    │                 │                  .───.
   ( E2  )───┘                 │              ┌─▶( R2  )
    `───'                      │              │   `───'
                   ┌────────┐  │   ┌────────┬─┘
             ──────┤ Relay  │──┴───│ Relay  │─┐
             |     └─────┬──┘      └──┬─────┘ │
             |           │ ┌────────┐ │       │
             |           └─┤ Relay  │─┘       │
    .───.    |             └────────┘         │   .───.
   ( E3  )───┘         Distribution Network B └─▶( R3  )
    `───'                                         `───'
     E3: t1,t2,t3,t4,t5,t6                        E3: t2,
                                                  E1: t2,
                                                  E2: t2,
                                                 SFU: t1
]]></artwork>
        <t>Such a topology needs to meet all the properties listed for the
homogeneous topology above; however, having multiple distribution networks,
and relying on the distribution networks to carry out the media delivery,
brings further requirements on the data model: tracks must be
uniquely identifiable across the distribution networks, not
just within a single distribution network.</t>
        <section anchor="gaming">
          <name>Gaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>In this use case the computation for running a video game (single or
multiplayer) is performed externally on a hosted service, with user inputs from
input devices sent to the server, and media, usually video and audio of gameplay
returned. This may also include the client receiving other types of signaling,
such as triggers for haptic feedback, as well as the client sending media such
as microphone audio for in-game chat with other players. Latency may be
particularly important in this use case, as updates to video occur in response to
user input, with certain genres of games placing high requirements on
responsiveness and/or involving a high frequency of user input.</t>
        </section>
        <section anchor="remdesk">
          <name>Remote Desktop</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>Similar to the gaming use case in many of its requirements, but here a user wishes to
observe or control the graphical user interface of another computer through
local user interfaces. Latency requirements are marginally different from the
gaming use case, as users may tolerate greater input latency. This use case may
also include a need to support signalling and/or transmission of files, or
access to devices connected to the user's computer.</t>
        </section>
        <section anchor="vidconf">
          <name>Video Conferencing/Telephony</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">Many to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>In the Video Conferencing/Telephony use case, media is both sent and received.
This use case typically includes audio and video media, and may also include one or more  additional media types, such as "screen sharing" and other content such as slides, documents, or video presentations.
This may be done
client/server, or peer-to-peer with a many-to-many relationship between
senders and receivers. The target latency may be as large as 200 ms or more for
some media types such as audio, but other media types in this use case have much
more stringent latency targets.</t>
        </section>
      </section>
      <section anchor="lm-media">
        <name>Live Media</name>
        <t>The use cases in this section, like those in <xref target="interact"/>, set some expectations to minimise high and/or highly variable latency; however, their key difference is that they are seldom bi-directional, since they are based on mass consumption of media, or on contribution of media into a platform for syndication or distribution. Loss is more noticeable than latency, and these use cases may accept slightly more latency in exchange for a greater assurance of delivery.</t>
        <section anchor="lmingest">
          <name>Live Media Ingest</name>
          <t>In a typical live video ingest, the broadcast client (for example, an Open Broadcaster Software (OBS) client)
publishes the video content to an ingest server under a provider domain.</t>
          <artwork><![CDATA[
               E1: t1,t2,t3   ┌──────────┐
 .─────────────.              │          │
(    Emitter    )────────────▶│  Ingest  │
 `─────────────'              │  Server  │
                              │          │
                              └──────────┘

]]></artwork>
          <t>The Track IDs are scoped to the broadcast for the application under
a provider domain.</t>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a source for onward handling into a distribution
platform. The media may comprise multiple audio and/or video sources.
Bitrates may either be static or set dynamically through signaling of connection
information (bandwidth, latency) based on data sent by the receiver, and the
media may go through additional steps of transcoding or transformation before
being distributed.</t>
        </section>
        <section anchor="lmsynd">
          <name>Live Media Syndication</name>
          <t><strong>Note: We need to add a description for Live Media Syndication, matching the descriptions of <xref target="lmingest"/> and <xref target="lmstream"/></strong></t>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is sent onwards to another platform for further distribution, and is not
directly used for presentation to an audience, though it may be monitored by
operational systems and/or people. The media may be compressed down to a bitrate
lower than the source, but higher than the final distribution output. Streams may be
redundant, with failover mechanisms in place.</t>
        </section>
        <section anchor="lmstream">
          <name>Live Media Streaming</name>
          <t>In a reference live streaming example shown below,
the emitter streams one or more tracks
as part of the application operated under a
provider domain, which is then distributed
to multiple clients using some form of distribution
server operating under the same provider domain,
over a content distribution network.</t>
          <artwork><![CDATA[
                                                                 DS: t1,t2
                                                                   .───.
                                                          ┌──────▶( S1  )
                                                          │        `───'
                                                          │
        E1: t1,t2,t3 ┌──────────┐    ┌──────────────┬─────┘     DS: t1
.─────────.          │          │    │              │         .───.
(   E1    )─────────▶│  Ingest  ├────┤ Distribution │───────▶( S2  )
`─────────'          │  Server  │    │      Server  |         `───'
                     │          │    │              │
                     └──────────┘    └──────────────┴─────┐
                                                          │        .───.
                                                          └──────▶( S3  )
                                                                   `───'
                                                                DS: t1,t2, t3
]]></artwork>
          <t>In this setup, one can visualize the ingest and
distribution as two separate systems operating
within a given provider domain. One implication of this organization is that the
Track IDs used by the emitter need not match
the ones referred to by the subscribers.
This can be the case because the distribution server sources the media as
new tracks (for instance, if the media is transcoded after ingest).</t>
          <t><strong>Note: the previous paragraph describes the relationship between Live Media Ingest and Live Media Streaming - it might better go in a separate subsection, and should also include Live Media Syndication</strong></t>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
            </tbody>
          </table>
          <t>In Live Media Streaming, media might be received from a live broadcast or stream either as a broadcast
with fixed duration or as ongoing 24/7 output. The number of receivers may vary
depending on the type of content; breaking news events may see sharp, sudden
spikes, whereas sporting and entertainment events may see a more gradual ramp
up with a higher sustained peak with some changes based on match breaks or
interludes.</t>
          <t>Such broadcasts may comprise multiple audio or video outputs with different
codecs or bitrates, and may also include other types of media essence such as
subtitles or timing signalling information (e.g. markers to indicate a change of
behaviour in the client, such as advertisement breaks). A "live rewind"
capability may also be provided, in which a window of media between the live edge
and the trailing edge is made available for clients to play back, either because
the local player falls behind the live edge or because the viewer wishes to play
back from a point in the past.</t>
        </section>
      </section>
      <section anchor="hybrid-interactive-and-live-media">
        <name>Hybrid Interactive and Live Media</name>
        <t>For the video conferencing/telephony use case, there can be additional scenarios
where the audience greatly outnumbers the concurrent active participants, but
any member of the audience could participate.
This use case can have an audience as large as Live Media Streaming as described in <xref target="lmstream"/>, but it also relies on the interactivity and bi-directionality of Video Conferencing as described in <xref target="vidconf"/>. For this reason, this type of use case can be considered a "hybrid". There can also be additional functionality that overlaps between the two, such as "live rewind" or recording abilities.</t>
        <t>Another consideration is the limit of "human bandwidth" - as the number of
sources included in a given session increases, the amount of media that can
usefully be understood by a single person diminishes. To put it more simply - too
many people talking at once is much more difficult to understand than one person
speaking at a time, and this varies with the audience and circumstance.
Consequently, this will place limits on the number of potential
concurrent or semi-concurrent bidirectional communications that occur.</t>
      </section>
    </section>
    <section anchor="req-sec">
      <name>Requirements for Protocol Work</name>
      <t>Our goal in this section is to understand the requirements that result from the use cases described in <xref target="overallusecases"/>.</t>
      <section anchor="notes-to-the-reader">
        <name>Notes to the Reader</name>
        <ul spacing="compact">
          <li>Note: the requirements in this document are intended to be useful to MOQ working group participants in recognizing constraints, and useful to readers outside the MOQ working group in understanding the high-level functionality of the MOQ protocol, as they consider implementation and deployment of systems that rely on it.</li>
        </ul>
      </section>
      <section anchor="proto-cons">
        <name>Specific Protocol Considerations</name>
        <t>In order to support the various topologies and patterns of media flows with the protocol, the protocol <bcp14>MUST</bcp14> support both sending and receiving of media streams, as separate actions or concurrently in a given connection.</t>
        <section anchor="quic-capabilities-and-properties">
          <name>QUIC Capabilities and Properties</name>
          <t>Using QUIC as the underlying protocol brings capabilities and functionality relevant to many of the requirements, such as connection migration and reuse, greater control over packet reliability, congestion control, reordering and flow directionality, multiplexing, and avoidance of head-of-line blocking. Utilising aspects of the protocol in ways that would necessitate reimplementing capabilities already present in other parts of QUIC should only be done if the requirements prove incompatible with them.</t>
        </section>
        <section anchor="delivery-assurance-vs-delay">
          <name>Delivery Assurance vs. Delay</name>
          <t>Different use cases have varying requirements with respect to the tradeoff between assurance of delivery and delay - in some (such as telephony) it may be acceptable to drop some or all of the media as a result of changes in network connectivity, throughput, or congestion, whereas in other scenarios all media must arrive at the receiving end, even if delayed. There <bcp14>SHOULD</bcp14> be support for some means for a connection to signal which media may be abandoned, with the behaviours of both senders and receivers defined when delay or loss occurs. Where multiple variants of media are sent, this <bcp14>SHOULD</bcp14> be done in a way that provides pipelining, so that each media stream may be processed in parallel.</t>
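          <t>One way such a signal could be carried is sketched below. This is a hypothetical illustration only; the MediaObject fields and the drop policy are invented for this example and are not taken from any MOQ specification. Each media object carries a delivery deadline and a flag marking whether it may be abandoned; a sender then drops only abandonable objects whose deadline has passed, while reliable objects remain queued even when late:</t>
          <sourcecode type="python"><![CDATA[
from dataclasses import dataclass

@dataclass
class MediaObject:
    seq: int           # sequence number within the stream
    deadline_ms: int   # latest useful delivery time
    abandonable: bool  # True: may be dropped once the deadline passes

def prune(queue, now_ms):
    # Keep every object that is still within its deadline, and every
    # object marked as requiring delivery regardless of delay.
    return [o for o in queue
            if o.deadline_ms > now_ms or not o.abandonable]

queue = [MediaObject(1, 100, True),   # late, abandonable: dropped
         MediaObject(2, 100, False),  # late, reliable: kept
         MediaObject(3, 500, True)]   # still timely: kept
remaining = prune(queue, now_ms=200)
assert [o.seq for o in remaining] == [2, 3]
]]></sourcecode>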
        </section>
        <section anchor="support-webtransportraw-quic-as-media-transport">
          <name>Support Webtransport/Raw QUIC as media transport</name>
          <t>There should be a degree of decoupling between MoQ itself and the underlying transport protocols, despite the "Q" in the name - in particular to provide future agility and to prevent potential ossification from being tied to specific version(s) of dependent protocols.</t>
          <t>Many of the use cases will be deployed in contexts where web browsers are the common application runtime; thus the use of existing protocols and APIs is desirable for implementations. Support for WebTransport <xref target="I-D.draft-ietf-webtrans-overview"/> will be defined, although implementations or deployments running outside browsers will not need to use WebTransport; thus support for the protocol running directly atop QUIC should also be provided.</t>
          <t>Considerations should be made clear with respect to modes where WebTransport "falls back" to using HTTP/2 or another future non-QUIC-based protocol.</t>
        </section>
        <section anchor="MOQ-negotiation">
          <name>Media Negotiation &amp; Agility</name>
          <t>All entities which directly process media will support a variety of media codecs, both codecs which exist now and codecs that will be defined in the future. Consequently, the protocol will provide the capability for sender and receiver to negotiate which media codecs will be used in a given session.</t>
          <t>The protocol <bcp14>SHOULD</bcp14> remain codec agnostic as much as possible, and should allow for new media formats and codecs to be supported without change in specification.</t>
          <t>The working group should consider whether a minimal, suggested set of codecs should be supported for interoperability purposes; however, this <bcp14>SHOULD</bcp14> avoid being strict, to simplify use cases and deployments that do not require certain capabilities, e.g. telephony, which may not require video codecs.</t>
        </section>
      </section>
      <section anchor="media-data-model">
        <name>Media Data Model</name>
        <t>As the protocol will handle many different media types, classifications, and variations, a model should be defined that all entities use when describing media, with a clear addressing scheme. This model should factor in at least the following, while remaining open to future types:</t>
        <dl>
          <dt>Media Types</dt>
          <dd>
            <t>Video, audio, subtitles, ancillary data</t>
          </dd>
          <dt>Classifications</dt>
          <dd>
            <t>Codec, language, layers</t>
          </dd>
          <dt>Variations</dt>
          <dd>
            <t>For each stream, the available resolution(s) and bitrate(s). Each variant should be uniquely identifiable and addressable.</t>
          </dd>
        </dl>
        <t>Consideration should be given to addressing individual audio/video frames as opposed to groups, and to how the model signals prioritisation, media dependency, and cacheability to all entities.</t>
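        <t>The data model above can be sketched in code. This is an illustrative sketch only: the names ("Track", "Variant") and the toy path-style addressing scheme are assumptions for the example, not protocol terms.</t>
        <sourcecode type="python"><![CDATA[
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Variant:
    # Each variant is uniquely identifiable and addressable.
    variant_id: str
    resolution: str      # e.g. "1920x1080"; empty for audio-only tracks
    bitrate_kbps: int

@dataclass
class Track:
    media_type: str      # "video", "audio", "subtitles", "ancillary"
    codec: str           # classification: codec
    language: str        # classification: language ("" if unknown)
    variants: list = field(default_factory=list)

    def address(self, variant_id: str) -> str:
        # Toy addressing scheme: media-type/codec/variant-id.
        return f"{self.media_type}/{self.codec}/{variant_id}"

track = Track("video", "av1", "",
              [Variant("hd", "1280x720", 3000),
               Variant("fhd", "1920x1080", 6000)])
print(track.address("fhd"))  # video/av1/fhd
]]></sourcecode>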
      </section>
      <section anchor="pub-media">
        <name>Publishing Media</name>
        <t>Many of the use cases have bi-directional flows of media, with clients both sending and receiving media concurrently. The protocol should therefore take a unified approach to connection negotiation and signalling for sending and receiving media, both at the start of a session and throughout its lifetime, including describing when a flow of media is unsupported (e.g. a live media server signalling that it does not support receiving from a given client).</t>
        <t>In the initiation of a session, both client and server must negotiate in order to agree on a variety of details before media can move in either direction:</t>
        <ul spacing="compact">
          <li>Is the client authenticated and subsequently authorised to initiate a connection?</li>
          <li>What media is available, and what are the parameters of each, such as codec, bitrate, and resolution?</li>
          <li>Can media move bi-directionally, or is it unidirectional only?</li>
        </ul>
      </section>
      <section anchor="naming">
        <name>Naming and Addressing Media Resources</name>
        <t>As multiple streams of media may be available for concurrent sending, such as multiple camera views or audio tracks, a means both of identifying the technical properties of each resource (codec, bitrate, etc.) and of providing a useful identification for playback should be part of the protocol. A base level of optional metadata, e.g. the known language of an audio track or the name of a participant's camera, should be supported, but further extended metadata about the contents of the media or its ontology should not be supported.</t>
        <section anchor="scoped-to-an-origindomain-application-specific">
          <name>Scoped to an Origin/Domain, Application specific.</name>
        </section>
        <section anchor="allows-subscribing-or-requesting-for-the-data-matching-the-name-by-the-consumers">
          <name>Allows subscribing or requesting for the data matching the name by the consumers</name>
        </section>
      </section>
      <section anchor="Packaging">
        <name>Packaging Media</name>
        <t>Packaging of media describes how raw media will be encapsulated. There are at a high level two approaches to this:</t>
        <ul spacing="compact">
          <li>Within the protocol itself, where the protocol defines the ancillary data required to decode each media type the protocol supports.</li>
          <li>A common encapsulation format, such as CMAF <xref target="CMAF"/> or other ISOBMFF <xref target="ISOBMFF"/> subsets, which defines a generic method for all media and handles ancillary decode information; there are advantages to using an existing generic media packaging format.</li>
        </ul>
        <t>The working group must agree on which approach should be taken to the packaging of media, taking into consideration the various technical trade-offs that each approach provides.</t>
        <ul spacing="compact">
          <li>If the working group decides to describe media encapsulation as part of the MOQ protocol, adding or changing an encapsulation will require a new version of the MOQ protocol, in order to signal to the receiver that a new media encapsulation format may be present.</li>
          <li>If the working group decides to use a common encapsulation format, the mechanisms within the protocol <bcp14>SHOULD</bcp14> allow for new encapsulation formats to be used. Without encapsulation agility, adding or changing the way media is encapsulated will also require a new version of the MOQ protocol, to signal the receiver that a new media encapsulation format may be present.</li>
        </ul>
        <t>MOQ protocol specifications will provide details on the supported media encapsulation(s).</t>
        <section anchor="handling-scalable-video-codecs">
          <name>Handling Scalable Video Codecs</name>
          <t>Some video codecs have a complex structure. Consider an
application using both temporal layering and spatial layering. It would
send for example:</t>
          <ul spacing="compact">
            <li>an object representing the 30 fps frame at 720p</li>
            <li>an object representing the spatial enhancement of that frame to 1080p</li>
            <li>an object representing the 60 fps frame at 720p</li>
            <li>an object representing the spatial enhancement of that 60 fps frame to
1080p</li>
          </ul>
          <t>The encoding of the 30 fps frame depends on the previous 30 fps frames,
but not on any 60 fps frame. The encoding of the 60 fps frame depends on the
previous 30 fps frames, and possibly also on the previous 60 fps frames
(there are options). The encoding of the spatial enhancement depends on
the corresponding 720p frames, and also on the previous 1080p
enhancements. Add a couple of layers, and the
expression of dependencies can be very complex. The AV1 documentation for
example provides schematics of a video stream with 3 frame rate options at
15, 30 and 60 fps, and two definition options, with a complex graph of
dependencies. Other video encodings have similar provisions. They may
differ in details, but one constant holds:
if an object is dropped, then all objects that depend on it
are useless.</t>
          <t>Of course, we could encode these dependencies as properties of the object
being sent, stating for example that "object 17 can only be decoded if
objects 16, 11 and 7 are available." However, this approach leads to a lot
of complexity in relays. We believe that a linear approach is
preferable, using attributes of objects like delivery order or priorities.</t>
        </section>
        <section anchor="application-choice-for-ordering">
          <name>Application choice for ordering</name>
          <t>The conversion from dependency graph to linear ordering is not unique.
The simple graph in our example could be ordered either "frame rate first"
versus "definition first". If the application chooses frame rate first,
the policy is expressed as "in case of congestion, drop the spatial
enhancement objects first, and if that is not enough drop the 60 fps frames".
If the application chooses "definition first", the policy becomes
"drop the 60 fps frames and their corresponding 1080p enhancement first,
and if that is not enough also drop the 1080p enhancement of the 30 fps frames".</t>
          <t>More complex graphs will allow for more complex policies, maybe for example
"15 fps at 720p as a minimum, but try to ensure at least 30fps, then try to
ensure 1080p, and if there is bandwidth available forward 60 fps at 1080p".
Such linearization requires choices, and the choices should be
made by the application, based on the user experience requirements of
the application.</t>
          <t>The relays will not understand all the variations of the media, so the applications will need a way to give the relays the information required to correctly decide which data is sent first.</t>
        </section>
        <section anchor="linear-ordering-using-priorities">
          <name>Linear ordering using priorities</name>
          <t>We propose to express dependencies using a combination of object number and
object priority.</t>
          <t>Let's consider our example of an encoding providing both spatial enhancement and
frame rate enhancement options, and suppose that the application has expressed
a preference for frame rate. We can express that policy as follows:</t>
          <ul spacing="compact">
            <li>the frames are ordered first by time and when the time is the same by resolution.
This determines the "object number" property.</li>
            <li>the frame priority will be set to 1 for the 720p 30 fps frame,
2 for the 720p 60 fps frames, and 3 for all the enhancement frames.</li>
          </ul>
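          <t>A minimal sketch of the "frame rate first" assignment above. The layer names and tuple layout are illustrative assumptions, but the priority values follow the text (1 for the 720p 30 fps frame, 2 for the 720p 60 fps frames, 3 for enhancements), and object numbers come from ordering by time and then by resolution:</t>
          <sourcecode type="python"><![CDATA[
PRIORITY = {"720p30": 1, "720p60": 2, "enh1080": 3}

def assign_numbers_and_priorities(frames):
    # frames: list of (time, resolution, layer) tuples.
    # Object numbers follow the ordering: time first, then resolution.
    ordered = sorted(frames, key=lambda f: (f[0], f[1]))
    return [(number, PRIORITY[layer], time, res)
            for number, (time, res, layer) in enumerate(ordered)]

frames = [(0, 720, "720p30"), (0, 1080, "enh1080"),
          (16, 720, "720p60"), (16, 1080, "enh1080")]
# Object numbers 0..3; priorities 1, 3, 2, 3.
for obj in assign_numbers_and_priorities(frames):
    print(obj)
]]></sourcecode>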
          <t>If the application instead expressed a preference for definition, object numbers
will be assigned in the same way, but the priorities will be different:</t>
          <ul spacing="compact">
            <li>the frame priority will be set to 1 for the 720p 30 fps I frames, 2
for the 720p 30 fps P and B frames,
3 and 4 for the 1080p enhancements of the 30 fps frames, and 5 and 6 for the 60 fps
frames and their enhancements.</li>
          </ul>
          <t>Object numbers and priorities will be set by the publisher of the track, and
will not be modified by the relays.</t>
        </section>
      </section>
      <section anchor="med-consumption">
        <name>Media Consumption</name>
        <t>Receivers <bcp14>SHOULD</bcp14> be able, as part of session negotiation <xref target="MOQ-negotiation"/>, to specify which media to receive, not just with respect to the media format and codec, but also the variant thereof, such as resolution or bitrate.</t>
      </section>
      <section anchor="MOQ-network-entities">
        <name>Relays, Caches, and other MOQ Network Elements</name>
        <section anchor="intervals-and-congestion">
          <name>Intervals and congestion</name>
          <t>It is possible to use groups as units of congestion control. When the
sending strategy is understood, the objects in the group can be
assigned sequence numbers and drop priorities that capture the encoding dependencies,
such that:</t>
          <ul spacing="compact">
            <li>an object can only have dependencies with other objects in the same group,</li>
            <li>an object can only have dependencies with other objects with lower sequence numbers,</li>
            <li>an object can only have dependencies with other objects with lower or equal drop priorities.</li>
          </ul>
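          <t>The three rules above can be checked mechanically. The following sketch is illustrative only (the function name and the (sequence, drop_priority, dependencies) tuple layout are assumptions); it validates one group against the rules:</t>
          <sourcecode type="python"><![CDATA[
def valid_group(objects):
    # objects: list of (seq, drop_priority, deps), where deps lists the
    # sequence numbers of objects this object depends on.
    by_seq = {o[0]: o for o in objects}
    for seq, prio, deps in objects:
        for d in deps:
            if d not in by_seq:        # rule 1: same group only
                return False
            dep_seq, dep_prio, _ = by_seq[d]
            if dep_seq >= seq:         # rule 2: lower sequence numbers only
                return False
            if dep_prio > prio:        # rule 3: lower or equal drop priority
                return False
    return True

group = [(0, 1, []), (1, 2, [0]), (2, 1, [0]), (3, 2, [1, 2])]
print(valid_group(group))                      # True
print(valid_group([(0, 2, []), (1, 1, [0])]))  # False: breaks rule 3
]]></sourcecode>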
          <t>These simple rules enable real-time congestion control decisions at relays and other nodes.
The main drawback is that if a packet with a given drop priority is actually dropped,
all objects with higher sequence numbers and higher or equal drop priorities in the
same group must be dropped. If the group duration is long, this means that the quality
of experience may be lowered for a long time after a brief congestion. If the group
duration is short, this can produce a jarring effect in which the quality
of experience drops periodically at the tail of the group.</t>
        </section>
        <section anchor="pull-push">
          <name>Pull &amp; Push</name>
          <t>To enable use cases where receivers may wish to address a particular time in the media, in addition to having the most recently produced media available, both "pull" and "push" of media <bcp14>SHOULD</bcp14> be supported, with the consideration that producers and intermediaries <bcp14>SHOULD</bcp14> also signal what media is available (commonly referred to as a "DVR window"). Behaviours around cache durations for each MoQ entity should be defined.</t>
        </section>
        <section anchor="relay-behavior">
          <name>Relay behavior</name>
          <t>In case of congestion, the relay
will use the priorities to selectively drop the "least important" objects:</t>
          <ul spacing="compact">
            <li>if congestion is noticed, the relay will first drop the least important
layer. In our example, that would mean the objects marked at
priority 6. The relay will drop all objects marked at that priority,
from the first dropped object to the end of the group.</li>
            <li>if congestion persists despite dropping a first layer, the relay will
start dropping the next layer, in our example the objects marked at
priority 5.</li>
            <li>if congestion still persists after dropping all but the highest priority
layer, the relay will have to close the group, and start relaying
the next group.</li>
          </ul>
          <t>When dropping objects within the same priority:</t>
          <ul spacing="compact">
            <li>higher object numbers in the same group, which are later in the group,
are "less important" and more likely to be dropped than objects in the
same group with a lower object number. Objects in a previous group are
"less important" than objects in the current group and <bcp14>MAY</bcp14> be dropped
ahead of objects in the current group.</li>
          </ul>
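          <t>The drop policy described above amounts to a simple ordering: shed whole priority layers starting from the least important (highest numeric priority), and within a layer drop higher object numbers first. A minimal sketch, with illustrative names:</t>
          <sourcecode type="python"><![CDATA[
def drop_order(objects):
    # objects: list of (object_number, priority) within one group.
    # Returns the objects sorted from "drop first" to "drop last".
    return sorted(objects, key=lambda o: (-o[1], -o[0]))

group = [(0, 1), (1, 3), (2, 2), (3, 3)]
print(drop_order(group))  # [(3, 3), (1, 3), (2, 2), (0, 1)]
]]></sourcecode>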
          <t>The specification above assumes that the relay can detect the onset
of congestion, and has a way to drop objects. There are several ways to
achieve that result, such as sending all objects of a group in
a single QUIC stream and making explicit action at the time of
relaying, or mapping separate priority layers into different QUIC streams
and marking these streams with different priorities. The exact
solution will have to be defined in a draft that specifies transport
priorities.</t>
        </section>
        <section anchor="high-loss-networks">
          <name>High Loss Networks</name>
          <t>Web conferencing systems are used on networks with well over 20% packet
loss, and when this happens, it is often on connections with relatively
large round-trip times. In these situations, forward error correction or
redundant transmissions are used to provide a reasonable user
experience. Often, video is turned off in these conditions. There are multiple machine
learning based audio codecs in development that target a 2 to 3 kbps
rate.</t>
          <t>This can result in scenarios where very small audio objects are sent at
a rate of several hundred packets per second over a network with a high
loss rate.</t>
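          <t>As a back-of-the-envelope illustration of why redundant transmission helps here: with an (assumed) independent loss probability p, sending each tiny audio object n times gives a delivery probability of 1 - p^n. Real networks with bursty loss will do worse than this simple model suggests.</t>
          <sourcecode type="python"><![CDATA[
def delivery_probability(p_loss, copies):
    # Probability that at least one of `copies` independent
    # transmissions of an object arrives.
    return 1.0 - p_loss ** copies

for n in (1, 2, 3):
    print(n, round(delivery_probability(0.20, n), 4))
# 1 -> 0.8, 2 -> 0.96, 3 -> 0.992
]]></sourcecode>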
        </section>
        <section anchor="interval-between-access-points">
          <name>Interval between access points</name>
          <t>In streaming scenarios, there is an important emphasis on resynchronization, characterized by a short distance between "access points". These can be used for features like fast-forward or rewind, which are common in non-real-time streaming. For real-time streaming experiences such as watching a sports event, frequent access points allow "channel surfers" to quickly join the broadcast and enjoy the experience. The interval between these access points will often be just a few seconds.</t>
          <t>In video encoding, each access point is mapped to a fully encoded frame
that can be used as reference for the "group of pictures". The encoding of
these reference frames is typically much larger than the differential
encoding of the following frames. This creates a peak of traffic at the
beginning of the group. This peak is much easier to absorb in streaming
applications that tolerate higher latencies than interactive video
conferences. In practice, many real time conferences tend to use much
longer groups, resulting in higher compression ratios and smoother
bandwidth consumption along with a way to request the start of a new
group when needed. Other real time conferences tend to use very short
groups and just wait for the next group when needed.</t>
          <t>Of course, having longer groups creates other issues. Realtime conferences also need to accommodate the occasional latecomer, or the disconnected user who wants to resynchronize after a network event. This drives a need for synchronization "between access points". For example, rather than waiting for 30 seconds before connecting, the user might quickly download the "key" frames of the past 30 seconds and replay them in order to "synchronize" the video decoder.</t>
        </section>
        <section anchor="media-insertion-and-redirection">
          <name>Media Insertion and Redirection</name>
          <t>In all of the applicable use cases defined in <xref target="overallusecases"/>, it may be necessary for consumers to be aware of changes to the source of media being inserted, or to be instructed to consume media from a different source. This may be done for the insertion of advertising or for operational movement of consumers, amongst other reasons. In the media insertion scenario, an existing stream being consumed may change as a result of a different source being spliced in, which necessitates resetting the decoder, as parameters such as video frame rate and image resolution may have changed. For redirection, consumers may be signalled to consume media from a different source, which may also require re-initialization of the decoder.</t>
          <t>In both of these scenarios, triggering may occur either through an event provided in the media such as a <xref target="SCTE-35"/> marker, or through an external trigger. Both should be supported.</t>
        </section>
      </section>
      <section anchor="MOQ-security">
        <name>Security</name>
        <section anchor="authentication-authorisation">
          <name>Authentication &amp; Authorisation</name>
          <t>Whilst QUIC, and by extension TLS, supports mutual authentication through client and server presenting certificates and performing validation, this is infeasible in many use cases where provisioning of client TLS certificates is unsupported or impractical. Thus, support for a primitive method of authentication between MoQ entities <bcp14>SHOULD</bcp14> be included, noting that implementations and deployments should determine which authorisation model, if any, is applicable.</t>
        </section>
        <section anchor="MOQ-media-encryption">
          <name>Media Encryption</name>
          <t>Much of the early discussion about MOQ security was not entirely coherent. Some contributors pushed for "end-to-end security", and some contributors pushed for the ability of intermediate nodes to have sufficient visibility into media payloads to accomplish the responsibilities those intermediate nodes were given. Some contributors may have pushed for both, at various times. It is worthwhile to clarify what "security" means in a MOQ context.</t>
          <t>Generally, there are three aspects of media security:</t>
          <ul spacing="compact">
            <li>Digital Rights Management, which refers to the authorization of receivers to decode a media stream.</li>
            <li>Sender-to-Receiver Media Security, which refers to the ability of media senders and receivers to transfer media while protected from unauthorized intermediates and observers, and</li>
            <li>Node-to-node Media Security, which refers to security when authorized intermediaries are needed to transform media into a form acceptable to authorized receivers. For example, this might refer to a video transcoder between the media sender and receiver.</li>
          </ul>
          <t>"End-to-end security" describes the use of encryption of one or more media stream(s) over an end-to-end path, to provide confidentiality in the presence of any intermediates or observers and prevent or restrict ability to decrypt that media.</t>
          <t>"Node-to-node security" refers to the use of encryption of one or more media stream(s) over a path segment connecting two MOQ nodes, that makes up part of the end-to-end path between the MOQ sender and ultimate MOQ receiver, to provide confidentiality in the presence of unauthorized intermediates or observers and prevent or restrict ability to decrypt that media.</t>
          <t>Many MOQ deployment models rely on intermediate nodes, and these intermediate nodes may have a variety of responsibilities, including, but not limited to,</t>
          <ul spacing="compact">
            <li>rate adaptation based on media metadata</li>
            <li>routing media based on the characteristics of the media</li>
            <li>caching media</li>
            <li>allowing "watch in-progress broadcasts from the beginning", "instant replay" and "fast forward"</li>
            <li>transcoding media</li>
          </ul>
          <t>Some of these responsibilities require authorization to see more media headers and even media payload than others. The protocol <bcp14>SHOULD</bcp14> allow MOQ intermediate nodes to perform a variety of responsibilities, without having access to media headers and/or media payloads that they do not require to carry out their responsibilities.</t>
          <t>Support for encrypted media <bcp14>SHOULD</bcp14> be available in the protocol to support the above use cases, with key exchange and decryption authorisation handled externally. The protocol <bcp14>SHOULD</bcp14> provide metadata for entities which process media to perform key exchange and decrypt.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no requests of IANA.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>As this document is intended to guide discussion and consensus, it introduces
no security considerations of its own.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="I-D.draft-ietf-moq-transport">
          <front>
            <title>Media over QUIC Transport</title>
            <author fullname="Luke Curley" initials="L." surname="Curley">
              <organization>Twitch</organization>
            </author>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Meta</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Victor Vasiliev" initials="V." surname="Vasiliev">
              <organization>Google</organization>
            </author>
            <date day="5" month="July" year="2023"/>
            <abstract>
              <t>   This document defines the core behavior for Media over QUIC Transport
   (MOQT), a media transport protocol over QUIC.  MOQT allows a producer
   of media to publish data and have it consumed via subscription by a
   multiplicity of endpoints.  It supports intermediate content
   distribution networks and is designed for high scale and low latency
   distribution.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-moq-transport-00"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="RFC3550">
          <front>
            <title>RTP: A Transport Protocol for Real-Time Applications</title>
            <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne"/>
            <author fullname="S. Casner" initials="S." surname="Casner"/>
            <author fullname="R. Frederick" initials="R." surname="Frederick"/>
            <author fullname="V. Jacobson" initials="V." surname="Jacobson"/>
            <date month="July" year="2003"/>
            <abstract>
              <t>This memorandum describes RTP, the real-time transport protocol. RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. RTP does not address resource reservation and does not guarantee quality-of- service for real-time services. The data transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality. RTP and RTCP are designed to be independent of the underlying transport and network layers. The protocol supports the use of RTP-level translators and mixers. Most of the text in this memorandum is identical to RFC 1889 which it obsoletes. There are no changes in the packet formats on the wire, only changes to the rules and algorithms governing how the protocol is used. The biggest change is an enhancement to the scalable timer algorithm for calculating when to send RTCP packets in order to minimize transmission in excess of the intended rate when many participants join a session simultaneously. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="64"/>
          <seriesInfo name="RFC" value="3550"/>
          <seriesInfo name="DOI" value="10.17487/RFC3550"/>
        </reference>
        <reference anchor="RFC6363">
          <front>
            <title>Forward Error Correction (FEC) Framework</title>
            <author fullname="M. Watson" initials="M." surname="Watson"/>
            <author fullname="A. Begen" initials="A." surname="Begen"/>
            <author fullname="V. Roca" initials="V." surname="Roca"/>
            <date month="October" year="2011"/>
            <abstract>
              <t>This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss. The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media. This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows. Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6363"/>
          <seriesInfo name="DOI" value="10.17487/RFC6363"/>
        </reference>
        <reference anchor="RFC6716">
          <front>
            <title>Definition of the Opus Audio Codec</title>
            <author fullname="JM. Valin" initials="JM." surname="Valin"/>
            <author fullname="K. Vos" initials="K." surname="Vos"/>
            <author fullname="T. Terriberry" initials="T." surname="Terriberry"/>
            <date month="September" year="2012"/>
            <abstract>
              <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances. It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s. Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6716"/>
          <seriesInfo name="DOI" value="10.17487/RFC6716"/>
        </reference>
        <reference anchor="RFC9000">
          <front>
            <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9000"/>
          <seriesInfo name="DOI" value="10.17487/RFC9000"/>
        </reference>
        <reference anchor="I-D.draft-cardwell-iccrg-bbr-congestion-control">
          <front>
            <title>BBR Congestion Control</title>
            <author fullname="Neal Cardwell" initials="N." surname="Cardwell">
              <organization>Google</organization>
            </author>
            <author fullname="Yuchung Cheng" initials="Y." surname="Cheng">
              <organization>Google</organization>
            </author>
            <author fullname="Soheil Hassas Yeganeh" initials="S. H." surname="Yeganeh">
              <organization>Google</organization>
            </author>
            <author fullname="Ian Swett" initials="I." surname="Swett">
              <organization>Google</organization>
            </author>
            <author fullname="Van Jacobson" initials="V." surname="Jacobson">
              <organization>Google</organization>
            </author>
            <date day="7" month="March" year="2022"/>
            <abstract>
              <t>This document specifies the BBR congestion control algorithm. BBR ("Bottleneck Bandwidth and Round-trip propagation time") uses recent measurements of a transport connection's delivery rate, round-trip time, and packet loss rate to build an explicit model of the network path. BBR then uses this model to control both how fast it sends data and the maximum volume of data it allows in flight in the network at any time. Relative to loss-based congestion control algorithms such as Reno [RFC5681] or CUBIC [RFC8312], BBR offers substantially higher throughput for bottlenecks with shallow buffers or random losses, and substantially lower queueing delays for bottlenecks with deep buffers (avoiding "bufferbloat"). BBR can be implemented in any transport protocol that supports packet-delivery acknowledgment. Thus far, open source implementations are available for TCP [RFC793] and QUIC [RFC9000]. This document specifies version 2 of the BBR algorithm, also sometimes referred to as BBRv2 or bbr2.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-cardwell-iccrg-bbr-congestion-control-02"/>
        </reference>
        <reference anchor="I-D.draft-kpugin-rush">
          <front>
            <title>RUSH - Reliable (unreliable) streaming protocol</title>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Facebook</organization>
            </author>
            <author fullname="Alan Frindell" initials="A." surname="Frindell">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jorge Cenzano Ferret" initials="J. C." surname="Ferret">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jake Weissman" initials="J." surname="Weissman">
              <organization>Facebook</organization>
            </author>
            <date day="10" month="May" year="2023"/>
            <abstract>
              <t>RUSH is an application-level protocol for ingesting live video. This document describes the protocol and how it maps onto QUIC.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-kpugin-rush-02"/>
        </reference>
        <reference anchor="I-D.draft-lcurley-warp">
          <front>
            <title>Warp - Live Media Transport over QUIC</title>
            <author fullname="Luke Curley" initials="L." surname="Curley">
              <organization>Twitch</organization>
            </author>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Meta</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Victor Vasiliev" initials="V." surname="Vasiliev">
              <organization>Google</organization>
            </author>
            <date day="13" month="March" year="2023"/>
            <abstract>
              <t>This document defines the core behavior for Warp, a live media transport protocol over QUIC. Media is split into objects based on the underlying media encoding and transmitted independently over QUIC streams. QUIC streams are prioritized based on the delivery order, allowing less important objects to be starved or dropped during congestion.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lcurley-warp-04"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-arch">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Fluffy Jennings" initials="C. F." surname="Jennings">
              <organization>Cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>This specification outlines the design for a media delivery protocol over QUIC. It aims at supporting multiple application classes with varying latency requirements including ultra low latency applications such as interactive communication and gaming. It is based on a publish/subscribe metaphor where entities publish and subscribe to data that is sent through, and received from, relays in the cloud. The information subscribed to is named such that this forms an overlay information centric network. The relays allow for efficient large scale deployments.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-arch-01"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-proto">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Fluffy Jennings" initials="C. F." surname="Jennings">
              <organization>Cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Christian Huitema" initials="C." surname="Huitema">
              <organization>Private Octopus Inc.</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>Recently new use cases have emerged requiring higher scalability of media delivery for interactive realtime applications and much lower latency for streaming applications and a combination thereof.</t>
              <t>draft-jennings-moq-arch specifies architectural aspects of QuicR, a media delivery protocol based on publish/subscribe metaphor and Relay based delivery tree, that enables a wide range of realtime applications with different resiliency and latency needs.</t>
              <t>This specification defines the protocol aspects of the QuicR media delivery architecture.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-proto-01"/>
        </reference>
        <reference anchor="I-D.draft-ietf-webtrans-overview">
          <front>
            <title>The WebTransport Protocol Framework</title>
            <author fullname="Victor Vasiliev" initials="V." surname="Vasiliev">
              <organization>Google</organization>
            </author>
            <date day="6" month="September" year="2023"/>
            <abstract>
              <t>The WebTransport Protocol Framework enables clients constrained by the Web security model to communicate with a remote server using a secure multiplexed transport. It consists of a set of individual protocols that are safe to expose to untrusted applications, combined with an abstract model that allows them to be used interchangeably.</t>
              <t>This document defines the overall requirements on the protocols used in WebTransport, as well as the common features of the protocols, support for some of which may be optional.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-webtrans-overview-06"/>
        </reference>
        <reference anchor="CMAF">
          <front>
            <title>Information technology — Multimedia application format (MPEG-A) — Part 19: Common media application format (CMAF) for segmented media</title>
            <author>
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="ISOBMFF">
          <front>
            <title>Information Technology - Coding Of Audio-Visual Objects - Part 12: ISO Base Media File Format</title>
            <author>
              <organization/>
            </author>
            <date year="2022"/>
          </front>
        </reference>
        <reference anchor="SCTE-35" target="https://www.scte.org/standards/library/catalog/scte-35-digital-program-insertion-cueing-message/">
          <front>
            <title>Digital Program Insertion Cueing Message (SCTE-35)</title>
            <author>
              <organization/>
            </author>
            <date year="2022"/>
          </front>
        </reference>
        <reference anchor="IESG-sdwg" target="https://www.ietf.org/about/groups/iesg/statements/support-documents/">
          <front>
            <title>Support Documents in IETF Working Groups</title>
            <author>
              <organization/>
            </author>
            <date year="2016" month="November"/>
          </front>
        </reference>
        <reference anchor="MOQ-charter" target="https://datatracker.ietf.org/wg/moq/about/">
          <front>
            <title>Media Over QUIC (moq)</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="September"/>
          </front>
        </reference>
        <reference anchor="WebTrans-charter" target="https://datatracker.ietf.org/wg/webtrans/about/">
          <front>
            <title>WebTransport (webtrans)</title>
            <author>
              <organization/>
            </author>
            <date year="2021" month="March"/>
          </front>
        </reference>
        <reference anchor="Prog-MOQ" target="https://datatracker.ietf.org/meeting/interim-2022-moq-01/materials/slides-interim-2022-moq-01-sessa-moq-use-cases-and-requirements-individual-draft-working-group-draft-00">
          <front>
            <title>Progressing MOQ</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="October"/>
          </front>
        </reference>
        <reference anchor="I-D.draft-nandakumar-moq-scenarios">
          <front>
            <title>Exploration of MoQ scenarios and Data Model</title>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Christian Huitema" initials="C." surname="Huitema">
              <organization>Private Octopus Inc.</organization>
            </author>
            <author fullname="Cullen Fluffy Jennings" initials="C. F." surname="Jennings">
              <organization>Cisco</organization>
            </author>
            <date day="13" month="March" year="2023"/>
            <abstract>
              <t>This document delineates a set of key scenarios and details the requirements that they place on the MoQ data model.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-nandakumar-moq-scenarios-00"/>
        </reference>
      </references>
    </references>
    <?line 752?>

<section anchor="acknowledgements">
      <name>Acknowledgements</name>
      <t>A significant amount of material in <xref target="overallusecases"/> and <xref target="req-sec"/> was taken from <xref target="I-D.draft-nandakumar-moq-scenarios"/>. We thank the authors of that draft, Suhas Nandakumar, Christian Huitema, and Cullen Jennings.</t>
      <t>The authors would like to thank several authors of individual drafts that fed into the "Media Over QUIC" charter process:</t>
      <ul spacing="compact">
        <li>Kirill Pugin, Alan Frindell, Jordi Cenzano, and Jake Weissman (<xref target="I-D.draft-kpugin-rush"/>),</li>
        <li>Luke Curley (<xref target="I-D.draft-lcurley-warp"/>), and</li>
        <li>Cullen Jennings and Suhas Nandakumar (<xref target="I-D.draft-jennings-moq-quicr-arch"/>), together with Christian Huitema (<xref target="I-D.draft-jennings-moq-quicr-proto"/>).</li>
      </ul>
      <t>We would also like to thank Suhas Nandakumar for his presentation, "Progressing MOQ" <xref target="Prog-MOQ"/>, at the October 2022 MOQ virtual interim meeting. We used his outline as a starting point for the Requirements section (<xref target="req-sec"/>).</t>
      <t>We would also like to thank Cullen Jennings for suggesting that we distinguish
between interactive and live streaming use cases based on the users' perception,
rather than quantitative measurements. In addition, we thank
Lucas Pardue, Alan Frindell, and Bernard Aboba for their reviews of the
document.</t>
      <t>James Gruessing would also like to thank Francesco Illy and Nicholas Book for
their part in providing the needed motivation.</t>
    </section>
  </back>
  <!-- ##markdown-source:
+RqtoVllJ5jYumXTBKZSbGeL6G0tnVGP+U/kbiXrpIqOWKgN5ipYPunv3v08
ttFq6Onq9Y4szcmmpTfMqkUb4lccfWxeJ6qaD8Vo/Pq4PN8huvl1GGVcPlgx
d6Lj+dWOGOBG77Z8sCMzvSl/4ZjBeI222qiS1yA3IrY6q8VkkpmT3ig8vx6x
hXUEZj+Y6FbPICVhtY6Ya/ZsYjLRnaJk9R/qDv7757slV7qvCdwvyIInabEe
l79AS+PygWt+S8aWgP8Lwh8hpPYeHYuvp53IXm8xyKQj3fz9+zGN/WRHzz7Y
dWui5ezJ9Yw/nBD33L5/f0M0w8kQNTzfEK/5QN/qo7xn8AZ2k6qbrXjMvl06
dj3w8T7YjQ8OxMeZRjrhFPeLeJtBvjsHAPJlzHDcJxdfkRh5rgKHy/qf/XJE
5IePJvQ7em1r3PnZrG+nHEe9e5eZ9pu6Y/8BM+96w7ekcyjplYZA+CKMXc+N
Ptlxwj5wTojnsIpZeVnLYeshfD05Hx9a6HB32Per/azM/L6QSzDo7x3ZfoVp
UvWgJ/vgZqvopDgowfCfgCPyfZaExSL1Gv9mV4GXVupyqFBgouWJj5OEsovL
l1Q82dG05XOS4Tt3QPycjA0G383Ls2k7rQyVLKS0uQeL98K4JWHwF+x2/rLb
6VZfis4vuDcp6QzlY07Bo9m+JonQrgmi+237mqsfZTZWVmu7GCXmALG5s2kJ
AVqD8n8BNKzfIre3AAA=

-->

</rfc>
