<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.13 (Ruby 3.1.2) -->
<?rfc tocindent="yes"?>
<?rfc strict="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-gruessing-moq-requirements-02" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.13.0 -->
  <front>
    <title abbrev="MoQ Use Cases and Requirements">Media Over QUIC - Use Cases and Requirements for Media Transport Protocol Design</title>
    <seriesInfo name="Internet-Draft" value="draft-gruessing-moq-requirements-02"/>
    <author initials="J." surname="Gruessing" fullname="James Gruessing">
      <organization>Nederlandse Publieke Omroep</organization>
      <address>
        <postal>
          <country>Netherlands</country>
        </postal>
        <email>james.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="S." surname="Dawkins" fullname="Spencer Dawkins">
      <organization>Tencent America LLC</organization>
      <address>
        <postal>
          <country>United States of America</country>
        </postal>
        <email>spencerdawkins.ietf@gmail.com</email>
      </address>
    </author>
    <date year="2022" month="July" day="11"/>
    <area>applications</area>
    <workgroup>MOQ Mailing List</workgroup>
    <keyword>Internet-Draft QUIC</keyword>
    <abstract>
      <t>This document describes use cases that have been discussed in the IETF community under the banner of "Media Over QUIC", provides analysis about those use cases, recommends a subset of use cases that cover live media ingest, syndication, and streaming for further exploration, and describes requirements that should guide the design of protocols to satisfy these use cases.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <t><em>RFC Editor: please remove this section before publication</em></t>
      <t>Source code and issues for this draft can be found at
<eref target="https://github.com/fiestajetsam/draft-gruessing-moq-requirements">https://github.com/fiestajetsam/draft-gruessing-moq-requirements</eref>.</t>
      <t>Discussion of this draft should take place on the IETF Media Over QUIC (MoQ)
mailing list, at <eref target="https://www.ietf.org/mailman/listinfo/moq">https://www.ietf.org/mailman/listinfo/moq</eref>.</t>
    </note>
  </front>
  <middle>
    <section anchor="intro">
      <name>Introduction</name>
      <t>This document describes use cases that have been discussed in the IETF community under the banner of "Media Over QUIC", provides analysis about those use cases, recommends a subset of use cases that cover live media ingest, syndication, and streaming for further exploration, and describes requirements that should guide the design of protocols to satisfy these use cases.</t>
      <section anchor="for-the-impatient-reader">
        <name>For The Impatient Reader</name>
        <ul spacing="compact">
          <li>Our proposal is to focus on live media use cases, as described in <xref target="propscope"/>, rather than on interactive media use cases or on-demand use cases.</li>
          <li>The reasoning behind this proposal can be found in <xref target="analy-interact"/>.</li>
          <li>The requirements for protocol work to satisfy the proposed use cases can be found in <xref target="req-sec"/>.</li>
        </ul>
        <t>Most of the rest of this document provides background for these sections.</t>
      </section>
      <section anchor="why-quic">
        <name>Why QUIC For Media?</name>
        <t>It is not the purpose of this document to argue against proposals for work on media applications that do not involve QUIC. Such proposals are simply out of scope for this document.</t>
        <t>When work on the QUIC protocol (<xref target="RFC9000"/>) was chartered (<xref target="QUIC-goals"/>), the key goals for QUIC were:</t>
        <ul spacing="compact">
          <li>Minimizing connection establishment and overall transport latency for applications, starting with HTTP,</li>
          <li>Providing multiplexing without head-of-line blocking,</li>
          <li>Requiring only changes to path endpoints to enable deployment,</li>
          <li>Enabling multipath and forward error correction extensions, and</li>
          <li>Providing always-secure transport, using TLS 1.3 by default.</li>
        </ul>
        <t>These goals were chosen with HTTP (<xref target="I-D.draft-ietf-quic-http"/>) in mind.</t>
        <t>While work on "QUIC version 1" (version codepoint 0x00000001) was underway, protocol designers considered potential advantages of the QUIC protocol for other applications. In addition to the key goals for HTTP applications, these advantages were immediately apparent for at least some media applications:</t>
        <ul spacing="compact">
          <li>QUIC endpoints can create bidirectional or unidirectional ordered byte streams.</li>
          <li>QUIC will automatically handle congestion control, packet loss, and reordering for stream data.</li>
          <li>QUIC streams allow multiple media streams to share congestion and flow control without otherwise blocking each other.</li>
          <li>QUIC streams also allow partial reliability, since either the sender or receiver can terminate the stream early without affecting the overall connection.</li>
          <li>With the DATAGRAM extension (<xref target="I-D.draft-ietf-quic-datagram"/>), further partially reliable models are possible, and applications can send congestion controlled datagrams below the MTU size.</li>
          <li>QUIC connections are established using an ALPN.</li>
          <li>QUIC endpoints can choose and change their connection ID.</li>
          <li>QUIC endpoints can migrate IP address without breaking the connection.</li>
          <li>Because QUIC is encapsulated in UDP, QUIC implementations can run in user space, rather than in kernel space, as TCP typically does. This allows more room for extensible APIs between application and transport, allowing more rapid implementation and deployment of new congestion control, retransmission, and prioritization mechanisms.</li>
          <li>QUIC is supported in browsers via HTTP/3 or WebTransport.</li>
          <li>With WebTransport, it is possible to write libraries or applications in JavaScript.</li>
        </ul>
        <t>The specific advantages of interest may vary from use case to use case, but these advantages justify further investigation of "Media Over QUIC".</t>
      </section>
    </section>
    <section anchor="term">
      <name>Terminology</name>
      <section anchor="moq-meaning">
        <name>The Many Meanings of "Media Over QUIC"</name>
        <t>Protocol developers have been considering the implications of the QUIC protocol (<xref target="RFC9000"/>) for media transport for several years, resulting in a large number of possible meanings of the term "Media Over QUIC", or "MOQ". As of this writing, "Media Over QUIC" has had at least these meanings:</t>
        <ul spacing="compact">
          <li>any kind of media carried directly over the QUIC protocol, as a QUIC payload</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as an RTP payload (<xref target="RFC3550"/>)</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as an HTTP/3 payload</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as a WebTransport payload</li>
          <li>the encapsulation of any Media Transport Protocol in a QUIC payload</li>
          <li>an IETF mailing list (<xref target="MOQ-ml"/>), which was requested "... for discussion of video ingest and distribution protocols that use QUIC as the underlying transport", although other Media Over QUIC proposals have also been discussed there.</li>
        </ul>
        <t>There may be IETF participants using other meanings as well.</t>
        <t>As of this writing, the second bullet ("any kind of media carried indirectly over the QUIC protocol, as an RTP payload"), seems to be in scope for the IETF AVTCORE working group, and was discussed at some length at the February 2022 AVTCORE working group meeting <xref target="AVTCORE-2022-02"/>, although no drafts in this space have yet been adopted by the AVTCORE working group.</t>
      </section>
      <section anchor="latent-cat">
        <name>Latency Requirement Categories</name>
        <t>The "Operational Considerations for Streaming Media" document (<xref target="I-D.draft-ietf-mops-streaming-opcons"/>) described a range of latencies of interest to streaming media providers:</t>
        <ul spacing="compact">
          <li>ultra low-latency (less than 1 second)</li>
          <li>low-latency live (less than 10 seconds)</li>
          <li>non-low-latency live (10 seconds to a few minutes)</li>
          <li>on-demand (hours or more)</li>
        </ul>
        <t>Because the IETF Media Over QUIC community now expresses interest in interactive media (<xref target="interact"/>) and live media (<xref target="lm-media"/>) use cases, some of these use cases will have latency requirements significantly lower than the "ultra low-latency" category defined for streaming media.</t>
        <t>Within this document, we are using</t>
        <ul spacing="compact">
          <li>Ull-50 (less than 50 ms)</li>
          <li>Ull-200 (less than 200 ms)</li>
        </ul>
        <t>Perhaps obviously, these two latency bands are shortened forms of "ultra-low latency - 50 ms" and "ultra-low latency - 200 ms".</t>
        <t>Perhaps less obviously, bikeshedding on better names and more useful values is welcomed.</t>
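        <t>Taken together with the streaming categories above, these bands amount to a simple classification. The following Python sketch is illustrative only; in particular, the boundary between "non-low-latency live" and "on-demand" is an assumption (here, five minutes), since the text above only says "a few minutes":</t>

```python
# Illustrative classifier for the latency bands used in this document.
# The 300-second boundary for "a few minutes" is an assumption.

def latency_band(latency_s: float) -> str:
    """Map a glass-to-glass latency, in seconds, to a latency category."""
    if latency_s >= 300:   # assumed boundary for "a few minutes"
        return "on-demand"
    if latency_s >= 10:
        return "non-low-latency live"
    if latency_s >= 1:
        return "low-latency live"
    if latency_s >= 0.2:
        return "ultra low-latency"
    if latency_s >= 0.05:
        return "Ull-200"
    return "Ull-50"

print(latency_band(0.03))  # Ull-50
print(latency_band(4))     # low-latency live
```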
      </section>
    </section>
    <section anchor="priorart">
      <name>Prior and Existing Specifications</name>
      <t>Several draft specifications have been proposed which either encapsulate
existing Media Transport Protocols in QUIC
(<xref target="I-D.draft-sharabayko-srt-over-quic"/>), make use of RTP, RTCP, and SDP
(<xref target="I-D.draft-engelbart-rtp-over-quic"/>), or define their own new Media Transport
Protocol on top of QUIC. Some have already seen deployment in the wild (e.g.
<xref target="I-D.draft-kpugin-rush"/>, <xref target="I-D.draft-lcurley-warp"/>), whereas the deployment status of others is
unconfirmed. Whilst most focus on defining a wire format,
<xref target="I-D.draft-jennings-moq-quicr-arch"/> defines an architecture using a pub/sub
model for both producers and consumers.</t>
      <section anchor="comparison-of-existing-specifications">
        <name>Comparison of Existing Specifications</name>
        <ul spacing="compact">
          <li>Some use QUIC Datagram frames, while others use QUIC streams.</li>
          <li>All drafts take differing approaches to flow/stream identification and management. Some address congestion control directly, while others defer this to QUIC.</li>
          <li>Some drafts specify ALPN identification, while others do not.</li>
        </ul>
      </section>
    </section>
    <section anchor="overallusecases">
      <name>Use Cases Informing This Proposal</name>
      <t>Our goal in this section is to understand the range of use cases that have been proposed for "Media Over QUIC".</t>
      <t>Although some of the use cases described in this section came out of "RTP over QUIC" proposals, they are worth considering in the broader "Media Over QUIC" context, and may be especially relevant to MOQ, depending on whether "RTP over QUIC" requires major changes to RTP and RTCP, in order to meet the requirements arising out of the corresponding use cases.</t>
      <t>An early draft in the "media over QUIC" space,
<xref target="I-D.draft-rtpfolks-quic-rtp-over-quic"/>, defined several key use cases. Some of the
following use cases have been inspired by that document, and others have come from discussions with the
wider MOQ community (among other places, a side meeting at IETF 112).</t>
      <t>For each use case in this section, we also define</t>
      <ul spacing="compact">
        <li>the number of senders or receivers in a given session transmitting distinct streams,</li>
        <li>whether a session has bi-directional flows of media between senders and receivers, and</li>
        <li>the expected lowest latency requirements using the definitions specified in <xref target="term"/>.</li>
      </ul>
      <t>It is likely that we should add other characteristics, as we come to understand them.</t>
      <section anchor="interact">
        <name>Interactive Media</name>
        <t>The use cases described in this section have one particular attribute in common - the target latency for these cases is on the order of one or two RTTs. In order to meet those targets, it is not possible to rely on protocol mechanisms that require multiple RTTs to function effectively. For example,</t>
        <ul spacing="compact">
          <li>When the target latency is on the order of one RTT, it makes sense to use FEC <xref target="RFC6363"/> and codec-level packet loss concealment <xref target="RFC6716"/>, rather than selectively retransmitting only lost packets. These mechanisms use more bytes, but do not require multiple RTTs in order to recover from packet loss.</li>
          <li>When the target latency is on the order of one RTT, it is impossible to use congestion control schemes like BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/>, since BBR has probing mechanisms that rely on temporarily inducing delay and amortizing the consequences of that over multiple RTTs.</li>
        </ul>
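        <t>As a rough illustration of this tradeoff, the following Python sketch (illustrative only; the RTT and FEC block values are assumptions, not measurements) compares the minimum loss-recovery latency of selective retransmission against FEC:</t>

```python
# Illustrative comparison of loss-recovery latency. All numbers are
# assumptions chosen to illustrate the point, not measurements.

def retransmission_recovery_ms(rtt_ms: float) -> float:
    """Selective retransmission needs at least one extra RTT after loss detection."""
    return rtt_ms

def fec_recovery_ms(fec_block_ms: float) -> float:
    """FEC recovers within the protected block, with no extra round trip."""
    return fec_block_ms

rtt_ms = 50.0        # assumed network round-trip time
fec_block_ms = 20.0  # assumed FEC block duration (extra bytes, no extra RTT)

# With a Ull-50 target, one retransmission RTT alone consumes the budget,
# while FEC recovery fits within it at the cost of redundant bytes.
print(retransmission_recovery_ms(rtt_ms))  # 50.0
print(fec_recovery_ms(fec_block_ms))       # 20.0
```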
        <t>This may help to explain why these use cases often rely on protocols such as RTP <xref target="RFC3550"/>, which provide low-level control of packetization and transmission.</t>
        <section anchor="gaming">
          <name>Gaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent by the client. This may also
include the client receiving other types of signalling, such as triggers for
haptic feedback, and may carry media from the client, such as microphone
audio for in-game chat with other players.</t>
        </section>
        <section anchor="remdesk">
          <name>Remote Desktop</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent by the client. Latency
requirements for this use case are marginally different from the gaming use
case. This may also include signalling and/or transmission of files or devices
connected to the user's computer.</t>
        </section>
        <section anchor="vidconf">
          <name>Video Conferencing/Telephony</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">Many to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50 to Ull-200</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is both sent and received. This may include audio from
microphones or other inputs, and may include "screen sharing" or inclusion of
other content such as slides, documents, or video presentations. This may be done
as client/server, or peer-to-peer with a many-to-many relationship of both
senders and receivers. The latency target may be as large as Ull-200 for
some media types such as audio, but other media types in this use case have much
more stringent latency targets.</t>
        </section>
      </section>
      <section anchor="hybrid-interactive-and-live-media">
        <name>Hybrid Interactive and Live Media</name>
        <t>For the video conferencing/telephony use case, there can be additional scenarios
where the audience greatly outnumbers the concurrent active participants, but
any member of the audience could participate. Because this has a much larger total
number of participants - as many as Live Media Streaming (<xref target="lmstream"/>) - but
retains the bi-directionality of conferencing, it should be considered a "hybrid".</t>
      </section>
      <section anchor="lm-media">
        <name>Live Media</name>
        <t>The use cases in this section, unlike the use cases described in <xref target="interact"/>, still have "humans in the loop", but these humans expect media to be "responsive", where the responsiveness is more on the order of 5 to 10 RTTs. This allows the use of protocol mechanisms that require more than one or two RTTs - as noted in <xref target="interact"/>, end-to-end recovery from packet loss and congestion avoidance are two such protocol mechanisms that can be used with Live Media.</t>
        <t>To illustrate the difference, the responsiveness expected with videoconferencing is much greater than watching a video, even if the video is being produced "live" and sent to a platform for syndication and distribution.</t>
        <section anchor="lmingest">
          <name>Live Media Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a source for onward handling into a distribution
platform. The media may comprise multiple audio and/or video sources.
Bitrates may either be static or set dynamically by signalling of connection
information (bandwidth, latency) based on data sent by the receiver.</t>
        </section>
        <section anchor="lmsynd">
          <name>Live Media Syndication</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is sent onwards to another platform for further distribution. The
media may be compressed down to a bitrate lower than the source, but higher than the
final distribution output. Streams may be redundant, with failover mechanisms in
place.</t>
        </section>
        <section anchor="lmstream">
          <name>Live Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a live broadcast or stream. This may comprise
multiple audio or video outputs with different codecs or bitrates. It may also
include other types of media essence, such as subtitles or timing and signalling
information (e.g. markers to indicate a change of behaviour in the client, such as
advertisement breaks). This use case may also include "live rewind", where a
window of media behind the live edge is made available for clients to play back,
either because the local player falls behind the live edge or because the viewer
wishes to play back from a point in the past.</t>
        </section>
      </section>
      <section anchor="od-media">
        <name>On-Demand Media</name>
        <t>Finally, the "On-Demand" use cases described in this section do not have a tight linkage between ingest and streaming, allowing significant transcoding, processing, insertion of video clips in a news article, etc. The latency constraints for the use cases in this section may be dominated by the time required for whatever actions are required before media are available for streaming.</t>
        <section anchor="od-ingest">
          <name>On-Demand Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">On Demand</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is ingested and processed for a system to later serve it to clients
as on-demand media. The media may be provided from a pre-recorded source, or captured from live output; in either case, it is not immediately passed to viewers, but is stored for "on-demand" retrieval, and may be transcoded upon ingest.</t>
        </section>
        <section anchor="od-stream">
          <name>On-Demand Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">On Demand</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a non-live, typically pre-recorded source. This may
feature the additional outputs, bitrates, codecs, and media types described in the
live media streaming use case.</t>
        </section>
      </section>
    </section>
    <section anchor="propscope">
      <name>Proposed Scope for "Media Over QUIC"</name>
      <t>Our proposal is that "Media Over QUIC" discussions focus first on the use cases described in <xref target="lm-media"/>, which are Live Media Ingest (<xref target="lmingest"/>),
Syndication (<xref target="lmsynd"/>), and Streaming (<xref target="lmstream"/>). Our reasoning for this suggestion follows.</t>
      <t>Each of the use cases in <xref target="overallusecases"/> fits into one of three classifications of solutions.</t>
      <section anchor="analy-interact">
        <name>Analysis for Interactive Use Cases</name>
        <t>The first group, Interactive Media, as described in <xref target="interact"/>, covering gaming (<xref target="gaming"/>), remote desktop (<xref target="remdesk"/>), and general video conferencing (<xref target="vidconf"/>), is
largely covered today by RTP, often in conjunction with WebRTC <xref target="WebRTC"/>, and related protocols.</t>
        <t>Whilst these use cases might benefit from a QUIC-based protocol, given the size
of existing deployments it may be more appropriate to extend the RTP
protocols and specifications.</t>
      </section>
      <section anchor="analy-lm">
        <name>Analysis for Live Media Use Cases</name>
        <t>The second group, in <xref target="lm-media"/>, covering Live Media Ingest (<xref target="lmingest"/>),
Live Media Syndication (<xref target="lmsynd"/>), and Live Media Streaming (<xref target="lmstream"/>), contains the use cases likely to benefit most from
this work.</t>
        <t>Existing ingest and streaming protocols such as HLS <xref target="RFC8216"/> and DASH <xref target="DASH"/>
are reaching the limits of how far they can reduce latency in live streaming,
and in scenarios where low-bitrate audio streams are used, these protocols add a significant
amount of overhead compared to the media bitstream itself.</t>
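        <t>As a rough illustration of the overhead claim for low-bitrate audio, the following Python sketch computes per-segment delivery overhead relative to the audio payload; all of the numbers (bitrate, segment duration, header and container byte counts) are assumptions chosen for illustration, not measurements of any deployed system:</t>

```python
# Illustrative estimate of per-segment delivery overhead for low-bitrate
# audio over a segmented protocol such as HLS or DASH. All values below
# are assumptions for illustration only.

def overhead_ratio(audio_kbps: float, segment_s: float,
                   http_bytes: int, container_bytes: int) -> float:
    """Overhead bytes per segment divided by audio payload bytes per segment."""
    payload_bytes = audio_kbps * 1000 / 8 * segment_s
    return (http_bytes + container_bytes) / payload_bytes

# Assumed: 32 kbit/s audio, 2 s segments (short, for lower latency),
# ~700 bytes of HTTP request/response headers, ~800 bytes of container
# and packaging overhead per segment.
ratio = overhead_ratio(audio_kbps=32, segment_s=2,
                       http_bytes=700, container_bytes=800)
print(f"{ratio:.1%}")  # overhead as a fraction of the audio payload
```

        <t>Under these assumed numbers, the fixed per-segment costs approach a fifth of the audio payload itself; at typical video bitrates, the same fixed costs become negligible.</t>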
        <t>For this reason, we suggest that work on "Media Over QUIC" protocols target these use cases at this time.</t>
      </section>
      <section anchor="analy-od">
        <name>Analysis for On-Demand Use Cases</name>
        <t>The third group, <xref target="od-media"/>, covering On-Demand Media Ingest (<xref target="od-ingest"/>) and On-Demand Media Streaming (<xref target="od-stream"/>), is unlikely to benefit from work in
this space. Without the "Live Media" latency requirements that would motivate deployment of new protocols, existing protocols such as HLS and
DASH are probably "good enough" to meet the needs of these use cases.</t>
        <t>This does not mean that existing protocols in this space are perfect. Segmented protocols such as HLS and DASH were developed to overcome the deficiencies of TCP, as used in HTTP/1.1 <xref target="RFC7230"/> and HTTP/2 <xref target="RFC7540"/>, and do not make full use of the possible congestion window along the path from sender to receiver. Other protocols in this space have their own deficiencies. For example, RTSP <xref target="RFC7826"/> does not have easy ways to add support for new media codecs.</t>
        <t>Our expectation is that these use cases will not drive work in the "Media Over QUIC" space, but as new protocols come into being, they may very well be taken up for these use cases as well.</t>
      </section>
    </section>
    <section anchor="req-sec">
      <name>Requirements for Protocol Work</name>
      <t>TODO: Quite a lot, really ...</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no requests of IANA.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>As this document is intended to guide discussion and consensus, it introduces
no security considerations of its own.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>Informative References</name>
      <reference anchor="RFC3550">
        <front>
          <title>RTP: A Transport Protocol for Real-Time Applications</title>
          <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne">
            <organization/>
          </author>
          <author fullname="S. Casner" initials="S." surname="Casner">
            <organization/>
          </author>
          <author fullname="R. Frederick" initials="R." surname="Frederick">
            <organization/>
          </author>
          <author fullname="V. Jacobson" initials="V." surname="Jacobson">
            <organization/>
          </author>
          <date month="July" year="2003"/>
          <abstract>
            <t>This memorandum describes RTP, the real-time transport protocol.  RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services.  RTP does not address resource reservation and does not guarantee quality-of- service for real-time services.  The data transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality.  RTP and RTCP are designed to be independent of the underlying transport and network layers.  The protocol supports the use of RTP-level translators and mixers. Most of the text in this memorandum is identical to RFC 1889 which it obsoletes.  There are no changes in the packet formats on the wire, only changes to the rules and algorithms governing how the protocol is used. The biggest change is an enhancement to the scalable timer algorithm for calculating when to send RTCP packets in order to minimize transmission in excess of the intended rate when many participants join a session simultaneously.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="STD" value="64"/>
        <seriesInfo name="RFC" value="3550"/>
        <seriesInfo name="DOI" value="10.17487/RFC3550"/>
      </reference>
      <reference anchor="RFC6363">
        <front>
          <title>Forward Error Correction (FEC) Framework</title>
          <author fullname="M. Watson" initials="M." surname="Watson">
            <organization/>
          </author>
          <author fullname="A. Begen" initials="A." surname="Begen">
            <organization/>
          </author>
          <author fullname="V. Roca" initials="V." surname="Roca">
            <organization/>
          </author>
          <date month="October" year="2011"/>
          <abstract>
            <t>This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss.  The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media.  This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows.  Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol.   [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6363"/>
        <seriesInfo name="DOI" value="10.17487/RFC6363"/>
      </reference>
      <reference anchor="RFC6716">
        <front>
          <title>Definition of the Opus Audio Codec</title>
          <author fullname="JM. Valin" initials="JM." surname="Valin">
            <organization/>
          </author>
          <author fullname="K. Vos" initials="K." surname="Vos">
            <organization/>
          </author>
          <author fullname="T. Terriberry" initials="T." surname="Terriberry">
            <organization/>
          </author>
          <date month="September" year="2012"/>
          <abstract>
            <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances.  It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s.  Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6716"/>
        <seriesInfo name="DOI" value="10.17487/RFC6716"/>
      </reference>
      <reference anchor="RFC7230">
        <front>
          <title>Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing</title>
          <author fullname="R. Fielding" initials="R." role="editor" surname="Fielding">
            <organization/>
          </author>
          <author fullname="J. Reschke" initials="J." role="editor" surname="Reschke">
            <organization/>
          </author>
          <date month="June" year="2014"/>
          <abstract>
            <t>The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems.  This document provides an overview of HTTP architecture and its associated terminology, defines the "http" and "https" Uniform Resource Identifier (URI) schemes, defines the HTTP/1.1 message syntax and parsing requirements, and describes related security concerns for implementations.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7230"/>
        <seriesInfo name="DOI" value="10.17487/RFC7230"/>
      </reference>
      <reference anchor="RFC7540">
        <front>
          <title>Hypertext Transfer Protocol Version 2 (HTTP/2)</title>
          <author fullname="M. Belshe" initials="M." surname="Belshe">
            <organization/>
          </author>
          <author fullname="R. Peon" initials="R." surname="Peon">
            <organization/>
          </author>
          <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson">
            <organization/>
          </author>
          <date month="May" year="2015"/>
          <abstract>
            <t>This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP), referred to as HTTP version 2 (HTTP/2).  HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection.  It also introduces unsolicited push of representations from servers to clients.</t>
            <t>This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax.  HTTP's existing semantics remain unchanged.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7540"/>
        <seriesInfo name="DOI" value="10.17487/RFC7540"/>
      </reference>
      <reference anchor="RFC7826">
        <front>
          <title>Real-Time Streaming Protocol Version 2.0</title>
          <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne">
            <organization/>
          </author>
          <author fullname="A. Rao" initials="A." surname="Rao">
            <organization/>
          </author>
          <author fullname="R. Lanphier" initials="R." surname="Lanphier">
            <organization/>
          </author>
          <author fullname="M. Westerlund" initials="M." surname="Westerlund">
            <organization/>
          </author>
          <author fullname="M. Stiemerling" initials="M." role="editor" surname="Stiemerling">
            <organization/>
          </author>
          <date month="December" year="2016"/>
          <abstract>
            <t>This memorandum defines the Real-Time Streaming Protocol (RTSP) version 2.0, which obsoletes RTSP version 1.0 defined in RFC 2326.</t>
            <t>RTSP is an application-layer protocol for the setup and control of the delivery of data with real-time properties.  RTSP provides an extensible framework to enable controlled, on-demand delivery of real-time data, such as audio and video.  Sources of data can include both live data feeds and stored clips.  This protocol is intended to control multiple data delivery sessions; provide a means for choosing delivery channels such as UDP, multicast UDP, and TCP; and provide a means for choosing delivery mechanisms based upon RTP (RFC 3550).</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7826"/>
        <seriesInfo name="DOI" value="10.17487/RFC7826"/>
      </reference>
      <reference anchor="RFC8216">
        <front>
          <title>HTTP Live Streaming</title>
          <author fullname="R. Pantos" initials="R." role="editor" surname="Pantos">
            <organization/>
          </author>
          <author fullname="W. May" initials="W." surname="May">
            <organization/>
          </author>
          <date month="August" year="2017"/>
          <abstract>
            <t>This document describes a protocol for transferring unbounded streams of multimedia data.  It specifies the data format of the files and the actions to be taken by the server (sender) and the clients (receivers) of the streams.  It describes version 7 of this protocol.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8216"/>
        <seriesInfo name="DOI" value="10.17487/RFC8216"/>
      </reference>
      <reference anchor="RFC9000">
        <front>
          <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
          <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar">
            <organization/>
          </author>
          <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson">
            <organization/>
          </author>
          <date month="May" year="2021"/>
          <abstract>
            <t>This document defines the core of the QUIC transport protocol.  QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances.  Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9000"/>
        <seriesInfo name="DOI" value="10.17487/RFC9000"/>
      </reference>
      <reference anchor="I-D.draft-cardwell-iccrg-bbr-congestion-control">
        <front>
          <title>BBR Congestion Control</title>
          <author fullname="Neal Cardwell">
            <organization>Google</organization>
          </author>
          <author fullname="Yuchung Cheng">
            <organization>Google</organization>
          </author>
          <author fullname="Soheil Hassas Yeganeh">
            <organization>Google</organization>
          </author>
          <author fullname="Ian Swett">
            <organization>Google</organization>
          </author>
          <author fullname="Van Jacobson">
            <organization>Google</organization>
          </author>
          <date day="7" month="March" year="2022"/>
          <abstract>
            <t>This document specifies the BBR congestion control algorithm.  BBR ("Bottleneck Bandwidth and Round-trip propagation time") uses recent measurements of a transport connection's delivery rate, round-trip time, and packet loss rate to build an explicit model of the network path.  BBR then uses this model to control both how fast it sends data and the maximum volume of data it allows in flight in the network at any time.  Relative to loss-based congestion control algorithms such as Reno [RFC5681] or CUBIC [RFC8312], BBR offers substantially higher throughput for bottlenecks with shallow buffers or random losses, and substantially lower queueing delays for bottlenecks with deep buffers (avoiding "bufferbloat").  BBR can be implemented in any transport protocol that supports packet-delivery acknowledgment.  Thus far, open source implementations are available for TCP [RFC793] and QUIC [RFC9000].  This document specifies version 2 of the BBR algorithm, also sometimes referred to as BBRv2 or bbr2.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-cardwell-iccrg-bbr-congestion-control-02"/>
      </reference>
      <reference anchor="I-D.draft-kpugin-rush">
        <front>
          <title>RUSH - Reliable (unreliable) streaming protocol</title>
          <author fullname="Kirill Pugin">
            <organization>Facebook</organization>
          </author>
          <author fullname="Alan Frindell">
            <organization>Facebook</organization>
          </author>
          <author fullname="Jordi Cenzano">
            <organization>Facebook</organization>
          </author>
          <author fullname="Jake Weissman">
            <organization>Facebook</organization>
          </author>
          <date day="7" month="March" year="2022"/>
          <abstract>
            <t>RUSH is an application-level protocol for ingesting live video.  This document describes the protocol and how it maps onto QUIC.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-kpugin-rush-01"/>
      </reference>
      <reference anchor="I-D.draft-lcurley-warp">
        <front>
          <title>Warp - Segmented Live Media Transport</title>
          <author fullname="Luke Curley">
            <organization>Twitch</organization>
          </author>
          <date day="9" month="July" year="2022"/>
          <abstract>
            <t>This document defines the core behavior for Warp, a segmented live media transport protocol.  Warp maps live media to QUIC streams based on the underlying media encoding.  Media is prioritized to reduce latency when encountering congestion.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-lcurley-warp-01"/>
      </reference>
      <reference anchor="I-D.draft-sharabayko-srt-over-quic">
        <front>
          <title>Tunnelling SRT over QUIC</title>
          <author fullname="Maxim Sharabayko">
            <organization>Haivision Network Video GmbH</organization>
          </author>
          <author fullname="Maria Sharabayko">
            <organization>Haivision Network Video GmbH</organization>
          </author>
          <date day="28" month="July" year="2021"/>
          <abstract>
            <t>This document presents an approach to tunnelling SRT live streams over QUIC datagrams.</t>
            <t>QUIC [RFC9000] is a UDP-based transport protocol providing TLS encryption, stream multiplexing, and connection migration.  It was designed to become a faster alternative to the TCP protocol [RFC7323].</t>
            <t>An Unreliable Datagram Extension to QUIC [QUIC-DATAGRAM] adds support for sending and receiving unreliable datagrams over a QUIC connection, but transfers the responsibility for multiplexing different kinds of datagrams, or flows of datagrams, to an application protocol.</t>
            <t>SRT [SRTRFC] is a UDP-based transport protocol.  Essentially, it can operate over any unreliable datagram transport.  SRT provides loss recovery and stream multiplexing mechanisms.  In its live streaming configuration SRT provides an end-to-end latency-aware mechanism for packet loss recovery.  If SRT fails to recover a packet loss within a specified latency, then the packet is dropped to avoid blocking playback of further packets.</t>
            <t>The Datagram Extension to QUIC could be used as an underlying transport instead of UDP.  This way QUIC would provide TLS-level security, connection migration, and potentially multi-path support.  It would be easier for existing network facilities to process, route, and load balance the unified QUIC traffic.  SRT on its side would provide end-to-end latency tracking and latency-aware loss recovery.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-sharabayko-srt-over-quic-00"/>
      </reference>
      <reference anchor="I-D.draft-engelbart-rtp-over-quic">
        <front>
          <title>RTP over QUIC</title>
          <author fullname="Jörg Ott">
            <organization>Technical University Munich</organization>
          </author>
          <author fullname="Mathis Engelbart">
            <organization>Technical University Munich</organization>
          </author>
          <date day="24" month="June" year="2022"/>
          <abstract>
            <t>This document specifies a minimal mapping for encapsulating RTP and RTCP packets within QUIC.  It also discusses how to leverage state from the QUIC implementation in the endpoints to reduce the exchange of RTCP packets and how to implement congestion control.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-engelbart-rtp-over-quic-04"/>
      </reference>
      <reference anchor="I-D.draft-jennings-moq-quicr-arch">
        <front>
          <title>QuicR - Media Delivery Protocol over QUIC</title>
          <author fullname="Cullen Jennings">
            <organization>cisco</organization>
          </author>
          <author fullname="Suhas Nandakumar">
            <organization>Cisco</organization>
          </author>
          <date day="11" month="July" year="2022"/>
          <abstract>
            <t>This specification outlines the design for a media delivery protocol over QUIC.  It aims at supporting multiple application classes with varying latency requirements including ultra low latency applications such as interactive communication and gaming.  It is based on a publish/subscribe metaphor where entities publish and subscribe to data that is sent through, and received from, relays in the cloud.  The information subscribed to is named such that this forms an overlay information centric network.  The relays allow for efficient large scale deployments.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-arch-01"/>
      </reference>
      <reference anchor="I-D.draft-rtpfolks-quic-rtp-over-quic">
        <front>
          <title>RTP over QUIC</title>
          <author fullname="Jörg Ott">
            <organization>Technische Universitaet Muenchen</organization>
          </author>
          <author fullname="Roni Even">
            <organization>Huawei</organization>
          </author>
          <author fullname="Colin Perkins">
            <organization>University of Glasgow</organization>
          </author>
          <author fullname="Varun Singh">
            <organization>CALLSTATS I/O Oy</organization>
          </author>
          <date day="1" month="September" year="2017"/>
          <abstract>
            <t>QUIC is a UDP-based protocol for congestion controlled reliable data transfer, while RTP serves to carry (conversational) real-time media over UDP.  This draft discusses design aspects and issues of carrying RTP over QUIC.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-rtpfolks-quic-rtp-over-quic-01"/>
      </reference>
      <reference anchor="I-D.draft-ietf-mops-streaming-opcons">
        <front>
          <title>Operational Considerations for Streaming Media</title>
          <author fullname="Jake Holland">
            <organization>Akamai Technologies, Inc.</organization>
          </author>
          <author fullname="Ali Begen">
            <organization>Networked Media</organization>
          </author>
          <author fullname="Spencer Dawkins">
            <organization>Tencent America LLC</organization>
          </author>
          <date day="21" month="April" year="2022"/>
          <abstract>
            <t>This document provides an overview of operational networking issues that pertain to quality of experience when streaming video and other high-bitrate media over the Internet.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-mops-streaming-opcons-10"/>
      </reference>
      <reference anchor="I-D.draft-ietf-quic-datagram">
        <front>
          <title>An Unreliable Datagram Extension to QUIC</title>
          <author fullname="Tommy Pauly">
            <organization>Apple Inc.</organization>
          </author>
          <author fullname="Eric Kinnear">
            <organization>Apple Inc.</organization>
          </author>
          <author fullname="David Schinazi">
            <organization>Google LLC</organization>
          </author>
          <date day="4" month="February" year="2022"/>
          <abstract>
            <t>This document defines an extension to the QUIC transport protocol to add support for sending and receiving unreliable datagrams over a QUIC connection.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-quic-datagram-10"/>
      </reference>
      <reference anchor="I-D.draft-ietf-quic-http">
        <front>
          <title>HTTP/3</title>
          <author fullname="Mike Bishop">
            <organization>Akamai</organization>
          </author>
          <date day="2" month="February" year="2021"/>
          <abstract>
            <t>The QUIC transport protocol has several features that are desirable in a transport for HTTP, such as stream multiplexing, per-stream flow control, and low-latency connection establishment.  This document describes a mapping of HTTP semantics over QUIC.  This document also identifies HTTP/2 features that are subsumed by QUIC and describes how HTTP/2 extensions can be ported to HTTP/3.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-quic-http-34"/>
      </reference>
      <reference anchor="I-D.draft-ietf-webtrans-overview">
        <front>
          <title>The WebTransport Protocol Framework</title>
          <author fullname="Victor Vasiliev">
            <organization>Google</organization>
          </author>
          <date day="7" month="March" year="2022"/>
          <abstract>
            <t>The WebTransport Protocol Framework enables clients constrained by the Web security model to communicate with a remote server using a secure multiplexed transport.  It consists of a set of individual protocols that are safe to expose to untrusted applications, combined with a model that allows them to be used interchangeably.</t>
            <t>This document defines the overall requirements on the protocols used in WebTransport, as well as the common features of the protocols, support for some of which may be optional.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-webtrans-overview-03"/>
      </reference>
      <reference anchor="AVTCORE-2022-02" target="https://datatracker.ietf.org/meeting/interim-2022-avtcore-01/session/avtcore">
        <front>
          <title>AVTCORE 2022-02 interim meeting materials</title>
          <author>
            <organization/>
          </author>
          <date year="2022" month="February"/>
        </front>
      </reference>
      <reference anchor="MOQ-ml" target="https://www.ietf.org/mailman/listinfo/moq">
        <front>
          <title>Moq -- Media over QUIC</title>
          <author>
            <organization/>
          </author>
          <date/>
        </front>
      </reference>
      <reference anchor="DASH" target="https://www.iso.org/standard/79329.html">
        <front>
          <title>ISO/IEC 23009-1:2019: Dynamic adaptive streaming over HTTP (DASH) -- Part 1: Media presentation description and segment formats (2nd edition)</title>
          <author>
            <organization/>
          </author>
          <date/>
        </front>
      </reference>
      <reference anchor="WebRTC" target="https://www.w3.org/groups/wg/webrtc">
        <front>
          <title>Web Real-Time Communications Working Group</title>
          <author>
            <organization/>
          </author>
          <date/>
        </front>
      </reference>
      <reference anchor="QUIC-goals" target="https://datatracker.ietf.org/doc/charter-ietf-quic/01/">
        <front>
          <title>Initial Charter for QUIC Working Group</title>
          <author>
            <organization/>
          </author>
          <date year="2016" month="October"/>
        </front>
      </reference>
    </references>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
      <t>The authors would like to thank the many authors of the specifications
referenced in <xref target="priorart"/> for their work. The authors would also like to thank
Alan Frindell, Luke Curley, and Maxim Sharabayko for text contributions to this
draft.</t>
      <t>James Gruessing would also like to thank Francesco Illy and Nicholas Book for
their part in providing the needed motivation.</t>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA+1ca3PbRpb9jl/RRX9Y20tSkj1OJtqp2VUkO9GUH4qlTGpr
a2sKBJokIhBg0IBoRtF/33Pu7caDpDyZqeyzNlWxaaLRj/s899GcTCZRndW5
PTXvbJrF5sOdrcx331+em4n53llzHjvrTFyk5qP9qckqu7JF7cy8rPwLN1Vc
uHVZ1eaqKusyKXNzYV22KKJ4NqvsHSYuv/vMVFFaJkW8wgbSKp7Xk0XVWOey
YjFZlT9Nqt7IyfGLKIlruyir7anJinkZubqy8erUXL6+eROleHYaRdm6OjV1
1bj6xfHxV3gnxphTE6/XeYbXs7Jw0aasbhdV2ayxuw/fmXdxlmNF8zZzdXRr
t3icYtKitlVh68kFNyZEwYLY/1/ivCyw4a110To7Nf+GY48N/siKFBsdGwdy
VHbu8Gm78h/qKkvwKClX65gfXDNrP+ODnHCMU2Ej9t+jKG7qZVmdRsZM8L/B
A3dq/jQ13wTyyLdKuD/hT7fzpKwWcZH9LOc9Ne9taqscWwcbrppZntlbaz6s
qtKuZXRSNkVNqr639dKPlAd2BdKcmh+5wjSz9fxfFvxmih0Pt3Y9NRfx5haf
exu7XtsigTz1nww3dsMBRW3OVhb0ic3bt+fDDX1fZLVNzXUN5jpTzsPI/u6c
LpPqKrvbjCgp1QoL3lE8jPn45vzlq1fHp/rxi5dfvAwfvzz5wn/88sXLMODL
V79rP/7+RRjw+xft2K+Oj49l4svJxVSFOImrdGPzfJIlSbWYQBEmSVksrOOp
+bGuyvx08MrtullkxQRyuxw+yJOmyu12somr9fCJW8ZVPIu3t+XEVfWkhOpO
oC7JcJTFuvksxoCqXj826EdbFBAdJ0rHx9UkrpKdneD9eZnfOhnwudnIAcy0
dhNVUGpzucax3YFxMhl0N15UUOVHni/ren3g2cbOatof2chdZjfCh7M/35x/
+Ph68uL4xQsYjVORlTquFrY+NZzJnR4dcUW8m9zaSiRmCsE8WllbY7NHGVU/
W+kM8V2dlJWdHJ8cOWpYWRz5r3RiNZ8jv6rxqxo/h/FzGkgg/h3nbiSvibUy
b+ysauJqK2/he1ijySo/vOPNZtPbKcR7FRdHOUwWBfwIjBts5135k5lMvJEu
g1Xn2hdn199+ZgVXygJi6SDGR19+9fLFV9NlvcoH819efzi6fH1uoCjHX01O
Tl8cn3x1ai620PwsMXEar6lwphUA3cO3NzdX5il38Iy7u4JYmpPge9aVdTAG
YhpMal1SZWv5TJ/h7II20qguO/P0Bb7EaxzwjMf6wc4+3pw/frDNSzmXmH13
tFkcQXiqOhkcCnPAOcX55CZbwWHBLDdFcBrmBzgNHuQbzsAVSdDJogRL/wYR
g7s7SqC4EIZOvo8gW0PqwuxBVsy5jhR/K155bxNBkj4kdTnDyN+NIUsnX0TR
BPSNZ44bqKPoZpk5g7UbIaISdwaL2sAjJOKY62Vcm2UMns2sBf0zlzTOwfZm
BZ5Z8bHiqUCSemsa+LpKHsziosBH2ObRDoYYjcHT8i5Lxe/H+dZhE/GsbGq8
WGLldvWxqax6wRQj6BydrTnlzv4SkaKckrWStTKxqnS0Reo5NVZ5aQWPtJs3
FR2bsZ/WeVn1hnWU6EMNXcwtyyZPzaLBAeSkqQAbbmvtsQ5GlsZhPjffckj/
TFPlQVHW9i/v+Udd/gXCBbK5KHoOx2FeQ3zh5M06t3gBO1jheJgGVHI2EdGf
WWzfmjWdth7veRRdl02VYJUS++IhMufg++Wg8rJYSGyCr+NbsMrEdfSHIJeL
rF42MzrHo3kG6sU/2trFq6O/BsD+iBNdqFhwayBDbzVPrDoGsljnMbZX9uRm
F1w+BSh8Fq087qIRAztq84dfbev+6Km7ytI0t1H0hHCtKtNGyXb/JOM/H/5f
8P97BP/JE/MGq9+QfAC5dUbiq/CDb+ZDU3Gmdelg4zKZaw4WOcpM74w9KsWu
3bEw5v6e77ukXNuHBxAxllNi9wXnEN8Lu3dgKgBQjJikwI6gQm/PE9ktaOdK
QiEIwxJ4XkW83etAp2Qbwt5JWPDhoZtoJ1wKlDMMPnao5xewvQ0dWApTTmAX
uEb0rnS1aiCXCp/7ot4K4AweiF4P06iFILO8ffG8+mG5Vb18E+K6f4YKbZZb
8U7QosuabIIp0902FXe7vyZOBR/YwCotYkDxuiWckkBODvYoS/oRmcpdWsoS
WXFX5uAcdzQ1102y7M2DaM64bLXOt4bqhC2IEPSMn98MTvbDEvocFuXG5Ywt
I57e33v4/vDwzGwgYt4xgw941rl3PB7L+4gNjXzTOeQNhp9Spt9lRbbKfqbo
AOkW3n7TvMJ0u6XQhyJHRY7z3NRt4JzDgRfJVubsE4VRI/bDGTcw2QKexljp
SjgrmLLJ6wze41MYQ5IsoWaTcj5hJGlmeZkQMPA9DbsFixUgHw5LG0KmQUOX
BiZoXWZiBEr8A9um4sNsbLl3TvCaX3br8qVYpQrBCdBYVeEEAMZVOPwnHMzp
UTBwsPU438RbR3luwNGWGGNoAB/fvL02J9OXZrbFHuYx1pvSllN0lQGkO44A
MSw66pBtj4UP5DHUCJYxFdHIcLwgGyNhJRgjju1kZJ6Gz/SxQhVz/OlY/ztR
WRE/gDOMO4FSM4k3KQEOykdJWsP1FwLm4vQuBrxdaCC7L4/kfyl2rC8FU3g2
vKpIl6zZF0Q5+lByVMt7Kwq9spWoXm3BfoyHLimipu8lBoHpL1f2gH6KhMtu
OymhgUpgL2tIGXjqmY5zYj64yuE3SovZtg4hgVhc1aAM2hA3dckYPYFqbOGN
C3h108XMxsfMIDbxNHZbOhUqWD+ZPTg8nZ2gOG5X8CtC5vJy0yqNP2Z4SIu8
pHXprSrSzXf88q2OCZs2mesUzNgYdkq+P7CuK/3iayo0KFLZPItngD81BAgS
D7xkM+/DaJ0FZOA4oKHN6PpJbZgmiC8JLoP0pDauQLGwsXg+J9WxHw4JtqYz
SNzbD1QXPr44uzn75uPZu05RH1OgEJ+LJQyYwp8Fq+tpSFKoizfSsNcuw3fK
pYGx51l4xAMMziElYTG4Lkuacavvbr4HmX62LW27I+lyraEVJyoWpjBnb6/e
Tx+R3GVJF8a9qSHkMlnVt92XF4+8u8qwPXDh8oqKCffrWvrPwJPbQP4h2b+2
SUz3LhPCUcHmx2vX0PqLg//+4mrsH8K9CXLo0atqCGuIDyDjUAI7BD14dMts
ZR4ewkLdnF+Zerv2OpWWQDlG4LCIogOzQLeqLFeiOF4GyMWzq0vSvt4QD/c4
J9TqmWqZR/yBzBSvs3Rn7x5qBidCu1fYzUHFrqzMvMokulCpWVdZWcHwaboQ
CktWZa5nPBgqNWtuR4k4q3AyWuA7qDbt4tFLqhEC+zZR3apA/8uxyQTiBKml
OdhgaQs8OqviKlPkOBBjLPen+C6+ljyFuifmIpNsLhmQvrEXfEiYtoq35o4J
n3kFuge0x9XC57GZSZiwY79/bEAvoMWgfABJpOBCKXMoBiG0MzdiMsq8XGyB
6GhAHgTxca/v4mILtBdL5u/gFHiFQeBKx+DNq87V3UE316R0FzoFrxfkn7LQ
UuugyxtCMIqhGuUOHIlNt2LHzBa2TiImRxOORcCBGPCpgvYWzWqmwVjLwlXv
aFyapz8UqmGF0bsP342m5sy1mJbMJ246QJRlzEOnndNUZoXlxFeStLeMHzCf
HimJK0gRtEH8IuHrnbf2A5KI5sb+u3ibl3H62enw5a+ZsDAfgRH8hJ7szIKD
7L/N9F7XfsMtD/SzNzGHd7bTS7+K8iMFKRGTPYpqYN/PQZAumoEVP7dZZnDp
xHoM5qBt2PtoOp2KTKaDRAgjrdLH5GrzMlZ8oMkc0AuaGea0TiB2chhBkvlW
tCZsfkTrSpeyWO4spQhxN6HSBUiijoI4dtIZfM+qmYK1piGa+eSGOPIEgJ4O
Tp2nrtJqUEwAmed4+5CKKGSB9gPhNXDhIOTot5XYEdjhrFWYhm2Do/2wz58j
pOA3PkkquV71JORiR4rYA93cFgsGMRrWDpLxhydrs/n39ztVBmYhWo4VpWbF
nOaQ6KTolZU3W9BHWBOn5boWVCzLH1xQ4/O3PkLsFU7NuZZC6Zjun0gIybpT
DSP9QOM++gDjHHv8fe4NszfFJNp1myYSURp1Qfw+BDxYyqHB7lIyMdw/QRS4
reFstuP4CK/bJVc+3y8ZikrSOzSasOoV7Hm5mYSQ+GlOcCUI58SLGO1Vf4jk
i/rjjv1Ax5FFWUz2R3djJGVh5sAk2FhTW3mpyw89BT8r8fyEOM+iKIC4RzOb
XaKwAHS1n1jUYDqnJUR2KD11f99lkERe8/6zfDWRjw942CWIJG4SidrP4GmW
ZFEQikCroWodgbj10Q4vRjjwPCto4IQJpFnISxBIEC8FUQ6CAgNpZSGxGORf
Af/MDFo+qVk/6THl1bFZCWm/z3NozHH/Gf/Jh9GVrZaw6qac3WVl4/JtCGNz
cbKbss2UzGJJl1Y+EFoS/nHzLA4JkNk7hJnoHkZC3e75pHuu++Bpw0Zkj73d
zLJbywAj1RwKITKLM4XU3zmv4GBwaN7kQHk5k/KZWE5IhU0FkF0R0srg158k
n71gmVwwo9fO+ycCe2GTocvXHvr4JPtwZIe82vyh+iwfSvYijMiG1R7zkmKr
pMlhYAAeKzKLh1wx299oJhDWeow/zq/U4F5fXA0neqQOTTNCdyri52OwclNI
mLCz1Q59SiJkzVV9jpDG3Ds+yF+6pbMo+oEH1EszJ9AaqLWdLqZRf3e98jst
ef9RvwAviULxn/An4iNFDKOmgDWZZxW5bJhbItRnipawvcttyyk1V1dZX8oc
D/bxSCEeiq8UEsfIbxCaJHUTtA82bN3MjlwziyQEFxM/wwYpGWmTyD414HZQ
3srnfc/ZgVJlTpHFIwIZRc+VwC1uufABOoIYir4gJeBtT492WJvoeW7O8jx4
RKkQpdl8rnECIqqqjJOlJiKZbDnyiY2MzTTtNlS/4gLBkGR3dUsh/N4PKDXX
qjsSJoB+1qeIsZBsEH9rpmkajuj3qHq2lfzBzj52Dqspa9HsrsfpUjpOJI3J
5a5C9eD+iU/IgERiw6HgLIUwk9chBZ990H0KNpRavKb6g5N9tHzVWoK5hDX7
MeFZACmCgHxo1E03qLIMNpTEHK859xGxWddV0MFPMdlbMc3AMZC/fkzoC2qI
0VkIOhBZkXf2Uz32zBaEaoUZIc1kGRCTMMDpYyq4LYI1hl6K1dvdm/eODhP+
yPx0l/bmQOlHE6uF3UkmkU8I83xtpedaqSqyWNPWXiTbDeuku+iXwM4Kn5pT
0+3PPlrtNGT4fM3ACHym0YbGKbjqEBYzGdytrIKs24swjU/RdBzuZCWDXc2q
gEClABNce097ZDwdmOYsumhEs16yzoYsll66Dv88jVdlG0lISZhAz1AaWhiN
NQVFnZy8eAaasfwkedQ2MbIjhIo5GNwoEWibSNYu+tfUqRvkTiX+W+Aj044a
SPlcUy27SMXuJXWwWGPMGqQpbl9h2D/LJm1eW4yV6+IboU5YXjPTur6vfuhO
gQjxOmiOlwkHA/4YCJradK280mOou/fePxRAJZnDWqAW53KAk9zzcWNDDRcG
0nOAtS3gS2giDptoSXXjGbtnaFbqHy57QFWVVQrrClQ14fVrLIdIUAn3roEm
AAkrDhofC4spNBinsb220QyqYgoDdRmaFl/OU3VlTFzwHwIRP97caMlkV5eZ
7dWpXUj2sdjYT/hVJGAvXu8lHJWunkldCYGrid8CANCSl+bf7zDTVKqp9lPM
jOiYkir1yANHzNzBE2Fy2SlRFolZdInCN6/PjeRw2L7oQwaWqpJJzsRcv0pC
o5oAkwsM0ne+PPlit2juYFr9vrtMrGqHFAtzghmdVbLImvBqqcM9Cfhlhcdp
CtNXcw/TrG9s2QFBNRUF6u18+veTDI+zVZ+1Iqb7IMEBeBC8U3nM119/HAC/
X9XFSUJq/Yav00ZAemYaWO0Kj0pXbbEzppTxz6wAOBMTZHO4O6mUgI611pF9
DcEx+4QVfOYFUwm5BgSd+k4XOs2lzddSw/0Eq5vRMe61aGAm0HFP4JlMh/XF
Iegae0nCkAzzIbsG4CJpgZJMvArrQrK+rRX4jL5YlCfmG406758s5AOsyC9n
rSn4xfyZUZP5JfrFTPjfP052/ot+ef78Wm3s0cdgX58//8WYD4UwGn/h5efP
v+5Z6jjniH+1Tp74ZIq8xHD01bF0ClRtZ40LljtVPyg1l6xYN7VvP6Am+aRN
kmeCRlvi0zlFkIe88b0zOsJP2aXW6u1aGcowHehGEmmB+qDHYkE3AuMXLdlK
mZi5tSlbOXbWkrTatu+CeouG+VaII8r1EhoSxU2alWJUEe4sCOkS8Rj05K2r
3oYI4Yn5aFclOHNh3S1Drvsn8FGw9Lf/VxjnJ40G/tfjmkwMm+CQWFKmFWJE
LaZJ/CItLyGlovLMFyK+scumIBIdu7nFI3qtgbGdmznCC6cx8V0GrY98GZEp
3DKg9eofnNwlAPErz6k/Sw76HHEod0arcnQDq062s/YDvWWM+tvxTcpHROL4
+2/mHF/0uaA9Jkrc6kK/TGDpP3UEDbT0skyh5ztRJ+ZPnWQVSl8qowRInaf/
9ghwhSiYGQ4Qa2REJ/DMZ9ojj5sYk/R0yeWg87gHlfGapv/7/cs99iOGSUXz
nBe6I7APVJQ311Z9oPwtYhczyhXCyt8w0RqGLzNJeMhBD6JMccvBT1LBg6/0
m8D6WirDh5CGo33ptXyoUQonFfKqLw/lgG5QwHgtUheQt8KrkQABlj/gK4vO
ZXvwpcjy2+2sytIBwORh3rZIU4MBSrsSN+nLdd3KdVc1lfJG6JoL/TIxXbwt
wODSRZq44ZQ8GT2qWbB/RVvJNIRwwekmTSX67TfXL5EISSIyZ2VD2DGYNRHo
3b5SW6ksCrmWUtkimZQZZH4d51GvfNkvxkzEenMp/N0Rp5e9Z3ZYwxZ6afKK
UhRJnD3QR4ZkmF7pmPi6DUG6Bgoz229ais1oKRwa+QJEPwJo09G7EcBesNYU
Aqw+k2HoJ77Z79bmtEfLBud2IW7Oy3I96pfG/WMNp4JkSm1opAG5w5ZHY9Mx
vfu6YMoo8w0QuzDyFac5OfaBRL9fIhyj1wb7eIRQyqrSjjqITZSlbAo/cH6o
9aQuJ1bVmihvu4eKQx6v7VG6K7M0pthJPhzLON8xeXiHXkMayRbT4HS8JYiE
p2J+qq5Cm1FwdYnq2C4Z23hW5hJd7auqkJn7EU0L0cYmrpOlpi3lDZycwXk2
7yk8HYHlGJ/BTM2IRZGRvw3iu02JVmpJ+0ujQNcMvVeF9V6yJ8mXWq6lQGvl
9r8Q07wvDzpG2mTxjCxPvC03jwIcFYvYOL0KIK2DBbswnaY0Nd0mFOoTIQrk
Umeh09I/EEtUmQp3G1iof/UwRbmi68GIf52JiKiL8/WGGc0+m/iMNG0gBNTr
QAKZgLt62Ectke+O0it6yranrO1ssrRejoPneGZmMaWVCfS4jgcwLni/fe5e
92SBLKZs/C9gsBwusJL8K1pU3sl5aAIayDc5GnUcFYO+0vIjNIE1FRGHmTJO
8k8h+BemqnUNXgnfR3Oi3WErAxwlwNTUO6AW4MBnNEXK1KyYgXmc5RqjdsYn
E+FL7AFOdc7sSevMfmtOfQal7rPq71JFKdlKbjthvbLtRe1BwZ6WRTta1mqY
UtjHIF2cIdkdCQs8A91jQedOfKn7pRTQR7QotpnJXTOZsc6E+p16dndmqZEs
ljH4uRV4VEoDBy9ih8ZJolILp51BjCSbNwg+ozgFL2qcWjJQ0iDpnqn98d5U
DDvoucHMo1BfM/xXuemOoBcyBNnIeJsuWry3ihkO3EHspA2VOqK70PZ2RLQM
ncedoWqr+FFewj75oBeCm+cuXP2QBcrBaMNLpoLUna9Y8UW5YuHFINJucQ9b
1pAERVAfismF9hQEGFWmLYx6o2GlOthRO3T0q7KrPs+m5U/wcrGEFmfFbbyw
bRNnrzOprfz3+jd7rQIajkLaZAR8b6K3wVgfceRjv+EJNF47Ta8XdsMgG8af
Lb+2TpTDAfwTW2LmLNyGqT8HHLuwSdud2xYZaSvwEEsrXBtAGpZBBKeHXuB2
hL895/vZq10RaUnhbVLHoxYagEn/SdjgbzJIeMno1vbMkG6PmF0aZoVfnjhw
lVs8WnG5XNCXRJ5MkNZlUBDGpV3Di0wbLEu/S6c1c/AoE4LTit8Fz0F9i9es
R/txoqFqytSvgMVe+UK41q7gM/H9ywnQG6f5DlU4n1KmiNRl4P2o3fZIctaZ
vYvzQe0wyDJ7wtdlUIM9bu+7IbD9f4If+gzbd72PNDrhi3Gv5fsAqzqnEc0B
yNlA0IuVve8Ztz5m7P2Op2ov/t+xR7CjXcdS11wUVNy3vvjS9HXbuneo47i7
2KfV8cFFQUYw+y/1a5LabDHPKvrg4vOxZ9dYFRLcNBL7IcJTDvV24OHZOOpj
S3km4JIdMdL60h7/aT86h8/jcbobhu2VNdcsQjSnBVsmSV7LfRKfWZjxYvDA
Xt7f7/YSPODQvtFFYk6+WllmOqFMva4h5pzLvOld/zsLV1S5o35KpmtpuH+y
c9FRY38ls++z3KsWHrq22Q93NZK90+aARUsyXxkgOYcpOj4M6edA7AXCUBbB
97NEHB2SnjK6gpAS3fLuG1dVvyJ9S1oOkUpk8WOo5W38JYGPNyy26YewbUnL
2XRwHzaNt/5mmfaEt1220QybVObsFWKImyQQlo4UDXTa2D2rgyVjPiGSXhkA
SGIvLWdL+1v2s3C77fDq2p70Kh8vdmj7CM4adTsWNDBo9TkgDj1l2JeGfOXl
wHf/ao8sY7uhzI331a3l+1/XtkeCun3FOxhTDHVQFLwtldvdPhrpqQzckgYu
mtdI253L6pZ6Gch8CFMdqKR9+/ZaK2n8yRhfquUPYOBL/vXwEClkiTUjkgOL
C998MM8WRr0DtdV7QJbJkK4O6i9MtzuI/I3MLu/pATVLdiH205CjvZ6mTYtp
aLfsSUiaSsdGiw6jeMVf5pFyK/jHq6b6k0ZVV53waB2n8H1ctbP5fBoSuuK7
aAKll8ObPk/9cB1zz8D31Ezz27tqJC3cdBBAiAekuPP1+0Jcpl6IMUGVBmMG
+5oekNZd0NCJbIcUKWUYsDvU9UWyAxi8lup8pjTfag5T5U88u9AEsXPXRT6V
20v6WwIgIEt4o07yR4c7Sjx9meldlbDQlIL9e1ktmcedNTks0YQkIsVy1a8q
ZwDVWzNalCXipoL9ZaNBH1VhbRou4uz8VoD/hQarKJBXDnS3B3Yw7KaXpW3F
houpudbfhxmY5J0Nq9rJRdhwfUmElrzVNhjfcJNkXQO7NrQ6zZdm/p7LyfRE
dZq/E+V1Wh688F+/+t1x8BQ+PJNW2XkD89K4tu2u7VDo5XN92MvfGFv4GJIZ
la65yLdMaNrLfNDs0CMkkqCwa6rtH27YngLXcO0r/vyVK/abBo7IHNDYreGN
bUkiwSz4W3eiXRQdf8dD0OJUgZumhuO2m3EZ7+utGFyuklYUYS/tGgfvWQF/
tZGRAHPofYHVPiYBP5I19n2Ict2OWXT2cUhEADYUCAZ6bUU9I9Lecnmy/3N3
bfMxf4JH6uD6awwQ4A8XH07Ndw1vC/L6Qj2WJnjow3Qqc12evT/buYOx+8Mk
2uNTlOGqkcge35MJrnlLngWc3UlCWamdJ9ObBkWqoq0/4dG7QBSagG3hGt8H
5X83xboIy7uwUjK8MsKrHNzUpvA/vMKUB7d2ltwW5SZnrkR/z09Mqf58nfMW
R4tApSQVb9VHSEnLD/LKMIQiUWV92aH9vQ/fFP8QeJfpL0poomG4opTcB8tG
ZzkMy5uKv9GXI0p82+DhuXR2q56+iz9lK3PdtrvrKgBP2uLiE6BOJ8xcJD1C
IMbOr+89ugGszSoNIhtzSdngmu8RcpSASubrsryVYqweizVAnnrd/lxCsKG2
td9S0vgPfriyzMBRAAA=

-->

</rfc>
