<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.6 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wh-rtgwg-application-aware-dc-network-02" category="std" consensus="true" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.20.0 -->
  <front>
    <title abbrev="APDN">Application-aware Data Center Network (APDN) Use Cases and Requirements</title>
    <seriesInfo name="Internet-Draft" value="draft-wh-rtgwg-application-aware-dc-network-02"/>
    <author initials="H." surname="Wang" fullname="Haibo Wang">
      <organization>Huawei</organization>
      <address>
        <email>rainsword.wang@huawei.com</email>
      </address>
    </author>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="W." surname="Pan" fullname="Wei Pan">
      <organization>Huawei</organization>
      <address>
        <email>tarzan.pan@huawei.com</email>
      </address>
    </author>
    <author initials="H." surname="Huang" fullname="Hongyi Huang">
      <organization>Huawei</organization>
      <address>
        <email>hongyi.huang@huawei.com</email>
      </address>
    </author>
    <date year="2024" month="March" day="01"/>
    <area>General</area>
    <workgroup>Network Working Group</workgroup>
    <keyword>APDN</keyword>
    <keyword>Data Center</keyword>
    <abstract>

<t>The deployment of large-scale AI services within data centers introduces significant challenges to established technologies, including load balancing and congestion control. Additionally, the adoption of cutting-edge network technologies, such as in-network computing, is on the rise within AI-centric data centers. These advanced network-assisted application acceleration technologies necessitate the flexible exchange of cross-layer interaction information between end-hosts and network nodes.</t>
      <t>The Application-aware Data Center Network (APDN) applies the application-side extension of the Application-aware Networking (APN) framework to furnish the data center network with fine-grained application-aware information. This approach facilitates the rapid advancement of network-application co-design technologies. This document explores the use cases of APDNs and outlines the associated requirements, setting the stage for enhanced performance and efficiency in data center operations tailored to the demands of AI services.</t>
    </abstract>
  </front>
  <middle>

<section anchor="intro">
      <name>Introduction</name>
      <t>The advent of large AI models such as AlphaGo and ChatGPT has made distributed training of large models a pivotal workload within large-scale data centers. To improve training efficiency for these substantial models, large numbers of computing units (for example, thousands of GPUs operating in tandem) are deployed for parallel processing, aiming to minimize job completion time (JCT). This setup necessitates frequent, bandwidth-heavy communication among concurrent computing nodes, introducing a novel multi-party communication mode that demands higher throughput, load balancing proficiency, and congestion management capability from the data center network.</t>
      <t>Traditionally, data center technology primarily views the network as a mere conduit for data transmission for upper-layer applications, offering basic connectivity services. Yet, the scenario of large AI model training is increasingly incorporating network-assisted technologies, such as offloading parts of the computation to the network. This approach seeks to boost AI job efficiency through the joint optimization of network communication and computing applications. In many current instances of network assistance, operators tailor and implement proprietary protocols on a limited scale, leading to a lack of widespread interoperability.</t>
      <t>However, as AI data centers grow and diversify in offering cloud services for various AI tasks, emerging data center network technologies must account for serving different transports and applications. Building large-scale data centers now involves not just ensuring device interoperability but also facilitating interaction between network devices and end-host services.</t>
      <t>This document illustrates use cases that require the exchange of application-aware information between network nodes and applications. Existing means of conveying such information are limited by the extensibility of packet headers: only coarse-grained information can be exchanged between the network and the host through a very small field (for example, the one-bit ECN mark [RFC3168] or the DSCP field in the IP header).</t>
      <t>The Application-aware Networking (APN) framework <xref target="I-D.li-apn-framework"/> delineates how application-aware information, including APN identification (ID) and/or parameters (e.g., network performance requirements), is encapsulated by network edge devices. This information is then carried in packets across an APN domain to support service provisioning, enable fine-grained traffic steering, and adjust network resources. An extension of the APN framework caters to the application side <xref target="I-D.li-rtgwg-apn-app-side-framework"/>, allowing APN domain resources to be allocated to applications that encapsulate the APN attribute in packets.</t>
      <t>This document delves into the application side of the APN framework to foster enriched interaction between hosts and networks within the data center, outlining several use cases and the corresponding requirements for the Application-aware Data Center Network (APDN).</t>
      <section anchor="terminology">
        <name>Terminology</name>
        <t>APDN: APplication-aware Data center Network</t>
        <t>SQN: SeQuence Number</t>
        <t>ToR: Top-of-Rack switch</t>
        <t>PFC: Priority-based Flow Control</t>
        <t>NIC: Network Interface Card</t>
        <t>ECMP: Equal-Cost Multi-Path routing</t>
        <t>AI: Artificial Intelligence</t>
        <t>JCT: Job Completion Time</t>
        <t>PS: Parameter Server</t>
        <t>INC: In-Network Computing</t>
        <t>APN: APplication-aware Networking</t>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
    </section>
    <section anchor="use-case-and-requirements-for-application-aware-data-center-network">
      <name>Use Cases and Requirements for the Application-aware Data Center Network</name>
      <section anchor="fine-grained-packet-scheduling-for-load-balancing">
        <name>Fine-Grained Packet Scheduling for Load Balancing</name>
        <t>Traditional data centers utilize the per-flow Equal-Cost Multi-Path (ECMP) method to distribute traffic evenly across several paths. These centers, primarily focused on cloud computing, handle a vast number of data flows. Despite the large quantity, these flows are predominantly small and short-lived, allowing the ECMP method to facilitate a nearly uniform traffic distribution across multiple pathways.</t>
        <t>By contrast, the communication dynamics shift markedly during the training of large AI models. This process demands unprecedented bandwidth: a single data flow can approach or exceed 100 Gb/s and thereby saturate the upstream bandwidth of a server's egress Network Interface Controller (NIC).</t>
        <t>Applying traditional per-flow ECMP strategies, such as hash-based or round-robin algorithms, often results in the concurrent allocation of large ("elephant") flows to a single pathway. This can lead to severe congestion, notably when two simultaneous 100Gb/s flows vie for the same 100Gb/s bandwidth, significantly impacting the completion time for AI jobs.</t>
        <t>To mitigate these issues, there is a pivotal shift towards a fine-grained, per-packet ECMP strategy. This approach distributes the packets of a single flow across multiple paths, improving balance and preventing congestion. However, because propagation and switching delays vary across paths, packets may arrive at the destination significantly out of order, degrading the performance of both the transport and application layers.</t>
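        <t>The difference between the two scheduling granularities can be sketched as follows. This is an illustrative Python model only, not an implementation: the packet fields, path list, and hash choice are hypothetical.</t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib

def ecmp_per_flow(pkt, paths):
    # Hash the 5-tuple so every packet of a flow takes the same path.
    # Two elephant flows may still hash onto the same path and collide.
    key = "%s|%s|%s|%s|%s" % (pkt["src"], pkt["dst"],
                              pkt["sport"], pkt["dport"], pkt["proto"])
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return paths[h % len(paths)]

_counter = 0

def ecmp_per_packet(pkt, paths):
    # Spray packets round-robin across all paths: load is balanced,
    # but the flow's packets may arrive out of order.
    global _counter
    path = paths[_counter % len(paths)]
    _counter += 1
    return path
```
]]></sourcecode>
        <t>Per-flow hashing keeps ordering but can pin two 100 Gb/s flows to one link; per-packet spraying avoids that collision at the cost of reordering, which motivates the resequencing mechanism below.</t>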
        <t>A viable solution is the resequencing of out-of-order packets at the egress Top-of-Rack (ToR) switch, employing per-packet ECMP. This assumes multipath transmission extends from ingress to egress ToRs, with the reordering principle ensuring that the packet departure sequence from the last ToR mirrors the arrival sequence at the first ToR.</t>
        <t>Achieving packet reordering at the egress ToR necessitates a clear indication of packet arrival sequences at the ingress ToR. Current protocols do not directly mark sequence numbers (SQNs) at the Ethernet and IP layers.</t>
        <ul spacing="normal">
          <li>
            <t>Presently, SQNs are encapsulated within transport layers (e.g., TCP, QUIC, RoCEv2) or application protocols. Relying on these SQNs for packet reordering requires network devices to interpret a vast array of transport/application layer information.</t>
          </li>
          <li>
            <t>SQNs at the transport/application layer are allocated per flow, with each having distinct sequence number spaces and initial values. These cannot directly represent the packet arrival sequence at the initial ToR. Although assigning a specific reordering queue to each flow at the egress ToR and reordering based on upper-layer SQNs is conceivable, the associated hardware resource demands are significant.</t>
          </li>
          <li>
            <t>Direct modification of upper-layer SQNs by network devices to reflect ToR-ToR pairwise SQNs compromises end-to-end transmission reliability.</t>
          </li>
        </ul>
        <t>Consequently, a mechanism to convey specific order information across the multipath forwarding domain, from the initial to the final device with reordering capabilities, is essential.</t>
        <t>The Application-aware Networking (APN) framework is proposed to transport this ordering information. In this context, the ingress ToR records a sequence number for each packet as it arrives (each ToR-ToR pair having its own unique, incremental SQN space), enabling the egress ToR to reorder packets accordingly.</t>
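        <t>The egress-side reordering principle can be sketched as follows. This is a minimal Python model assuming a single ToR-ToR pair and a hypothetical APN-carried SQN starting at zero; real hardware would additionally need timeouts to distinguish reordering from loss.</t>
        <sourcecode type="python"><![CDATA[
```python
import heapq

class Resequencer:
    """Restore ingress-ToR arrival order at the egress ToR using the
    per-ToR-pair sequence number carried in the APN attribute."""

    def __init__(self):
        self.next_sqn = 0   # next SQN expected for this ToR-ToR pair
        self.heap = []      # buffered out-of-order packets, keyed by SQN

    def receive(self, sqn, pkt):
        """Buffer the packet; return all packets now releasable in order."""
        heapq.heappush(self.heap, (sqn, pkt))
        released = []
        # Drain the head of the heap while it is contiguous with next_sqn,
        # so departure order mirrors arrival order at the ingress ToR.
        while self.heap and self.heap[0][0] == self.next_sqn:
            _, p = heapq.heappop(self.heap)
            released.append(p)
            self.next_sqn += 1
        return released
```
]]></sourcecode>
        <t>For example, if packets tagged SQN 1, 0, 2 arrive in that order, the first is held, the second releases both in order, and the third is released immediately.</t>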
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ1-1] The APN framework <bcp14>SHOULD</bcp14> tag each packet with an SQN alongside the APN ID to enable reordering. The ingress ToR <bcp14>SHOULD</bcp14> assign and log an SQN for each packet based on its arrival sequence, with SQN granularity adaptable to ToR-ToR, port-port, or queue-queue levels.</t>
          </li>
          <li>
            <t>[REQ1-2] The APN-encapsulated SQN <bcp14>MUST</bcp14> remain unaltered within the multipathing domain and <bcp14>MAY</bcp14> be removed at the egress device.</t>
          </li>
          <li>
            <t>[REQ1-3] The APN framework <bcp14>SHOULD</bcp14> convey necessary queue information (i.e., the sorting queue ID) to support fine-grained reordering. The queue ID <bcp14>SHOULD</bcp14> match the granularity of SQN assignment. Additionally, the APN framework <bcp14>MAY</bcp14> transport path details to expedite the differentiation between out-of-order packets and packet loss.</t>
          </li>
        </ul>
      </section>
      <section anchor="inc-uc">
        <name>Enhancing Distributed Machine Learning Training with In-Network Computing</name>
        <t>Distributed machine learning training frequently employs the AllReduce communication mode <xref target="mpi-doc"/> for efficient cross-accelerator data transfer. This method is pivotal in scenarios involving data and model parallelism, where parallel execution across multiple processors necessitates the exchange of intermediate results, such as gradient data, as a core component of the communication process.</t>
        <t>The Parameter Server (PS) architecture <xref target="atp"/>, which centralizes gradient data aggregation through a server from multiple clients and redistributes the aggregated results, often faces incast congestion due to simultaneous large-volume data transmissions to the server.</t>
        <t>In-network computing (INC) introduces a paradigm shift by delegating the server's processing tasks to network switches. Utilizing network devices equipped with high-capacity switching and computational abilities (for basic arithmetic operations) as surrogate parameter servers for gradient aggregation enables the consolidation of multiple data streams into a singular network stream. This approach not only alleviates server-side incast congestion but also leverages the superior speed of on-switch computing (e.g., ASICs) over traditional server-based processing (e.g., CPUs), offering a boon to distributed computing applications.</t>
        <t>As outlined in <xref target="I-D.draft-lou-rtgwg-sinc"/>, the realization of INC requires network devices to comprehend the computing tasks dictated by applications, including the accurate parsing of relevant data units and the coordination of synchronization signals across diverse data sources.</t>
        <t>Present implementations such as ATP <xref target="atp"/> and NetReduce <xref target="netreduce"/> necessitate that switches interpret upper-layer protocols and application-specific logic, which remains tailored to particular applications due to the absence of standardized transport or application protocols for INC. To accommodate a broad spectrum of INC applications, network devices must be versatile across various protocol formats.</t>
        <t>Moreover, while end users may encrypt payloads for security, they might be inclined to expose certain non-sensitive data to benefit from accelerated INC operations. However, the current protocol landscape does not facilitate easy access to necessary INC data without decrypting the entire payload, posing interoperability challenges between applications and INC functionalities.</t>
        <t>The Application-aware Networking (APN) framework emerges as a solution, capable of conveying essential information for INC tasks and their associated data segments, thereby enabling the offloading of specific computational tasks to the network.</t>
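        <t>The switch-side aggregation that INC delegates from the parameter server can be sketched as follows. This is an illustrative Python model under stated assumptions: the task and chunk identifiers stand in for the APN-carried INC task ID and data-segment description, and loss or retransmission handling is omitted.</t>
        <sourcecode type="python"><![CDATA[
```python
class IncAggregator:
    """In-switch gradient aggregation: fold N worker streams into one."""

    def __init__(self, n_workers):
        self.n_workers = n_workers
        # (task_id, chunk_id) -> (partial element-wise sum, contributions seen)
        self.state = {}

    def on_packet(self, task_id, chunk_id, gradients):
        key = (task_id, chunk_id)
        acc, count = self.state.get(key, ([0.0] * len(gradients), 0))
        # Element-wise add this worker's gradient chunk into the partial sum.
        acc = [a + g for a, g in zip(acc, gradients)]
        count += 1
        if count == self.n_workers:
            # All workers have contributed: release one aggregated result
            # downstream instead of N separate streams (incast relief).
            del self.state[key]
            return acc
        self.state[key] = (acc, count)
        return None  # hold until the remaining workers arrive
```
]]></sourcecode>
        <t>With three workers sending gradient chunks, the switch emits nothing for the first two packets and releases a single summed chunk on the third, which is the consolidation effect described above.</t>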
        <t><em>Requirements</em>:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ2-1] The APN framework <bcp14>MUST</bcp14> include identifiers to differentiate among INC tasks.</t>
          </li>
          <li>
            <t>[REQ2-2] The APN framework <bcp14>MUST</bcp14> accommodate the transport of application data in varied formats and lengths, such as gradient data for INC, along with the specified operations.</t>
          </li>
          <li>
            <t>[REQ2-3] To augment INC efficiency, the APN framework <bcp14>SHOULD</bcp14> transmit additional application-aware information to support computational processes without undermining end-to-end transport reliability.</t>
          </li>
          <li>
            <t>[REQ2-4] The APN framework <bcp14>MUST</bcp14> have the capability to convey comprehensive INC outcomes and document the computational status within data packets.</t>
          </li>
        </ul>
      </section>
      <section anchor="enhanced-congestion-control-with-precise-feedback-mechanisms">
        <name>Enhanced Congestion Control with Precise Feedback Mechanisms</name>
        <t>Data center environments encompass various congestion scenarios, notably:</t>
        <ul spacing="normal">
          <li>
            <t>The prevalent use of multi-accelerator collaborative AI model training, employing AllReduce and All2All communication patterns (<xref target="inc-uc"/>), often leads to server-side incast congestion as multiple clients simultaneously transmit substantial volumes of gradient data.</t>
          </li>
          <li>
            <t>Diverse load balancing methodologies across different flows can induce overload conditions on specific links.</t>
          </li>
          <li>
            <t>The inherent randomness of service access within data centers frequently triggers traffic bursts, extending queue lengths and precipitating congestion.</t>
          </li>
        </ul>
        <t>To mitigate these challenges, the industry has developed an array of congestion control algorithms tailored for data center networks. ECN-based congestion control mechanisms, such as DCTCP <xref target="RFC8257"/> and DCQCN <xref target="dcqcn"/>, leverage ECN marks based on switch buffer occupancy levels to signal congestion.</t>
        <t>However, these approaches are constrained by the use of a single one-bit mark within packet headers to denote congestion, limiting the congestion details that can be conveyed given header space restrictions. Alternative strategies, such as HPCC++ <xref target="I-D.draft-miao-ccwg-hpcc"/>, adopt in-band telemetry to cumulatively append congestion data at each hop, increasing packet length and bandwidth consumption.</t>
        <t>A compromise solution, AECN <xref target="I-D.draft-shi-ippm-advanced-ecn"/>, endeavors to encapsulate critical congestion indicators along the path while minimizing overhead through hop-by-hop aggregation of metrics such as queue delay and congested hop count. This model allows end-hosts to specify the congestion metrics of interest, with network devices incrementally compiling this data en route. The APN framework can facilitate this nuanced exchange, enabling tailored accumulation of congestion data.</t>
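        <t>The hop-by-hop accumulation model can be sketched as follows. This is an illustrative Python model only: the field names are hypothetical and do not represent the AECN or APN wire format.</t>
        <sourcecode type="python"><![CDATA[
```python
def update_metrics(pkt_meta, hop):
    """Each node folds its local state into the packet's APN metadata,
    but only for the metrics the sender asked to gather."""
    wanted = pkt_meta["requested"]
    m = pkt_meta.setdefault("metrics", {})
    if "max_queue_delay" in wanted:
        # Keep only the worst queueing delay seen so far (aggregation,
        # not per-hop append), and tag which node contributed it so the
        # end-host can locate the congestion point.
        if hop["queue_delay_us"] > m.get("max_queue_delay", -1):
            m["max_queue_delay"] = hop["queue_delay_us"]
            m["max_queue_delay_hop"] = hop["node_id"]
    if "congested_hops" in wanted:
        # Count how many hops along the path observed congestion.
        m["congested_hops"] = m.get("congested_hops", 0) + (
            1 if hop["ecn_marked"] else 0)
    return pkt_meta
```
]]></sourcecode>
        <t>Unlike per-hop telemetry appending, the packet carries a fixed-size summary regardless of path length, which is the overhead reduction AECN targets.</t>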
        <t><em>Requirements</em>:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ3-1] The APN framework <bcp14>MUST</bcp14> empower data senders to specify the congestion metrics they wish to gather.</t>
          </li>
          <li>
            <t>[REQ3-2] The APN framework <bcp14>MUST</bcp14> enable network nodes to log and update selected measurements accordingly. This may encompass metrics such as port queue lengths, link monitoring rates, PFC frame counts, probed RTTs, and variability, among others. Additionally, the APN <bcp14>MAY</bcp14> tag each measurement with its collector, assisting in the identification of potential congestion points.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="encapsulation">
      <name>Encapsulation</name>
      <t>The encapsulation in the APN Header <xref target="I-D.draft-li-apn-header"/> of the application-aware information required by the APDN use cases will be defined in a future version of this draft.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="mpi-doc" target="https://www.mpi-forum.org/docs/mpi-4.1">
          <front>
            <title>Message-Passing Interface Standard</title>
            <author>
              <organization/>
            </author>
            <date year="2023" month="August"/>
          </front>
        </reference>
        <reference anchor="dcqcn" target="https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p523.pdf">
          <front>
            <title>Congestion Control for Large-Scale RDMA Deployments</title>
            <author>
              <organization/>
            </author>
            <date year="2015" month="August"/>
          </front>
        </reference>
        <reference anchor="netreduce" target="https://arxiv.org/abs/2009.09736">
          <front>
            <title>NetReduce: RDMA-Compatible In-Network Reduction for Distributed DNN Training Acceleration</title>
            <author>
              <organization/>
            </author>
            <date year="2020" month="September"/>
          </front>
        </reference>
        <reference anchor="atp" target="https://www.usenix.org/conference/nsdi21/presentation/lao">
          <front>
            <title>ATP: In-network Aggregation for Multi-tenant Learning</title>
            <author>
              <organization/>
            </author>
            <date year="2021" month="April"/>
          </front>
        </reference>
        <reference anchor="I-D.li-apn-framework">
          <front>
            <title>Application-aware Networking (APN) Framework</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Daniel Voyer" initials="D." surname="Voyer">
              <organization>Bell Canada</organization>
            </author>
            <author fullname="Cong Li" initials="C." surname="Li">
              <organization>China Telecom</organization>
            </author>
            <author fullname="Peng Liu" initials="P." surname="Liu">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Chang Cao" initials="C." surname="Cao">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon Inc.</organization>
            </author>
            <date day="3" month="April" year="2023"/>
            <abstract>
              <t>   A multitude of applications are carried over the network, which have
   varying needs for network bandwidth, latency, jitter, and packet
   loss, etc.  Some new emerging applications have very demanding
   performance requirements.  However, in current networks, the network
   and applications are decoupled, that is, the network is not aware of
   the applications' requirements in a fine granularity.  Therefore, it
   is difficult to provide truly fine-granularity traffic operations for
   the applications and guarantee their SLA requirements.

   This document proposes a new framework, named Application-aware
   Networking (APN), where application-aware information (i.e.  APN
   attribute) including APN identification (ID) and/or APN parameters
   (e.g.  network performance requirements) is encapsulated at network
   edge devices and carried in packets traversing an APN domain in order
   to facilitate service provisioning, perform fine-granularity traffic
   steering and network resource adjustment.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-framework-07"/>
        </reference>
        <reference anchor="I-D.li-rtgwg-apn-app-side-framework">
          <front>
            <title>Extension of Application-aware Networking (APN) Framework for Application Side</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="22" month="October" year="2023"/>
            <abstract>
              <t>   The Application-aware Networking (APN) framework defines that
   application-aware information (i.e.  APN attribute) including APN
   identification (ID) and/or APN parameters (e.g. network performance
   requirements) is encapsulated at network edge devices and carried in
   packets traversing an APN domain in order to facilitate service
   provisioning, perform fine-granularity traffic steering and network
   resource adjustment.  This document defines the extension of the APN
   framework for the application side.  In this extension, the APN
   resources of an APN domain is allocated to applications which compose
   and encapsulate the APN attribute in packets.  When the network
   devices in the APN domain receive the packets carrying APN attribute,
   they can directly provide fine-granular traffic operations according
   to these APN attributes in the packets.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-rtgwg-apn-app-side-framework-00"/>
        </reference>
        <reference anchor="I-D.draft-lou-rtgwg-sinc">
          <front>
            <title>Signaling In-Network Computing operations (SINC)</title>
            <author fullname="Zhe Lou" initials="Z." surname="Lou">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luigi Iannone" initials="L." surname="Iannone">
              <organization>Huawei Technologies France S.A.S.U.</organization>
            </author>
            <author fullname="Yizhou Li" initials="Y." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zhangcuimin" initials="" surname="Zhangcuimin">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <date day="15" month="September" year="2023"/>
            <abstract>
              <t>   This memo introduces "Signaling In-Network Computing operations"
   (SINC), a mechanism to enable signaling in-network computing
   operations on data packets in specific scenarios like NetReduce,
   NetDistributedLock, NetSequencer, etc.  In particular, this solution
   allows to flexibly communicate computational parameters, to be used
   in conjunction with the payload, to in-network SINC-enabled devices
   in order to perform computing operations.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lou-rtgwg-sinc-01"/>
        </reference>
        <reference anchor="RFC8257">
          <front>
            <title>Data Center TCP (DCTCP): TCP Congestion Control for Data Centers</title>
            <author fullname="S. Bensley" initials="S." surname="Bensley"/>
            <author fullname="D. Thaler" initials="D." surname="Thaler"/>
            <author fullname="P. Balasubramanian" initials="P." surname="Balasubramanian"/>
            <author fullname="L. Eggert" initials="L." surname="Eggert"/>
            <author fullname="G. Judd" initials="G." surname="Judd"/>
            <date month="October" year="2017"/>
            <abstract>
              <t>This Informational RFC describes Data Center TCP (DCTCP): a TCP congestion control scheme for data-center traffic. DCTCP extends the Explicit Congestion Notification (ECN) processing to estimate the fraction of bytes that encounter congestion rather than simply detecting that some congestion has occurred. DCTCP then scales the TCP congestion window based on this estimate. This method achieves high-burst tolerance, low latency, and high throughput with shallow- buffered switches. This memo also discusses deployment issues related to the coexistence of DCTCP and conventional TCP, discusses the lack of a negotiating mechanism between sender and receiver, and presents some possible mitigations. This memo documents DCTCP as currently implemented by several major operating systems. DCTCP, as described in this specification, is applicable to deployments in controlled environments like data centers, but it must not be deployed over the public Internet without additional measures.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8257"/>
          <seriesInfo name="DOI" value="10.17487/RFC8257"/>
        </reference>
        <reference anchor="I-D.draft-miao-ccwg-hpcc">
          <front>
            <title>HPCC++: Enhanced High Precision Congestion Control</title>
            <author fullname="Rui Miao" initials="R." surname="Miao">
              <organization>Meta</organization>
            </author>
            <author fullname="Surendra Anubolu" initials="S." surname="Anubolu">
              <organization>Broadcom, Inc.</organization>
            </author>
            <author fullname="Rong Pan" initials="R." surname="Pan">
              <organization>AMD</organization>
            </author>
            <author fullname="Jeongkeun Lee" initials="J." surname="Lee">
              <organization>Google</organization>
            </author>
            <author fullname="Barak Gafni" initials="B." surname="Gafni">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Jeff Tantsura" initials="J." surname="Tantsura">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Allister Alemania" initials="A." surname="Alemania">
              <organization>Intel</organization>
            </author>
            <author fullname="Yuval Shpigelman" initials="Y." surname="Shpigelman">
              <organization>NVIDIA</organization>
            </author>
            <date day="29" month="February" year="2024"/>
            <abstract>
              <t>Congestion control (CC) is the key to achieving ultra-low latency, high bandwidth and network stability in high-speed networks. However, the existing high-speed CC schemes have inherent limitations for reaching these goals.</t>
              <t>In this document, we describe HPCC++ (High Precision Congestion Control), a new high-speed CC mechanism which achieves the three goals simultaneously. HPCC++ leverages inband telemetry to obtain precise link load information and controls traffic precisely. By addressing challenges such as delayed signaling during congestion and overreaction to the congestion signaling using inband and granular telemetry, HPCC++ can quickly converge to utilize all the available bandwidth while avoiding congestion, and can maintain near-zero in-network queues for ultra-low latency. HPCC++ is also fair and easy to deploy in hardware, implementable with commodity NICs and switches.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-miao-ccwg-hpcc-02"/>
        </reference>
        <reference anchor="I-D.draft-shi-ippm-advanced-ecn">
          <front>
            <title>Advanced Explicit Congestion Notification</title>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei</organization>
            </author>
            <author fullname="Tianran Zhou" initials="T." surname="Zhou">
              <organization>Huawei</organization>
            </author>
            <author fullname="Zhenqiang Li" initials="Z." surname="Li">
              <organization>China Mobile</organization>
            </author>
            <date day="11" month="December" year="2023"/>
            <abstract>
              <t>This document proposes an Advanced Explicit Congestion Notification mechanism that enables the host to obtain the congestion information at the bottleneck. The sender sets the congestion information collection command in the packet header, indicating that the network device should update the congestion information field per hop. The receiver carries the updated congestion information back to the sender in the ACK. The sender then leverages the rich congestion information to do congestion control.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-shi-ippm-advanced-ecn-00"/>
        </reference>
        <reference anchor="I-D.draft-li-apn-header">
          <front>
            <title>Application-aware Networking (APN) Header</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <date day="12" month="April" year="2023"/>
            <abstract>
              <t>This document defines the application-aware networking (APN) header which can be used in a variety of data planes.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-header-04"/>
        </reference>
      </references>
    </references>
    <?line 230?>

<section numbered="false" anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
    <section numbered="false" anchor="contributors">
      <name>Contributors</name>
    </section>
  </back>

</rfc>
