<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.1 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wh-rtgwg-application-aware-dc-network-00" category="std" consensus="true" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.18.1 -->
  <front>
    <title abbrev="APDN">Application-aware Data Center Network (APDN) Use Cases and Requirements</title>
    <seriesInfo name="Internet-Draft" value="draft-wh-rtgwg-application-aware-dc-network-00"/>
    <author initials="H." surname="Wang" fullname="Haibo Wang">
      <organization>Huawei</organization>
      <address>
        <email>rainsword.wang@huawei.com</email>
      </address>
    </author>
    <author initials="H." surname="Huang" fullname="Hongyi  Huang">
      <organization>Huawei</organization>
      <address>
        <email>hongyi.huang@huawei.com</email>
      </address>
    </author>
    <date year="2023" month="October" day="23"/>
    <area>General</area>
    <workgroup>Network Working Group</workgroup>
    <abstract>
      <?line 55?>

<t>Deploying large-scale AI services in data centers poses new challenges to traditional technologies such as load balancing and congestion control. In addition, emerging network technologies such as in-network computing are gradually being accepted and used in AI data centers. These network-assisted application acceleration technologies require that cross-layer interaction information can be flexibly exchanged between end-hosts and network nodes.</t>
      <t>APDN (Application-aware Data Center Network) adopts the APN framework for the application side to provide more application-aware information to the data center network, enabling the fast evolution of network-application co-design technologies. This document elaborates on use cases of APDN and provides the corresponding requirements.</t>
    </abstract>
  </front>
  <middle>
    <?line 62?>

<section anchor="intro">
      <name>Introduction</name>
<t>Distributed training of large AI models has gradually become an important workload in large-scale data centers since the emergence of AI systems such as AlphaGo and ChatGPT.
In order to improve the efficiency of large model training, large numbers of computing units (for example, thousands of GPUs running simultaneously) perform computation in parallel to reduce the JCT (job completion time). The concurrent computing nodes require periodic and bandwidth-intensive communications.</t>
<t>The new multi-party communication patterns and characteristics between computing units place higher requirements on the throughput, load balancing, and congestion handling capabilities of the entire data center network.
Traditional data center technology usually regards the network purely as the data transmission carrier for upper-layer applications, with the network providing basic connectivity services.
However, in large AI model training, network-assisted technology (e.g., offloading partial computation into the network) is being introduced to improve the efficiency of AI jobs through joint optimization of network communication and computing applications.
In most existing network-assisted cases, network operators customize and implement private protocols within a very limited scope, and cannot achieve general interoperability.
However, emerging data center network technologies need to serve different transports and applications, as the scale of AI data centers continues to increase and there is a trend toward providing cloud services for different AI jobs. The construction of large-scale data centers needs to consider not only general interoperability between devices but also interoperability between network devices and end-host services.</t>
<t>This document illustrates use cases that require application-aware information to be exchanged between network nodes and applications. Current ways of conveying such information are limited by the extensibility of packet headers: only coarse-grained information can be transmitted between the network and the host through a limited space (for example, the one-bit ECN mark [RFC3168] at the IP layer).</t>
      <t>The Application-aware Networking (APN) framework <xref target="I-D.li-apn-framework"/> defines that application-aware information (i.e., the APN attribute), including an APN identification (ID) and/or APN parameters (e.g., network performance requirements), is encapsulated at network edge devices and carried in packets traversing an APN domain in order to facilitate service provisioning and perform fine-granularity traffic steering and network resource adjustment. The APN framework for the application side <xref target="I-D.li-rtgwg-apn-app-side-framework"/> defines the extension of the APN framework to the application side. In this extension, the APN resources of an APN domain are allocated to applications, which compose and encapsulate the APN attribute in packets.</t>
      <t>This document explores the APN framework for the application side to provide richer interactive information between hosts and networks within the data center. It provides several use cases and proposes the corresponding requirements for the APplication-aware Data center Network (APDN).</t>
      <section anchor="terminology">
        <name>Terminology</name>
        <t>APDN: APplication-aware Data center Network</t>
        <t>SQN: SeQuence Number</t>
        <t>TOR: Top Of Rack switch</t>
        <t>PFC: Priority-based Flow Control</t>
        <t>NIC: Network Interface Card</t>
        <t>ECMP: Equal-Cost Multi-Path routing</t>
        <t>AI: Artificial Intelligence</t>
        <t>JCT: Job Completion Time</t>
        <t>PS: Parameter Server</t>
        <t>INC: In-Network Computing</t>
        <t>APN: APplication-aware Network</t>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
    <section anchor="use-case-and-requirements-for-application-aware-date-center-network">
      <name>Use Cases and Requirements for the Application-aware Data Center Network</name>
      <section anchor="fine-grained-packet-scheduling-for-load-balancing">
        <name>Fine-grained packet scheduling for load balancing</name>
        <t>Traditional data centers adopt the per-flow ECMP method to balance traffic across multiple paths. In traditional data centers focused on cloud computing, due to the diversity of services and random access, the number of data flows is large, but most flows are typically small and short-lived. Per-flow ECMP can therefore distribute traffic nearly equally over multiple paths.</t>
        <t>In contrast, the communication pattern during large AI model training is different.
The traffic demands larger bandwidth than ever: a single data flow between machines can often saturate the upstream bandwidth of the entire server's egress NIC (for example, the throughput of a single data flow can reach nearly X*100 Gb/s).
When per-flow ECMP (e.g., hash-based or round-robin ECMP) is applied, it is common for concurrent elephant flows to be distributed to a single path. For example, two concurrent 100 Gb/s flows may be placed on the same path, competing for its available 100 Gb/s of bandwidth. In such cases, traffic congestion is severe and greatly degrades the flow completion time of AI jobs.</t>
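        <t>The collision behavior of per-flow ECMP with a few elephant flows can be illustrated with a small sketch (hypothetical five-tuples and an assumed 8-path fabric; real switches use vendor-specific hash functions):</t>
        <sourcecode type="python"><![CDATA[
import hashlib

NUM_PATHS = 8  # assumed number of equal-cost uplinks

def ecmp_path(five_tuple, num_paths=NUM_PATHS):
    """Per-flow ECMP: hash the five-tuple once, so every packet
    of the flow follows the same path."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return digest[0] % num_paths

# Sixteen hypothetical elephant flows on an 8-path fabric: by the
# pigeonhole principle at least one path carries two or more flows,
# which then compete for that path's bandwidth.
flows = [("10.0.0.1", "10.0.1.1", 6, 49152 + i, 4791)
         for i in range(16)]
paths = [ecmp_path(f) for f in flows]
loads = {p: paths.count(p) for p in set(paths)}
]]></sourcecode>
        <t>Even with fewer flows than paths, hash collisions occur with noticeable probability (the birthday effect), which is why two concurrent 100 Gb/s flows can land on the same 100 Gb/s path.</t>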
        <t>Therefore, it is necessary to implement fine-grained per-packet ECMP: all the packets of the same flow are sprayed over multiple paths to achieve balance and avoid congestion. Due to the differences in delay (propagation, switching) among paths, packets of the same flow are likely to arrive at the end-host out of order, degrading the performance of the upper-layer transport and application.
To this end, a feasible method is to reorder the disordered packets at the egress TOR (top-of-rack switch) when per-packet ECMP is applied.
Assuming the scope of multipath transmission extends from the ingress TOR to the egress TOR, the principle of reordering is that, for each TOR-TOR pair, the order in which packets leave the last TOR is consistent with the order in which they arrived at the first TOR.</t>
        <t>To realize packet reordering at the egress TOR, the order in which packets arrive at the ingress TOR must be clearly indicated.
Looking at existing protocols, sequence number (SQN) information is not directly indicated at the Ethernet or IP layer.</t>
        <ul spacing="normal">
          <li>
            <t>In current implementations, the per-flow/application SQN is generally encapsulated in the transport (e.g., TCP, QUIC, RoCEv2) or application layer. If packet reordering depends on that SQN, the network devices <bcp14>MUST</bcp14> be able to parse a large number of transport/application protocols.</t>
          </li>
          <li>
            <t>The SQN in the upper-layer protocol is allocated per transport/application-level flow. That is, the sequence number space and initial value may differ between flows, so the SQN cannot directly express the order in which packets arrive at the ingress TOR. Although it is possible to assign a reordering queue to each flow on the egress TOR and reorder packets using the upper-layer SQN, the hardware resource consumption cannot be overlooked.</t>
          </li>
          <li>
            <t>If the network device directly overwrites the upper-layer SQN with a TOR-TOR pairwise SQN, end-to-end transmission reliability will no longer work.</t>
          </li>
        </ul>
        <t>Therefore, specific order information needs to be transmitted from the first device to the last device with reordering functionality in a multipath forwarding domain.</t>
        <t>The APN framework is explored to carry this order information, which, in this case, records the sequence number of the packets arriving at the ingress TOR (for example, each TOR-TOR pair has an independent, incremental SQN); the egress TOR then reorders the packets according to that information.</t>
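        <t>The reordering principle above can be sketched as follows (a minimal model assuming a per-TOR-pair incremental SQN carried in the APN attribute; buffer limits and loss/timeout handling are omitted):</t>
        <sourcecode type="python"><![CDATA[
class ReorderQueue:
    """Egress-TOR reordering state for one (ingress TOR, egress
    TOR) pair.  Packets carry the SQN assigned at the ingress TOR
    and are released strictly in SQN order, restoring the arrival
    order at the ingress TOR."""

    def __init__(self):
        self.next_sqn = 0  # next SQN expected in order
        self.buffer = {}   # out-of-order packets keyed by SQN

    def receive(self, sqn, packet):
        """Accept a packet from any path; return the packets that
        can now be delivered in order."""
        self.buffer[sqn] = packet
        released = []
        while self.next_sqn in self.buffer:
            released.append(self.buffer.pop(self.next_sqn))
            self.next_sqn += 1
        return released

# Packets 0..4 of one TOR-TOR pair arrive out of order after being
# sprayed over paths with different delays.
q = ReorderQueue()
delivered = []
for sqn, pkt in [(1, "p1"), (0, "p0"), (3, "p3"),
                 (2, "p2"), (4, "p4")]:
    delivered += q.receive(sqn, pkt)
# delivered is now ["p0", "p1", "p2", "p3", "p4"]
]]></sourcecode>
        <t>Because the SQN is assigned per TOR-TOR pair rather than per flow, the egress TOR needs only one such queue per peer TOR, which motivates [REQ1-1] through [REQ1-3] below.</t>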
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ1-1] APN <bcp14>SHOULD</bcp14> encapsulate each packet with an SQN, in addition to the APN ID, for reordering. The ingress TOR <bcp14>SHOULD</bcp14> assign and record the SQN at a certain granularity in each packet according to its arrival order. The granularity of SQN assignment can be TOR-TOR, port-port, or queue-queue.</t>
          </li>
          <li>
            <t>[REQ1-2] The SQN in APN <bcp14>MUST NOT</bcp14> be modified inside the multi-pathing domain and can be cleared from the APN attribute at the egress device.</t>
          </li>
          <li>
            <t>[REQ1-3] APN <bcp14>SHOULD</bcp14> be able to carry the necessary queue information (i.e., the sorting queue ID) usable for the fine-grained reordering process. The queue ID <bcp14>SHOULD</bcp14> be at the same granularity as the SQN assignment.</t>
          </li>
        </ul>
      </section>
      <section anchor="inc-uc">
        <name>In-network computing for distributed machine learning training</name>
        <t>Distributed training of machine learning commonly applies the AllReduce communication mode <xref target="mpi-doc"/> for cross-accelerator data transfer in data-parallel and model-parallel scenarios, where an application is executed in parallel on multiple processors.<br/>
The exchange of intermediate results (i.e., gradient data in machine learning) of per-processor training occupies the majority of the communication process.</t>
        <t>Under the Parameter Server (PS) architecture <xref target="atp"/> (a centralized parameter server is responsible for collecting gradient data from multiple clients, aggregating it, and sending the aggregated result back to each client), when multiple clients send a large amount of gradient data to the same server simultaneously, incast (many-to-one) congestion is prone to occur at the server.</t>
        <t>In-network computing (INC) offloads the processing behavior of the server to the switch.
When an on-path network device with both high switching capacity and line-rate computing capability (for simple arithmetic operations) acts as the parameter server, replacing the traditional end-host server for gradient aggregation (the "addition" operation), the distributed AI training application can complete gradient aggregation on the way. On one hand, this merges multiple data streams into a single stream within the network, eliminating incast congestion at the server.
On the other hand, distributed computing applications also benefit from INC because on-switch computation (e.g., in an ASIC) is faster than computation on servers (e.g., on a CPU).</t>
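        <t>The aggregation step offloaded to the switch can be sketched as follows (a toy model; the class name and the assumption that the worker count is known in advance are illustrative, and reliability, quantization, and multi-tenant isolation are out of scope):</t>
        <sourcecode type="python"><![CDATA[
class INCAggregator:
    """Toy model of on-switch gradient aggregation for one INC
    task.  The switch accumulates each worker's gradient segment
    element-wise and emits the sum once all expected workers have
    contributed, turning an N-to-1 incast into a single result
    stream."""

    def __init__(self, num_workers):
        self.num_workers = num_workers
        self.acc = None    # running element-wise sum
        self.seen = set()  # workers that have contributed

    def on_packet(self, worker_id, gradients):
        if worker_id in self.seen:
            return None  # duplicate; real designs need reliability
        self.seen.add(worker_id)
        if self.acc is None:
            self.acc = list(gradients)
        else:
            self.acc = [a + g for a, g in zip(self.acc, gradients)]
        if len(self.seen) == self.num_workers:
            return self.acc  # result, multicast back to workers
        return None

agg = INCAggregator(num_workers=3)
partial = agg.on_packet(0, [1, 2, 3])       # None: waiting
partial = agg.on_packet(1, [10, 20, 30])    # None: waiting
result = agg.on_packet(2, [100, 200, 300])  # [111, 222, 333]
]]></sourcecode>
        <t>The requirements below capture what such a device needs from APN: a task identifier to select the aggregation state, the data segments and the expected operation, and a way to record the computation status in packets.</t>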
        <t><xref target="I-D.draft-lou-rtgwg-sinc"/> argues that, to implement in-network computing, network devices need to be aware of the computing tasks required by applications and correctly parse the corresponding data units. For multi-source computing, synchronization signals of the different data source streams need to be explicitly indicated as well.</t>
        <t>Current implementations (e.g., ATP <xref target="atp"/> and NetReduce <xref target="netreduce"/>) require the switches to parse either an upper-layer protocol or a private protocol invisible to common devices, because there are still no general transport or application protocols for INC.
To support various INC applications, the switch <bcp14>MUST</bcp14> adapt to all kinds of transport/application protocols.<br/>
Furthermore, end users may simply encrypt the whole payload for security, even though they are willing to expose some non-sensitive information to benefit from accelerated INC operations. In such cases, the switch is unable to fetch the information necessary for INC operations without decrypting the whole payload.
The current state of protocols makes it difficult for applications and INC operations to interoperate.</t>
        <t>Fortunately, APN is able to transmit information about the requested INC operations as well as the corresponding data segments, with which applications can offload some analysis and computation to the network.</t>
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ2-1] APN <bcp14>MUST</bcp14> carry an identifier to distinguish different INC tasks.</t>
          </li>
          <li>
            <t>[REQ2-2] APN <bcp14>MUST</bcp14> support carrying application data of various formats and lengths (such as gradients in this use case) on which INC is to be applied, together with the expected operations.</t>
          </li>
          <li>
            <t>[REQ2-3] In order to improve the efficiency of INC, APN <bcp14>SHOULD</bcp14> be able to carry other application-aware information that can assist computations, while ensuring that the reliability of end-to-end transport is not compromised.</t>
          </li>
          <li>
            <t>[REQ2-4] APN <bcp14>MUST</bcp14> be able to carry complete INC results and record the computation status in the data packets.</t>
          </li>
        </ul>
      </section>
      <section anchor="refined-congestion-control-that-requires-feedback-of-accurate-congestion-information">
        <name>Refined congestion control that requires feedback of accurate congestion information</name>
        <t>The data center includes at least the following congestion scenarios:</t>
        <ul spacing="normal">
          <li>
            <t>Multi-accelerator collaborative AI model training commonly adopts the AllReduce and All2All communication modes (<xref target="inc-uc"/>). When multiple clients send a large amount of gradient data to a server at the same time, incast congestion is likely to occur at the server side.</t>
          </li>
          <li>
            <t>Different flows may adopt different load-balancing methods and strategies, which may overload individual links.</t>
          </li>
          <li>
            <t>Due to random access to services in the data center, bursts of traffic still occur that can increase queue lengths and incur congestion.</t>
          </li>
        </ul>
        <t>The industry has proposed different types of congestion control algorithms to alleviate traffic congestion on the paths of the data center network.
Among them, ECN-based congestion control algorithms such as DCTCP <xref target="RFC8257"/> and DCQCN <xref target="dcqcn"/> are commonly used in data centers; they use ECN to mark congestion according to the occupancy of the switch buffer.</t>
        <t>However, these methods can only use a 1-bit mark in the packet to indicate congestion (i.e., that the queue size has reached a threshold) and cannot carry richer in-situ measurement information due to the limited header space.
Other proposals, for example HPCC++ <xref target="I-D.draft-miao-ccwg-hpcc"/>, collect congestion information along the path hop by hop through inband telemetry, continually appending the information of interest to the data packets. However, this greatly increases the length of data packets as they traverse hops and consumes more bandwidth.</t>
        <t>A trade-off method such as AECN <xref target="I-D.draft-shi-ccwg-advanced-ecn"/> can be used to collect the most important information representing the congestion along the path. Meanwhile, AECN-like methods apply hop-by-hop calculation to avoid carrying redundant information. For example, the queueing delay and the number of congested hops can be accumulated as packets traverse the path.<br/>
In this use case, the end-host can specify the scope of the information it wants to collect, and the network devices record/update the corresponding information hop by hop in the data packet. The collected information might be echoed back to the sender via the transport protocol. APN could serve such interaction between hosts and switches to realize customized information collection.</t>
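        <t>An AECN-style cumulative update can be sketched as follows (hypothetical field names and threshold; the actual encoding is defined by the referenced draft):</t>
        <sourcecode type="python"><![CDATA[
def update_congestion_info(info, hop_queue_delay_us,
                           delay_threshold_us=10):
    """Per-hop update: instead of appending per-hop telemetry (as
    inband telemetry does), each node folds its local measurement
    into fixed-size cumulative fields."""
    info["cumulative_queue_delay_us"] += hop_queue_delay_us
    info["max_queue_delay_us"] = max(info["max_queue_delay_us"],
                                     hop_queue_delay_us)
    if hop_queue_delay_us > delay_threshold_us:
        info["congested_hops"] += 1
    return info

# A packet crossing three hops with queueing delays of 2, 25, and
# 4 microseconds:
info = {"cumulative_queue_delay_us": 0,
        "max_queue_delay_us": 0,
        "congested_hops": 0}
for delay_us in (2, 25, 4):
    info = update_congestion_info(info, delay_us)
# info now records 31 us cumulative delay, a 25 us maximum, and
# one congested hop, all in constant header space.
]]></sourcecode>
        <t>The packet overhead stays constant regardless of path length, which is the trade-off such methods make relative to per-hop inband telemetry.</t>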
        <t>Requirements:</t>
        <ul spacing="normal">
          <li>
            <t>[REQ3-1] The APN framework <bcp14>MUST</bcp14> allow the data sender to express which measurements it wants to have collected.</t>
          </li>
          <li>
            <t>[REQ3-2] APN <bcp14>MUST</bcp14> allow network nodes to record/update the necessary measurement results, if the nodes decide to do so. The measurements could be the queue length of ports, the monitored rate of links, the number of PFC frames, the probed RTT and its variation, and so on. APN <bcp14>MAY</bcp14> record the collector of each measurement so that information consumers can identify possible congestion points.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="encapsulation">
      <name>Encapsulation</name>
      <t>The encapsulation of the application-aware information proposed by the APDN use cases in the APN Header <xref target="I-D.draft-li-apn-header"/> will be defined in a future version of this document.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="mpi-doc" target="https://www.mpi-forum.org/docs/mpi-4.1">
          <front>
            <title>Message-Passing Interface Standard</title>
            <author>
              <organization/>
            </author>
            <date year="2023" month="August"/>
          </front>
        </reference>
        <reference anchor="dcqcn" target="https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p523.pdf">
          <front>
            <title>Congestion Control for Large-Scale RDMA Deployments</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="netreduce" target="https://arxiv.org/abs/2009.09736">
          <front>
            <title>NetReduce - RDMA-Compatible In-Network Reduction for Distributed DNN Training Acceleration</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="atp" target="https://www.usenix.org/conference/nsdi21/presentation/lao">
          <front>
            <title>ATP - In-network Aggregation for Multi-tenant Learning</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="I-D.li-apn-framework">
          <front>
            <title>Application-aware Networking (APN) Framework</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Daniel Voyer" initials="D." surname="Voyer">
              <organization>Bell Canada</organization>
            </author>
            <author fullname="Cong Li" initials="C." surname="Li">
              <organization>China Telecom</organization>
            </author>
            <author fullname="Peng Liu" initials="P." surname="Liu">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Chang Cao" initials="C." surname="Cao">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon Inc.</organization>
            </author>
            <date day="3" month="April" year="2023"/>
            <abstract>
              <t>   A multitude of applications are carried over the network, which have
   varying needs for network bandwidth, latency, jitter, and packet
   loss, etc.  Some new emerging applications have very demanding
   performance requirements.  However, in current networks, the network
   and applications are decoupled, that is, the network is not aware of
   the applications' requirements in a fine granularity.  Therefore, it
   is difficult to provide truly fine-granularity traffic operations for
   the applications and guarantee their SLA requirements.

   This document proposes a new framework, named Application-aware
   Networking (APN), where application-aware information (i.e.  APN
   attribute) including APN identification (ID) and/or APN parameters
   (e.g.  network performance requirements) is encapsulated at network
   edge devices and carried in packets traversing an APN domain in order
   to facilitate service provisioning, perform fine-granularity traffic
   steering and network resource adjustment.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-framework-07"/>
        </reference>
        <reference anchor="I-D.li-rtgwg-apn-app-side-framework">
          <front>
            <title>Extension of Application-aware Networking (APN) Framework for Application Side</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="22" month="October" year="2023"/>
            <abstract>
              <t>   The Application-aware Networking (APN) framework defines that
   application-aware information (i.e.  APN attribute) including APN
   identification (ID) and/or APN parameters (e.g. network performance
   requirements) is encapsulated at network edge devices and carried in
   packets traversing an APN domain in order to facilitate service
   provisioning, perform fine-granularity traffic steering and network
   resource adjustment.  This document defines the extension of the APN
   framework for the application side.  In this extension, the APN
   resources of an APN domain is allocated to applications which compose
   and encapsulate the APN attribute in packets.  When the network
   devices in the APN domain receive the packets carrying APN attribute,
   they can directly provide fine-granular traffic operations according
   to these APN attributes in the packets.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-rtgwg-apn-app-side-framework-00"/>
        </reference>
        <reference anchor="I-D.draft-lou-rtgwg-sinc">
          <front>
            <title>Signaling In-Network Computing operations (SINC)</title>
            <author fullname="Zhe Lou" initials="Z." surname="Lou">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luigi Iannone" initials="L." surname="Iannone">
              <organization>Huawei Technologies France S.A.S.U.</organization>
            </author>
            <author fullname="Yizhou Li" initials="Y." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zhangcuimin" initials="" surname="Zhangcuimin">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <date day="15" month="September" year="2023"/>
            <abstract>
              <t>   This memo introduces "Signaling In-Network Computing operations"
   (SINC), a mechanism to enable signaling in-network computing
   operations on data packets in specific scenarios like NetReduce,
   NetDistributedLock, NetSequencer, etc.  In particular, this solution
   allows to flexibly communicate computational parameters, to be used
   in conjunction with the payload, to in-network SINC-enabled devices
   in order to perform computing operations.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lou-rtgwg-sinc-01"/>
        </reference>
        <reference anchor="RFC8257">
          <front>
            <title>Data Center TCP (DCTCP): TCP Congestion Control for Data Centers</title>
            <author fullname="S. Bensley" initials="S." surname="Bensley"/>
            <author fullname="D. Thaler" initials="D." surname="Thaler"/>
            <author fullname="P. Balasubramanian" initials="P." surname="Balasubramanian"/>
            <author fullname="L. Eggert" initials="L." surname="Eggert"/>
            <author fullname="G. Judd" initials="G." surname="Judd"/>
            <date month="October" year="2017"/>
            <abstract>
              <t>This Informational RFC describes Data Center TCP (DCTCP): a TCP congestion control scheme for data-center traffic. DCTCP extends the Explicit Congestion Notification (ECN) processing to estimate the fraction of bytes that encounter congestion rather than simply detecting that some congestion has occurred. DCTCP then scales the TCP congestion window based on this estimate. This method achieves high-burst tolerance, low latency, and high throughput with shallow- buffered switches. This memo also discusses deployment issues related to the coexistence of DCTCP and conventional TCP, discusses the lack of a negotiating mechanism between sender and receiver, and presents some possible mitigations. This memo documents DCTCP as currently implemented by several major operating systems. DCTCP, as described in this specification, is applicable to deployments in controlled environments like data centers, but it must not be deployed over the public Internet without additional measures.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8257"/>
          <seriesInfo name="DOI" value="10.17487/RFC8257"/>
        </reference>
        <reference anchor="I-D.draft-miao-ccwg-hpcc">
          <front>
            <title>HPCC++: Enhanced High Precision Congestion Control</title>
            <author fullname="Rui Miao" initials="R." surname="Miao">
              <organization>Alibaba Group</organization>
            </author>
            <author fullname="Surendra Anubolu" initials="S." surname="Anubolu">
              <organization>Broadcom, Inc.</organization>
            </author>
            <author fullname="Rong Pan" initials="R." surname="Pan">
              <organization>Intel, Corp.</organization>
            </author>
            <author fullname="Jeongkeun Lee" initials="J." surname="Lee">
              <organization>Intel, Corp.</organization>
            </author>
            <author fullname="Barak Gafni" initials="B." surname="Gafni">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Yuval Shpigelman" initials="Y." surname="Shpigelman">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Jeff Tantsura" initials="J." surname="Tantsura">
              <organization>NVIDIA</organization>
            </author>
            <author fullname="Guy Caspary" initials="G." surname="Caspary">
              <organization>Cisco Systems</organization>
            </author>
            <date day="5" month="July" year="2023"/>
            <abstract>
              <t>Congestion control (CC) is the key to achieving ultra-low latency, high bandwidth and network stability in high-speed networks. However, the existing high-speed CC schemes have inherent limitations for reaching these goals.</t>
              <t>In this document, we describe HPCC++ (High Precision Congestion Control), a new high-speed CC mechanism which achieves the three goals simultaneously. HPCC++ leverages inband telemetry to obtain precise link load information and controls traffic precisely. By addressing challenges such as delayed signaling during congestion and overreaction to the congestion signaling using inband and granular telemetry, HPCC++ can quickly converge to utilize all the available bandwidth while avoiding congestion, and can maintain near-zero in-network queues for ultra-low latency. HPCC++ is also fair and easy to deploy in hardware, implementable with commodity NICs and switches.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-miao-ccwg-hpcc-00"/>
        </reference>
        <reference anchor="I-D.draft-shi-ccwg-advanced-ecn">
          <front>
            <title>Advanced Explicit Congestion Notification</title>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei</organization>
            </author>
            <author fullname="Tianran Zhou" initials="T." surname="Zhou">
              <organization>Huawei</organization>
            </author>
            <date day="10" month="July" year="2023"/>
            <abstract>
              <t>This document proposes an Advanced Explicit Congestion Notification mechanism that enables the host to obtain congestion information at the bottleneck. The sender sets a congestion information collection command in the packet header, instructing network devices to update the congestion information field per hop. The receiver carries the updated congestion information back to the sender in the ACK. The sender then leverages the rich congestion information to perform congestion control.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-shi-ccwg-advanced-ecn-00"/>
        </reference>
        <reference anchor="I-D.draft-li-apn-header">
          <front>
            <title>Application-aware Networking (APN) Header</title>
            <author fullname="Zhenbin Li" initials="Z." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuping Peng" initials="S." surname="Peng">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <date day="12" month="April" year="2023"/>
            <abstract>
              <t>This document defines the application-aware networking (APN) header which can be used in a variety of data planes.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-li-apn-header-04"/>
        </reference>
      </references>
    </references>
    <?line 236?>

<section numbered="false" anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
    <section numbered="false" anchor="contributors">
      <name>Contributors</name>
    </section>
  </back>
</rfc>
