<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-kompella-rtgwg-mlnwsched-01" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="ML NW sched">Scheduling Network Resources for Machine Learning Clusters</title>

    <author initials="K." surname="Kompella" fullname="Kireeti Kompella">
      <organization>Juniper Networks</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>kireeti.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="V. P." surname="Beeram" fullname="Vishnu Pavan Beeram">
      <organization>Juniper Networks</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>vbeeram@juniper.net</email>
      </address>
    </author>
    <author initials="A." surname="Mahale" fullname="Aditya Mahale">
      <organization>Cerebras Systems</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94085</code>
          <country>United States of America</country>
        </postal>
        <email>aditya.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="R." surname="Bhargava" fullname="Raghav Bhargava">
      <organization>Crusoe</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94085</code>
          <country>United States of America</country>
        </postal>
        <email>rbhargava@crusoe.ai</email>
      </address>
    </author>

    <date year="2025"/>

    <area>Routing</area>
    <workgroup>RTG WG</workgroup>
    <keyword>multipath, bandwidth, scheduling, ML clusters</keyword>

    <abstract>



<t>Large Language Models (LLMs) are pushing the boundaries of technology. The scale they have reached vastly exceeds the capacity of any single compute unit (XPU); this requires a distributed approach in which multiple XPUs are connected via a "backend" network, usually within a single data center. We are approaching the point where the scale exceeds even that of a single data center, requiring multiple such data centers connected via a "data center interconnect" (DCI) network. Training and inferencing are expensive, critical operations, so they are typically scheduled: the (compute) resources they need are carefully estimated, allocated and deployed so that these resources are used efficiently. However, while the compute investment in these LLM processing clusters dwarfs that in networks, it is becoming increasingly clear that the latter can greatly impact the former. This has been the focus of recent events, including the FANTEL Birds of a Feather meeting at IETF 123, @Scale: Networking 2025 and Open Compute Project 2025.</t>

<t>This memo proposes that the same care that is taken regarding allocation of compute resources to jobs be taken with networking resources: that they are estimated, allocated and deployed alongside compute resources; that they have contingency plans in case of network glitches; and that a holistic view be taken in order to optimize job completion times of training and inferencing jobs.</t>



    </abstract>



  </front>

  <middle>



<section anchor="intro"><name>Introduction</name>

<t>Large Language Models (LLMs) are pushing the industry to ever greater scale, both in training and in inference. This leads to more critical use of backend networks and higher stakes in producing timely results. A major lesson from recent work is that the network cannot be taken for granted: a dropped or delayed packet can delay, stall or even abort a Machine Learning (ML) job, requiring more effort in checkpointing and managing job restarts, in dealing with network congestion, and in dealing with network failures. These problems are exacerbated in multi-tenant clusters, where multiple jobs run concurrently and job isolation becomes a key requirement. The FANTEL Birds of a Feather meeting (BoF) illustrated well the role the network plays in ML jobs, the potential for network events to disrupt jobs, and some early thoughts on how to handle these events. While the BoF was very successful in exposing these issues, we believe that adding a proactive approach would be beneficial; this can go hand in hand with the reactive approach of dealing effectively with network events.</t>

<t>This memo proposes that network resources be reserved/scheduled in coordination with the ML job scheduler, which is responsible for reserving compute resources (Central Processing Units [CPUs], Graphics Processing Units [GPUs], XPUs, memory, storage, ...). This is especially useful when multiple jobs are run in each cluster, for example GPUaaS (GPU as a Service), several inference jobs running simultaneously, or multi-tenancy. Reserving network resources reduces the probability of some disruptive network events and improves job isolation. This is the network analog of reserving compute resources and ideally can be done at the same time. Essentially, when an ML job is scheduled, the "size" of the job (type of model, complexity of model, number of parameters, etc.) determines how many CPU/GPU/XPU cores are needed and how much memory and storage; typically, the same parameters determine the amount of network resources needed during the various collective (i.e., inter-XPU) communication stages (Broadcast, AllReduce, Reduce, etc.). Job placement (i.e., which XPUs are allocated to this job) also determines the source(s) and destination(s) of the communication. If, at the time the job is scheduled, network resources are also reserved (and potentially, backup resources are put in place), the probability that network events can disrupt the job is reduced (although not eliminated).</t>

<t>One can also set up the communication pathway and reserve resources when a collective communications API call (<xref target="MPI"/> or <xref target="NCCL"/> or the like) is made; this is especially relevant for long-running jobs where the time between communication phases can be long, and the phases vary from (say) Broadcast to AllReduce to quiescent. Finally, if backup pathways for a given communication are set up, traffic can quickly be protected when a failure happens; in parallel, the sources can be notified of the failure so they can reduce the traffic they send, build new end-to-end pathways or otherwise handle the failure.</t>

<t>The previous paragraph suggests a proactive methodology. Fast congestion notification and signaling constitute a reactive methodology. These fit well together. One can couple network resource scheduling with fast event detection, signaling and mitigation for an overall much-reduced impact of network events on job progress.</t>

<section anchor="terminology"><name>Terminology</name>

<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.
</t>

<section anchor="definition-of-commonly-used-terms"><name>Definition of Commonly Used Terms</name>

<t>This section provides definitions for terms and abbreviations that are used in this memo.</t>

<dl>
  <dt>XPU:</dt>
  <dd>
    <t>one of several types of processing units: central processing unit (CPU), graphics processing unit (GPU), language processing unit (LPU), tensor processing unit (TPU) and the like. They fall under the category of "compute resources".</t>
  </dd>
  <dt>TE:</dt>
  <dd>
    <t>traffic engineering, a technology that allows the specification of constraints (such as "admin groups" or colors) to guide the layout of paths (tunnels) through a network.</t>
  </dd>
  <dt>phop:</dt>
  <dd>
    <t>previous hop (of N): a node and link that feeds into junction N.</t>
  </dd>
  <dt>nhop:</dt>
  <dd>
    <t>next hop (of N): a node that is fed by N over a specified link.</t>
  </dd>
  <dt>MPTE:</dt>
  <dd>
    <t>multipath TE, a technology that combines all the features of TE while offering multipathing with weighted load balancing for unicast traffic</t>
  </dd>
  <dt>MCTE:</dt>
  <dd>
    <t>multicast TE, a technology that combines all the features of TE with load balancing for multicast traffic</t>
  </dd>
  <dt>ML:</dt>
  <dd>
    <t>machine learning, a powerful technique to learn from data without explicit programming, used to solve problems of AI.</t>
  </dd>
  <dt>junction:</dt>
  <dd>
    <t>a node in a DAG, with 0 or more phops, and 0 or more nhops. A junction with 0 phops is an ingress; a junction with 0 nhops is an egress. Other junctions are transit. A junction may be a unicast or a multicast junction. A DAG must have 1 or more ingresses, 1 or more egresses, and 0 or more transit junctions.</t>
  </dd>
  <dt>DSF:</dt>
  <dd>
    <t>disaggregated scheduled fabric, a methodology for packet spraying in networks with multipathing.</t>
  </dd>
  <dt>DCI:</dt>
  <dd>
    <t>data center interconnect</t>
  </dd>
  <dt>DAG:</dt>
  <dd>
    <t>directed acyclic graph</t>
  </dd>
</dl>
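<t>As a non-normative illustration of the junction terminology above (the code and names are hypothetical), a junction's role follows directly from its phop and nhop counts:</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: classify junctions of a DAG by phop/nhop
# counts, per the definitions above.
def classify_junctions(edges):
    """edges: list of (upstream, downstream) pairs forming a DAG."""
    nodes = {n for e in edges for n in e}
    phops = {n: 0 for n in nodes}
    nhops = {n: 0 for n in nodes}
    for u, v in edges:
        nhops[u] += 1
        phops[v] += 1
    role = {}
    for n in nodes:
        if phops[n] == 0:
            role[n] = "ingress"    # 0 phops
        elif nhops[n] == 0:
            role[n] = "egress"     # 0 nhops
        else:
            role[n] = "transit"
    return role

# Example DAG: X1 -> L1 -> {S1, S2} -> L3 -> X6
dag = [("X1", "L1"), ("L1", "S1"), ("L1", "S2"),
       ("S1", "L3"), ("S2", "L3"), ("L3", "X6")]
roles = classify_junctions(dag)
# X1 is an ingress, X6 an egress; L1, S1, S2, L3 are transit.
```
]]></artwork></figure>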

</section>
</section>
</section>
<section anchor="problem-statement"><name>Problem Statement</name>

<t>Consider the ML cluster <xref target="mlc-1"/>:</t>

<figure title="ML Cluster 1" anchor="mlc-1"><artwork><![CDATA[
        S1         .... S2 
      / ...\.......   /    \      Note: L1 & L2 are connected to S2;
    L1..    L2      L3      L4          L3 & L4 are connected to S1.
   /  \    /  \    /  \    /  \   All links are 400G links.
  X1  X2  X3  X4  X5  X6  X7  X8
]]></artwork></figure>

<t>The bottom layer consists of XPUs X1 through X8. The next layer up consists of "leaf" switches L1 through L4. The top layer consists of "spine" switches S1 and S2. All links between layers are 400Gbps; thus there is no oversubscription in the network, provided:</t>

<t><list style="numbers" type="1">
  <t>All XPUs are well-behaved.</t>
  <t>All switches load balance fairly and perfectly.</t>
</list></t>

<t>However, "fair" load balancing is insufficient unless the load balancing is done on a per-packet (or better, per-cell) basis ("packet spraying") <xref target="DSF"/>. If load balancing is done on a per-flow basis ("flow level multipathing"), it is highly unlikely to be perfectly balanced across the next hops, in which case one next hop may see too much traffic, leading to congestion, packet delays or even packet drops. Disaggregated Scheduled Fabric (DSF) uses per-packet or per-cell load balancing, but it comes at a cost, and may not scale (and scale is a big consideration in these networks).</t>

<t>With flow level multipathing, say X1 and X2 are both sending 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. L1 does the same with X2's traffic; let's say this goes 190G to S1 and 210G to S2. The L1-S1 link will be congested, with 410G of traffic.</t>
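<t>As a non-normative sketch (using the illustrative numbers above), the following Python fragment shows why pinning whole flows to uplinks can overload one of them:</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch mirroring the example above: per-flow hashing
# pins whole flows to uplinks, so the split need not be 200G/200G.
CAPACITY = 400.0  # Gbps per uplink

# (source, Gbps, uplink-chosen-by-flow-hash) -- illustrative numbers
flows = [
    ("X1", 220.0, "S1"), ("X1", 180.0, "S2"),  # X1's 400G: 220/180
    ("X2", 190.0, "S1"), ("X2", 210.0, "S2"),  # X2's 400G: 190/210
]

load = {"S1": 0.0, "S2": 0.0}
for _src, gbps, uplink in flows:
    load[uplink] += gbps

overload = {u: g - CAPACITY for u, g in load.items() if g > CAPACITY}
# load == {"S1": 410.0, "S2": 390.0}: the L1-S1 link is 10G over.
```
]]></artwork></figure>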

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. In the worst case, each sends 400G for a total of 800G to X6, but the L3-X6 link can only transmit 400G. Thus, half the traffic will be dropped.</t>

<t>If the entire cluster (here, XPUs X1 through X8) is working on a single ML job, things are a bit simpler (but the issues remain). However, if this cluster is used for inferencing, or multi-tenant workloads, additional considerations arise. Tenant 1 (or inferencing job 1) (T1) may be using XPU X1 and part of X6; tenant 2 (or job 2) (T2) may be using XPU X3 and another part of X6.</t>

<t>If T1 and T2 simultaneously require communication to X6, there could be contention for the L3-X6 link. Again, this could lead to congestion, and hence delayed or dropped packets. But now, the issue is inter-tenant.</t>

<t>As stated in the Introduction <xref target="intro"/>, such delayed or dropped packets can have big consequences for the jobs that are running. Issues such as these are the motivation for DSF, packet spraying and fast congestion notification.</t>

<section anchor="collective-operation"><name>Collective Operation</name>

<t>Collective operations <xref target="CO"/> are used in distributed computing for the participating compute entities to exchange information. One example is the Message Passing Interface <xref target="MPI"/>; others are the NVIDIA Collective Communications Library <xref target="NCCL"/> and the ROCm Communication Collectives Library <xref target="RCCL"/>. These are used by the compute entities in a deep learning cluster to send information to each other, or as a group.</t>

<t>Collective operations include both unicast and multicast communications. Thus, in scheduling network resources, both patterns should be covered.</t>
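<t>As a rough, non-normative sizing sketch: a ring AllReduce over N ranks sends about 2*(N-1)/N times the buffer size out of each rank (reduce-scatter plus all-gather), which gives a first-order estimate of the network resources to reserve. The numbers below are purely illustrative:</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: data each participant sends in a ring
# AllReduce -- 2*(N-1)/N of the buffer size S leaves each rank.
def ring_allreduce_bytes_per_rank(n_ranks, buffer_bytes):
    return 2 * (n_ranks - 1) / n_ranks * buffer_bytes

# 8 XPUs, 10 GB of gradients per step:
sent = ring_allreduce_bytes_per_rank(8, 10e9)  # 17.5e9 bytes/rank
# At 400 Gbps (50 GB/s), that is ~0.35 s of pure communication.
step_seconds = sent / 50e9
```
]]></artwork></figure>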

</section>
<section anchor="compsched"><name>Compute Scheduling</name>

<t>In shared compute environments, such as a compute cluster or a cloud, a scheduler is commonly used to orchestrate access to compute resources. SLURM <xref target="SLURM"/> is a commonly used scheduler in Linux clusters; its documentation says "First, [SLURM] allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work." Another is KAI <xref target="KAI"/> which says "KAI Scheduler is a robust, efficient, and scalable Kubernetes scheduler that optimizes GPU resource allocation for AI and machine learning workloads." There are several other schedulers in common use.</t>

<t>A scheduler offers several features. The following are taken from SLURM:</t>

<t><list style="numbers" type="1">
  <t>Accounting</t>
  <t>Advanced reservation</t>
  <t>Gang scheduling (time sharing for parallel jobs)</t>
  <t>Backfill scheduling</t>
  <t>Topology optimized resource selection</t>
  <t>Resource limits by user or bank account</t>
  <t>Sophisticated multifactor job prioritization algorithms</t>
</list></t>

<t>KAI offers the following:</t>

<t><list style="numbers" type="1">
  <t>Batch Scheduling</t>
  <t>Bin Packing &amp; Spread Scheduling</t>
  <t>Workload Priority</t>
  <t>Hierarchical Queues</t>
  <t>Resource distribution</t>
  <t>Fairness Policies</t>
  <t>Workload Consolidation</t>
  <t>Elastic Workloads</t>
  <t>Dynamic Resource Allocation (DRA)</t>
  <t>GPU Sharing</t>
</list></t>

<t>To summarize, a compute scheduler allows effective and optimal sharing of compute resources among multiple tenants and multiple jobs, while ensuring fairness, enforcing limits and enabling accounting. Without a scheduler, multitenancy and multiple jobs would be impractical and chaotic.</t>

<t>Note that multi-tenancy is implicit in the above. There may be ways to reserve resources for a particular tenant or group of tenants without immediately allocating them, but the documentation doesn't say how.</t>

</section>
<section anchor="nwsched"><name>Network Scheduling</name>

<t>In shared network environments (which almost all networks are), a scheduler can be used to orchestrate access to network resources -- primarily bandwidth, but also highly prized links, QoS, etc.</t>

<t>The primary task of network resource scheduling is to reserve resources along a pathway (tunnel) from one or more XPUs (ingresses) to another set of XPUs (egresses). Note that the paradigm here is of uni-directional reservations; this is more general than bidirectional reservations, as the traffic requirements may not be symmetric.</t>

<t>Given that X1 wants to send 20Gbps to {X2, X3, X4}, one would create a tunnel from X1 to {X2, X3, X4} with 20Gbps capacity. Note that this traffic might be unicast (distributing different parts of a matrix to the recipients) or broadcast (distributing the same information to all). If further, one wanted to use certain links exclusively, one can color links in the network and state that this tunnel must/must not use links of a certain color. Thus, link coloring is a tool that network administrators can use to hold back links for a subset of job types. The compute analogy would be to hold back some XPUs, mark them "blue" and allow only a subset of jobs to use those XPUs.</t>
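<t>A toy, non-normative sketch of color-constrained path selection follows (real TE path computation uses CSPF and many more constraints; the names and topology are hypothetical):</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: exclude links of a given color, then find
# any remaining path (BFS) -- a toy stand-in for constrained
# path computation.
from collections import deque

def constrained_path(links, src, dst, exclude_color=None):
    """links: (a, b, color) triples; returns [src, ..., dst] or None."""
    adj = {}
    for a, b, color in links:
        if exclude_color is not None and color == exclude_color:
            continue  # constraint: tunnel must not use this color
        adj.setdefault(a, []).append(b)
    q, seen = deque([[src]]), {src}
    while q:
        path = q.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(path + [nxt])
    return None

links = [("X1", "L1", None), ("L1", "S1", "blue"),
         ("L1", "S2", None), ("S1", "L3", None),
         ("S2", "L3", None), ("L3", "X6", None)]
# Avoiding "blue" links forces the path through S2:
path = constrained_path(links, "X1", "X6", exclude_color="blue")
# path == ["X1", "L1", "S2", "L3", "X6"]
```
]]></artwork></figure>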

<t>Link coloring allows a provider to partition their network to optimally serve their customers. While links in a Clos network (as most ML clusters are) are perfectly symmetrical, once one gets into "distributed clusters" that are connected via DCI links, link coloring and other link attributes will find greater use.</t>

<t>Reserving bandwidth means that a particular job J1 (probably) won't step on another job J2's traffic. Say J1 is using a tunnel T1 with a reservation of 20G, and J2 is using a tunnel T2 with a reservation of 50G. The reservation procedure ensures any links T1 and T2 traverse in common have sufficient bandwidth for both T1 and T2 (and any other tunnels with reservations). Of course, J1 may use more than its allocated bandwidth; this can negatively impact J2. To reduce/prevent this, one can apply a policer at the ingress of J1's tunnels to ensure that J1 sends no more than its allocated share over each tunnel. This policer can drop traffic over the limit, or simply mark it as such, so that if the other jobs on a common link are not using their full quota, J1's traffic can go through.</t>
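<t>A non-normative sketch of such an ingress policer, modeled as a token bucket that either drops or marks out-of-profile traffic (all parameters are illustrative):</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: a token-bucket policer at a tunnel ingress.
# Conforming packets pass; excess is either dropped or marked so it
# can still get through when the link is otherwise idle.
class Policer:
    def __init__(self, rate_bps, burst_bits, mark_excess=True):
        self.rate = rate_bps          # allocated share, bits/sec
        self.tokens = burst_bits      # bucket fill, bits
        self.burst = burst_bits       # bucket depth, bits
        self.mark_excess = mark_excess

    def tick(self, seconds):
        # Refill tokens as time passes, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + self.rate * seconds)

    def admit(self, packet_bits):
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return "conform"
        return "mark" if self.mark_excess else "drop"

p = Policer(rate_bps=20e9, burst_bits=1e6, mark_excess=True)
verdicts = [p.admit(800 * 8) for _ in range(200)]  # no refill here
# The first 156 packets conform (1e6 // 6400); the rest are marked.
```
]]></artwork></figure>

<t>Marking (rather than dropping) lets out-of-profile traffic through when other jobs on a common link are not using their full quota, as described above.</t>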

<t>This last point is crucial for multi-tenancy. A provider who cannot provide hard (or at least soft) guarantees to their customers that they will in fact get the resources they asked (and paid) for will soon be out of business.</t>

<t>Elastic bandwidth is a very useful feature that goes along with elastic compute. If a job's requirements are: start me off with 5 XPUs, but expand that to 8 as the need arises, and shrink it back down to 5 when no longer needed, then the job's bandwidth requirements are likely to grow and shrink in tandem. Thus, in addition to making binding reservations, one must be able to adjust those reservations as needs change.</t>

<t>Finally, not all jobs (and all customers) are created equal. Priority and preemption are powerful tools in schedulers to give preference to certain jobs over others. Without these tools, a provider would be helpless if their cluster were overrun with low priority jobs. In addition, it would be nice to have a graceful way of managing preemption.</t>
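<t>A toy, non-normative sketch of priority-based preemption: to admit a higher-priority tunnel, the scheduler preempts the worst-priority (and, among those, smallest) reservations first. The selection policy shown is just one possibility:</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: free bandwidth for a new high-priority
# tunnel by preempting lower-priority reservations (0 = best).
def select_preemptions(tunnels, needed_gbps, new_priority):
    """tunnels: (name, priority, gbps) triples. Names to preempt."""
    # Only tunnels strictly worse (numerically higher) than the
    # newcomer are candidates; worst priority first, smallest first.
    candidates = sorted(
        (t for t in tunnels if t[1] > new_priority),
        key=lambda t: (-t[1], t[2]))
    chosen, freed = [], 0.0
    for name, _prio, gbps in candidates:
        if freed >= needed_gbps:
            break
        chosen.append(name)
        freed += gbps
    return chosen if freed >= needed_gbps else None

tunnels = [("T1", 3, 50.0), ("T2", 5, 20.0),
           ("T3", 5, 10.0), ("T4", 1, 80.0)]
victims = select_preemptions(tunnels, needed_gbps=25.0,
                             new_priority=2)
# Priority-5 tunnels go first: ["T3", "T2"] frees 30G; T4 is safe.
```
]]></artwork></figure>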

<section anchor="traffic-engineering"><name>Traffic Engineering</name>

<t>All the features mentioned in the last section are available today, in bandwidth-aware traffic engineering (TE).</t>

<t>TE constraints allow a user to specify constraints on the path a tunnel will take. These can include administrative groups (colors), shared risk link groups (SRLGs), TE metric, other metrics such as delay, bandwidth reservations, and many others.</t>

<t>Bandwidth reservation allows the allocation of bandwidth resources to a tunnel. Policers are a useful adjunct to enforce limits.</t>

<t>Elastic bandwidth (aka "auto-bandwidth") allows a tunnel to dynamically adjust its reservations (within limits).</t>
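<t>A non-normative sketch of auto-bandwidth behavior: the reservation tracks the measured peak, within configured bounds, and is re-signaled only when the change exceeds an adjustment threshold (all parameters are illustrative):</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: auto-bandwidth keeps a tunnel's reservation
# near its measured peak, within bounds and an adjust threshold.
def adjust_reservation(current_gbps, measured_peak_gbps,
                       min_gbps=1.0, max_gbps=400.0,
                       threshold_pct=10.0):
    target = min(max(measured_peak_gbps, min_gbps), max_gbps)
    # Re-signal only if the change is worth it (avoids churn).
    change_pct = abs(target - current_gbps) / current_gbps * 100.0
    if change_pct < threshold_pct:
        return current_gbps
    return target

res = 20.0
res = adjust_reservation(res, measured_peak_gbps=21.0)  # +5%: keep
res = adjust_reservation(res, measured_peak_gbps=35.0)  # grow to 35
res = adjust_reservation(res, measured_peak_gbps=8.0)   # shrink to 8
```
]]></artwork></figure>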

<t>Priority and preemption are implemented by all vendors. Graceful preemption is possible using "soft preemption".</t>

<t>New traffic engineering parameters such as available buffer space, available queue-pairs for communication, etc. will be introduced and discussed in a future version of this memo, as well as in companion documents.</t>

</section>
<section anchor="multipathing"><name>Multipathing</name>

<t>There is one missing piece with "regular" TE: ML clusters (and Clos networks or fat trees in general) make heavy use of multipathing, and often have multiple ingresses and egresses for their communications. Current traffic engineering techniques focus on a single path from one ingress to one egress. However, a new technique for multipath TE that allows for multiple ingresses and egresses and multiple paths between them is being developed that has relevance here <xref target="I-D.kompella-teas-mpte"/>.</t>

</section>
</section>
<section anchor="comparing-compute-and-network-scheduling-features"><name>Comparing Compute and Network Scheduling Features</name>

<t>In this section, we look at compute scheduling features, and ask whether the corresponding feature exists in network scheduling.</t>

<texttable title="Comparing SLURM and Network Scheduling">
      <ttcol align='left'>SLURM - Compute Scheduling Features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Accounting</c>
      <c>Yes</c>
      <c>Advanced reservation</c>
      <c>Yes (bandwidth calendaring)</c>
      <c>Gang scheduling</c>
      <c>Yes (primary effort is on compute)</c>
      <c>Backfill scheduling</c>
      <c>N/A</c>
      <c>Topology optimized resource selection</c>
      <c>Yes</c>
      <c>Resource limits by user or bank account</c>
      <c>Yes (via controller policy) (enforcement via policers)</c>
      <c>Sophisticated multifactor job prioritization algorithms</c>
      <c>No (maybe N/A)</c>
</texttable>

<texttable title="Comparing KAI and Network Scheduling">
      <ttcol align='left'>KAI features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Batch Scheduling</c>
      <c>Yes (via multi-ingress/multi-egress tunnels)</c>
      <c>Bin Packing &amp; Spread Scheduling</c>
      <c>Yes ("least-fill", "max-fill")</c>
      <c>Workload Priority</c>
      <c>Yes</c>
      <c>Hierarchical Queues</c>
      <c>Yes (via QoS in the data plane)</c>
      <c>Resource distribution</c>
      <c>Yes (via tunnel priority)</c>
      <c>Fairness Policies</c>
      <c>Yes</c>
      <c>Workload Consolidation</c>
      <c>N/A</c>
      <c>Elastic Workloads</c>
      <c>Yes ("auto-bandwidth")</c>
      <c>Dynamic Resource Allocation (DRA)</c>
      <c>N/A (multivendor is a given)</c>
      <c>GPU Sharing</c>
      <c>Yes (link sharing)</c>
</texttable>

<t>As can be seen, almost all features are supported; some other features are supported in network scheduling that may not have analogies in compute scheduling.</t>

</section>
<section anchor="back-to-the-problem"><name>Back to the Problem</name>

<t>Back to <xref target="mlc-1"/>.</t>

<t>With flow level multipathing, say X1 and X2 both send 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. However, L1 knows that it's only supposed to send 200G to S1 from X1, so L1 adjusts its load balancing weights ("adaptive load balancing") until the traffic sent to each of S1 and S2 is 200G. L1 does the same with X2's traffic; if all works well, L1 will send a total of 400G to each of S1 and S2.</t>
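<t>A non-normative sketch of such adaptive load balancing: the weights are nudged toward the target split each measurement round (the 220G/180G starting point mirrors the example above):</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: L1 nudges its load-balancing weights until
# each uplink carries its target share (adaptive load balancing).
def rebalance(offered_gbps, measured, target, step=0.5, rounds=50):
    """measured: uplink -> Gbps currently sent; returns new split."""
    weights = {u: g / offered_gbps for u, g in measured.items()}
    for _ in range(rounds):
        for u in weights:
            # Move each weight toward its target, a step at a time.
            error = target[u] - weights[u] * offered_gbps
            weights[u] += step * error / offered_gbps
        total = sum(weights.values())
        weights = {u: w / total for u, w in weights.items()}
    return {u: round(w * offered_gbps, 3)
            for u, w in weights.items()}

# X1's 400G initially hashed 220G/180G; target is 200G per uplink.
split = rebalance(400.0, {"S1": 220.0, "S2": 180.0},
                  {"S1": 200.0, "S2": 200.0})
# split converges to approximately 200G on each uplink.
```
]]></artwork></figure>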

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. Now, X1 has a TE tunnel to X6 with a 200G reservation; similarly for X3. So, in principle, the L3-X6 link should carry only 400G.</t>

<t>Reservations can be temporarily exceeded; that is equally true with compute reservations. Depending on the enforcement policies, an oversubscription situation should be temporary and is clearly visible (since accounting is easy), allowing more severe enforcement should it be persistent.</t>

</section>
</section>
<section anchor="proposal"><name>Proposal</name>

<t>Multipath TE (MPTE) <xref target="I-D.kompella-teas-mpte"/> has all the features of Traffic Engineering, including the above-mentioned TE constraints. However, whereas "regular" TE <xref target="RFC2702"/> considers a TE path with one ingress, one egress and a single path between them, MPTE allows multiple ingresses and egresses, and considers all paths between ingresses and egresses that meet the TE constraints. Thus, MPTE builds a directed acyclic graph (DAG) between ingresses and egresses. This allows traffic flowing over the MPTE DAG to be load balanced across these paths. Moreover, MPTE computes near-optimal load balancing factors at each node; it does not simply use an equally weighted scheme.</t>
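<t>A toy, non-normative sketch (not the MPTE algorithm itself) of computing per-junction load-balancing fractions, here made simply proportional to the bandwidth reachable through each next hop:</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch (not the actual MPTE computation): weight
# each junction's next hops by the bandwidth reachable through them.
def split_weights(dag, capacity, egresses):
    """dag: node -> next hops; capacity: (u, v) -> Gbps.
    Returns node -> {nhop: fraction}."""
    memo = {}
    def through(n):
        if n in egresses:
            return float("inf")   # egress absorbs all traffic
        if n not in memo:
            memo[n] = sum(min(capacity[(n, v)], through(v))
                          for v in dag.get(n, []))
        return memo[n]
    weights = {}
    for n, nhops in dag.items():
        caps = {v: min(capacity[(n, v)], through(v)) for v in nhops}
        total = sum(caps.values())
        weights[n] = {v: c / total for v, c in caps.items()}
    return weights

dag = {"X1": ["L1"], "L1": ["S1", "S2"],
       "S1": ["L3"], "S2": ["L3"], "L3": ["X6"]}
cap = {("X1", "L1"): 400, ("L1", "S1"): 400, ("L1", "S2"): 200,
       ("S1", "L3"): 400, ("S2", "L3"): 400, ("L3", "X6"): 400}
w = split_weights(dag, cap, egresses={"X6"})
# L1 splits 2:1 toward S1, reflecting its 400G vs 200G uplinks.
```
]]></artwork></figure>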

<t>This memo proposes the use of MPTE to compute, set up and allocate bandwidth for unicast collective communication among compute nodes in a deep learning cluster.</t>

<t>Multicast TE (MCTE) uses similar constructs as MPTE (namely, DAGs and junctions) to set up point-to-multipoint and multipoint-to-multipoint tunnels among compute nodes. MCTE also obeys TE constraints and allocates bandwidth resources. Thus, whatever type of communication is required at the various phases of a deep learning job, there is a TE construct to allocate network resources and instantiate the communication pattern.</t>

<t>Both MPTE and MCTE can preprogram "backup" paths in case of a link or node failure.</t>

<t>We believe the use of MPTE and MCTE will reduce the incidence of congestion in a deep learning cluster. Of course, congestion can happen for a number of reasons, including network failures. Thus congestion notification will be needed; however, with the state installed in the network for the TE tunnels, a node X that detects a (link or node) failure knows exactly what tunnels are affected by a given failure and which ingress nodes to notify. Furthermore, X can quickly put in place a backup path to protect against that failure until the ingresses can either reduce the traffic they send, or compute alternate end-to-end tunnels.</t>
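<t>A non-normative sketch of this failure handling: given installed tunnel state, the detecting node can map a failed link to the affected tunnels and the ingresses to notify (the data below is hypothetical):</t>

<figure><artwork><![CDATA[
```python
# Hypothetical sketch: with tunnel state installed, the node
# detecting a link failure can list the affected tunnels and the
# ingresses to notify.
def on_link_failure(failed_link, tunnel_db):
    """tunnel_db: name -> {"ingresses": [...], "links": [...]}."""
    return {name: t["ingresses"]
            for name, t in tunnel_db.items()
            if failed_link in t["links"]}

tunnels = {
    "T1": {"ingresses": ["X1"],
           "links": [("X1", "L1"), ("L1", "S1"),
                     ("S1", "L3"), ("L3", "X6")]},
    "T2": {"ingresses": ["X3"],
           "links": [("X3", "L2"), ("L2", "S2"),
                     ("S2", "L3"), ("L3", "X6")]},
}
hit = on_link_failure(("S1", "L3"), tunnels)
# Only T1 uses S1-L3, so only X1 needs notification: {"T1": ["X1"]}
```
]]></artwork></figure>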

</section>
<section anchor="conclusion"><name>Conclusion</name>

<t>As mentioned in the Introduction, to make optimal use of deep learning clusters, especially when multiple jobs are run (e.g., several inference jobs) or multi-tenancy is in play, network scheduling takes on increasing importance as a proactive measure to prevent network events such as congestion. (This works orthogonally to packet spraying.) One can add fast network event notification as a reactive measure. Together, these techniques present a more holistic approach and should allow much better utilization of ML resources.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>None, for now.</t>

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>TBD</t>

</section>


  </middle>

  <back>


<references title='References' anchor="sec-combined-references">

    <references title='Normative References' anchor="sec-normative-references">



<reference anchor="RFC2119">
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname="S. Bradner" initials="S." surname="Bradner"/>
    <date month="March" year="1997"/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="2119"/>
  <seriesInfo name="DOI" value="10.17487/RFC2119"/>
</reference>

<reference anchor="RFC8174">
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname="B. Leiba" initials="B." surname="Leiba"/>
    <date month="May" year="2017"/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="8174"/>
  <seriesInfo name="DOI" value="10.17487/RFC8174"/>
</reference>


<reference anchor="I-D.kompella-teas-mpte">
   <front>
      <title>Multipath Traffic Engineering</title>
      <author fullname="Kireeti Kompella" initials="K." surname="Kompella">
         <organization>Juniper Networks</organization>
      </author>
      <author fullname="Luay Jalil" initials="L." surname="Jalil">
         <organization>Verizon</organization>
      </author>
      <author fullname="Mazen Khaddam" initials="M." surname="Khaddam">
         <organization>Cox Communications</organization>
      </author>
      <author fullname="Andy Smith" initials="A." surname="Smith">
         <organization>Oracle Cloud Infrastructure</organization>
      </author>
      <date day="7" month="July" year="2025"/>
      <abstract>
	 <t>   Shortest path routing offers an easy-to-understand, easy-to-implement
   method of establishing loop-free connectivity in a network, but
   offers few other features.  Equal-cost multipath (ECMP), a simple
   extension, uses multiple equal-cost paths between any two points in a
   network: at any node in a path (really, Directed Acyclic Graph),
   traffic can be (typically equally) load-balanced among the next hops.
   ECMP is easy to add on to shortest path routing, and offers a few
   more features, such as resiliency and load distribution, but the
   feature set is still quite limited.

   Traffic Engineering (TE), on the other hand, offers a very rich
   toolkit for managing traffic flows and the paths they take in a
   network.  A TE network can have link attributes such as bandwidth,
   colors, risk groups and alternate metrics.  A TE path can use these
   attributes to include or avoid certain links, increase path
   diversity, manage bandwidth reservations, improve service experience,
   and offer protection paths.  However, TE typically doesn&#x27;t offer
   multipathing as the tunnels used to implement TE usually take a
   single path.

   This memo proposes multipath traffic-engineering (MPTE), combining
   the best of ECMP and TE.  The multipathing proposed here need not be
   strictly equal-cost, nor the load balancing equally weighted to each
   next hop.  Moreover, the desired destination may be reachable via
   multiple egresses.  The proposal includes a protocol for signaling
   MPTE paths using various types of tunnels, some of which are better
   suited to multipathing.

	 </t>
      </abstract>
   </front>
   <seriesInfo name="Internet-Draft" value="draft-kompella-teas-mpte-01"/>
   
</reference>




    </references>

    <references title='Informative References' anchor="sec-informative-references">

<reference anchor="CO" target="https://en.wikipedia.org/wiki/Collective_operation">
  <front>
    <title>Collective operation</title>
    <author >
      <organization></organization>
    </author>
    <date year="2025" month="November"/>
  </front>
</reference>
<reference anchor="DSF" target="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta">
  <front>
    <title>Disaggregated Scheduled Fabric</title>
    <author >
      <organization></organization>
    </author>
    <date year="2024" month="October"/>
  </front>
</reference>
<reference anchor="KAI" target="https://github.com/NVIDIA/KAI-Scheduler">
  <front>
    <title>KAI Scheduler</title>
    <author >
      <organization></organization>
    </author>
  </front>
</reference>
<reference anchor="MPI" target="https://www.mpi-forum.org/docs/mpi-5.0/mpi50-report.pdf">
  <front>
    <title>MPI: A Message-Passing Interface Standard, version 5.0</title>
    <author >
      <organization></organization>
    </author>
    <date year="2025" month="June" day="05"/>
  </front>
</reference>
<reference anchor="NCCL" target="https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/collectives.html">
  <front>
    <title>Collective Operations</title>
    <author >
      <organization></organization>
    </author>
    <date year="2020"/>
  </front>
</reference>
<reference anchor="RCCL" target="https://rocm.docs.amd.com/projects/rccl/en/latest/">
  <front>
    <title>ROCm Communication Collectives Library</title>
    <author >
      <organization></organization>
    </author>
    <date year="2025" month="October" day="31"/>
  </front>
</reference>
<reference anchor="SLURM" target="https://slurm.schedmd.com/overview.html">
  <front>
    <title>SLURM Workload Manager</title>
    <author >
      <organization></organization>
    </author>
  </front>
</reference>


<reference anchor="RFC2702">
  <front>
    <title>Requirements for Traffic Engineering Over MPLS</title>
    <author fullname="D. Awduche" initials="D." surname="Awduche"/>
    <author fullname="J. Malcolm" initials="J." surname="Malcolm"/>
    <author fullname="J. Agogbua" initials="J." surname="Agogbua"/>
    <author fullname="M. O'Dell" initials="M." surname="O'Dell"/>
    <author fullname="J. McManus" initials="J." surname="McManus"/>
    <date month="September" year="1999"/>
    <abstract>
      <t>This document presents a set of requirements for Traffic Engineering over Multiprotocol Label Switching (MPLS). It identifies the functional capabilities required to implement policies that facilitate efficient and reliable network operations in an MPLS domain. This memo provides information for the Internet community.</t>
    </abstract>
  </front>
  <seriesInfo name="RFC" value="2702"/>
  <seriesInfo name="DOI" value="10.17487/RFC2702"/>
</reference>




    </references>

</references>



  </back>


</rfc>

