<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-kompella-rtgwg-mlnwsched-00" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="ML NW sched">Scheduling Network Resources for Machine Learning Clusters</title>

    <author initials="K." surname="Kompella" fullname="Kireeti Kompella">
      <organization>Juniper Networks</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>kireeti.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="V. P." surname="Beeram" fullname="Vishnu Pavan Beeram">
      <organization>Juniper Networks</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>vbeeram@juniper.net</email>
      </address>
    </author>
    <author initials="A." surname="Mahale" fullname="Aditya Mahale">
      <organization>Cerebras Systems</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94085</code>
          <country>United States of America</country>
        </postal>
        <email>aditya.ietf@gmail.com</email>
      </address>
    </author>

    <date year="2025"/>

    <area>Routing</area>
    <workgroup>RTG WG</workgroup>
    <keyword>multipath, bandwidth, scheduling, ML clusters</keyword>

    <abstract>


<?line 60?>

<t>Large Language Models (LLMs) are pushing the boundaries of technology. The scale they have reached vastly exceeds the capacity of any single compute unit (XPU); this requires a distributed approach in which multiple XPUs are connected via a "backend" network, typically within a single data center. We are approaching the point where the scale exceeds that of a single data center, thus requiring multiple such data centers connected via a "data center interconnect" network. Training and inferencing are expensive and critical operations, so they are typically scheduled, i.e., the (compute) resources they need are carefully estimated, allocated and deployed so that these resources are used efficiently. However, while the compute investment in these LLM processing clusters dwarfs that in networks, it is becoming increasingly clear that the network can greatly impact the effectiveness of the compute. This has been the focus of recent venues, including the FANTEL Birds of a Feather session at IETF 123, the @Scale: Networking conference, and the Open Compute Project.</t>

<t>This memo proposes that the same care be taken regarding networking resources: that they are estimated, allocated and deployed alongside compute resources; that they have contingency plans in case of network glitches; and that a holistic view be taken in order to optimize the running of training and inferencing jobs.</t>



    </abstract>



  </front>

  <middle>


<?line 66?>

<section anchor="intro"><name>Introduction</name>

<t>Large Language Models (LLMs) are pushing the industry to ever greater scale, both in training and in inference. This leads to more critical use of backend networks and higher stakes in producing timely results. A major lesson from recent work is that the network cannot be taken for granted: a dropped or delayed packet can delay, stall or even abort a Machine Learning (ML) job, requiring more effort in checkpointing and managing job restarts, dealing with network congestion, and dealing with network failures. These problems are exacerbated in multi-tenant clusters, where multiple jobs are run and job isolation becomes a key requirement. The FANTEL Birds of a Feather (BoF) meeting illustrated well the role the network plays in ML jobs, the potential for network events to disrupt jobs, and some early thoughts on how to handle these events. While the BoF was very successful in exposing these issues, we believe that adding a proactive approach would be beneficial; this can go hand in hand with the reactive approach of dealing effectively with network events.</t>

<t>This memo proposes that network resources be reserved/scheduled in coordination with the ML job scheduler, which is responsible for reserving compute resources (Central Processing Units [CPUs], Graphics Processing Units [GPUs], XPUs, memory, storage, ...). This is especially useful when multiple jobs are run in each cluster; examples are GPUaaS (GPU as a Service) and running several inference jobs simultaneously. Reserving network resources reduces the probability of disruptive network events and improves job isolation. This is the network analogy of reserving compute resources, and ideally the two can be done at the same time. Essentially, when an ML job is scheduled, the "size" of the job (type of model, complexity of model, number of parameters, etc.) determines how many CPU/GPU/XPU cores are needed and how much memory and storage is needed; typically, the same parameters determine the amount of network resources needed during the different collective (i.e., inter-XPU) communication stages (Broadcast, AllReduce, Reduce, etc.). Job placement (i.e., which XPUs to allocate for this job) also determines the source(s) and destination(s) of the communication. If, at the time the job is scheduled, network resources are also reserved (and, potentially, backup resources are put in place), the probability that network events can disrupt the job is reduced (although not eliminated).</t>

<t>One can do both: couple network resource scheduling with fast event detection, signaling and mitigation for an overall much-reduced impact of network events on job progress. For very long running jobs, network resource reservation can also be done when going from one communication phase to another (such as from Broadcast to AllReduce, or to a quiescent phase).</t>

<section anchor="terminology"><name>Terminology</name>

<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.
<?line -6?></t>

<section anchor="definition-of-commonly-used-terms"><name>Definition of Commonly Used Terms</name>

<t>This section provides definitions for terms and abbreviations that are used in this memo.</t>

<dl>
  <dt>XPU:</dt>
  <dd>
    <t>one of several types of processing units: central processing unit (CPU), graphics processing unit (GPU), language processing unit (LPU), tensor processing unit (TPU) and the like. They fall under the category of "compute resources".</t>
  </dd>
  <dt>TE:</dt>
  <dd>
    <t>traffic engineering</t>
  </dd>
  <dt>ML:</dt>
  <dd>
    <t>machine learning, a powerful technique for learning from data without explicit programming, used to solve problems in artificial intelligence (AI).</t>
  </dd>
  <dt>DSF:</dt>
  <dd>
    <t>disaggregated scheduled fabric, a methodology for packet spraying in networks with multipathing.</t>
  </dd>
  <dt>DCI:</dt>
  <dd>
    <t>data center interconnect</t>
  </dd>
</dl>

</section>
</section>
</section>
<section anchor="problem-statement"><name>Problem Statement</name>

<t>Consider the ML cluster <xref target="mlc-1"/>:</t>

<figure title="ML Cluster 1" anchor="mlc-1"><artwork><![CDATA[
        S1         .... S2 
      / ...\.......   /    \      Note: L1 & L2 are connected to S2;
    L1..    L2      L3      L4          L3 & L4 are connected to S1.
   /  \    /  \    /  \    /  \   All links are 400G links.
  X1  X2  X3  X4  X5  X6  X7  X8
]]></artwork></figure>

<t>The bottom layer consists of XPUs X1 through X8. The next layer up consists of "leaf" switches L1 through L4. The top layer consists of "spine" switches S1 and S2. All links between layers are 400Gbps; thus there is no oversubscription in the network, provided:</t>

<t><list style="numbers" type="1">
  <t>All XPUs are well-behaved.</t>
  <t>All switches load balance fairly and perfectly.</t>
</list></t>

<t>However, "fair" load balancing is insufficient unless the load balancing is done on a per-packet (or better, per-cell) basis ("packet spraying") <xref target="DSF"/>. If load balancing is done on a per-flow basis ("flow level multipathing"), it is highly unlikely to be perfectly balanced across the next hops, in which case one next hop may see too much traffic, leading to congestion, packet delays or even packet drops. Disaggregated Scheduled Fabric (DSF) uses per-packet or per-cell load balancing, but it comes at a cost, and may not scale (and scale is a big consideration in these networks).</t>

<t>With flow level multipathing, say X1 and X2 are both sending 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. L1 does the same with X2's traffic; let's say this goes 190G to S1 and 210G to S2. The L1-S1 link will be congested, with 410G of traffic.</t>
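<t>The arithmetic above can be sketched as follows. This is purely an illustrative calculation; the 220G/180G and 190G/210G splits are the hypothetical numbers from the text, not measurements.</t>

<figure><artwork><![CDATA[
```python
# Per-flow hashing vs. ideal per-packet spraying on L1's two
# 400G uplinks, using the example splits above (in Gbps).
LINK_CAPACITY = 400

per_flow = {"S1": 220 + 190, "S2": 180 + 210}    # uneven hashing
per_packet = {"S1": 200 + 200, "S2": 200 + 200}  # ideal spraying

def congested(loads):
    """Uplinks whose offered load exceeds capacity."""
    return [nh for nh, gbps in loads.items() if gbps > LINK_CAPACITY]

print(congested(per_flow))    # ['S1']: the L1-S1 link carries 410G
print(congested(per_packet))  # []: no uplink exceeds 400G
```
]]></artwork></figure>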

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. In the worst case, each sends 400G for a total of 800G to X6, but the L3-X6 link can only transmit 400G. Thus, half the traffic will be dropped.</t>
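<t>As a quick check of this worst case, again with the illustrative numbers from the text:</t>

<figure><artwork><![CDATA[
```python
# Worst-case in-cast on the 400G L3-X6 link: X1 and X3 each
# offer 400G toward X6 simultaneously (rates in Gbps).
LINK = 400
offered = 400 + 400
delivered = min(offered, LINK)
drop_fraction = (offered - delivered) / offered
print(drop_fraction)  # 0.5: half the traffic is dropped
```
]]></artwork></figure>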

<t>If the entire cluster (here, XPUs X1 through X8) is working on a single ML job, things are a bit simpler (but the issues remain). However, if this cluster is used for inferencing, or multi-tenant workloads, additional considerations arise. Tenant 1 (or inferencing job 1) (T1) may be using XPU X1 and part of X6; tenant 2 (or job 2) (T2) may be using XPU X3 and another part of X6.</t>

<t>If T1 and T2 simultaneously require communication to X6, there could be contention for the L3-X6 link. Again, this could lead to congestion, and hence delayed or dropped packets. But now, the issue is inter-tenant.</t>

<t>As stated in the Introduction <xref target="intro"/>, such delayed or dropped packets can have big consequences for the jobs that are running. Issues such as these are the motivation for DSF, packet spraying and fast congestion notification.</t>

</section>
<section anchor="prop"><name>Proposal</name>

<section anchor="compsched"><name>Compute Scheduling</name>

<t>In shared compute environments, such as a compute cluster or a cloud, a scheduler is commonly used to orchestrate access to compute resources. SLURM <xref target="SLURM"/> is a commonly used scheduler in Linux clusters; its documentation says "First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work." Another is KAI <xref target="KAI"/>, which says "KAI Scheduler is a robust, efficient, and scalable Kubernetes scheduler that optimizes GPU resource allocation for AI and machine learning workloads." There are several other schedulers in common use.</t>

<t>A scheduler offers several features. The following are taken from SLURM:</t>

<t><list style="numbers" type="1">
  <t>Accounting</t>
  <t>Advanced reservation</t>
  <t>Gang scheduling (time sharing for parallel jobs)</t>
  <t>Backfill scheduling</t>
  <t>Topology optimized resource selection</t>
  <t>Resource limits by user or bank account</t>
  <t>Sophisticated multifactor job prioritization algorithms</t>
</list></t>

<t>KAI offers the following:</t>

<t><list style="numbers" type="1">
  <t>Batch Scheduling</t>
  <t>Bin Packing &amp; Spread Scheduling</t>
  <t>Workload Priority</t>
  <t>Hierarchical Queues</t>
  <t>Resource distribution</t>
  <t>Fairness Policies</t>
  <t>Workload Consolidation</t>
  <t>Elastic Workloads</t>
  <t>Dynamic Resource Allocation (DRA)</t>
  <t>GPU Sharing</t>
</list></t>

<t>To summarize, a compute scheduler allows effective and optimal sharing of compute resources among multiple tenants and multiple jobs, while ensuring fairness, enforcing limits and enabling accounting. Without a scheduler, multitenancy and multiple jobs would be impractical and chaotic.</t>

<t>Note that multi-tenancy is implicit. There may be ways to reserve resources for a particular tenant or group of tenants without allocating them, but the documentation doesn't say how.</t>

</section>
<section anchor="nwsched"><name>Network Scheduling</name>

<t>In shared network environments (which almost all networks are), a scheduler can be used to orchestrate access to network resources -- primarily bandwidth, but also highly prized links, QoS, etc.</t>

<t>The primary task of network resource scheduling is to reserve resources along a path (tunnel) from one or more XPUs (ingresses) to another set of XPUs (egresses). Note that the paradigm here is one of uni-directional reservations; this is more general than bidirectional reservations, as the traffic requirements may not be symmetric.</t>

<t>Given that X1 wants to send 20Gbps to {X2, X3, X4}, one would create a tunnel from X1 to {X2, X3, X4} with 20Gbps capacity. Note that this traffic might be unicast (distributing different parts of a matrix to the recipients) or broadcast (distributing the same information to all). If, further, one wants to use certain links exclusively, one can color links in the network and state that this tunnel must (or must not) use links of a certain color. Thus, link coloring is a tool that network administrators can use to hold back links for a subset of job types. The compute analogy would be to hold back some XPUs, mark them "blue" and allow only a subset of jobs to use those XPUs.</t>
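<t>As a hedged sketch of how a path computation might honor such color constraints (the toy topology, the names and the breadth-first search are illustrative, not a description of any particular TE implementation):</t>

<figure><artwork><![CDATA[
```python
# Find a path with enough bandwidth that avoids links of a
# forbidden color. Topology is a toy fragment; rates in Gbps.
from collections import deque

links = {
    ("X1", "L1"): {"bw": 400, "colors": set()},
    ("L1", "S1"): {"bw": 400, "colors": {"blue"}},  # held back
    ("L1", "S2"): {"bw": 400, "colors": set()},
    ("S2", "L2"): {"bw": 400, "colors": set()},
    ("L2", "X2"): {"bw": 400, "colors": set()},
}

def find_path(src, dst, need_bw, exclude_color):
    """Breadth-first search over links meeting both constraints."""
    adj = {}
    for (a, b), attrs in links.items():
        if need_bw <= attrs["bw"] and exclude_color not in attrs["colors"]:
            adj.setdefault(a, []).append(b)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("X1", "X2", 20, "blue"))
# ['X1', 'L1', 'S2', 'L2', 'X2']: the "blue" L1-S1 link is avoided
```
]]></artwork></figure>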

<t>Link coloring allows a provider to partition their network to optimally serve their customers. While links in a Clos network (the topology of most ML clusters) are perfectly symmetrical, once one gets into "distributed clusters" connected via DCI links, link coloring and other link attributes will find greater use.</t>

<t>Reserving bandwidth means that a particular job J1 (probably) won't step on another job J2's traffic. Say J1 is using a tunnel T1 with a reservation of 20G, and J2 is using a tunnel T2 with a reservation of 50G. The reservation procedure ensures any links T1 and T2 traverse in common have sufficient bandwidth for both T1 and T2 (and any other tunnels with reservations). Of course, J1 may use more than its allocated bandwidth; this can negatively impact J2. To reduce/prevent this, one can apply a policer at the ingress of J1's tunnels to ensure that J1 sends no more than its allocated share over each tunnel. This policer can drop traffic over the limit, or simply mark it as such, so that if the other jobs on a common link are not using their full quota, J1's traffic can go through.</t>
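<t>The reservation-plus-policer behavior described above can be sketched as follows. The admission check and the mark-rather-than-drop policer are illustrative simplifications, not any specific vendor's mechanism:</t>

<figure><artwork><![CDATA[
```python
# Admission control for tunnel reservations on a shared link,
# plus an ingress policer that marks (rather than drops) traffic
# above a tunnel's reserved rate. All rates in Gbps.
class Link:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0

    def admit(self, bw):
        """Accept a reservation only if headroom remains."""
        if self.reserved + bw > self.capacity:
            return False
        self.reserved += bw
        return True

shared = Link(400)
print(shared.admit(20))   # True: tunnel T1 reserves 20G
print(shared.admit(50))   # True: tunnel T2 reserves 50G
print(shared.admit(350))  # False: would oversubscribe the link

def police(rate_sent, reserved):
    """Split a tunnel's traffic into in-profile and marked excess."""
    in_profile = min(rate_sent, reserved)
    return in_profile, rate_sent - in_profile

print(police(35, 20))     # (20, 15): 15G is marked, not dropped
```
]]></artwork></figure>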

<t>This last point is crucial for multi-tenancy. A provider who cannot provide hard (or at least soft) guarantees to their customers that they will in fact get the resources they asked (and paid) for will soon be out of business.</t>

<t>Elastic bandwidth is a very useful feature that goes along with elastic compute. If a job's requirements are: start me off with 5 XPUs, but expand that to 8 as the need arises, and shrink it back down to 5 when no longer needed, then the job's bandwidth requirements are likely to grow and shrink in tandem. Thus, in addition to making binding reservations, one must be able to adjust those reservations as needs change.</t>
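<t>A minimal sketch of the elastic adjustment loop follows; the floor/ceiling bounds and the 10% headroom factor are assumptions for illustration, not values from any implementation:</t>

<figure><artwork><![CDATA[
```python
# Resize a tunnel's reservation toward its measured rate each
# adjustment interval, clamped to configured bounds (in Gbps).
def resize(measured, floor=5, ceiling=50, headroom=1.1):
    """New reservation: measured demand plus headroom, clamped."""
    want = round(measured * headroom, 1)
    return max(floor, min(ceiling, want))

print(resize(measured=30))  # 33.0: grow with demand
print(resize(measured=3))   # 5: shrink, but not below the floor
print(resize(measured=60))  # 50: capped at the ceiling
```
]]></artwork></figure>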

<t>Finally, not all jobs (and all customers) are created equal. Priority and preemption are powerful tools in schedulers to give preference to certain jobs over others. Without these tools, a provider would be helpless if their cluster were overrun with low priority jobs. In addition, it would be nice to have a graceful way of managing preemption.</t>

<section anchor="traffic-engineering"><name>Traffic Engineering</name>

<t>All the features mentioned in the last section are available today, in bandwidth-aware traffic engineering (TE).</t>

<t>TE constraints allow a user to specify constraints on the path a tunnel will take. These can include acceptable/unacceptable colors and other link properties.</t>

<t>Bandwidth reservation allows the allocation of bandwidth resources to a tunnel. Policers are a useful adjunct to enforce limits.</t>

<t>Elastic bandwidth (aka "auto-bandwidth") allows a tunnel to dynamically adjust its reservations (within limits).</t>

<t>Priority and preemption are implemented by all vendors. Graceful preemption is possible using "soft preemption".</t>

</section>
<section anchor="multipathing"><name>Multipathing</name>

<t>There is one missing piece with "regular" TE: ML clusters (and Clos networks in general) make heavy use of multipathing, and often have multiple ingresses and egresses for their communications. Current traffic engineering techniques focus on a single-path tunnel from one ingress to one egress. However, a new technique for multipath TE that allows for multiple ingresses and egresses is being developed and could have relevance here <xref target="I-D.kompella-teas-mpte"/>.</t>

</section>
</section>
<section anchor="comparing-compute-and-network-scheduling-features"><name>Comparing Compute and Network Scheduling Features</name>

<t>In this section, we look at compute scheduling features, and ask whether the corresponding feature exists in network scheduling.</t>

<texttable title="Comparing SLURM and Network Scheduling">
      <ttcol align='left'>SLURM - Compute Scheduling Features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Accounting</c>
      <c>Yes</c>
      <c>Advanced reservation</c>
      <c>Yes (bandwidth calendaring)</c>
      <c>Gang scheduling</c>
      <c>Yes (primary effort is on compute)</c>
      <c>Backfill scheduling</c>
      <c>N/A</c>
      <c>Topology optimized resource selection</c>
      <c>Yes</c>
      <c>Resource limits by user or bank account</c>
      <c>Yes (via controller policy) (enforcement via policers)</c>
      <c>Sophisticated multifactor job prioritization algorithms</c>
      <c>No (maybe N/A)</c>
</texttable>

<texttable title="Comparing KAI and Network Scheduling">
      <ttcol align='left'>KAI features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Batch Scheduling</c>
      <c>Yes (via multi-ingress/multi-egress tunnels)</c>
      <c>Bin Packing &amp; Spread Scheduling</c>
      <c>Yes ("least-fill", "max-fill")</c>
      <c>Workload Priority</c>
      <c>Yes</c>
      <c>Hierarchical Queues</c>
      <c>Yes (via QoS in the data plane)</c>
      <c>Resource distribution</c>
      <c>Yes (via tunnel priority)</c>
      <c>Fairness Policies</c>
      <c>Yes</c>
      <c>Workload Consolidation</c>
      <c>N/A</c>
      <c>Elastic Workloads</c>
      <c>Yes ("auto-bandwidth")</c>
      <c>Dynamic Resource Allocation (DRA)</c>
      <c>N/A (multivendor is a given)</c>
      <c>GPU Sharing</c>
      <c>Yes (link sharing)</c>
</texttable>

<t>As can be seen, almost all features are supported; some other features are supported in network scheduling that may not have analogies in compute scheduling.</t>

</section>
<section anchor="back-to-the-problem"><name>Back to the Problem</name>

<t>Back to <xref target="mlc-1"/>.</t>

<t>With flow level multipathing, say X1 and X2 both send 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. However, L1 knows that it's only supposed to send 200G of X1's traffic to S1, so L1 adjusts its load balancing weights ("adaptive load balancing") until the traffic sent to each of S1 and S2 is 200G. L1 does the same with X2's traffic; if all works well, L1 will send a total of 400G to each of S1 and S2.</t>
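<t>The adaptive adjustment can be sketched as a simple feedback loop. This is illustrative only: real switches adjust hash weights in the data plane, and the gain and step count here are arbitrary assumptions.</t>

<figure><artwork><![CDATA[
```python
# L1 nudges its per-next-hop weights until measured traffic
# matches the reserved 200G/200G split, starting from the
# imbalanced 220G/180G hashing outcome (rates in Gbps).
def adapt(weights, targets, gain=0.5, steps=20):
    total = sum(targets.values())   # 400G entering L1 from X1
    for _ in range(steps):
        for nh in weights:
            measured = total * weights[nh] / sum(weights.values())
            weights[nh] += gain * (targets[nh] - measured) / total
    return weights

w = adapt({"S1": 0.55, "S2": 0.45}, {"S1": 200, "S2": 200})
share = w["S1"] / sum(w.values())
print(round(share, 3))  # 0.5: converges to the reserved even split
```
]]></artwork></figure>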

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. Now, X1 has a TE tunnel to X6 with only 200G; similarly for X3. So, in principle, the L3-X6 link should only carry 400G.</t>

<t>Reservations can be temporarily exceeded; that is equally true with compute reservations. Depending on the enforcement policies, an oversubscription situation should be temporary and is clearly visible (since accounting is easy), allowing more severe enforcement should it be persistent.</t>

</section>
</section>
<section anchor="conclusion"><name>Conclusion</name>

<t>As mentioned in the Introduction, to make optimal use of ML clusters, especially when multiple smaller jobs (e.g., inferencing) are run, and multi-tenancy is in play, network scheduling takes on increasing importance as a proactive measure to prevent network events such as congestion. (This works orthogonally to packet spraying.) One can add fast network event notification as a reactive measure. Together, these techniques present a more holistic approach and should allow much better utilization of ML resources.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>None, for now.</t>

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>TBD</t>

</section>


  </middle>

  <back>


<references title='References' anchor="sec-combined-references">

    <references title='Normative References' anchor="sec-normative-references">



<reference anchor="RFC2119">
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname="S. Bradner" initials="S." surname="Bradner"/>
    <date month="March" year="1997"/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="2119"/>
  <seriesInfo name="DOI" value="10.17487/RFC2119"/>
</reference>

<reference anchor="RFC8174">
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname="B. Leiba" initials="B." surname="Leiba"/>
    <date month="May" year="2017"/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="8174"/>
  <seriesInfo name="DOI" value="10.17487/RFC8174"/>
</reference>




    </references>

    <references title='Informative References' anchor="sec-informative-references">

<reference anchor="DSF" target="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta">
  <front>
    <title>Disaggregated Scheduled Fabric</title>
    <author >
      <organization></organization>
    </author>
    <date year="2024" month="October"/>
  </front>
</reference>
<reference anchor="KAI" target="https://github.com/NVIDIA/KAI-Scheduler">
  <front>
    <title>KAI Scheduler</title>
    <author >
      <organization></organization>
    </author>
    <date year="n.d."/>
  </front>
</reference>
<reference anchor="SLURM" target="https://slurm.schedmd.com/overview.html">
  <front>
    <title>SLURM Workload Manager</title>
    <author >
      <organization></organization>
    </author>
    <date year="n.d."/>
  </front>
</reference>



<reference anchor="I-D.kompella-teas-mpte">
   <front>
      <title>Multipath Traffic Engineering</title>
      <author fullname="Kireeti Kompella" initials="K." surname="Kompella">
         <organization>Juniper Networks</organization>
      </author>
      <author fullname="Luay Jalil" initials="L." surname="Jalil">
         <organization>Verizon</organization>
      </author>
      <author fullname="Mazen Khaddam" initials="M." surname="Khaddam">
         <organization>Cox Communications</organization>
      </author>
      <author fullname="Andy Smith" initials="A." surname="Smith">
         <organization>Oracle Cloud Infrastructure</organization>
      </author>
      <date day="7" month="July" year="2025"/>
      <abstract>
	 <t>   Shortest path routing offers an easy-to-understand, easy-to-implement
   method of establishing loop-free connectivity in a network, but
   offers few other features.  Equal-cost multipath (ECMP), a simple
   extension, uses multiple equal-cost paths between any two points in a
   network: at any node in a path (really, Directed Acyclic Graph),
   traffic can be (typically equally) load-balanced among the next hops.
   ECMP is easy to add on to shortest path routing, and offers a few
   more features, such as resiliency and load distribution, but the
   feature set is still quite limited.

   Traffic Engineering (TE), on the other hand, offers a very rich
   toolkit for managing traffic flows and the paths they take in a
   network.  A TE network can have link attributes such as bandwidth,
   colors, risk groups and alternate metrics.  A TE path can use these
   attributes to include or avoid certain links, increase path
   diversity, manage bandwidth reservations, improve service experience,
   and offer protection paths.  However, TE typically doesn&#x27;t offer
   multipathing as the tunnels used to implement TE usually take a
   single path.

   This memo proposes multipath traffic-engineering (MPTE), combining
   the best of ECMP and TE.  The multipathing proposed here need not be
   strictly equal-cost, nor the load balancing equally weighted to each
   next hop.  Moreover, the desired destination may be reachable via
   multiple egresses.  The proposal includes a protocol for signaling
   MPTE paths using various types of tunnels, some of which are better
   suited to multipathing.

	 </t>
      </abstract>
   </front>
   <seriesInfo name="Internet-Draft" value="draft-kompella-teas-mpte-01"/>
   
</reference>




    </references>

</references>



  </back>

<!-- ##markdown-source:
H4sIAO1x9GgAA9Vb63IbR3b+j6foQFW7YAoARVr22nSSXYq6WDZ1sSivubW7
lWrMNICxZqbhuRDGykrtgyRVeZY8yj5Jvu+c7pkBSNnr5E+iskkC09dz/c5l
ZrPZqMma3J2Zq2Tt0jbPypV54Zqtr96a1672bZW42ix9ZZ7bZJ2Vzlw6W5Uc
dpG3deOqemQXi8rdnJnnl+bFt6bmQqPUJ6UtsG5a2WUze+uLjctzO6ua1XY1
K/JyK+Nm9++PUttg3On9049HCf5c+Wp3ZrJy6Ueje8Yvap+7xtVno2xTnZmm
wqan9+9/dv90VDeVs8WZefb4zZORxd9n5rVvG5xtxPOvKt9u8NWbp+bbp6O3
2zNTtHmTbWyznpqFLdNtlvLPurv5lFdI4rWwPgb9q819ifPtXD3aZGfmj41P
MMdX2HxZ469dwT/+PBrZtln76mxkzAzHr8/MV3PzVbg3vjRGCfJVVjnXZPuP
fLWyZfYX22S+PDNftmW2cVVkRC1DkqwBXa7astzd2NzJd5VbyYQLm2fgUZnp
aolPsdFnD+5/+ln43JYNqfpNmTUuNVcN6FwbvzTnhauyRGe5wmb5mXmr55tn
rln+bsXv5okv+lsNrvL7rF6XrXllb2xpHjpX2eL/2m1uFnKs332nh5iXrrnz
KucpDmQh5Ot4mv1LXGCVRWVrc7WDdBT/i0t8/MsvYeVwhxwZlb4qcLwbdzYa
UV+6T8Y8unpyJis0tlq55sysm2ZTnx0fu3IFJcYO5Wq+XHCdY2jeg+OT+8cn
Hx9DFe0MS+GiULSkaSt37DeunC1b/j0rlYWYPFvbKt1C6WY2m/lkM+Mqs8I1
evBgVB5ltV2tQBIrd1RFw19PQMsskZGd9j+YndzHN1+dP7v75KusWbd64he/
f/bo2fkxhs7imtVwWzwwwwdXl9+8fn73qnXeVsVcTECRyuL+xlU3mdvO102R
D1eVVcy3uH/ubQpRKe0Ky49mMKL8YewCVLNJMxpdchdzactVizHmOZif12Zy
efm8PjIgmtm09Zo2tFk7s4AopLbKlP2NS9alz/1qNzdv8LROIFkYZxsO3pm1
vXEQMssTm6StKlc2+c7cgGP45X5InEtrWTexG0sJ5aq23JkaG2Ip3HHTNs5A
IRozuX71zdHnGJ7VWPT7FrpfG2vSDDfJFi25ZjebymM7s107nFxtKNbBzFru
kviydAmH3mQWk8cLm7x1ZTo2QVymptltINI5DpiVGBFOQmkzCc4PvTTfOlks
7haJs/FZ2YStm44c/TVBFl7vjiWx67qNt+Jy3cnrFrcZjKxvX2HwFEfGzzCi
uxOYU9lM/CCcBN0VTlgm8rniAaE1NZRRniZV1vD+BrpUiUGpw+mEpZzQU6iO
WjI12dzNp3LtSeDaEe4TnbLMhS6nygX8WLZcwNVNVlDhpgYL+kR0j8dI3Sb3
O3yofSdQtRusKCdfLrMkU6Fqa5fOzRd+625Iz+06G8hPVt5gpwIjyVRdCgJu
wD+sRX50rtTQUix7dgUagggQQUjewmFRTsjKBKItvNxhNpBGd1CT24bcSOBq
YFEsz5cVkHB9SutHMXpDSV5brunK8CRpRbMqR4aS18orx/1LnDGNwra0YHVu
HmZVWqtYPcFGkD1T0CXKAQVtmJPTj6bmd1cUxrPo2KIsvATrzUWg0qvKfwe5
mY9GcrLCFZ4U2vja1f3davgg4SCODRMF7aEfgYXlmr3R7Vl1NrAIwrWf5Tlh
zKrO0p6B3WKfH5oXkIi3BY12ZpPbsua9EwsG98wzqzxrIKqYzX1kBWvWPofp
yBJDC9pfBtN9lYKOjYcO4KjZX1SdK/hOXoyG70P69B0w4FwtbJGlKfwscOEz
eE+fwkVBmcy7exk/vv+FhjcrU8hnteOpKOEqV/gtRgYY0Tdrke39k3WHc0Hc
IKe0Rd4UnooYlb1VegVr2Am9rAJKZSsKVk0CcdGNXEeOlhUOwg32wGDVc3Nu
CvsdAHgegfey8kWUZmFFNhClyB7oSembngWE8KuKAp6e0cJDBjcQC3wLEllK
yIYHbUTB5KspD5fnHALqwG4vAHqNvR0ITJ5fHpFL06Gx9WpLOIXCs3bJW7Hl
kY4F3WdgL+8Kx9xAIVNnJQbZwtv3d4HoUsJ9OQ1CfcegJXARMEqtbhPkXOTA
aQbeHtbYJq5aiFLgMOIIZo0rLc1BtFEH7o1SJxIDEZVNec4MwYjYbzVZ4ivf
ul10nTSGuv3PW5LJQ//kyGQ5t6/kZFtEA6oUPnd7vIQO7kQHEZ7wXNPgGnGF
JoOkkbdxLFnViDTCiVftpgkzeIUaRzbgGqQLoUq7WmMg7rL2W45fY4huDMHV
ZeCWxeZzO5zXbGFYoSg7ulAaeTgcHgveztdBqzA3q+uWxnVLc5ZnWCqYh1Ts
mTXi4hvxjh248G2eUloXrnT0QDYPsEQMvp6Oe8lvYbyQyh0uBGJH8YD4OXmK
++6JSrjcyPykUY6j9/0jPgEfuvS489Mi397TWKtsyF7Kqs6bq/vE+QRo1RtA
gAwCKqzTJcVhHppmM7nAUSvw+FXvVhky1OaPFwBgf56ap5XdYOX6jhFPdQSR
2lRuWYlS+wq2cWrm8/lRsGD4D2dypLq6fTIWClF+QB/IcxI7KA9dAJWs4ECs
hX2tvTIT/DaWOnLF+yXuaEpjEm1+TZtr896c6h51xj1t6Xxb5wDBrzvq3GZI
BdoGKCQabxdZHhBvkH7KxoFqiCAVGA78sq/VPTWG/IehIhxXCPFhTsmyFD1C
F0sLYVIPMzl08TTtc/O4rlVx891UqWyjanPvAQDkxL/99d9ruMu//fU/xEuu
hUxmAsAo7qWgi5vKeXL3Q7h8+LJsiwVMDr7YWATBjnZualyTzI+gJPgE0IWT
U/8LRgiQqGPw7BgCgwWrIPGEmAFNyEiiZxUmNSoqTzy5jvy8B7PT/ur9Cfqt
5aktGAoPkUVP07B12opPSbOlSAoxXJ6rapuJgmQB6TNGMyRFgeAmUWWEY1lR
jx7CPKTAMM3UnOf5axGcqYm/lSZfgrCwtYkY8riyqq1EOzCSEV2J4op9Ajd+
C2yRA1QPaCr3lltMiDzEacGDqYXgV4GXe4edm2fLaRQYykrH7n2xuNsyyRGi
eTIT7tm5CLKCQKTdHEyCEAv64K2PprcUSYzhgf4IQAi+ZXA+1UVunKtzMcQf
sP8Fb+1S2BoY3Jel0wW8wKszJkJoNQ6vNMjKqT1dgnV6AiFzomigzlalGnvB
-->

</rfc>

