<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-yl-bmwg-cats-02" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="BM for CATS">Benchmarking Methodology for Computing-aware Traffic Steering</title>
    <seriesInfo name="Internet-Draft" value="draft-yl-bmwg-cats-02"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="P." surname="Liu" fullname="Peng Liu">
      <organization>China Mobile</organization>
      <address>
        <email>liupengyjy@chinamobile.com</email>
      </address>
    </author>
    <author initials="X." surname="Yi" fullname="Xinxin Yi">
      <organization>China Unicom</organization>
      <address>
        <email>yixx3@chinaunicom.cn</email>
      </address>
    </author>
    <author initials="Q." surname="Xiong" fullname="Quan Xiong">
      <organization>ZTE</organization>
      <address>
        <email>xiong.quan@zte.com.cn</email>
      </address>
    </author>
    <author initials="M.-N." surname="Tran" fullname="Minh-Ngoc Tran">
      <organization>ETRI</organization>
      <address>
        <email>mipearlska@etri.re.kr</email>
      </address>
    </author>
    <date year="2025" month="November" day="17"/>
    <workgroup>bmwg</workgroup>
    <abstract>
      <?line 45?>

<t>Computing-aware Traffic Steering (CATS) is a traffic engineering approach based on the awareness of both computing and network information. This document proposes benchmarking methodologies for CATS.</t>
    </abstract>
  </front>
  <middle>
    <?line 49?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Computing-aware Traffic Steering (CATS) is a traffic engineering approach that considers both computing and network metrics in order to select appropriate service instances. Latency-sensitive, throughput-sensitive, and compute-intensive applications, such as those described in <xref target="I-D.ietf-cats-usecases-requirements"/>, need CATS to guarantee effective instance selection. A general CATS framework <xref target="I-D.ietf-cats-framework"/> provides implementation guidance. However, since many computing and network metrics can be selected for traffic steering, as proposed in <xref target="I-D.ietf-cats-metric-definition"/>, benchmarking test methods are required to validate the effectiveness of different CATS metrics. In addition, there are different deployment approaches, i.e., the distributed approach, the centralized approach, and the hybrid approach, as well as multiple objectives for instance selection, for example, the instance with the lowest end-to-end latency or the highest system utilization. The benchmarking methodology proposed in this document is essential for guiding CATS implementation.</t>
    </section>
    <section anchor="definition-of-terms">
      <name>Definition of Terms</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>CATS: Computing-aware Traffic Steering</t>
        </li>
        <li>
          <t>C-PS: CATS Path Selector</t>
        </li>
      </ul>
      <t>This document further defines:</t>
      <ul spacing="normal">
        <li>
          <t>CATS Router: Router that supports CATS mechanisms for traffic engineering.</t>
        </li>
        <li>
          <t>ECMP: Equal cost multi-path routing</t>
        </li>
      </ul>
    </section>
    <section anchor="test-methodology">
      <name>Test Methodology</name>
      <section anchor="test-setup">
        <name>Test Setup</name>
        <t>The test setup in general is compliant with <xref target="RFC2544"/>. As mentioned in the introduction, there are three approaches for CATS deployment: the centralized approach, the distributed approach, and the hybrid approach.
The difference primarily lies in how CATS metrics are collected and distributed in the network and, accordingly, where the CATS Path Selector (C-PS) is placed to make decisions, as defined in <xref target="I-D.ietf-cats-framework"/>.</t>
        <section anchor="test-setup-centralized-approach">
          <name>Test Setup - Centralized Approach</name>
          <t><xref target="centralized-test-setup"/> shows the test setup of the centralized approach to implementing CATS. The centralized test setup is similar to the Software-Defined Networking (SDN) standalone mode test setup defined in <xref target="RFC8456"/>. The DUT is co-located with the SDN controller. In the centralized approach, the SDN controller takes both the role of CATS metrics collection and the role of decision making for instance selection as well as traffic steering. The application-plane test emulator is connected with the forwarding-plane test emulator via interface 2 (I2). The SDN controller is connected to the edge server manager via interface 4 (I4). Interface 1 (I1) of the SDN controller is connected with the forwarding plane. Service requests are sent from the application to the CATS ingress router through I2. CATS metrics are collected from the edge server manager via I4. The traffic steering policies are configured through I1.
In the forwarding plane, CATS router 1 serves as the ingress node and is connected with the host, which is an application-plane emulator. CATS router 2 and CATS router 3 serve as egress nodes and are connected with two edge servers respectively. Both of the edge servers are connected with the edge server manager via I3, an internal interface for CATS metrics collection within edge sites.</t>
          <figure anchor="centralized-test-setup">
            <name>Centralized Test Setup</name>
            <artwork><![CDATA[
      +-----------------------------------------------+
      |       Application-Plane Test Emulator         |
      |                                               |
      |   +-----------------+      +-------------+    |
      |   |   Application   |      |   Service   |    |
      |   +-----------------+      +-------------+    |
      |                                               |
      +-+(I2)-----------------------------------------+
        |
        | 
        |   +-------------------------------+    +-------------+
        |   |       +----------------+      |    |             |
        |   |       | SDN Controller |      |    |     Edge    | 
        |   |       +----------------+      |----|    Server   |
        |   |                               | I4 |    Manager  |
        |   |    Device Under Test (DUT)    |    |             | 
        |   +-------------------------------+    +--------+----+
        |             |                                   |
        |             |                                   |
      +-+------------+(I1)--------------------------+     |
      |                                             |     |
      |         +------------+                      |     |
      |         |    CATS    |                      |     |
      |         |   Router  1|                      |     | I3
      |         +------------+                      |     |
      |         /            \                      |     |
      |        /              \                     |     |
      |    l0 /                \ ln                 |     |
      |      /                  \                   |     |
      |    +------------+  +------------+           |     |
      |    |    CATS    |  |    CATS    |           |     |
      |    |  Router 2  |..|   Router 3 |           |     |
      |    +------------+  +------------+           |     |
      |          |                |                 |     |
      |    +------------+  +------------+           |     |
      |    |   Edge     |  |   Edge     |           |     |
      |    |  Server 1  |  |  Server 2  |           |     |
      |    |   (ES1)    |  |   (ES2)    |           |     |
      |    +------------+  +------------+           |     |
      |          |               |                  |     |
      |          +---------------+------------------------+    
      |     Forwarding-Plane Test Emulator          |
      +---------------------------------------------+
]]></artwork>
          </figure>
        </section>
        <section anchor="test-setup-distributed-approach">
          <name>Test Setup - Distributed Approach</name>
          <t><xref target="distributed-test-setup"/> shows the test setup of the distributed approach to implementing CATS. In the distributed test setup, the DUT is the group of CATS routers, since the decision maker is the CATS ingress node, namely CATS router 1. The CATS egress nodes, CATS routers 2 and 3, take the role of collecting CATS metrics from the edge servers and distributing these metrics to the other CATS routers. The application-plane test emulator, including the service emulators, is connected with the control-plane and forwarding-plane test emulator through interface 1 (I1).</t>
          <figure anchor="distributed-test-setup">
            <name>Distributed Test Setup</name>
            <artwork><![CDATA[
      +---------------------------------------------+
      |       Application-Plane Test Emulator       |
      |                                             |
      |   +-----------------+      +-------------+  |
      |   |   Application   |      |   Service   |  |
      |   +-----------------+      +-------------+  |
      |                                             |
      +---------------+-----------------------------+
                      |  
                      |                                   
      +---------------+(I1)-------------------------+     
      |                                             |
      |   +--------------------------------+        |
      |   |      +------------+            |        |
      |   |      |    CATS    |            |        |
      |   |      |   Router  1|            |        |
      |   |      +------------+            |        | 
      |   |      /            \            |        |
      |   |     /              \           |        |
      |   | l0 /                \ ln       |        |
      |   |   /                  \         |        |
      |   | +------------+  +------------+ |        |
      |   | |    CATS    |  |    CATS    | |        |
      |   | |  Router 2  |..|   Router 3 | |        |
      |   | +------------+  +------------+ |        |
      |   |      Device Under Test (DUT)   |        |
      |   +--------------------------------+        |
      |        |                |                   |
      |    +------------+  +------------+           |
      |    |   Edge     |  |   Edge     |           |
      |    |  Server 1  |  |  Server 2  |           |
      |    |   (ES1)    |  |   (ES2)    |           |
      |    +------------+  +------------+           |       
      |           Control-Plane and                 |
      |      Forwarding-Plane Test Emulator         |
      +---------------------------------------------+
]]></artwork>
          </figure>
        </section>
        <section anchor="test-setup-hybrid-approach">
          <name>Test Setup - Hybrid Approach</name>
          <t>As explained in <xref target="I-D.ietf-cats-framework"/>, the hybrid model is a combination of the distributed and centralized models. In the hybrid model, some stable CATS metrics are distributed among the involved network devices, while other frequently changing CATS metrics may be collected by a centralized SDN controller. Meanwhile, the service scheduling function can be performed by an SDN controller and/or CATS router(s). The entire or partial C-PS function may be implemented in the centralized control plane, depending on the specific implementation and deployment. The test setup of the hybrid model also follows the centralized test setup (<xref target="centralized-test-setup"/>) defined in the previous section.</t>
        </section>
      </section>
      <section anchor="control-plane-and-forwarding-plane-support">
        <name>Control Plane and Forwarding Plane Support</name>
        <t>In the centralized approach, both the control plane and the forwarding plane follow the Segment Routing pattern, i.e., SRv6 <xref target="RFC8986"/>. The SDN controller configures SRv6 policies based on the awareness of CATS metrics, and traffic is steered through SRv6 tunnels built between CATS ingress nodes and CATS egress nodes. The collection of CATS metrics in the control plane is performed through a RESTful API or a similar signalling protocol between the SDN controller and the edge server manager.</t>
        <t>In the distributed approach, in terms of the control plane, EBGP <xref target="RFC4271"/> sessions are established between CATS egress nodes and edge servers, and IBGP <xref target="RFC4271"/> sessions are established between CATS egress nodes and CATS ingress nodes. BGP is chosen to distribute CATS metrics in the network domain, from the edge servers to the CATS ingress node. Carrying CATS metrics is implemented through BGP extensions, following the definitions of <xref target="I-D.ietf-idr-5g-edge-service-metadata"/>. Examples of sub-TLV definitions include:</t>
        <ul spacing="normal">
          <li>
            <t>Delay sub-TLV: The processing delay within edge sites and the transmission delay in the network.</t>
          </li>
          <li>
            <t>Site Preference sub-TLV: The priority of edge sites.</t>
          </li>
          <li>
            <t>Load sub-TLV: The available compute capability of each edge site.</t>
          </li>
        </ul>
        <t>Other sub-TLVs can be defined incrementally according to the CATS metrics defined in <xref target="I-D.ietf-cats-metric-definition"/>.</t>
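        <t>As an illustration only, the sub-TLV idea above can be exercised with a short script. The type codes and the 1-octet type, 1-octet length, 4-octet value layout below are assumptions made for this sketch; the actual encodings are specified by the BGP extension documents, not here.</t>
        <figure>
          <name>Sub-TLV Encoding Sketch (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
import struct

# Hypothetical type codes for the example sub-TLVs; the real values
# are assigned by the BGP extension specification, not by this sketch.
DELAY_SUBTLV = 1
SITE_PREFERENCE_SUBTLV = 2
LOAD_SUBTLV = 3

def encode_subtlv(tlv_type, value):
    """Pack a sub-TLV as 1-octet type, 1-octet length, 4-octet value."""
    payload = struct.pack("!I", value)
    return struct.pack("!BB", tlv_type, len(payload)) + payload

def decode_subtlv(data):
    """Unpack one sub-TLV; return (type, value, remaining bytes)."""
    tlv_type, length = struct.unpack("!BB", data[:2])
    (value,) = struct.unpack("!I", data[2:2 + length])
    return tlv_type, value, data[2 + length:]

# Example: advertise a 12 ms delay metric for an edge site.
wire = encode_subtlv(DELAY_SUBTLV, 12)
]]></sourcecode>
        </figure>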
        <t>In the hybrid approach, the metric distribution follows the control plane settings of both the centralized and the distributed approach, according to which metrics are required to be distributed centrally and which in a distributed manner.</t>
        <t>In terms of the forwarding plane, SRv6 tunnels are enabled between CATS ingress nodes and CATS egress nodes.</t>
        <t>In all of the approaches, service flows are routed towards service instances using anycast IP addresses.</t>
      </section>
      <section anchor="topology">
        <name>Topology</name>
        <t>For all of the approaches, CATS performance is tested in a laboratory environment using a single-domain realization; that is, all CATS routers are within the same AS. There are no further special requirements for specific topologies.</t>
      </section>
      <section anchor="device-configuration">
        <name>Device Configuration</name>
        <t>Before implementation, some pre-configuration needs to be settled.
Firstly, in all of the approaches, the application-plane functionality must be set up. CATS services must be deployed on the edge servers before the implementation, and the hosts that send service requests must also be set up.</t>
        <t>Secondly, the CATS metrics collector must be set up.
In the centralized approach and the hybrid approach, the CATS metrics collector first needs to be set up in the edge server manager. A typical example of the collector is the monitoring component of Kubernetes, which can periodically collect different levels of CATS metrics. Then the connection between the edge server manager and the SDN controller must be established; one example is to use a RESTful API or the ALTO protocol for CATS metrics publication and subscription.</t>
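        <t>The publication/subscription step can be illustrated with a minimal in-process sketch. In a real deployment this exchange would run over a RESTful API or the ALTO protocol between the edge server manager and the SDN controller; the class and method names below are hypothetical.</t>
        <figure>
          <name>Metrics Publish/Subscribe Sketch (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
class EdgeServerManager:
    """Collects CATS metrics and publishes them to subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, site, metrics):
        # Push one metric update, tagged with the edge site it came
        # from, to every subscriber (here, the SDN controller).
        for callback in self.subscribers:
            callback(site, metrics)

class SDNController:
    """Stores the latest CATS metrics keyed by edge site."""

    def __init__(self):
        self.metrics_db = {}

    def on_metrics(self, site, metrics):
        self.metrics_db[site] = metrics

manager = EdgeServerManager()
controller = SDNController()
manager.subscribe(controller.on_metrics)
manager.publish("ES1", {"cpu_load": 0.4, "delay_ms": 12})
]]></sourcecode>
        </figure>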
        <t>In the distributed approach and the hybrid approach, the CATS metrics collector needs to be set up in each edge site. In this benchmark test, the collector is set up in each edge server that is directly connected with a CATS egress node. Implementors can use plugin software to collect CATS metrics. Then each edge server must establish a BGP peering with its directly connected CATS egress node; a BGP speaker is set up on each edge server.</t>
        <t>Thirdly, the control plane and forwarding plane functions must be pre-configured. In the centralized approach and the hybrid approach, the SDN controller needs to be pre-configured, and the interface between the SDN controller and the CATS routers must be tested to validate that control plane policies can be correctly downloaded and that metrics from the network side can be correctly uploaded. In the distributed approach and the hybrid approach, the control plane setup consists of the IBGP connections between CATS routers. For both of these approaches, the forwarding plane functions, i.e., the SRv6 tunnels, must be pre-established and tested.</t>
      </section>
    </section>
    <section anchor="reporting-format">
      <name>Reporting Format</name>
      <t>CATS benchmarking tests focus on data that are measurable and controllable.</t>
      <ul spacing="normal">
        <li>
          <t>Control plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>SDN controller types and versions;</t>
            </li>
            <li>
              <t>northbound and southbound protocols.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>Forwarding plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>forwarding plane protocols (e.g., SRv6);</t>
            </li>
            <li>
              <t>the number of routers;</t>
            </li>
            <li>
              <t>the number of edge servers;</t>
            </li>
            <li>
              <t>the number of links;</t>
            </li>
            <li>
              <t>edge server types, versions.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>Application plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>Traffic types and configurations.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>CATS Metrics:
Each test MUST clearly state which CATS metrics it uses for traffic steering, according to the CATS metrics definitions in <xref target="I-D.ietf-cats-metric-definition"/>.</t>
          <ul spacing="normal">
            <li>
              <t>For L0 metrics, benchmarking tests MUST declare metric types, units, statistics (e.g., mean, max, min), and formats. Benchmarking tests SHOULD also declare metric sources (e.g., nominal, estimation, aggregation).</t>
            </li>
            <li>
              <t>For L1 metrics, benchmarking tests MUST declare metric types, statistics, normalization functions, and aggregation functions. Benchmarking tests SHOULD also declare metric sources (e.g., nominal, estimation, aggregation).</t>
            </li>
            <li>
              <t>For L2 metrics, benchmarking tests MUST declare metric types and normalization functions.</t>
            </li>
          </ul>
          <t>
<strong>Detailed normalization functions and aggregation functions will be listed in Appendix A (TBD).</strong></t>
        </li>
      </ul>
    </section>
    <section anchor="benchmarking-tests">
      <name>Benchmarking Tests</name>
      <section anchor="cats-metrics-collection-and-distribution">
        <name>CATS Metrics Collection and Distribution</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS metrics can be correctly collected and distributed to the DUT (the SDN controller in the centralized approach and the CATS ingress node in the distributed approach) as anticipated, within a pre-defined time interval for CATS metrics updates.</t>
          </li>
          <li>
            <t>Procedure:</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, the edge server manager periodically retrieves CATS metrics from every edge server that can provide the CATS service. It then passes the information to the SDN controller through publish/subscribe methods. Implementors should then log into the SDN controller to check whether it receives the CATS metrics from the edge server manager.</t>
        <t>In the distributed approach and the hybrid approach, the collector within each edge server periodically retrieves the CATS metrics of that edge server. It then distributes the metrics to the CATS egress node to which it is directly connected. Each CATS egress node further distributes the metrics to the CATS ingress node. Implementors then log into the CATS ingress node to check whether metrics from all edge servers have been received.</t>
        <t>For all of these approaches, to test whether metrics can be received within the pre-defined time interval, implementors could compare, from the logs, the timestamp when the current metric is received with the timestamp when the last metric arrived. If the time difference equals the pre-defined metric update time interval, then CATS metrics collection works correctly.</t>
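        <t>The interval check described above can be sketched as follows. The function takes the metric arrival timestamps (in seconds) extracted from the DUT's logs; the 5% jitter tolerance is an assumption made for this sketch, since collection and logging introduce small timing variations.</t>
        <figure>
          <name>Metric Update Interval Check (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
def intervals_ok(timestamps, interval, tolerance=0.05):
    """Return True if every inter-arrival gap between consecutive
    metric updates lies within interval * (1 +/- tolerance)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    low = interval * (1 - tolerance)
    high = interval * (1 + tolerance)
    return all(low <= gap <= high for gap in gaps)

# Updates expected every 10 s; a 0.2 s logging jitter is tolerated.
collection_correct = intervals_ok([0.0, 10.0, 20.2, 30.1], 10.0)
]]></sourcecode>
        </figure>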
      </section>
      <section anchor="session-continuity">
        <name>Session continuity</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that traffic can be correctly steered to the selected service instances and that TCP sessions are maintained for specific service flows.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests. In the distributed approach, log into the CATS ingress node and check that route entries have been created in the forwarding table for the service instances. Implementors can see that a specific packet that hits the session table is matched to a target service instance. Then manually increase the load of the target edge server. From the host side, one can see that the service continues normally, while in the interface of the CATS router, one can see that the previous session table entry ages out successfully, which means that CATS has steered the service traffic to another service instance.</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, implementors log into the management interface of the SDN controller to check routes and sessions.</t>
      </section>
      <section anchor="end-to-end-service-latency">
        <name>End-to-end Service Latency</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS works properly under the pre-defined test conditions and to prove its effectiveness in guaranteeing end-to-end service latency.</t>
          </li>
          <li>
            <t>Procedure:
Pre-define the CATS metrics distribution time to be T_1 seconds. Enable a host to send service requests. In the distributed approach, log into the CATS ingress node to check whether route entries have been successfully created. Suppose the currently selected edge server is ES1. Then manually increase the load of ES1 and check the CATS ingress node again. If the selected instance has changed to ES2, CATS works properly. Then print the logs of the CATS ingress router to check the time at which it updates the route entries. The time difference delta_T between when the new route entry first appears and when the previous route entry last appears should equal T_1. Then check whether the service SLA can be satisfied.</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, implementors log into the management interface of the SDN controller to check routes and sessions.</t>
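        <t>The delta_T computation above can be sketched as follows, assuming each parsed log record is a (timestamp in seconds, selected next hop) pair; the record format is hypothetical and depends on the router implementation.</t>
        <figure>
          <name>Route Update delta_T Computation (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
def route_switch_delta(records, old_hop, new_hop):
    """Return the time between the last log record that points to
    old_hop and the first record that points to new_hop, or None
    if either entry is missing from the logs."""
    last_old = max((t for t, hop in records if hop == old_hop), default=None)
    first_new = min((t for t, hop in records if hop == new_hop), default=None)
    if last_old is None or first_new is None:
        return None
    return first_new - last_old

# Route entries logged every 10 s; the switch from ES1 to ES2
# should therefore yield delta_T == T_1 == 10 s.
records = [(0.0, "ES1"), (10.0, "ES1"), (20.0, "ES1"), (30.0, "ES2")]
delta_t = route_switch_delta(records, "ES1", "ES2")
]]></sourcecode>
        </figure>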
      </section>
      <section anchor="system-utilization">
        <name>System Utilization</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS achieves a better load balancing effect at the server side than a simple network load balancing mechanism, for example, ECMP.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests and enable ECMP at the network side. Then measure the bias of the CPU utilization among the different edge servers over a time duration delta_T_2. Stop the services. Then enable the same number of service requests and enable CATS at the network side (the distributed approach, the centralized approach, and the hybrid approach are tested separately). Measure the bias of the CPU utilization among the same edge servers over the same time duration delta_T_2. Compare the bias values from the two test setups.</t>
          </li>
        </ul>
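        <t>The text above does not fix a formal definition of the utilization "bias"; one plausible choice, used in the sketch below, is the population standard deviation of the per-server CPU utilization measured over delta_T_2. The utilization figures are hypothetical.</t>
        <figure>
          <name>Server-side Utilization Bias Comparison (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
from statistics import pstdev

def utilization_bias(cpu_utils):
    """Spread of CPU utilization (fractions in 0..1) across servers;
    a smaller value indicates a more balanced load."""
    return pstdev(cpu_utils)

# Hypothetical per-server measurements over the same duration:
ecmp_bias = utilization_bias([0.90, 0.40, 0.35])  # ECMP enabled
cats_bias = utilization_bias([0.60, 0.55, 0.50])  # CATS enabled
cats_balances_better = cats_bias < ecmp_bias
]]></sourcecode>
        </figure>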
      </section>
      <section anchor="load-balancing-variance">
        <name>Load Balancing Variance</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To test the load balancing variance under different path selection algorithms, in order to evaluate the traffic steering effectiveness of these algorithms. A lower variance value means that the algorithm performs better for traffic steering. The algorithms compared include ECMP, global-min, and Proportional-Integral-Derivative (PID).</t>
          </li>
          <li>
            <t>Procedure:
There are three test rounds, one for each of the three path selection algorithms. In the distributed approach, pre-configure the control plane function C-PS in the CATS ingress router, while in the centralized and hybrid approaches, the path selection function is configured in the SDN controller. For each test round, implementors initiate the same number of service flows towards multiple service edge sites. For example, the number of service flows is set to 100, while the number of service edge sites is 3. Implementors need to record the number of service flows at each site and calculate the load balancing variance according to the following equations:</t>
          </li>
        </ul>
        <t>n_avg = (n_s1 + n_s2 + n_s3) / 3</t>
        <t>var_alg = (abs(n_s1 - n_avg) + abs(n_s2 - n_avg) + abs(n_s3 - n_avg)) / (n_avg * 3)</t>
        <t>Here, 'n_s1', 'n_s2', and 'n_s3' refer to the number of service flows that are steered to the corresponding edge site, while 'n_avg' refers to the average number of service flows directed to each site. 'var_alg' refers to the average variance of service flows among all three edge sites, which is used to evaluate the load balancing effectiveness of each algorithm. It is calculated by adding the three absolute values and then dividing the sum by three times the average number of service flows. A lower variance value means a better load balancing effect.</t>
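        <t>The two equations above can be transcribed directly; the sketch below generalizes them to any number of edge sites.</t>
        <figure>
          <name>Load Balancing Variance Computation (Illustrative)</name>
          <sourcecode type="python"><![CDATA[
def load_balancing_variance(flows):
    """flows: per-site service flow counts, e.g. [n_s1, n_s2, n_s3].
    Returns var_alg as defined by the equations above."""
    n_avg = sum(flows) / len(flows)
    return sum(abs(n - n_avg) for n in flows) / (n_avg * len(flows))

# 100 service flows distributed over 3 edge sites:
var_even = load_balancing_variance([34, 33, 33])    # near-perfect split
var_skewed = load_balancing_variance([70, 20, 10])  # poorly balanced
]]></sourcecode>
        </figure>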
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The benchmarking characterization described in this document is constrained to a controlled environment (such as a laboratory) and includes controlled stimuli. The network under benchmarking MUST NOT be connected to production networks.
Beyond these, there are no specific security considerations within the scope of this document.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
        <reference anchor="RFC4271">
          <front>
            <title>A Border Gateway Protocol 4 (BGP-4)</title>
            <author fullname="Y. Rekhter" initials="Y." role="editor" surname="Rekhter"/>
            <author fullname="T. Li" initials="T." role="editor" surname="Li"/>
            <author fullname="S. Hares" initials="S." role="editor" surname="Hares"/>
            <date month="January" year="2006"/>
            <abstract>
              <t>This document discusses the Border Gateway Protocol (BGP), which is an inter-Autonomous System routing protocol.</t>
              <t>The primary function of a BGP speaking system is to exchange network reachability information with other BGP systems. This network reachability information includes information on the list of Autonomous Systems (ASes) that reachability information traverses. This information is sufficient for constructing a graph of AS connectivity for this reachability from which routing loops may be pruned, and, at the AS level, some policy decisions may be enforced.</t>
              <t>BGP-4 provides a set of mechanisms for supporting Classless Inter-Domain Routing (CIDR). These mechanisms include support for advertising a set of destinations as an IP prefix, and eliminating the concept of network "class" within BGP. BGP-4 also introduces mechanisms that allow aggregation of routes, including aggregation of AS paths.</t>
              <t>This document obsoletes RFC 1771. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4271"/>
          <seriesInfo name="DOI" value="10.17487/RFC4271"/>
        </reference>
        <reference anchor="RFC8456">
          <front>
            <title>Benchmarking Methodology for Software-Defined Networking (SDN) Controller Performance</title>
            <author fullname="V. Bhuvaneswaran" initials="V." surname="Bhuvaneswaran"/>
            <author fullname="A. Basil" initials="A." surname="Basil"/>
            <author fullname="M. Tassinari" initials="M." surname="Tassinari"/>
            <author fullname="V. Manral" initials="V." surname="Manral"/>
            <author fullname="S. Banks" initials="S." surname="Banks"/>
            <date month="October" year="2018"/>
            <abstract>
              <t>This document defines methodologies for benchmarking the control-plane performance of Software-Defined Networking (SDN) Controllers. The SDN Controller is a core component in the SDN architecture that controls the behavior of the network. SDN Controllers have been implemented with many varying designs in order to achieve their intended network functionality. Hence, the authors of this document have taken the approach of considering an SDN Controller to be a black box, defining the methodology in a manner that is agnostic to protocols and network services supported by controllers. This document provides a method for measuring the performance of all controller implementations.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8456"/>
          <seriesInfo name="DOI" value="10.17487/RFC8456"/>
        </reference>
        <reference anchor="RFC8986">
          <front>
            <title>Segment Routing over IPv6 (SRv6) Network Programming</title>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="P. Camarillo" initials="P." role="editor" surname="Camarillo"/>
            <author fullname="J. Leddy" initials="J." surname="Leddy"/>
            <author fullname="D. Voyer" initials="D." surname="Voyer"/>
            <author fullname="S. Matsushima" initials="S." surname="Matsushima"/>
            <author fullname="Z. Li" initials="Z." surname="Li"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>The Segment Routing over IPv6 (SRv6) Network Programming framework enables a network operator or an application to specify a packet processing program by encoding a sequence of instructions in the IPv6 packet header.</t>
              <t>Each instruction is implemented on one or several nodes in the network and identified by an SRv6 Segment Identifier in the packet.</t>
              <t>This document defines the SRv6 Network Programming concept and specifies the base set of SRv6 behaviors that enables the creation of interoperable overlays with underlay optimization.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8986"/>
          <seriesInfo name="DOI" value="10.17487/RFC8986"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="12" month="October" year="2025"/>
            <abstract>
              <t>Distributed computing is a computing pattern that service providers can follow and use to achieve better service response time and optimized energy consumption.  In such a distributed computing environment, compute intensive and delay sensitive services can be improved by utilizing computing resources hosted in various computing facilities.  Ideally, compute services are balanced across servers and network resources to enable higher throughput and lower response time.  To achieve this, the choice of server and network resources should consider metrics that are oriented towards compute capabilities and resources instead of simply dispatching the service requests in a static way or optimizing solely on connectivity metrics.  The process of selecting servers or service instance locations, and of directing traffic to them on chosen network resources is called "Computing-Aware Traffic Steering" (CATS).</t>
              <t>This document provides the problem statement and the typical scenarios for CATS, which shows the necessity of considering more factors when steering traffic to the appropriate computing resource to better meet the customer's expectations.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-08"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="3" month="November" year="2025"/>
            <abstract>
              <t>This document describes a framework for Computing-Aware Traffic Steering (CATS).  Specifically, the document identifies a set of CATS components, describes their interactions, and provides illustrative workflows of the control and data planes.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-18"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="20" month="October" year="2025"/>
            <abstract>
              <t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources.  In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of the resources.  Metrics from network domain have been in use in network systems for a long time.  This document defines a set of metrics from the computing domain used for CATS.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-04"/>
        </reference>
        <reference anchor="I-D.ietf-idr-5g-edge-service-metadata">
          <front>
            <title>BGP Extension for 5G Edge Service Metadata</title>
            <author fullname="Linda Dunbar" initials="L." surname="Dunbar">
              <organization>Futurewei</organization>
            </author>
            <author fullname="Kausik Majumdar" initials="K." surname="Majumdar">
              <organization>Oracle</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <date day="18" month="September" year="2025"/>
            <abstract>
              <t>This draft describes a new Edge Metadata Path Attribute and some Sub-TLVs for egress routers to advertise the Edge Metadata about the attached edge services (ES).  The edge service Metadata can be used by the ingress routers in the 5G Local Data Network to make path selections not only based on the routing cost but also the running environment of the edge services.  The goal is to improve latency and performance for 5G edge services.</t>
              <t>The extension enables an edge service at one specific location to be more preferred than the others with the same IP address (ANYCAST) to receive data flow from a specific source, like a specific User Equipment (UE).</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-idr-5g-edge-service-metadata-30"/>
        </reference>
      </references>
    </references>



  </back>

</rfc>
