<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-yl-bmwg-cats-00" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.28.1 -->
  <front>
    <title abbrev="BM for CATS">Benchmarking Methodology for Computing-aware Traffic Steering</title>
    <seriesInfo name="Internet-Draft" value="draft-yl-bmwg-cats-00"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="P." surname="Liu" fullname="Peng Liu">
      <organization>China Mobile</organization>
      <address>
        <email>liupengyjy@chinamobile.com</email>
      </address>
    </author>
    <date year="2025" month="June" day="09"/>
    <workgroup>bmwg</workgroup>
    <abstract>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach based on the awareness of both computing and network information. This document proposes benchmarking methodologies for CATS.</t>
    </abstract>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that considers both computing and network metrics in order to select appropriate service instances. Latency-sensitive, throughput-sensitive, and compute-intensive applications, such as those described in <xref target="I-D.ietf-cats-usecases-requirements"/>, need CATS to guarantee effective instance selection. A general CATS framework <xref target="I-D.ietf-cats-framework"/> provides implementation guidance. However, given that many computing and network metrics can be selected for traffic steering, as proposed in <xref target="I-D.ietf-cats-metric-definition"/>, benchmarking test methods are required to validate the effectiveness of the different CATS metrics. In addition, there are different deployment approaches, i.e., the distributed approach and the centralized approach, and there are multiple possible objectives for instance selection, for example, selecting the instance with the lowest end-to-end latency or the highest system utilization. The benchmarking methodology proposed in this document is essential for guiding CATS implementations.</t>
    </section>
    <section anchor="definition-of-terms">
      <name>Definition of Terms</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>CATS: Computing-Aware Traffic Steering</t>
        </li>
        <li>
          <t>C-PS: CATS Path Selector</t>
        </li>
      </ul>
      <t>This document further defines:</t>
      <ul spacing="normal">
        <li>
          <t>CATS Router: A router that supports CATS mechanisms for traffic engineering.</t>
        </li>
        <li>
          <t>ECMP: Equal-Cost Multi-Path routing</t>
        </li>
      </ul>
    </section>
    <section anchor="test-methodology">
      <name>Test Methodology</name>
      <section anchor="test-setup">
        <name>Test Setup</name>
        <t>The test setup is in general compliant with <xref target="RFC2544"/>. As mentioned in the introduction, there are two basic approaches for CATS deployment: the centralized approach and the distributed approach.
The difference lies primarily in how CATS metrics are collected and distributed in the network and, accordingly, where the CATS Path Selector (C-PS) is placed to make decisions, as defined in <xref target="I-D.ietf-cats-framework"/>.</t>
        <section anchor="test-setup-centralized-approach">
          <name>Test Setup - Centralized Approach</name>
          <t><xref target="centralized-test-setup"/> shows the test setup of the centralized approach to implement CATS. The centralized test setup is similar to the Software-Defined Networking (SDN) standalone-mode test setup defined in <xref target="RFC8456"/>. The DUT is the SDN controller. In the centralized approach, the SDN controller takes the roles of both CATS metrics collection and decision making for instance selection and traffic steering. The SDN controller is connected to the application plane via interface 2 (I2) and to the edge server manager via interface 4 (I4). The southbound interface (I1) of the SDN controller is connected to the forwarding plane. Service requests are sent from the application to the SDN controller through I2. CATS metrics are collected from the edge server manager via I4. The traffic steering policies are configured through I1.
In the forwarding plane, CATS router 1 serves as the ingress node and is connected to the host, which is an application-plane emulator. CATS router 2 and CATS router 3 serve as the egress nodes and are each connected to an edge server. Both edge servers are connected to the edge server manager via I3, an internal interface for CATS metrics collection within edge sites.</t>
          <figure anchor="centralized-test-setup">
            <name>Centralized Test Setup</name>
            <artwork><![CDATA[
      +-----------------------------------------------+
      |       Application-Plane Test Emulator         |
      |                                               |
      |   +-----------------+      +-------------+    |
      |   |   Application   |      |   Service   |    |
      |   +-----------------+      +-------------+    |
      |                                               |
      +---------------+(I2)---------------------------+
                      |
                      | (Northbound Interface)
           +-------------------------------+    +-------------+
           |       +----------------+      |    |             |
           |       | SDN Controller |      |    |     Edge    | 
           |       +----------------+      |----|    Server   |
           |                               | I4 |    Manager  |
           |    Device Under Test (DUT)    |    |             | 
           +-------------------------------+    +---------+---+
                      | (Southbound Interface)            |
                      |                                   |
      +---------------+(I1)-------------------------+     |
      |                                             |     |
      |         +------------+                      |     |
      |         |    CATS    |                      |     |
      |         |   Router  1|                      |     | I3
      |         +------------+                      |     |
      |         /            \                      |     |
      |        /              \                     |     |
      |    l0 /                \ ln                 |     |
      |      /                  \                   |     |
      |    +------------+  +------------+           |     |
      |    |    CATS    |  |    CATS    |           |     |
      |    |  Router 2  |..|   Router 3 |           |     |
      |    +------------+  +------------+           |     |
      |          |                |                 |     |
      |    +------------+  +------------+           |     |
      |    |   Edge     |  |   Edge     |           |     |
      |    |  Server 1  |  |  Server 2  |           |     |
      |    |   (ES1)    |  |   (ES2)    |           |     |
      |    +------------+  +------------+           |     |
      |          |               |                  |     |
      |          +---------------+------------------------+    
      |     Forwarding-Plane Test Emulator          |
      +--------------------------------------------+
]]></artwork>
          </figure>
        </section>
        <section anchor="test-setup-distributed-approach">
          <name>Test Setup - Distributed Approach</name>
          <t><xref target="distributed-test-setup"/> shows the test setup of the distributed approach to implement CATS. In the distributed test setup, the DUT is the group of CATS routers, since the decision maker is the CATS ingress node, namely CATS router 1. The CATS egress nodes, CATS routers 2 and 3, take the role of collecting CATS metrics from the edge servers and distributing these metrics towards the other CATS routers. The application-plane emulator is connected to the control-plane and forwarding-plane test emulator through interface 1 (I1).</t>
          <figure anchor="distributed-test-setup">
            <name>Distributed Test Setup</name>
            <artwork><![CDATA[
      +---------------------------------------------+
      |       Application-Plane Test Emulator       |
      |                                             |
      |   +-----------------+      +-------------+  |
      |   |   Application   |      |   Service   |  |
      |   +-----------------+      +-------------+  |
      |                                             |
      +---------------+-----------------------------+
                      |  
                      |                                   
      +---------------+(I1)-------------------------+     
      |                                             |
      |   +--------------------------------+        |
      |   |      +------------+            |        |
      |   |      |    CATS    |            |        |
      |   |      |   Router  1|            |        |
      |   |      +------------+            |        | 
      |   |      /            \            |        |
      |   |     /              \           |        |
      |   | l0 /                \ ln       |        |
      |   |   /                  \         |        |
      |   | +------------+  +------------+ |        |
      |   | |    CATS    |  |    CATS    | |        |
      |   | |  Router 2  |..|   Router 3 | |        |
      |   | +------------+  +------------+ |        |
      |   |      Device Under Test (DUT)   |        |
      |   +--------------------------------+        |
      |        |                |                   |
      |    +------------+  +------------+           |
      |    |   Edge     |  |   Edge     |           |
      |    |  Server 1  |  |  Server 2  |           |
      |    |   (ES1)    |  |   (ES2)    |           |
      |    +------------+  +------------+           |       
      |           Control-Plane and                 |
      |      Forwarding-Plane Test Emulator         |
      +--------------------------------------------+
]]></artwork>
          </figure>
        </section>
      </section>
      <section anchor="control-plane-and-forwarding-plane-support">
        <name>Control Plane and Forwarding Plane Support</name>
        <t>In the centralized approach, both the control plane and the forwarding plane follow the Segment Routing paradigm, i.e., SRv6 <xref target="RFC8986"/>. The SDN controller configures SRv6 policies based on its awareness of CATS metrics, and traffic is steered through SRv6 tunnels built between CATS ingress nodes and CATS egress nodes. In the control plane, CATS metrics are collected through a RESTful API between the SDN controller and the edge server manager.
In the distributed approach, EBGP <xref target="RFC4271"/> sessions are established between CATS egress nodes and edge servers, and IBGP <xref target="RFC4271"/> sessions are established between CATS egress nodes and CATS ingress nodes. BGP is chosen to distribute CATS metrics within the network domain, from the edge servers to the CATS ingress node. Carrying CATS metrics is implemented through BGP extensions, following the definitions in <xref target="I-D.ietf-idr-5g-edge-service-metadata"/>. Examples of sub-TLVs are:</t>
        <ul spacing="normal">
          <li>
            <t>Delay sub-TLV: The processing delay within edge sites and the transmission delay in the network.</t>
          </li>
          <li>
            <t>Site Preference sub-TLV: The priority of edge sites.</t>
          </li>
          <li>
            <t>Load sub-TLV: The available compute capability of each edge site.</t>
          </li>
        </ul>
        <t>Other sub-TLVs can be defined incrementally according to the CATS metrics defined in <xref target="I-D.ietf-cats-metric-definition"/>.</t>
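        <t>As an illustration of the sub-TLV pattern above, the following minimal sketch encodes and decodes a type/length/value byte layout. The codepoints and the fixed 4-octet value field are assumptions made for this sketch, not values assigned in <xref target="I-D.ietf-idr-5g-edge-service-metadata"/>.</t>
        <sourcecode type="python"><![CDATA[
import struct

# Hypothetical codepoints for illustration only; real values are
# assigned in the BGP extension draft, not here.
DELAY_SUBTLV = 1
SITE_PREF_SUBTLV = 2
LOAD_SUBTLV = 3

def encode_subtlv(tlv_type: int, value: int) -> bytes:
    """Encode one sub-TLV: 1-octet type, 1-octet length, 4-octet value."""
    payload = struct.pack("!I", value)
    return struct.pack("!BB", tlv_type, len(payload)) + payload

def decode_subtlv(data: bytes):
    """Decode one sub-TLV and return (type, value, remaining bytes)."""
    tlv_type, length = struct.unpack("!BB", data[:2])
    (value,) = struct.unpack("!I", data[2:2 + length])
    return tlv_type, value, data[2 + length:]

# Example: delay 20 ms, site preference 100, load 75 (available percent).
blob = b"".join(encode_subtlv(t, v) for t, v in
                [(DELAY_SUBTLV, 20), (SITE_PREF_SUBTLV, 100),
                 (LOAD_SUBTLV, 75)])
]]></sourcecode>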
        <t>In terms of the forwarding plane, SRv6 tunnels are enabled between CATS ingress nodes and CATS egress nodes.
In both approaches, service flows are routed towards service instances using anycast IP addresses.</t>
      </section>
      <section anchor="topology">
        <name>Topology</name>
        <t>For both approaches, CATS performance is tested in a laboratory environment with a single-domain realization, that is, all CATS routers are within the same Autonomous System (AS). There are no further requirements for specific topologies.</t>
      </section>
      <section anchor="device-configuration">
        <name>Device Configuration</name>
        <t>Before the tests, some pre-configuration is required.
First, in both approaches, the application-plane functionalities must be in place: CATS services must be set up on the edge servers before the test, and the hosts that send service requests must also be set up.</t>
        <t>Second, the CATS metrics collector must be set up.
In the centralized approach, the CATS metrics collector first needs to be set up in the edge server manager. A typical example of such a collector is the monitoring component of Kubernetes, which can periodically collect CATS metrics at different levels. Then the connection between the edge server manager and the SDN controller must be established; one example is a RESTful API for CATS metrics publication and subscription.
In the distributed approach, a CATS metrics collector needs to be set up at each edge site. In this benchmark test, the collector is set up on each edge server that is directly connected to a CATS egress node. Implementors can use plugin software to collect CATS metrics. Each edge server must then establish a BGP peering with its directly connected CATS egress node; a BGP speaker is set up on each edge server.</t>
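        <t>The collection cycle described above can be sketched as follows. The record layout and the stub sampler are assumptions for illustration; a real deployment would use the monitoring plugin's own schema (e.g., that of the Kubernetes monitoring component mentioned above).</t>
        <sourcecode type="python"><![CDATA[
import json
import time

def collect_metrics(sample_fn):
    """One collection cycle: sample raw (L0) metrics and wrap them in a
    record that the edge server manager or a BGP speaker could forward."""
    cpu, mem, qps = sample_fn()
    return {
        "timestamp": time.time(),
        "metrics": {"cpu_util": cpu, "mem_util": mem, "qps": qps},
    }

# Stub sampler standing in for the real collector plugin.
record = collect_metrics(lambda: (0.42, 0.61, 1200))
payload = json.dumps(record)  # body for the publish/subscribe API call
]]></sourcecode>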
        <t>Third, the control-plane and forwarding-plane functions must be pre-configured. In the centralized approach, the SDN controller needs to be pre-configured, and the interface between the SDN controller and the CATS routers must be tested to validate that control-plane policies can be correctly downloaded and that metrics from the network side can be correctly uploaded. In the distributed approach, the control-plane setup consists of the IBGP connections between the CATS routers. For both approaches, the forwarding-plane functions, i.e., the SRv6 tunnels, must be pre-established and tested.</t>
      </section>
    </section>
    <section anchor="reporting-format">
      <name>Reporting Format</name>
      <t>The benchmarking report focuses on data that are measurable and controllable, including:</t>
      <ul spacing="normal">
        <li>
          <t>Hardware and software versions of the CATS routers, edge servers, and the SDN controller.</t>
        </li>
        <li>
          <t>The three levels of CATS metrics.</t>
        </li>
      </ul>
      <t>For L0, the benchmarking tests include resource-related metrics such as CPU utilization, memory utilization, throughput, and delay, as well as service-related metrics such as queries per second (QPS).
For L1 and L2 metrics, the benchmarking tests include all normalized metrics.</t>
    </section>
    <section anchor="benchmarking-tests">
      <name>Benchmarking Tests</name>
      <section anchor="cats-metrics-collection-and-distribution">
        <name>CATS Metrics Collection and Distribution</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS metrics can be correctly collected and distributed to the DUT, i.e., the SDN controller in the centralized approach and the CATS ingress node in the distributed approach.</t>
          </li>
          <li>
            <t>Procedure:</t>
          </li>
        </ul>
        <t>In the centralized approach, the edge server manager periodically retrieves CATS metrics from every edge server that provides CATS services. It then passes the information to the SDN controller via a publish/subscribe mechanism. Implementors should then log into the SDN controller to check whether it receives the CATS metrics from the edge server manager.
In the distributed approach, the collector within each edge server periodically retrieves the CATS metrics of that edge server and distributes them to its directly connected CATS egress node. Each CATS egress node then further distributes the metrics to the CATS ingress node. Implementors then log into the CATS ingress node to check whether metrics from all edge servers have been received.</t>
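        <t>The pass criterion of this check can be stated as a simple completeness predicate; the dictionary layout below is an assumption made for the sketch.</t>
        <sourcecode type="python"><![CDATA[
def metrics_complete(received: dict, expected_servers: set) -> bool:
    """Pass criterion: the decision point (SDN controller or CATS
    ingress node) holds a metric record for every expected edge server."""
    return expected_servers <= set(received)

# Records as seen at the decision point (illustrative values).
received = {"ES1": {"cpu_util": 0.4}, "ES2": {"cpu_util": 0.7}}
]]></sourcecode>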
      </section>
      <section anchor="session-continuity">
        <name>Session continuity</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that traffic can be correctly steered to the selected service instances and that TCP sessions are maintained for specific service flows.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests. In the distributed approach, log into the CATS ingress node and check in the forwarding table that route entries have been created for the service instances. Implementors can observe that a specific packet that hits the session table is matched to a target service instance. Then manually increase the load of the target edge server. From the host side, the service proceeds normally, while on the CATS router one can observe the previous session-table entry aging out successfully, which means CATS has steered the service traffic to another service instance without breaking existing sessions.
In the centralized approach, implementors log into the management interface of the SDN controller to check routes and sessions.</t>
          </li>
        </ul>
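        <t>The observed behavior, i.e., established sessions stick to their instance until the session-table entry ages out, can be modeled as follows. The idle-timeout model and the method names are assumptions made for this sketch, not the behavior of any particular router implementation.</t>
        <sourcecode type="python"><![CDATA[
class SessionTable:
    """Minimal model of flow affinity: a flow that hits a live entry
    keeps its instance; once the entry ages out, the flow follows the
    current CATS decision."""

    def __init__(self, idle_timeout: float):
        self.idle_timeout = idle_timeout
        self.entries = {}  # flow id -> (instance, last hit time)

    def lookup(self, flow, current_best, now):
        entry = self.entries.get(flow)
        if entry and now - entry[1] <= self.idle_timeout:
            instance = entry[0]        # session continuity
        else:
            instance = current_best    # steer per latest CATS decision
        self.entries[flow] = (instance, now)
        return instance
]]></sourcecode>
        <t>For example, a flow pinned to ES1 stays there even after CATS starts preferring ES2, and only moves to ES2 after its session-table entry has aged out.</t>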
      </section>
      <section anchor="latency">
        <name>Latency</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS works properly under the pre-defined test conditions and to prove its effectiveness in guaranteeing service end-to-end latency.</t>
          </li>
          <li>
            <t>Procedure:
Pre-define the CATS metrics distribution interval to be T_1 seconds. Enable a host to send service requests. In the distributed approach, log into the CATS ingress node to check that route entries have been successfully created. Suppose the currently selected edge server is ES1. Then manually increase the load of ES1 and check the CATS ingress node again: the selected instance should have changed to ES2, showing that CATS works properly. Then print the logs of the CATS ingress router to check when it updated the route entries. The time difference delta_T between when the new route entry first appears and when the previous route entry last appears should be approximately equal to T_1. Then check whether the service SLA can be satisfied.
In the centralized approach, implementors log into the management interface of the SDN controller to check routes and sessions.</t>
          </li>
        </ul>
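        <t>The delta_T check above can be sketched as follows; the values of T_1 and the tolerance are test-design parameters chosen for illustration, not values mandated by this document.</t>
        <sourcecode type="python"><![CDATA[
def metric_propagation_delay(old_entry_last_seen: float,
                             new_entry_first_seen: float) -> float:
    """delta_T: time between the last log line showing the ES1 route
    entry and the first log line showing the ES2 route entry."""
    return new_entry_first_seen - old_entry_last_seen

T_1 = 5.0        # configured CATS metrics distribution interval, seconds
TOLERANCE = 0.5  # allowed deviation; a test-design choice

# Timestamps as parsed from the CATS ingress router logs (illustrative).
delta_T = metric_propagation_delay(100.0, 105.0)
passed = abs(delta_T - T_1) <= TOLERANCE
]]></sourcecode>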
      </section>
      <section anchor="sytem-utilization">
        <name>System Utilization</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS achieves a better load-balancing effect at the server side than a simple network load-balancing mechanism such as ECMP.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests and enable ECMP at the network side. Then measure the bias of CPU utilization among the different edge servers over a time duration delta_T_2. Stop the services. Then enable the same number of service requests and enable CATS at the network side (the distributed approach and the centralized approach are tested separately). Measure the bias of CPU utilization among the same edge servers over the same duration delta_T_2. Compare the bias values from the two test setups.</t>
          </li>
        </ul>
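        <t>One way to quantify the "bias" above is the standard deviation of per-server CPU utilization, averaged over the samples taken during delta_T_2; this particular statistic is an assumption of the sketch, and implementors may choose another dispersion measure.</t>
        <sourcecode type="python"><![CDATA[
from statistics import pstdev

def utilization_bias(samples):
    """Average, over the measurement window, of the population standard
    deviation of CPU utilization across the edge servers."""
    return sum(pstdev(s) for s in samples) / len(samples)

# Each inner list: CPU utilization of ES1 and ES2 at one sampling
# instant (illustrative values).
ecmp_samples = [[0.90, 0.30], [0.85, 0.35]]
cats_samples = [[0.62, 0.58], [0.60, 0.61]]
]]></sourcecode>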
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The benchmarking characterization described in this document is constrained to a controlled environment (such as a laboratory) and includes controlled stimuli. The network under benchmarking MUST NOT be connected to production networks.
Beyond this, there are no specific security considerations within the scope of this document.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
        <reference anchor="RFC4271">
          <front>
            <title>A Border Gateway Protocol 4 (BGP-4)</title>
            <author fullname="Y. Rekhter" initials="Y." role="editor" surname="Rekhter"/>
            <author fullname="T. Li" initials="T." role="editor" surname="Li"/>
            <author fullname="S. Hares" initials="S." role="editor" surname="Hares"/>
            <date month="January" year="2006"/>
            <abstract>
              <t>This document discusses the Border Gateway Protocol (BGP), which is an inter-Autonomous System routing protocol.</t>
              <t>The primary function of a BGP speaking system is to exchange network reachability information with other BGP systems. This network reachability information includes information on the list of Autonomous Systems (ASes) that reachability information traverses. This information is sufficient for constructing a graph of AS connectivity for this reachability from which routing loops may be pruned, and, at the AS level, some policy decisions may be enforced.</t>
              <t>BGP-4 provides a set of mechanisms for supporting Classless Inter-Domain Routing (CIDR). These mechanisms include support for advertising a set of destinations as an IP prefix, and eliminating the concept of network "class" within BGP. BGP-4 also introduces mechanisms that allow aggregation of routes, including aggregation of AS paths.</t>
              <t>This document obsoletes RFC 1771. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4271"/>
          <seriesInfo name="DOI" value="10.17487/RFC4271"/>
        </reference>
        <reference anchor="RFC8456">
          <front>
            <title>Benchmarking Methodology for Software-Defined Networking (SDN) Controller Performance</title>
            <author fullname="V. Bhuvaneswaran" initials="V." surname="Bhuvaneswaran"/>
            <author fullname="A. Basil" initials="A." surname="Basil"/>
            <author fullname="M. Tassinari" initials="M." surname="Tassinari"/>
            <author fullname="V. Manral" initials="V." surname="Manral"/>
            <author fullname="S. Banks" initials="S." surname="Banks"/>
            <date month="October" year="2018"/>
            <abstract>
              <t>This document defines methodologies for benchmarking the control-plane performance of Software-Defined Networking (SDN) Controllers. The SDN Controller is a core component in the SDN architecture that controls the behavior of the network. SDN Controllers have been implemented with many varying designs in order to achieve their intended network functionality. Hence, the authors of this document have taken the approach of considering an SDN Controller to be a black box, defining the methodology in a manner that is agnostic to protocols and network services supported by controllers. This document provides a method for measuring the performance of all controller implementations.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8456"/>
          <seriesInfo name="DOI" value="10.17487/RFC8456"/>
        </reference>
        <reference anchor="RFC8986">
          <front>
            <title>Segment Routing over IPv6 (SRv6) Network Programming</title>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="P. Camarillo" initials="P." role="editor" surname="Camarillo"/>
            <author fullname="J. Leddy" initials="J." surname="Leddy"/>
            <author fullname="D. Voyer" initials="D." surname="Voyer"/>
            <author fullname="S. Matsushima" initials="S." surname="Matsushima"/>
            <author fullname="Z. Li" initials="Z." surname="Li"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>The Segment Routing over IPv6 (SRv6) Network Programming framework enables a network operator or an application to specify a packet processing program by encoding a sequence of instructions in the IPv6 packet header.</t>
              <t>Each instruction is implemented on one or several nodes in the network and identified by an SRv6 Segment Identifier in the packet.</t>
              <t>This document defines the SRv6 Network Programming concept and specifies the base set of SRv6 behaviors that enables the creation of interoperable overlays with underlay optimization.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8986"/>
          <seriesInfo name="DOI" value="10.17487/RFC8986"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="14" month="February" year="2025"/>
            <abstract>
              <t>   Distributed computing is a computing pattern that service providers
   can follow and use to achieve better service response time and
   optimized energy consumption.  In such a distributed computing
   environment, compute intensive and delay sensitive services can be
   improved by utilizing computing resources hosted in various computing
   facilities.  Ideally, compute services are balanced across servers
   and network resources to enable higher throughput and lower response
   time.  To achieve this, the choice of server and network resources
   should consider metrics that are oriented towards compute
   capabilities and resources instead of simply dispatching the service
   requests in a static way or optimizing solely on connectivity
   metrics.  The process of selecting servers or service instance
   locations, and of directing traffic to them on chosen network
   resources is called "Computing-Aware Traffic Steering" (CATS).

   This document provides the problem statement and the typical
   scenarios for CATS, which shows the necessity of considering more
   factors when steering traffic to the appropriate computing resource
   to better meet the customer's expectations.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-06"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="30" month="April" year="2025"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   components, describes their interactions, and provides illustrative
   workflows of the control and data planes.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-07"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <date day="3" month="March" year="2025"/>
            <abstract>
              <t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of the resources. Metrics from the network domain have been in use in network systems for a long time. This document defines a set of metrics from the computing domain used for CATS.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-02"/>
        </reference>
        <reference anchor="I-D.ietf-idr-5g-edge-service-metadata">
          <front>
            <title>BGP Extension for 5G Edge Service Metadata</title>
            <author fullname="Linda Dunbar" initials="L." surname="Dunbar">
              <organization>Futurewei</organization>
            </author>
            <author fullname="Kausik Majumdar" initials="K." surname="Majumdar">
              <organization>Oracle</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <date day="28" month="April" year="2025"/>
            <abstract>
              <t>This draft describes a new Edge Metadata Path Attribute and some Sub-TLVs for egress routers to advertise the Edge Metadata about the attached edge services (ES). The edge service Metadata can be used by the ingress routers in the 5G Local Data Network to make path selections not only based on the routing cost but also the running environment of the edge services. The goal is to improve latency and performance for 5G edge services.</t>
              <t>The extension enables an edge service at one specific location to be more preferred than the others with the same IP address (ANYCAST) to receive data flow from a specific source, like a specific User Equipment (UE).</t>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-idr-5g-edge-service-metadata-29"/>
        </reference>
      </references>
    </references>
    <?line 243?>



  </back>

</rfc>
