<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.7.0) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-03" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.29.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-03"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="L. M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="H." surname="Shi" fullname="Hang Shi">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>shihang9@huawei.com</email>
      </address>
    </author>
    <date year="2025" month="July" day="07"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metric</keyword>
    <abstract>
      <?line 79?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of these resources. Metrics from the network domain have long been in use in network systems. This document defines a set of metrics from the computing domain for use in CATS.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 83?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>Metrics from the network domain have already been defined in previous documents (e.g., <xref target="RFC9439"/>, <xref target="RFC8912"/>, and <xref target="RFC8911"/>) and have long been in use in network systems. This document focuses on categorizing the relevant metrics of the computing domain for CATS into three levels based on their complexity and granularity.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
      </ul>
    </section>
    <section anchor="definition-of-metrics">
      <name>Definition of Metrics</name>
      <section anchor="design-principles-why-three-metric-levels">
        <name>Design Principles - Why Three Metric Levels?</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. Additionally, it MUST be useful in practice. To that end, a CATS system should select the most appropriate metric(s) for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Introducing a definition of metrics requires balancing the following trade-off: if the metrics are too fine-grained, they become unscalable due to the excessive number of metrics that must be communicated through the metrics distribution protocol. (See <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> for a discussion of metrics distribution protocols.) Conversely, if the metrics are too coarse-grained, they may not have sufficient information to enable proper operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may, to some extent, constrain implementation flexibility across diverse CATS use cases. Implementers often seek balanced approaches that consider trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0 (L0): Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1 (L1): Metrics normalized within categories.</strong> These metrics are derived by aggregating L0 metrics into multiple categories, such as network and computing. Each category is summarized with a single L1 metric by normalizing it into a value within a defined range of scores.</t>
          </li>
          <li>
            <t><strong>Level 2 (L2): Fully normalized metric.</strong> These metrics are derived by aggregating lower level metrics (L0 or L1) into a single L2 metric, which is then normalized into a value within a defined range of scores.</t>
          </li>
        </ul>
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics encompass detailed, raw metrics, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>CPU: Base Frequency, boosted frequency, number of cores, core utilization, memory bandwidth, memory size, memory utilization, power consumption.</t>
          </li>
          <li>
            <t>GPU: Frequency, number of render units, memory bandwidth, memory size, memory utilization, core utilization, power consumption.</t>
          </li>
          <li>
            <t>NPU: Computing power, utilization, power consumption.</t>
          </li>
          <li>
            <t>Network: Bandwidth, capacity, throughput, bytes transmitted, bytes received, host bus utilization.</t>
          </li>
          <li>
            <t>Storage: Available space, read speed, write speed.</t>
          </li>
          <li>
            <t>Delay: Time taken to process a request.</t>
          </li>
        </ul>
        <t>L0 metrics serve as foundational data and do not require classification. They provide basic information to support higher-level metrics, as detailed in the following sections.</t>
        <t>L0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be solution-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS).</t>
        <t>Regarding network-related information, <xref target="RFC8911"/> and <xref target="RFC8912"/> define various performance metrics and their registries. Additionally, in <xref target="RFC9439"/>, the ALTO WG introduced an extended set of metrics related to network performance, such as throughput and delay. For compute metrics, <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> lists a set of cloud resource metrics.</t>
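<t>As an illustration of the API-based exposure described above, the following sketch shows how a service instance might publish a set of L0 metrics as a JSON document over a RESTful API. The endpoint, field names, and values are assumptions for the sketch and are not defined by this document.</t>
<sourcecode type="python"><![CDATA[
import json

# Hypothetical payload returned by a RESTful endpoint such as
# GET /metrics/l0 (path and field names are illustrative only).
l0_metrics = {
    "cpu": {"base_frequency_ghz": 2.2, "cores": 16, "core_utilization": 0.35},
    "gpu": {"memory_size_gbyte": 24, "memory_utilization": 0.60},
    "network": {"throughput_mbps": 940.0, "tx_bytes_mbyte": 100},
    "storage": {"available_space_gbyte": 512, "read_speed_mbps": 550},
    "delay": {"request_processing_us": 231.5},
}

# Each metric keeps its own unit, as L0 metrics are exposed raw.
payload = json.dumps(l0_metrics)
decoded = json.loads(payload)
]]></sourcecode>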
      </section>
      <section anchor="level-1-normalized-metrics-in-categories">
        <name>Level 1: Normalized Metrics in Categories</name>
        <t>L1 metrics are organized into distinct categories, such as computing, communication, and composed metrics. Each L0 metric is classified into one of these categories. Within each category, a single L1 metric is computed using an <em>aggregation function</em> and normalized to a unitless score that represents the performance of the underlying resources according to that category. Potential categories include:</t>
        <!-- JRG Note: TODO, define aggregation and normalization function -->

<ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A normalized value derived from computing-related L0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A normalized value derived from communication-related L0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A normalized value derived from an end-to-end aggregation function that leverages both computing and communication metrics. For example, end-to-end delay computed as the sum of all delays along a path.</t>
          </li>
        </ul>
        <t>Editor note: detailed categories can be updated according to the CATS WG discussion.</t>
        <t>L0 metrics, such as those defined in <xref target="RFC8911"/>, <xref target="RFC8912"/>, <xref target="RFC9439"/>, and <xref target="I-D.rcr-opsawg-operational-compute-metrics"/>, can be categorized into the aforementioned categories. Each category employs its own aggregation function (e.g., a weighted sum) to generate the normalized value. This approach allows the protocol to focus solely on the metric categories and their normalized values, thereby avoiding the need to process solution-specific detailed metrics.</t>
      </section>
      <section anchor="level-2-fully-normalized-metric">
        <name>Level 2: Fully Normalized Metric</name>
        <t>The L2 metric is a single score value derived from the lower level metrics (L0 or L1) using an aggregation function. Different implementations may employ different aggregation functions to characterize the overall performance of the underlying compute and communication resources. The definition of the L2 metric simplifies the complexity of collecting and distributing numerous lower-level metrics by consolidating them into a single, unified score.</t>
        <t>TODO: Some implementations may support the configuration of Ingress CATS-Forwarders with the metric normalizing method so that it can decode the information from the L1 or L0 metrics.</t>
        <t><xref target="fig-metric-levels"/> provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logical Relationships among CATS Metric Levels</name>
          <artwork><![CDATA[
                                    +--------+
                         L2 Metric: |   M2   |
                                    +---^----+
                                        |
                    +-------------+-----+-----+------------+
                    |             |           |            |
                +---+----+        |       +---+----+   +---+----+
    L1 Metrics: |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                +---^----+        |       +---^----+   +----^---+
                    |             |           |             |
               +----+---+         |       +---+----+        |
               |        |         |       |        |        |
            +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 L0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
            +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="representation-of-metrics">
      <name>Representation of Metrics</name>
      <t>The representation of metrics is a key component of the CATS architecture. It defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>This section presents the detailed representation of CATS metrics. The design aligns with principles established in similar IETF specifications, such as the network performance metrics defined in <xref target="RFC9439"/>.</t>
        <t>A CATS metric is represented using a set of fields, each describing a property of the metric. This document introduces the following CATS metrics fields:</t>
        <!-- JRG Note and TODO: Define each of the types, formats, etc.. Do we need to standardize them? -->
<figure anchor="fig-metric-def">
          <name>CATS Metric Fields</name>
          <artwork><![CDATA[
- Cats_metric:
      - Metric_type:
            The type of the CATS metric.
            Examples: compute_cpu, storage_disk_size, network_bw,
            compute_delay, network_delay, compute_norm,
            storage_norm, network_norm, delay_norm.
      - Format:
            The encoding format of the metric.
            Examples: int, float.
      - Format_std (optional):
            The standard used to encode and decode the value
            field according to the format field.
            Example: ieee_754, ascii.
      - Length:
            The size of the value field measured in octets.
            Examples: 2, 4, 8, 16, 32, 64.
      - Unit:
            The unit of this metric.
            Examples: mhz, ghz, byte, kbyte, mbyte,
            gbyte, bps, kbps, mbps, gbps, tbps, tflops, none.
      - Source (optional):
            The source of information used to obtain the value field.
            Examples: nominal, estimation, normalization,
            aggregation.
      - Statistics (optional):
            The statistical function used to obtain the value field.
            Examples: max, min, mean, cur.
      - Level:
            The level this metric belongs to.
            Examples: L0, L1, L2.
      - Value:
            The value of this metric.
            Examples: 12, 3.2.
]]></artwork>
        </figure>
        <t>Next, we describe each field in more detail:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type (type)</strong>: This field specifies the category or kind of CATS metric being measured, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Format (format)</strong>: This field indicates the data encoding format of the metric, such as whether the value is represented as an integer, a floating-point number, or has no specific format.</t>
          </li>
          <li>
            <t><strong>Format standard (format_std, optional)</strong>: This optional field indicates the standard used to encode and decode the value field according to the format field. It is only required if the value field is encoded using a specific standard, and knowing this standard is necessary to decode the value field. Examples of format standards include ieee_754 and ascii. This field ensures that the value can be accurately interpreted by specifying the encoding method used.</t>
          </li>
          <li>
            <t><strong>Length (length)</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. Examples include 4, 8, 16, 32, and 64. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit (unit)</strong>: This field defines the measurement units for the metric, such as frequency, data size, or data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source (source, optional)</strong>: This field describes the origin of the information used to obtain the metric. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric can be obtained directly from the underlying device and does not need to be estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value was normalized. For instance, a metric could be normalized to take a value from 0 to 1, from 0 to 10, or to take a percentage value. Metrics of this type do not have units.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value was obtained by using an aggregation function.
<!-- JRG: Define aggregation and normalization functions -->
                </t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics (statistics, optional)</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. It is useful for services that require specific statistics for service instance selection.  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Level (level)</strong>: This field specifies the level at which the metric is measured. It is used to categorize the metric based on its granularity and scope. Examples include L0, L1, and L2. The level field helps in understanding the level of detail and specificity of the metric being measured.</t>
          </li>
          <li>
            <t><strong>Value (value)</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
      </section>
      <section anchor="level-0-metric-representation">
        <name>Level 0 Metric Representation</name>
        <t>Several definitions that can serve as L0 metrics have been developed within the compute and communication industries, as well as through various standardization efforts such as those by the <xref target="DMTF"/>. This section provides illustrative examples.</t>
        <!-- JRG: The following two paragraphs seem redundants, as we have
already explained it in the previous section. So I suggest to remove them. -->

<!-- The sources of L0 metrics can be nominal, directly measured, estimated, or aggregated. Nominal L0 metrics are initially provided by resource providers. Dynamic L0 metrics are measured or estimated during the service stage. Additionally, L0 metrics support aggregation when there are multiple service instances.

L0 metrics also support the statistics defined in section 4.1. -->

<!-- TODO: next step would be to update the examples once we agree with (and update as necessary) the above changes regarding the CATS metric specification. -->

<section anchor="compute-raw-metrics">
          <name>Compute Raw Metrics</name>
          <t>This section uses CPU frequency as an example to illustrate the representation of raw compute metrics. The metric type is labeled as compute_CPU_frequency, with the unit specified in GHz. The format should support both unsigned integers and floating-point values. The corresponding metric fields are defined as follows:</t>
          <figure anchor="fig-compute-raw-metric">
            <name>An Example for Compute Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric Type: compute_CPU_frequency
      Level: L0
      Format: unsigned integer, floating point
      Unit: GHz
      Length: four octets
      Value: 2.2
Source:
      nominal

|Metric Type|Level|Format| Unit|Length| Value|Source|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits
]]></artwork>
          </figure>
        </section>
        <section anchor="communication-raw-metrics">
          <name>Communication Raw Metrics</name>
          <t>This section takes the total transmitted bytes (TxBytes) as an example to show the representation of communication raw metrics. The metric type is named "communication type_TxBytes". The unit is megabytes (MB). The format supports both unsigned integer and floating point. The value occupies 4 octets. The source of the metric is "directly measured" and the statistic is "mean". Example:</t>
          <figure anchor="fig-network-raw-metric">
            <name>An Example for Communication Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "communication type_TxBytes"
      Level: L0
      Format: unsigned integer, floating point
      Unit: MB
      Length: four octets
      Value: 100
Source:
      directly measured
Statistics:
      mean

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
        </section>
        <section anchor="delay-raw-metrics">
          <name>Delay Raw Metrics</name>
          <t>Delay is a synthesized metric influenced by computing, storage access, and network transmission. Usually, delay refers to the overall processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. It is named "delay_raw". The format should support both unsigned integer and floating point. Its unit is microseconds, and it occupies 4 octets. For example:</t>
          <figure anchor="fig-delay-raw-metric">
            <name>An Example for Delay Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "delay_raw"
      Level: L0
      Format: unsigned integer, floating point
      Unit: Microsecond(us)
      Length: four octets
      Value: 231.5
Source:
      aggregation
Statistics:
      max

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
        </section>
      </section>
      <section anchor="level-1-metric-representation">
        <name>Level 1 Metric Representation</name>
        <t>L1 metrics are normalized from L0 metrics. Although they do not have units, they can still be classified into types such as compute, communication, and composed metrics. This classification is useful because it makes L1 metrics semantically meaningful.</t>
        <t>The source of L1 metrics is normalization. Based on L0 metrics, service providers design their own algorithms to normalize metrics, for example, by assigning different cost values to each raw metric and performing a weighted summation. L1 metrics do not need further statistical values.</t>
        <section anchor="normalized-compute-metrics">
          <name>Normalized Compute Metrics</name>
          <t>The metric type of normalized compute metrics is "compute_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
          <figure anchor="fig-normalized-compute-metric">
            <name>An Example for Normalized Compute Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "compute_norm"
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 5
Source:
      normalization


|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
          </figure>
        </section>
        <section anchor="normalized-communication-metrics">
          <name>Normalized Communication Metrics</name>
          <t>The metric type of normalized communication metrics is "communication_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
          <figure anchor="fig-normalized-communication-metric">
            <name>An Example for Normalized Communication Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "communication_norm"
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits

]]></artwork>
          </figure>
        </section>
        <section anchor="normalized-composed-metrics">
          <name>Normalized Composed Metrics</name>
          <t>The metric type of normalized composed metrics is "composed_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
          <figure anchor="fig-normalized-metric">
            <name>An Example for Normalized Composed Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "composed_norm"
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 8
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
          </figure>
        </section>
      </section>
      <section anchor="level-2-metric-representation">
        <name>Level 2 Metric Representation</name>
        <t>A fully normalized metric is a single value which does not have any physical meaning or unit.  Each provider may have its own methods to derive the value, but all providers must follow the definition in this section to represent the fully normalized value.</t>
        <t>The metric type is "norm_fi". The format of the value is unsigned integer. It has no unit and occupies one octet. Example:</t>
        <figure anchor="fig-level-2-metric">
          <name>An Example for Fully Normalized Metric</name>
          <artwork><![CDATA[
Basic fields:
      Metric type: "norm_fi"
      Level: L2
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
        </figure>
        <t>The fully normalized value also supports aggregation when multiple service instances provide fully normalized values. When providing fully normalized values, service instances do not need to compute further statistics.</t>
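<t>As a sketch of the aggregation mentioned above, the L2 values reported by several service instances can be combined into a single value for a site. Taking the maximum (i.e., advertising the best-scoring instance) is one illustrative choice of aggregation function; this document does not mandate any particular function.</t>
<sourcecode type="python"><![CDATA[
# L2 scores reported by individual service instances (example values).
instance_scores = {"instance-a": 3, "instance-b": 7, "instance-c": 5}

def aggregate_site_score(scores):
    """Illustrative aggregation: advertise the best instance's score."""
    return max(scores.values())

site_score = aggregate_site_score(instance_scores)
]]></sourcecode>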
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from L0 to L1 to L2, with each level offering a different degree of abstraction to address the diverse requirements of various services. <xref target="comparison"/> provides a comparative overview of these metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">Complicated</td>
            <td align="left">Poor</td>
            <td align="left">Poor</td>
            <td align="left">Good</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Simple</td>
            <td align="left">Good</td>
            <td align="left">Good</td>
            <td align="left">Medium</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, different services may define their own sets—potentially resulting in hundreds or even thousands of unique metrics. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, L0 metrics are generally confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each normalized into a single value. This structure makes them more suitable for protocol encoding and standardization. Level 2 metrics take simplification a step further by consolidating all relevant information into a single normalized value, making them the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol’s extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics may require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing L0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
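      <t>The consolidation described above, from raw Level 0 data through normalized Level 1 categories to a single Level 2 value, can be sketched as a weighted normalization. The following non-normative Python fragment illustrates the idea; the category names, value ranges, and weights are hypothetical examples chosen for illustration and are not defined by this document:</t>
      <sourcecode type="python">
```python
# Hypothetical sketch of L1-to-L2 consolidation. Category names,
# bounds, and weights are illustrative assumptions, not normative.

def normalize(value, worst, best):
    """Map a raw category value onto [0, 1], where 1 is best.

    Works for both lower-is-better metrics (worst is the upper
    bound) and higher-is-better metrics (worst is the lower bound).
    """
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def level2_score(l1_metrics, weights):
    """Consolidate normalized Level 1 categories into one L2 value."""
    total = sum(weights.values())
    return sum(
        weights[name] * normalize(*l1_metrics[name]) for name in weights
    ) / total

# Each L1 category: (measured value, worst bound, best bound).
l1 = {
    "delay_ms": (20.0, 100.0, 0.0),      # lower is better
    "cpu_utilization": (0.4, 1.0, 0.0),  # lower is better
    "available_gpu": (2.0, 0.0, 8.0),    # higher is better
}
w = {"delay_ms": 0.5, "cpu_utilization": 0.3, "available_gpu": 0.2}

# A single consolidated value is what a Level 2 metric would carry.
print(round(level2_score(l1, w), 3))
```
      </sourcecode>
      <t>A Level 2 metric would expose only the final consolidated value, while a Level 1 scheme would expose the three normalized category scores individually.</t>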
    </section>
    <section anchor="implementation-guidance-on-using-cats-metrics">
      <name>Implementation Guidance on Using CATS Metrics</name>
      <t>&lt;Authors' Note: This part has been moved to <xref target="I-D.ietf-cats-framework"/>, per the chairs' suggestion, since this document focuses on metric definition rather than on implementation details.&gt;</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>TBD</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-informative-references">
      <name>Informative References</name>
      <reference anchor="I-D.ietf-cats-usecases-requirements">
        <front>
          <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
          <author fullname="Kehan Yao" initials="K." surname="Yao">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="Hang Shi" initials="H." surname="Shi">
            <organization>Huawei Technologies</organization>
          </author>
          <author fullname="Shuai Zhang" initials="S." surname="Zhang">
            <organization>China Unicom</organization>
          </author>
          <author fullname="Qing An" initials="Q." surname="An">
            <organization>Alibaba Group</organization>
          </author>
          <date day="10" month="June" year="2025"/>
          <abstract>
            <t>   Distributed computing is a computing pattern that service providers
   can follow and use to achieve better service response time and
   optimized energy consumption.  In such a distributed computing
   environment, compute intensive and delay sensitive services can be
   improved by utilizing computing resources hosted in various computing
   facilities.  Ideally, compute services are balanced across servers
   and network resources to enable higher throughput and lower response
   time.  To achieve this, the choice of server and network resources
   should consider metrics that are oriented towards compute
   capabilities and resources instead of simply dispatching the service
   requests in a static way or optimizing solely on connectivity
   metrics.  The process of selecting servers or service instance
   locations, and of directing traffic to them on chosen network
   resources is called "Computing-Aware Traffic Steering" (CATS).

   This document provides the problem statement and the typical
   scenarios for CATS, which shows the necessity of considering more
   factors when steering traffic to the appropriate computing resource
   to better meet the customer's expectations.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-07"/>
      </reference>
      <reference anchor="I-D.ietf-cats-framework">
        <front>
          <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
          <author fullname="Cheng Li" initials="C." surname="Li">
            <organization>Huawei Technologies</organization>
          </author>
          <author fullname="Zongpeng Du" initials="Z." surname="Du">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
            <organization>Orange</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="John Drake" initials="J." surname="Drake">
            <organization>Independent</organization>
          </author>
          <date day="24" month="June" year="2025"/>
          <abstract>
            <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   components, describes their interactions, and provides illustrative
   workflows of the control and data planes.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-10"/>
      </reference>
      <reference anchor="I-D.rcr-opsawg-operational-compute-metrics">
        <front>
          <title>Joint Exposure of Network and Compute Information for Infrastructure-Aware Service Deployment</title>
          <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
            <organization>Nokia Bell Labs</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
            <organization>Qualcomm Europe, Inc.</organization>
          </author>
          <author fullname="Roland Schott" initials="R." surname="Schott">
            <organization>Deutsche Telekom</organization>
          </author>
          <date day="21" month="October" year="2024"/>
          <abstract>
            <t>   Service providers are starting to deploy computing capabilities
   across the network for hosting applications such as distributed AI
   workloads, AR/VR, vehicle networks, and IoT, among others.  In this
   network-compute environment, knowing information about the
   availability and state of the underlying communication and compute
   resources is necessary to determine both the proper deployment
   location of the applications and the most suitable servers on which
   to run them.  Further, this information is used by numerous use cases
   with different interpretations.  This document proposes an initial
   approach towards a common exposure scheme for metrics reflecting
   compute and communication capabilities.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-rcr-opsawg-operational-compute-metrics-08"/>
      </reference>
      <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
        <front>
          <title>Performance Metrics (IANA Registry)</title>
          <author>
            <organization>IANA</organization>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="DMTF" target="https://www.dmtf.org/">
        <front>
          <title>Distributed Management Task Force (DMTF)</title>
          <author>
            <organization>DMTF</organization>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="RFC8911">
        <front>
          <title>Registry for Performance Metrics</title>
          <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
          <author fullname="B. Claise" initials="B." surname="Claise"/>
          <author fullname="P. Eardley" initials="P." surname="Eardley"/>
          <author fullname="A. Morton" initials="A." surname="Morton"/>
          <author fullname="A. Akhter" initials="A." surname="Akhter"/>
          <date month="November" year="2021"/>
          <abstract>
            <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8911"/>
        <seriesInfo name="DOI" value="10.17487/RFC8911"/>
      </reference>
      <reference anchor="RFC8912">
        <front>
          <title>Initial Performance Metrics Registry Entries</title>
          <author fullname="A. Morton" initials="A." surname="Morton"/>
          <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
          <author fullname="P. Eardley" initials="P." surname="Eardley"/>
          <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
          <date month="November" year="2021"/>
          <abstract>
            <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8912"/>
        <seriesInfo name="DOI" value="10.17487/RFC8912"/>
      </reference>
      <reference anchor="RFC9439">
        <front>
          <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
          <author fullname="Q. Wu" initials="Q." surname="Wu"/>
          <author fullname="Y. Yang" initials="Y." surname="Yang"/>
          <author fullname="Y. Lee" initials="Y." surname="Lee"/>
          <author fullname="D. Dhody" initials="D." surname="Dhody"/>
          <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
          <author fullname="L. Contreras" initials="L." surname="Contreras"/>
          <date month="August" year="2023"/>
          <abstract>
            <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
            <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
            <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9439"/>
        <seriesInfo name="DOI" value="10.17487/RFC9439"/>
      </reference>
    </references>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
    </section>
  </back>
</rfc>
