<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.7.0) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-05" category="std" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.25.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-05"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="Luis M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="G." surname="Zeng" fullname="Guanming Zeng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>zengguanming@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="February" day="02"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metric</keyword>
    <abstract>
      <?line 90?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To take computing and network resources into account, a system needs to share information (metrics) that describes the state of those resources. Metrics from the network domain have been in use in network systems for a long time. This document defines a set of metrics from the computing domain for use in CATS.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 94?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>Metrics from the network domain have already been defined in previous documents, e.g., <xref target="RFC9439"/>, <xref target="RFC8912"/>, and <xref target="RFC8911"/>, and have been in use in network systems for a long time. This document focuses on categorizing the relevant metrics in the computing domain for CATS into three levels based on their complexity and granularity.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service site</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
        <li>
          <t>CATS Service Contact Instance ID (CSCI-ID)</t>
        </li>
        <li>
          <t>CATS Service Metric Agent (C-SMA)</t>
        </li>
        <li>
          <t>CATS Network Metric Agent (C-NMA)</t>
        </li>
      </ul>
    </section>
    <section anchor="design-principles">
      <name>Design Principles</name>
      <section anchor="three-level-metrics">
        <name>Three-Level Metrics</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. Additionally, it MUST be useful in practice. To that end, a CATS system should select the most appropriate metric(s) for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Introducing a definition of metrics requires balancing the following trade-off: if the metrics are too fine-grained, they become unscalable due to the excessive number of metrics that must be communicated through the metrics distribution protocol. (See <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> for a discussion of metrics distribution protocols.) Conversely, if the metrics are too coarse-grained, they may not have sufficient information to enable proper operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form---consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may, to some extent, constrain implementation flexibility across diverse CATS use cases. Implementers often seek balanced approaches that consider trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0 (L0): Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1 (L1): Metrics normalized within categories.</strong> These metrics are derived by aggregating L0 metrics into multiple categories, such as network and computing. Each category is summarized with a single L1 metric by normalizing it into a value within a defined range of scores.</t>
          </li>
          <li>
            <t><strong>Level 2 (L2): Single normalized metric.</strong> These metrics are derived by aggregating lower-level metrics (L0 or L1) into a single L2 metric, which is then normalized into a value within a defined range of scores.</t>
          </li>
        </ul>
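        <t>As a non-normative illustration, the relationship between the three levels can be sketched in code; the metric names, category groupings, weights, and the 0-10 score range below are hypothetical choices, not definitions of this document:</t>
        <figure anchor="fig-levels-sketch">
          <name>Non-normative sketch of an L0-to-L2 derivation</name>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of deriving L1 and L2 metrics from L0 metrics.
# Metric names, groupings, weights, and the 0-10 range are hypothetical.

l0 = {  # raw (L0) metrics, each with its own unit
    "cpu_utilization": 0.40,      # fraction of CPU capacity in use
    "gpu_utilization": 0.20,      # fraction of GPU capacity in use
    "link_throughput_gbps": 8.0,  # current throughput
    "link_capacity_gbps": 10.0,   # nominal capacity
}

def normalize(available_fraction, lo=0, hi=10):
    """Map a 0..1 availability fraction onto a bounded integer score."""
    return round(lo + (hi - lo) * available_fraction)

# L1: one normalized, unitless score per category
l1 = {
    "computing": normalize(
        1 - (l0["cpu_utilization"] + l0["gpu_utilization"]) / 2),
    "communication": normalize(
        1 - l0["link_throughput_gbps"] / l0["link_capacity_gbps"]),
}

# L2: a single score aggregated from the L1 scores (weights are arbitrary)
l2 = round(0.6 * l1["computing"] + 0.4 * l1["communication"])
```
]]></sourcecode>
        </figure>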
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics encompass detailed, raw metrics, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>CPU: Base frequency, boosted frequency, number of cores, core utilization, memory bandwidth, memory size, memory utilization, power consumption.</t>
          </li>
          <li>
            <t>GPU: Frequency, number of render units, memory bandwidth, memory size, memory utilization, core utilization, power consumption.</t>
          </li>
          <li>
            <t>NPU: Computing power, utilization, power consumption.</t>
          </li>
          <li>
            <t>Network: Bandwidth, capacity, throughput, bytes transmitted, bytes received, host bus utilization.</t>
          </li>
          <li>
            <t>Storage: Available space, read speed, write speed.</t>
          </li>
          <li>
            <t>Delay: Time taken to process a request.</t>
          </li>
        </ul>
        <t>L0 metrics serve as foundational data and do not require classification. They provide basic information to support higher-level metrics, as detailed in the following sections.</t>
        <t>L0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be solution-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS).</t>
        <t>Regarding network-related information, <xref target="RFC8911"/> and <xref target="RFC8912"/> define various performance metrics and their registries. Additionally, in <xref target="RFC9439"/>, the ALTO WG introduced an extended set of metrics related to network performance, such as throughput and delay. For compute metrics, <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> lists a set of cloud resource metrics.</t>
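        <t>As a non-normative sketch, the following shows a hypothetical JSON payload that a metric agent could expose over such a RESTful API; the field names, units, and identifier are illustrative and not defined by this document:</t>
        <figure anchor="fig-l0-api-sketch">
          <name>Hypothetical JSON encoding of L0 metrics</name>
          <sourcecode type="python"><![CDATA[
```python
# Hypothetical JSON payload for L0 metrics exposed over a RESTful API;
# field names, units, and the identifier are illustrative only.
import json

l0_payload = {
    "instance": "service-instance-42",  # hypothetical instance identifier
    "metrics": [
        {"name": "cpu_core_utilization", "value": 0.35, "unit": "ratio"},
        {"name": "memory_size",          "value": 64,   "unit": "gbyte"},
        {"name": "network_throughput",   "value": 9.4,  "unit": "gbps"},
        {"name": "storage_read_speed",   "value": 550,  "unit": "mbyte/s"},
    ],
}

encoded = json.dumps(l0_payload)  # representation on the wire
decoded = json.loads(encoded)     # parsed again on the consumer side
```
]]></sourcecode>
        </figure>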
      </section>
      <section anchor="level-1-normalized-metrics-in-categories">
        <name>Level 1: Normalized Metrics in Categories</name>
        <t>L1 metrics are organized into distinct categories, such as computing, communication, service, and composed metrics. Each L0 metric is classified into one of these categories. Within each category, a single L1 metric is computed using an <em>aggregation function</em> and normalized to a unitless score that represents the performance of the underlying resources according to that category. Potential categories include:</t>
        <!-- JRG Note: TODO, define aggregation and normalization function -->

<ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A normalized value derived from computing-related L0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A normalized value derived from communication-related L0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Service:</strong> A normalized value derived from service-related L0 metrics, such as tokens per second and service availability.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A normalized value derived from an aggregation function that takes as input a combination of computing, communication and service metrics. For example, end-to-end delay computed as the sum of all delays along a path.</t>
          </li>
        </ul>
        <t>Editor note: detailed categories can be updated according to the CATS WG discussion.</t>
        <t>L0 metrics, such as those defined in <xref target="RFC8911"/>, <xref target="RFC8912"/>, <xref target="RFC9439"/>, and <xref target="I-D.rcr-opsawg-operational-compute-metrics"/>, can be categorized into the aforementioned categories. Each category employs its own aggregation function (e.g., a weighted sum) to generate the normalized value. This approach allows the protocol to focus solely on the metric categories and their normalized values, thereby avoiding the need to process solution-specific detailed metrics.</t>
      </section>
      <section anchor="level-2-single-normalized-metric">
        <name>Level 2: Single Normalized Metric</name>
        <t>The L2 metric is a single score value derived from the lower-level metrics (L0 or L1) using an aggregation function. Different implementations may employ different aggregation functions to characterize the overall performance of the underlying compute and communication resources. The definition of the L2 metric simplifies the complexity of collecting and distributing numerous lower-level metrics by consolidating them into a single, unified score.</t>
        <t>TODO: Some implementations may support the configuration of Ingress CATS-Forwarders with the metric normalizing method so that it can decode the information from the L1 or L0 metrics.</t>
        <t><xref target="fig-metric-levels"/> provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logic of CATS Metrics in levels</name>
          <artwork><![CDATA[
                                    +--------+
                         L2 Metric: |   M2   |
                                    +---^----+
                                        |
                    +-------------+-----+-----+------------+
                    |             |           |            |
                +---+----+        |       +---+----+   +---+----+
    L1 Metrics: |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                +---^----+        |       +---^----+   +----^---+
                    |             |           |             |
               +----+---+         |       +---+----+        |
               |        |         |       |        |        |
            +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 L0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
            +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="cats-metrics-framework-and-specification">
      <name>CATS Metrics Framework and Specification</name>
      <t>The CATS metrics framework is a key component of the CATS architecture. It defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>This section defines the detailed structure used to represent CATS metrics. The design follows principles established in related IETF specifications, such as the network performance metrics outlined in <xref target="RFC9439"/>.</t>
        <t>Each CATS metric is expressed as a structured set of fields, with each field describing a specific property of the metric. The following definition introduces the fields used in the CATS metric representations.</t>
        <!-- JRG Note and TODO: Define each of the types, formats, etc.. Do we need to standardize them? -->
<figure anchor="fig-metric-def">
          <name>CATS Metric Fields</name>
          <artwork><![CDATA[
- Cats_metric:
      - Metric_type:
            The type of the CATS metric.
            Examples: compute_cpu, storage_disk_size, network_bw,
            compute_delay, network_delay, compute_norm,
            storage_norm, network_norm, delay_norm.
      - Format:
            The encoding format of the metric.
            Examples: int, float.
      - Format_std (optional):
            The standard used to encode and decode the value
            field according to the format field.
            Example: ieee_754, ascii.
      - Length:
            The size of the value field measured in octets.
            Examples: 2, 4, 8, 16, 32, 64.
      - Unit:
            The unit of this metric.
            Examples: mhz, ghz, byte, kbyte, mbyte,
            gbyte, bps, kbps, mbps, gbps, tbps, tflops, none.
      - Source (optional):
            The source of information used to obtain the value field.
            Examples: nominal, estimation, directly_measured,
            normalization, aggregation.
      - Statistics(optional):
            The statistical function used to obtain the value field.
            Examples: max, min, mean, cur.
      - Level:
            The level this metric belongs to.
            Examples: L0, L1, L2.
      - Value:
            The value of this metric.
            Examples: 12, 3.2.
]]></artwork>
        </figure>
        <t>Next, we describe each field in more detail:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type (type)</strong>: This field specifies the category or kind of CATS metric being measured, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Format (format)</strong>: This field indicates the data encoding format of the metric, such as whether the value is represented as an integer, a floating-point number, or has no specific format.</t>
          </li>
          <li>
            <t><strong>Format standard (format_std, optional)</strong>: This optional field indicates the standard used to encode and decode the value field according to the format field. It is only required if the value field is encoded using a specific standard, and knowing this standard is necessary to decode the value field. Examples of format standards include ieee_754 and ascii. This field ensures that the value can be accurately interpreted by specifying the encoding method used.</t>
          </li>
          <li>
            <t><strong>Length (length)</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. Examples include 4, 8, 16, 32, and 64. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit (unit)</strong>: This field defines the measurement units for the metric, such as frequency, data size, or data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source (source, optional)</strong>: This field describes the origin of the information used to obtain the metric. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric can be obtained directly from the underlying device and it does not need to be estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value was normalized. For instance, a metric could be normalized to take a value from 0 to 1, from 0 to 10, or to take a percentage value. Metrics of this type do not have units.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value was obtained by using an aggregation function.
<!-- JRG: Define aggregation and normalization functions -->
                </t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics (statistics, optional)</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. It is useful for services that require specific statistics for service instance selection.  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Level (level)</strong>: This field specifies the level at which the metric is measured. It is used to categorize the metric based on its granularity and scope. Examples include L0, L1, and L2. The level field helps in understanding the level of detail and specificity of the metric being measured.</t>
          </li>
          <li>
            <t><strong>Value (value)</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
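        <t>To make the field definitions above concrete, the following non-normative sketch models a CATS metric as a simple record and instantiates an L0 bandwidth metric; the example values are hypothetical:</t>
        <figure anchor="fig-fields-sketch">
          <name>Non-normative sketch of the CATS metric fields</name>
          <sourcecode type="python"><![CDATA[
```python
# Sketch of the CATS metric fields as a data structure; the example
# values (network_bw, 9.4 gbps, nominal) are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatsMetric:
    metric_type: str                  # e.g., "network_bw", "compute_norm"
    format: str                       # e.g., "int", "float"
    length: int                       # size of the value field, in octets
    unit: str                         # e.g., "gbps"; "none" for scores
    level: str                        # "L0", "L1", or "L2"
    value: float                      # the measured or derived value
    format_std: Optional[str] = None  # e.g., "ieee_754"
    source: Optional[str] = None      # e.g., "nominal", "normalization"
    statistics: Optional[str] = None  # e.g., "max", "min", "mean", "cur"

bw = CatsMetric(metric_type="network_bw", format="float", length=4,
                unit="gbps", level="L0", value=9.4,
                format_std="ieee_754", source="nominal")
```
]]></sourcecode>
        </figure>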
      </section>
      <section anchor="aggregation-and-normalization-functions">
        <name>Aggregation and Normalization Functions</name>
        <t>In the context of CATS metric processing, aggregation and normalization are two fundamental operations that transform raw and derived metrics into forms suitable for decision-making and comparison across heterogeneous systems.</t>
        <section anchor="aggregation">
          <name>Aggregation</name>
          <t>Aggregation functions combine multiple metric values into a single representative value. This is particularly useful when metrics are collected from multiple sources or over time intervals. For example, CPU usage metrics from multiple service instances may be aggregated to produce a single load indicator for a service. Common aggregation functions include:</t>
          <ul spacing="normal">
            <li>
              <t>Mean average: Computes the arithmetic average of a set of values.</t>
            </li>
            <li>
              <t>Minimum/maximum: Selects the lowest or highest value from a set.</t>
            </li>
            <li>
              <t>Weighted average: Applies weights to values based on relevance or priority.</t>
            </li>
          </ul>
          <t>The output of an aggregation function is typically a Level 1 metric, derived from multiple Level 0 metrics, or a Level 2 metric, derived from multiple Level 0 or Level 1 metrics.</t>
          <figure anchor="fig-agg-funct">
            <name>Aggregation function</name>
            <artwork><![CDATA[
      +------------+     +-------------------+
      | Metric 1.1 |---->|                   |
      +------------+     |    Aggregation    |     +----------+
           ...           |     Function      |---->| Metric 2 |
      +------------+     |                   |     +----------+
      | Metric 1.n |---->|                   |
      +------------+     +-------------------+

      Input: Multiple values                   Output: Single value

]]></artwork>
          </figure>
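          <t>The aggregation functions listed above can be sketched as follows; this is a non-normative illustration, and a deployment may choose any suitable function:</t>
          <figure anchor="fig-agg-sketch">
            <name>Non-normative sketch of aggregation functions</name>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of common aggregation functions; each collapses multiple
# lower-level metric values into a single representative value.

def mean_aggregate(values):
    """Arithmetic average of a set of values."""
    return sum(values) / len(values)

def min_aggregate(values):
    """Lowest value in the set (e.g., worst-case headroom)."""
    return min(values)

def max_aggregate(values):
    """Highest value in the set (e.g., peak load)."""
    return max(values)

def weighted_aggregate(values, weights):
    """Average with weights expressing relevance or priority."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

cpu_loads = [0.2, 0.4, 0.9]  # e.g., CPU load reported by three instances
```
]]></sourcecode>
          </figure>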
        </section>
        <section anchor="normalization">
          <name>Normalization</name>
          <t>Normalization functions convert metric values with or without units into unitless scores, enabling comparison across different types of metrics and systems. This is essential when combining metrics from a heterogeneous set of resources (e.g., latency measured in milliseconds and CPU usage measured as a percentage) into a unified decision model.</t>
          <t>Normalization functions often map values into a bounded range, such as integers from 0 to 5 or real numbers from 0 to 1, using techniques like:</t>
          <ul spacing="normal">
            <li>
              <t>Sigmoid function: Smoothly maps input values to a bounded range.</t>
            </li>
            <li>
              <t>Min-max scaling: Rescales values based on known minimum and maximum bounds.</t>
            </li>
            <li>
              <t>Z-score normalization: Standardizes values based on statistical distribution.</t>
            </li>
          </ul>
          <t>Normalized metrics facilitate composite scoring and ranking, and can be used to produce Level 1 and Level 2 metrics.</t>
          <figure anchor="fig-norm-funct">
            <name>Normalization function</name>
            <artwork><![CDATA[
      +----------+     +------------------------+     +----------+
      | Metric 1 |---->| Normalization Function |---->| Metric 2 |
      +----------+     +------------------------+     +----------+

      Input:  Value with or without units         Output: Unitless value

]]></artwork>
          </figure>
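          <t>The normalization techniques listed above can be sketched as follows; this is a non-normative illustration, and parameter choices such as bounds and steepness are deployment-specific:</t>
          <figure anchor="fig-norm-sketch">
            <name>Non-normative sketch of normalization functions</name>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of normalization functions; each maps a metric value (with or
# without units) onto a unitless score. Parameters are illustrative.
import math
import statistics

def sigmoid_norm(x, midpoint=0.0, steepness=1.0):
    """Smoothly map any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def min_max_norm(x, lo, hi):
    """Rescale x into [0, 1] given known lower and upper bounds."""
    return (x - lo) / (hi - lo)

def z_score_norm(x, samples):
    """Standardize x against the distribution of observed samples."""
    return (x - statistics.mean(samples)) / statistics.pstdev(samples)
```
]]></sourcecode>
          </figure>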
        </section>
      </section>
      <section anchor="on-the-meaning-of-scores-in-heterogeneous-metrics-systems">
        <name>On the Meaning of Scores in Heterogeneous Metrics Systems</name>
        <t>In a system like CATS, where metrics originate from heterogeneous resources---such as compute, communication, and storage---the interpretation of scores requires careful consideration. While normalization functions can convert raw metrics into unitless scores to enable comparison, these scores may not be directly comparable across different implementations. For example, a score of 4 on a scale from 1 to 5 may represent a high-quality resource in one implementation, but only an average one in another.</t>
        <t>This ambiguity arises because different implementations may apply distinct normalization strategies, scaling methods, or semantic interpretations. As a result, relying solely on unitless scores for decision-making can lead to inconsistent or suboptimal outcomes, especially when metrics are aggregated from multiple sources.</t>
        <t>To mitigate this, implementors of CATS metrics SHOULD provide clear and precise definitions of their metrics---particularly for unitless scores---and explain how these scores should be interpreted. This documentation should be designed to support operators in making informed decisions, even when comparing metrics from different implementations.</t>
        <t>Similarly, operators SHOULD exercise caution when making potentially impactful decisions based on unitless metrics whose definitions are unclear or underspecified. In such cases, especially when decisions are critical or sensitive, operators MAY choose to rely on Level 0 (L0) metrics with units, which typically offer a more direct and unambiguous understanding of resource conditions.</t>
      </section>
      <section anchor="level-metric-representations">
        <name>Level Metric Representations</name>
        <section anchor="level-0-metrics">
          <name>Level 0 Metrics</name>
          <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts---such as those by the <xref target="DMTF"/>---that can serve as L0 metrics. L0 metrics comprise all raw metrics; given their diversity and the substantial body of existing work, this document does not attempt to standardize them.</t>
          <t>See Appendix A for examples of L0 metrics.</t>
        </section>
        <section anchor="level-1-metrics">
          <name>Level 1 Metrics</name>
          <t>L1 metrics are normalized from L0 metrics. Although they do not have units, they can still be classified into types such as compute, communication, service, and composed metrics. This classification is useful because it makes L1 metrics semantically meaningful.</t>
          <t>The source of L1 metrics is normalization. Based on L0 metrics, service providers design their own algorithms to normalize metrics, for example, by assigning a cost value to each raw metric and computing a weighted sum. L1 metrics do not need further statistical values.</t>
          <section anchor="normalized-compute-metrics">
            <name>Normalized Compute Metrics</name>
            <t>The metric type of normalized compute metrics is "compute_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
            <figure anchor="fig-normalized-compute-metric">
              <name>Example of a normalized L1 compute metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: compute_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 5
Source:
      normalization


|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
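            <t>The bit layout shown above (8 + 2 + 1 + 3 + 8 + 3 = 25 bits) can be exercised with the following non-normative packing sketch; the numeric code points chosen for each field are hypothetical, since this document does not assign them:</t>
            <figure anchor="fig-packing-sketch">
              <name>Non-normative sketch of the L1 metric bit layout</name>
              <sourcecode type="python"><![CDATA[
```python
# Sketch of packing/unpacking the fixed bit layout shown above:
# |Metric Type|Level|Format|Length|Value|Source| = 8+2+1+3+8+3 bits.
# The numeric code points (e.g., 0x01 for compute_norm) are hypothetical.

FIELD_WIDTHS = [8, 2, 1, 3, 8, 3]  # type, level, format, length, value, source

def pack(fields):
    word = 0
    for value, width in zip(fields, FIELD_WIDTHS):
        assert 0 <= value < (1 << width), "field value exceeds its width"
        word = (word << width) | value
    return word

def unpack(word):
    fields = []
    for width in reversed(FIELD_WIDTHS):
        fields.append(word & ((1 << width) - 1))
        word >>= width
    return list(reversed(fields))

# compute_norm (0x01), level L1 (1), format uint (0), length one octet (1),
# value 5, source normalization (3) -- all code points hypothetical
encoded = pack([0x01, 1, 0, 1, 5, 3])
```
]]></sourcecode>
            </figure>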
          </section>
          <section anchor="normalized-communication-metrics">
            <name>Normalized Communication Metrics</name>
            <t>The metric type of normalized communication metrics is "communication_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
            <figure anchor="fig-normalized-communication-metric">
              <name>Example of a normalized L1 communication metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: communication_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits

]]></artwork>
            </figure>
          </section>
          <section anchor="normalized-composed-metrics">
            <name>Normalized Composed Metrics</name>
            <t>The metric type of normalized composed metrics is "composed_norm", and its format is unsigned integer. It has no unit and occupies one octet. Example:</t>
            <figure anchor="fig-normalized-metric">
              <name>Example of a normalized L1 composed metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: composed_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 8
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="level-2-metrics">
          <name>Level 2 Metrics</name>
          <t>A Level 2 metric is a single-value, normalized metric that does not carry any inherent physical unit or meaning. While each provider may employ its own internal methods to compute this value, all providers must adhere to the representation guidelines defined in this section to ensure consistency and interoperability of the normalized output.</t>
          <t>Metric type is "norm_fi". The format of the value is unsigned integer. It has no unit. It occupies one octet. Example:</t>
          <figure anchor="fig-level-2-metric">
            <name>Example of a normalized L2 metric</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: norm_fi
      Level: L2
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
          </figure>
          <t>The single normalized value also facilitates aggregation across multiple service instances. When each instance provides its own normalized value, no additional statistical processing is required at the instance level. Instead, aggregation can be performed externally using standardized methods, enabling scalable and consistent interpretation of metrics across distributed environments.</t>
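          <t>As a non-normative illustration of such external aggregation, the following Python sketch combines the single normalized values reported by several service instances. The arithmetic mean used here is only one example of a standardized aggregation method and is not mandated by this document.</t>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: external aggregation of per-instance
# normalized scores.  Each service instance reports one unitless
# L2 value; no instance-level statistics are required.

def aggregate_scores(instance_scores: list) -> int:
    """Combine single-value normalized metrics from several
    instances into one score (arithmetic mean, as an example)."""
    if not instance_scores:
        raise ValueError("no instance scores to aggregate")
    return round(sum(instance_scores) / len(instance_scores))

# e.g., three instances reporting normalized scores 8, 6, and 7
combined = aggregate_scores([8, 6, 7])
```
]]></sourcecode>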
        </section>
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from L0 to L1 to L2, with each level offering a different degree of abstraction to address the diverse requirements of various services. <xref target="comparison"/> provides a comparative overview of these metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">High</td>
            <td align="left">Low</td>
            <td align="left">Low</td>
            <td align="left">High</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Low</td>
            <td align="left">High</td>
            <td align="left">High</td>
            <td align="left">Medium</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, different services may define their own sets---potentially resulting in hundreds or even thousands of unique metrics. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, L0 metrics are generally confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each normalized into a single value. This structure makes them more suitable for protocol encoding and standardization. Level 2 metrics take simplification a step further by consolidating all relevant information into a single normalized value, making them the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol's extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics may require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing L0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
    </section>
    <section anchor="cats-l2-metric-registry-entry">
      <name>CATS L2 Metric Registry Entry</name>
      <t>This section gives an initial Registry Entry for the CATS L2 metric.</t>
      <section anchor="summary">
        <name>Summary</name>
        <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
        <section anchor="id-identifier">
          <name>ID (Identifier)</name>
          <t>IANA has allocated the Identifier 1 for the Named Metric Entry in Section 5. See Section 5.1.2 for mapping to Names.</t>
        </section>
        <section anchor="name">
          <name>Name</name>
          <t>Norm_Passive_CATS-L2_RFCXXXXsecY_Unitless_Singleton</t>
          <t>Naming Rule Explanation</t>
          <ul spacing="normal">
            <li>
              <t>Norm: Metric type (Normalized Score)</t>
            </li>
            <li>
              <t>Passive: Measurement method</t>
            </li>
            <li>
              <t>CATS-L2: Metric level (CATS Metric Framework Level 2)</t>
            </li>
            <li>
              <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
            </li>
            <li>
              <t>Unitless: Metric has no units</t>
            </li>
            <li>
              <t>Singleton: Metric is a single value</t>
            </li>
          </ul>
        </section>
        <section anchor="uri">
          <name>URI</name>
          <t>To-be-assigned.</t>
        </section>
        <section anchor="description">
          <name>Description</name>
          <t>This metric represents a single normalized score used within CATS. It is derived by aggregating one or more CATS L0 and/or L1 metrics, followed by a normalization process that produces a unitless value. The resulting score provides a concise assessment of the overall capability of a service instance, enabling rapid comparison across instances and supporting efficient traffic steering decisions.</t>
        </section>
        <section anchor="change-controller">
          <name>Change Controller</name>
          <t>IETF</t>
        </section>
        <section anchor="version">
          <name>Version</name>
          <t>1.0</t>
        </section>
      </section>
      <section anchor="metric-definition">
        <name>Metric Definition</name>
        <section anchor="reference-definition">
          <name>Reference Definition</name>
          <t><xref target="I-D.ietf-cats-metric-definition"/>
Core referenced sections: Section 3.4 (L2 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions)</t>
        </section>
        <section anchor="fixed-parameters">
          <name>Fixed Parameters</name>
          <ul spacing="normal">
            <li>
              <t>Normalization score range: 0-10 (0 indicates the poorest capability, 10 indicates the optimal capability)</t>
            </li>
            <li>
              <t>Data precision: integer (represented as an unsigned integer)</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="method-of-measurement">
        <name>Method of Measurement</name>
        <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
        <section anchor="reference-methods">
          <name>Reference Methods</name>
          <t>Raw metrics collection: Collect L0 service and compute raw metrics using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes), and collect L0 network performance raw metrics using existing standardized protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>).</t>
          <t>Aggregation logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation).</t>
          <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization).</t>
          <t>The reference method aggregates and normalizes L0 metrics to generate L1 metrics in different categories, and further computes an L2 singleton score for full normalization.</t>
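          <t>As a non-normative illustration of this pipeline, the following Python sketch applies a weighted average to raw L0 readings and then a sigmoid normalization onto the 0-10 score range. The sample values, weights, midpoint, and steepness shown are hypothetical tuning parameters and are not fixed by this document.</t>
          <sourcecode type="python"><![CDATA[
```python
import math

# Non-normative sketch: weighted-average aggregation of L0 readings
# followed by sigmoid normalization onto the unitless 0-10 range.
# Weights, midpoint, and steepness are hypothetical tuning values.

def weighted_average(values, weights):
    """Weighted Average Aggregation of raw L0 readings."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def sigmoid_normalize(x, midpoint, steepness):
    """Sigmoid Normalization of an aggregated value to a 0-10 score."""
    return round(10.0 / (1.0 + math.exp(-steepness * (x - midpoint))))

# e.g., three L0 CPU-availability samples aggregated into one L1 value,
l1 = weighted_average([0.8, 0.6, 0.7], [0.5, 0.3, 0.2])
# then normalized into the L2 singleton score.
l2_score = sigmoid_normalize(l1, midpoint=0.5, steepness=8.0)
```
]]></sourcecode>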
        </section>
        <section anchor="packet-stream-generation">
          <name>Packet Stream Generation</name>
          <t>N/A</t>
        </section>
        <section anchor="traffic-filtering-observation-details">
          <name>Traffic Filtering (Observation) Details</name>
          <t>N/A</t>
        </section>
        <section anchor="sampling-distribution">
          <name>Sampling Distribution</name>
          <t>Sampling method: Continuous sampling (e.g., collect L0 metrics every 10 seconds)</t>
        </section>
        <section anchor="runtime-parameters-and-data-format">
          <name>Runtime Parameters and Data Format</name>
          <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
          <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
          <!-- KY: C-SMA can see service instance IP when it is co-located with Service contact instance, right? -->

<t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
        </section>
        <section anchor="roles">
          <name>Roles</name>
          <t>C-SMA: Collects L0 service and compute raw metrics, and optionally calculates L1 and L2 metrics according to service-specific strategies.</t>
          <t>C-NMA: Collects L0 network performance raw metrics, and optionally calculates L1 and L2 metrics according to service-specific strategies.</t>
        </section>
      </section>
      <section anchor="output">
        <name>Output</name>
        <t>This category specifies all details of the output of measurements using the metric.</t>
        <section anchor="type">
          <name>Type</name>
          <t>Singleton value</t>
        </section>
        <section anchor="reference-definition-1">
          <name>Reference Definition</name>
          <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.3</t>
          <t>Score semantics: 0-3 (Low capability, not recommended for steering), 4-7 (Medium capability, optional for steering), 8-10 (High capability, priority for steering)</t>
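          <t>As a non-normative illustration, the score bands above can be mapped to a steering hint as follows; the function and label names are illustrative only.</t>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch mapping the L2 score bands to a steering hint.
# Band boundaries (0-3 / 4-7 / 8-10) follow the score semantics above;
# the function and label names are illustrative.

def steering_hint(score):
    if not 0 <= score <= 10:
        raise ValueError("L2 score must be within the 0-10 range")
    if score <= 3:
        return "low"     # low capability, not recommended for steering
    if score <= 7:
        return "medium"  # medium capability, optional for steering
    return "high"        # high capability, priority for steering
```
]]></sourcecode>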
        </section>
        <section anchor="metric-units">
          <name>Metric Units</name>
          <t>Unitless</t>
        </section>
        <section anchor="calibration">
          <name>Calibration</name>
          <t>Calibration method: Conduct benchmark calibration based on standard test sets (fixed workload) to ensure the output score deviation of C-SMA and C-NMA is lower than 0.1 (one abnormal score in every ten test rounds).</t>
          <!-- KY: Do we need more details in calibration discussions? -->

</section>
      </section>
      <section anchor="administrative-items">
        <name>Administrative Items</name>
        <section anchor="status">
          <name>Status</name>
          <t>Current</t>
        </section>
        <section anchor="requester">
          <name>Requester</name>
          <t>To-be-assigned</t>
        </section>
        <section anchor="revision">
          <name>Revision</name>
          <t>1.0</t>
        </section>
        <section anchor="revision-date">
          <name>Revision Date</name>
          <t>2026-01-20</t>
        </section>
        <section anchor="comments-and-remarks">
          <name>Comments and Remarks</name>
          <t>None</t>
        </section>
      </section>
    </section>
    <section anchor="implementation-guidance-on-using-cats-metrics">
      <name>Implementation Guidance on Using CATS Metrics</name>
      <t>&lt;Authors’ Note: This section has been moved to <xref target="I-D.ietf-cats-framework"/> at the suggestion of the chairs, since this document focuses primarily on metric definitions rather than implementation details.&gt;</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>TBD</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC6241">
          <front>
            <title>Network Configuration Protocol (NETCONF)</title>
            <author fullname="R. Enns" initials="R." role="editor" surname="Enns"/>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <author fullname="A. Bierman" initials="A." role="editor" surname="Bierman"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6241"/>
          <seriesInfo name="DOI" value="10.17487/RFC6241"/>
        </reference>
        <reference anchor="RFC6991">
          <front>
            <title>Common YANG Data Types</title>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <date month="July" year="2013"/>
            <abstract>
              <t>This document introduces a collection of common data types to be used with the YANG data modeling language. This document obsoletes RFC 6021.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6991"/>
          <seriesInfo name="DOI" value="10.17487/RFC6991"/>
        </reference>
        <reference anchor="RFC7011">
          <front>
            <title>Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information</title>
            <author fullname="B. Claise" initials="B." role="editor" surname="Claise"/>
            <author fullname="B. Trammell" initials="B." role="editor" surname="Trammell"/>
            <author fullname="P. Aitken" initials="P." surname="Aitken"/>
            <date month="September" year="2013"/>
            <abstract>
              <t>This document specifies the IP Flow Information Export (IPFIX) protocol, which serves as a means for transmitting Traffic Flow information over the network. In order to transmit Traffic Flow information from an Exporting Process to a Collecting Process, a common representation of flow data and a standard means of communicating them are required. This document describes how the IPFIX Data and Template Records are carried over a number of transport protocols from an IPFIX Exporting Process to an IPFIX Collecting Process. This document obsoletes RFC 5101.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="77"/>
          <seriesInfo name="RFC" value="7011"/>
          <seriesInfo name="DOI" value="10.17487/RFC7011"/>
        </reference>
        <reference anchor="RFC8911">
          <front>
            <title>Registry for Performance Metrics</title>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="A. Akhter" initials="A." surname="Akhter"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8911"/>
          <seriesInfo name="DOI" value="10.17487/RFC8911"/>
        </reference>
        <reference anchor="RFC8912">
          <front>
            <title>Initial Performance Metrics Registry Entries</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8912"/>
          <seriesInfo name="DOI" value="10.17487/RFC8912"/>
        </reference>
        <reference anchor="RFC9439">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
            <author fullname="Q. Wu" initials="Q." surname="Wu"/>
            <author fullname="Y. Yang" initials="Y." surname="Yang"/>
            <author fullname="Y. Lee" initials="Y." surname="Lee"/>
            <author fullname="D. Dhody" initials="D." surname="Dhody"/>
            <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
            <author fullname="L. Contreras" initials="L." surname="Contreras"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
              <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
              <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9439"/>
          <seriesInfo name="DOI" value="10.17487/RFC9439"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="20" month="November" year="2025"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   functional components, describes their interactions, and provides
   illustrative workflows of the control and data planes.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-19"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="20" month="October" year="2025"/>
            <abstract>
              <t>   Computing-Aware Traffic Steering (CATS) is a traffic engineering
   approach that optimizes the steering of traffic to a given service
   instance by considering the dynamic nature of computing and network
   resources.  In order to consider the computing and network resources,
   a system needs to share information (metrics) that describes the
   state of the resources.  Metrics from network domain have been in use
   in network systems for a long time.  This document defines a set of
   metrics from the computing domain used for CATS.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-04"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="28" month="January" year="2026"/>
            <abstract>
              <t>   Distributed computing enhances service response time and energy
   efficiency by utilizing diverse computing facilities for compute-
   intensive and delay-sensitive services.  To optimize throughput and
   response time, "Computing-Aware Traffic Steering" (CATS) selects
   servers and directs traffic based on compute capabilities and
   resources, rather than static dispatch or connectivity metrics alone.
   This document outlines the problem statement and scenarios for CATS
   within a single domain, and drives requirements for the CATS
   framework.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-13"/>
        </reference>
        <reference anchor="I-D.rcr-opsawg-operational-compute-metrics">
          <front>
            <title>Joint Exposure of Network and Compute Information for Infrastructure-Aware Service Deployment</title>
            <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
              <organization>Nokia Bell Labs</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Roland Schott" initials="R." surname="Schott">
              <organization>Deutsche Telekom</organization>
            </author>
            <date day="21" month="October" year="2024"/>
            <abstract>
              <t>   Service providers are starting to deploy computing capabilities
   across the network for hosting applications such as distributed AI
   workloads, AR/VR, vehicle networks, and IoT, among others.  In this
   network-compute environment, knowing information about the
   availability and state of the underlying communication and compute
   resources is necessary to determine both the proper deployment
   location of the applications and the most suitable servers on which
   to run them.  Further, this information is used by numerous use cases
   with different interpretations.  This document proposes an initial
   approach towards a common exposure scheme for metrics reflecting
   compute and communication capabilities.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-rcr-opsawg-operational-compute-metrics-08"/>
        </reference>
        <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
          <front>
            <title>IANA Performance Metrics Registry</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="DMTF" target="https://www.dmtf.org/">
          <front>
            <title>Distributed Management Task Force (DMTF)</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="Prometheus" target="https://prometheus.io/">
          <front>
            <title>Prometheus</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
      </references>
    </references>

<section anchor="appendix-a">
      <name>Appendix A</name>
      <section anchor="level-0-metric-representation-examples">
        <name>Level 0 Metric Representation Examples</name>
        <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts---such as those by the <xref target="DMTF"/>---that can serve as L0 metrics. This section provides illustrative examples.</t>
        <!-- JRG: The following two paragraphs seem redundants, as we have
already explained it in the previous section. So I suggest to remove them. -->

<!-- The sources of L0 metrics can be nominal, directly measured, estimated, or aggregated. Nominal L0 metrics are initially provided by resource providers. Dynamic L0 metrics are measured or estimated during the service stage. Additionally, L0 metrics support aggregation when there are multiple service instances.

L0 metrics also support the statistics defined in section 4.1. -->

<!-- TODO: next step would be to update the examples once we agree with (and update as necessary) the above changes regarding the CATS metric specification. -->

<section anchor="compute-raw-metrics">
          <name>Compute Raw Metrics</name>
          <t>This section uses CPU frequency as an example to illustrate the representation of raw compute metrics. The metric type is labeled as compute_CPU_frequency, with the unit specified in GHz. The format should support both unsigned integers and floating-point values. The corresponding metric fields are defined as follows:</t>
          <figure anchor="fig-compute-raw-metric">
            <name>An Example for Compute Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric Type: compute_CPU_frequency
      Level: L0
      Format: unsigned integer, floating point
      Unit: GHz
      Length: four octets
      Value: 2.2
Source:
      nominal

|Metric Type|Level|Format| Unit|Length| Value|Source|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits
]]></artwork>
          </figure>
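          <t>As a non-normative illustration, the following Python sketch serializes this example into its 8+2+1+4+3+32+3 bit layout, padded to seven octets. The numeric code points are hypothetical, and the sketch assumes an IEEE 754 single-precision encoding for the four-octet Value field, which this document does not mandate; only the field widths follow the figure.</t>
          <sourcecode type="python"><![CDATA[
```python
import struct

# Non-normative sketch: serialize the compute_CPU_frequency example.
# Code points are hypothetical; the Value field is assumed to use
# IEEE 754 single-precision encoding (four octets).

METRIC_TYPE_CPU_FREQ = 0x01  # hypothetical code point
LEVEL_L0 = 0                 # hypothetical: L0=0, L1=1, L2=2
FORMAT_FLOAT = 1             # hypothetical: 1 = floating point
UNIT_GHZ = 0x2               # hypothetical unit code
LENGTH_FOUR_OCTETS = 4       # hypothetical length code
SOURCE_NOMINAL = 0           # hypothetical source code

def pack_cpu_frequency(ghz: float) -> bytes:
    """Pack |Type:8|Level:2|Format:1|Unit:4|Length:3|Value:32|Source:3|
    (53 bits), left-aligned into seven octets with 3 padding bits."""
    value_bits = struct.unpack(">I", struct.pack(">f", ghz))[0]
    bits = METRIC_TYPE_CPU_FREQ
    bits = (bits << 2) | LEVEL_L0
    bits = (bits << 1) | FORMAT_FLOAT
    bits = (bits << 4) | UNIT_GHZ
    bits = (bits << 3) | LENGTH_FOUR_OCTETS
    bits = (bits << 32) | value_bits
    bits = (bits << 3) | SOURCE_NOMINAL
    return (bits << 3).to_bytes(7, "big")  # pad to whole octets

encoded = pack_cpu_frequency(2.2)  # the Value field from the example
```
]]></sourcecode>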
        </section>
        <section anchor="communication-raw-metrics">
          <name>Communication Raw Metrics</name>
          <t>This section takes the total transmitted bytes (TxBytes) as an example to show the representation of communication raw metrics. The metric is named "communication type_TxBytes". The unit is megabytes (MB). The format supports both unsigned integer and floating-point values. It occupies four octets. The source of the metric is "Directly measured" and the statistic is "mean". Example:</t>
          <figure anchor="fig-network-raw-metric">
            <name>An Example for Communication Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "communication type_TxBytes"
      Level: L0
      Format: unsigned integer, floating point
      Unit: MB
      Length: four octets
      Value: 100
Source:
      Directly measured
Statistics:
      mean

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
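          <t>A receiver could split such a metric back into its fields by walking the bit layout from the most significant field down. The sketch below is illustrative only; it assumes the communication-metric layout shown above (the compute layout plus a trailing 2-bit Statistics field, 55 bits in total), and the field codes in the demo are hypothetical:</t>

```python
# Sketch: parsing the 8|2|1|4|3|32|3|2 bit communication-metric
# layout (55 bits total). Field widths follow the figure above;
# the code points used in the demo are hypothetical placeholders.
import struct

FIELD_WIDTHS = [("metric_type", 8), ("level", 2), ("format", 1),
                ("unit", 4), ("length", 3), ("value", 32),
                ("source", 3), ("statistics", 2)]

def parse_metric(word):
    """Split a 55-bit big-endian integer into named fields."""
    fields = {}
    shift = sum(w for _, w in FIELD_WIDTHS)
    for name, width in FIELD_WIDTHS:
        shift -= width
        fields[name] = (word >> shift) & ((1 << width) - 1)
    if fields["format"] == 1:  # floating point: decode IEEE 754 bits
        fields["value"] = struct.unpack(
            ">f", struct.pack(">I", fields["value"]))[0]
    return fields

# Build a sample word for TxBytes = 100 MB, unsigned-integer format
word = 0
for v, (_, width) in zip([0x02, 0, 0, 0x5, 4, 100, 1, 0], FIELD_WIDTHS):
    word = (word << width) | v
print(parse_metric(word)["value"])   # prints 100
```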
        </section>
        <section anchor="delay-raw-metrics">
          <name>Delay Raw Metrics</name>
          <t>Delay is a synthesized metric that is influenced by computing, storage access, and network transmission. It usually refers to the overall processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. The metric is named "delay_raw". The format supports both unsigned integer and floating-point values, the unit is microseconds, and the value occupies four octets. The example below reports an aggregated value with the "max" statistic:</t>
          <figure anchor="fig-delay-raw-metric">
            <name>An Example for Delay Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "delay_raw"
      Level: L0
      Format: unsigned integer, floating point
      Unit: Microsecond(us)
      Length: four octets
      Value: 231.5
Source:
      aggregation
Statistics:
      max

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
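          <t>To make the Source ("aggregation") and Statistics ("max") fields concrete, the sketch below shows how per-request delay samples might be aggregated into the single Value carried by the metric. The sample set and reporting window are hypothetical; this document does not prescribe a sampling procedure:</t>

```python
# Sketch: aggregating raw per-request delays (in microseconds)
# into the single value reported in the delay_raw metric.
# The sample values below are hypothetical.

def delay_statistic(samples_us, statistic="max"):
    """Aggregate raw delay samples into one reported value."""
    if statistic == "max":
        return max(samples_us)
    if statistic == "mean":
        return sum(samples_us) / len(samples_us)
    raise ValueError(f"unsupported statistic: {statistic}")

# Delays observed for one service over a reporting interval
samples = [198.4, 210.0, 231.5, 205.7]
print(delay_statistic(samples))   # prints 231.5, the reported Value
```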
        </section>
      </section>
    </section>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
      <contact initials="H." surname="Shi" fullname="Hang Shi">
        <organization>Huawei</organization>
        <address>
          <email>shihang9@huawei.com</email>
        </address>
      </contact>
    </section>
  </back>

</rfc>
