<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.7.0) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-04" category="std" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.25.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-04"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="Luis M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="H." surname="Shi" fullname="Hang Shi">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>shihang9@huawei.com</email>
      </address>
    </author>
    <date year="2025" month="October" day="20"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metric</keyword>
    <abstract>
      <?line 79?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of the resources. Metrics from the network domain have been in use in network systems for a long time. This document defines a set of metrics from the computing domain used for CATS.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 83?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>Metrics from the network domain have already been defined in previous documents, e.g., <xref target="RFC9439"/>, <xref target="RFC8912"/>, and <xref target="RFC8911"/>, and have been in use in network systems for a long time. This document focuses on categorizing the relevant metrics in the computing domain for CATS into three levels based on their complexity and granularity.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
      </ul>
    </section>
    <section anchor="design-principles">
      <name>Design Principles</name>
      <section anchor="three-level-metrics">
        <name>Three-Level Metrics</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. Additionally, it MUST be useful in practice. To that end, a CATS system should select the most appropriate metric(s) for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Introducing a definition of metrics requires balancing the following trade-off: if the metrics are too fine-grained, they become unscalable due to the excessive number of metrics that must be communicated through the metrics distribution protocol. (See <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> for a discussion of metrics distribution protocols.) Conversely, if the metrics are too coarse-grained, they may not have sufficient information to enable proper operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form---consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may, to some extent, constrain implementation flexibility across diverse CATS use cases. Implementers often seek balanced approaches that consider trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0 (L0): Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1 (L1): Metrics normalized within categories.</strong> These metrics are derived by aggregating L0 metrics into multiple categories, such as network and computing. Each category is summarized with a single L1 metric by normalizing it into a value within a defined range of scores.</t>
          </li>
          <li>
            <t><strong>Level 2 (L2): Single normalized metric.</strong> These metrics are derived by aggregating lower-level metrics (L0 or L1) into a single L2 metric, which is then normalized into a value within a defined range of scores.</t>
          </li>
        </ul>
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics encompass detailed, raw metrics, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>CPU: Base Frequency, boosted frequency, number of cores, core utilization, memory bandwidth, memory size, memory utilization, power consumption.</t>
          </li>
          <li>
            <t>GPU: Frequency, number of render units, memory bandwidth, memory size, memory utilization, core utilization, power consumption.</t>
          </li>
          <li>
            <t>NPU: Computing power, utilization, power consumption.</t>
          </li>
          <li>
            <t>Network: Bandwidth, capacity, throughput, bytes transmitted, bytes received, host bus utilization.</t>
          </li>
          <li>
            <t>Storage: Available space, read speed, write speed.</t>
          </li>
          <li>
            <t>Delay: Time taken to process a request.</t>
          </li>
        </ul>
        <t>L0 metrics serve as foundational data and do not require classification. They provide basic information to support higher-level metrics, as detailed in the following sections.</t>
        <t>L0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be solution-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS).</t>
        <t>Regarding network-related information, <xref target="RFC8911"/> and <xref target="RFC8912"/> define various performance metrics and their registries. Additionally, in <xref target="RFC9439"/>, the ALTO WG introduced an extended set of metrics related to network performance, such as throughput and delay. For compute metrics, <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> lists a set of cloud resource metrics.</t>
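As a non-normative sketch of how L0 metrics might be exposed through a RESTful API, the following hypothetical JSON payload reports a few raw metrics with their units; the endpoint name, keys, and units are illustrative assumptions and are not standardized by this document:

```python
import json

# Hypothetical L0 metrics for one service site; keys and units are
# illustrative only -- this document does not standardize raw metrics.
l0_metrics = {
    "cpu": {"cores": 64, "core_utilization_pct": 37.5, "base_freq_mhz": 2600},
    "gpu": {"memory_size_gbyte": 80, "memory_utilization_pct": 12.0},
    "network": {"bandwidth_gbps": 100, "throughput_gbps": 41.2},
    "storage": {"available_space_gbyte": 2048, "read_speed_mbps": 3500},
}

def get_l0_metrics():
    # A RESTful endpoint would typically serialize the raw metrics as JSON.
    return json.dumps(l0_metrics)

payload = get_l0_metrics()
```

Each resource keeps its native unit and format at this level; normalization only happens when L1 or L2 metrics are derived.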
      </section>
      <section anchor="level-1-normalized-metrics-in-categories">
        <name>Level 1: Normalized Metrics in Categories</name>
        <t>L1 metrics are organized into distinct categories, such as computing, communication, service, and composed metrics. Each L0 metric is classified into one of these categories. Within each category, a single L1 metric is computed using an <em>aggregation function</em> and normalized to a unitless score that represents the performance of the underlying resources according to that category. Potential categories include:</t>
        <!-- JRG Note: TODO, define aggregation and normalization function -->

<ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A normalized value derived from computing-related L0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A normalized value derived from communication-related L0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Service:</strong> A normalized value derived from service-related L0 metrics, such as tokens per second and service availability.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A normalized value derived from an end-to-end aggregation function by leveraging both computing and communication metrics. For example, end-to-end delay computed as the sum of all delays along a path.</t>
          </li>
        </ul>
        <t>Editor note: detailed categories can be updated according to the CATS WG discussion.</t>
        <t>L0 metrics, such as those defined in <xref target="RFC8911"/>, <xref target="RFC8912"/>, <xref target="RFC9439"/>, and <xref target="I-D.rcr-opsawg-operational-compute-metrics"/>, can be categorized into the aforementioned categories. Each category will employ its own aggregation function (e.g., a weighted sum) to generate the normalized value. This approach allows the protocol to focus solely on the metric categories and their normalized values, thereby avoiding the need to process solution-specific detailed metrics.</t>
      </section>
      <section anchor="level-2-single-normalized-metric">
        <name>Level 2: Single Normalized Metric</name>
        <t>The L2 metric is a single score value derived from the lower-level metrics (L0 or L1) using an aggregation function. Different implementations may employ different aggregation functions to characterize the overall performance of the underlying compute and communication resources. The definition of the L2 metric simplifies the complexity of collecting and distributing numerous lower-level metrics by consolidating them into a single, unified score.</t>
        <t>TODO: Some implementations may support the configuration of Ingress CATS-Forwarders with the metric normalizing method so that it can decode the information from the L1 or L0 metrics.</t>
        <t>Figure 1 provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logic of CATS Metrics in levels</name>
          <artwork><![CDATA[
                                    +--------+
                         L2 Metric: |   M2   |
                                    +---^----+
                                        |
                    +-------------+-----+-----+------------+
                    |             |           |            |
                +---+----+        |       +---+----+   +---+----+
    L1 Metrics: |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                +---^----+        |       +---^----+   +----^---+
                    |             |           |             |
               +----+---+         |       +---+----+        |
               |        |         |       |        |        |
            +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 L0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
            +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="cats-metrics-framework-and-specification">
      <name>CATS Metrics Framework and Specification</name>
      <t>The CATS metrics framework is a key component of the CATS architecture. It defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>This section defines the detailed structure used to represent CATS metrics. The design follows principles established in related IETF specifications, such as the network performance metrics outlined in <xref target="RFC9439"/>.</t>
        <t>Each CATS metric is expressed as a structured set of fields, with each field describing a specific property of the metric. The following definition introduces the fields used in the CATS metric representations.</t>
        <!-- JRG Note and TODO: Define each of the types, formats, etc.. Do we need to standardize them? -->
<figure anchor="fig-metric-def">
          <name>CATS Metric Fields</name>
          <artwork><![CDATA[
- Cats_metric:
      - Metric_type:
            The type of the CATS metric.
            Examples: compute_cpu, storage_disk_size, network_bw,
            compute_delay, network_delay, compute_norm,
            storage_norm, network_norm, delay_norm.
      - Format:
            The encoding format of the metric.
            Examples: int, float.
      - Format_std (optional):
            The standard used to encode and decode the value
            field according to the format field.
            Example: ieee_754, ascii.
      - Length:
            The size of the value field measured in octets.
            Examples: 2, 4, 8, 16, 32, 64.
      - Unit:
            The unit of this metric.
            Examples: mhz, ghz, byte, kbyte, mbyte,
            gbyte, bps, kbps, mbps, gbps, tbps, tflops, none.
      - Source (optional):
            The source of information used to obtain the value field.
            Examples: nominal, estimation, normalization,
            aggregation.
      - Statistics(optional):
            The statistical function used to obtain the value field.
            Examples: max, min, mean, cur.
      - Level:
            The level this metric belongs to.
            Examples: L0, L1, L2.
      - Value:
            The value of this metric.
            Examples: 12, 3.2.
]]></artwork>
        </figure>
        <t>Next, we describe each field in more detail:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type (type)</strong>: This field specifies the category or kind of CATS metric being measured, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Format (format)</strong>: This field indicates the data encoding format of the metric, such as whether the value is represented as an integer, a floating-point number, or has no specific format.</t>
          </li>
          <li>
            <t><strong>Format standard (format_std, optional)</strong>: This optional field indicates the standard used to encode and decode the value field according to the format field. It is only required if the value field is encoded using a specific standard, and knowing this standard is necessary to decode the value field. Examples of format standards include ieee_754 and ascii. This field ensures that the value can be accurately interpreted by specifying the encoding method used.</t>
          </li>
          <li>
            <t><strong>Length (length)</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. Examples include 4, 8, 16, 32, and 64. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit (unit)</strong>: This field defines the measurement units for the metric, such as frequency, data size, or data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source (source, optional)</strong>: This field describes the origin of the information used to obtain the metric. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric can be obtained directly from the underlying device and it does not need to be estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value was normalized. For instance, a metric could be normalized to take a value from 0 to 1, from 0 to 10, or to take a percentage value. Metrics of this type do not have units.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value was obtained by using an aggregation function.
<!-- JRG: Define aggregation and normalization functions -->
                </t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics (statistics, optional)</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. It is useful for services that require specific statistics for service instance selection.  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Level (level)</strong>: This field specifies the level at which the metric is measured. It is used to categorize the metric based on its granularity and scope. Examples include L0, L1, and L2. The level field helps in understanding the level of detail and specificity of the metric being measured.</t>
          </li>
          <li>
            <t><strong>Value (value)</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
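To make the field structure above concrete, the following sketch models a CATS metric as a simple record. The class, field names, and defaults are an illustrative assumption about how the fields could be carried in an implementation, not a normative encoding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatsMetric:
    # Mandatory fields of the CATS metric structure.
    metric_type: str                   # e.g., "compute_norm", "network_bw"
    format: str                        # e.g., "int", "float"
    length: int                        # size of the value field in octets
    unit: str                          # e.g., "gbps", "none"
    level: str                         # "L0", "L1", or "L2"
    value: float
    # Optional fields default to absent (None).
    format_std: Optional[str] = None   # e.g., "ieee_754"
    source: Optional[str] = None       # e.g., "nominal", "normalization"
    statistics: Optional[str] = None   # e.g., "max", "min", "mean", "cur"

# Example: a raw (L0) network bandwidth metric.
m = CatsMetric(metric_type="network_bw", format="float", length=4,
               unit="gbps", level="L0", value=3.2,
               source="nominal", statistics="cur")
```

A wire encoding would additionally fix codepoints and bit widths for each field; this record only captures the logical structure.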
      </section>
      <section anchor="aggregation-and-normalization-functions">
        <name>Aggregation and Normalization Functions</name>
        <t>In the context of CATS metric processing, aggregation and normalization are two fundamental operations that transform raw and derived metrics into forms suitable for decision-making and comparison across heterogeneous systems.</t>
        <section anchor="aggregation">
          <name>Aggregation</name>
          <t>Aggregation functions combine multiple metric values into a single representative value. This is particularly useful when metrics are collected from multiple sources or over time intervals. For example, CPU usage metrics from multiple service instances may be aggregated to produce a single load indicator for a service. Common aggregation functions include:</t>
          <ul spacing="normal">
            <li>
              <t>Mean average: Computes the arithmetic average of a set of values.</t>
            </li>
            <li>
              <t>Minimum/maximum: Selects the lowest or highest value from a set.</t>
            </li>
            <li>
              <t>Weighted average: Applies weights to values based on relevance or priority.</t>
            </li>
          </ul>
          <t>The output of an aggregation function is typically a Level 1 metric, derived from multiple Level 0 metrics, or a Level 2 metric, derived from multiple Level 0 or Level 1 metrics.</t>
          <figure anchor="fig-agg-funct">
            <name>Aggregation function</name>
            <artwork><![CDATA[
      +------------+     +-------------------+
      | Metric 1.1 |---->|                   |
      +------------+     |    Aggregation    |     +----------+
           ...           |     Function      |---->| Metric 2 |
      +------------+     |                   |     +----------+
      | Metric 1.n |---->|                   |
      +------------+     +-------------------+

      Input: Multiple values                   Output: Single value

]]></artwork>
          </figure>
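The aggregation functions listed above can be sketched as follows. This is a minimal illustration; the choice of function and the example weights are assumptions, since this document does not mandate a particular aggregation:

```python
def mean_agg(values):
    # Arithmetic mean of the input metric values.
    return sum(values) / len(values)

def min_agg(values):
    # Lowest value in the set.
    return min(values)

def max_agg(values):
    # Highest value in the set.
    return max(values)

def weighted_agg(values, weights):
    # Weighted average; weights express the relevance of each metric.
    assert len(values) == len(weights)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Example: combine per-instance CPU utilization into one load indicator.
cpu_util = [0.2, 0.8, 0.5]
load = weighted_agg(cpu_util, weights=[1, 1, 2])
```

All four functions take multiple values and return a single representative value, matching the input/output shape of the figure above.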
        </section>
        <section anchor="normalization">
          <name>Normalization</name>
          <t>Normalization functions convert metric values with or without units into unitless scores, enabling comparison across different types of metrics and systems. This is essential when combining metrics from a heterogeneous set of resources (e.g., latency measured in milliseconds with CPU usage measured in percentage) into a unified decision model.</t>
          <t>Normalization functions often map values into a bounded range, such as integers from 0 to 5 or real numbers from 0 to 1, using techniques like:</t>
          <ul spacing="normal">
            <li>
              <t>Sigmoid function: Smoothly maps input values to a bounded range.</t>
            </li>
            <li>
              <t>Min-max scaling: Rescales values based on known minimum and maximum bounds.</t>
            </li>
            <li>
              <t>Z-score normalization: Standardizes values based on statistical distribution.</t>
            </li>
          </ul>
          <t>Normalized metrics facilitate composite scoring and ranking, and can be used to produce Level 1 and Level 2 metrics.</t>
          <figure anchor="fig-norm-funct">
            <name>Normalization function</name>
            <artwork><![CDATA[
      +----------+     +------------------------+     +----------+
      | Metric 1 |---->| Normalization Function |---->| Metric 2 |
      +----------+     +------------------------+     +----------+

      Input:  Value with or without units         Output: Unitless value

]]></artwork>
          </figure>
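Under the assumption that scores map to the range [0, 1], the three techniques above can be sketched as:

```python
import math

def sigmoid_norm(x, midpoint=0.0, steepness=1.0):
    # Smoothly maps any real value into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def min_max_norm(x, lo, hi):
    # Rescales x into [0, 1] given known bounds; clamps out-of-range input.
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def z_score(x, mean, std):
    # Standardizes x relative to a statistical distribution.
    return (x - mean) / std

# Example: normalize a 30 ms latency sample against a 0-100 ms range.
score = min_max_norm(30.0, lo=0.0, hi=100.0)
```

Note that the z-score is unbounded; an implementation that needs a bounded score would typically pass it through a sigmoid or clamp it afterwards.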
        </section>
      </section>
      <section anchor="on-the-meaning-of-scores-in-heterogeneous-metrics-systems">
        <name>On the Meaning of Scores in Heterogeneous Metrics Systems</name>
        <t>In a system like CATS, where metrics originate from heterogeneous resources---such as compute, communication, and storage---the interpretation of scores requires careful consideration. While normalization functions can convert raw metrics into unitless scores to enable comparison, these scores may not be directly comparable across different implementations. For example, a score of 4 on a scale from 1 to 5 may represent a high-quality resource in one implementation, but only an average one in another.</t>
        <t>This ambiguity arises because different implementations may apply distinct normalization strategies, scaling methods, or semantic interpretations. As a result, relying solely on unitless scores for decision-making can lead to inconsistent or suboptimal outcomes, especially when metrics are aggregated from multiple sources.</t>
        <t>To mitigate this, implementors of CATS metrics SHOULD provide clear and precise definitions of their metrics---particularly for unitless scores---and explain how these scores should be interpreted. This documentation should be designed to support operators in making informed decisions, even when comparing metrics from different implementations.</t>
        <t>Similarly, operators SHOULD exercise caution when making potentially impactful decisions based on unitless metrics whose definitions are unclear or underspecified. In such cases, especially when decisions are critical or sensitive, operators MAY choose to rely on Level 0 (L0) metrics with units, which typically offer a more direct and unambiguous understanding of resource conditions.</t>
      </section>
      <section anchor="level-metric-representations">
        <name>Level Metric Representations</name>
        <section anchor="level-0-metrics">
          <name>Level 0 Metrics</name>
          <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts---such as those by the <xref target="DMTF"/>---that can serve as L0 metrics. L0 metrics comprise all raw metrics; given their diversity and the extensive existing work covering them, this document does not attempt to standardize them.</t>
          <t>See Appendix A for examples of L0 metrics.</t>
        </section>
        <section anchor="level-1-metrics">
          <name>Level 1 Metrics</name>
          <t>L1 metrics are normalized from L0 metrics. Although they do not have units, they can still be classified into types such as compute, communication, service, and composed metrics. This classification is useful because it makes L1 metrics semantically meaningful.</t>
          <t>The source of L1 metrics is normalization. Based on L0 metrics, service providers design their own algorithms to normalize metrics, for example, by assigning a weight to each raw metric and computing a weighted sum. L1 metrics do not require further statistical values.</t>
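As an illustrative sketch of such a provider-specific algorithm (the weights and the 0-to-5 score range are assumptions chosen for this example, not defined by this document), an L1 compute score could be derived from already-normalized L0 metrics by a weighted sum:

```python
def l1_score(norm_metrics, weights, score_max=5):
    # norm_metrics: L0 metrics already normalized into [0, 1].
    # Returns an integer score in [0, score_max].
    total_weight = sum(weights.values())
    weighted = sum(norm_metrics[k] * w for k, w in weights.items())
    return round(weighted / total_weight * score_max)

# Example: an L1 compute score emphasizing CPU and GPU utilization.
compute_l0 = {"cpu_util": 0.9, "gpu_util": 0.7, "mem_util": 0.8}
weights = {"cpu_util": 2, "gpu_util": 2, "mem_util": 1}
score = l1_score(compute_l0, weights)
```

Different providers may choose different weights or a different score range; only the resulting normalized value is distributed.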
          <section anchor="normalized-compute-metrics">
            <name>Normalized Compute Metrics</name>
            <t>The metric type of normalized compute metrics is "compute_norm", and its format is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-compute-metric">
              <name>Example of a normalized L1 compute metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: compute_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 5
Source:
      normalization


|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
          </section>
          <section anchor="normalized-communication-metrics">
            <name>Normalized Communication Metrics</name>
            <t>The metric type of normalized communication metrics is "communication_norm", and its format is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-communication-metric">
              <name>Example of a normalized L1 communication metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: communication_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits

]]></artwork>
            </figure>
          </section>
          <section anchor="normalized-composed-metrics">
            <name>Normalized Composed Metrics</name>
            <t>The metric type of normalized composed metrics is "composed_norm", and its format is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-metric">
              <name>Example of a normalized L1 composed metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: composed_norm
      Level: L1
      Format: unsigned integer
      Length: an octet
      Value: 8
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="level-2-metrics">
          <name>Level 2 Metrics</name>
          <t>A Level 2 metric is a single-value, normalized metric that does not carry any inherent physical unit or meaning. While each provider may employ its own internal methods to compute this value, all providers must adhere to the representation guidelines defined in this section to ensure consistency and interoperability of the normalized output.</t>
          <t>Metric type is "norm_fi". The format of the value is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
          <figure anchor="fig-level-2-metric">
            <name>Example of a normalized L2 metric</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: norm_fi
      Level: L2
      Format: unsigned integer
      Length: an octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
          </figure>
          <t>The single normalized value also facilitates aggregation across multiple service instances. When each instance provides its own normalized value, no additional statistical processing is required at the instance level. Instead, aggregation can be performed externally using standardized methods, enabling scalable and consistent interpretation of metrics across distributed environments.</t>
        </section>
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from L0 to L1 to L2, with each level offering a different degree of abstraction to address the diverse requirements of various services. Table 1 provides a comparative overview of these metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">High</td>
            <td align="left">Low</td>
            <td align="left">Low</td>
            <td align="left">High</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Low</td>
            <td align="left">High</td>
            <td align="left">High</td>
            <td align="left">Medium</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, different services may define their own sets---potentially resulting in hundreds or even thousands of unique metrics. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, L0 metrics are generally confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each normalized into a single value. This structure makes them more suitable for protocol encoding and standardization. Level 2 metrics take simplification a step further by consolidating all relevant information into a single normalized value, making them the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol’s extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics may require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing L0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
    </section>
    <section anchor="implementation-guidance-on-using-cats-metrics">
      <name>Implementation Guidance on Using CATS Metrics</name>
      <t>&lt;Authors Note: This part has been moved to <xref target="I-D.ietf-cats-framework"/>, according to he chairs' sugguestion. Since this document is primarily on metric definition, rather than real implementations.&gt;</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>TBD</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-informative-references">
      <name>Informative References</name>
      <reference anchor="I-D.ietf-cats-usecases-requirements">
        <front>
          <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
          <author fullname="Kehan Yao" initials="K." surname="Yao">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="Hang Shi" initials="H." surname="Shi">
            <organization>Huawei Technologies</organization>
          </author>
          <author fullname="Shuai Zhang" initials="S." surname="Zhang">
            <organization>China Unicom</organization>
          </author>
          <author fullname="Qing An" initials="Q." surname="An">
            <organization>Alibaba Group</organization>
          </author>
          <date day="12" month="October" year="2025"/>
          <abstract>
            <t>   Distributed computing is a computing pattern that service providers
   can follow and use to achieve better service response time and
   optimized energy consumption.  In such a distributed computing
   environment, compute intensive and delay sensitive services can be
   improved by utilizing computing resources hosted in various computing
   facilities.  Ideally, compute services are balanced across servers
   and network resources to enable higher throughput and lower response
   time.  To achieve this, the choice of server and network resources
   should consider metrics that are oriented towards compute
   capabilities and resources instead of simply dispatching the service
   requests in a static way or optimizing solely on connectivity
   metrics.  The process of selecting servers or service instance
   locations, and of directing traffic to them on chosen network
   resources is called "Computing-Aware Traffic Steering" (CATS).

   This document provides the problem statement and the typical
   scenarios for CATS, which shows the necessity of considering more
   factors when steering traffic to the appropriate computing resource
   to better meet the customer's expectations.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-08"/>
      </reference>
      <reference anchor="I-D.ietf-cats-framework">
        <front>
          <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
          <author fullname="Cheng Li" initials="C." surname="Li">
            <organization>Huawei Technologies</organization>
          </author>
          <author fullname="Zongpeng Du" initials="Z." surname="Du">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
            <organization>Orange</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="John Drake" initials="J." surname="Drake">
            <organization>Independent</organization>
          </author>
          <date day="16" month="October" year="2025"/>
          <abstract>
            <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   components, describes their interactions, and provides illustrative
   workflows of the control and data planes.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-16"/>
      </reference>
      <reference anchor="I-D.rcr-opsawg-operational-compute-metrics">
        <front>
          <title>Joint Exposure of Network and Compute Information for Infrastructure-Aware Service Deployment</title>
          <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
            <organization>Nokia Bell Labs</organization>
          </author>
          <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
            <organization>Telefonica</organization>
          </author>
          <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
            <organization>Qualcomm Europe, Inc.</organization>
          </author>
          <author fullname="Roland Schott" initials="R." surname="Schott">
            <organization>Deutsche Telekom</organization>
          </author>
          <date day="21" month="October" year="2024"/>
          <abstract>
            <t>   Service providers are starting to deploy computing capabilities
   across the network for hosting applications such as distributed AI
   workloads, AR/VR, vehicle networks, and IoT, among others.  In this
   network-compute environment, knowing information about the
   availability and state of the underlying communication and compute
   resources is necessary to determine both the proper deployment
   location of the applications and the most suitable servers on which
   to run them.  Further, this information is used by numerous use cases
   with different interpretations.  This document proposes an initial
   approach towards a common exposure scheme for metrics reflecting
   compute and communication capabilities.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-rcr-opsawg-operational-compute-metrics-08"/>
      </reference>
      <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
        <front>
          <title>performance-metrics</title>
          <author>
            <organization/>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="DMTF" target="https://www.dmtf.org/">
        <front>
          <title>DMTF</title>
          <author>
            <organization/>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="RFC8911">
        <front>
          <title>Registry for Performance Metrics</title>
          <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
          <author fullname="B. Claise" initials="B." surname="Claise"/>
          <author fullname="P. Eardley" initials="P." surname="Eardley"/>
          <author fullname="A. Morton" initials="A." surname="Morton"/>
          <author fullname="A. Akhter" initials="A." surname="Akhter"/>
          <date month="November" year="2021"/>
          <abstract>
            <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8911"/>
        <seriesInfo name="DOI" value="10.17487/RFC8911"/>
      </reference>
      <reference anchor="RFC8912">
        <front>
          <title>Initial Performance Metrics Registry Entries</title>
          <author fullname="A. Morton" initials="A." surname="Morton"/>
          <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
          <author fullname="P. Eardley" initials="P." surname="Eardley"/>
          <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
          <date month="November" year="2021"/>
          <abstract>
            <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8912"/>
        <seriesInfo name="DOI" value="10.17487/RFC8912"/>
      </reference>
      <reference anchor="RFC9439">
        <front>
          <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
          <author fullname="Q. Wu" initials="Q." surname="Wu"/>
          <author fullname="Y. Yang" initials="Y." surname="Yang"/>
          <author fullname="Y. Lee" initials="Y." surname="Lee"/>
          <author fullname="D. Dhody" initials="D." surname="Dhody"/>
          <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
          <author fullname="L. Contreras" initials="L." surname="Contreras"/>
          <date month="August" year="2023"/>
          <abstract>
            <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
            <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
            <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9439"/>
        <seriesInfo name="DOI" value="10.17487/RFC9439"/>
      </reference>
    </references>
    <?line 472?>

<section anchor="appendix-a">
      <name>Appendix A</name>
      <section anchor="level-0-metric-representation-examples">
        <name>Level 0 Metric Representation Examples</name>
        <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts---such as those by the <xref target="DMTF"/>---that can serve as L0 metrics. This section provides illustrative examples.</t>
        <!-- JRG: The following two paragraphs seem redundants, as we have
already explained it in the previous section. So I suggest to remove them. -->

<!-- The sources of L0 metrics can be nominal, directly measured, estimated, or aggregated. Nominal L0 metrics are initially provided by resource providers. Dynamic L0 metrics are measured or estimated during the service stage. Additionally, L0 metrics support aggregation when there are multiple service instances.

L0 metrics also support the statistics defined in section 4.1. -->

<!-- TODO: next step would be to update the examples once we agree with (and update as necessary) the above changes regarding the CATS metric specification. -->

<section anchor="compute-raw-metrics">
          <name>Compute Raw Metrics</name>
          <t>This section uses CPU frequency as an example to illustrate the representation of raw compute metrics. The metric type is labeled as compute_CPU_frequency, with the unit specified in GHz. The format should support both unsigned integers and floating-point values. The corresponding metric fields are defined as follows:</t>
          <figure anchor="fig-compute-raw-metric">
            <name>An Example for Compute Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric Type: compute_CPU_frequency
      Level: L0
      Format: unsigned integer, floating point
      Unit: GHz
      Length: four octets
      Value: 2.2
Source:
      nominal

|Metric Type|Level|Format| Unit|Length| Value|Source|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits
]]></artwork>
          </figure>
        </section>
        <section anchor="communication-raw-metrics">
          <name>Communication Raw Metrics</name>
          <t>This section takes the total transmitted bytes (TxBytes) as an example to show the representation of communication raw metrics. TxBytes are named as "communication type_TxBytes". The unit is Mega Bytes (MB). Format is unsigned integer or floating point. It will occupy 4 octets. The source of the metric is "Directly measured" and the statistics is "mean". Example:</t>
          <figure anchor="fig-network-raw-metric">
            <name>An Example for Communication Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "communication type_TXBytes"
      Level: L0
      Format: unsigned integer, floating point
      Unit: MB
      Length: four octets
      Value: 100
Source:
      Directly measured
Statistics:
      mean

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
        </section>
        <section anchor="delay-raw-metrics">
          <name>Delay Raw Metrics</name>
          <t>Delay is a kind of synthesized metric which is influenced by computing, storage access, and network transmission. Usually delay refers to the overal processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. It is named as "delay_raw". The format should support both unsigned integer or floating point. Its unit is microseconds, and it occupies 4 octets. For example:</t>
          <figure anchor="fig-delay-raw-metric">
            <name>An Example for Delay Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "delay_raw"
      Level: L0
      Format: unsigned integer, floating point
      Unit: Microsecond(us)
      Length: four octets
      Value: 231.5
Source:
      aggregation
Statistics:
      max

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
        </section>
      </section>
    </section>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA9092ZYbx3Xv+IoK+aAhCUCcIeVIOI6lEReRDkeSOSMr9oN5
Ct0FoDO9QL0MCGnokw/ID+Rf8jX5gfxC7lZbd88my7ETHonEUl1169bdl8Js
NptM2qzNzULde3Z8dqpOTFtnSaOem1VWZm1WlfcmermszUVvxL1Joluzrur9
QjVtOpmkVVLqAiZKa71qZ5lpVzMY0swKemCWuhlnj59Omm5ZZE0D79r9Fh56
/eLs5aTsiqWpF5MUZl5MkqpsTNl0zWICiz+Z6NpoAOJt1bVZub432VX1+bqu
ui1CVhVb+nh2vINx6gxgWGWJOm2NqWn0udnDA+lC4SamioGaTHTXbipYUs0m
Cv5kZbNQf5irfzYbXdInqy7PeV/0mfqDrujzql7rMvtR44Zg0k1WanVSLbPc
0Nem0Fm+UHtdneNjXyQ4oKDv50lV0Jik6soW8UdPRyA8m6s3WW/9ZxtTru3H
8fKvOr0zmTozyaas8mqdmSaEIpnnX2xoyG3WfjM/matncDC1qXXTA+LNXA2+
jWE5M7lZVWWW6BCEvMuaIlt3JgcQ5OGiq7M8r75o3RMMXgDLb+fqbdXMvspq
nbc9UH4Lx5n1v45h+V2nc5iyUC+6utqaqXpdJvMQrH+tq+aLH9ps/oOMHEDw
aq5ON/2jeKXhJOzHdzqKZpMBOaw/u+E8JoSkbNm1SJwzxcueVBv4N1VfVl2i
U53VE1p9ob6pYU4kPFml4IHzpR34RUUjaDk72x+rcr1Fknre2Xl6ZCyzpd2P
MnRAxpOyqgvY+QXwq1JvXz5Tn352eBi8PrKvP3v65LPFZJKVq/CJ17Pncy8p
usYkujHNrDY/dFltClO2zXDYqgb4kfvtV3VSz6pto3dr+AcoC09C57OEhIIR
AQQTwXD4mgAok+BzPACRgiPf09dOTvCB8wuWU/S0rtemXahN226bxccf73a7
eaZLPYexH2uQc+uSNvPxyPxjn83fb9oiR4Cfn5y9jCDED34uSGnRrgikyWQ2
mym9bNpaJ+1kcpP8VAcoNh+orFFatfIlEERWyvd6u60rnWxUu9GtqrZtVmQ/
mgbeGlAPMqhauWfbCiZaAxWUqjH1RZYYZLcWcaCWe4WyP0v5KZwi3QPZwWOl
bjsADyZKLMRKl6kqTYsEoWrTVF2dmGYOrA5YgSlwKTsdzXXDk1MArNkDzAV8
Z9IGn282iBRHu1WpDuSgHvCGU9MkwK5uw3AKtFt4E4Bkleuqrgq3cFoBl5Vq
oy9g5wbwAW+AEfAfO4TBgeeqGoDLK0RKVpi5OtvAgYDi7ZC4FGlYgyfUmBaX
L8L14q3LqrBQStPi8c6ZKoosTYH9J/cBhW1dpV2CO55MTuWc4KAvEJmwECAl
Ndu82uOUfvJEbzWICFD2CE0CMpbxYveDC26qphXCyUHy4xKNajqgIA1byhqW
fgDd8WuFD+WVTuFsjt9+/Pu3dHBpDdRT5wbmvjCbLMnp6ApETgWL1UwC8Kqx
QBIHgurv8jbb5mZAeLyh2jBAsLSAfqHrrOoAuqw1RA5olcDIpkNSzhD1uOMk
a/e0NcRsC/9b4hVplpI6wkFwNC/eA9fDo7D8we+qFw+UgQ8SXBSoHx8K8ALn
DCTYbbdV3TJ9gcKkQxEaa4I9IPk6AQlw5TnMeQf2zuTM4Smgj59+ukL0fvgA
1EL24P8hkQBoBHgyA5zWAudMierpsapE2rAnFWoBz0RwskugLBxfdKWcDS0l
iiZc6jTDZ/ls3McBw/BBO1J0PDVF4VGmGkybH2GMXRwfBEoHIDOdByQIp2Vq
0nnEb3sCx5R6SXOaJIPlLX4dzlP8HPltigDWBgCR08FvLbQKUJmLWSPTguWS
0KEB4wFhWTEZYAuI4kYhp3Mw5NM9CzuWWURrAO8F8ZkVaACfma/nU6BCsCDQ
gPjwQd6gZfHhw3//578jYPaTQ/waP/jLxOgKXoARomDb4uIwYpiVc3OhYZA7
mHZcsFqZiidUwZDaGAWPmrxRS41CtyLZlNX0aG7e28Nbg6HW5SBx2v0cZTCY
2hd47Cge8XvvmDWTSQw4QY3QrCowq3cEs6lhwwGSr2FosI9mt5UUOFT0QfAS
ebQFe8LxLW7guUHrR30LTydI6wD2/fuAcsDI7A1ixKrFyeQYkN61+VWwjpqH
eOahmgXbN4U5RSuzQqSDsCd28t3pGVCIakA2Ip9MmZf4hOGhDORAhkeC0zP1
12gIA/53GYgNYAb4QDcVcxmeCfioKO/pfQJ6ba6O0zRjKzTfT2FKtyrsAdwI
JnfAFCCNxBKtbMoUpTdBK6zVbKouT0Xg0z4LmJ9F67bO0MzgfR2AJYI05ySm
0xFTgDap1qUlYsRLtloB3wdkXOg9mjd5RxoJDgGo0qAyIFUMJ9eqnd4jJYEb
kJIMKFmQg9bKkECQ1/B0gGit2UCSWHm/PzRJ5AiRG3KRKj3KrXVqZtVqtVAZ
G1KhLGyrSuHZzoBdkFyIBlCkINyqK+3hgu+Cg+l58x5EcAP6RXGcIQSH0FLA
MeIZeQEPdAisW3XrTQSBM09wU3ASbZVU+VwdnAKTM9HeziP58EFkEUwIMqfp
4Wh0mWb+gGVCDSeMtDWOnKTSMKCHHjzlsmpZDAfWS2jYkm4R/YGgqwB+rzlQ
+VdAKtu2cyQOoqg0iGJd73Ea5j6UkDmwC1DvDs6sA91GjJVbceg3vACiYlFa
wypIPKBjar2z37Nwx92SgYeWGlAQCkcitAa5FmiRLdoCTFkyGdjMhFVwRLmG
jV3oHKmCWQ4stwbkbSui06o2Z8Lu5dGeRQIIeFXtYAf1VKzWkNAB0VPyG5Aa
zfsWYJySAdPiefTFywrFv9XfbHKmGR0wywLLWmjR2ifR/K5WLdlK5ly4CE1W
MbqM0LT3fCw/NWIkw9ar1NrtrIAAwUnSgVyCV8xBBBTjnbbRCJyw/TNvB/uh
arfJnDmDkwdUlhoQkjlRvAH5k2AMwFHUrNDnMH5KhplXaWIXNVfKERbZsJUa
rGM5xUDX4tmJk4vRGdRVDx+y1nmsDt48frBQbz2BzR8+BMWEJlvITkJkgFyU
/yAbwymn9KEyaOfyQ3BcuHFUI9UObBCAmfDHLEbejahjsfSBJcCP2Ye21zwE
9BAAPQRArWFFIZeczENRSNZMMVfsAK3mC15Qr9e1WWviiTeP3SiyUpw16ueb
OpfMGlHe3oUp5uoFbtxGglEENF1RgPViofNc9+bQImi5d3tgTPHyWhjTaVmL
Jwpd4VE2cMymiZBzBMg5AuSc8iIBbnixOyEEVA9wCpGOGw5EooBi4QQslHY/
RzJmikQPWMhIgpQhDHfdFxhGQpxMl84ysiRroULmLbYaJQVxFYr4SFICS+Qd
cSDoDxL6ORj4pNAqNvS+/W6hvgSpol6iLoYJgc+XFVgXKD/9R15XEpBT+if0
DTCWXuDhL4E0dlnabtwnDSDBvYke2RKmUTh1xZZ8XADpKwTp5djSYKugDENe
an7WekOgRyH4GiFwJjCPmd7mOWYORKiDyYYEptaEgEkBw3uSUXD0DRxHi+fG
H4GRZpAmpxQZgVNrwnVxkdMWJNzaLNTxBZw4qegGljBo4OkULTF8egeeg+E3
+NBzk+v9Qp2Bn6NafW5IwYNURT1NhiwgGwxWoDBPXKjrDPL8qkJ1bXW/bjUH
XiqiJ+srJznGN1c+VoGWhsht9HWA4Xv2hQ1lbLI1qPFZxG9TFpBM02h/xlZh
wzZtEwOc6BLtNtJoqANJW20rdLNYGMP3xz6eAr5IBYZRUeBXr1GXrjTGYY6/
ff3ACzyt3r44PUNjHT5nFSjrNFVONtnMGr9z9dxZ1N7Zx9FkabGfh8rAbZL0
RYJmHMl9IGw4hwhPeomahh/FeGLXzHuCDOdfm5LslT2vJAxit7BEJYQ2HKCt
AvgPlltwE0CarfJKC4VnJbtrNYf54vEwDp4AZL8FEVmTOBE9AD5YTvZxADP6
5p87V1yxa/6589atQWjDaWNBFjHwMjTu1mT+otXT86ZKmdZGBJBEjt+cfaO+
/yoMXwF6yGZBkugFRC3wQIxWrwXQeAR6zmXKR2aaq5dV7UI+7kQBpDsZ/jns
LgjVJnnVpYETKwZJoBIOF+prr1lOnOJWz5y2BqY4jPScpKasJnK+3JiCd0p9
Gke4ptb2nTrdT5xlQWQTwHEjqkErEuy6aNa7SGVgrKjvWSWa0IiYjpkMmYUv
ZOmHTnejEd2VRMAPOfjnMUUaGBmDQsWkatlqdO4DW/4hOUrofsQyw3h2UjEr
tOK3W8jn6tuqlRCd36WoYgNa99f/MJup3779Cg6yBTl+9s3zb6bOTQr2Eu4g
3p2azX7D5o/TUguwcI7DDbO1YU0cCsO5s3Vs66WnpwAwCaaohPmgQRfGGsgu
60njlkv7B65dPo6retaTlSXGdJs1hWCvXa2tQBlG0o4iOeLmaVax5NN4fCPd
32Z9lDxlOmurmcF5R6gULU+MJIJCJxtN4spB+DrGhmM2lD3mvUaPbRouQqLJ
84iWMHtXkAuU5zwAqJfCnlptdbsBxL4AwQozlkSPTu0GxCsqr9umkhGJiF8c
VJC7PoIBs44jHOwaE4ciPw/itqGmmPYEvGiSu8jXqYXchXCtMKLkCnA6udHw
dLTfvkOzywB1psCIvfPoRs/zgCPVOwM2DSKK3aD9A8QTK+nWcA6sRzkSfXbJ
Eo2mjkgkCfngHBSURsvDgK6X2JvIxuC0vPrsLxPE+vVFxb45p+RYRFqTcGDb
eKoY0UlHzvEaqCaMEGwCL4nzRCLYWQqPcA6CdIMX5sT/2DmEllgcZeEYpxyl
D4COTUI5vmSj0cU3SDlRZOh6PWHtgiELh3mojemFM9oIVS6S1bjkgmQIyA3L
KbIrcsIHCdE46wpTo21FOIwta5tDq/IsZeMP5i5ip3aKipL0Np0QniHoKDhk
jGON4TPMSsLkq2zdMU8ipK9LwG3D4fcZCK6driltTHGBgIDDYAB8tKlg+coG
44mPU4OWPT0T2siOZMBQQOp4HNDoSwTFqMMwhCRMaRGO1TkJ6GpSE7ilTbYF
NIFBiPkjZ0f5/PU1oSVY8c9//jMVXtz059FM/jy6ejjQArPRQl3C25Mj+Ovy
1rP/6YbZe3/GJ3ZgMqyDv6/dxOWV76Jvhis/svM/6j8QfePf0Axw/ic2hgzj
Tw5nh8FKl/5f+OaIP+A3T/DFwXw+fzAKyZ+uhORPIST07i9CxBATjyymHw2e
GMFR/+nB3iMs9F9ETz+yy4Yv8NWj8a8e8d6B+ewRXKqTx3AAl4peHNkXT+yL
p/bFJ/bFr0ZPQejs0Z1eMB/+tFD3QRjZ2lNhWaqg+qd7b5DvkX+jilewSHjY
vQ8TSryGX750FRUoc09FN2oujDmzdpCvt7GjSemdm70vMrDChx7QdbLJWpDm
IKrm6rXPWG6qXeTIhaGNIHhESiksrWHd4jwbFpOSRARjiDMNOc7HSa2KLLqi
qNC8c545luSGiRmxGq3cJu8c7QOK4FPMgQDjggNckRE5VXtwbkmZhStKesno
Zt/PhLCTbWt1rOjdGCxyQCsKgTPpGme7yOqKa+rYIAmOS73MTJ7a9LiEjBxq
qXbE2jQc90BFQZVQAJ3DXXSkVmdTKpvDUeA7uJS2Mk0Lm8qaDZu21vHAqmaX
ImUdExrEZiz64JAep8K9PYxmO1qKAXxIZ+Y9At6w8a/9zlzwY0VYCRMW9Imt
XZMkmjX7OPvXOmUp4XTCg4/HBVaMi7xIDQKtxmiVMF4IcEyieIaRc0yUwLbH
c/aPCWCBhehzKhkVjKa1yRwsvwrMb2fQBmU0ZOh8Tq4zyoYZRkyadwzIQiTO
TGjnHZWjR2LoTFaMOFfQEQ18wX4Z6CCxAt8l2w4Leihu+w4stfN3HKKWY3+3
3E2jGexz5Kz5YfLWfov2UvycXYK+cY/xO3qYXs/dXl8S5obbdDlBSVbFZ3/F
ZjNMblJEsb/Au6ZN1UG1ZSftwXA9e0iO+1jOiRxwNh85CtHDTLkDb1TApm9H
wQVojTHv/vGTpxhoTrLMg/zGlOt2MwIjUpAggj0WXrwAAUbsBdRdgaOAgmgc
QUdTBct9OlWHv5qqJ/DuV0/9st8B+wwXpawhLQqcfT36i82PU7XGvzCRMFXn
/E9B/0TPrPmb5bbBUfh3QX+v6e+W/6aQL9AQKCsP5ClHJa89Sh4CMIcWuj3X
aokFMn0cXrWlsioyWGWKYjWzseUoIBZvLHDiAphRtMDzSXMDBfIo8AOcM//z
oC70e8BoRgkxjTmnrg6pCzTicHn20IJTBi2NuhZd0KvWefN4CgYv/H/kp/89
gjecnqG+HR0dAmE+mcOcI/YTiHlrPA31LNpLX5v3LUY/XCF0qGAAhUVVW60r
SXgRt2coWA9QvD54+HDBsRB+SjSR9YFtRAb8vPMMxIM13xza2HtknuxHtW0O
KyjyFpEZpOkwDibq2KUWySgDD69hpZrrpa0tYyvG58VTg4FDCh2At1m22Yqz
+9uuxrhhX5QSDlhKqgPmlz4CYJdUgyQmC6bfrpXPftM7sJg2YhgyCWSN17hi
IZDGNmtMcGqXEJpxQogzr4SSDeb+K28X8MrxBpwYl52g3J8qx3VuX/aT0Q3e
RRfcSv7j0eGiZb73JdjZUJBnjTOvJbbkd2uB4hjkeSm1aWRXWniHdU9jwM4d
q5EtFiPOJQmceqL1WEOFNMEVN1L24leQWCdX77QYIqTCYDhwqSjnDe1t1M/R
kYRbEOWurALVoDrI6d/rafL2qlEdUJb7AR2K52xyc3S5lxy4Duxw5M8QiT0z
1CHTIi7WsIg90LIiY2lH7rDB56jqVlOlb22rBTDsGlR0E7dt4FVOCbG4QjRi
K4KTHbMaYTMYywSyBF+ozfeCVFTy6gCVeh+hoVciaONaXvKqEMAxBg/qMwhS
tiphML0jB3GFBXJwUpYNuoYq9eDppkoybcuZwjgch4Epb4+FvCDSHQAcqpZE
jBgDLErH+DzyK2xhXZ2tMxfrvMFGsKcMoHNlKp8x5RFrViYykXdFSiwi67ge
EWs9845qPTn6Tb1fqCg/EtviIyzRL7JcU39QnG64p/0w52AFhC8kIN8ElKDJ
jsj3LhQ6WuQlmmKu4mSO0znEzXY9Xkm/z4qusJ4/F4rWQY9RVl5UORIezz13
u/UG1EfMDOEn1ma7xd5cYsmWw2KKy81kswd+3TRj+nei4CMRYzcuKaKMiQF3
ZGdywd4BKon5gMHSCvVx1TofEM0QBpLEm6OAwJC0aIk/vC1mdjosx+PsnK0Q
RbVq92TjL3FaGutxXHUYbe8xfgymXfDmMfG1Hw1OeYJu89pE+SProLra4cpX
+pIs8fsPrOVbn4rfrjsYIOzrUzHOo3c+/O1y3A0nuRHar5kL3a5oO1m54dzN
drNvyHBHeztDo5lSuK4inUSoq9kELaNdHYmlWKphPBawgn4blDrUyRUUTA+W
m1JtHVVW18bKLizXydYlBXwwmsPJhQtXhT7gDB9iCnJoLnsqqT4ueEaFhC7G
enMVSARRrpNzd+gksZ0rBFLbvb5acvuUiUcYG++Nr0vyMG51DfNh0woaHitO
NFLos8RKMDMLjHAb6XIdMD49ZVai8fdkCEiey6Reg1HrBKokSdM3tpCDK9FC
q83uNxg90hvhmQJErIgCK2wjs4P0qoOIw65kYsEo8ftxkszKE3j1syeBo5RZ
NKYc1+bnzAKGoEwCr4hbrpkkqqg9IJ/0em+M3VZAPVe+BoKC3Ewm7ODcSNz5
THz4gOuGQmYNmp+Yl5Nqa0aMPesC4xhwgwNXmqHdmHxLEX3SFWRlW9OXhwEa
pBw9FBlZP9zZcysFT+RsqwNCaB9PvaIicB3BIOG8LDHqiDHbXwTR5jiwDXts
6NTYQ4stQ9wqFXIyTd+/72SaFbVfR6L2pRW12Kxjk7dk8PXcai8mpzdIb2o8
6XV4uAoNq1DIMIXHqFaZHTtO/kd16DgCC8mzlhIFuNVem4CrRANSaXDxsUSB
tPsRPiKETCbHo1l/mHCJasrVwYfKr+mVf4cB7ItYF8N/kUgUwbXbhFnlUMKx
vvd9yVJpBvvm/A4W73oWj83GZ1im1aCUiLq9r2lyRqW29MrY1X9g7N7vD/ut
rUGAFaPUpyST4QUgRVGN6/2w4A2j6mgesByzpdVC1sjmGwAaMGwFnbTaEBky
1rGG+YSF6ccimRfqlOR346pFmpYiFVhO3LShPUWT4RTf27ocBwrVAwMkXLFD
cRs5ZyeRpNkzIadjW2eVNGWitAEliFWhCPAVFUFslIlDoG3BjHPlorIXd1q9
On+y/bTIrNs+C4+4KtCoLCHO2w8/6mX0L22Y73B+qC7xm9/EOWsZdvXsNDzk
NftZODjKm8/n83Bq+tsKK/lMABHYjm5cvw/uVesH2y1/3nbHkSnjX5dALwt1
Yk9LiG345xuiLFdYxbmPKCgL5DYjMrMx2TFxxllsEHyR5J9Mvr7C5qZi9Lrt
yTwKEQBBWSuabWqShHFJLWbhMCJqa6BiwezrrYapZdS/Iqmd+PSt7iQ0WTBL
vMrLON0X+Cw4fKkuFuVNFaZiyyQwuDEeneV5xnWfsslQjvpx3tty7T+2Qsqq
JG44nl+NWW7TK/S2p0iW2F1h24B8bEeisrTHUj2mPsJPSBDUhk2JpXzr/UV2
xFq8cwgbCRqVZ+csf0+zdVFlqQMH6KqowJtAB0STjYRiTAAbgkUWD8hf0Lvv
qc8PC47VW4MvTTOQmBggLZ3xi0drrWmalf2RP864/C+yHxaYr7EJ2+HEYZom
bIsN0B5YESudYOUuBki4XJ1aYmBRazrA3rjbMOjqsGaq1YS2+Y5szEgAXylV
rxYD418PRY8TPOP22q2k392hiEUUJ5OuYP2+lPrOSoEROYUHHAuqcRZBUQWC
6hu2RE/YlUVOPiXJgnz4KmJ0W5VzynKDjFh3aQ+SvlzytiNH1NVTUAQSiYJ4
J5YdTmgAQuLckRn0Q5DI4vQRjOb4m0TbXe0jC0Xf5p6AwYdGoO3HlWap76ld
9qo4CFKmlctBd9+o/A0qa7z8nUrPhQyx4YxlEIbgwXyVQV9Y9yo+e5anljJe
2O1T5FFNAkKwe0hii1b0FTWabLTZD3INjut2wbhw2S8w5egKZW+8Ccnj0P+g
iMhcanw06Id1R14j7BsFh0k0tk1fuReCDO/Y2fuumPgYsLQTJDH3yLDkk3QJ
m2WNKXTZku8VHj62K3FnXQOqHnvzOFTpS7f7Bzfm3+DJ59jVh+nEUvrYqXoM
Fu6WdF0LOlhyUQNmytFJJFtz4GcElv6oo8HN3EXWZmuOOOPVOA5fFXWaxwVu
p6+++e7Nc58tAFDroAQsrG5uxNnNavs08EzkHuH+eziBIdLEl9O1MdUuJmRf
0RbkuXr3uPRr37h4S5JLUrbMDiruEG0CRj2nJgIFj9jFS4msKYK81TdFruaZ
yUTyDNi65tcTDJr3YGBk1BXF1zzw4TEkW9tNlFOhnE5alCAOLK8dHfYsTDvf
ayGHQKm1kg+K8I1xEYnppHRTFkk9umZgSE1+TfJbwQ8iTUxcAKSJHnC4uZPj
P6hkUyEMVFHHdB+23XtIUctI26IEk5zTVCFOMYpOxQMksIjIupL5HeV2HOAJ
rD8UnBy8jDoWRHG+jcvP2FK2ALqu61NDNf8RIv09bSkOh027Nnxbqz9e/w++
dMcNjdTjujN5HrQY+kvGnBnEj5kVkGMbqiVupJG80k8/4X2AHz6QItJcLe/6
d4OC+LDZn27pQSEaX68h+NdkmvmbI1wqJbqYirYbcNs0up8rbF7lqyxsRI+y
vRzNNu/lbg4qXEVsG/TK8X6Z9+qY74kIUuZRdb8/rsOgST5ufgySLcSjITaO
czRq+FoX0ABV+VEbddDSx4TLFjuAlmbQ08iOzE2mgusnG+2cJHkV908HgW6r
wzJMgp7DYsH+rPIhPpHgPzwjwQkXQFqFz2RNrOHm1P5PAiTq1xpc8ydlr61r
Y9Y5xnHbTUFWh8PzFV1qfP0lpeuclMRbkgKvg4qFPCnaNvO4m4phDjYkOS7K
9q26mogqdBRsBAmp5X6YXZEwlKecsyAJLom0gHp6vb6IyHthLea9qWQgG1vX
gYdYiroRf47CulJMg0RG76m9rEqSbktGDtVKuHD3gq3pL6mFnstpbYXXiYfV
F5siKPI9V5sBruS91HsOgHLDqfSR8+sIg3zOVWXqkwmn/O3qZRxRmFwKOFjL
dUlLX/KClzzxJc1zyZOww/LpUpyJI35xCP/Auyf0Tr6kNwOHgs+k1/Bn/QvB
HMcRgxMEookP0UZH+mQRyOvbE8ewT9OSiP/m74RQegD9Vcjl8Hpy+atSyzXk
EnQh34VoBofL3upQoJBov4NECVUB0Ysv1L4lnQwI5RcVKQjfL0AkFoaYRj79
G9LIFSRyJ0kSHJ7Qg4sRORI47oWNwg7YGemmaTizXZ8u6LOVLImusTit3I9U
PXCNeG3Vvw0mkC61yjtserX9y+QySVUFerN8QTJLR7LpBDZqdnVGAF2Op1OK
qEgpQ6/NCBxwIGGqZgu6vNuwB8dfXOpc2oTtwsFFppIiDRDEKRd3wyjzFTIO
jnm3yu7Z5pSwNNZVMP3viVkBp8c2R78E2/wtRWvINpSQmh3dlmeOAlbhlor+
ZV18TDpvqiB428QpZ45OXZ3YRAYwcpmIK/VwiXRL/f1FkQfDOpfQgPTZby6g
lhpiKY1yaxA20IcGetZpnCiX4LKvesEraWq6yUai9v37fjnA5JIp7iZLdiFc
IGgYcnS+jw3j+Tu0++1ypLBsioZuAxTKIaJp/CW+fP9dRX3c4MflYQt54FcB
X7+hiN+bo7C/zNZarMQrDByA1KyxlzpuoqYERJpS0zjXSvEtiOF9r5waFl9Z
yoCA8QlFUbO3BDYpO49p9IvM7PwtNEK53Kg4RyNWWFVhOP6FrY9+5lrv4cPw
AsRLzFeIqLo8llsTLyeXCxtVXzxazIZ/Hl399tK9AFhsKAJgeQU+kIr/wPfV
bvztpR3uZznkBEOadUVvlujD8O2lfelnORosax+LAAzfBrOg4AjSgraXZJwK
GyFDMtL5Iu3+/Xd0UbvUkNjbXmyZzDQgM1cohnpQbtrxfmxjKLASxtk4bMtx
QLXpSiDGlCoxKASIQYNGY84QCEku7Yq9eR/tCPohfUFgG17mkPlbXX1FPicX
ohgQ/d5KQxXfLUYR38R48HeAUdlxaQM2zbY6H97agEVPlQR1fMGcyFL6wQO8
SVDaR0h4DYQU3YReY32IA79rDAUR6WdLdNNOHe357AtfRUXHRoVMFEaJJg6v
pSIBMrxKMbzC1Ravuh5eDpNgtyfHDKMSoluiupfq47JbeyGH7RDAu9S3LuYw
uFiDAmv2pvCw1D3exFAVSeCXdkB9GmBrYEmL64aZuu7vaQ98w7Gfmqoop75a
estxXhaDq/FbX69JdcpPIqAjRDepwSrfcxzcR/uim2GnQ17FCn0grV1QtlkN
mTEBAxOTE5ad3PW6ZEGGd+T6pBq2WQCZc/DXHzDDQ5F7e2MwfAkHaOqetpQw
HlfcC79G7IqaY0NaHcVCwc01wD6du7LaLvtf//YfTe+S3Os5wi/ISS53Q6fU
MVzJGjY9MNwvk32hS702SPhDcl5XLMmupd27kOw21w4VfM/zsqtTU9ryXnc9
9pA87QV9Fl+8Y6pvvCtN+vsJm1b3CTGMcXNCkuuFpYumHcUkXXkbX8Lg7xac
UnJOGs80JmCa1hlpBnsxogOSnjkKixI18/3x4W899O7jDTIKbpaskXs3/cZv
IjDqCrmCvMB2tG5aSFxj+LYZHEcGkREohEB3mMG8nEVwv19luKqE5bFHkv2N
FHvTOG+RI81jpOLP9U5UApsEpufSXcYV9QUh7u0vNExvIq/gDnkq7kdVkLh8
Bta6sKvAW7ACA85mxbvW6QXfzd0kGxAcXCIV/UJO7X8jZkRxShZscMU3G9qe
2QORbmMBY2YTVQ1j+N1dgkHRdv41lansEvP0biuhXgVKj+pyOSsXNSYFl2ra
pliWwt2AgLk83zZSekpHl7i21w7e97ees0j6qgPKo7LMUn1HPlR4V8vk18f0
w1iNvWVxI/W45O1T5q6oLtgAuubXMKZxdymm9jY6q5uPwKpYr22ttfudl/DS
8oyuBsFbsJltRNX4LGJsYVFZVz9t/Bvc96kBvwIp7VlYNYLRxS+fE16Ovz4e
/45+yGmpk3O6ysZn1cIrpsdzoa7Y/v9X+jO6CsYHBfK8ozoPtI5ssjG4i2TR
u+4EC9zRqQTi3G5wPuAINAYAZPq5GtoY4Whif+VGahhMyveci0iXX7oRgICM
KvWaCEusPXB2K745uJhzOxSB1E/uDa5AdjcnDNqMpr4VjquKXVHI3LVZ9RwL
OnRyLARh1PTlEuwuNjhXz+W3mHoTuEpK9J/s6iBO3W84Wb+jwQrL/j2/4Y3U
Uq8RRlaoMIFbjWitq+NC0V3RFGOKfkzLNwsFYUtLKk/nh9EB0JU0JXZKkAuw
s9UlWJBFd2Sy3e7y1igddliCg9EOEtgHVMLAY3XQOv6Ai+KXeO4J/kYklY9Z
s6bt3Z8TXSokEGIU2qY2o9vjI+KnXwnCalfXRCzXAQjQVHJk+cKMBXqxygKm
76VFpfEpjs3ShQl84YBNU8LS74L+ZdeETOFsV5KCZ/DVqx+jkK79MRw5PDI4
+mFU1jW96wwkEcztUJHPIODKZUX80wBMA3T9Od3zdHPc9yzKw0Yb7EWBH98Q
BZ72ruaW4XRJDOKjFyVeASdKh30cKD6aHw1CxcTj1wWJaRkbKla3jRU/DYLE
6snRVSFjm7AFyumFjY+dzmGbbEjBrpa9l6K9mshba7EANWMnUnhnGt81cHD2
/ku6kmBI/Y3Um43Qfe9Kz+CnQ5TMx6Un9DOtup8EJrZ4JwMlXUFkn2FN61or
nuHg5MsHcyGRsdTF8Ab3QfLiqb2SqHc3T9ypd+95X0vccw5zIBZxJCaZ7t05
FTK6/X/h7f+SnHHy5W0Z4/Dx4x5jDHAw8e2ydgzu/mdyjp/s5zGRHRmnLe0l
/LdgpnF+sVlL/oGImJP4I766UK7aafYlxs3DRKX70RP3u2EpRxPcNfL2ih0w
pEHFTaNfZQw9hjlY8XxtA99iTT5bY7OMfO9umItJ7R2z9qpW7iarswvkdGyW
4z6yfnRTfuzCkXhq0C3AwKF9qB0oCP8sftS46zQ8h3PaHk7i3p2V1TgjN04m
gElVV9KlYksCmMPRbfQ8HpRb3ZozPdi/KCN6iA+65sGt1dWTw3m/yiiw9cY4
Ur//e2JIwubN7DhgNmDD/wFqpPM6hH8AAA==

-->

</rfc>
