<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.1 (Ruby 2.7.0) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-rcr-opsawg-operational-compute-metrics-08" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.18.2 -->
  <front>
    <title abbrev="Network and Compute Exposure">Joint Exposure of Network and Compute Information for Infrastructure-Aware Service Deployment</title>
    <seriesInfo name="Internet-Draft" value="draft-rcr-opsawg-operational-compute-metrics-08"/>
    <author fullname="S. Randriamasy">
      <organization>Nokia Bell Labs</organization>
      <address>
        <email>sabine.randriamasy@nokia-bell-labs.com</email>
      </address>
    </author>
    <author fullname="L. M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author fullname="Roland Schott">
      <organization>Deutsche Telekom</organization>
      <address>
        <email>Roland.Schott@telekom.de</email>
      </address>
    </author>
    <date year="2024" month="October" day="21"/>
    <keyword>compute metrics</keyword>
    <keyword>edge computing</keyword>
    <keyword>service placement</keyword>
    <abstract>

<t>Service providers are starting to deploy computing capabilities
across the network for hosting applications such as distributed AI workloads,
AR/VR, vehicle networks, and IoT, among others. In this
network-compute environment, knowing information about
the availability and state of the underlying communication and compute resources is
necessary to determine both the proper deployment location of
the applications and the most suitable servers on which to run them.
Moreover, this information is consumed by numerous use cases, each with its own
interpretation of the metrics. This document proposes an initial approach towards a
common exposure scheme for metrics reflecting compute and communication capabilities.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://giralt.github.io/draft-rcr-opsawg-operational-compute-metrics/draft-rcr-opsawg-operational-compute-metrics.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-rcr-opsawg-operational-compute-metrics/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/giralt/draft-rcr-opsawg-operational-compute-metrics"/>.</t>
    </note>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Operators are starting to deploy distributed computing environments
in different parts of the network that must support a variety of
applications with different performance needs such as latency, bandwidth,
compute power, storage, energy, etc.
This translates into the emergence of distributed compute resources
(both in the cloud and at the edge) with a variety of sizes
(e.g., large, medium, small) characterized by
distinct dimensions of CPUs, memory, and storage capabilities, as well
as bandwidth capacity for forwarding the traffic generated in and out
of the corresponding compute resource.</t>
      <t>The proliferation of the edge computing paradigm further increases
the potential footprint and heterogeneity of the environments where a
function or application can be deployed, resulting in different
unit costs for CPU, memory, and storage. This increases the
complexity of deciding where a given function or
application is best deployed or executed.
On the one hand, this decision should be jointly
influenced by the available resources in a given computing
environment and, on the other, by the capabilities of the network
path connecting the traffic source with the destination.</t>
      <t>Network and compute-aware application placement and service selection has become
of utmost importance in the last decade. The availability of such information
is often taken for granted by the many service providers and standards bodies
specifying these mechanisms.
However, distributed computational resources often run different
implementations with different understandings and representations of
compute capabilities, which poses a challenge to the application placement
and service selection problems. While standardization
efforts on the representation and exposure of network capabilities are well advanced,
similar efforts on compute capabilities are in their infancy.</t>
      <t>This document proposes an initial approach towards a common understanding
and exposure scheme for metrics reflecting compute capabilities.
It aims at leveraging existing work in the IETF on compute metrics definitions to build synergies.
It also aims at reaching out to working or research groups in the IETF that would consume such information and have particular requirements.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>


</section>
    <section anchor="problem-space-and-needs">
      <name>Problem Space and Needs</name>
      <t>With the emergence of a new generation of applications with stringent
performance requirements (e.g., distributed AI training and inference,
driverless vehicles, and virtual/augmented reality), advanced solutions
that can model and manage compute and communication
resources have become essential to optimizing the performance
of these applications. Today's networks connect compute resources
deployed across a continuum, ranging from data centers (cloud
computing) to the edge (edge computing). While the same architectural
principles apply across this continuum, in this draft we focus on
the deployment of services at the edge, involving the cooperation
of different actors---namely, network operators, service providers
and applications---in a heterogeneous environment.</t>
      <t>In what follows, we use the lifecycle of a service to understand
the problem space and guide the analysis of the capabilities that are
lacking in today's protocol interfaces needed to enable these new services.</t>
      <figure anchor="lifecycle">
        <name>Service lifecycle.</name>
        <artwork><![CDATA[
                     +--------------+      +-------------+
         New         |              |      |             |
       Service +----->  (1) Service +------> (2) Service |
                     |  Deployment  |      |  Selection  |
                     |              |      |             |
                     +-----^--------+      +-------^-----+
                           |                       |
                           |                       |
                           |                       |
                           |    +-------------+    |
                           |    |             |    |
                           +----> (3) Service <----+
                                |  Assurance  |
                                |             |
                                +-------------+
]]></artwork>
      </figure>
      <t>At the edge, compute nodes are deployed near
communication nodes (e.g., co-located
in a 5G base station) to provide computing services that are
close to users with the goal of (1) reducing latency, (2) increasing
communication bandwidth, (3) increasing reliability, (4) enabling privacy
and security, (5) enabling personalization, and (6) reducing cloud costs and
energy consumption. Services are deployed on the communication and compute
infrastructure through a phased lifecycle that generally involves a
service <em>deployment stage</em>, a <em>service selection</em> stage, and a <em>service assurance</em> stage,
 as shown in <xref target="lifecycle"/>.</t>

<t><strong>(1) Service deployment.</strong> This stage is carried out by the service provider
and involves the deployment of a new service (e.g., a distributed AI
training/inference, an XR/AR service, etc.) on the compute and communication
infrastructure.  The service provider needs to properly size the amount of
compute and communication resources assigned to this new service to meet
the expected user demand.  The decision on where the service is deployed
and how many resources are requested from the infrastructure depends on
the levels of Quality of Experience (QoE) that the provider wants to guarantee
to the users of the service. To make a proper deployment decision, the
provider must have visibility on the resources available within the infrastructure,
including compute (e.g., CPU, GPU, memory and storage capacity) and
communication (e.g., link bandwidth and latency) resources.  For instance,
to run a Large Language Model (LLM) with 175 billion parameters, a total
aggregated memory of 350 GB and 5 GPUs may be needed [llm_comp_req].
The service provider needs an interface to query the infrastructure,
extract the available compute and communication resources, and decide
which subset of resources are needed to run the service.</t>
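<t>The sizing step described above can be sketched as follows. This is a
minimal illustration with hypothetical node and field names; the actual
interface to query the infrastructure for such an inventory is precisely
what is not yet standardized.</t>
<sourcecode type="python"><![CDATA[
# Deployment-time sizing check (illustrative data model, not a proposed schema).

def select_nodes(inventory, mem_needed_gb, gpus_needed):
    """Greedily pick nodes until the aggregate memory and GPU needs are met.

    Returns the chosen node names, or None if the inventory is insufficient.
    """
    chosen, mem, gpus = [], 0, 0
    # Prefer nodes with more free GPUs, then more free memory.
    for node in sorted(inventory,
                       key=lambda n: (-n["free_gpus"], -n["free_mem_gb"])):
        if mem >= mem_needed_gb and gpus >= gpus_needed:
            break
        chosen.append(node["name"])
        mem += node["free_mem_gb"]
        gpus += node["free_gpus"]
    if mem >= mem_needed_gb and gpus >= gpus_needed:
        return chosen
    return None

# Example: sizing an LLM needing ~350 GB of aggregate memory and 5 GPUs.
inventory = [
    {"name": "edge-1", "free_mem_gb": 128, "free_gpus": 2},
    {"name": "edge-2", "free_mem_gb": 192, "free_gpus": 2},
    {"name": "dc-1",   "free_mem_gb": 256, "free_gpus": 4},
]
print(select_nodes(inventory, 350, 5))
]]></sourcecode>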
      <t><strong>(2) Service selection.</strong> This stage is initiated by the user, through a
client application that connects to the deployed service.  There are two
main actions that must be performed in the service selection stage:
(2.a) <em>compute node selection</em> and (2.b) <em>path selection</em>.
In the compute node selection step, as the service is generally replicated
in N locations (e.g., by leveraging a microservice architecture), the
application must decide which of the service replicas it connects to.
This decision depends on the compute properties (e.g., CPU/GPU
availability) of the compute nodes running the service replicas.
On the other hand, in the path selection decision, the application must
decide which path it chooses to connect to the service.  This decision
depends on the communication properties (e.g., bandwidth and latency) of
the available paths.  Similar to the service deployment case, the application
needs an interface to query the infrastructure and extract the available
compute and communication resources, with the goal to make informed node
and path selection decisions. Note that in some scenarios, the network
or service provider can make node and path selection decisions in lieu
of the application. It is also important to note that, ideally, the node
and path selection decisions should be jointly optimized, since in general
the best end-to-end performance is achieved by jointly taking into account
both factors.  In some cases, however, such decisions may be owned by
different players.  For instance, in some network environments, the path
selection may be decided by the network operator, whereas the compute node
selection may be decided by the application or the service provider.
Even in these cases, it is crucial to have a proper interface (for
both the operators and the application) to query the available compute
and communication resources from the system.</t>
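<t>A minimal sketch of the joint decision (all metric values are
illustrative): the best replica cannot be chosen from compute properties
alone, since a faster compute node may only be reachable over a slower
path.</t>
<sourcecode type="python"><![CDATA[
# Joint compute-node and path selection (illustrative metrics).

def select_replica_and_path(replicas, paths, min_bw_mbps):
    """Return the (replica, path, latency) tuple minimizing end-to-end
    latency among paths that satisfy the bandwidth requirement."""
    compute_ms = {r["name"]: r["compute_ms"] for r in replicas}
    best = None
    for p in paths:
        if p["bw_mbps"] < min_bw_mbps:
            continue  # this path cannot satisfy the bandwidth requirement
        total = compute_ms[p["replica"]] + p["net_ms"]
        if best is None or total < best[2]:
            best = (p["replica"], p["path"], total)
    return best

replicas = [{"name": "edge-A",  "compute_ms": 20},
            {"name": "cloud-B", "compute_ms": 5}]
paths = [{"path": "p1", "replica": "edge-A",  "net_ms": 5,  "bw_mbps": 100},
         {"path": "p2", "replica": "cloud-B", "net_ms": 40, "bw_mbps": 500}]

# The faster compute node (cloud-B) loses once path latency is accounted for.
print(select_replica_and_path(replicas, paths, 50))
]]></sourcecode>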
      <t><strong>(3) Service assurance.</strong> Due to the stringent Quality of Experience (QoE)
requirements of edge applications, service assurance (SA) is also essential.
SA continuously monitors service performance to ensure that the distributed
computing and communication system meets the applicable Service Level Objectives (SLOs).
If the SLOs are not met, corrective actions can be taken by the
service provider, the application, or the network provider.
The evaluation of SLO compliance needs to consider both computing metrics (e.g.,
compute latency, memory requirements) and communication metrics (e.g.,
bandwidth, latency). Corrective actions can include both new service placement and
new service selection tasks. For instance, upon detecting that a certain compute
node is overloaded, increasing the compute delay above the corresponding
SLO threshold, the application can reinvoke service node selection (2.a)
to migrate its workload to another less utilized compute node.
Similarly, upon detecting that a certain communication link
is congested, increasing the communication delay above
the corresponding SLO threshold, the application can reinvoke service path
selection (2.b) to move the data flow to another less congested link.
If SA detects that there are not enough compute or communication resources
to guarantee the SLOs, it can also invoke service placement (1) to allocate
additional compute and communication resources.</t>
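<t>The assurance loop described above can be sketched as a simple rule set
(hypothetical metric names and thresholds):</t>
<sourcecode type="python"><![CDATA[
# SLO compliance check combining compute and communication metrics.

def check_slo(sample, slo):
    """Return the corrective actions suggested by one metric sample."""
    actions = []
    if sample["compute_ms"] > slo["compute_ms"]:
        actions.append("reselect-node")   # step (2.a): migrate the workload
    if sample["net_ms"] > slo["net_ms"]:
        actions.append("reselect-path")   # step (2.b): move the data flow
    if sample["free_gpus"] < slo["min_free_gpus"]:
        actions.append("redeploy")        # step (1): allocate more resources
    return actions

slo = {"compute_ms": 10, "net_ms": 20, "min_free_gpus": 1}
print(check_slo({"compute_ms": 15, "net_ms": 12, "free_gpus": 2}, slo))
]]></sourcecode>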
      <t><xref target="prob_space"/> summarizes the problem space, the information that needs to be exposed,
and the stakeholders that need this information.</t>
      <table anchor="prob_space">
        <name>Problem space, needs, and stakeholders.</name>
        <thead>
          <tr>
            <th align="right">Action to take</th>
            <th align="center">Information needed</th>
            <th align="left">Who needs it</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="right">(1) Service placement</td>
            <td align="center">Compute and communication</td>
            <td align="left">Service provider</td>
          </tr>
          <tr>
            <td align="right">(2.a) Service selection: compute node selection</td>
            <td align="center">Compute and communication</td>
            <td align="left">Network provider, service provider or application</td>
          </tr>
          <tr>
            <td align="right">(2.b) Service selection: path selection</td>
            <td align="center">Communication</td>
            <td align="left">Network provider or application</td>
          </tr>
          <tr>
            <td align="right">(3) Service assurance</td>
            <td align="center">Compute and communication</td>
            <td align="left">Network provider, service provider or application</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section anchor="use-cases">
      <name>Use Cases</name>
      <section anchor="distributed-ai-workloads">
        <name>Distributed AI Workloads</name>
        <t>Generative AI is a technological feat that opens up many applications such as holding
conversations, generating art, developing a research paper, or writing software, among
many others. Yet this innovation comes with a high cost in terms of processing and power
consumption. While data centers are already running at capacity, it is projected
that transitioning current search engine queries to leverage generative AI will
increase costs by 10 times compared to traditional search methods <xref target="DC-AI-COST"/>. As (1)
computing nodes (CPUs, GPUs, and NPUs) are deployed to build the edge cloud leveraging
technologies like 5G, and (2) billions of mobile user devices globally provide a large
untapped computational platform, shifting part of the processing from the cloud to the
edge becomes a viable and necessary step towards enabling the AI transition.
There are at least four drivers supporting this trend:</t>
        <ul spacing="normal">
          <li>
            <t>Computational and energy savings: Due to savings from not needing
large-scale cooling systems and the high performance-per-watt
efficiency of the edge devices, some workloads can run at the edge
at a lower computational and energy cost <xref target="EDGE-ENERGY"/>, especially when
considering not only processing but also data transport.</t>
          </li>
          <li>
            <t>Latency: For applications such as driverless vehicles which require real-time
inference at very low latency, running at the edge is necessary.</t>
          </li>
          <li>
            <t>Reliability and performance: Peaks in cloud demand for generative AI queries can
create large queues and latency, and in some cases even lead to denial of service.
Further, limited or no connectivity generally requires running the workloads at the edge.</t>
          </li>
          <li>
            <t>Privacy, security, and personalization: A "private mode" allows users to strictly
utilize on-device (or near-the-device) AI to enter sensitive prompts to chatbots,
such as health questions or confidential ideas.</t>
          </li>
        </ul>
        <t>These drivers lead to a distributed computational model that is hybrid: Some AI workloads
will fully run in the cloud, some will fully run in the edge, and some will run both in the
edge and in the cloud. Being able to efficiently run these workloads in this hybrid,
distributed, cloud-edge environment is necessary given the aforementioned massive energy
and computational costs. To make optimized service and workload placement decisions, information
about both the compute and communication resources available in the network is necessary too.</t>
        <t>Consider as an example a large language model (LLM) used to generate text and hold intelligent
conversations. LLMs produce a single token per inference,
where a token is a set of characters forming words or fractions of words.
Pipelining and parallelization techniques are used to optimize inference, but
this means that a model like GPT-3 could potentially go through all 175 billion parameters
that are part of it to generate a single word. To efficiently run these computational-intensive
workloads, it is necessary to know the availability of compute resources in the distributed
system. Suppose that a user is driving a car while conversing with an AI model. The model
can run inference on a variety of compute nodes, ordered from lower to higher compute power
as follows: (1) the user's phone, (2) the computer in the car, (3) the 5G edge cloud,
and (4) the datacenter cloud. Correspondingly, the system can deploy four different models
with different levels of accuracy and compute requirements. The simplest model with the
least parameters can run in the phone, requiring less compute power but yielding lower
accuracy. Three other models ordered in increasing value of accuracy and computational
complexity can run in the car, the edge, and the cloud. The application can identify the
right trade-off between accuracy and computational cost, combined with metrics of
communication bandwidth and latency, to make the right decision on which of the four
models to use for every inference request. Note that this is similar to the
resolution/bandwidth trade-off commonly found in the image encoding problem, where an
image can be encoded and transmitted at different levels of resolution depending on the
available bandwidth in the communication channel. In the case of AI inference, however,
not only bandwidth is a scarce resource, but also compute.</t>
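<t>The model-tier decision described above can be sketched as follows, with
illustrative accuracy and latency figures: pick the most accurate model
whose compute plus network latency fits the per-request budget.</t>
<sourcecode type="python"><![CDATA[
# Accuracy/latency trade-off across the four compute tiers
# (all numbers are illustrative).

TIERS = [  # ordered from lowest to highest accuracy
    {"site": "phone", "accuracy": 0.70, "compute_ms": 30, "net_ms": 0},
    {"site": "car",   "accuracy": 0.80, "compute_ms": 40, "net_ms": 5},
    {"site": "edge",  "accuracy": 0.90, "compute_ms": 60, "net_ms": 15},
    {"site": "cloud", "accuracy": 0.97, "compute_ms": 80, "net_ms": 60},
]

def pick_model(latency_budget_ms):
    """Most accurate tier whose end-to-end latency fits the budget."""
    feasible = [t for t in TIERS
                if t["compute_ms"] + t["net_ms"] <= latency_budget_ms]
    return max(feasible, key=lambda t: t["accuracy"])["site"] if feasible else None

print(pick_model(100))  # the edge tier fits (75 ms); the cloud tier (140 ms) does not
]]></sourcecode>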
      </section>
      <section anchor="open-abstraction-for-edge-computing">
        <name>Open Abstraction for Edge Computing</name>
        <t>Modern applications such as AR/VR,
V2X, or IoT, require bringing compute
closer to the edge in order to meet
strict bandwidth, latency, and jitter requirements.  While this
deployment process resembles the path taken
by the main cloud providers
(notably, AWS, Facebook, Google and Microsoft) to deploy
their large-scale datacenters, the edge presents a
key difference: datacenter clouds (both in terms of their infrastructure
and the applications run by them) are owned and managed by a
single organization,
whereas edge clouds involve a complex ecosystem of operators,
vendors, and application providers, all striving to provide
a quality end-to-end solution to the user. This implies that,
while the traditional cloud has been implemented for the most part
by using vertically optimized and closed architectures, the edge will
necessarily need to rely on a complete ecosystem of carefully
designed open standards to enable horizontal interoperability
across all the involved parties.</t>
        <t>As an example, consider a user of an XR
application who arrives at his/her home by car. The application
runs by leveraging compute capabilities from both the
car and the public 5G edge cloud. As the user parks the
car, 5G coverage may diminish (due to building interference)
making the home local Wi-Fi connectivity a better choice.
Further, instead of relying on computational resources from
the car and the 5G edge cloud, latency can be reduced by leveraging
computing devices (PCs, laptops, tablets) available from the home
edge cloud.
The application's decision to switch from one
domain to another, however,
demands knowledge about the compute
and communication resources available both in the 5G and the Wi-Fi
domains, therefore requiring interoperability across multiple
industry standards (for instance, IETF and 3GPP on the public side,
and IETF and LF Edge <xref target="LF-EDGE"/> on the private home side).</t>
      </section>
      <section anchor="optimized-placement-of-microservice-components">
        <name>Optimized Placement of Microservice Components</name>
        <t>Current applications are transitioning from a monolithic service architecture
towards the composition of microservice components, following cloud-native
trends. The set of microservices can have
associated Service Level Objectives (SLOs) that impose
constraints not only in terms of the required computational resources
dependent on the compute facilities available, but also in terms of performance
indicators such as latency, bandwidth, etc, which impose restrictions in the
networking capabilities connecting the computing facilities. Even more complex
constraints, such as affinity among certain microservices components could
require complex calculations for selecting the most appropriate compute nodes
taken into consideration both network and compute information.</t>
      </section>
    </section>
    <section anchor="production-and-consumption-scenarios-of-compute-related-information">
      <name>Production and Consumption Scenarios of Compute-related Information</name>
      <t>From the standpoint of the network operator and the service provider,
understanding the scenarios of production and consumption of compute and
communication-related information is essential. By leveraging this combination,
it becomes possible to optimize resource and workload placement, leading to
significant operational cost reductions for operators and service providers,
as well as enhanced service levels for end users.</t>
      <section anchor="producers-of-compute-related-information">
        <name>Producers of Compute-Related Information</name>
        <t>The information relative to compute resources (e.g., processing capabilities, memory,
storage capacity, etc.) can be structured in two ways: on the one hand, the
information corresponding to the raw compute resources; on the other hand,
the information on resources allocated to or utilized by a specific
application or service function.</t>
        <t>The former is typically provided by the management systems enabling the
virtualization of the physical resources for a later assignment to the
processes running on top. Cloud Managers or Virtual Infrastructure Managers
are usually the entities that manage these resources. These management
systems offer APIs to access the available resources in the computing
facility. Thus, it can be expected that these APIs can also be used for
consuming such information. Once the raw resources are retrieved from
the various compute facilities, it is possible to generate topological
network views including such resources, as proposed in <xref target="I-D.llc-teas-dc-aware-topo-model"/>.</t>
        <t>Regarding the resources allocated or utilized by a specific application
or service function, two situations apply: (1) The total allocation of resources, and
(2) the allocation per service or application. In the first case, the
information can be supplied by the virtualization management systems described
before. For the specific per-service allocation, it can be expected
that the specific management systems of the service or application are
capable of providing the resources being used at run time,
typically as part of the allocated ones. In this last scenario,
it is also reasonable to expect the availability of APIs offering this
information, even though they can be specific to the service or application.</t>
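<t>The two views above can be combined to derive availability, as in this
minimal sketch (illustrative schema; actual APIs are specific to each
management system):</t>
<sourcecode type="python"><![CDATA[
# Raw capacity (from the virtualization manager) vs. per-service allocations.

raw = {"node": "edge-1", "cpus": 64, "mem_gb": 256}
allocations = [
    {"service": "xr-app", "cpus": 16, "mem_gb": 64},
    {"service": "llm",    "cpus": 32, "mem_gb": 128},
]

def available(raw, allocations):
    """Resources still assignable on a node, given current allocations."""
    return {
        "cpus":   raw["cpus"]   - sum(a["cpus"]   for a in allocations),
        "mem_gb": raw["mem_gb"] - sum(a["mem_gb"] for a in allocations),
    }

print(available(raw, allocations))
]]></sourcecode>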
      </section>
      <section anchor="consumers-of-compute-related-information">
        <name>Consumers of Compute-Related Information</name>
        <t>The consumption of compute-related information relates to the
different phases of the service lifecycle (<xref target="lifecycle"/>). This means
that this information can be consumed at different points in time
and for different purposes.</t>
        <t>The expected consumers can be either external or internal to the network.
External consumers include application
management systems that require resource availability information for
service function placement decisions or workload migration (when consuming
raw resources), or that require information on resource usage
for service assurance or service scaling, among others.</t>
        <t>As internal consumers, it is possible to consider network
management entities requiring information on the level of
resource utilization for traffic steering (e.g., as done by the
Path Selector in <xref target="I-D.ldbc-cats-framework"/>), load balancing,
or analytics, among others.</t>
      </section>
    </section>
    <section anchor="considerations-about-selection-and-exposure-of-metrics">
      <name>Considerations about Selection and Exposure of Metrics</name>
      <t>One can distinguish the topics of (1)
which kind of metrics need to be exposed and (2) how the metrics are exposed.
The infrastructure resources can be divided into (1) network and (2) compute related
resources. This section intends to give a brief outlook regarding these resources
for stimulating additional discussion with related work going on in
other IETF working groups or standardization bodies.</t>

<section anchor="considerations-about-metrics">
        <name>Considerations about Metrics</name>
        <t>The metrics considered in this document are meant to support
decisions for selection and deployment of services and applications.
Further iterations of this document may consider additional
lifecycle operations such as assurance and relevant metrics.</t>
        <t>The abovementioned operations may also involve network metrics that are specified in a number of
IETF documents such as RFC 9439 <xref target="I-D.ietf-alto-performance-metrics"/>,
which itself leverages RFC 7679. Work on compute metrics
at the IETF, on the other hand, is in its early stages and mostly
relates to low-level infrastructure metrics such as those in <xref target="RFC7666"/>.
However:</t>
        <ul spacing="normal">
          <li>
            <t>Decisions for service deployment and selection may further involve
decisions that require an aggregated view, for instance, at the
service level.</t>
          </li>
          <li>
            <t>Deciding entities may only have partial access to the compute
information and actually do not need to have all the details.</t>
          </li>
        </ul>
        <t>Compute metrics and their acquisition and management have been addressed by standardization
bodies outside the IETF, such as NIST and DMTF, with the goal of guaranteeing reliable assessment and comparison of cloud
services.
A number of public tools and methods to test the performance of compute
facilities are made available by cloud service providers or
service management businesses (e.g., see <xref target="UPCLOUD"/> and <xref target="IR"/>).
However, the definition and
acquisition method of the proposed performance metrics may differ from one provider to another,
making it challenging to compare performance across different
providers. The latter aspect is particularly problematic for
applications running at the edge, where a complex ecosystem of
operators, vendors, and application providers is involved,
and calls for a common standardized definition.</t>
      </section>
      <section anchor="decision-dimensions-for-metrics-selection">
        <name>Decision Dimensions for Metrics Selection</name>
        <t>Once defined, the compute metrics are to be selected and exposed to management entities
acting at different levels, such as a centralized controller or a router, taking different actions
such as service placement, service selection, and assurance, with decision scopes ranging from
local compute facilities to end-to-end services.</t>
        <t>Upon exploring existing work, this draft
proposes to consider a number of "decision dimensions" reflecting the abovementioned aspects in order to help identify
the suitable compute metrics needed to take a service operation decision.
This list is initial and is to be updated upon further discussion.</t>
        <t>Dimensions helping to identify needed compute metrics:</t>
        <table anchor="comp_dimensions">
          <name>Dimensions to consider when identifying the needed compute metrics.</name>
          <thead>
            <tr>
              <th align="left">Dimension</th>
              <th align="left">Definition of the dimension</th>
              <th align="left">Examples</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Target operation</td>
              <td align="left">What operation the metric is used for</td>
              <td align="left">Monitoring, benchmarking, service placement and selection</td>
            </tr>
            <tr>
              <td align="left">Driving KPI(s)</td>
              <td align="left">KPI(s) assessed with the metrics</td>
              <td align="left">Speed, scalability, cost, stability</td>
            </tr>
            <tr>
              <td align="left">Decision scope</td>
              <td align="left">Granularity of metric definition</td>
              <td align="left">Infrastructure node/cluster, compute service, end-to-end application</td>
            </tr>
            <tr>
              <td align="left">Receiving entity</td>
              <td align="left">Function receiving the metrics</td>
              <td align="left">Router, centralized controller, application management</td>
            </tr>
            <tr>
              <td align="left">Deciding entity</td>
              <td align="left">Function using the metrics to compute decisions</td>
              <td align="left">Router, centralized controller, application management</td>
            </tr>
          </tbody>
        </table>
        <t>The "value" of a dimension has an impact on the characteristics of the metrics to consider. In particular:</t>
        <ul spacing="normal">
          <li>
            <t>The target operation: determines the specific use case for the metric, such as monitoring, benchmarking, service placement, or selection, guiding the selection of relevant metrics.</t>
          </li>
          <li>
            <t>The driving KPI(s): leads to selecting metrics that are relevant from a performance standpoint.</t>
          </li>
          <li>
            <t>The decision scope: leads to selecting metrics at a relevant granularity or aggregation level.</t>
          </li>
          <li>
            <t>The receiving entity: impacts the dynamicity of the received metric values. While a router likely receives static information to moderate overhead, a centralized control function may receive more dynamic information that it may additionally process on its own.</t>
          </li>
          <li>
            <t>The deciding entity: computes the decisions to take upon metric values and needs information that is synchronized at an appropriate frequency.</t>
          </li>
        </ul>
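        <t>As an illustration of how these dimensions narrow down the candidate metrics, the following sketch encodes the decision logic of the bullet list above. The metric names, dimension values, and selection rules are illustrative assumptions, not normative definitions.</t>
        <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch: filtering candidate compute metrics by two of
# the decision dimensions above (decision scope and receiving entity).
# Metric names and dimension values are assumptions for illustration.
CANDIDATE_METRICS = [
    {"name": "cpu_capacity",    "scope": "node",    "dynamicity": "static"},
    {"name": "cpu_utilization", "scope": "node",    "dynamicity": "dynamic"},
    {"name": "service_latency", "scope": "service", "dynamicity": "dynamic"},
]

def select_metrics(decision_scope, receiving_entity):
    """Keep metrics at the requested granularity; a router only
    receives static information to moderate signaling overhead,
    while a centralized controller may also receive dynamic values."""
    selected = []
    for metric in CANDIDATE_METRICS:
        if metric["scope"] != decision_scope:
            continue
        if receiving_entity == "router" and metric["dynamicity"] != "static":
            continue
        selected.append(metric["name"])
    return selected
```
]]></sourcecode>
        <t>For instance, select_metrics("node", "router") yields only 'cpu_capacity', while select_metrics("node", "controller") also includes 'cpu_utilization'.</t>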
        <t>Metric values undergo various lifecycle actions, primarily acquisition, processing, and exposure. These actions can be executed through different methodologies. Documenting which methodology was used enhances the reliability and informed utilization of the metrics, and detailing the specific method used for each action increases it further. The table below provides some examples:</t>
        <table anchor="metric_action">
          <name>Examples of lifecycle actions documented on metrics.</name>
          <thead>
            <tr>
              <th align="left">Lifecycle action</th>
              <th align="left">Example</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Acquisition method</td>
              <td align="left">telemetry, estimation</td>
            </tr>
            <tr>
              <td align="left">Value processing</td>
              <td align="left">aggregation, abstraction</td>
            </tr>
            <tr>
              <td align="left">Exposure</td>
              <td align="left">in-path distribution, off-path distribution</td>
            </tr>
          </tbody>
        </table>
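        <t>The lifecycle actions in the table above can be documented alongside each reported value. A minimal sketch, assuming a simple record type whose field names and allowed values are illustrative:</t>
        <sourcecode type="python"><![CDATA[
```python
from dataclasses import dataclass

# Minimal sketch: attaching lifecycle documentation to a reported
# metric value. Field names and allowed values are illustrative.
@dataclass
class MetricReport:
    name: str
    value: float
    acquisition: str  # e.g., "telemetry" or "estimation"
    processing: str   # e.g., "aggregation" or "abstraction"
    exposure: str     # e.g., "in-path" or "off-path"

report = MetricReport(
    name="cpu_utilization",
    value=0.42,
    acquisition="telemetry",
    processing="aggregation",
    exposure="off-path",
)
```
]]></sourcecode>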
      </section>
      <section anchor="abstraction-level-and-information-access">
        <name>Abstraction Level and Information Access</name>
        <t>One important aspect to consider is that receiving entities that need to consume metrics to make selection or placement decisions do not always have access to computing information. In particular, several scenarios may need to be considered, including the following:</t>
        <ul spacing="normal">
          <li>
            <t>The consumer is an ISP that does not own the compute infrastructure or has no access to full information. In this case, the compute metrics will likely be estimated.</t>
          </li>
          <li>
            <t>The consumer is an application that has no direct access to full information, while the ISP has access to both network and compute information and is willing to provide guidance to the application in the form of abstract information.</t>
          </li>
          <li>
            <t>The consumer has access to full network and compute information and wants to use it for fine-grained decision making, e.g., at the node/cluster level.</t>
          </li>
          <li>
            <t>The consumer has access to full information but essentially needs guidance with abstracted information.</t>
          </li>
          <li>
            <t>The consumer has access to information that is abstracted or detailed depending on the metrics.</t>
          </li>
        </ul>
        <t>These scenarios further drive the selection of metrics along the above-mentioned dimensions.</t>
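        <t>The access scenarios above can be read as a small decision procedure. A hypothetical sketch, whose labels are illustrative rather than standardized values:</t>
        <sourcecode type="python"><![CDATA[
```python
def exposure_mode(has_full_info, needs_fine_grained):
    """Hypothetical sketch of the scenarios above: whether a consumer
    ends up working with detailed, abstracted, or estimated metrics.
    The returned labels are illustrative, not standardized values."""
    if not has_full_info:
        # E.g., an ISP without access to the compute infrastructure,
        # or an application relying on ISP guidance.
        return "estimated-or-abstracted"
    return "detailed" if needs_fine_grained else "abstracted"
```
]]></sourcecode>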
      </section>
      <section anchor="distribution-and-exposure-mechanisms">
        <name>Distribution and Exposure Mechanisms</name>
        <section anchor="metric-distribution-in-computing-aware-traffic-steering-cats">
          <name>Metric Distribution in Computing-Aware Traffic Steering (CATS)</name>
          <t>The IETF CATS WG has explored the collection and distribution of computing metrics in <xref target="I-D.ldbc-cats-framework"/>. In their deployment considerations, the authors consider three deployment models for the location of the service selection function: distributed, centralized and hybrid. For these three models, the compute metrics are, respectively:</t>
          <ul spacing="normal">
            <li>
              <t>Distributed among network devices directly.</t>
            </li>
            <li>
              <t>Collected by a centralized control plane.</t>
            </li>
            <li>
              <t>Hybrid where some compute metrics are distributed among involved network devices,
and others are collected by a centralized control plane.</t>
            </li>
          </ul>
          <t>In the hybrid mode, the draft notes that some static information (e.g., capability information) can be distributed among network devices since it is quite stable, while frequently changing information (e.g., resource utilization) can be collected by a centralized control plane to avoid frequent flooding in the distributed control plane.</t>
          <t>The hybrid mode thus highlights the impact of metric dynamicity and the need to carefully choose each metric's exposure mode according to that dynamicity.</t>
          <t>The section on Metrics Distribution also indicates the need for extensions to the routing protocols in order to distribute additional information, such as link latency, as well as information not standardized in these protocols, such as compute metrics.</t>
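          <t>The hybrid model's split between stable and frequently changing information can be sketched as a simple classification step. The set of "static" metric names below is an assumption for illustration:</t>
          <sourcecode type="python"><![CDATA[
```python
# Sketch of the hybrid CATS deployment model: stable capability
# information is distributed in the distributed control plane, while
# frequently changing utilization data is collected by a centralized
# control plane. The set of static metrics below is an assumption.
STATIC_METRICS = {"cpu_capacity", "memory_total", "accelerator_model"}

def control_plane_for(metric_name):
    """Return which control plane handles a given metric."""
    return "distributed" if metric_name in STATIC_METRICS else "centralized"
```
]]></sourcecode>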
        </section>
        <section anchor="metric-exposure-with-extensions-of-alto">
          <name>Metric Exposure with Extensions of ALTO</name>
          <t>The ALTO protocol, defined in <xref target="RFC7285"/>, exposes an abstract network topology and related path costs. ALTO is a client-server protocol exposing information to clients that can be associated with applications as well as orchestrators. Its extension RFC 9240 allows entities to be defined on which properties can be specified, while <xref target="I-D.contreras-alto-service-edge"/> proposes an entity property that allows an entity to be considered both as a network element with network-related costs and properties and as a data center element with compute-related properties. Such an exposure mechanism is particularly useful for decision-making entities that are centralized and located off the network paths.</t>
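          <t>As an illustration, an entity property map in the spirit of RFC 9240 could carry network and compute properties side by side. The property names below are illustrative assumptions, not registered ALTO property types:</t>
          <sourcecode type="python"><![CDATA[
```python
import json

# Illustrative sketch (not a normative ALTO message): an entity
# property map carrying both network and compute properties for an
# edge entity. Property names are assumptions for illustration.
property_map = {
    "property-map": {
        "ipv4:192.0.2.10": {
            "routingcost": 5,
            "compute-cpu-available-ghz": 12.0,
            "compute-memory-available-gb": 64,
        }
    }
}

encoded = json.dumps(property_map)
```
]]></sourcecode>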
        </section>
        <section anchor="exposure-of-abstracted-generic-metrics">
          <name>Exposure of Abstracted Generic Metrics</name>
          <t>In some cases, whether due to unavailable information details or for the sake of simplicity, a consumer may need reliable but simple guidance to select a service. To this end, abstracted generic metrics may be useful.</t>
          <t>One can consider a generic metric named 'computingcost' that is applied to a contact point for one or more edge servers, such as a load balancer (an "edge server", for short), to reflect the network operator's policy and preferences.  The metric 'computingcost' results from an abstraction method that is hidden from users, similarly to the metric 'routingcost' defined in <xref target="RFC7285"/>.  For instance, 'computingcost' may be higher for an edge server located far away, in a disfavored geographical area, or owned by a provider that does not share information with the Internet Service Provider (ISP) or with which the ISP has a less favorable commercial agreement.  'computingcost' may also reflect environmental preferences in terms of, for instance, energy source, average consumption vs. local climate, or location adequacy vs. climate.</t>
          <t>One may also consider a generic metric named 'computingperf', applied to an edge server, that reflects its performance based on measurements or estimations by the ISP, or a combination thereof.  An edge server with a higher 'computingperf' value will be preferred.  'computingperf' can be based on a vector of one or more metrics reflecting, for instance, responsiveness or reliability of cloud services (based on metrics such as latency, packet loss, jitter, or time to first and/or last byte), or on a single value reflecting a global performance score.</t>
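          <t>A 'computingperf' score derived from a vector of measurements could, for example, be computed as a weighted inverse of impairment metrics. The weights and normalization below are purely illustrative assumptions, not part of any specification:</t>
          <sourcecode type="python"><![CDATA[
```python
def computingperf(latency_ms, loss_rate, jitter_ms,
                  weights=(1.0, 100.0, 2.0)):
    """Hypothetical scoring sketch: combine impairment metrics into a
    single score where higher means better. Weights and normalization
    are illustrative assumptions, not part of any specification."""
    w_lat, w_loss, w_jit = weights
    impairment = w_lat * latency_ms + w_loss * loss_rate + w_jit * jitter_ms
    return 1.0 / (1.0 + impairment)
```
]]></sourcecode>
          <t>An edge server with lower latency, loss, and jitter then receives a higher 'computingperf' value, matching the preference rule above.</t>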
        </section>
      </section>
      <section anchor="examples-of-resources">
        <name>Examples of Resources</name>
        <section anchor="network-resources">
          <name>Network Resources</name>
          <t>Network resources relate to the traditional network
infrastructure. The next table provides examples of some of the
commonly used metrics:</t>
          <table anchor="net_res">
            <name>Examples of network resource metrics.</name>
            <thead>
              <tr>
                <th align="left">Kind of Resource</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">QoS</td>
              </tr>
              <tr>
                <td align="left">Latency</td>
              </tr>
              <tr>
                <td align="left">Bandwidth</td>
              </tr>
              <tr>
                <td align="left">RTT</td>
              </tr>
              <tr>
                <td align="left">Packet Loss</td>
              </tr>
              <tr>
                <td align="left">Jitter</td>
              </tr>
            </tbody>
          </table>
        </section>
        <section anchor="cloud-resources">
          <name>Cloud Resources</name>
          <t>Cloud resources relate to the compute
infrastructure. The next table provides examples of some of the
commonly used metrics:</t>
          <table anchor="cloud_res">
            <name>Examples of cloud resource parameters.</name>
            <thead>
              <tr>
                <th align="left">Resource</th>
                <th align="left">Type</th>
                <th align="left">Example</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">CPU</td>
                <td align="left">Compute</td>
                <td align="left">Available CPU resources in GHz</td>
              </tr>
              <tr>
                <td align="left">Memory</td>
                <td align="left">Compute</td>
                <td align="left">Available memory in GB</td>
              </tr>
              <tr>
                <td align="left">Storage</td>
                <td align="left">Storage</td>
                <td align="left">Available storage in GB</td>
              </tr>
              <tr>
                <td align="left">Configmaps</td>
                <td align="left">Object</td>
                <td align="left">Configuration and topology maps</td>
              </tr>
              <tr>
                <td align="left">Pods</td>
                <td align="left">Object</td>
                <td align="left">Current list of active pods</td>
              </tr>
              <tr>
                <td align="left">Jobs</td>
                <td align="left">Object</td>
                <td align="left">Current list of active jobs</td>
              </tr>
              <tr>
                <td align="left">Services</td>
                <td align="left">Object</td>
                <td align="left">Concurrent services</td>
              </tr>
            </tbody>
          </table>
          <!-- | Secrets     |    Object        |   Possible secrets           | -->

</section>
      </section>
    </section>
    <section anchor="study-of-the-kubernetes-metrics-api-and-exposure-mechanism">
      <name>Study of the Kubernetes Metrics API and Exposure Mechanism</name>
      <t>An approach to develop IETF specifications for the definition of compute and
communication metrics is to leverage existing and mature solutions, whether based on
open standards or de facto standards. On one hand, this approach avoids
reinventing the wheel; on the other, it ensures the specifications are based
on significant industry experience and stable running code.</t>
      <t>For communication metrics, the IETF has already developed detailed and mature
specifications. An example is the ALTO Protocol <xref target="RFC7285"/>, which provides RFCs standardizing
communication metrics and a detailed exposure mechanism protocol.</t>
      <t>Compute metrics, however, have not been thoroughly studied within the IETF.
With the goal to avoid reinventing the wheel and to ensure significant industry
experience is taken into account, in this section we study the Kubernetes
Metric API. Kubernetes is not only a de facto standard to manage containerized
software in data centers, but it is also increasingly being used by telecommunication operators
to manage compute resources at the edge.</t>
      <section anchor="understanding-the-kubernetes-metrics-api-and-its-exposure-mechanism">
        <name>Understanding the Kubernetes Metrics API and its Exposure Mechanism</name>
        <t><xref target="kubernetes_metrics"/> shows the Kubernetes Metric API architecture.
It consists of the following components:</t>
        <t><strong>Pod</strong>. A collection of one or more containers.</t>
        <t><strong>Cluster</strong>. A collection of one or more nodes.</t>
        <t><strong>HPA, VPA and 'kubectl top'</strong>. Three different applications that
serve as examples of consumers of the Metrics API.
The HorizontalPodAutoscaler (HPA) and VerticalPodAutoscaler
(VPA) use data from the Metrics
API to adjust workload replicas and resources to meet
customer demand. 'kubectl top' can
be used to display the metrics.</t>
        <t><strong>cAdvisor</strong>. Daemon for collecting metrics (CPU, memory, GPU, etc.) from all the containers
running on the node. It is responsible for aggregating these metrics and exposing them to the kubelet.</t>
        <t><strong>Kubelet</strong>. Node agent responsible for managing container resources. It includes the
ability to collect the metrics from cAdvisor and make them accessible
via the /metrics/resource and /stats kubelet API endpoints.</t>
        <t><strong>Metrics server</strong>. Cluster agent responsible for collecting and aggregating resource metrics
from each kubelet.</t>
        <t><strong>API Server</strong>. General server providing API access to Kubernetes services,
one of which is the Metrics API service. HPA, VPA, and
'kubectl top' query the API server to retrieve the metrics.</t>
        <figure anchor="kubernetes_metrics">
          <name>Collection and exposure of metrics using the Kubernetes Metrics API.</name>
          <artwork><![CDATA[
            +---------------------------------------------------------------------------------+
            |                                                                                 |
            |  Cluster                      +-----------------------------------------------+ |
            |                               |                                               | |
            |                               |  Node                           +-----------+ | |
            |                               |                                 | Container | | |
            |                               |                               +-+           | | |
            |                               |                               | |  runtime  | | |
            |                               |                 +----------+  | +-----------+ | |
+-------+   |                               |                 |          |  |               | |
|  HPA  <-+ |                               |               +-+ cAdvisor |<-+               | |
+-------+ | |                               |               | |          |  | +-----------+ | |
          | | +----------+    +-----------+ | +----------+  | +----------+  | | Container | | |
+-------+ | | |  API     |    |  Metrics  | | |          |  |               +-+           | | |
|  VPA  <-+-+-+          <--+-+           <-+-+ Kubelet  <--+                 |  runtime  | | |
+-------+ | | | server   |  | |   server  | | |          |  |                 +-----------+ | |
          | | +----------+  | +-----------+ | +----------+  |                               | |
+-------+ | |               |               |               |                               | |
|kubectl| | |               |               |               | +----------+                  | |
| top   <-+ |               | +-----------+ |               | |  Other   |                  | |
+-------+   |               | |  Other    | |               +-+   pod    |                  | |
            |               +-+           | |                 |   data   |                  | |
            |                 |  data     | |                 +----------+                  | |
            |                 +-----------+ |                                               | |
            |                               +-----------------------------------------------+ |
            |                                                                                 |
            +---------------------------------------------------------------------------------+
]]></artwork>
        </figure>
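        <t>Putting the components above together, a consumer can retrieve node-level metrics through the API server (e.g., via 'kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes'). The sketch below parses a payload in the shape of a NodeMetricsList; the sample response is hand-written for illustration, not captured from a live cluster:</t>
        <sourcecode type="python"><![CDATA[
```python
import json

# Hand-written sample in the shape of a NodeMetricsList response from
# /apis/metrics.k8s.io/v1beta1/nodes (illustrative, not captured).
SAMPLE = """{
  "kind": "NodeMetricsList",
  "items": [
    {"metadata": {"name": "node-1"},
     "usage": {"cpu": "250m", "memory": "1048576Ki"}}
  ]
}"""

def cpu_millicores(quantity):
    # Kubernetes CPU quantities: "250m" means 0.25 cores, "2" means 2 cores.
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

usage = {
    item["metadata"]["name"]: cpu_millicores(item["usage"]["cpu"])
    for item in json.loads(SAMPLE)["items"]
}
```
]]></sourcecode>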
      </section>
      <section anchor="example-of-how-to-map-the-kubernetes-metrics-api-with-the-ietf-cats-metrics-distribution">
        <name>Example of How to Map the Kubernetes Metrics API with the IETF CATS METRICS Distribution</name>
        <t>In this section, we describe a mapping between
the Kubernetes Metrics API and the IETF CATS metric dissemination
architecture, illustrating an example of how a de facto standard widely
used in production systems can be adapted to support the CATS metrics
framework.</t>
        <t>To describe the mapping, we take the centralized model
of the CATS metrics dissemination framework introduced in
<xref target="I-D.ldbc-cats-framework"/>, which we include in <xref target="cats_framework"/>
for ease of reading. (Similar mappings can be created for the
distributed and hybrid models also introduced in that document.)</t>
        <figure anchor="cats_framework">
          <name>Collection and exposure of metrics using the CATS Centralized Model. (Taken from [I-D.ldbc-cats-framework])</name>
          <artwork><![CDATA[
            :       +------+
            :<------| C-PS |<----------------------------------+
            :       +------+ <------+              +--------+  |
            :          ^            |           +--|CS-ID 1 |  |
            :          |            |           |  |CIS-ID 1|  |
            :          |   +----------------+   |  +--------+  |
            :          |   |    C-SMA       |---|Service Site 2|
            :          |   +----------------+   |  +--------+  |
            :          |   |CATS-Forwarder 2|   +--|CS-ID 1 |  |
            :          |   +----------------+      |CIS-ID 2|  |
+--------+  :          |             |             +--------+  |
| Client |  :  Network |   +----------------------+            |
+--------+  :  metrics |   | +-------+            |            |
     |      :          +-----| C-NMA |            |      +-----+
     |      :          |   | +-------+            |      |C-SMA|<-+
+----------------+ <---+   |                      |      +-----+  |
|CATS-Forwarder 1|---------|                      |          ^    |
+----------------+         |       Underlay       |          |    |
            :              |     Infrastructure   |     +--------+|
            :              |                      |     |CS-ID 1 ||
            :              +----------------------+  +--|CIS-ID 3||
            :                        |               |  +--------+|
            :          +----------------+------------+            |
            :          |CATS-Forwarder 3|         Service Site 3  |
            :          +----------------+                         |
            :                        |       :      +-------+     |
            :                        +-------:------|CS-ID 2|-----+
            :                                :      +-------+
            :<-------------------------------:
]]></artwork>
        </figure>
        <t>The following table provides the mapping:</t>
        <table anchor="kub_cats_map">
          <name>Example of how to map the Kubernetes Metrics API with the IETF CATS Architecture.</name>
          <thead>
            <tr>
              <th align="left">IETF CATS component</th>
              <th align="left">Kubernetes Metrics API component</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CIS-ID</td>
              <td align="left">Cluster API</td>
            </tr>
            <tr>
              <td align="left">C-SMA</td>
              <td align="left">cAdvisor</td>
            </tr>
            <tr>
              <td align="left">C-NMA</td>
              <td align="left">Other components outside Kubernetes</td>
            </tr>
            <tr>
              <td align="left">C-PS</td>
              <td align="left">Other components outside Kubernetes</td>
            </tr>
            <tr>
              <td align="left">CATS Service Site</td>
              <td align="left">Node or cluster</td>
            </tr>
            <tr>
              <td align="left">CATS Service</td>
              <td align="left">One or more clusters distributed on several locations</td>
            </tr>
          </tbody>
        </table>
        <t>Note that while in Kubernetes there are multiple levels of abstraction
to reach the Metrics API (cAdvisor -&gt; kubelet -&gt; metrics server -&gt; API server),
they can all be co-located in the cAdvisor, which can then be mapped to the
C-SMA module in CATS.</t>
      </section>
      <section anchor="available-metrics-from-the-kubernetes-metrics-api">
        <name>Available Metrics from the Kubernetes Metrics API</name>
        <t>The Kubernetes Metrics API implementation
can be found in staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go
as part of the Kubernetes repository (https://github.com/kubernetes/kubernetes).</t>
        <t>In this section we provide a summary of the metrics offered by the API:</t>
        <table anchor="kub_metrics_node">
          <name>Summary of the Kubernetes Metric API: Node-level metrics.</name>
          <thead>
            <tr>
              <th align="left">Node-level metric</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">nodeName</td>
              <td align="left">Name of the node</td>
            </tr>
            <tr>
              <td align="left">ContainerStats</td>
              <td align="left">Stats of the containers within this node</td>
            </tr>
            <tr>
              <td align="left">CPUStats</td>
              <td align="left">Stats pertaining to CPU resources</td>
            </tr>
            <tr>
              <td align="left">MemoryStats</td>
              <td align="left">Stats pertaining to memory (RAM) resources</td>
            </tr>
            <tr>
              <td align="left">NetworkStats</td>
              <td align="left">Stats pertaining to network resources</td>
            </tr>
            <tr>
              <td align="left">FsStats</td>
              <td align="left">Stats pertaining to the filesystem resources</td>
            </tr>
            <tr>
              <td align="left">RuntimeStats</td>
              <td align="left">Stats about the underlying container runtime</td>
            </tr>
            <tr>
              <td align="left">RlimitStats</td>
              <td align="left">Stats about the rlimits of the system</td>
            </tr>
          </tbody>
        </table>
        <table anchor="kub_metrics_pod">
          <name>Summary of the Kubernetes Metric API: Pod-level metrics.</name>
          <thead>
            <tr>
              <th align="left">Pod-level metric</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">PodReference</td>
              <td align="left">Reference to the measured Pod</td>
            </tr>
            <tr>
              <td align="left">CPU</td>
              <td align="left">Stats pertaining to CPU resources consumed by pod cgroup</td>
            </tr>
            <tr>
              <td align="left">Memory</td>
              <td align="left">Stats pertaining to memory (RAM) resources consumed by pod cgroup</td>
            </tr>
            <tr>
              <td align="left">NetworkStats</td>
              <td align="left">Stats pertaining to network resources</td>
            </tr>
            <tr>
              <td align="left">VolumeStats</td>
              <td align="left">Stats pertaining to volume usage of filesystem resources</td>
            </tr>
            <tr>
              <td align="left">FsStats</td>
              <td align="left">Total filesystem usage for the containers</td>
            </tr>
            <tr>
              <td align="left">ProcessStats</td>
              <td align="left">Stats pertaining to processes</td>
            </tr>
          </tbody>
        </table>
        <table anchor="kub_metrics_container">
          <name>Summary of the Kubernetes Metric API: Container-level metrics.</name>
          <thead>
            <tr>
              <th align="left">Container-level metric</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">name</td>
              <td align="left">Name of the container</td>
            </tr>
            <tr>
              <td align="left">CPUStats</td>
              <td align="left">Stats pertaining to CPU resources</td>
            </tr>
            <tr>
              <td align="left">MemoryStats</td>
              <td align="left">Stats pertaining to memory (RAM) resources</td>
            </tr>
            <tr>
              <td align="left">AcceleratorStats</td>
              <td align="left">Metrics for Accelerators (e.g., GPU, NPU, etc.)</td>
            </tr>
            <tr>
              <td align="left">FsStats</td>
              <td align="left">Stats pertaining to the container's filesystem resources</td>
            </tr>
            <tr>
              <td align="left">UserDefinedMetrics</td>
              <td align="left">User defined metrics that are exposed by containers in the pod</td>
            </tr>
          </tbody>
        </table>
        <t>For more details, refer to https://github.com/kubernetes/kubernetes under the path
staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go.</t>
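        <t>To give a feel for how these levels nest, the following is a minimal Python mirror of a subset of the kubelet stats hierarchy (NodeStats -&gt; PodStats -&gt; ContainerStats). It is a simplification of the actual Go definitions in types.go, with only a few fields retained:</t>
        <sourcecode type="python"><![CDATA[
```python
from dataclasses import dataclass, field
from typing import List, Optional

# Simplified mirror of a subset of the kubelet stats hierarchy
# (NodeStats -> PodStats -> ContainerStats); only a few fields of the
# Go types in types.go are retained here.
@dataclass
class CPUStats:
    usage_nano_cores: Optional[int] = None

@dataclass
class ContainerStats:
    name: str
    cpu: Optional[CPUStats] = None

@dataclass
class PodStats:
    pod_name: str
    containers: List[ContainerStats] = field(default_factory=list)

@dataclass
class NodeStats:
    node_name: str
    pods: List[PodStats] = field(default_factory=list)

node = NodeStats(
    node_name="node-1",
    pods=[PodStats("pod-a", [ContainerStats("c0", CPUStats(250_000_000))])],
)
```
]]></sourcecode>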
      </section>
    </section>
    <section anchor="related-work">
      <name>Related Work</name>
      <t>Some existing work has explored compute-related metrics. It can be categorized as follows:</t>
      <ul spacing="normal">
        <li>
          <t><strong>References providing raw compute infrastructure metrics</strong>:  </t>
          <ul spacing="normal">
            <li>
              <t><xref target="I-D.contreras-alto-service-edge"/> includes references to cloud management solutions (e.g., OpenStack, Kubernetes) that administer the virtualization infrastructure, providing information about raw compute infrastructure metrics.</t>
            </li>
            <li>
              <t><xref target="NFV-TST"/> describes metrics related to processor, memory, and network interface usage.</t>
            </li>
          </ul>
        </li>
        <li>
          <t><strong>References providing compute virtualization metrics</strong>:
          </t>
          <ul spacing="normal">
            <li>
              <t><xref target="RFC7666"/> defines several metrics as part of the Management Information Base (MIB) for managing virtual machines controlled by a hypervisor. These objects reference the resources consumed by a particular virtual machine serving as a host for services or applications.</t>
            </li>
            <li>
              <t><xref target="NFV-INF"/> provides metrics associated with virtualized network functions.</t>
            </li>
          </ul>
        </li>
        <li>
          <t><strong>References providing service metrics including compute-related information</strong>:
          </t>
          <ul spacing="normal">
            <li>
              <t><xref target="I-D.dunbar-cats-edge-service-metrics"/> proposes metrics associated with services running in compute infrastructures. Some of these metrics do not depend on the infrastructure behavior itself but on the topological location of the compute infrastructure.</t>
            </li>
          </ul>
        </li>
        <li>
          <t><strong>Other existing work at the IETF CATS WG</strong>:
          </t>
          <ul spacing="normal">
            <li>
              <t><xref target="I-D.ldbc-cats-framework"/> explores the collection and distribution of computing metrics. In their deployment considerations, they consider three models: distributed, centralized, and hybrid.</t>
            </li>
          </ul>
        </li>
      </ul>
    </section>
    <section anchor="guiding-principles">
      <name>Guiding Principles</name>
      <t>The driving principles for designing an interface to jointly extract network and compute information are as follows:</t>
      <ul spacing="normal">
        <li>
          <t><strong>P1. Leverage existing metrics across working groups to avoid reinventing the wheel.</strong> For instance:
          </t>
          <ul spacing="normal">
            <li>
              <t>RFC 9439 (<xref target="I-D.ietf-alto-performance-metrics"/>) leverages IPPM metrics from RFC 7679.</t>
            </li>
            <li>
              <t>Section 5.2 of <xref target="I-D.du-cats-computing-modeling-description"/> considers delay as a good metric, since it is easy to use in both compute and communication domains. RFC 9439 also defines delay as part of the performance metrics.</t>
            </li>
            <li>
              <t>Section 6 of <xref target="I-D.du-cats-computing-modeling-description"/> proposes representing the network structure as graphs, similar to the ALTO map services in RFC 7285.</t>
            </li>
          </ul>
        </li>
        <li>
          <t><strong>P2. Aim for simplicity, while ensuring the combined efforts don’t leave technical gaps in supporting the full lifecycle of service deployment and selection.</strong> For instance:
          </t>
          <ul spacing="normal">
            <li>
              <t>The CATS working group covers path selection from a network standpoint, while ALTO (e.g., RFC 7285) covers exposing network information to the service provider and the client application. However, there is currently no effort being pursued to expose compute information to the service provider and the client application for service placement or selection.</t>
            </li>
          </ul>
        </li>
      </ul>
    </section>
    <section anchor="gap-analysis">
      <name>Gap Analysis</name>
      <t>From this related work, it is evident that compute-related metrics can serve several purposes, ranging from service instance instantiation, through service instance behavior monitoring, to service instance selection. Some of these metrics may refer to the same object (e.g., CPU) but with a different usage and scope in each case.</t>
      <t>Network metrics, in contrast, are more uniform and straightforward. It is therefore necessary to define a consistent set of compute metrics covering the operational concerns identified so far, so that networks and computing systems share a common understanding of the perceived compute performance. When combined with network metrics, this joint view of network and compute performance will support informed decisions for each operational concern across the different stages of a service lifecycle.</t>
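      <t>To make the intended use concrete, the following non-normative sketch shows how a consistently defined compute metric could be combined with a network metric to rank candidate service locations. The function name, weights, and delay values are illustrative assumptions only; this document does not define a combination algorithm.</t>
      <sourcecode type="python"><![CDATA[
# Hypothetical sketch: rank candidate sites by a weighted sum of a
# network metric (path delay) and a compute metric (processing delay).
def combined_cost(net_delay_ms, compute_delay_ms, w_net=0.5, w_compute=0.5):
    """Single scalar cost for one candidate site (weights are assumptions)."""
    return w_net * net_delay_ms + w_compute * compute_delay_ms

# Example: a nearby but heavily loaded site vs. a farther but idle one.
sites = {
    "site-a": combined_cost(10.0, 40.0),  # low network delay, high compute delay
    "site-b": combined_cost(25.0, 5.0),   # higher network delay, idle servers
}
best = min(sites, key=sites.get)
]]></sourcecode>
      <t>A client or orchestrator with access to both metric families could apply such a ranking per operational concern (placement, scaling, or selection), with the weights reflecting the service's sensitivity to each metric.</t>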
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TODO Security</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC7285">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Protocol</title>
            <author fullname="R. Alimi" initials="R." role="editor" surname="Alimi"/>
            <author fullname="R. Penno" initials="R." role="editor" surname="Penno"/>
            <author fullname="Y. Yang" initials="Y." role="editor" surname="Yang"/>
            <author fullname="S. Kiesel" initials="S." surname="Kiesel"/>
            <author fullname="S. Previdi" initials="S." surname="Previdi"/>
            <author fullname="W. Roome" initials="W." surname="Roome"/>
            <author fullname="S. Shalunov" initials="S." surname="Shalunov"/>
            <author fullname="R. Woundy" initials="R." surname="Woundy"/>
            <date month="September" year="2014"/>
            <abstract>
              <t>Applications using the Internet already have access to some topology information of Internet Service Provider (ISP) networks. For example, views to Internet routing tables at Looking Glass servers are available and can be practically downloaded to many network application clients. What is missing is knowledge of the underlying network topologies from the point of view of ISPs. In other words, what an ISP prefers in terms of traffic optimization -- and a way to distribute it.</t>
              <t>The Application-Layer Traffic Optimization (ALTO) services defined in this document provide network information (e.g., basic network location structure and preferences of network paths) with the goal of modifying network resource consumption patterns while maintaining or improving application performance. The basic information of ALTO is based on abstract maps of a network. These maps provide a simplified view, yet enough information about a network for applications to effectively utilize them. Additional services are built on top of the maps.</t>
              <t>This document describes a protocol implementing the ALTO services. Although the ALTO services would primarily be provided by ISPs, other entities, such as content service providers, could also provide ALTO services. Applications that could use the ALTO services are those that have a choice to which end points to connect. Examples of such applications are peer-to-peer (P2P) and content delivery networks.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7285"/>
          <seriesInfo name="DOI" value="10.17487/RFC7285"/>
        </reference>
        <reference anchor="I-D.ietf-alto-performance-metrics">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
            <author fullname="Qin Wu" initials="Q." surname="Wu">
              <organization>Huawei</organization>
            </author>
            <author fullname="Y. Richard Yang" initials="Y. R." surname="Yang">
              <organization>Yale University</organization>
            </author>
            <author fullname="Young Lee" initials="Y." surname="Lee">
              <organization>Samsung</organization>
            </author>
            <author fullname="Dhruv Dhody" initials="D." surname="Dhody">
              <organization>Huawei</organization>
            </author>
            <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
              <organization>Nokia Bell Labs</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <date day="21" month="March" year="2022"/>
            <abstract>
              <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.

 This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.

 There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.
              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-alto-performance-metrics-28"/>
        </reference>
        <reference anchor="I-D.du-cats-computing-modeling-description">
          <front>
            <title>Computing Information Description in Computing-Aware Traffic Steering</title>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Daniel Huang" initials="D." surname="Huang">
              <organization>ZTE</organization>
            </author>
            <author fullname="Zhihua Fu" initials="Z." surname="Fu">
              <organization>New H3C Technologies</organization>
            </author>
            <date day="6" month="July" year="2024"/>
            <abstract>
              <t>   This document describes the considerations and requirements of the
   computing information that needs to be notified into the network in
   Computing-Aware Traffic Steering (CATS).

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-du-cats-computing-modeling-description-03"/>
        </reference>
        <reference anchor="I-D.ldbc-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Juniper Networks, Inc.</organization>
            </author>
            <date day="8" month="February" year="2024"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Particularly, the document identifies a set of CATS
   components, describes their interactions, and exemplifies the
   workflow of the control and data planes.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ldbc-cats-framework-06"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="NFV-TST" target="https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/008/03.03.01_60/gs_NFV-TST008v030301p.pdf">
          <front>
            <title>ETSI GS NFV-TST 008 V3.3.1, NFVI Compute and Network Metrics Specification</title>
            <author>
              <organization/>
            </author>
            <date year="2020" month="June" day="01"/>
          </front>
        </reference>
        <reference anchor="NFV-INF" target="https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/010/01.01.01_60/gs_NFV-INF010v010101p.pdf">
          <front>
            <title>ETSI GS NFV-INF 010, v1.1.1, Service Quality Metrics</title>
            <author>
              <organization/>
            </author>
            <date year="2014" month="December" day="01"/>
          </front>
        </reference>
        <reference anchor="LF-EDGE" target="https://www.lfedge.org/">
          <front>
            <title>Linux Foundation Edge</title>
            <author>
              <organization/>
            </author>
            <date year="2023" month="March"/>
          </front>
        </reference>
        <reference anchor="EDGE-ENERGY">
          <front>
            <title>Estimating energy consumption of cloud, fog, and edge computing infrastructures</title>
            <author>
              <organization/>
            </author>
            <date year="2019"/>
          </front>
          <refcontent>IEEE Transactions on Sustainable Computing</refcontent>
        </reference>
        <reference anchor="DC-AI-COST">
          <front>
            <title>Generative AI Breaks The Data Center - Data Center Infrastructure And Operating Costs Projected To Increase To Over $76 Billion By 2028</title>
            <author>
              <organization/>
            </author>
            <date year="2023"/>
          </front>
          <refcontent>Forbes, Tirias Research Report</refcontent>
        </reference>
        <reference anchor="UPCLOUD" target="https://upcloud.com/resources/tutorials/how-to-benchmark-cloud-servers">
          <front>
            <title>How to benchmark Cloud Servers</title>
            <author>
              <organization/>
            </author>
            <date year="2023" month="May"/>
          </front>
        </reference>
        <reference anchor="IR" target="https://www.ir.com/guides/cloud-performance-testing">
          <front>
            <title>Cloud Performance Testing Best Tips and Tricks</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="LLM_COMP_REQ" target="https://alpa.ai/tutorials/opt_serving.html">
          <front>
            <title>Serving OPT-175B, BLOOM-176B and CodeGen-16B using Alpa</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="I-D.llc-teas-dc-aware-topo-model">
          <front>
            <title>DC aware TE topology model</title>
            <author fullname="Young Lee" initials="Y." surname="Lee">
              <organization>Samsung Electronics</organization>
            </author>
            <author fullname="Xufeng Liu" initials="X." surname="Liu">
              <organization>Alef Edge</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <date day="10" month="July" year="2023"/>
            <abstract>
              <t>   This document proposes the extension of the TE topology model for
   including information related to data center resource capabilities.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-llc-teas-dc-aware-topo-model-03"/>
        </reference>
        <reference anchor="RFC7666">
          <front>
            <title>Management Information Base for Virtual Machines Controlled by a Hypervisor</title>
            <author fullname="H. Asai" initials="H." surname="Asai"/>
            <author fullname="M. MacFaden" initials="M." surname="MacFaden"/>
            <author fullname="J. Schoenwaelder" initials="J." surname="Schoenwaelder"/>
            <author fullname="K. Shima" initials="K." surname="Shima"/>
            <author fullname="T. Tsou" initials="T." surname="Tsou"/>
            <date month="October" year="2015"/>
            <abstract>
              <t>This document defines a portion of the Management Information Base (MIB) for use with network management protocols in the Internet community. In particular, this specifies objects for managing virtual machines controlled by a hypervisor (a.k.a. virtual machine monitor).</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7666"/>
          <seriesInfo name="DOI" value="10.17487/RFC7666"/>
        </reference>
        <reference anchor="I-D.contreras-alto-service-edge">
          <front>
            <title>Use of ALTO for Determining Service Edge</title>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
              <organization>Nokia Bell Labs</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe</organization>
            </author>
            <author fullname="Danny Alex Lachos Perez" initials="D. A. L." surname="Perez">
              <organization>Benocs</organization>
            </author>
            <author fullname="Christian Esteve Rothenberg" initials="C. E." surname="Rothenberg">
              <organization>Unicamp</organization>
            </author>
            <date day="13" month="October" year="2023"/>
            <abstract>
              <t>   Service providers are starting to deploy computing capabilities
   across the network for hosting applications such as AR/VR, vehicle
   networks, IoT, and AI training, among others.  In these distributed
   computing environments, knowledge about computing and communication
   resources is necessary to determine the proper deployment location of
   each application.  This document proposes an initial approach towards
   the use of ALTO to expose such information to the applications and
   assist the selection of their deployment locations.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-contreras-alto-service-edge-10"/>
        </reference>
        <reference anchor="I-D.dunbar-cats-edge-service-metrics">
          <front>
            <title>5G Edge Services Use Cases</title>
            <author fullname="Linda Dunbar" initials="L." surname="Dunbar">
              <organization>Futurewei</organization>
            </author>
            <author fullname="Kausik Majumdar" initials="K." surname="Majumdar">
              <organization>Microsoft</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Huawei</organization>
            </author>
            <author fullname="Haibo Wang" initials="H." surname="Wang">
              <organization>Huawei</organization>
            </author>
            <author fullname="Haoyu Song" initials="H." surname="Song">
              <organization>Futurewei</organization>
            </author>
            <date day="6" month="July" year="2023"/>
            <abstract>
              <t>   This draft describes the 5G Edge computing use cases for CATS
   and how BGP can be used to propagate additional IP layer
   detectable information about the 5G edge data centers so that
   the ingress routers in the 5G Local Data Network can make
   path selections based on not only the routing distance but
   also the IP Layer relevant metrics of the destinations. The
   goal is to improve latency and performance for 5G Edge
   Computing (EC) services even when the detailed servers
   running status are unavailable.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-dunbar-cats-edge-service-metrics-01"/>
        </reference>
      </references>
    </references>
    <?line 891?>

<section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The work from Luis M. Contreras has been partially funded by the European Union under Horizon Europe projects NEMO (NExt generation Meta Operating system) grant number 101070118, and CODECO (COgnitive, Decentralised Edge-Cloud Orchestration), grant number 101092696.</t>
    </section>
  </back>
  <!-- ##markdown-source:
/Q16w4ERu3RMDSev+oqz1G+RFJiUIG4L5/nwWURBuDgQYyha0FlVmJwWZDzG
/+w9juobJ2eXE4AaIqDY2zI+uTjjxU3KTPK4rsNErJZTCu84S/BJAzomynag
5gQY1wisrYlT/Z5wbuQHTKToFO8HtNOIT6CY5NgX5hZgYp+HjIslrcI9fK8E
p1j8FW6MnMEPs6VJqGszoaZVLcJFSkKqreyp1nJDAGk1dwDICUPajRP1lrzh
66BAqxnOMOMsm3jxzr4P4AscmWGfg1WJW1L7NsgsFJib59KhJNW69lgJUBBG
Ru+aqk/EmZEwbklMm1YZVn2EvunaJoM5e63Kr7KuKqaESsLXGaOxt0a9UjwK
+1F0wkynGZaU5PWCGld8Jk6K8AWwaV29h9yofSnBtAsXTDs6vLzYZaWZ3I34
N946jehiqzyToiVU7U3kwM7kotRWo7s9BqeJEXnQtDXIFZSse76C2Qc+UL5n
gR9aCpBUq7a5Gzby7fdC9bGDOKzENaob1Y9Sua5LqqA6Qpya59vq+KHrwlac
51ps2JVuapyZheoB1KRuZjoFF8AfMa41uaVPpQRhsORi8e8JSvHucQV7jzNq
0oHAFQ60QOGMZw5/0qvp/aGRZBdGHOGJscTXUdQoo+i0cevDrlKt7dNtaqz5
ftdHMO/CJ7cdpIQNXAMoNA3NCKoSbChrqw3VZc3aoWkBoi8ivetTIO6HE0oZ
uyrziWrIDTaOKqVCQVwutv49xOdliEx2LcPj7BinNEGxZadbDB47eits4bUP
rUyBfcGcG8niF/PA3TdFEEhI1qk6coTdvAK0Rojh/+o/DZkZB+0oszqrPTRT
l+eQUatO4xEgZbyUmwGlYzlVp3vPnV+sDSjbvXUZ3NhiWesvHL0Hj6LmEjix
XStFM7uO1/ZEBFzZcW1C3rFfFaYfvbp8wyjD33wrdlfOJG5myWLCvHJUXVTu
uxs0OTtvo1FU8iHKHXp1I8xYrjqntjY4GdUUcldi8aH7+Wmu9slAWqHHzYU+
Y4ozabo/kntQt+ATgcsqnWNGfMLdOk+a2m8wh1uffPFYO1hQ5R0u3CvMpdae
mnaz7qJGccWzYiZ5hu6Oew7cihigTgsfP8au1X4t3S/JMS8+MZliI54PB5P3
Oy/1Udynkhopue6gXH7Gu60f6p64uxDsMsjN7t5jp5Xvq8QDtXIyzOtY5p7O
ueRLj6qqB52oEehycNQ5MytU3zymGc3E+lsS0WXhTcMkeu7jy0RvU2EOvUpF
HbTgMLikhla3VpBfrDut5f4e2zfCk6AEY2O+mpTlOzW2mHI1ec4p3IlX/Jw1
4+KiqFbK/QlWw2b9wHvwqacB2RwZRuCNejiTtShDlSatjNuRT/wxYYrwleD4
4A1Kk/hzp0EhiXyuUYFEckKpVQpSNJ56LmTATP8lmU/kSqIYn95m7KNFNtNJ
4qP1HNk8EYx/Z8A1jBQsCfbWFUgAh8mFW67gQS6tq+WiAVnX3//2v4J1/P1v
/1suT5UyQMO8jHvB9XjJJxO8GhOfpMKAgRaIFxuVAX4mkQY6j3LKNq9rtePs
A1E2ULoyUBwzQI4j/ClW84GtPeC0RJQ4aHEiSZRgFq3mlPWeoMOGHtEmw8xj
5JaDeelt43rOt0Ya41IDGSeUg5c1robqTEfYAZNxlzrA4bNybbW1RmGvgCS4
mWRWUavfZAaqK93GEXdozaXR6P6bRjTYMczvtquSaidsYG9d6b8ltemJlGna
dNYrIBcJERZknA+8tp5MQO5jVwN8SL6Ws+Tg236g2ocIPdOfD4Lj0yZ3dszQ
gmtyqFpvNieGEYUmyM2kg3BlfF+1Jmoj2rlzp5bm4MdVVk4B14chIZmme/CX
pUWcHWmRW06QM2OcCe5BG4Kh+p7WlFkFN4mvOAUSa7YNd+jeENreQS5YwdYs
mHcxCByfmi/iU8QMfsKUIlcbB3rp+wyvZMfBuMJ/QKnBZOpTEhOwkr2y4izu
8QapgYLYUrPOiDDx20S63YVhh7SspO+Bdfqdu1RDEkqa72c+fu0ksyb/sWhV
PmNLzDW/tX0DzCXxyZ8b8cE672tmICEhx9p45PpXkM/XxmZ/L/mfCiC5SPHn
D+WFvQGLP5SebeGHz1yzCPPh+eVl9/Uz3ppXmH3iPvwdN2HQJ9G1Cst+R/3J
u07VZQt7oReVcM6VOwbjR2H2Ywvf/d7BfyLKHar1orHLzcp4o/Ej4xpHNB2d
vQ2+di1N/UeHTmvBh4PCoJff/6IDnXIb6/sNJD2vcYRn1lcO41xI7Zobx37Q
HkcL3cKBaF3Yr222SFa1DMN1unap/Mi68u5BZ3Lwe26wM4xpWCT1DCYlzpQJ
Qc11uM+cedOP97tyfNd4af94P5k3HcaUgd0CXbn0TUj90zwCBbCRjrcdjDDF
1zQx4pNB2U8IRlphd/dbwDjTRPvaPKtf65WyF8164mz936/HpDAAIGp0H56d
bHEcRtHh0l48rd1mJetQglSm/pcDiTZhZGtpq/cBhh1bXXIPp0iS99/dEetN
AJUtUas/BtksfN+D/7RTiclKMy+LXC91RC2+fcAPJ8qKsPKSyhu4YX8YtTcF
+ARXhF4EUyvrWgxk/u4B6TZMpYGSiJdST/ToRafDt81JZNyTBidddGVTyA0t
/miPuyiEcUS6hnCsnFdBdv6ZGvVGLR54Y5r5J3xTG3+HtJ7o2VM2Vh04PTan
OhG6GbPmUg6KVaEGTF4OdPFK2n0NJJ1LQo+5EHzkb0nWlFd2rPVurnAovYKh
b8Mis2GILV+xLteL+Nt01Zt1nRF0m9Zp04g3HLaRPYS5aWOQdEnXZwGyYZej
QvsLdq+Tfs5kYJj+ytyswBS5+UZlFPJyRXWol6IxG+yfy/CM7LztZnphK1KQ
4m87xe+3sBnUovtYzYcP791L71zWPN2QWPePyUOaThZ0sTzp/3XjitpMpwrX
z+AAbxABIfTwIRwIG7ZoacQO5zXdOXLEkaq73qL2BvTC92eHg/iHs0Na+Oe4
vrTBpkfl6nMchFvMmQRM6xZD24Oynuk2R6u6+GI4WaJBMVf3fO96A8EiD9dN
Sf2owCgEgPgujB+ktVHwfbTzA36PsTy+v0BbI2gFAiIciX/yE16a5UrV3A1Z
7FlUOtHOXik8XC7MtYMhJqj97tj3scQdd4n13ln68GF6OLnK65Lw/zzJFlL9
pRthrwY58jfryTV7XLDP/gUZ2+8u38AKuoVefaSGDnXKsdlFIpic69PlhmhM
HpdWZA0B/Hv+HeF9Tfcx0V007bHpnDF9Cjy2+gfh4etGuMWRmlvkaSycJ0Yh
8P24BVkiC7S10UICnDh75HP59uR9l1JNr+1h+KXWJdFxyyR/i3dECY8NV1yn
HJEtSzVbRSLCYLVtJES0Ekp3sShFGC7cbNxyv4i9Z1pqiYkzuEiuZywmqxid
Bnx+FqYXgwsjWLblnH16nrnK25ExUbG/3Ehf4YCDlte3yDnq3vD9qP9qiX/g
p3vr8rarpH/9T/fGZZhDyaD351PX+ah/jtuh+tRV/Lo56Fhv/3kUruKfsg4y
zYRv3PxT5nikF6XL0/+MOXBUVITJ+fNfNscji374vn8/HvlHfsUcN8Gv7e91
DvgcmEccf0fzftociH/Hzm++C3bDzvHIruwT57jprOMu2r0Jn3lkIdB3bsH/
Ix6hj3bDdeCd8MBQHdDwL+XN+n0Adxd3IcxuP36Q/RgGz3w3DP/mJ2IR5fx9
HzZ7aLe9DpEJAidCqp/cvY67eUl3P7p72P7+9p/70NWn/r1tjhuRpTe/ao42
HW6ZAwV13H8Gu7jqjBHHb8j10Luo+/CSYIyedTLdrTCV9tY5ws/6xghm7HmD
tPtfPQd9IkP0z3G//bh9jtv3466fXyM//jv0kk//6c7xz9AT0WfZtb/VeXkU
JvZlPnwfuYTF+nbb36VOq78cdO/v+Xq802R1m9PABztd8uHp8eX5ydFFkC8k
mWXeFTNAX4y2P8I2mGBeI4zSIj26w08Rzqj1XXldZwuJ4EXW9zCI8wKV3soa
iW6lZNL2uHeu8wnW5a+l+5VpE6hNZjR/ZpKsJHlGmjYQgAY2tJgkdRITrEq/
dDI8ePGEk0Y7t9vUDb66QBwKdthwzbGbxGfHIOjRLXmc6ke8ztSU5fg7PvfO
PBdxgQO3Pa+47eAo3tGLn2UJDiV8Y4+v6ouCjD+Xm6mpn+INMzAztXTh2O2a
Zgfh0evaVQff8Teg0QzPLlBLu8ehu2uW+Dv9pf/8P+rjDgf+178EfCQc4ebo
YnjyPN4nZeO2QQKOdhP+fnN0wqPcZ5AO2xIZee/l3CgAR8OL00P9EJGu2Q8X
mMH55L8JEjwkwxdlhc1zQaQ/kZE/CbG9kMQesU/cIBa2bbvT+qu7HKBOys3D
B2EQDW/3QtJDer2Q+Fpaq0WF73UHsZ+b5fDreIheww73kd6j9tnpDnI3JDdE
QWhKhSvyK/tu2K/D9QLj0NsiiX1/p+jt4+DPXzxytlGFeYFc7nirbWecGz+O
/TkI/+QXWhXK+rHf4XuOs2Vd/ijcOc528qMjxafh6d3jbIfs5t7r6qL/jhOx
ZZw2OTz1IAUM6+kd49xCDp1Ffzp+DsJJHn3qOPqiXIwre/6ESX+7jNv604Zn
q6jd+nPAEfhAqv8qTZbUoCOjJJ3SVU/RziUFAslD/actas+fd12luY9AtfJR
jGJG2SZe3XShKtimLTqqeQTTM/iEtHZY3bDqQHHf4BtGiPo3nJ+p/U2kTLn9
Bhu0ple4ticycPPbZxedUe//NmIlODX4NrlfMa6w3d3cedlPbQN9PEAd1EKg
Fi6Fj5qBWEt6B5hK74jAFmi7BBkequ5TBLVt2ER3GDaHNppJBOSvQ+LEdVBb
DWoadzeq3mdgb9jySbQRBSISyQG1QOy4HR/+xkV64FfXSEJ8VPCRj2vsUv/p
jbQhLrjaZagJsNqvUgZW9R8fbrBTw5ipPnPXxTIpgp6+5gUiKjiy7DOTTtsB
rv5jwSduy5Fx176w+SZ2hLsnCvulwVHcq6t07/039Sgv9wQhe6v3s71kldcc
Edu72k+K1TzZ32s2q6wezcqo1YzWAFBldKECpmftaOuhGez8ekyth7zRbX7d
PehYs2g9ac1nIreru6wef33XlLr+jV0gCgaK+JwU0u6Nn6UeH1W+cqXWGLR+
DbwLT1XiUuP4pnHJ/2JH7QXFBDGvLPFBdh9K9TkZlNqgb5+9Dd9b8Y0DUswa
5sH57Lfb3pGUt53zw9Pd1sui1972djs3kV98Ud/2DmUTwBmUjljhq+fs/w3f
99edUIsDvhHG4EqdxjQA3RK77f2KvuXMRZ7eMyLZ/XeEbWFGFyGB9OZOHBBd
BGQhGWiUodehlzogGHjiXHO+Y8yU1N9dDj4lRU/wQZcZeZ/td62DgYrRGZpS
x06bE/lJBHHLcL+aTn4oi3V7r8P3rugJbrmLm7CVbjzJXVJfcfMgv6x5dYZs
CP3cSuE2GHzD/C6tICY+iVTaBOEoxTGGO+hl2WUuPuHhv59HYH+GgrOddAQn
YgDj5mvXXZDySF67ZJL7MQy3xs/r7UTwFsTqcy5NOXXm9Fu+w50LVjptdrRN
3HhjKUMvv6Qj195zj+5P2vn+DZb9f6EalBRdYU3AVHqk3VPYMW9kuJNmHv0D
YphaMWvT9B8xGz+64CYnpmNcWLfebp3u+rOcuOIr1GpmZcUVbv7+1Sh6GD98
eO7rXnz6ib3Go7+96sOHB+hlfHjfQkTJADJVNlRoiXnEtjO5JsoqyeI1j0CZ
6fuB2V25USmZ0L1ojeC+dZVACPbALC7oAUEC6u7ljmStr1/8MLy8uIQlqXe6
NmUnhdaHCudC3VGTuLj1UKOe56yaJqnw19EtO6FgtS9K8Lsgm+Aa3MqRq53u
75JaQxXv1KPd9n15hv7rndOTZ7thcpcAAB+Ahr9kucS90KT2aw40XJHCrM2L
Ssr3NpvOmkCvZEtMCWd7Ls5fwqAEln3N8bIe0623bl1HEOzVyesXgBFnsHpU
uGpeMmIcdk2rAG2gUN+2Pa6rq2sIofeE3HKlgdk1PDqT9XKcVGx+43lxh8cn
kbq+kNsW4HChudj5cgtFYymtLx0xuX/SZodbgWjeeOs0jLN5cpVjSRU3gcZc
XXnS3J3S6U7RD4nglS3okMOZTtHarcMjbWucRpmi3gT3ac087t2tw3QAt70y
tjfaGNhOG8jh45fSye6sAoJBq1caoGu/upX7XKqYKbmbwnKGeQCj+QlzGQtM
zA9r5bf2vKmyrgg42x9RE6iwfsGRGnfwbbWfvz01ffTwYVCRylvn+o3v/OnO
fuN/3jUdxk/Ozk7DDFHXcJwGvpB9/nL0BLf2T3yqmD7cPvPlO/jLxGt2f3Z7
iTfNoD+aeMysLFWSDqTNBqekg02wce2C5KY3UyLSqnuQix1HfuUUxFMG7Sa0
bLmnl3O4yK8+eYmOe4Apz3f3+h6T7bb+AA3V+PqqZFUEqcwCPUKO1+TS+P3J
N1/KWT57MooP8wWzZ1Oqzq4fKlTQqd393dkUnm7opou//+0/sRkwVk00WTpf
EjOZYekVejc4bKzvUyMl03bftfbf2r29nygv1U8aEHhMN7DW3F7CNNXhlpAe
bdoOUpdISBLdRXGzq4O5pGuvBwR9JxrTyMdVUvsL1fNWir3rsCUXhyJ9SlEX
9pEqBbNSNLFaV/U66LHRxx8+HYqgcb5v32Y7d46ojOvl4Vl8iDeQ1Hntbi3M
vdrEGOFTdkXNUKWDQL+CS8otlxeooqOX7wyCfs0OON13+aXJ3ZI7T6icc/fZ
9z/mV2hlqrnGYl1MvD3BfRwWqhcpmYBduEuCVMqmjR7EBjQRMTYX5eZDpHcl
2OHXnmDbB4nsGeBCuK1Sp1UleMn9lEM5WiRA69Jrnjeu9wdo1ERB0pwkcReh
WhOO14aX3NXaI8c0SHZtf7QyBAZOQXuvtcstFj8BJ5xSc74ylkaBtBS5cFHT
R2gebvqnbdfDuyo935S2p0rXhpNi89Ns6XlO0LYkqE5zj7i2H4W5Ry6onldd
iMrYBROuc6ZvX2j2E88e+rAF5PAKSkGQsSJCFOI47Bbv3nNFWgUIiDV1kw0v
WgHN4s3zN+5bOosnh68Pu48FN51Io0B6MnGKcDQcDum+FxzlMNWLkalzANjr
3Kg8m/zrgylIuuyBBJBYn8az+GoNc5yOyCYnc9E3AZJbLQq8YWNpbmc8XmMX
GDjsb5e5br4WCMmXyKfY1Hh9fAr89/Xxz43e6pdze6YETUnJbWLa2qUOvI02
V99/vP/468f7+9/wmT968/z4CIY6ejPDYtArsCCfZ6rTod8Cr0gecrX3G9f1
B9tnDbrjfvvkq2+/GkX/DzrFf7NDygAA

-->

</rfc>
