<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-cui-nmrg-llm-benchmark-01" category="info" consensus="true" submissionType="IRTF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="NetConfBench">A Framework to Evaluate LLM Agents for Network Configuration</title>
    <seriesInfo name="Internet-Draft" value="draft-cui-nmrg-llm-benchmark-01"/>
    <author initials="Y." surname="Cui" fullname="Yong Cui">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <region>Beijing</region>
          <code>100084</code>
          <country>China</country>
        </postal>
        <email>cuiyong@tsinghua.edu.cn</email>
        <uri>http://www.cuiyong.net/</uri>
      </address>
    </author>
    <author initials="C." surname="Liu" fullname="Chang Liu">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <region>Beijing</region>
          <code>100084</code>
          <country>China</country>
        </postal>
        <email>liuchang23@mails.tsinghua.edu.cn</email>
      </address>
    </author>
    <author initials="X." surname="Xie" fullname="Xiaohui Xie">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <region>Beijing</region>
          <code>100084</code>
          <country>China</country>
        </postal>
        <email>xiexiaohui@tsinghua.edu.cn</email>
      </address>
    </author>
    <author initials="C." surname="Du" fullname="Chenguang Du">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <region>Beijing</region>
          <code>100094</code>
          <country>China</country>
        </postal>
        <email>ducg@zgclab.edu.cn</email>
      </address>
    </author>
    <date year="2025" month="December" day="30"/>
    <area>IRTF</area>
    <workgroup>Network Management Research Group</workgroup>
    <keyword>Large Language Model</keyword>
    <keyword>Network Configuration</keyword>
    <keyword>Benchmark</keyword>
    <abstract>
      <?line 161?>

<t>This document specifies an evaluation framework and related definitions for intent-driven network configuration using Large Language Model (LLM)-based agents. The framework combines an emulator-based interactive environment, a suite of representative tasks, and multi-dimensional metrics to assess reasoning quality, command accuracy, and functional correctness.  It aims to enable reproducible, comprehensive, and fair comparisons among LLM-driven network configuration approaches.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://datatracker.ietf.org/doc/draft-cui-nmrg-llm-benchmark/"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cui-nmrg-llm-benchmark/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Network Management Research Group mailing list (<eref target="mailto:nmrg@irtf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/nmrg"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/nmrg/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/nobrowning/draft_llm_conf_benchmark"/>.</t>
    </note>
  </front>
  <middle>
    <?line 165?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Network configuration is fundamental to ensuring network stability, scalability, and conformance with intended design behavior. Effective configuration requires not only a comprehensive understanding of network technologies but also advanced capabilities for interpreting complex topologies, analyzing dependencies, and specifying parameters accurately.  Traditional automation approaches, such as Ansible playbooks <xref target="A2023"/>, NETCONF <xref target="RFC6241"/>/YANG models <xref target="RFC7950"/>, and program-synthesis methods, either demand extensive manual scripting or are limited to narrow problem domains <xref target="Kreutz2014"/>.  In parallel, Large Language Models (LLMs) have demonstrated the ability to interpret natural-language instructions and generate device-specific commands, showing promise for intent-driven automation in networking.  However, existing work remains fragmented and lacks a standardized way to measure whether an LLM can truly operate as an autonomous agent in realistic, multi-step configuration scenarios.</t>
      <t>Despite encouraging results in individual subtasks, most evaluations <xref target="Wang2024NetConfEval"/> rely on static datasets and ad hoc metrics that do not reflect real-world complexity.  As a result:</t>
      <ul spacing="normal">
        <li>
          <t>There is no common benchmark suite covering diverse configuration domains (routing, QoS, security) with clearly defined intents, topologies, and ground truth.</t>
        </li>
        <li>
          <t>Existing tests seldom involve interactive environments that emulate vendor-specific device behavior or provide runtime feedback on command execution.</t>
        </li>
        <li>
          <t>Evaluation metrics are often limited to simple syntactic checks or isolated command validation, failing to capture whether the intended network behavior is actually achieved.</t>
        </li>
      </ul>
      <t>Consequently, it is difficult to compare different LLM approaches or to identify gaps in reasoning, context-sensitivity, and error-correction capabilities <xref target="Long2025"/> <xref target="Liu2024"/> <xref target="Fuad2024"/> <xref target="Lira2024"/>.  To address these shortcomings, this document introduces <strong>NetConfBench</strong>, a holistic framework that provides:</t>
      <ol spacing="normal" type="1">
        <li>
          <t>An emulator-based environment (built on GNS3) that simulates realistic device interactions.</t>
        </li>
        <li>
          <t>A benchmark suite of forty tasks spanning routing, QoS, and security, each defined by an intent, a topology, an initial state, a ground-truth configuration, an annotated reasoning trace, and expert-crafted testcases.</t>
        </li>
        <li>
          <t>Multidimensional metrics, namely the <em>reasoning score</em>, <em>command score</em>, and <em>testcase score</em>, that evaluate an agent's internal reasoning coherence, the semantic correctness of generated commands, and the functional outcomes in the emulated network.</t>
        </li>
      </ol>
      <t>NetConfBench aims to enable reproducible, comprehensive comparisons among single-turn LLMs, ReAct-style multi-turn agents, and knowledge-augmented variants, guiding future research toward truly autonomous, intent-driven network configuration.</t>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>For clarity within this document, the following terms and abbreviations
are defined:</t>
      <ul spacing="normal">
        <li>
          <t>Agent: A software component powered by an LLM that consumes a task intent, interacts with a network environment, and issues configuration commands autonomously.</t>
        </li>
        <li>
          <t>Configuration Command: A device-specific instruction (e.g., a Cisco IOS CLI line or a Juniper Junos set statement) sent by the agent to a network device.</t>
        </li>
        <li>
          <t>Environment: An emulated or real network instance that exposes device status, topology information, and feedback on applied commands.</t>
        </li>
        <li>
          <t>Intent: A high-level specification of desired network behavior or objective, expressed in natural language or a structured format defined in this document.</t>
        </li>
        <li>
          <t>Task: A single evaluation unit defined by (1) a scenario category, (2) an environment topology, (3) initial device configurations, and (4) an intent. The agent is evaluated on its ability to fulfill the intent in the given environment.</t>
        </li>
        <li>
          <t>Testcase: A concrete, executable set of verification steps (e.g., ping tests, traffic-flow validation, policy checks) used to assert whether the agent's final configuration satisfies the intent.</t>
        </li>
        <li>
          <t>MCP (Model Context Protocol): An open standard protocol designed to facilitate communication between LLMs and external data sources or tools, enabling standardized tool discovery, invocation, and result handling.</t>
        </li>
      </ul>
    </section>
    <section anchor="framework-overview">
      <name>Framework Overview</name>
      <artwork><![CDATA[
+------------------+
|    Task Dataset  |                     +-------------------------+
|+----------------+|    +-----------+    |        Evaluator        |
||Network Intents ||(1) |           |(4) |+----------+ +----------+|
||+--------+      |---->| LLM Agent |<--->|Reasoning | |Grnd Truth||
|||Routing |      ||    |           |    ||Trajectory| |Reasoning ||
|||Policy  | +---+||    +-----------+    |+----------+ +----------+|
||+--------+ |QoS|||          |          |     \             /     |
||+--------+ +---+||          |          |      Rouge/Cos. Sim.    |
|||Security|      ||         (3)         |                         |
||+--------+      ||          |          |+----------+ +----------+|
|+----------------+|          |       (5)|| Final    | |Grnd Truth||
|+----------------+|          |        +->| Configs  | |Configs   ||
||Network Topology||    +-----------+  | |+----------+ +----------+|
||+-----+ +-----+ ||(2) |Environment|  | |     \             /     |
|||Nodes| |Links| |---->|           |-+  |    Precision/Recall     |
||+-----+ +-----+ ||    | R2 --- R1 |    |                         |
|+----------------+|    | |(GNS3)|  |(6) | +---------------------+ |
|                  |    | R3 --- R4 |<-->| |     Testcases       | |
|+----------------+|(2) |           |    | +---------------------+ |
||Initial Configs |---->| Emulator- |    |            |            |
|+----------------+|    |  based    |    |        Pass Rate        |
+------------------+    +-----------+    +-------------------------+

Legend:
(1)Task Assignment             (2)Environment Setup
(3)Interactive Task Execution  (4)Reasoning Trajectory Export
(5)Final Configuration Export  (6)Testcase Execution

Figure 1: The NetConfBench Framework
]]></artwork>
      <t>The proposed framework is shown in Figure 1. The flow begins with a <strong>Task Dataset</strong> defining network intents and topologies. The <strong>LLM Agent</strong> perceives the environment, reasons about required actions, and applies configuration commands. The <strong>Environment</strong> simulates or controls real devices, providing feedback for each action. Finally, the <strong>Evaluator</strong> compares the agent's outputs against ground-truth configurations and reasoning, computing scores for accuracy and completion.</t>
      <t>The framework supports multiple communication protocols for agent-environment interaction, including direct API calls and standardized protocols such as MCP. When using MCP, network operations are encapsulated as tools that can be discovered and invoked by the LLM agent through the MCP client-server architecture.</t>
      <section anchor="components">
        <name>Components</name>
        <t>NetConfBench consists of four key components:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Task Dataset</strong><br/>
A repository of forty configuration tasks, each defined as a JSON object with:
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Intent</strong>: One or more natural language instructions.</t>
              </li>
              <li>
                <t><strong>Topology</strong>: A list of node names and link definitions.</t>
              </li>
              <li>
                <t><strong>Initial Configuration</strong>: The initial configuration state of all nodes.</t>
              </li>
              <li>
                <t><strong>Ground Truth Configuration</strong>: Expert-validated CLI commands that achieve the intent.</t>
              </li>
              <li>
                <t><strong>Ground Truth Reasoning</strong>: A textual record of the agent's step-by-step reasoning that maps high-level intent to low-level configuration actions.</t>
              </li>
              <li>
                <t><strong>Testcases</strong>: A set of verification procedures (e.g., <em>show</em>, <em>ping</em>, <em>ACL</em> checks) that confirm functional intent satisfaction.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Emulator Environment</strong><br/>
Built on GNS3, this component launches official vendor images for routers and switches, replicating realistic CLI behavior.  Key interfaces include:
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Agent-Network Interface (ANI)</strong>: 
 Based on the key stages commonly involved in intent-driven network configuration, the framework provides an Agent-Network Interface to facilitate structured interactions between the LLM agent and the emulated network environment. This interface supports four core actions: <tt>get-topology</tt>, <tt>get-running-cfg</tt>, <tt>update-cfg</tt>, and <tt>execute_validation</tt>.
                </t>
                <ul spacing="normal">
                  <li>
                    <t><tt>get-topology</tt>: returns the network topology (nodes and links) in a format interpretable by the LLM.</t>
                  </li>
                  <li>
                    <t><tt>get-running-cfg</tt>: enables the agent to obtain the active configurations of specified devices, providing essential context for planning subsequent updates.</t>
                  </li>
                  <li>
                    <t><tt>update-cfg</tt>: allows the agent to apply new configuration commands and provides detailed feedback on their execution, including whether each command was accepted or resulted in any errors.</t>
                  </li>
                  <li>
                    <t><tt>execute_validation</tt>: accepts a device name and a command string as parameters and returns the resulting output.</t>
                  </li>
                </ul>
              </li>
              <li>
                <t><strong>Task Evaluation Interface</strong>: To enable reliable and objective assessment of the LLM agent's configuration behavior, the environment provides a Task Evaluation Interface that allows the evaluation module to access relevant execution results. Specifically, this interface supports:
                </t>
                <ul spacing="normal">
                  <li>
                    <t><strong>Exporting the final configurations of all devices</strong>: This allows for direct comparison with ground truth configurations to evaluate the correctness and completeness of the agent's output.</t>
                  </li>
                  <li>
                    <t><strong>Executing a set of predefined testcases</strong>: These testcases are designed to verify whether the resulting network behavior accurately reflects the intended configuration objectives, as defined by the network intent.</t>
                  </li>
                </ul>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>LLM Agent</strong><br/>
A modular component that can be implemented with any LLM (open-source or closed-source).  It interacts with the emulator via the <strong>Agent-Network Interface</strong> (ANI), issuing queries such as <tt>get-topology</tt>, <tt>get-running-cfg</tt>, <tt>update-cfg</tt>, and <tt>execute_validation</tt>.  Agents may use:
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Single-Turn Generation</strong>: The entire reasoning and command generation in one pass.</t>
              </li>
              <li>
                <t><strong>ReAct-Style Multi-Turn Interaction</strong>: Interleaved reasoning and actions, with runtime feedback guiding subsequent steps.</t>
              </li>
              <li>
                <t><strong>External Knowledge Retrieval</strong>: (Optional) Queries to a command manual to resolve vendor-specific syntax.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Evaluator</strong><br/>
Computes three core metrics for each task:  </t>
            <ul spacing="normal">
              <li>
                <t><strong>Reasoning Score (<tt>S_reasoning</tt>)</strong>      </t>
                <t>
The reasoning score evaluates whether the agent can coherently map network intents to concrete configuration actions through semantically aligned reasoning. This score compares the agent's reasoning process with a predefined ground truth reasoning process, focusing on logical consistency and semantic similarity.      </t>
                <t>
For one-shot prediction, prompts are designed to elicit the reasoning process prior to command generation, enabling direct comparison. For multi-turn interaction, an auxiliary LLM summarizes the interleaved steps into a unified reasoning process, which is then compared against the ground truth.
  The reasoning score is computed using cosine similarity:      </t>
                <t><tt>
 S_reasoning = (r_agent . r_gt) / (||r_agent|| * ||r_gt||)
</tt>      </t>
                <t>
where r_agent is the embedding of the agent's reasoning process, r_gt is the embedding of the ground truth reasoning process, and the numerator denotes their dot product.</t>
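                <t>
As a non-normative illustration, the cosine similarity above can be computed as follows; this sketch assumes the two reasoning embeddings are already available as equal-length numeric vectors (the choice of embedding model is out of scope for this document):</t>
                <sourcecode type="python"><![CDATA[
import math

def reasoning_score(r_agent, r_gt):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * g for a, g in zip(r_agent, r_gt))
    norm_agent = math.sqrt(sum(a * a for a in r_agent))
    norm_gt = math.sqrt(sum(g * g for g in r_gt))
    return dot / (norm_agent * norm_gt)
]]></sourcecode>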
              </li>
              <li>
                <t><strong>Command Score (<tt>S_command</tt>)</strong>      </t>
                <t>
This evaluation comprehensively assesses the effectiveness of configuration commands generated by the agent. While syntactic correctness is a prerequisite, it does not ensure that configuration commands are correctly applied to the device, particularly when commands must be issued within specific configuration contexts.      </t>
                <t>
After the agent completes its configuration task, the final configurations of all devices are exported and compared to their initial configurations to extract the set of commands that were actually applied. Hierarchical parsing using the Python library <tt>ciscoconfparse</tt> ensures structural completeness during comparison. Since certain configuration parameters (e.g., ACL numbers, route policy names) are manually defined and do not have fixed values, wildcard-based fuzzy matching is introduced to ignore non-essential differences and focus on semantic equivalence.      </t>
                <t>
Based on the extracted command sets, standard precision and recall are computed:</t>
                <ul spacing="normal">
                  <li>
                    <t>Precision measures the proportion of correctly generated commands among all generated commands.</t>
                  </li>
                  <li>
                    <t>Recall measures the proportion of correctly generated commands relative to the ground truth command set.</t>
                  </li>
                </ul>
                <t>
The command score is reported as the harmonic mean of precision and recall:      </t>
                <t><tt>
S_command = (2 * Precision * Recall) / (Precision + Recall)
</tt></t>
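                <t>
The following non-normative sketch computes the command score from two command sets.  The <tt>*</tt> wildcard handling is a simplified stand-in for the fuzzy matching described above; the exact matching rules used by NetConfBench may differ:</t>
                <sourcecode type="python"><![CDATA[
import re

def command_score(generated, ground_truth):
    """Harmonic mean of precision and recall over command sets.

    Ground-truth entries may contain '*' wildcards for
    operator-chosen values (e.g., ACL numbers, policy names).
    """
    def matches(cmd, pattern):
        regex = re.escape(pattern).replace(r'\*', r'\S+')
        return re.fullmatch(regex, cmd) is not None

    # Precision: generated commands that match some ground-truth entry.
    correct = [c for c in generated
               if any(matches(c, p) for p in ground_truth)]
    precision = len(correct) / len(generated) if generated else 0.0
    # Recall: ground-truth entries covered by some generated command.
    covered = [p for p in ground_truth
               if any(matches(c, p) for c in generated)]
    recall = len(covered) / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
]]></sourcecode>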
              </li>
              <li>
                <t><strong>Testcase Score (<tt>S_testcase</tt>)</strong>      </t>
                <t>
While command-level evaluation based on configuration differences can effectively measure the semantic correctness of generated commands, it does not fully reflect whether the configuration actually achieves the intended network behaviors. To address this limitation, a testcase-driven evaluation strategy is introduced that directly verifies the functional correctness of the agent's configuration in the target environment.      </t>
                <t>
A set of validation testcases is defined for each task, where each testcase encodes a network intent in the form of executable verification commands. To support complex tasks involving multiple sub-goals, the overall intent is decomposed into sub-intents based on node-specific configuration objectives. Each sub-intent is then formulated as an individual testcase to enable fine-grained evaluation and enhance interpretability.      </t>
                <t>
Examples of testcases include:</t>
                <ul spacing="normal">
                  <li>
                    <t><strong>Routing intent</strong>: verifying the next-hop selection on intermediate routers to confirm end-to-end path correctness.</t>
                  </li>
                  <li>
                    <t><strong>ACL intent</strong>: simulating traffic flows and validating whether they are allowed or denied as expected.</t>
                  </li>
                  <li>
                    <t><strong>QoS intent</strong>: inspecting interface statistics to check whether QoS policies are properly enforced.</t>
                  </li>
                </ul>
                <t>
The testcase score is defined as the proportion of passed testcases among all defined testcases:      </t>
                <t><tt>
S_testcase = |Passed Testcases| / |Total Testcases|
</tt>      </t>
                <t>
This score reflects the agent's ability to produce configurations that meet functional requirements and demonstrates practical applicability in real-world deployment scenarios.</t>
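                <t>
A minimal non-normative sketch of this pass-rate computation, assuming testcase results are reported as "pass"/"fail" status strings:</t>
                <sourcecode type="python"><![CDATA[
def testcase_score(results):
    """Proportion of passed testcases among all defined testcases."""
    if not results:
        return 0.0
    return sum(1 for r in results if r == "pass") / len(results)
]]></sourcecode>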
              </li>
            </ul>
          </li>
        </ol>
      </section>
      <section anchor="workflow">
        <name>Workflow</name>
        <t>The evaluation workflow for each task proceeds through six stages:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Task Assignment</strong><br/>
NetConfBench selects a task from the JSON dataset and provides only the high-level intent(s) to the LLM agent.</t>
          </li>
          <li>
            <t><strong>Environment Setup</strong><br/>
The framework instantiates a GNS3 topology based on the task's <tt>topology</tt> and applies the <tt>startup-config</tt> to each device.  Once the emulated network reaches a stable state, control transfers to the agent.</t>
          </li>
          <li>
            <t><strong>Interactive Execution</strong><br/>
The LLM agent receives the partial prompt containing:
            </t>
            <ul spacing="normal">
              <li>
                <t>The API specification for <tt>get-topology</tt>, <tt>get-running-cfg</tt>, <tt>update-cfg</tt>, and <tt>execute_validation</tt>.</t>
              </li>
              <li>
                <t>The natural language intent.</t>
              </li>
              <li>
                <t>(Optionally) Device model/version hints.</t>
              </li>
            </ul>
            <t>The agent then issues a sequence of API calls.  A single-turn agent outputs its reasoning followed by a batch of CLI commands; a multi-turn agent alternates reasoning traces and API calls.  When MCP is used, these network operations are invoked as MCP tools.</t>
          </li>
          <li>
            <t><strong>Reasoning Trajectory Export</strong><br/>
After execution completes (the agent signals task completion, or a predefined command budget is exhausted), NetConfBench captures the entire reasoning log:
            </t>
            <ul spacing="normal">
              <li>
                <t>For single-turn: the reasoning paragraph embedded in the LLM's output.</t>
              </li>
              <li>
                <t>For ReAct: an auxiliary summarization LLM condenses the interleaved reasoning and actions into a single coherent trace.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Final Configuration Export</strong><br/>
The framework uses the Task Evaluation Interface to extract the final running configs from each device.</t>
          </li>
          <li>
            <t><strong>Testcase Execution and Scoring</strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Command Score:</strong> Hierarchical diff against ground truth commands.</t>
              </li>
              <li>
                <t><strong>Testcase Score:</strong> Execute each testcase in sequence; record pass/fail.</t>
              </li>
              <li>
                <t><strong>Reasoning Score:</strong> Compute embedding similarity between the agent's reasoning trace and ground truth reasoning.</t>
              </li>
            </ul>
          </li>
        </ol>
        <t>The final per-task score is typically reported as a tuple <tt>(S_reasoning, S_command, S_testcase)</tt>.  Aggregate results across the forty tasks enable comparisons among LLMs and interaction strategies.</t>
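        <t>As a non-normative illustration, per-task tuples can be aggregated as follows; simple arithmetic means are an assumption of this sketch, not a requirement of the framework:</t>
        <sourcecode type="python"><![CDATA[
def aggregate(task_scores):
    """Average each metric over all tasks.

    `task_scores` is a list of (S_reasoning, S_command, S_testcase)
    tuples, one per task.
    """
    n = len(task_scores)
    return tuple(sum(s[i] for s in task_scores) / n for i in range(3))
]]></sourcecode>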
      </section>
    </section>
    <section anchor="data-model">
      <name>Data Model</name>
      <t>This section specifies the JSON schemas and interface conventions used to represent tasks and to enable structured interaction between the LLM agent and the emulated environment.</t>
      <section anchor="task-definition-schema">
        <name>Task Definition Schema</name>
        <t>Each configuration task is defined as a JSON object with the following structure:</t>
        <sourcecode type="json"><![CDATA[
{
  "task_name": "Static Routing",
  "intents": [
    "NewYork: create a static route pointing to the Loopback0 on
    Washington, traffic should pass the 192.168.1.0 network.",
    "NewYork: create a backup static route pointing to the Loopback0
    on Washington, administrative distance should be 100."
    ...
  ],
  "topology": {
    "nodes": ["NewYork", "Washington"],
    "links": [
      "NewYork S0/0 <-> Washington S0/0 ", 
      "NewYork S0/1 <-> Washington S0/1"
    ]
  },
  "startup_configs": {
    "NewYork": "!\r\nversion 12.4\r\nservice timestamps
    debug datetime msec\r\n...", 
    "Washington": "!\r\nversion 12.4\r\nservice timestamps
    debug datetime msec\r\n...",
  },
  "ground_truth_configs": {
    "NewYork": [
      "ip route 2.2.2.0 255.255.255.252 192.168.1.2",
      "ip route 2.2.2.0 255.255.255.252 192.168.2.2 100"
    ],
    ...
  },
  "ground_truth_reasoning": "NewYork to Washington Loopback 
  (primary path): add a static route for Washington's 
  Loopback0 network (2.2.2.0/30) pointing to the 
  next-hop 192.168.1.2...",
  "testcases": [
    {
      "name": "Static Route from NewYork to Washington",
      "expected_result": {
        "protocol": "static", 
        "next_hop": "192.168.1.2"
      }
    },
    ...
  ]
}
]]></sourcecode>
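        <t>The following non-normative Python sketch loads a task definition conforming to this schema and checks that the required top-level fields are present (the field names follow the schema above; the file path is caller-supplied):</t>
        <sourcecode type="python"><![CDATA[
import json

REQUIRED_FIELDS = ("task_name", "intents", "topology",
                   "startup_configs", "ground_truth_configs",
                   "ground_truth_reasoning", "testcases")

def load_task(path):
    """Load a task definition and verify its required fields."""
    with open(path) as f:
        task = json.load(f)
    for field in REQUIRED_FIELDS:
        if field not in task:
            raise ValueError(f"task is missing required field: {field}")
    return task
]]></sourcecode>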
      </section>
      <section anchor="agent-network-interface-ani">
        <name>Agent-Network Interface (ANI)</name>
        <t>The Agent-Network Interface defines the minimal API primitives necessary for intent-driven configuration.  Each primitive uses JSON-RPC style request/response with the following methods:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong><tt>get-topology</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "get-topology",
  "params": {
    "devices": ["R1", "R2", ...]
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "topology": {
    "nodes": [...],
    "links": [...]
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Returns the full topology for the specified subset of devices.  If <tt>"devices"</tt> is empty or omitted, returns the entire topology.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong><tt>get-running-cfg</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "get-running-cfg",
  "params": {
    "device": "R1"
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "running_config": "!\nversion 12.4\ninterface Gig0/0\n
   ip address 192.168.1.1 255.255.255.0\n ..."
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Retrieves the active (running) configuration of the specified device.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong><tt>update-cfg</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "update-cfg",
  "params": {
    "device": "R1",
    "commands": [
      "configure terminal",
      "ip route 2.2.2.0 255.255.255.252 192.168.1.2"
    ]
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "results": [
    { "command": "configure terminal", "status": "success" },
    { 
    "command": "ip route 2.2.2.0 255.255.255.252 192.168.1.2", 
    "status": "success" }
  ]
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Applies a sequence of CLI commands to the specified device.  Returns per-command status and any error messages.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong><tt>execute_validation</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "execute_validation",
  "params": {
    "device": "R1",
    "command": "show ip route 2.2.2.0 255.255.255.252"
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "output": "S 2.2.2.0/30 [1/0] via 192.168.1.2"
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Executes a read-only command on the specified device and returns its output.  Must not alter device state.</t>
              </li>
            </ul>
          </li>
        </ol>
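        <t>These primitives can be wrapped in a thin client.  In the following non-normative sketch, <tt>transport</tt> is a placeholder for whatever channel carries the JSON messages; the transport binding itself is not specified by this document:</t>
        <sourcecode type="python"><![CDATA[
import json

class ANIClient:
    """Minimal sketch of an ANI client.

    `transport` is any callable that takes a JSON request string
    and returns a JSON response string (e.g., an HTTP or MCP
    binding supplied by the framework integrator).
    """

    def __init__(self, transport):
        self.transport = transport

    def call(self, method, **params):
        req = {"method": method}
        if params:
            req["params"] = params
        return json.loads(self.transport(json.dumps(req)))

    def get_topology(self, devices=None):
        return self.call("get-topology", devices=devices or [])

    def get_running_cfg(self, device):
        return self.call("get-running-cfg", device=device)

    def update_cfg(self, device, commands):
        return self.call("update-cfg", device=device, commands=commands)

    def execute_validation(self, device, command):
        return self.call("execute_validation", device=device,
                         command=command)
]]></sourcecode>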
      </section>
      <section anchor="task-evaluation-interface">
        <name>Task Evaluation Interface</name>
        <t>After the agent signals completion, the framework uses the Task Evaluation Interface to retrieve results:</t>
        <ul spacing="normal">
          <li>
            <t><strong><tt>export-final-cfg</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "export-final-cfg"
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "configs": {
    "R1": "!\nversion 15.2\n...",
    "R2": "!\nversion 15.2\n..."
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Returns the final running-configuration of each device.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong><tt>run-testcases</tt></strong>
            </t>
            <ul spacing="normal">
              <li>
                <t><strong>Request</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "method": "run-testcases",
  "params": {
    "testcases": [
      {
        "name": "Verify primary static route on R1",
        "device": "R1",
        "commands": ["show ip route 2.2.2.0 255.255.255.252"],
        "expected_output": "S 2.2.2.0/30 [1/0] via 192.168.1.2"
      },
      ...
    ]
  }
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Response</strong>:      </t>
                <sourcecode type="json"><![CDATA[
{
  "results": [
    { 
      "name": "Verify primary static route on R1", 
      "status": "pass" 
    },
    { 
      "name": "Verify backup static route on R1", 
      "status": "fail" 
    }
  ]
}
]]></sourcecode>
              </li>
              <li>
                <t><strong>Description</strong>: Executes each testcase's command sequence on the specified device and matches the actual output against <tt>expected_output</tt>, interpreted as a regular expression. Returns pass or fail for each testcase.</t>
              </li>
            </ul>
          </li>
        </ul>
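        <t>The pass/fail comparison described above can be sketched as follows. This is a simplified illustration rather than the framework's implementation: <tt>execute_cmd</tt> is a hypothetical callback standing in for the emulated device, and matching uses Python's <tt>re.search</tt>:</t>
        <sourcecode type="python"><![CDATA[
import re

def run_testcases(testcases, execute_cmd):
    """Run each testcase's commands via execute_cmd and match the
    combined output against its expected_output regular expression."""
    results = []
    for tc in testcases:
        output = "\n".join(execute_cmd(tc["device"], c)
                           for c in tc["commands"])
        ok = re.search(tc["expected_output"], output)
        results.append({"name": tc.get("name", ""),
                        "status": "pass" if ok else "fail"})
    return results

# Hypothetical stub standing in for the emulated device.
def demo_execute_cmd(device, command):
    return "S 2.2.2.0/30 [1/0] via 192.168.1.2"

results = run_testcases(
    [{"name": "Verify primary static route on R1",
      "device": "R1",
      "commands": ["show ip route 2.2.2.0 255.255.255.252"],
      "expected_output": r"S 2\.2\.2\.0/30 \[1/0\] via 192\.168\.1\.2"}],
    demo_execute_cmd)
]]></sourcecode>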
      </section>
    </section>
    <section anchor="mcp-based-implementation">
      <name>MCP-Based Implementation</name>
      <t>The Model Context Protocol (MCP) provides a standardized approach for implementing the Agent-Network Interface (ANI). This section describes how MCP can be applied to NetConfBench for LLM-driven network configuration evaluation.</t>
      <section anchor="benefits-of-mcp-integration">
        <name>Benefits of MCP Integration</name>
        <t>Integrating MCP into NetConfBench provides several advantages:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Standardization</strong>: MCP provides a uniform interface for tool invocation across different LLM implementations and network environments.</t>
          </li>
          <li>
            <t><strong>Vendor Abstraction</strong>: The MCP server can handle vendor-specific command translation, allowing the LLM to work with high-level operations without needing detailed knowledge of each vendor's CLI syntax.</t>
          </li>
          <li>
            <t><strong>Tool Extensibility</strong>: New network operations can be easily added as MCP tools without modifying the LLM agent implementation.</t>
          </li>
          <li>
            <t><strong>Traceability</strong>: The structured MCP communication protocol enables detailed logging of all tool invocations and results, facilitating debugging and analysis.</t>
          </li>
          <li>
            <t><strong>Ecosystem Integration</strong>: MCP-enabled network tools can potentially be reused across different AI applications beyond network configuration evaluation.</t>
          </li>
        </ol>
      </section>
      <section anchor="mcp-tool-definitions-for-ani-operations">
        <name>MCP Tool Definitions for ANI Operations</name>
        <t>This subsection provides the complete MCP tool definitions for the four core Agent-Network Interface operations: <tt>get_topology</tt>, <tt>get_running_config</tt>, <tt>update_config</tt>, and <tt>execute_validation</tt>. These definitions use JSON Schema to specify tool parameters and enable LLMs to understand and invoke network operations through the MCP protocol.</t>
        <section anchor="gettopology">
          <name>1. get_topology</name>
          <t>This tool provides network topology information in a format interpretable by the LLM, returning topology for specified devices or the entire network if no devices are specified.</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "get_topology",
  "description": "Retrieve network topology information including
   nodes and their interconnections. Returns topology for 
   specified devices or entire network if no devices specified.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "devices": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "List of device names. Leave empty for 
        entire network topology."
      }
    }
  }
}
]]></sourcecode>
          <t><strong>Usage Example</strong>:</t>
          <sourcecode type="json"><![CDATA[
{
  "method": "tools/call",
  "params": {
    "name": "get_topology",
    "arguments": {
      "devices": ["R1", "R2", "R3"]
    }
  }
}
]]></sourcecode>
        </section>
        <section anchor="getrunningconfig">
          <name>2. get_running_config</name>
          <t>This tool enables the agent to obtain the active configurations of specified devices, providing essential context for planning subsequent updates.</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "get_running_config",
  "description": "Retrieve the active running configuration 
  from a network device. Returns the complete configuration 
  as a text string.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "device": {
        "type": "string",
        "description": "Device name or identifier to retrieve
         configuration from"
      }
    },
    "required": ["device"]
  }
}
]]></sourcecode>
          <t><strong>Usage Example</strong>:</t>
          <sourcecode type="json"><![CDATA[
{
  "method": "tools/call",
  "params": {
    "name": "get_running_config",
    "arguments": {
      "device": "R1"
    }
  }
}
]]></sourcecode>
        </section>
        <section anchor="updateconfig">
          <name>3. update_config</name>
          <t>This tool allows the agent to apply new configuration commands and provides detailed feedback on their execution, including whether each command was accepted or resulted in any errors.</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "update_config",
  "description": "Apply configuration commands to a network
   device. Executes a sequence of CLI commands and returns
    detailed status for each command.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "device": {
        "type": "string",
        "description": "Device name or identifier to apply
         configuration to"
      },
      "commands": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "Ordered list of CLI commands to
         execute on the device"
      }
    },
    "required": ["device", "commands"]
  }
}
]]></sourcecode>
          <t><strong>Usage Example</strong>:</t>
          <sourcecode type="json"><![CDATA[
{
  "method": "tools/call",
  "params": {
    "name": "update_config",
    "arguments": {
      "device": "R1",
      "commands": [
        "configure terminal",
        "interface GigabitEthernet0/0",
        "ip address 192.168.1.1 255.255.255.0",
        "no shutdown"
      ]
    }
  }
}
]]></sourcecode>
        </section>
        <section anchor="executevalidation">
          <name>4. execute_validation</name>
          <t>This tool accepts a device name and a read-only command string as parameters and returns the resulting output. It must not alter the device state and is intended for validation and status inspection.</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "execute_validation",
  "description": "Execute a read-only validation command
   on a network device to verify configuration or check
    device status. This command must not alter the 
    device state.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "device": {
        "type": "string",
        "description": "Device name or identifier to 
        execute command on"
      },
      "command": {
        "type": "string",
        "description": "Read-only command to execute
         (e.g., show commands)"
      }
    },
    "required": ["device", "command"]
  }
}
]]></sourcecode>
          <t><strong>Usage Example</strong>:</t>
          <sourcecode type="json"><![CDATA[
{
  "method": "tools/call",
  "params": {
    "name": "execute_validation",
    "arguments": {
      "device": "R1",
      "command": "show ip route 2.2.2.0 255.255.255.252"
    }
  }
}
]]></sourcecode>
          <t>These four tools form the core MCP interface for NetConfBench. The MCP server must register these tools and handle the translation between MCP tool invocations and actual device communication protocols (CLI, NETCONF, RESTCONF, etc.). The JSON Schema definitions in <tt>inputSchema</tt> enable LLMs to automatically understand parameter requirements and generate valid tool calls.</t>
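          <t>Independently of any particular MCP SDK, the server-side translation can be sketched as a dispatch table from tool names to handler functions. The <tt>get_topology</tt> handler below is a hypothetical stand-in; a real server would translate each invocation into actual device communication:</t>
          <sourcecode type="python"><![CDATA[
import json

# Hypothetical handler standing in for real device communication;
# a production server would translate calls into CLI/NETCONF/RESTCONF.
def get_topology(devices=None):
    return {"nodes": devices or [], "links": []}

TOOL_HANDLERS = {"get_topology": get_topology}

def handle_tool_call(request_text):
    """Dispatch an MCP tools/call request to its registered handler
    and wrap the handler's result as MCP text content."""
    req = json.loads(request_text)
    if req.get("method") != "tools/call":
        raise ValueError("not a tools/call request")
    params = req["params"]
    result = TOOL_HANDLERS[params["name"]](**params.get("arguments", {}))
    return json.dumps(
        {"content": [{"type": "text", "text": json.dumps(result)}]})

response = handle_tool_call(json.dumps({
    "method": "tools/call",
    "params": {"name": "get_topology",
               "arguments": {"devices": ["R1", "R2", "R3"]}}}))
]]></sourcecode>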
        </section>
      </section>
      <section anchor="additional-mcp-tools-for-advanced-scenarios">
        <name>Additional MCP Tools for Advanced Scenarios</name>
        <t>Beyond the four core ANI operations, additional MCP tools can be defined for more complex scenarios. The following examples demonstrate extended tool definitions:</t>
        <section anchor="batchconfiguredevices">
          <name>batch_configure_devices</name>
          <t>For batch operations across multiple devices:</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "batch_configure_devices",
  "description": "Apply configuration commands to
   multiple network devices simultaneously",
  "inputSchema": {
    "type": "object",
    "properties": {
      "device_ips": {
        "type": "array",
        "items": {"type": "string"},
        "description": "List of device IP addresses"
      },
      "commands": {
        "type": "array",
        "items": {"type": "string"},
        "description": "CLI command sequence to execute"
      },
      "credential_id": {
        "type": "string",
        "description": "Authentication credential
         identifier"
      }
    },
    "required": ["device_ips", "commands"]
  }
}
]]></sourcecode>
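          <t>A handler for this tool might fan out across devices concurrently. The sketch below assumes a hypothetical per-device <tt>apply_commands</tt> function; session handling, credential lookup, and error handling are omitted:</t>
          <sourcecode type="python"><![CDATA[
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-device function; a real implementation would open
# a session to the device and execute the command sequence.
def apply_commands(device_ip, commands):
    return {"device_ip": device_ip, "status": "success"}

def batch_configure_devices(device_ips, commands, credential_id=None):
    """Apply the same command sequence to every device in parallel
    and collect one result entry per device, in input order."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(apply_commands, ip, commands)
                   for ip in device_ips]
        return [f.result() for f in futures]

results = batch_configure_devices(
    ["10.0.0.1", "10.0.0.2"], ["configure terminal", "end"])
]]></sourcecode>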
        </section>
        <section anchor="checkdevicestatus">
          <name>check_device_status</name>
          <t>For comprehensive device health monitoring:</t>
          <sourcecode type="json"><![CDATA[
{
  "name": "check_device_status",
  "description": "Check operational status of network
   devices including CPU, memory, and interface metrics",
  "inputSchema": {
    "type": "object",
    "properties": {
      "device_ip": {
        "type": "string",
        "description": "Device IP address to check"
      },
      "metrics": {
        "type": "array",
        "items": {
          "enum": ["cpu", "memory", "interface"]
        },
        "description": "List of metrics to retrieve"
      }
    },
    "required": ["device_ip", "metrics"]
  }
}
]]></sourcecode>
          <t>These additional tools demonstrate the extensibility of the MCP approach, allowing the framework to support advanced scenarios such as batch operations and comprehensive device monitoring.</t>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>LLM-driven network configuration introduces risks such as unintended or malicious commands, emulator vulnerabilities, and data exposure. To mitigate these risks, NetConfBench should enforce strict input validation (e.g., YANG/XML schema checks), run emulated devices in isolated sandboxes with limited privileges, encrypt and restrict access to task definitions and logs, and use curated prompt templates and fine-tuning to reduce LLM hallucinations. Validation endpoints (e.g., <tt>execute-validation</tt>) must enforce read-only execution to prevent unintended state changes. Where appropriate, human-in-the-loop approval should gate privileged write operations (<tt>update-cfg</tt>/<tt>update_config</tt>) identified as high-impact.</t>
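      <t>One way to enforce the read-only constraint on validation endpoints is an allowlist check applied before any command reaches a device. The patterns below are illustrative assumptions, not a complete policy:</t>
      <sourcecode type="python"><![CDATA[
import re

# Illustrative allowlist of inspection-only command prefixes
# (an assumption; a deployment would curate its own policy).
READ_ONLY_PATTERNS = (r"show\s", r"ping\s", r"traceroute\s")

def is_read_only(command):
    """Return True only if the command matches a known read-only prefix."""
    cmd = command.strip().lower()
    return any(re.match(p, cmd) for p in READ_ONLY_PATTERNS)
]]></sourcecode>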
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC6241">
          <front>
            <title>Network Configuration Protocol (NETCONF)</title>
            <author fullname="R. Enns" initials="R." role="editor" surname="Enns"/>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <author fullname="A. Bierman" initials="A." role="editor" surname="Bierman"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6241"/>
          <seriesInfo name="DOI" value="10.17487/RFC6241"/>
        </reference>
        <reference anchor="RFC7950">
          <front>
            <title>The YANG 1.1 Data Modeling Language</title>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <date month="August" year="2016"/>
            <abstract>
              <t>YANG is a data modeling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols. This document describes the syntax and semantics of version 1.1 of the YANG language. YANG version 1.1 is a maintenance release of the YANG language, addressing ambiguities and defects in the original specification. There are a small number of backward incompatibilities from YANG version 1. This document also specifies the YANG mappings to the Network Configuration Protocol (NETCONF).</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7950"/>
          <seriesInfo name="DOI" value="10.17487/RFC7950"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Kreutz2014">
          <front>
            <title>Software-defined networking: A comprehensive survey</title>
            <author initials="D." surname="Kreutz" fullname="Diego Kreutz">
              <organization/>
            </author>
            <author initials="F. M. V." surname="Ramos" fullname="Fernando M. V. Ramos">
              <organization/>
            </author>
            <author initials="P. E." surname="Verissimo" fullname="Paulo Esteves Verissimo">
              <organization/>
            </author>
            <author initials="C. E." surname="Rothenberg" fullname="Christian Esteve Rothenberg">
              <organization/>
            </author>
            <author initials="S." surname="Azodolmolky" fullname="Siamak Azodolmolky">
              <organization/>
            </author>
            <author initials="S." surname="Uhlig" fullname="Steve Uhlig">
              <organization/>
            </author>
            <date year="2014"/>
          </front>
        </reference>
        <reference anchor="A2023">
          <front>
            <title>Ansible</title>
            <author initials="R." surname="Hat" fullname="Red Hat">
              <organization/>
            </author>
            <date year="2023"/>
          </front>
        </reference>
        <reference anchor="Long2025">
          <front>
            <title>A Survey on Intelligent Network Operations and Performance Optimization Based on Large Language Models</title>
            <author initials="S." surname="Long" fullname="Sifan Long">
              <organization/>
            </author>
            <author initials="J." surname="Tan" fullname="Jingjing Tan">
              <organization/>
            </author>
            <author initials="B." surname="Mao" fullname="Bomin Mao">
              <organization/>
            </author>
            <author initials="F." surname="Tang" fullname="Fengxiao Tang">
              <organization/>
            </author>
            <author initials="Y." surname="Li" fullname="Yangfan Li">
              <organization/>
            </author>
            <author initials="M." surname="Zhao" fullname="Ming Zhao">
              <organization/>
            </author>
            <author initials="N." surname="Kato" fullname="Nei Kato">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
        </reference>
        <reference anchor="Liu2024">
          <front>
            <title>Large language models for networking: Workflow, advances and challenges</title>
            <author initials="C." surname="Liu" fullname="Chang Liu">
              <organization/>
            </author>
            <author initials="X." surname="Xie" fullname="Xiaohui Xie">
              <organization/>
            </author>
            <author initials="X." surname="Zhang" fullname="Xinggong Zhang">
              <organization/>
            </author>
            <author initials="Y." surname="Cui" fullname="Yong Cui">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="Fuad2024">
          <front>
            <title>An intent-based networks framework based on large language models</title>
            <author initials="A." surname="Fuad" fullname="Ahlam Fuad">
              <organization/>
            </author>
            <author initials="A. H." surname="Ahmed" fullname="Azza H. Ahmed">
              <organization/>
            </author>
            <author initials="M. A." surname="Riegler" fullname="Michael A. Riegler">
              <organization/>
            </author>
            <author initials="T." surname="Cicic" fullname="Tarik Cicic">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="Lira2024">
          <front>
            <title>Large language models for zero touch network configuration management</title>
            <author initials="O. G." surname="Lira" fullname="Oscar G. Lira">
              <organization/>
            </author>
            <author initials="O. M." surname="Caicedo" fullname="Oscar M. Caicedo">
              <organization/>
            </author>
            <author initials="N. L. S. da" surname="Fonseca" fullname="Nelson L. S. da Fonseca">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="Wang2024NetConfEval">
          <front>
            <title>NetConfEval: Can LLMs Facilitate Network Configuration?</title>
            <author initials="C." surname="Wang" fullname="Changjie Wang">
              <organization/>
            </author>
            <author initials="M." surname="Scazzariello" fullname="Mariano Scazzariello">
              <organization/>
            </author>
            <author initials="A." surname="Farshin" fullname="Alireza Farshin">
              <organization/>
            </author>
            <author initials="S." surname="Ferlin" fullname="Simone Ferlin">
              <organization/>
            </author>
            <author initials="D." surname="Kostic" fullname="Dejan Kostic">
              <organization/>
            </author>
            <author initials="M." surname="Chiesa" fullname="Marco Chiesa">
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
      </references>
    </references>

<section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>TODO acknowledge.</t>
    </section>
  </back>
  <!-- ##markdown-source:
D36dCylr3dm1JjsbZC80BU0pJhYwCR6EF+Kn4jsWD35+9lR2z9gDXIeIMfpN
LV5S/bvSahjbtHyr5fQq+6a1FdDI5Hqu6bUcabVeNTblxiOR0wmxfo225wX+
D532W86JIrjZWC1aoNDIFCOY6CgvyxXzB0yDXIjwGInNF7pgeohP3cvsttQG
2+UNunhQCm7qb1pJqsAIaT81pirxPc1tCoG0ZCR+9HSDhaQCeznAx5LZh3B+
Y6KQWIzMyBN/j/dv60sCij17cDCa4pt3Ed//ic5sCMochjcRRNad+MItQqau
KnqVmZeEXZ9vO7AfiXp73hxRTpdyzma5Ar+XROPJ8fPjDbGI37S7SOj9hXSn
O2WY3s+K6CE2cpza3DS5qIP3Ez5SR2d/3Jklea13QJOcv3j4Ap53Wezx4L8A
bAx53L2IAAA=

-->

</rfc>
