<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.19 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-chen-bmwg-savnet-sav-benchmarking-02" category="std" consensus="true" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.23.1 -->
  <front>
    <title abbrev="SAVBench">Benchmarking Methodology for Source Address Validation</title>
    <seriesInfo name="Internet-Draft" value="draft-chen-bmwg-savnet-sav-benchmarking-02"/>
    <author initials="L." surname="Chen" fullname="Li Chen">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>lichen@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="D." surname="Li" fullname="Dan Li">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>tolidan@tsinghua.edu.cn</email>
      </address>
    </author>
    <author initials="L." surname="Liu" fullname="Libin Liu">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>liulb@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="L." surname="Qin" fullname="Lancheng Qin">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>qinlc@zgclab.edu.cn</email>
      </address>
    </author>
    <date year="2024" month="September" day="27"/>
    <area>General [REPLACE]</area>
    <workgroup>IETF</workgroup>
    <abstract>
      <?line 65?>

<t>This document defines methodologies for benchmarking the performance of source address validation (SAV) mechanisms. SAV mechanisms generate SAV rules to prevent source address spoofing, and many different designs have been implemented to perform SAV in their corresponding scenarios. This document treats a SAV device as a black box, defining the methodology in a manner that is agnostic to the underlying mechanisms. It provides a method for measuring the performance of both existing and new SAV implementations.</t>
    </abstract>
  </front>
  <middle>
    <?line 69?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Source address validation (SAV) is important for preventing source address spoofing. Operators are advised to deploy different SAV mechanisms <xref target="RFC3704"/> <xref target="RFC8704"/> depending on their network environments. However, existing intra-domain and inter-domain SAV mechanisms suffer from operational overhead and accuracy problems in various scenarios <xref target="intra-domain-ps"/> <xref target="inter-domain-ps"/>. Intra-domain and inter-domain SAVNET architectures <xref target="intra-domain-arch"/> <xref target="inter-domain-arch"/> have been proposed to guide the design of new SAV mechanisms that solve these problems. The benchmarking methodology defined in this document will help operators get a more accurate picture of SAV performance when their deployed devices enable SAV, and will also help vendors test the SAV implementations of their devices.</t>
      <t>This document provides generic methodologies for benchmarking SAV mechanism performance. To achieve the desired functionality, a SAV device may support many SAV mechanisms. This document considers a SAV device to be a black box, regardless of its design and implementation. The tests defined in this document can be used to benchmark a SAV device for SAV accuracy, convergence performance, and control plane and data plane forwarding performance. These tests can be performed on a hardware router, a bare metal server, a virtual machine (VM) instance, or a container instance that runs as a SAV device. This document is intended for those who want to measure a SAV device's performance or compare the performance of various SAV devices.</t>
      <section anchor="goal-and-scope">
        <name>Goal and Scope</name>
        <t>The benchmarking methodology outlined in this draft focuses on two objectives:</t>
        <ul spacing="normal">
          <li>
            <t>Assessing "which SAV mechanisms perform best" over a set of well-defined scenarios.</t>
          </li>
          <li>
            <t>Measuring the contribution of sub-systems to the overall SAV system's performance (also known as a "micro-benchmark").</t>
          </li>
        </ul>
        <t>The benchmark aims to compare the SAV performance of individual devices, e.g., hardware or software routers. It showcases the performance of various SAV mechanisms for a given device and network scenario, with the objective of helping operators deploy the appropriate SAV mechanism in their networks.</t>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>Improper Block: The validation result in which packets with legitimate source addresses are improperly blocked due to inaccurate SAV rules.</t>
      <t>Improper Permit: The validation result in which packets with spoofed source addresses are improperly permitted due to inaccurate SAV rules.</t>
      <t>SAV Control Plane: The SAV control plane consists of processes including gathering and communicating SAV-related information.</t>
      <t>SAV Data Plane: The SAV data plane stores the SAV rules within a specific data structure and validates each incoming packet to determine whether to permit or discard it.</t>
      <t>Host-facing Router: An intra-domain router of an AS which is connected to a host network (i.e., a layer-2 network).</t>
      <t>Customer-facing Router: An intra-domain router of an AS which is connected to an intra-domain customer network running a routing protocol (i.e., a layer-3 network).</t>
    </section>
    <section anchor="test-methodology">
      <name>Test Methodology</name>
      <section anchor="test-setup">
        <name>Test Setup</name>
        <t>The test setup is in general compliant with <xref target="RFC2544"/>. The Device Under Test (DUT) is connected to a Tester and other network devices to construct the network topologies introduced in <xref target="testcase-sec"/>. The Tester is a traffic generator that generates network traffic with various source and destination addresses in order to emulate spoofing or legitimate traffic. Choosing various proportions of spoofing and legitimate traffic is <bcp14>OPTIONAL</bcp14>, and the Tester needs to generate traffic at line rate to test the data plane forwarding performance.</t>
        <figure anchor="testsetup">
          <name>Test Setup.</name>
          <artwork><![CDATA[
    +~~~~~~~~~~~~~~~~~~~~~~~~~~+
    | Test Network Environment |
    |     +--------------+     |
    |     |              |     |
+-->|     |      DUT     |     |---+
|   |     |              |     |   |
|   |     +--------------+     |   |
|   +~~~~~~~~~~~~~~~~~~~~~~~~~~+   |
|                                  |
|         +--------------+         |
|         |              |         |
+---------|    Tester    |<--------+
          |              |
          +--------------+
]]></artwork>
        </figure>
        <t><xref target="testsetup"/> shows the test setup for the DUT. In the test network environment, the DUT can be connected to other devices to construct various test scenarios. The Tester can be connected to the DUT either directly or through other devices; the connection type between them is determined by the benchmarking tests in <xref target="testcase-sec"/>. The Tester can generate spoofing or legitimate traffic to test the SAV accuracy of the DUT in the corresponding scenarios, and it can also generate traffic at line rate to test the data plane forwarding performance of the DUT. In addition, the DUT needs to support logging so that all test results can be recorded.</t>
      </section>
      <section anchor="network-topology-and-device-configuration">
        <name>Network Topology and Device Configuration</name>
        <t>The location where the DUT resides in the network topology affects the accuracy of SAV mechanisms. Therefore, the benchmark <bcp14>MUST</bcp14> put the DUT into different locations in the network to test it.</t>
        <t>The devices in the network topology can have various routing configurations, and the generated SAV rules also depend on these configurations. The device configurations used need to be specified as well.</t>
        <t>In addition, it is necessary to indicate the device role, such as host-facing router, customer-facing router, and AS border router in the intra-domain network, and the business relationship between ASes in the inter-domain network.</t>
        <t>When testing the data plane forwarding performance, the network traffic generated by the Tester must be specified in terms of traffic rate, the proportion of spoofing traffic to legitimate traffic, and the distribution of source addresses, as all of these may affect the test results.</t>
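        <t>As a non-normative illustration, the reporting parameters above can be captured in a small test-profile record; the field names below are assumptions of this sketch, not defined by this document.</t>

```python
from dataclasses import dataclass

# Hypothetical record of the parameters that a data plane forwarding
# test report needs to specify; the field names are illustrative only.
@dataclass
class TrafficProfile:
    rate_pps: int            # offered traffic rate in packets per second
    spoof_ratio: float       # fraction of packets carrying spoofed sources
    src_distribution: str    # e.g., "uniform over 10.2.0.0/15"

    @property
    def legit_ratio(self) -> float:
        # The remaining fraction of the stream is legitimate traffic.
        return 1.0 - self.spoof_ratio

profile = TrafficProfile(rate_pps=1_000_000, spoof_ratio=0.1,
                         src_distribution="uniform over 10.2.0.0/15")
```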
      </section>
    </section>
    <section anchor="sav-performance-indicators">
      <name>SAV Performance Indicators</name>
      <t>This section lists key performance indicators (KPIs) of SAV for the overall benchmarking tests. All KPIs <bcp14>MUST</bcp14> be measured in the benchmarking scenarios described in <xref target="testcase-sec"/>. Also, the KPIs <bcp14>MUST</bcp14> be measured from the result output of the DUT.</t>
      <section anchor="proportion-of-improper-blocks">
        <name>Proportion of Improper Blocks</name>
        <t>The proportion of all legitimate traffic that is improperly blocked by the DUT. This KPI reflects the SAV accuracy of the DUT.</t>
      </section>
      <section anchor="proportion-of-improper-permits">
        <name>Proportion of Improper Permits</name>
        <t>The proportion of all spoofing traffic that is improperly permitted by the DUT. This KPI reflects the SAV accuracy of the DUT.</t>
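        <t>For concreteness, the two accuracy KPIs can be computed from Tester-side counters as in the following sketch; the counter names are assumptions of this sketch, not defined by this document.</t>

```python
# Sketch: accuracy KPIs from Tester counters.  The Tester knows how many
# legitimate and spoofed packets it offered; the counts of blocked and
# permitted packets come from the DUT's logs or the Tester's receive side.
def improper_block_proportion(legit_offered: int, legit_blocked: int) -> float:
    """Fraction of all legitimate traffic improperly blocked by the DUT."""
    return legit_blocked / legit_offered

def improper_permit_proportion(spoof_offered: int, spoof_permitted: int) -> float:
    """Fraction of all spoofing traffic improperly permitted by the DUT."""
    return spoof_permitted / spoof_offered

# Example: 1,000,000 packets of each kind offered.
print(improper_block_proportion(1_000_000, 500))      # 0.0005
print(improper_permit_proportion(1_000_000, 20_000))  # 0.02
```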
      </section>
      <section anchor="protocol-convergence-time">
        <name>Protocol Convergence Time</name>
        <t>The control protocol convergence time is the period during which the SAV control plane protocol converges to update the SAV rules when routing changes happen, i.e., the time elapsed from the beginning of the routing change to the completion of the SAV rule update. This KPI indicates the convergence performance of the SAV protocol.</t>
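        <t>A minimal measurement sketch follows; inject_change() and poll_sav_rules() are placeholders for DUT-specific actions (e.g., a CLI or management API call), not interfaces defined by this document.</t>

```python
import time

# Sketch of a convergence time measurement: the elapsed time from the
# injection of a routing change until the expected SAV rule appears in
# the DUT's rule set, sampled by polling.
def measure_convergence_time(inject_change, poll_sav_rules, expected_rule,
                             timeout_s=60.0, poll_interval_s=0.01):
    start = time.monotonic()
    inject_change()
    while time.monotonic() - start < timeout_s:
        if expected_rule in poll_sav_rules():
            return time.monotonic() - start
        time.sleep(poll_interval_s)
    raise TimeoutError("SAV rules did not converge within the timeout")
```

Polling resolution bounds the measurement error, so poll_interval_s should be small relative to the expected convergence time.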
      </section>
      <section anchor="protocol-message-processing-throughput">
        <name>Protocol Message Processing Throughput</name>
        <t>The protocol message processing throughput measures the rate at which the DUT processes control plane packets that communicate SAV-related information, and it can indicate the SAV control plane performance of the DUT.</t>
      </section>
      <section anchor="data-plane-sav-table-refreshing-rate">
        <name>Data Plane SAV Table Refreshing Rate</name>
        <t>The data plane SAV table refreshing rate refers to the rate at which a DUT updates its SAV table with new SAV rules, and it can reflect the SAV data plane performance of the DUT.</t>
      </section>
      <section anchor="data-plane-forwarding-rate">
        <name>Data Plane Forwarding Rate</name>
        <t>The data plane forwarding rate measures the throughput at which the SAV data plane processes and forwards traffic, and it can indicate the SAV data plane performance of the DUT.</t>
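        <t>As a sketch, the forwarding rate for one fixed-duration trial can be derived from a forwarded-packet count, in the style of <xref target="RFC2544"/>; the counter names here are illustrative, not defined by this document.</t>

```python
# Sketch: data plane forwarding rate from one fixed-duration trial,
# computed as packets forwarded per second.  In practice the count of
# forwarded packets comes from the Tester's receive-side counters.
def forwarding_rate_pps(packets_forwarded: int, trial_duration_s: float) -> float:
    return packets_forwarded / trial_duration_s

# Example: 59,520,000 packets forwarded in a 60-second trial.
print(forwarding_rate_pps(59_520_000, 60.0))  # 992000.0
```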
      </section>
    </section>
    <section anchor="testcase-sec">
      <name>Benchmarking Tests</name>
      <section anchor="intra-domain-sav">
        <name>Intra-domain SAV</name>
        <section anchor="sav-accuracy">
          <name>SAV Accuracy</name>
          <t><strong>Objective</strong>: Measure the accuracy of the DUT in processing legitimate traffic and spoofing traffic across various intra-domain network scenarios, including SAV for a customer or host network, SAV for an Internet-facing network, and SAV for an aggregation-router-facing network. Accuracy is defined as the proportion of all legitimate traffic that is improperly blocked by the DUT and the proportion of all spoofing traffic that is improperly permitted by the DUT.</t>
          <t>In the following, this document introduces classic scenarios for testing the accuracy of the DUT for intra-domain SAV.</t>
          <figure anchor="intra-domain-customer-syn">
            <name>SAV for customer or host network in intra-domain symmetric routing scenario.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                  Test Network Environment                |
|                       +~~~~~~~~~~+                       |
|                       | Router 1 |                       |
| FIB on DUT            +~~~~~~~~~~+                       |
| Dest         Next_hop   /\    |                          |
| 10.0.0.0/15  Network 1   |    |                          |
|                          |    \/                         |
|                       +----------+                       |
|                       |   DUT    |                       |
|                       +----------+                       |
|                         /\    |                          |
|             Traffic with |    | Traffic with             |
|      source IP addresses |    | destination IP addresses |
|           of 10.0.0.0/15 |    | of 10.0.0.0/15           |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                           |    \/
                    +--------------------+
                    |       Tester       |
                    |    (10.0.0.0/15)   |
                    +--------------------+
]]></artwork>
          </figure>
          <t><strong>SAV for Customer or Host Network</strong>: <xref target="intra-domain-customer-syn"/> shows the case of SAV for a customer or host network in the intra-domain symmetric routing scenario. The DUT performs SAV as a customer/host-facing router and connects to Router 1 to access the Internet. Network 1 is a customer/host network within the AS; it connects to the DUT, and its own prefix is 10.0.0.0/15. The Tester can emulate Network 1 to advertise its prefix in the control plane and to generate spoofing and legitimate traffic in the data plane. In this case, the Tester is configured so that the inbound traffic destined for 10.0.0.0/15 comes from the DUT. The DUT learns the route to prefix 10.0.0.0/15 from the Tester, while the Tester can send outbound traffic with source addresses in prefix 10.0.0.0/15 to the DUT, which emulates a symmetric routing scenario between the Tester and the DUT. The IP addresses in this test case are examples; users can use other IP addresses, and this holds true for the other test cases as well.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer or host network in intra-domain symmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for a customer or host network in the intra-domain symmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-customer-syn"/> to construct the test network environment. The Tester is connected to the DUT and performs the functions of Network 1.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT and Router 1 are configured to form the symmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester generates traffic using 10.0.0.0/15 as source addresses (legitimate traffic) and traffic using 10.2.0.0/15 as source addresses (spoofing traffic) to the DUT, respectively. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
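          <t>Step 3 of the procedure can be sketched as follows: legitimate sources are drawn from 10.0.0.0/15 and spoofed sources from 10.2.0.0/15, mixed at a configurable ratio, producing a stream that a traffic generator could consume. This is an illustrative, non-normative helper, not a tool defined by this document.</t>

```python
import ipaddress
import random

# Sketch of the traffic mix in step 3: a list of (source_address,
# is_spoofed) pairs, with spoofed sources appearing with probability
# spoof_ratio.  A fixed seed makes the stream reproducible across runs.
def build_source_stream(n_packets: int, spoof_ratio: float, seed: int = 0):
    rng = random.Random(seed)
    legit = ipaddress.ip_network("10.0.0.0/15")
    spoof = ipaddress.ip_network("10.2.0.0/15")
    stream = []
    for _ in range(n_packets):
        net = spoof if rng.random() < spoof_ratio else legit
        addr = net[rng.randrange(net.num_addresses)]
        stream.append((str(addr), net is spoof))
    return stream

# A roughly 1:9 spoofed-to-legitimate mix of 10,000 packets.
stream = build_source_stream(10_000, spoof_ratio=0.1)
```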
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from Network 1 for this test case.</t>
          <figure anchor="intra-domain-customer-asyn">
            <name>SAV for customer or host network in intra-domain asymmetric routing scenario.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                   Test Network Environment                  |
|                       +~~~~~~~~~~+                          |
|                       | Router 2 |                          |
| FIB on DUT            +~~~~~~~~~~+   FIB on Router 1        |
| Dest         Next_hop   /\      \    Dest         Next_hop  |
| 10.1.0.0/16  Network 1  /        \   10.0.0.0/16  Network 1 |
| 10.0.0.0/16  Router 2  /         \/  10.1.0.0/16  Router 2  |
|               +----------+     +~~~~~~~~~~+                 |
|               |   DUT    |     | Router 1 |                 |
|               +----------+     +~~~~~~~~~~+                 |
|                     /\           /                          |
|         Traffic with \          / Traffic with              |
|   source IP addresses \        / destination IP addresses   |
|         of 10.0.0.0/16 \      / of 10.0.0.0/16              |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                           \   \/
                    +--------------------+
                    |       Tester       |
                    |   (10.0.0.0/15)    |
                    +--------------------+
]]></artwork>
          </figure>
          <t><xref target="intra-domain-customer-asyn"/> shows the case of SAV for a customer or host network in the intra-domain asymmetric routing scenario, where the DUT performs SAV as a customer/host-facing router. Network 1 is a customer/host network within the AS; it connects to both the DUT and Router 1, and its own prefix is 10.0.0.0/15. The Tester can emulate Network 1 and perform its control plane and data plane functions. In this case, the Tester is configured so that the inbound traffic destined for 10.1.0.0/16 comes only from the DUT and the inbound traffic destined for 10.0.0.0/16 comes only from Router 1. The DUT only learns the route to prefix 10.1.0.0/16 from the Tester, while Router 1 only learns the route to prefix 10.0.0.0/16 from Network 1. Then, the DUT and Router 1 advertise their learned prefixes to Router 2. Besides, the DUT learns the route to 10.0.0.0/16 from Router 2, and Router 1 learns the route to 10.1.0.0/16 from Router 2. The Tester can send outbound traffic with source addresses in prefix 10.0.0.0/16 to the DUT, which emulates an asymmetric routing scenario between the Tester and the DUT.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer or host network in intra-domain asymmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for a customer or host network in the intra-domain asymmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-customer-asyn"/> to construct the test network environment. The Tester is connected to the DUT and Router 1 and performs the functions of Network 1.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT, Router 1, and Router 2, are configured to form the asymmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester generates traffic using 10.1.0.0/16 as source addresses (spoofing traffic) and traffic using 10.0.0.0/16 as source addresses (legitimate traffic) to the DUT, respectively. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from Network 1 for this test case.</t>
          <figure anchor="intra-domain-internet-syn">
            <name>SAV for Internet-facing network in intra-domain symmetric routing scenario.</name>
            <artwork><![CDATA[
                   +---------------------+
                   |  Tester (Internet)  |
                   +---------------------+
                           /\   | Inbound traffic with source 
                           |    | IP address of 10.2.0.0/15
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
| Test Network Environment |    |                          |
|                          |   \/                          |
|                       +----------+                       |
|                       |    DUT   | SAV facing Internet   |
| FIB on DUT            +----------+                       |
| Dest         Next_hop   /\    |                          |
| 10.0.0.0/15  Network 1   |    |                          |
|                          |    \/                         |
|                       +~~~~~~~~~~+                       |
|                       | Router 1 |                       |
|                       +~~~~~~~~~~+                       |
|                         /\    |                          |
|             Traffic with |    | Traffic with             |
|      source IP addresses |    | destination IP addresses |
|           of 10.0.0.0/15 |    | of 10.0.0.0/15           |
|                          |    \/                         |
|                  +--------------------+                  |
|                  |     Network 1      |                  |
|                  |   (10.0.0.0/15)    |                  |
|                  +--------------------+                  |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
]]></artwork>
          </figure>
          <t><strong>SAV for Internet-facing Network</strong>: <xref target="intra-domain-internet-syn"/> shows the test case of SAV for Internet-facing network in intra-domain symmetric routing scenario. In this test case, the network topology is the same as <xref target="intra-domain-customer-syn"/>, and the difference is the location of the DUT in the network topology, where the DUT is connected to Router 1 and the Internet, and the Tester is used to emulate the Internet. The DUT performs Internet-facing SAV instead of customer/host-network-facing SAV.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for Internet-facing network in intra-domain symmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for an Internet-facing network in the intra-domain symmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-internet-syn"/> to construct the test network environment. The Tester is connected to the DUT and performs the functions of the Internet.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT and Router 1 are configured to form the symmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester can send traffic using 10.0.0.0/15 as source addresses (spoofing traffic) and traffic using 10.2.0.0/15 as source addresses (legitimate traffic) to the DUT, respectively. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the Internet for this test case.</t>
          <figure anchor="intra-domain-internet-asyn">
            <name>SAV for Internet-facing network in intra-domain asymmetric routing scenario.</name>
            <artwork><![CDATA[
                   +---------------------+
                   |  Tester (Internet)  |
                   +---------------------+
                           /\   | Inbound traffic with source 
                           |    | IP address of 10.2.0.0/15
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
| Test Network Environment |    |                             |
|                          |   \/                             |
|                       +----------+                          |
|                       |    DUT   |                          |
| FIB on Router 1       +----------+   FIB on Router 2        |
| Dest         Next_hop   /\      \    Dest         Next_hop  |
| 10.1.0.0/16  Network 1  /        \   10.0.0.0/16  Network 1 |
| 10.0.0.0/16  DUT       /         \/  10.1.0.0/16  DUT       |
|               +~~~~~~~~~~+     +~~~~~~~~~~+                 |
|               | Router 1 |     | Router 2 |                 |
|               +~~~~~~~~~~+     +~~~~~~~~~~+                 |
|                     /\           /                          |
|         Traffic with \          / Traffic with              |
|   source IP addresses \        / destination IP addresses   |
|         of 10.0.0.0/16 \      / of 10.0.0.0/16              |
|                         \    \/                             |
|                  +--------------------+                     |
|                  |     Network 1      |                     |
|                  |   (10.0.0.0/15)    |                     |
|                  +--------------------+                     |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
]]></artwork>
          </figure>
          <t><xref target="intra-domain-internet-asyn"/> shows the test case of SAV for Internet-facing network in the intra-domain asymmetric routing scenario. In this test case, the network topology is the same as <xref target="intra-domain-customer-asyn"/>; the difference is the location of the DUT in the network topology, where the DUT is connected to Router 1 and Router 2 within the same AS, as well as to the Internet. The Tester is used to emulate the Internet. The DUT performs Internet-facing SAV instead of customer/host-network-facing SAV.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for Internet-facing network in intra-domain asymmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for an Internet-facing network in the intra-domain asymmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-internet-asyn"/> to construct the test network environment. The Tester is connected to the DUT and performs the functions of the Internet.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT, Router 1, and Router 2 are configured to form the asymmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester can send traffic using 10.0.0.0/15 as source addresses (spoofing traffic) and traffic using 10.2.0.0/15 as source addresses (legitimate traffic) to the DUT, respectively. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the Internet for this test case.</t>
          <figure anchor="intra-domain-agg-syn">
            <name>SAV for aggregation-router-facing network in intra-domain symmetric routing scenario.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                  Test Network Environment                |
|                       +----------+                       |
|                       |    DUT   | SAV facing Router 1   |
| FIB on DUT            +----------+                       |
| Dest         Next_hop   /\    |                          |
| 10.0.0.0/15  Network 1   |    |                          |
|                          |    \/                         |
|                       +~~~~~~~~~~+                       |
|                       | Router 1 |                       |
|                       +~~~~~~~~~~+                       |
|                         /\    |                          |
|             Traffic with |    | Traffic with             |
|      source IP addresses |    | destination IP addresses |
|           of 10.0.0.0/15 |    | of 10.0.0.0/15           |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                           |    \/
                    +--------------------+
                    |       Tester       |
                    |   (10.0.0.0/15)    |
                    +--------------------+
]]></artwork>
          </figure>
          <t><strong>SAV for Aggregation-router-facing Network</strong>: <xref target="intra-domain-agg-syn"/> shows the test case of SAV for aggregation-router-facing network in the intra-domain symmetric routing scenario. The test network environment of <xref target="intra-domain-agg-syn"/> is the same as that of <xref target="intra-domain-internet-syn"/>. The Tester is connected to Router 1 and emulates Network 1 in order to test the SAV accuracy of the DUT in the direction facing Router 1.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for aggregation-router-facing network in intra-domain symmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for an aggregation-router-facing network in the intra-domain symmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-agg-syn"/> to construct the test network environment. The Tester is connected to Router 1 and performs the functions of Network 1.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT and Router 1 are configured to form the symmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester can send traffic using 10.0.0.0/15 as source addresses (legitimate traffic) and traffic using 10.2.0.0/15 as source addresses (spoofing traffic) to Router 1, respectively. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the direction of Router 1 for this test case.</t>
          <figure anchor="intra-domain-agg-asyn">
            <name>SAV for aggregation-router-facing network in intra-domain asymmetric routing scenario.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                   Test Network Environment                  |
|                       +----------+                          |
|                       |    DUT   | SAV facing Router 1 and 2|
| FIB on Router 1       +----------+   FIB on Router 2        |
| Dest         Next_hop   /\      \    Dest         Next_hop  |
| 10.1.0.0/16  Network 1  /        \   10.0.0.0/16  Network 1 |
| 10.0.0.0/16  DUT       /         \/  10.1.0.0/16  DUT       |
|               +~~~~~~~~~~+     +~~~~~~~~~~+                 |
|               | Router 1 |     | Router 2 |                 |
|               +~~~~~~~~~~+     +~~~~~~~~~~+                 |
|                     /\           /                          |
|         Traffic with \          / Traffic with              |
|   source IP addresses \        / destination IP addresses   |
|         of 10.0.0.0/16 \      / of 10.0.0.0/16              |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                           \   \/
                   +--------------------+
                   | Tester (Network 1) |
                   |   (10.0.0.0/15)    |
                   +--------------------+
]]></artwork>
          </figure>
          <t><xref target="intra-domain-agg-asyn"/> shows the test case of SAV for aggregation-router-facing network in the intra-domain asymmetric routing scenario. The test network environment of <xref target="intra-domain-agg-asyn"/> is the same as that of <xref target="intra-domain-internet-asyn"/>. The Tester is connected to Router 1 and Router 2 and emulates the functions of Network 1 to test the SAV accuracy of the DUT in the directions of Router 1 and Router 2.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for aggregation-router-facing network in intra-domain asymmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for SAV for aggregation-router-facing network in the intra-domain asymmetric routing scenario, a testbed can be built as shown in <xref target="intra-domain-agg-asyn"/> to construct the test network environment. The Tester is connected to Router 1 and Router 2 and performs the functions of Network 1.</t>
            </li>
            <li>
              <t>Then, the devices including the DUT, Router 1, and Router 2 are configured to form the asymmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester sends traffic using 10.1.0.0/16 as source addresses (spoofing traffic) and traffic using 10.0.0.0/16 as source addresses (legitimate traffic) to Router 1. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the directions of Router 1 and Router 2 for this test case.</t>
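          <t>The two accuracy metrics used throughout the SAV accuracy test cases can be computed from the Tester's send and receive counters. The following Python sketch is illustrative only and is not part of this methodology; the counter names are hypothetical.</t>

```python
# Illustrative sketch (non-normative) of the two SAV accuracy metrics
# defined in this document: the proportion of legitimate traffic
# improperly blocked by the DUT and the proportion of spoofing traffic
# improperly permitted by the DUT.  Counter names are hypothetical.

def sav_accuracy(legit_sent, legit_blocked, spoof_sent, spoof_permitted):
    """Return (improper_block_rate, improper_permit_rate)."""
    if legit_sent == 0 or spoof_sent == 0:
        raise ValueError("both traffic classes must be present")
    return legit_blocked / legit_sent, spoof_permitted / spoof_sent

# An ideal DUT blocks all spoofing traffic and permits all legitimate
# traffic, so both proportions are zero.
ideal = sav_accuracy(9000, 0, 1000, 0)
```

          <t>Both proportions are zero for an ideal DUT; any improperly blocked legitimate packet or improperly permitted spoofed packet raises the corresponding proportion.</t>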
        </section>
        <section anchor="intra-control-plane-sec">
          <name>Control Plane Performance</name>
          <t><strong>Objective</strong>: Measure the control plane performance of the DUT, including the protocol convergence performance and the protocol message processing performance, when route changes happen due to network failures or operator configurations. The protocol convergence performance is defined as the protocol convergence time, i.e., the time elapsed from the beginning of a routing change to the completion of the SAV rule update. The protocol message processing performance is defined as the protocol message processing throughput, i.e., the overall size of protocol messages processed per second.</t>
          <figure anchor="intra-convg-perf">
            <name>Test setup for protocol convergence performance measurement.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~+      +-------------+          +-----------+
| Emulated Topology |------|   Tester    |<-------->|    DUT    |
+~~~~~~~~~~~~~~~~~~~+      +-------------+          +-----------+
]]></artwork>
          </figure>
          <t><strong>Protocol Convergence Performance</strong>: <xref target="intra-convg-perf"/> shows the test setup for protocol convergence performance measurement. The protocol convergence process in which the DUT updates its SAV rules starts when route changes happen. Route changes are the cause of SAV rule updates and may result from network failures or operator configurations. Therefore, in <xref target="intra-convg-perf"/>, the Tester is directly connected to the DUT and emulates route changes by advertising or withdrawing prefixes to trigger the convergence process of the DUT.</t>
          <t>The <strong>procedure</strong> is listed below for testing the protocol convergence performance:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test the protocol convergence time of the DUT, a testbed can be built as shown in <xref target="intra-convg-perf"/> to construct the test network environment. The Tester is directly connected to the DUT.</t>
            </li>
            <li>
              <t>Then, the Tester proactively withdraws a certain percentage of the overall prefixes supported by the DUT, such as 10%, 20%, ..., 100%.</t>
            </li>
            <li>
              <t>Finally, the protocol convergence time is calculated according to the logs of the DUT about the beginning and completion of the protocol convergence.</t>
            </li>
          </ol>
          <t>Note that withdrawing prefixes proportionally for an IGP can be achieved by shutting down interfaces proportionally. For example, the Tester connects to an emulated network topology with each interface connecting to an emulated device. Suppose the Tester connects to ten emulated devices via ten interfaces. Initially, the ten emulated devices advertise their prefixes to the DUT. To withdraw 10% of the prefixes, the Tester can shut down one randomly chosen interface; for 20%, it can shut down two randomly chosen interfaces, and so on for the other proportions.
This is only a recommendation; other approaches that achieve the same goal are also acceptable.</t>
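          <t>The proportional interface shutdown recommended above can be scripted on the Tester. The following Python sketch is illustrative only; the interface names and the selection helper are hypothetical.</t>

```python
# Illustrative sketch (non-normative): randomly pick the Tester
# interfaces to shut down so that the emulated devices behind them
# withdraw a given percentage of the overall prefixes (one emulated
# device, and hence an equal share of prefixes, per interface).
import random

def interfaces_to_shut(interfaces, percent):
    """Return a random subset of interfaces covering `percent` of the
    emulated devices (integer percent between 0 and 100)."""
    if percent not in range(0, 101):
        raise ValueError("percent must be between 0 and 100")
    count = round(len(interfaces) * percent / 100)
    return random.sample(interfaces, count)

# With ten interfaces, withdrawing 20% of the prefixes means shutting
# down two randomly chosen interfaces.
chosen = interfaces_to_shut([f"eth{i}" for i in range(10)], 20)
```

          <t>The random choice avoids always disconnecting the same emulated devices across repeated trials.</t>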
          <t>The time between the last hello received on the DUT from the emulated device connected via the disabled interface and the completion of SAV rule generation on the DUT should be taken as the amount of time required for the DUT to complete protocol convergence.
To measure the protocol convergence time, the logs of the DUT record the time of receiving the last hello and the time when the SAV rule update is finished, and the protocol convergence time is the gap between them.</t>
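          <t>Deriving the convergence time from the two log events above is a simple timestamp subtraction. The following Python sketch is illustrative only; the ISO-style timestamp format is an assumption, not a DUT log format defined by this document.</t>

```python
# Illustrative sketch (non-normative): compute the protocol convergence
# time as the gap between the last hello received from the disconnected
# emulated device and the completion of the SAV rule update.  The
# timestamp format below is an assumption for illustration.
from datetime import datetime

_FMT = "%Y-%m-%dT%H:%M:%S.%f"

def convergence_seconds(last_hello, sav_update_done):
    """Return the convergence time in seconds from two log timestamps."""
    t0 = datetime.strptime(last_hello, _FMT)
    t1 = datetime.strptime(sav_update_done, _FMT)
    return (t1 - t0).total_seconds()

elapsed = convergence_seconds("2024-09-27T10:00:00.000000",
                              "2024-09-27T10:00:02.500000")
```

          <t>In this hypothetical example the DUT needed 2.5 seconds to complete protocol convergence.</t>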
          <t>Note that if the emulated device sends a "goodbye hello" while the Tester's interface is being shut down, using the time when this hello message is received, rather than the time of the last periodic hello, would provide a more accurate measurement <xref target="RFC4061"/>.</t>
          <t><strong>Protocol Message Processing Performance</strong>: The test of the protocol message processing performance uses the same test setup shown in <xref target="intra-convg-perf"/>. It measures the throughput at which the DUT processes protocol messages. The Tester can vary the rate of sending protocol messages, such as from 10% to 100% of the overall link capacity between the Tester and the DUT. The DUT then records the total size of the processed protocol messages and the processing time.</t>
          <t>The <strong>procedure</strong> is listed below for testing the protocol message processing performance:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test the protocol message processing throughput of the DUT, a testbed can be built as shown in <xref target="intra-convg-perf"/> to construct the test network environment. The Tester is directly connected to the DUT.</t>
            </li>
            <li>
              <t>Then, the Tester proactively sends protocol messages to the DUT at a certain percentage of the overall link capacity between the Tester and the DUT, such as 10%, 20%, ..., 100%.</t>
            </li>
            <li>
              <t>Finally, the protocol message processing throughput is calculated according to the logs of the DUT about the overall size of the protocol messages and the overall processing time.</t>
            </li>
          </ol>
          <t>To measure the protocol message processing throughput, the logs of the DUT record the overall size of the protocol messages and the overall processing time; the throughput is obtained by dividing the overall message size by the overall processing time.</t>
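          <t>The throughput computation above can be sketched as follows. The Python code is illustrative only; the function name and the bits-per-second unit choice are assumptions for illustration.</t>

```python
# Illustrative sketch (non-normative): protocol message processing
# throughput, i.e., total size of processed protocol messages divided
# by the overall processing time, expressed in bits per second.

def processing_throughput_bps(total_message_bytes, processing_seconds):
    """Return the processing throughput in bits per second."""
    if processing_seconds == 0:
        raise ValueError("processing time must be positive")
    return total_message_bytes * 8 / processing_seconds

# Hypothetical figures: 10 MB of protocol messages processed in 4 s.
rate = processing_throughput_bps(10_000_000, 4)
```

          <t>With the hypothetical figures above, the throughput is 20 Mbit/s.</t>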
        </section>
        <section anchor="intra-data-plane-sec">
          <name>Data Plane Performance</name>
          <t><strong>Objective</strong>: Measure the data plane performance of the DUT, including the data plane SAV table refreshing performance when the SAV table is updated according to newly updated SAV rules on the control plane, and the data plane forwarding performance. The data plane SAV table refreshing performance is defined as the data plane SAV table refreshing rate, i.e., the rate at which the DUT updates its SAV table with new SAV rules. The data plane forwarding performance is defined as the data plane forwarding rate, i.e., the overall size of the forwarded packets per second.</t>
          <t><strong>Data Plane SAV Table Refreshing Performance</strong>: The test of the data plane SAV table refreshing performance uses the same test setup shown in <xref target="intra-convg-perf"/>. It measures the rate at which the DUT updates its SAV table with new SAV rules. The Tester can vary the rate of sending protocol messages, such as from 10% to 100% of the overall link capacity between the Tester and the DUT, which affects the proportion of updated SAV rules and, as a result, the proportion of updated SAV table entries. The DUT then records the overall number of updated SAV table entries and the refreshing time.</t>
          <t>The <strong>procedure</strong> is listed below for testing the data plane SAV table refreshing performance:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test the data plane SAV table refreshing rate of the DUT, a testbed can be built as shown in <xref target="intra-convg-perf"/> to construct the test network environment. The Tester is directly connected to the DUT.</t>
            </li>
            <li>
              <t>Then, the Tester proactively sends protocol messages to the DUT at a certain percentage of the overall link capacity between the Tester and the DUT, such as 10%, 20%, ..., 100%.</t>
            </li>
            <li>
              <t>Finally, the data plane SAV table refreshing rate is calculated according to the logs of the DUT about the overall number of updated SAV table entries and the overall refreshing time.</t>
            </li>
          </ol>
          <t>To measure the data plane SAV table refreshing rate, the logs of the DUT record the overall number of updated SAV table entries and the overall refreshing time; the refreshing rate is obtained by dividing the number of updated entries by the refreshing time.</t>
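          <t>Because the procedure sweeps several message-sending load levels, the refreshing rate is naturally computed per load level. The following Python sketch is illustrative only; the sample figures are hypothetical.</t>

```python
# Illustrative sketch (non-normative): SAV table refreshing rate,
# i.e., updated SAV table entries divided by the refreshing time,
# evaluated for each message-sending load level of the sweep.

def refreshing_rates(samples):
    """samples maps a load percentage to a tuple of
    (updated_entries, refreshing_seconds); returns entries per second
    for each load level."""
    return {load: entries / seconds
            for load, (entries, seconds) in samples.items()}

# Hypothetical log-derived samples at 10% and 100% of link capacity.
rates = refreshing_rates({10: (1000, 0.5), 100: (10000, 2.0)})
```

          <t>Reporting one rate per load level makes it visible whether the refreshing rate saturates as the protocol message load grows.</t>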
          <t><strong>Data Plane Forwarding Performance</strong>: The test of the data plane forwarding performance uses the same test setup shown in <xref target="intra-convg-perf"/>. The Tester sends traffic, which includes both spoofing and legitimate traffic, at the rate of the overall link capacity between the Tester and the DUT, and the DUT builds a SAV table that occupies all of its allocated storage space. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1. The DUT records the overall size of the forwarded packets and the overall forwarding time.</t>
          <t>The <strong>procedure</strong> is listed below for testing the data plane forwarding performance:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test the data plane forwarding rate of the DUT, a testbed can be built as shown in <xref target="intra-convg-perf"/> to construct the test network environment. The Tester is directly connected to the DUT.</t>
            </li>
            <li>
              <t>Then, the Tester proactively sends the data plane traffic including spoofing and legitimate traffic to the DUT at the rate of the overall link capacity between the Tester and the DUT. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
            <li>
              <t>Finally, the data plane forwarding rate is calculated according to the logs of the DUT about the overall size of the forwarded traffic and the overall forwarding time.</t>
            </li>
          </ol>
          <t>To measure the data plane forwarding rate, the logs of the DUT record the overall size of the forwarded traffic and the overall forwarding time; the forwarding rate is obtained by dividing the forwarded traffic size by the forwarding time.</t>
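          <t>Since the offered load mixes spoofing and legitimate traffic, it can be useful to report the forwarded fraction of the offered load alongside the forwarding rate. The Python sketch below is illustrative only; the function name and sample figures are hypothetical.</t>

```python
# Illustrative sketch (non-normative): data plane forwarding rate
# (forwarded bytes divided by forwarding time, in bits per second)
# together with the forwarded fraction of the offered load.

def forwarding_stats(offered_bytes, forwarded_bytes, seconds):
    """Return (forwarding rate in bit/s, forwarded fraction of the
    offered load)."""
    if seconds == 0 or offered_bytes == 0:
        raise ValueError("time and offered load must be positive")
    return forwarded_bytes * 8 / seconds, forwarded_bytes / offered_bytes

# At full link load with a 1:9 spoofing-to-legitimate mix, an ideal DUT
# forwards only the legitimate 90% of the offered bytes.
rate, fraction = forwarding_stats(1_000_000, 900_000, 1.0)
```

          <t>A forwarded fraction matching the legitimate share of the mix, at full line rate, indicates that SAV filtering does not degrade forwarding performance.</t>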
        </section>
      </section>
      <section anchor="inter-domain-sav">
        <name>Inter-domain SAV</name>
        <section anchor="sav-accuracy-1">
          <name>SAV Accuracy</name>
          <t><strong>Objective</strong>: Measure the accuracy of the DUT in processing legitimate traffic and spoofing traffic across various inter-domain network scenarios, including SAV for customer-facing ASes and SAV for provider/peer-facing ASes. Accuracy is defined by the proportion of legitimate traffic that is improperly blocked by the DUT across all legitimate traffic and the proportion of spoofing traffic that is improperly permitted by the DUT across all spoofing traffic.</t>
          <figure anchor="inter-customer-syn">
            <name>SAV for customer-facing ASes in inter-domain symmetric routing scenario.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                  Test Network Environment                |
|                        +~~~~~~~~~~~~~~~~+                |
|                        |    AS 3(P3)    |                |
|                        +~+/\~~~~~~+/\+~~+                |
|                           /         \                    |
|                          /           \                   |
|                         /             \                  |
|                        / (C2P)         \                 |
|              +------------------+       \                |
|              |      DUT(P4)     |        \               |
|              ++/\+--+/\+----+/\++         \              |
|                /      |       \            \             |
|      P2[AS 2] /       |        \            \            |
|              /        |         \            \           |
|             / (C2P)   |          \ P5[AS 5]   \ P5[AS 5] |
|+~~~~~~~~~~~~~~~~+     |           \            \         |
||    AS 2(P2)    |     | P1[AS 1]   \            \        |
|+~~~~~~~~~~+/\+~~+     | P6[AS 1]    \            \       |
|             \         |              \            \      |
|     P6[AS 1] \        |               \            \     |
|      P1[AS 1] \       |                \            \    |
|          (C2P) \      | (C2P/P2P) (C2P) \      (C2P) \   |
|             +~~~~~~~~~~~~~~~~+        +~~~~~~~~~~~~~~~~+ |
|             |  AS 1(P1, P6)  |        |    AS 5(P5)    | |
|             +~~~~~~~~~~~~~~~~+        +~~~~~~~~~~~~~~~~+ |
|                  /\     |                                |
|                  |      |                                |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                   |     \/
              +----------------+
              |     Tester     |
              +----------------+
]]></artwork>
          </figure>
          <t><strong>SAV for Customer-facing ASes</strong>: <xref target="inter-customer-syn"/> presents a test case of SAV for customer-facing ASes in the inter-domain symmetric routing scenario. In this test case, AS 1, AS 2, AS 3, the DUT, and AS 5 construct the test network environment, and the DUT performs SAV as an AS. AS 1 is a customer of AS 2 and the DUT, AS 2 is a customer of the DUT, which in turn is a customer of AS 3, and AS 5 is a customer of both AS 3 and the DUT. AS 1 advertises prefixes P1 and P6 to AS 2 and the DUT, and AS 2 further propagates the routes for prefixes P1 and P6 to the DUT. Consequently, the DUT can learn the routes for prefixes P1 and P6 from both AS 1 and AS 2. In this test case, the legitimate path for traffic with source addresses in P1 and destination addresses in P4 is AS 1-&gt;AS 2-&gt;DUT. The Tester is connected to AS 1, and the SAV for customer-facing ASes of the DUT is tested.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer-facing ASes in inter-domain symmetric routing scenario:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for SAV for customer-facing ASes in inter-domain symmetric routing scenario, a testbed can be built as shown in <xref target="inter-customer-syn"/> to construct the test network environment. The Tester is connected to AS 1 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes including  AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the symmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester sends traffic using P1 as source addresses and P4 as destination addresses (legitimate traffic) and traffic using P5 as source addresses and P4 as destination addresses (spoofing traffic) to the DUT via AS 2. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the direction of AS 2 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="inter-customer-syn"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
          <figure anchor="inter-customer-lpp">
            <name>SAV for customer-facing ASes in inter-domain asymmetric routing scenario caused by NO_EXPORT.</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                  Test Network Environment                |
|                        +~~~~~~~~~~~~~~~~+                |
|                        |    AS 3(P3)    |                |
|                        +~+/\~~~~~~+/\+~~+                |
|                           /         \                    |
|                          /           \                   |
|                         /             \                  |
|                        / (C2P)         \                 |
|              +------------------+       \                |
|              |      DUT(P4)     |        \               |
|              ++/\+--+/\+----+/\++         \              |
|                /      |       \            \             |
|      P2[AS 2] /       |        \            \            |
|              /        |         \            \           |
|             / (C2P)   |          \ P5[AS 5]   \ P5[AS 5] |
|+~~~~~~~~~~~~~~~~+     |           \            \         |
||    AS 2(P2)    |     | P1[AS 1]   \            \        |
|+~~~~~~~~~~+/\+~~+     | P6[AS 1]    \            \       |
|    P6[AS 1] \         | NO_EXPORT    \            \      |
|     P1[AS 1] \        |               \            \     |
|     NO_EXPORT \       |                \            \    |
|          (C2P) \      | (C2P)     (C2P) \      (C2P) \   |
|             +~~~~~~~~~~~~~~~~+        +~~~~~~~~~~~~~~~~+ |
|             |  AS 1(P1, P6)  |        |    AS 5(P5)    | |
|             +~~~~~~~~~~~~~~~~+        +~~~~~~~~~~~~~~~~+ |
|                  /\     |                                |
|                  |      |                                |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                   |     \/
              +----------------+
              |     Tester     |
              +----------------+
]]></artwork>
          </figure>
          <t><xref target="inter-customer-lpp"/> presents a test case of SAV for customer-facing ASes in the inter-domain asymmetric routing scenario caused by the NO_EXPORT configuration. In this test case, AS 1, AS 2, AS 3, the DUT, and AS 5 construct the test network environment, and the DUT performs SAV as an AS. AS 1 is a customer of AS 2 and the DUT, AS 2 is a customer of the DUT, which in turn is a customer of AS 3, and AS 5 is a customer of both AS 3 and the DUT. AS 1 advertises prefix P1 to AS 2 and adds the NO_EXPORT community attribute to the BGP advertisement sent to AS 2, preventing AS 2 from further propagating the route for prefix P1 to the DUT. Similarly, AS 1 adds the NO_EXPORT community attribute to the BGP advertisement sent to the DUT, so that the DUT does not propagate the route for prefix P6 to AS 3. Consequently, the DUT learns the route for prefix P1 only from AS 1 in this scenario. In this test case, the legitimate path for traffic with source addresses in P1 and destination addresses in P4 is AS 1-&gt;AS 2-&gt;DUT. The Tester is connected to AS 1, and the SAV for customer-facing ASes of the DUT is tested.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer-facing ASes in inter-domain asymmetric routing scenario caused by NO_EXPORT:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for SAV for customer-facing ASes in inter-domain asymmetric routing scenario caused by NO_EXPORT, a testbed can be built as shown in <xref target="inter-customer-lpp"/> to construct the test network environment. The Tester is connected to AS 1 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes including  AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the asymmetric routing scenario.</t>
            </li>
            <li>
              <t>Finally, the Tester sends traffic using P1 as source addresses and P4 as destination addresses (legitimate traffic) and traffic using P5 as source addresses and P4 as destination addresses (spoofing traffic) to the DUT via AS 2. The ratio of spoofing traffic to legitimate traffic can vary, such as from 1:9 to 9:1.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic and permit the legitimate traffic from the direction of AS 2 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="inter-customer-lpp"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
          <figure anchor="inter-customer-dsr">
            <name>SAV for customer-facing ASes in the scenario of direct server return (DSR).</name>
            <artwork><![CDATA[
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
|                  Test Network Environment                       |
|                                +----------------+               |
|                Anycast Server+-+    AS 3(P3)    |               |
|                                +-+/\----+/\+----+               |
|                                   /       \                     |
|                         P3[AS 3] /         \ P3[AS 3]           |
|                                 /           \                   |
|                                / (C2P)       \                  |
|                       +----------------+      \                 |
|                       |     DUT(P4)    |       \                |
|                       ++/\+--+/\+--+/\++        \               |
|          P6[AS 1, AS 2] /     |      \           \              |
|               P2[AS 2] /      |       \           \             |
|                       /       |        \           \            |
|                      / (C2P)  |         \ P5[AS 5]  \ P5[AS 5]  |
|      +----------------+       |          \           \          |
|User+-+    AS 2(P2)    |       | P1[AS 1]  \           \         |
|      +----------+/\+--+       | P6[AS 1]   \           \        |
|          P6[AS 1] \           |             \           \       |
|           P1[AS 1] \          |              \           \      |
|                     \         |               \           \     |
|                      \ (C2P)  | (C2P)    (C2P) \     (C2P) \    |
|                    +----------------+        +----------------+ |
|                    |AS 1(P1, P3, P6)|        |    AS 5(P5)    | |
|                    +----------------+        +----------------+ |
|                         /\     |                                |
|                          |     |                                |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
                           |    \/
                     +----------------+
                     |     Tester     |
                     | (Edge Server)  |
                     +----------------+

Within the test network environment, P3 is the anycast prefix and is only advertised by AS 3 through BGP.
]]></artwork>
          </figure>
          <t><xref target="inter-customer-dsr"/> presents a test case of SAV for customer-facing ASes in the scenario of direct server return (DSR). In this test case, AS 1, AS 2, AS 3, the DUT, and AS 5 construct the test network environment, and the DUT performs SAV as an AS. AS 1 is a customer of AS 2 and the DUT, AS 2 is a customer of the DUT, which in turn is a customer of AS 3, and AS 5 is a customer of both AS 3 and the DUT. When users in AS 2 send requests to the anycast destination IP address, the forwarding path is AS 2-&gt;DUT-&gt;AS 3. The anycast servers in AS 3 receive the requests and tunnel them to the edge servers in AS 1. Finally, the edge servers send the content to the users with source addresses in prefix P3, and the reverse forwarding path is AS 1-&gt;DUT-&gt;AS 2. The Tester sends the traffic with source addresses in P3 and destination addresses in P2 along the path AS 1-&gt;DUT-&gt;AS 2.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer-facing ASes in the scenario of direct server return (DSR):</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for SAV for customer-facing ASes in the scenario of DSR, a testbed can be built as shown in <xref target="inter-customer-dsr"/> to construct the test network environment. The Tester is connected to AS 1 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes including  AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the scenario of DSR.</t>
            </li>
            <li>
              <t>Finally, the Tester sends the traffic using P3 as source addresses and P2 as destination addresses (legitimate traffic) to AS 2 via the DUT.</t>
            </li>
          </ol>
          <t>Note that in <xref target="inter-customer-dsr"/>, to direct the return traffic from the edge server to the user along the path AS 1-&gt;DUT-&gt;AS 2, this document recommends configuring a static route that directs the traffic with source addresses in P3 and destination addresses in P2 to the DUT.</t>
          <t>The <strong>expected results</strong> are that the DUT can permit the legitimate traffic with source addresses in P3 from the direction of AS 1 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="inter-customer-dsr"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will permit the legitimate traffic rather than improperly block it.</t>
          <figure anchor="inter-customer-reflect">
            <name>SAV for customer-facing ASes in the scenario of reflection attacks.</name>
            <artwork><![CDATA[
             +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
             |                   Test Network Environment                 |
             |                          +----------------+                |
             |                          |    AS 3(P3)    |                |
             |                          +--+/\+--+/\+----+                |
             |                              /      \                      |
             |                             /        \                     |
             |                            /          \                    |
             |                           / (C2P)      \                   |
             |                  +----------------+     \                  |
             |                  |     DUT(P4)    |      \                 |
             |                  ++/\+--+/\+--+/\++       \                |
             |     P6[AS 1, AS 2] /     |      \          \               |
             |          P2[AS 2] /      |       \          \              |
             |                  /       |        \          \             |
             |                 / (C2P)  |         \ P5[AS 5] \ P5[AS 5]   |
+----------+ |  +----------------+      |          \          \           |
|  Tester  |-|->|                |      |           \          \          |
|(Attacker)| |  |    AS 2(P2)    |      |            \          \         |
|  (P1')   |<|--|                |      | P1[AS 1]    \          \        |
+----------+ |  +---------+/\+---+      | P6[AS 1]     \          \       |
             |     P6[AS 1] \           | NO_EXPORT     \          \      |
             |      P1[AS 1] \          |                \          \     |
             |      NO_EXPORT \         |                 \          \    |
             |                 \ (C2P)  | (C2P)      (C2P) \    (C2P) \   |
             |             +----------------+          +----------------+ |
             |     Victim+-+  AS 1(P1, P6)  |  Server+-+    AS 5(P5)    | |
             |             +----------------+          +----------------+ |
             +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
P1' is the spoofed source prefix P1 by the attacker which is inside of 
AS 2 or connected to AS 2 through other ASes.
]]></artwork>
          </figure>
          <t><xref target="inter-customer-reflect"/> depicts the test case of SAV for customer-facing ASes in the scenario of reflection attacks. In this test case, the reflection attack by source address spoofing takes place within the DUT's customer cone: the attacker spoofs the victim's IP addresses (P1) and sends requests to the servers' IP addresses (P5), which are designed to respond to such requests. The Tester performs the source address spoofing function as the attacker. The arrows in <xref target="inter-customer-reflect"/> illustrate the commercial relationships between the ASes. AS 3 serves as the provider for the DUT and AS 5, while the DUT acts as the provider for AS 1, AS 2, and AS 5. Additionally, AS 2 is the provider for AS 1.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer-facing ASes in the scenario of reflection attacks:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for customer-facing ASes in the scenario of reflection attacks, a testbed can be built as shown in <xref target="inter-customer-reflect"/> to construct the test network environment. The Tester is connected to AS 2 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes, including AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the scenario of reflection attacks.</t>
            </li>
            <li>
              <t>Finally, the Tester sends the traffic using P1 as source addresses and P5 as destination addresses (spoofing traffic) to AS 5 via the DUT.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic with source addresses in P1 from the direction of AS 2 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="inter-customer-reflect"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
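          <t>As a non-normative illustration of the spoofing traffic in step 3, the following Python sketch crafts a raw IPv4 header whose source field is set to an address in the victim's prefix P1. The concrete addresses (203.0.113.0/24 standing in for P1 and 198.18.0.1 for the P5 reflector, drawn from documentation and benchmarking address ranges) are assumptions of this example only; any packet generator capable of setting arbitrary source addresses can serve the same purpose.</t>
          <sourcecode type="python"><![CDATA[
import ipaddress
import struct

def ipv4_checksum(header: bytes) -> int:
    """Compute the standard one's-complement IPv4 header checksum."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_spoofed_ipv4(src: str, dst: str, payload: bytes = b"") -> bytes:
    """Build a minimal IPv4 packet carrying a spoofed source address."""
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,         # version 4, IHL 5 (20-byte header)
        0,                    # DSCP/ECN
        20 + len(payload),    # total length
        0,                    # identification
        0,                    # flags / fragment offset
        64,                   # TTL
        17,                   # protocol (UDP)
        0,                    # checksum placeholder
        ipaddress.IPv4Address(src).packed,  # spoofed source (victim's P1)
        ipaddress.IPv4Address(dst).packed,  # reflector address in P5
    )
    checksum = struct.pack("!H", ipv4_checksum(header))
    return header[:10] + checksum + header[12:] + payload

# Tester spoofs the victim prefix P1 and targets the reflector in P5.
pkt = build_spoofed_ipv4("203.0.113.1", "198.18.0.1", b"request")
]]></sourcecode>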
          <figure anchor="inter-customer-direct">
            <name>SAV for customer-facing ASes in the scenario of direct attacks.</name>
            <artwork><![CDATA[
             +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
             |                   Test Network Environment                 |
             |                          +----------------+                |
             |                          |    AS 3(P3)    |                |
             |                          +--+/\+--+/\+----+                |
             |                              /      \                      |
             |                             /        \                     |
             |                            /          \                    |
             |                           / (C2P)      \                   |
             |                  +----------------+     \                  |
             |                  |     DUT(P4)    |      \                 |
             |                  ++/\+--+/\+--+/\++       \                |
             |     P6[AS 1, AS 2] /     |      \          \               |
             |          P2[AS 2] /      |       \          \              |
             |                  /       |        \          \             |
             |                 / (C2P)  |         \ P5[AS 5] \ P5[AS 5]   |
+----------+ |  +----------------+      |          \          \           |
|  Tester  |-|->|                |      |           \          \          |
|(Attacker)| |  |    AS 2(P2)    |      |            \          \         |
|  (P5')   |<|--|                |      | P1[AS 1]    \          \        |
+----------+ |  +---------+/\+---+      | P6[AS 1]     \          \       |
             |     P6[AS 1] \           | NO_EXPORT     \          \      |
             |      P1[AS 1] \          |                \          \     |
             |      NO_EXPORT \         |                 \          \    |
             |                 \ (C2P)  | (C2P)      (C2P) \    (C2P) \   |
             |             +----------------+          +----------------+ |
             |     Victim+-+  AS 1(P1, P6)  |          |    AS 5(P5)    | |
             |             +----------------+          +----------------+ |
             +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
P5' is the source prefix P5 spoofed by the attacker, which resides
inside AS 2 or is connected to AS 2 through other ASes.
]]></artwork>
          </figure>
          <t><xref target="inter-customer-direct"/> presents the test case of SAV for customer-facing ASes in the scenario of direct attacks. In this test case, the direct attack by source address spoofing takes place within the DUT's customer cone, where the attacker spoofs a source address (P5) and directly targets the victim's IP address (P1), overwhelming its network resources. The Tester performs the source address spoofing function as an attacker. The arrows in <xref target="inter-customer-direct"/> illustrate the commercial relationships between ASes.  AS 3 serves as the provider for the DUT and AS 5, while the DUT acts as the provider for AS 1, AS 2, and AS 5.  Additionally, AS 2 is the provider for AS 1.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for customer-facing ASes in the scenario of direct attacks:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for customer-facing ASes in the scenario of direct attacks, a testbed can be built as shown in <xref target="inter-customer-direct"/> to construct the test network environment. The Tester is connected to AS 2 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes, including AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the scenario of direct attacks.</t>
            </li>
            <li>
              <t>Finally, the Tester sends the traffic using P5 as source addresses and P1 as destination addresses (spoofing traffic) to AS 1 via the DUT.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic with source addresses in P5 from the direction of AS 2 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="inter-customer-direct"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
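          <t>The expected behavior of the DUT in this test case can be sketched as a per-interface SAV rule lookup. The following Python model is a hypothetical illustration, assuming the DUT has derived that only P1, P2, and P6 may legitimately be sourced from the AS 2 direction; the prefix values are illustrative documentation addresses, not values mandated by this document.</t>
          <sourcecode type="python"><![CDATA[
import ipaddress

# Hypothetical SAV rules on the DUT for its AS 2-facing interface:
# only prefixes legitimately sourced from that direction (P1, P2, P6).
SAV_TABLE = {
    "to_AS2": ["203.0.113.0/24",   # P1 (AS 1)
               "198.51.100.0/24",  # P2 (AS 2)
               "192.0.2.0/24"],    # P6 (AS 1, AS 2)
}

def sav_check(interface: str, src: str) -> str:
    """Permit a packet only if its source matches a SAV rule for the
    incoming interface; otherwise block it."""
    addr = ipaddress.ip_address(src)
    for prefix in SAV_TABLE.get(interface, []):
        if addr in ipaddress.ip_network(prefix):
            return "permit"
    return "block"

print(sav_check("to_AS2", "198.18.0.1"))   # spoofed P5 source -> block
print(sav_check("to_AS2", "203.0.113.9"))  # legitimate P1 source -> permit
]]></sourcecode>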
          <figure anchor="reflection-attack-p">
            <name>SAV for provider-facing ASes in the scenario of reflection attacks.</name>
            <artwork><![CDATA[
                                   +----------------+
                                   |     Tester     |
                                   |   (Attacker)   |
                                   |      (P1')     |
                                   +----------------+
                                        |     /\
                                        |      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
| Test Network Environment              \/     |                    |
|                                  +----------------+               |
|                                  |                |               |
|                                  |    AS 3(P3)    |               |
|                                  |                |               |
|                                  +-+/\----+/\+----+               |
|                                     /       \                     |
|                                    /         \                    |
|                                   /           \                   |
|                                  / (C2P/P2P)   \                  |
|                         +----------------+      \                 |
|                         |     DUT(P4)    |       \                |
|                         ++/\+--+/\+--+/\++        \               |
|            P6[AS 1, AS 2] /     |      \           \              |
|                 P2[AS 2] /      |       \           \             |
|                         /       |        \           \            |
|                        / (C2P)  |         \ P5[AS 5]  \ P5[AS 5]  |
|        +----------------+       |          \           \          |
|Server+-+    AS 2(P2)    |       | P1[AS 1]  \           \         |
|        +----------+/\+--+       | P6[AS 1]   \           \        |
|            P6[AS 1] \           | NO_EXPORT   \           \       |
|             P1[AS 1] \          |              \           \      |
|             NO_EXPORT \         |               \           \     |
|                        \ (C2P)  | (C2P)    (C2P) \     (C2P) \    |
|                      +----------------+        +----------------+ |
|              Victim+-+  AS 1(P1, P6)  |        |    AS 5(P5)    | |
|                      +----------------+        +----------------+ |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
P1' is the source prefix P1 spoofed by the attacker, which resides
inside AS 3 or is connected to AS 3 through other ASes.
]]></artwork>
          </figure>
          <t><strong>SAV for Provider/Peer-facing ASes</strong>: <xref target="reflection-attack-p"/> depicts the test case of SAV for provider-facing ASes in the scenario of reflection attacks. In this test case, the attacker spoofs the victim's IP address (P1) and sends requests to servers' IP address (P2) that respond to such requests. The Tester performs the source address spoofing function as an attacker. The servers then send overwhelming responses back to the victim, exhausting its network resources. The arrows in <xref target="reflection-attack-p"/> represent the commercial relationships between ASes. AS 3 acts as the provider or lateral peer of the DUT and the provider for AS 5, while the DUT serves as the provider for AS 1, AS 2, and AS 5.  Additionally, AS 2 is the provider for AS 1.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for provider-facing ASes in the scenario of reflection attacks:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for provider-facing ASes in the scenario of reflection attacks, a testbed can be built as shown in <xref target="reflection-attack-p"/> to construct the test network environment. The Tester is connected to AS 3 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes, including AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the scenario of reflection attacks.</t>
            </li>
            <li>
              <t>Finally, the Tester sends the traffic using P1 as source addresses and P2 as destination addresses (spoofing traffic) to AS 2 via AS 3 and the DUT.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic with source addresses in P1 from the direction of AS 3 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="reflection-attack-p"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
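          <t>The expected blocking behavior for provider-facing SAV can be sketched as follows: sources belonging to prefixes that originate only inside the DUT's customer cone (P1, P2, and P6 in this topology) cannot legitimately arrive from the provider direction (AS 3). The Python model below is a minimal sketch under that assumption; the multihomed prefix P5 is deliberately excluded from the block set because AS 5 is also homed to AS 3, and all prefix values are illustrative rather than mandated.</t>
          <sourcecode type="python"><![CDATA[
import ipaddress

# Prefixes that, in this topology, originate only inside the DUT's
# customer cone (P1, P2, P6). P5 is excluded: AS 5 is also homed to
# AS 3, so P5 sources may legitimately arrive from the provider side.
CONE_ONLY_PREFIXES = ["203.0.113.0/24",   # P1 (AS 1)
                      "198.51.100.0/24",  # P2 (AS 2)
                      "192.0.2.0/24"]     # P6 (AS 1, AS 2)

def provider_facing_check(src: str) -> str:
    """Block traffic arriving from AS 3 whose source lies inside the
    DUT's customer cone; permit everything else."""
    addr = ipaddress.ip_address(src)
    for prefix in CONE_ONLY_PREFIXES:
        if addr in ipaddress.ip_network(prefix):
            return "block"
    return "permit"

print(provider_facing_check("203.0.113.1"))  # spoofed P1 source -> block
print(provider_facing_check("100.64.0.1"))   # other source -> permit
]]></sourcecode>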
          <figure anchor="direct-attack-p">
            <name>SAV for provider-facing ASes in the scenario of direct attacks.</name>
            <artwork><![CDATA[
                           +----------------+
                           |     Tester     |
                           |   (Attacker)   |
                           |      (P2')     |
                           +----------------+
                                |     /\
                                |      |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
| Test Network Environment      \/     |                    |
|                          +----------------+               |
|                          |    AS 3(P3)    |               |
|                          +-+/\----+/\+----+               |
|                             /       \                     |
|                            /         \                    |
|                           /           \                   |
|                          / (C2P/P2P)   \                  |
|                 +----------------+      \                 |
|                 |     DUT(P4)    |       \                |
|                 ++/\+--+/\+--+/\++        \               |
|    P6[AS 1, AS 2] /     |      \           \              |
|         P2[AS 2] /      |       \           \             |
|                 /       |        \           \            |
|                / (C2P)  |         \ P5[AS 5]  \ P5[AS 5]  |
|+----------------+       |          \           \          |
||    AS 2(P2)    |       | P1[AS 1]  \           \         |
|+----------+/\+--+       | P6[AS 1]   \           \        |
|    P6[AS 1] \           | NO_EXPORT   \           \       |
|     P1[AS 1] \          |              \           \      |
|     NO_EXPORT \         |               \           \     |
|                \ (C2P)  | (C2P)    (C2P) \     (C2P) \    |
|              +----------------+        +----------------+ |
|      Victim+-+  AS 1(P1, P6)  |        |    AS 5(P5)    | |
|              +----------------+        +----------------+ |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
P2' is the source prefix P2 spoofed by the attacker, which resides
inside AS 3 or is connected to AS 3 through other ASes.
]]></artwork>
          </figure>
          <t><xref target="direct-attack-p"/> presents the test case of SAV for provider-facing ASes in the scenario of direct attacks. In this test case, the attacker spoofs another source address (P2) and directly targets the victim's IP address (P1), overwhelming its network resources. The Tester performs the source address spoofing function as an attacker. The arrows in <xref target="direct-attack-p"/> represent the commercial relationships between ASes.  AS 3 acts as the provider or lateral peer of the DUT and the provider for AS 5, while the DUT serves as the provider for AS 1, AS 2, and AS 5.  Additionally, AS 2 is the provider for AS 1.</t>
          <t>The <strong>procedure</strong> is listed below for testing SAV for provider-facing ASes in the scenario of direct attacks:</t>
          <ol spacing="normal" type="1"><li>
              <t>First, in order to test whether the DUT can generate accurate SAV rules for provider-facing ASes in the scenario of direct attacks, a testbed can be built as shown in <xref target="direct-attack-p"/> to construct the test network environment. The Tester is connected to AS 3 and generates the test traffic to the DUT.</t>
            </li>
            <li>
              <t>Then, the ASes, including AS 1, AS 2, AS 3, the DUT, and AS 5, are configured to form the scenario of direct attacks.</t>
            </li>
            <li>
              <t>Finally, the Tester sends the traffic using P2 as source addresses and P1 as destination addresses (spoofing traffic) to AS 1 via AS 3 and the DUT.</t>
            </li>
          </ol>
          <t>The <strong>expected results</strong> are that the DUT can block the spoofing traffic with source addresses in P2 from the direction of AS 3 for this test case.</t>
          <t>Note that the locations of the DUT in <xref target="direct-attack-p"/> can be set at AS 1 and AS 2 to evaluate its SAV accuracy according to the procedure outlined above. The expected results are that the DUT will effectively block spoofing traffic.</t>
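          <t>Across the test cases above, the SAV accuracy observed at the Tester can be summarized by the rate of spoofed packets improperly permitted and of legitimate packets improperly blocked. The following Python sketch computes both rates from per-packet observations; the metric names follow common SAV usage and are assumptions of this example rather than definitions of this document.</t>
          <sourcecode type="python"><![CDATA[
def sav_accuracy(results):
    """Summarize Tester observations.

    results: list of (is_spoofed, was_blocked) pairs, one per test packet.
    Returns (improper_permit_rate, improper_block_rate): the share of
    spoofed packets wrongly forwarded and of legitimate packets wrongly
    dropped. The metric names are assumptions of this sketch.
    """
    spoofed = [blocked for is_spoofed, blocked in results if is_spoofed]
    legit = [blocked for is_spoofed, blocked in results if not is_spoofed]
    improper_permit = spoofed.count(False) / len(spoofed)
    improper_block = legit.count(True) / len(legit)
    return improper_permit, improper_block

# Example: 3 of 4 spoofed packets blocked, both legitimate packets pass.
runs = [(True, True), (True, True), (True, True), (True, False),
        (False, False), (False, False)]
print(sav_accuracy(runs))  # -> (0.25, 0.0)
]]></sourcecode>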
        </section>
        <section anchor="control-plane-performance">
          <name>Control Plane Performance</name>
          <t>The test setup, procedure, and measurements described in <xref target="intra-control-plane-sec"/> can be reused for testing the protocol convergence performance and the protocol message processing performance.</t>
        </section>
        <section anchor="data-plane-performance">
          <name>Data Plane Performance</name>
          <t>The test setup, procedure, and measurements described in <xref target="intra-data-plane-sec"/> can be reused for testing the data plane SAV table refreshing performance and the data plane forwarding performance.</t>
        </section>
      </section>
    </section>
    <section anchor="reporting-format">
      <name>Reporting Format</name>
      <t>Each test has a reporting format that contains some reporting components common to all tests and some components specific to individual tests. The following parameters for test configuration and SAV mechanism settings <bcp14>MUST</bcp14> be reflected in the test report.</t>
      <t>Test Configuration Parameters:</t>
      <ol spacing="normal" type="1"><li>
          <t>Test device hardware and software versions</t>
        </li>
        <li>
          <t>Device CPU load</t>
        </li>
        <li>
          <t>Network topology</t>
        </li>
        <li>
          <t>Test traffic attributes</t>
        </li>
        <li>
          <t>System configuration (e.g., physical or virtual machine, CPU, memory, caches, operating system, interface capacity)</t>
        </li>
        <li>
          <t>Device configuration (e.g., symmetric routing, NO_EXPORT)</t>
        </li>
        <li>
          <t>SAV mechanism</t>
        </li>
      </ol>
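      <t>As a non-normative illustration, the parameters above could be captured in a machine-readable report such as the following Python/JSON skeleton; all field names and values are examples, not a schema mandated by this document.</t>
      <sourcecode type="python"><![CDATA[
import json

# Illustrative report skeleton covering the configuration parameters
# listed above; field names and values are examples only.
report = {
    "device": {"hardware": "example-router", "software": "vX.Y",
               "cpu_load_percent": 35},
    "topology": "inter-domain, customer-facing ASes, reflection attack",
    "traffic": {"frame_size_bytes": 512, "rate_pps": 100000,
                "spoofed_source_prefix": "203.0.113.0/24"},
    "system": {"platform": "physical", "memory_gb": 32,
               "interface_capacity": "10GbE"},
    "device_config": {"symmetric_routing": True, "no_export": True},
    "sav_mechanism": "EFP-uRPF",
}
print(json.dumps(report, indent=2))
]]></sourcecode>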
    </section>
    <section anchor="IANA">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="security">
      <name>Security Considerations</name>
      <t>The benchmarking tests described in this document are limited to the performance characterization of SAV devices in a lab environment with isolated networks.</t>
      <t>The benchmarking network topology will be an independent test setup and <bcp14>MUST NOT</bcp14> be connected to devices that may forward the test traffic into a production network.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC3704">
          <front>
            <title>Ingress Filtering for Multihomed Networks</title>
            <author fullname="F. Baker" initials="F." surname="Baker"/>
            <author fullname="P. Savola" initials="P." surname="Savola"/>
            <date month="March" year="2004"/>
            <abstract>
              <t>BCP 38, RFC 2827, is designed to limit the impact of distributed denial of service attacks, by denying traffic with spoofed addresses access to the network, and to help ensure that traffic is traceable to its correct source network. As a side effect of protecting the Internet against such attacks, the network implementing the solution also protects itself from this and other attacks, such as spoofed management access to networking equipment. There are cases when this may create problems, e.g., with multihoming. This document describes the current ingress filtering operational mechanisms, examines generic issues related to ingress filtering, and delves into the effects on multihoming in particular. This memo updates RFC 2827. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="84"/>
          <seriesInfo name="RFC" value="3704"/>
          <seriesInfo name="DOI" value="10.17487/RFC3704"/>
        </reference>
        <reference anchor="RFC8704">
          <front>
            <title>Enhanced Feasible-Path Unicast Reverse Path Forwarding</title>
            <author fullname="K. Sriram" initials="K." surname="Sriram"/>
            <author fullname="D. Montgomery" initials="D." surname="Montgomery"/>
            <author fullname="J. Haas" initials="J." surname="Haas"/>
            <date month="February" year="2020"/>
            <abstract>
              <t>This document identifies a need for and proposes improvement of the unicast Reverse Path Forwarding (uRPF) techniques (see RFC 3704) for detection and mitigation of source address spoofing (see BCP 38). Strict uRPF is inflexible about directionality, the loose uRPF is oblivious to directionality, and the current feasible-path uRPF attempts to strike a balance between the two (see RFC 3704). However, as shown in this document, the existing feasible-path uRPF still has shortcomings. This document describes enhanced feasible-path uRPF (EFP-uRPF) techniques that are more flexible (in a meaningful way) about directionality than the feasible-path uRPF (RFC 3704). The proposed EFP-uRPF methods aim to significantly reduce false positives regarding invalid detection in source address validation (SAV). Hence, they can potentially alleviate ISPs' concerns about the possibility of disrupting service for their customers and encourage greater deployment of uRPF techniques. This document updates RFC 3704.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="84"/>
          <seriesInfo name="RFC" value="8704"/>
          <seriesInfo name="DOI" value="10.17487/RFC8704"/>
        </reference>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
        <reference anchor="RFC4061">
          <front>
            <title>Benchmarking Basic OSPF Single Router Control Plane Convergence</title>
            <author fullname="V. Manral" initials="V." surname="Manral"/>
            <author fullname="R. White" initials="R." surname="White"/>
            <author fullname="A. Shaikh" initials="A." surname="Shaikh"/>
            <date month="April" year="2005"/>
            <abstract>
              <t>This document provides suggestions for measuring OSPF single router control plane convergence. Its initial emphasis is on the control plane of a single OSPF router. We do not address forwarding plane performance.</t>
              <t>NOTE: In this document, the word "convergence" relates to single router control plane convergence only. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4061"/>
          <seriesInfo name="DOI" value="10.17487/RFC4061"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="intra-domain-ps" target="https://datatracker.ietf.org/doc/draft-ietf-savnet-intra-domain-problem-statement/">
          <front>
            <title>Source Address Validation in Intra-domain Networks Gap Analysis, Problem Statement, and Requirements</title>
            <author>
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="inter-domain-ps" target="https://datatracker.ietf.org/doc/draft-ietf-savnet-inter-domain-problem-statement/">
          <front>
            <title>Source Address Validation in Inter-domain Networks Gap Analysis, Problem Statement, and Requirements</title>
            <author>
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="intra-domain-arch" target="https://datatracker.ietf.org/doc/draft-ietf-savnet-intra-domain-architecture/">
          <front>
            <title>Intra-domain Source Address Validation (SAVNET) Architecture</title>
            <author>
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="inter-domain-arch" target="https://datatracker.ietf.org/doc/draft-wu-savnet-inter-domain-architecture/">
          <front>
            <title>Inter-domain Source Address Validation (SAVNET) Architecture</title>
            <author>
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
      </references>
    </references>

<section numbered="false" anchor="Acknowledgements">
      <name>Acknowledgements</name>
      <t>Many thanks to Aijun Wang, Nan Geng, Susan Hares, and others for their valuable comments on this document.</t>
    </section>
  </back>
  <!-- ##markdown-source:
g1E/ifFGbL3bpb126uC4htPguhwwaKiq8/xK6xojFEbDSYuVC8P4tjD7MEp5
H4bsqWd+GNHZfXimhriizHm+MBTUwl+YOc6DazuSU3bSKzkUU0+pdpVbQx2a
z7asVEiegJuF/2TihEuLDp1Jh0fsJvGk0yFvD11O0kO390Qh3Qs+DE/0qdyX
/C26ZK7z678xgq06P3RFLtyuG6rDdBflbh1vr0+tb+0L4shJGmAkhuk8Kdbo
ySZ0qTloOjx/JTZkL3HkEOc/2qdCimNk9VAf+QXey5yp2/6cHXjI7Zl6Gsod
1gV3OYmsdG/EIeLQCfBMaBzimUq4qgWQ56kbboEpRWsMxG6E9jjKl7Xbpq3d
onZNcdwhYUg72iTK8ZLOLo/p6g/0aSepfyF7odyHu4Z7WGvADAVo8/aajaCu
xErOmKW7a/th6vIF15x91bUSGenKWe48KFlnlqTpZi4vYCkGEiirAG9LnKra
Sruo0ovrBvjh60oe0rXYf/TvA+8A/wyHwwH8fPTvQ5fzVS9l2gUfBcIWVq6J
oFsODHXyx4m4HYAGn1geL2YPNHVFEm9vkly4OaY6qgrrM0uRfY54fz+S7S7u
wSRpZLMiJ12ccBUAkYLDXyExxDN7wdnykcPybn/V9/QBA5PqWiWKBcUVUaIU
dfkGl5WZXV/ZaEHGIgPX/kryzDsPfXqhK4KLqcCj083ozKdPh8tpk725uV4q
LF4hKqWNSqMSVYFtECoXaBIzo7qCWKl08I5jiH6kkEkTxZHEmhBe52i0jswy
MG5dMq8Xkrc/89u7wnkY+SmY4WzGrwSjg7zh/z/jDQQ+Xb8BgUjM70fjNLOE
OOCk5UXQTFyaRjeIgBaJy1Qp1p8mvj4ajbqFuQE/8qGkGYsiuuyDhef8PlAK
wqUrVpaLNhui207CDI+onhgiRVbxsml+8LWM9clBE1GSOLobSwI7V0RoiOmO
61j6X/48KTiqIRxHuh5RXgqqRlPRPWt75am+KbTRXAycRoFfgWIc347+KElK
VswQofQ5KaEay0uuKTYwFw2bONxUlxVLxnSfKglcWjRRPHSKqb8oGZ/wzNlw
Gd0l4nt3706TZDJeMs723bsDuaTQGGRtC6R70v1MN/NAxKblOuOhfSQQ6TmH
mVYvY8lfVVaGMC9wFSIphrhvuHJvsuGD8XvSHj96uo/QFd/sYfiFjrPsS+6h
AtbKJr4lbKB7XVVnM7zIxsG75DK2lGGdst4xKjEOH3flql6UY9jJc7r7ZSZO
p8fuhppjRaqSTBkJAPNLh5HAv6VRPwrjj3hppx+E+bLLgZLGuStmJ5RRlaGu
5MDgNcXVSEtEzkpAoGybOoTNjbWSX9jchF+4k8itjVP5zICgo9O4ivps4EM2
t8jaDmUZEnBLRVZBO8oOxa0ZzRoZbx/aemGwG+hSK08Y3Oju5zocxc2T8EGa
ZHbPvubCBfHhWVrd8b3WGygMcK/tUpAKdqfCeOk78Cyua06qKogXhHA3Y1KC
AbrdEr8Su1XIrOMNKCXAbJNLUPTWidZ78BrZLd9s0orq4TORCU/9EtfWlJC9
Bw/a7o9pcUNWaY5NXJFVyrHckU3a7kb7IPJoM7pV3rhTrXRbbKWzcSNI5xDy
uY9BQ27y0qFJQn7WmhJWkwMkKxMX8zFLyzxwWUuayIrRihu4PysoSBcXqJOh
uHWArs0B6tQeG7s/XVXWzONUX9sJ6sJ8dxeoByYHrlGpq1zr3KAufJWcIZfs
rCHJuPGr+0BUM7xuMvwIFdYX0TKJpNjXOZFX1X77g5hmNo3Iet3IPGcWDQ5o
fXlES4KgWCzLjQX/4cZNaKQsT1LszhkUya5q2l1ttmzz6aseS1mVzWvbehku
3Oqy4iBRds++mqGhevmdETu06bk5r9aPyl/lupCGMafcvL0G2lrnzcUlrTpf
O8aUmF0vuF6ZKeeY0iy3LgF1lY/SGOKSjbw1Ud+zfMNuTaxcAN3p1kS5po5u
kzbvQxQAdLq3YHaq39g9iG0LVbqvU76aTbdehanKSvOm3PTi+MQ73B4duk/k
aC774d4HUereh4crlu1Za/2d3DXmNpfGu7I3bz61fjmyN+Xe87afHYx2GrJX
czvWez+sy17NLX6C1m6PHu+Yj6rZHWVj60B59Jf/q1vqQ1tuJSz53MphZ9e5
Rwd/A706+ElJ2s2v9aNatmom/bw2eyW3bqZLM8foCXL25Cf7B+au6Usm1ZrS
MbfsSwfbowOjL116o30sY/+n2uylss2+BLmfqtzu7JV6G2x5NS/0D5VblaPZ
snO7shvtvV/KXtGianabc95Yki36uTfCR9YL/aNS73pT6HhTyX1Jjbe/Pdof
gCx2jArIhn2yPZLHFvVcNn3Evp8me+lVpWYx2S1371vTeaHVnTYVm1fJznMa
O9Arm2VcNDQ/aqElOCld7ta0/J0wth2nNe7VNOmpJZclXiDuEsB6JgK32iti
1mXPdUwTKjP9PaC/hwpa5W416rMOBo0VoY5osOX2mRiIDak8+4IZrCGWbwMM
9KSSrgRBu+gcGoxX3o+TfEaJ7NCOmFJLquRtm/BlxJfLj+hqjyqT1cttcpyo
ooRnRZrLVU7+tLSqk/vOdH+IVYTi6BmInP2jAKnKoFBuDaA7TpykLH4pzuT1
4tI4qD2ly/ChFz7IRy4jar4BBVRNlGZu/bPfP8YWQCZ2v0EO6O9j18HCFbxA
cS7n+mr135xb5FVjkx7uSlmzj13LNSlr8tYdEKqapn72NqlWNTYESVpV6MZx
F4qosYxTu1ivxttQ1tzPpFEqeysSdgjHDiTqko/xlburtBxoRYtEtfGxS3Sf
F9BaYtOVmaq8r/skLWt3FDfZNZuh9PpBDmjxgwdt01Pbc0RPy1iOUKRlkWlj
6LkfFYRQiTlhBf1UwD1lyvCSqoiDK+PkXGDn7RvLaMKW0ZQrB1u5HG8BDc/h
EN8CGq7Mt4CGO/stoPH5AI0qIgG537z9+4v/Hr19d1qXXQMaZURiJUBDl9Mn
oLFTfXELaNwCGpofJ6ARLRZrARpNd1LS1kiaMFOabh78USq+LwxjRY7srZm3
MMc6MIeJb0CcwEVgSng+L2KcJvdzaJexcVnst9+PNFFy41ABJMEBlnIuloty
Lxsd8DI6ola8OuARMyL0TvjeMAzIRHX64dXEdMB9Rn7E1h9s5TjJO7AqUaLD
OvzGfemuXVsN3IRCjVuPWb9GAMda/PPF4jcr2pfPD+qsyPB6SA834F8p0rPu
0TW3UM8t1NMO9fCu85uGerSnvAHio531Voe+6iB3IHIcLwPcSHvCUnACHvI8
TThQN04gcpUAQ1dOHB8Z9DsxoWYio0MMYg9/spAl9XBFTjYCmBQNEypaDWeq
a9kucJN+RX8N4MiJ4rRxYgJIFnzUDD4JSIKPV7JRLqs52zGoMpzkqkUdFFX5
NEJSLYiUpiEa1kSmNLRkftdEavuqBVA5vyOR95nZV0s4k400uak4ORGtqok8
dcJVH6pE8ONAncrWw0WlJNgq+NS4GqeMXZU/JgLWRqW+iT/oJlad2MSijO91
ROqNs+NNHZFLjUsdEja1EjLVJyf02Qynsli/arhKj8VNReCfuhsiOmBYVoWa
oCyVcPvFZMrE2Ft7+6a76Dtbf9HXcdUjM6NDefiXL0Z6EVijH4bnhWAArgAA
CpoIJhFbrxEfGLbibJMs7YqzkeMqAzXwIbkjCh4iigDct7xIY2/7+cm7nTpU
DQrbAFVbofxb1KwGNfsL7T0HkZFEqWjahYXH7kBl1ZZFqXH2SdgDczMBbfVB
hIajKhxPIWgFok9y7SUR3kKyxEN5SAxHjWTBxGYBUTetTJ+r45qwl9kE9oG8
Fd1aadSmMtxDb+BhvNK16JHErA5FuMiQWF1d93VdDyzkoBpg18NVhy1wFahJ
lMhjSXzemlbB/eNK3XvYZ0GRyuwBI+uBQ9wOfaXgUElC6wFCh/XwzMHqgBDZ
GTodbqZ3zxkHWdU10QAzCyXkxoI0sAKJGAbA7O4KgHD0HrE5LQkKCt3VIWyZ
UAuQrJflUMFAANs2J31077JWrAYmNaNFTXzVIkn1l5GsiSTxfvblI0m2R9en
F+vynTvjS2Wfs8ERbwWYViEmI5bGBUcrcFZa2bIBMfwIYMANPK1IzLrYZUNi
BhLlpLYKMQuRqoG12ojVqIQb3mojVodQOWGuVs7qkCoX3OUg1hGxcsBedZx1
QK6q8FcdMflpQrAqMFgLsWYky1ovhdG4hRPUWwc3omWBRIQPyID5cvdSHDLu
4NZ87qaGxLaP8xzPLEh3LjGHNDVlnMwqxEmNc7Y92r9Pi8f+eMnPRXdzZqzu
clJrlJkwXEpm5mIvF7VGpS1jcNbaLwe1GtXogMU5qNUQq64Lc6lgmVq70rqw
OQuRs9aJNRBrGt3cyJiD2I8heAVzgmYra8jKUyw2Vnd1nG3qa4D2qyuB0MnB
k0q4U6jXaIjd4b7odxpwCOMMz2MFH+/OFjlrSVoJfQ4U2sTPKsZgph10grIj
8qTXA55EdvKpieusDnASKcEJnbBFaME666JOjsLrlrJUktKZ35ZTbvie/kdc
xRThucoXHB6E4fR+plEdkD1DPIjJcxVkixENXrVzUmLIpa8tQxvIr5rh0Z4J
9QjQ5L6dHFSb3Gb0nyGACacxb2+cVU9i+kqT4pKSFR9bF/vUVVZe+CPgMlkT
TshPU7yvwRVQ6PYETx4epvIaKore0iD08TikiEcns3CRqYNQSDG518prnRmH
OdCxD9aJ0zrShu4QMf0cdciV0YzaZWYsbzIJ5THuGupz5r4GPKequzcCx6my
tR6so7WjN2jn4MZCOy471PPSnyerLsQhKLqM8/S2RqZp5d9qq2M2gDS0jt3C
Gpar0eAE8c8trLE2MfzcwhrdiN3CGt05u4U1vlhY48ktrGG/clTvFtbwmmEN
K+WXBGs8aYM1nnwOWEPOD260nKYF0eCpzFU0GyMapYLr0Awr2YpIhvBUV0Ez
/DJ90k2aU5UHs+Z+OmV5M/AxoCMuoZhoTtugILkMAyENlXCN+IVqvVv4Yv3l
KEJR8eypGwBe2EytuR5FqsVvALcoW5t1MIuGzUP7a2AW+58Ds3hyjTt6lH59
ZZiF+9N5ta/94R5Oh7W/1Wzau18pm6fnJ7tmW7NuRpF7H1bN0c/6bbGbqhsK
9MEKbktMddqAtNaOqloJ1D9YgcymW7N646avPV4b7fJyUFnrBCEHlfV3eknE
gR/CuvKZQv3s9qqHnVba77X+jq8+93z1vOurn31fa+782njvV3kZwUa7v3rc
/9UJpnHRqYi4pz1gXaCZKp2mBu9hH9jG+6/aoRg3ENMTNz2N4X2uKTl0gS96
B1Mj+KInQHd5ObuVE4NkkLr2ghJ9+vFIXg4xKl0OwY9AdjDTZdHJBgzWwTRX
tirkQKwKuaZVIHI/T447lmhTjwXkcC4wmBojHCWiEl7RAcQfM7/gIEMD4mMC
Ne4GVPdDroLS8F1XLqQFWhzvbEnxEmNm7fMyL/qwcJUKhNOA/vQA4njrozjr
6/KV4jjrs9URy3FrTm9IzuGNRXKuYQVK016jOjTnQJ4DY299lIrdDk1cxTKU
w40hHbeafdWAzopgx4oAzorAjQJsDjoBNusANd0Bmn6AmQ6AzPpAzIYAzIaI
yeYQx2bQxmaQxmZQxnoQxobQxYaQxepQRR8QRU/QxGaQxIpQxIYQhOxXa0IP
PUAOm0ING0IM/UELG0EKa0IJPUEI1wkdIGRw0AYZHFwLZMC9so3hgpqVGiXq
4J1hlICenjzqZN3wv+MKjcoSiphLo7KQ4uDKFlJU4uqqUNaKqb/ooPrqY2pb
QW5EPL3WuoiqtvwG4ug+VkQc9LoiYr85hL7ilREHVxlGVzXsywyh8Q7mZ0mc
p0nkjeiW6JG+dl02FMkIKlYsBpo5rqHi1uuMqg+DMLcOxg3qSHmX7p/ezVgA
cipf/g4E8ySIUMNhYICOhcO55oEfaUtpgMU5NK8/ZfLS59I18apGz/HS6/6r
g3dpN9bFuGwbmzj3xxFt3wSKsxKvfORsu/ie18h7x+jWZnj3Hb7I8ekLH/wa
qsoMcW8cD0UaypxzncAW8MH1gS4yZ940SsYwvtHBbxM8fz6g8VJmhEF0kcS4
GpRLg/KEMV0LXviR8d5TGzvxgORQ2D0jaa6h/LMkgnGJH32V+nOWIxwvJWdf
U6Du0Z6zYObHYTbHdkLeMu/1+5NT7GACwMKrqY2T73glBO6MD55ZdEeq5CNM
4SFETckmDBwVBiJMJxdYHV7ts5x+4MQB9n6eBQz5c5762eg9mAZ/wp+DjZW4
R54skiiZLvmLx6IMdTizPHlfEITx/2QJ9nheEsI2G06HoJqzZUYNBKKC4SlH
sc6h0UNc8AosDEBK8wSPlg7gKV4vjvd087P4M6I74Aemn+EK2sBfwECbL3d4
2U9VZZxlVw4GH+iAR1D43dBuKXHVu/fy+M0xHfWPo7uwnp/u4dNfeQcEo6vO
S0LVjROexycLLQaue94JA/uIFxZUaGXiza+yQ4/BaMzmfvqR+iFNP8FAFYCw
pZaYZWLLRuE8NE7mN/slVAfMMogt/KcvhwysKNcUGlp88BTHptPAR6EwS9CD
nEi3Ihs6GYxLysLt9Rh1D3sQW8DgTG6tMlOklaT/b95SH7CcEskYdcm5v5Sm
pOqagDYkwDzYvEnBh0PBCzEKQRpNgnHxHwcf4+QiwiOx5tTjP90rP/qVoqG4
mI8Z+CV/unvmRxnjUcxrP8Y4zI8/0kzgcfhzEXt/8UmPoJrfM/x2UmTw/Qcf
7S3Lg6FcehyCvuNgicaTH6eV4ymVdjMOt/4f+pHOwqkbAQA=

-->

</rfc>
