<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.4.4) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-ccwg-bbr-04" category="exp" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="BBR">BBR Congestion Control</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-ccwg-bbr-04"/>
    <author initials="N." surname="Cardwell" fullname="Neal Cardwell" role="editor">
      <organization>Google</organization>
      <address>
        <email>ncardwell@google.com</email>
      </address>
    </author>
    <author initials="I." surname="Swett" fullname="Ian Swett" role="editor">
      <organization>Google</organization>
      <address>
        <email>ianswett@google.com</email>
      </address>
    </author>
    <author initials="J." surname="Beshay" fullname="Joseph Beshay" role="editor">
      <organization>Meta</organization>
      <address>
        <email>jbeshay@meta.com</email>
      </address>
    </author>
    <date year="2025" month="October" day="20"/>
    <area>IETF</area>
    <workgroup>CCWG</workgroup>
    <keyword>Congestion Control</keyword>
    <abstract>
      <?line 201?>

<t>This document specifies the BBR congestion control algorithm. BBR ("Bottleneck
Bandwidth and Round-trip propagation time") uses recent measurements of a
transport connection's delivery rate, round-trip time, and packet loss rate
to build an explicit model of the network path. BBR then uses this model to
control both how fast it sends data and the maximum volume of data it allows
in flight in the network at any time. Relative to loss-based congestion control
algorithms such as Reno <xref target="RFC5681"/> or CUBIC <xref target="RFC9438"/>, BBR offers
substantially higher throughput for bottlenecks
with shallow buffers or random losses, and substantially lower queueing delays
for bottlenecks with deep buffers (avoiding "bufferbloat"). BBR can be
implemented in any transport protocol that supports packet-delivery
acknowledgment. Thus far, open source implementations are available
for TCP <xref target="RFC9293"/> and QUIC <xref target="RFC9000"/>. This document
specifies version 3 of the BBR algorithm, BBRv3.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Congestion Control Working Group mailing list (ccwg@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/ccwg/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/ietf-wg-ccwg/draft-cardwell-ccwg-bbr"/>.</t>
    </note>
  </front>
  <middle>
    <?line 219?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>The Internet has traditionally used loss-based congestion control algorithms
like Reno (<xref target="Jac88"/>, <xref target="Jac90"/>, <xref target="WS95"/>  <xref target="RFC5681"/>) and CUBIC (<xref target="HRX08"/>,
<xref target="RFC9438"/>). These algorithms worked well for many years because
they were sufficiently well-matched to the prevalent range of bandwidth-delay
products and degrees of buffering in Internet paths. As the Internet has
evolved, loss-based congestion control is increasingly problematic in several
important scenarios:</t>
      <ol spacing="normal" type="1"><li>
          <t>Shallow buffers: In shallow buffers, packet loss can happen even when a link
  has low utilization. With high-speed, long-haul links employing commodity
  switches with shallow buffers, loss-based congestion control can cause abysmal
  throughput because it overreacts, making large multiplicative decreases in
  sending rate upon packet loss (by 50% in Reno <xref target="RFC5681"/> or 30%
  in CUBIC <xref target="RFC9438"/>), and only slowly growing its sending rate
  thereafter. This can happen even if the packet loss arises from transient
  traffic bursts when the link is mostly idle.</t>
        </li>
        <li>
          <t>Deep buffers: At the edge of today's Internet, loss-based congestion control
  can cause the problem of  "bufferbloat", by repeatedly filling deep buffers
  in last-mile links and causing high queuing delays.</t>
        </li>
        <li>
          <t>Dynamic traffic workloads: With buffers of any depth, dynamic mixes of
  newly-entering flows or flights of data from recently idle flows can cause
  frequent packet loss. In such scenarios loss-based congestion control can
  fail to maintain its fair share of bandwidth, leading to poor application
  performance.</t>
        </li>
      </ol>
      <t>In both the shallow-buffer (1.) and dynamic-traffic (3.) scenarios mentioned
      <t>In both the shallow-buffer (1.) or dynamic-traffic (3.) scenarios mentioned
above it is difficult to achieve full throughput with loss-based congestion
control in practice: for CUBIC, sustaining 10Gbps over 100ms RTT needs a
packet loss rate below 0.000003% (i.e., more than 40 seconds between packet losses),
and over a 100ms RTT path a more feasible loss rate like 1% can only sustain
at most 3 Mbps <xref target="RFC9438"/>. These limitations apply no matter what
the bottleneck link is capable of or what the connection's fair share
is. Furthermore, failure to reach the fair share can cause poor throughput
and poor tail latency for latency-sensitive applications.</t>
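<t>As a non-normative check of the arithmetic above, a short calculation
(assuming 1500-byte packets; the variable names are illustrative) reproduces
the loss interval implied by the cited loss rate:</t>
<sourcecode type="python"><![CDATA[
```python
# Sketch: sanity-check the loss-rate arithmetic above (assumes 1500-byte packets).

PACKET_SIZE = 1500 * 8          # bits per packet
RATE = 10e9                     # 10 Gbps target throughput
LOSS_RATE = 0.000003 / 100      # 0.000003% expressed as a fraction

packets_per_sec = RATE / PACKET_SIZE                     # ~833,333 packets/sec
secs_between_losses = 1 / (LOSS_RATE * packets_per_sec)  # ~40 seconds per loss
```
]]></sourcecode>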
      <t>The BBR ("Bottleneck Bandwidth and Round-trip propagation time") congestion
control algorithm is a model-based algorithm that takes an approach different
from loss-based congestion control: BBR uses recent measurements of a transport
connection's delivery rate,  round-trip time, and packet loss rate to build
an explicit model of the network path, including its estimated available
bandwidth, bandwidth-delay product, and the maximum volume of data that the
connection can place in flight in the network without causing excessive queue
pressure. It then uses this model in order to guide its control behavior
in seeking high throughput and low queue pressure.</t>
      <t>This document describes the current version of the BBR algorithm, BBRv3.
The original version of the algorithm, BBRv1, was described previously at a
high level <xref target="CCGHJ16"/><xref target="CCGHJ17"/>. The implications of BBR
in allowing high utilization of high-speed networks with shallow buffers
have been discussed in other work <xref target="MM19"/>. Active work on the BBR
algorithm is continuing.</t>
      <t>This document is organized as follows. Section 2 provides various definitions
that will be used throughout this document. Section 3 provides an overview
of the design of the BBR algorithm, and Section 4 describes the BBR algorithm
in detail, including BBR's network path model, control parameters, and state
machine. Section 5 describes the implementation status, Section 6 describes
security considerations, Section 7 notes that there are no IANA considerations,
and Section 8 closes with Acknowledgments.</t>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>This document defines state variables and constants used by the BBR algorithm.</t>
      <t>Constant values have CamelCase names and are used by BBR throughout
its operation for a given connection. Variables have snake_case names.
All names are prefixed with the context they
belong to: (C) for connection state, (P) for per-packet state, (RS) for
per-ack rate sample, or (BBR) for the algorithm's internal state.
Variables that are not defined below are defined in
<xref target="delivery-rate-samples"/>, "Delivery Rate Samples".</t>
      <t>In the pseudocode in this document, all functions have implicit access to the
(C) connection state and (BBR) congestion control algorithm state for that
connection. All functions involved in ACK processing additionally have implicit
access to the (RS) rate sample state populated while processing that ACK.</t>
      <t>In this document, the unit of all volumes of data is bytes, the unit of
all times is seconds, and the unit of all data rates is bytes per second.
Implementations MAY use other units, such as bits and bits per second,
or packets and packets per second, as long as the implementation applies
conversions as appropriate. However, since packet sizes can vary
due to changes in MTU or application message sizes, data rates
computed in packets per second can be inaccurate, and thus it is
RECOMMENDED that BBR implementations use bytes and bytes per second.</t>
      <t>In this document, "acknowledged" or "delivered" data means any transmitted
data that the remote transport endpoint has confirmed that it has received,
e.g., via a QUIC ACK Range <xref target="RFC9000"/>, TCP cumulative acknowledgment
<xref target="RFC9293"/>, or TCP SACK ("Selective Acknowledgment") block <xref target="RFC2018"/>.</t>
      <section anchor="transport-connection-state">
        <name>Transport Connection State</name>
        <t>C.SMSS: The Sender Maximum Send Size in bytes. The maximum
size of a single transmission, including the portion
of the packet that the transport protocol implementation tracks for
congestion control purposes. C.SMSS MUST include transport protocol
payload data. C.SMSS MAY include only the transport protocol payload
data; for example, for TCP BBR implementations the C.SMSS SHOULD be
the Eff.snd.MSS defined in <xref section="3.7.1" sectionFormat="comma" target="RFC9293"/>, which includes
only the TCP transport protocol payload data, but not TCP or IP headers.
C.SMSS MAY include the transport protocol payload data plus the
transport protocol headers; for example, for QUIC BBR implementations
the C.SMSS SHOULD be the QUIC "maximum datagram size"
<xref section="14" sectionFormat="comma" target="RFC9000"/>, which includes the QUIC payload data plus
the QUIC headers, but not UDP or IP headers. In addition to including
transport protocol payload and headers, implementations MAY include
in C.SMSS the size of other headers, such as network-layer or
link-layer headers.</t>
        <t>C.InitialCwnd: The initial congestion window set by the transport protocol
implementation for the connection at initialization time.</t>
        <t>C.delivered: The total amount of data
delivered so far over the lifetime of the transport connection C.
This MUST NOT include pure ACK packets. It SHOULD include spurious
retransmissions that have been acknowledged as delivered.</t>
        <t>C.inflight: The connection's best estimate of the number of bytes
outstanding in the network. This includes the number of bytes that
have been sent and have not been acknowledged or
marked as lost since their last transmission
(e.g. "pipe" from <xref target="RFC6675"/> or "bytes_in_flight" from <xref target="RFC9002"/>).
This MUST NOT include pure ACK packets.</t>
        <t>C.is_cwnd_limited: True if the connection has fully utilized C.cwnd at any
point in the last packet-timed round trip.</t>
        <t>C.next_send_time: The earliest pacing departure time another packet can be
sent.</t>
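        <t>As a non-normative illustration of how C.next_send_time spaces
transmissions under pacing (a minimal sketch; the function name is
hypothetical, and units follow the bytes/seconds convention above):</t>
        <sourcecode type="python"><![CDATA[
```python
def update_next_send_time(now, packet_size, pacing_rate):
    """Return the earliest pacing departure time for the next packet,
    spacing packets so the average rate matches pacing_rate
    (bytes per second, per the units convention in this document)."""
    return now + packet_size / pacing_rate

# Example: at 1,000,000 bytes/sec, a 1500-byte packet delays the
# next departure by 1.5 ms.
t = update_next_send_time(now=10.0, packet_size=1500, pacing_rate=1_000_000)
```
]]></sourcecode>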
      </section>
      <section anchor="per-ack-rate-sample-state">
        <name>Per-ACK Rate Sample State</name>
        <t>RS.delivered: The volume of data delivered between the transmission of the
packet that has just been ACKed and the current time.</t>
        <t>RS.delivery_rate: The delivery rate (aka bandwidth) sample obtained from
the packet that has just been ACKed.</t>
        <t>RS.rtt: The RTT sample calculated based on the most recently-sent packet
of the packets that have just been ACKed.</t>
        <t>RS.newly_acked: The volume of data cumulatively or selectively acknowledged
upon the ACK that was just received. (This quantity is referred to as
"DeliveredData" in <xref target="RFC6937"/>.)</t>
        <t>RS.newly_lost: The volume of data newly marked lost upon the ACK that was
just received.</t>
        <t>RS.tx_in_flight: C.inflight at
the time of the transmission of the packet that has just been ACKed (the
most recently sent packet among packets ACKed by the ACK that was just
received).</t>
        <t>RS.lost: The volume of data that was declared lost between the transmission
and acknowledgment of the packet that has just been ACKed (the most recently
sent packet among packets ACKed by the ACK that was just received).</t>
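        <t>The per-ACK rate sample quantities above can be illustrated with a
simplified, non-normative sketch (the complete sampling algorithm is specified
in "Delivery Rate Samples"; the function name and arguments here are
illustrative):</t>
        <sourcecode type="python"><![CDATA[
```python
def delivery_rate_sample(c_delivered_now, p_delivered_at_tx,
                         tx_time, ack_time):
    """Simplified per-ACK rate sample: the data delivered between the
    ACKed packet's transmission and now (RS.delivered), divided by the
    elapsed interval, yields RS.delivery_rate."""
    rs_delivered = c_delivered_now - p_delivered_at_tx
    interval = ack_time - tx_time
    return rs_delivered / interval if interval > 0 else 0.0

# Example: 150,000 bytes delivered over ~0.1 s -> ~1,500,000 bytes/sec.
rate = delivery_rate_sample(1_150_000, 1_000_000, 5.0, 5.1)
```
]]></sourcecode>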
      </section>
      <section anchor="output-control-parameters">
        <name>Output Control Parameters</name>
        <t>C.cwnd: The transport sender's congestion window. When transmitting data,
the sending connection ensures that C.inflight does not exceed C.cwnd.</t>
        <t>C.pacing_rate: The current pacing rate for a BBR flow, which controls
inter-packet spacing.</t>
        <t>C.send_quantum: The maximum size of a data aggregate scheduled and transmitted
together as a unit, e.g., to amortize per-packet transmission overheads.</t>
      </section>
      <section anchor="pacing-state-and-parameters">
        <name>Pacing State and Parameters</name>
        <t>BBR.pacing_gain: The dynamic gain factor used to scale BBR.bw to produce
C.pacing_rate.</t>
        <t>BBR.StartupPacingGain: A constant specifying the minimum gain value for
calculating the pacing rate that will allow the sending rate to double each
round (4 * ln(2) ~= 2.77) <xref target="BBRStartupPacingGain"/>; used in
Startup mode for BBR.pacing_gain.</t>
        <t>BBR.DrainPacingGain: A constant specifying the pacing gain value used in
Drain mode, to attempt to drain the estimated queue at the bottleneck link
in one round-trip or less. As noted in <xref target="BBRDrainPacingGain"/>, any
value at or below 1 / BBRStartupCwndGain = 1 / 2 = 0.5 will theoretically
achieve this. BBR uses the value 0.35, which has been shown to offer good
performance when compared with other alternatives.</t>
        <t>BBR.PacingMarginPercent: The static discount factor of 1% used to scale BBR.bw
to produce C.pacing_rate.</t>
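        <t>A non-normative sketch of how these pacing parameters combine (the
helper name is illustrative; the normative derivation of C.pacing_rate is
given later in this document):</t>
        <sourcecode type="python"><![CDATA[
```python
STARTUP_PACING_GAIN = 2.77   # ~4 * ln(2), per BBR.StartupPacingGain
PACING_MARGIN_PERCENT = 1    # BBR.PacingMarginPercent

def pacing_rate(bw, pacing_gain):
    """Sketch of deriving C.pacing_rate from BBR.bw: scale the bandwidth
    estimate by the dynamic gain, then apply the 1% static discount."""
    return pacing_gain * bw * (100 - PACING_MARGIN_PERCENT) / 100

# In Startup, a 1,000,000 B/s bandwidth estimate paces at ~2,742,300 B/s.
r = pacing_rate(1_000_000, STARTUP_PACING_GAIN)
```
]]></sourcecode>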
      </section>
      <section anchor="cwnd-state-and-parameters">
        <name>cwnd State and Parameters</name>
        <t>BBR.cwnd_gain: The dynamic gain factor used to scale the estimated BDP to
produce a congestion window (C.cwnd).</t>
        <t>BBR.DefaultCwndGain: A constant specifying the minimum gain value that allows
the sending rate to double each round (2) <xref target="BBRStartupCwndGain"/>.
Used by default in most phases for BBR.cwnd_gain.</t>
      </section>
      <section anchor="general-algorithm-state">
        <name>General Algorithm State</name>
        <t>BBR.state: The current state of a BBR flow in the BBR state machine.</t>
        <t>BBR.round_count: Count of packet-timed round trips elapsed so far.</t>
        <t>BBR.round_start: A boolean that BBR sets to true once per packet-timed round
trip, on ACKs that advance BBR.round_count.</t>
        <t>BBR.next_round_delivered: P.delivered value denoting the end of a
packet-timed round trip.</t>
        <t>BBR.idle_restart: A boolean that is true if and only if a connection is
restarting after being idle.</t>
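        <t>The round-trip accounting above can be illustrated with a
non-normative sketch (the class and method names are illustrative;
P.delivered, the value of C.delivered when the ACKed packet was sent, is
passed in as p_delivered):</t>
        <sourcecode type="python"><![CDATA[
```python
class RoundTracker:
    """Sketch of packet-timed round-trip counting: a round ends when an
    ACKed packet was sent at or after the C.delivered mark recorded when
    the current round started."""
    def __init__(self):
        self.round_count = 0          # BBR.round_count
        self.round_start = False      # BBR.round_start
        self.next_round_delivered = 0 # BBR.next_round_delivered

    def on_ack(self, p_delivered, c_delivered):
        if p_delivered >= self.next_round_delivered:
            self.next_round_delivered = c_delivered
            self.round_count += 1
            self.round_start = True
        else:
            self.round_start = False

rt = RoundTracker()
rt.on_ack(p_delivered=0, c_delivered=10_000)      # starts round 1
rt.on_ack(p_delivered=5_000, c_delivered=20_000)  # still in round 1
rt.on_ack(p_delivered=10_000, c_delivered=30_000) # advances to round 2
```
]]></sourcecode>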
      </section>
      <section anchor="core-algorithm-design-parameters">
        <name>Core Algorithm Design Parameters</name>
        <t>BBR.LossThresh: A constant specifying the maximum tolerated per-round-trip
packet loss rate when probing for bandwidth (the default is 2%).</t>
        <t>BBR.Beta: A constant specifying the default multiplicative decrease to
make upon each round trip during which the connection detects packet
loss (the value is 0.7).</t>
        <t>BBR.Headroom: A constant specifying the multiplicative factor to
apply to BBR.inflight_longterm when calculating a volume of free headroom
to try to leave unused in the path
(e.g. free space in the bottleneck buffer or free time slots in the bottleneck
link) that can be used by cross traffic (the value is 0.15).</t>
        <t>BBR.MinPipeCwnd: The minimal C.cwnd value BBR targets, to allow pipelining with
endpoints that follow an "ACK every other packet" delayed-ACK policy:
4 * C.SMSS.</t>
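        <t>As a non-normative illustration of how BBR.LossThresh might be
applied against a rate sample (the predicate name is illustrative; the
normative use of this threshold is specified later in this document):</t>
        <sourcecode type="python"><![CDATA[
```python
LOSS_THRESH = 0.02   # BBR.LossThresh: 2% per-round-trip loss tolerance

def inflight_too_high(rs_lost, rs_tx_in_flight):
    """Sketch: while probing for bandwidth, loss measured against the
    inflight at transmission time (RS.tx_in_flight) signals excessive
    queue pressure once it crosses BBR.LossThresh."""
    return rs_lost > LOSS_THRESH * rs_tx_in_flight

ok = inflight_too_high(rs_lost=1_000, rs_tx_in_flight=100_000)   # 1% loss
bad = inflight_too_high(rs_lost=3_000, rs_tx_in_flight=100_000)  # 3% loss
```
]]></sourcecode>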
      </section>
      <section anchor="network-path-model-parameters">
        <name>Network Path Model Parameters</name>
        <section anchor="data-rate-network-path-model-parameters">
          <name>Data Rate Network Path Model Parameters</name>
          <t>The data rate model parameters together estimate both the sending rate required
to reach the full bandwidth available to the flow (BBR.max_bw), and the maximum
pacing rate control parameter that is consistent with the queue pressure
objective (BBR.bw).</t>
          <t>BBR.max_bw: The windowed maximum recent bandwidth sample, obtained using
the BBR delivery rate sampling algorithm in <xref target="delivery-rate-samples"/>,
measured during the current or previous bandwidth probing cycle (or during
Startup, if the flow is still in that state). (Part of the long-term
model.)</t>
          <t>BBR.bw_shortterm: The short-term maximum sending bandwidth that the algorithm
estimates is safe for matching the current network path delivery rate, based
on any loss signals in the current bandwidth probing cycle. This is generally
lower than max_bw. (Part of the short-term model.)</t>
          <t>BBR.bw: The maximum sending bandwidth that the algorithm estimates is
appropriate for matching the current network path delivery rate, given all
available signals in the model, at any time scale. It is the min() of max_bw
and bw_shortterm.</t>
        </section>
        <section anchor="data-volume-network-path-model-parameters">
          <name>Data Volume Network Path Model Parameters</name>
          <t>The data volume model parameters together estimate both the inflight
required to reach the full bandwidth available to the flow
(BBR.max_inflight), and the maximum inflight that is consistent with the
queue pressure objective (C.cwnd).</t>
          <t>BBR.min_rtt: The windowed minimum round-trip time sample measured over the
last BBR.MinRTTFilterLen = 10 seconds. This attempts to estimate the two-way
propagation delay of the network path when all connections sharing a bottleneck
are using BBR, but also allows BBR to estimate the value required for a BBR.bdp
estimate that allows full throughput if there are legacy loss-based Reno
or CUBIC flows sharing the bottleneck.</t>
          <t>BBR.bdp: The estimate of the network path's BDP (Bandwidth-Delay Product),
computed as: BBR.bdp = BBR.bw * BBR.min_rtt.</t>
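          <t>A non-normative worked example of the rate and volume estimates
combining as defined above (the function name is illustrative):</t>
          <sourcecode type="python"><![CDATA[
```python
def model_bdp(max_bw, bw_shortterm, min_rtt):
    """BBR.bw is the min() of the long-term and short-term bandwidth
    estimates; BBR.bdp = BBR.bw * BBR.min_rtt."""
    bw = min(max_bw, bw_shortterm)
    return bw * min_rtt

# Example: 1,250,000 B/s (10 Mbps) and a 40 ms min_rtt give a 50,000-byte BDP.
bdp = model_bdp(max_bw=1_250_000, bw_shortterm=1_500_000, min_rtt=0.040)
```
]]></sourcecode>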
          <t>BBR.extra_acked: A volume of data that is the estimate of the recent degree
of aggregation in the network path.</t>
          <t>BBR.offload_budget: The estimate of the minimum volume of data necessary
to achieve full throughput when using sender (TSO/GSO) and receiver (LRO,
GRO) host offload mechanisms.</t>
          <t>BBR.max_inflight: The estimate of C.inflight required to
fully utilize the bottleneck bandwidth available to the flow, based on the
BDP estimate (BBR.bdp), the aggregation estimate (BBR.extra_acked), the offload
budget (BBR.offload_budget), and BBR.MinPipeCwnd.</t>
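          <t>A simplified, non-normative sketch of how these quantities might
combine into BBR.max_inflight (the normative computation is specified later in
this document; a 1500-byte C.SMSS is assumed for BBR.MinPipeCwnd):</t>
          <sourcecode type="python"><![CDATA[
```python
MIN_PIPE_CWND = 4 * 1500   # BBR.MinPipeCwnd with a 1500-byte C.SMSS

def max_inflight(bdp, extra_acked, offload_budget):
    """Sketch: the inflight needed for full throughput covers the BDP
    plus the aggregation allowance, is at least the offload budget, and
    never falls below BBR.MinPipeCwnd."""
    return max(bdp + extra_acked, offload_budget, MIN_PIPE_CWND)

mi = max_inflight(bdp=50_000, extra_acked=8_000, offload_budget=30_000)  # 58,000
```
]]></sourcecode>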
          <t>BBR.inflight_longterm: The long-term maximum inflight that the
algorithm estimates will produce acceptable queue pressure, based on signals
in the current or previous bandwidth probing cycle, as measured by loss. That
is, if a flow is probing for bandwidth, and observes that sending a particular
inflight causes a loss rate higher than the loss rate
threshold, it sets inflight_longterm to that volume of data. (Part of the long-term
model.)</t>
          <t>BBR.inflight_shortterm: Analogous to BBR.bw_shortterm, the short-term maximum
inflight that the algorithm estimates is safe for matching the
current network path delivery process, based on any loss signals in the current
bandwidth probing cycle. This is generally lower than max_inflight or
inflight_longterm. (Part of the short-term model.)</t>
        </section>
      </section>
      <section anchor="state-for-responding-to-congestion">
        <name>State for Responding to Congestion</name>
        <t>RS: The rate sample calculated from the most recent acknowledgment.</t>
        <t>BBR.bw_latest: A 1-round-trip max of delivered bandwidth (RS.delivery_rate).</t>
        <t>BBR.inflight_latest: A 1-round-trip max of delivered volume of data
(RS.delivered).</t>
      </section>
      <section anchor="estimating-bbrmaxbw">
        <name>Estimating BBR.max_bw</name>
        <t>BBR.max_bw_filter: A windowed max filter for RS.delivery_rate
samples, for estimating BBR.max_bw.</t>
        <t>BBR.MaxBwFilterLen: A constant specifying the filter window length for
BBR.max_bw_filter = 2 (representing
up to 2 ProbeBW cycles, the current cycle and the previous full cycle).</t>
        <t>BBR.cycle_count: The virtual time used by the BBR.max_bw filter window. Note
that BBR.cycle_count only needs to be tracked with a single bit, since the
BBR.max_bw_filter only needs to track samples from two time slots: the previous
ProbeBW cycle and the current ProbeBW cycle.</t>
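        <t>The two-slot structure described above can be illustrated with a
non-normative sketch (class and method names are illustrative):</t>
        <sourcecode type="python"><![CDATA[
```python
class MaxBwFilter:
    """Sketch of the BBR.max_bw windowed max filter: one max per ProbeBW
    cycle, for the current and previous cycles (BBR.MaxBwFilterLen = 2),
    so the virtual time BBR.cycle_count needs only a single bit."""
    def __init__(self):
        self.slots = [0, 0]   # maxes for the two tracked cycles
        self.cycle = 0        # single-bit virtual time (BBR.cycle_count)

    def update(self, delivery_rate):
        self.slots[self.cycle] = max(self.slots[self.cycle], delivery_rate)

    def advance_cycle(self):
        self.cycle ^= 1
        self.slots[self.cycle] = 0   # start a fresh max for the new cycle

    @property
    def max_bw(self):
        return max(self.slots)

f = MaxBwFilter()
f.update(1_000_000)
f.advance_cycle()
f.update(800_000)
# f.max_bw still reflects the previous cycle's 1,000,000 B/s sample.
```
]]></sourcecode>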
      </section>
      <section anchor="estimating-bbrextraacked">
        <name>Estimating BBR.extra_acked</name>
        <t>BBR.extra_acked_interval_start: The start of the time interval for estimating
the excess amount of data acknowledged due to aggregation effects.</t>
        <t>BBR.extra_acked_delivered: The volume of data marked as delivered since
BBR.extra_acked_interval_start.</t>
        <t>BBR.extra_acked_filter: A windowed max filter for tracking the degree of
aggregation in the path.</t>
        <t>BBR.ExtraAckedFilterLen: A constant specifying the length of the
BBR.extra_acked_filter max filter window in steady state = 10 (in units of
packet-timed round trips).</t>
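        <t>As a non-normative sketch of how an aggregation sample might be
formed from the state above (the function name is illustrative; the normative
algorithm is specified later in this document): data acknowledged over an
interval beyond what the bandwidth estimate predicts is attributed to
aggregation.</t>
        <sourcecode type="python"><![CDATA[
```python
def extra_acked_sample(delivered_in_interval, bw, interval_start, now):
    """Sketch: the 'extra' data acknowledged beyond what the estimated
    bandwidth predicts over the interval is attributed to aggregation;
    such samples feed the BBR.extra_acked_filter windowed max."""
    expected = bw * (now - interval_start)
    return max(0.0, delivered_in_interval - expected)

# Example: 120,000 bytes ACKed in 80 ms at 1,250,000 B/s -> ~20,000 extra.
extra = extra_acked_sample(120_000, 1_250_000, 0.0, 0.080)
```
]]></sourcecode>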
      </section>
      <section anchor="startup-parameters-and-state">
        <name>Startup Parameters and State</name>
        <t>BBR.full_bw_reached: A boolean that records whether BBR estimates that it
has ever fully utilized its available bandwidth over the lifetime of the
connection.</t>
        <t>BBR.full_bw_now: A boolean that records whether BBR estimates that it has
fully utilized its available bandwidth since it most recently started looking.</t>
        <t>BBR.full_bw: A recent baseline BBR.max_bw to estimate if BBR has "filled
the pipe" in Startup.</t>
        <t>BBR.full_bw_count: The number of non-app-limited round trips without large
increases in BBR.full_bw.</t>
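        <t>A non-normative sketch of how this state might detect a "filled
pipe" (the 25% growth threshold and 3-round count used here are illustrative
assumptions, not normative values from this section):</t>
        <sourcecode type="python"><![CDATA[
```python
class FullBwDetector:
    """Sketch of Startup 'filled the pipe' detection: if BBR.max_bw grows
    by less than GROWTH across ROUNDS consecutive non-app-limited rounds,
    the available bandwidth is judged fully utilized. The 1.25 and 3
    figures are illustrative assumptions."""
    GROWTH = 1.25
    ROUNDS = 3

    def __init__(self):
        self.full_bw = 0              # BBR.full_bw baseline
        self.full_bw_count = 0        # BBR.full_bw_count
        self.full_bw_reached = False  # BBR.full_bw_reached

    def on_round(self, max_bw):
        if max_bw >= self.full_bw * self.GROWTH:
            self.full_bw = max_bw     # still growing: reset the baseline
            self.full_bw_count = 0
        else:
            self.full_bw_count += 1
            if self.full_bw_count >= self.ROUNDS:
                self.full_bw_reached = True

d = FullBwDetector()
for bw in (1_000_000, 1_200_000, 1_210_000, 1_215_000, 1_215_000):
    d.on_round(bw)
```
]]></sourcecode>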
      </section>
      <section anchor="probertt-and-minrtt-parameters-and-state">
        <name>ProbeRTT and min_rtt Parameters and State</name>
        <section anchor="parameters-for-estimating-bbrminrtt">
          <name>Parameters for Estimating BBR.min_rtt</name>
          <t>BBR.min_rtt_stamp: The wall clock time at which the current BBR.min_rtt sample
was obtained.</t>
          <t>BBR.MinRTTFilterLen: A constant specifying the length of the BBR.min_rtt min
filter window: 10 secs.</t>
        </section>
        <section anchor="parameters-for-scheduling-probertt">
          <name>Parameters for Scheduling ProbeRTT</name>
          <t>BBR.ProbeRTTCwndGain: A constant specifying the gain value for calculating
C.cwnd during ProbeRTT: 0.5 (meaning that ProbeRTT attempts to reduce in-flight
data to 50% of the estimated BDP).</t>
          <t>BBR.ProbeRTTDuration: A constant specifying the minimum duration for which
the ProbeRTT state holds C.inflight at or below BBR.MinPipeCwnd: 200 ms.</t>
          <t>BBR.ProbeRTTInterval: A constant specifying the minimum time interval between
ProbeRTT states: 5 secs.</t>
          <t>BBR.probe_rtt_min_delay: The minimum RTT sample recorded in the last
ProbeRTTInterval.</t>
          <t>BBR.probe_rtt_min_stamp: The wall clock time at which the current
BBR.probe_rtt_min_delay sample was obtained.</t>
          <t>BBR.probe_rtt_expired: A boolean recording whether the BBR.probe_rtt_min_delay
has expired and is due for a refresh with an application idle period or a
transition into ProbeRTT state.</t>
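          <t>A non-normative sketch of the expiry check implied by the state
above (the function name is illustrative):</t>
          <sourcecode type="python"><![CDATA[
```python
PROBE_RTT_INTERVAL = 5.0   # BBR.ProbeRTTInterval, in seconds

def probe_rtt_expired(now, probe_rtt_min_stamp):
    """Sketch: the filtered min-RTT sample is due for a refresh once more
    than BBR.ProbeRTTInterval has elapsed since it was recorded."""
    return now > probe_rtt_min_stamp + PROBE_RTT_INTERVAL

a = probe_rtt_expired(now=14.0, probe_rtt_min_stamp=10.0)  # 4 s: not yet
b = probe_rtt_expired(now=15.5, probe_rtt_min_stamp=10.0)  # 5.5 s: expired
```
]]></sourcecode>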
          <t>The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to
be interpreted as described in <xref target="RFC2119"/>.</t>
        </section>
      </section>
    </section>
    <section anchor="design-overview">
      <name>Design Overview</name>
      <section anchor="high-level-design-goals">
        <name>High-Level Design Goals</name>
        <t>The high-level goal of BBR is to achieve both:</t>
        <ol spacing="normal" type="1"><li>
            <t>The full throughput (or approximate fair share thereof) available to a flow  </t>
            <ul spacing="normal">
              <li>
                <t>Achieved in a fast and scalable manner
(using bandwidth in O(log(BDP)) time).</t>
              </li>
              <li>
                <t>Achieved with average packet loss rates of up to 1%.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Low queue pressure (low queuing delay and low packet loss).</t>
          </li>
        </ol>
        <t>These goals are in tension: sending faster improves the odds of achieving
(1) but reduces the odds of achieving (2), while sending slower improves
the odds of achieving (2) but reduces the odds of achieving (1). Thus the
algorithm cannot maximize throughput or minimize queue pressure independently,
and must jointly optimize both.</t>
        <t>To try to achieve these goals, and seek an operating point with high throughput
and low delay <xref target="K79"/> <xref target="GK81"/>, BBR aims to adapt its sending process to
match the network delivery process, in two dimensions:</t>
        <ol spacing="normal" type="1"><li>
            <t>data rate: the rate at which the flow sends data should ideally match the
  rate at which the network delivers the flow's data (the available bottleneck
  bandwidth)</t>
          </li>
          <li>
            <t>data volume: the amount of data in flight in the network
  should ideally match the bandwidth-delay product (BDP) of the path</t>
          </li>
        </ol>
        <t>Both the control of the data rate (via the pacing rate) and data volume
(directly via the congestion window; and indirectly via the pacing
rate) are important. A mismatch in either dimension can cause the sender to
fail to meet its high-level design goals:</t>
        <ol spacing="normal" type="1"><li>
            <t>volume mismatch: If a sender perfectly matches its sending rate to the
  available bandwidth, but its C.inflight exceeds the BDP, then
  the sender can maintain a large standing queue, increasing network latency
  and risking packet loss.</t>
          </li>
          <li>
            <t>rate mismatch: If a sender's C.inflight matches the BDP
  perfectly but its sending rate exceeds the available bottleneck bandwidth
  (e.g. the sender transmits a BDP of data in an unpaced fashion, at the
  sender's link rate), then up to a full BDP of data can burst into the
  bottleneck queue, causing high delay and/or high loss.</t>
          </li>
        </ol>
      </section>
      <section anchor="algorithm-overview">
        <name>Algorithm Overview</name>
        <t>Based on the rationale above, BBR tries to spend most of its time matching
its sending process (data rate and data volume) to the network path's delivery
process. To do this, it explores the 2-dimensional control parameter space
of (1) data rate ("bandwidth" or "throughput") and (2) data volume ("in-flight
data"), with a goal of finding the maximum values of each control parameter
that are consistent with its objective for queue pressure.</t>
        <t>Depending on what signals a given network path manifests at a given time,
the objective for queue pressure is measured in terms of the most strict
among:</t>
        <ul spacing="normal">
          <li>
            <t>the amount of data that is estimated to be queued in the bottleneck buffer
(data_in_flight - estimated_BDP): the objective is to maintain this amount
at or below 1.5 * estimated_BDP</t>
          </li>
          <li>
            <t>the packet loss rate: the objective is a maximum per-round-trip packet loss
rate of BBR.LossThresh=2% (and an average packet loss rate considerably lower)</t>
          </li>
        </ul>
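        <t>The two objectives above can be combined into a single
non-normative predicate (a sketch; the function and parameter names are
illustrative):</t>
        <sourcecode type="python"><![CDATA[
```python
def queue_pressure_ok(inflight, bdp, loss_rate,
                      queue_limit_factor=1.5, loss_thresh=0.02):
    """Sketch of the two queue pressure objectives: the estimated
    bottleneck queue (data_in_flight - estimated_BDP) stays at or below
    1.5 * estimated_BDP, and the per-round-trip loss rate stays at or
    below BBR.LossThresh = 2%."""
    queued = max(0, inflight - bdp)
    return queued <= queue_limit_factor * bdp and loss_rate <= loss_thresh

ok = queue_pressure_ok(inflight=100_000, bdp=50_000, loss_rate=0.01)
bad = queue_pressure_ok(inflight=200_000, bdp=50_000, loss_rate=0.01)
```
]]></sourcecode>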
      </section>
      <section anchor="state-machine-overview">
        <name>State Machine Overview</name>
        <t>BBR varies its control parameters with a state machine that aims for high
throughput, low latency, low loss, and an approximately fair sharing of
bandwidth, while maintaining an up-to-date model of the network path.</t>
        <t>A BBR flow starts in the Startup state, and ramps up its sending rate quickly,
to rapidly estimate the maximum available bandwidth (BBR.max_bw). When it
estimates the bottleneck bandwidth has been fully utilized, it enters the
Drain state to drain the estimated queue. In steady state a BBR flow mostly
uses the ProbeBW states, to periodically briefly send faster to probe for
higher capacity and then briefly send slower to try to drain any resulting
queue. If needed, it briefly enters the ProbeRTT state, reducing the volume
of in-flight data to probe for lower BBR.min_rtt samples. The detailed
behavior for each state is described below.</t>
      </section>
      <section anchor="network-path-model-overview">
        <name>Network Path Model Overview</name>
        <section anchor="high-level-design-goals-for-the-network-path-model">
          <name>High-Level Design Goals for the Network Path Model</name>
          <t>At a high level, the BBR model is trying to reflect two aspects of the network
path:</t>
          <ul spacing="normal">
            <li>
              <t>Model what's required for achieving full throughput: Estimate the data rate
(BBR.max_bw) and data volume (BBR.max_inflight) required to fully utilize the
fair share of the bottleneck bandwidth available to the flow. This
incorporates estimates of the maximum available bandwidth, the BDP of the
path, and the requirements of any offload features on the end hosts or
mechanisms on the network path that produce aggregation effects.</t>
            </li>
            <li>
              <t>Model what's permitted for achieving low queue pressure: Estimate the maximum
data rate (BBR.bw) and data volume (C.cwnd) consistent with the queue pressure
objective, as measured by the estimated degree of queuing and packet loss.</t>
            </li>
          </ul>
          <t>Note that those two aspects are in tension: the highest throughput is available
to the flow when it sends as fast as possible and occupies as many bottleneck
buffer slots as possible; the lowest queue pressure is achieved by the flow
when it sends as slow as possible and occupies as few bottleneck buffer slots
as possible. To resolve the tension, the algorithm aims to achieve the maximum
throughput achievable while still meeting the queue pressure objective.</t>
        </section>
        <section anchor="time-scales-for-the-network-model">
          <name>Time Scales for the Network Model</name>
          <t>At a high level, the BBR model is trying to reflect the properties of the
network path on two different time scales:</t>
          <section anchor="long-term-model">
            <name>Long-term model</name>
            <t>One goal is for BBR to maintain high average utilization of the fair share
of the available bandwidth, over long time intervals. This requires estimates
of the path's data rate and volume capacities that are robust over long time
intervals. This means being robust to congestion signals that may be noisy
or may reflect short-term congestion that has already abated by the time
an ACK arrives. This also means maintaining a reliable history of the best
recently achievable performance on the path, so that the flow can quickly
return to that level of performance whenever it decides to probe
the capacity of the path.</t>
          </section>
          <section anchor="short-term-model">
            <name>Short-term model</name>
            <t>A second goal of BBR is to react to every congestion signal, including loss,
as if it may indicate a persistent/long-term increase in congestion and/or
decrease in the bandwidth available to the flow, because that may indeed
be the case.</t>
          </section>
          <section anchor="time-scale-strategy">
            <name>Time Scale Strategy</name>
            <t>BBR sequentially alternates between spending most of its time using short-term
models to conservatively respect all congestion signals in case they represent
persistent congestion, but periodically using its long-term model to robustly
probe the limits of the available path capacity in case the congestion has
abated and more capacity is available.</t>
          </section>
        </section>
      </section>
      <section anchor="control-parameter-overview">
        <name>Control Parameter Overview</name>
        <t>BBR uses its model to control the connection's sending behavior. Rather than
using a single control parameter, like the C.cwnd parameter that limits
C.inflight in the Reno and CUBIC congestion control algorithms,
BBR uses three distinct control parameters: C.pacing_rate, C.send_quantum,
and C.cwnd, defined in <xref target="output-control-parameters"/>.</t>
      </section>
      <section anchor="environment-and-usage">
        <name>Environment and Usage</name>
        <t>BBR is a congestion control algorithm that is agnostic to transport-layer
and link-layer technologies, requires only sender-side changes, and does
not require changes in the network. Open source implementations of BBR are
available for the TCP <xref target="RFC9293"/> and QUIC <xref target="RFC9000"/> transport
protocols, and these implementations have been used in production
for a large volume of Internet traffic. An open source implementation of
BBR is also available for DCCP <xref target="RFC4340"/>  <xref target="draft-romo-iccrg-ccid5"/>.</t>
      </section>
      <section anchor="ecn">
        <name>ECN</name>
        <t>This experimental version of BBR does not specify a specific response to
Classic <xref target="RFC3168"/>, Alternative Backoff with ECN (ABE) <xref target="RFC8511"/> or
L4S <xref target="RFC9330"/> style ECN. However, if the connection claims ECN support
by marking packets using either the ECT(0) or ECT(1) code point,
the congestion controller response MUST treat any CE marks as congestion.</t>
        <t><xref section="4.1" sectionFormat="comma" target="RFC8311"/> relaxes the requirement from RFC3168 that the
congestion response to CE marks be identical to packet loss.
The congestion response requirements of L4S are detailed in
<xref section="4.3" sectionFormat="comma" target="RFC9330"/>.</t>
      </section>
      <section anchor="experimental-status">
        <name>Experimental Status</name>
        <t>This draft is experimental because there are some known aspects of BBR
for which the community is encouraged to conduct experiments and develop
algorithm improvements, as described below.</t>
        <t>As noted above in <xref target="ecn"/>, BBR as described in this draft does not
specify a specific response to ECN, and instead leaves it as an area for
future work.</t>
        <t>The design of ProbeRTT in <xref target="probertt-design-rationale"/> specifies a ProbeRTT
interval that sacrifices no more than roughly 2% of a flow's available
bandwidth. The impact of using a different interval, or of adjusting how
ProbeRTT is triggered on specific link types, is a subject for
further experimentation.</t>
        <t>The delivery rate sampling algorithm in <xref target="delivery-rate-samples"/>
can over-estimate the delivery rate, as described in
<xref target="compression-and-aggregation"/>. When combined with BBR's windowed
maximum bandwidth filter, this can cause BBR to send too quickly.
BBR mitigates this by limiting any bandwidth sample by the sending rate,
but that still might be higher than the available bandwidth,
particularly in STARTUP.</t>
<t>BBR does not deal well with persistently application-limited traffic
<xref target="detecting-application-limited-phases"/>, such as low-latency audio or
video flows. When unable to fill the pipe for a full round trip,
BBR will not be able to measure the full link bandwidth, and will mark
a bandwidth sample as app-limited. In cases where an application enters
a phase where all bandwidth samples are app-limited, BBR will not
discard old max bandwidth samples that were not app-limited.</t>
      </section>
    </section>
    <section anchor="input-signals">
      <name>Input Signals</name>
      <t>BBR uses estimated delivery rate and RTT as two critical inputs.</t>
      <section anchor="delivery-rate-samples">
        <name>Delivery Rate Samples</name>
        <t>This section describes a generic algorithm for a transport protocol sender to
estimate the current delivery rate of its data on the fly. This technique is
used by BBR to get fresh, reliable, and inexpensive delivery rate information.</t>
        <t>At a high level, the algorithm estimates the rate at which the network
delivered the most recent flight of outbound data packets for a single flow. In
addition, it tracks whether the rate sample was application-limited, meaning
the transmission rate was limited by the sending application rather than the
congestion control algorithm.</t>
        <t>Each acknowledgment that cumulatively or selectively acknowledges that the
network has delivered new data produces a rate sample which records the amount
of data delivered over the time interval between the transmission of a data
packet and the acknowledgment of that packet. The samples reflect the recent
goodput through some bottleneck, which may reside either in the network or
on the end hosts (sender or receiver).</t>
        <section anchor="delivery-rate-sampling-algorithm-overview">
          <name>Delivery Rate Sampling Algorithm Overview</name>
          <section anchor="requirements">
            <name>Requirements</name>
            <t>This algorithm can be implemented in any transport protocol that supports
packet-delivery acknowledgment (so far, implementations are available for TCP
<xref target="RFC9293"/> and QUIC <xref target="RFC9000"/>). This algorithm requires a small amount of
added logic on the sender, and requires that the sender maintain a small amount
of additional per-packet state for packets sent but not yet delivered. In the
most general case it requires high-precision (microsecond-granularity or
better) timestamps on the sender (though millisecond-granularity may suffice
for lower bandwidths).  It does not require any receiver or network
changes. While selective acknowledgments for out-of-order data (e.g.,
<xref target="RFC2018"/>) are not required, such a mechanism is highly recommended for
accurate estimation during reordering and loss recovery phases.</t>
          </section>
          <section anchor="estimating-delivery-rate">
            <name>Estimating Delivery Rate</name>
            <t>A delivery rate sample records the estimated rate at which the network delivered
packets for a single flow, calculated over the time interval between the
transmission of a data packet and the acknowledgment of that packet. Since
the rate samples only include packets actually cumulatively and/or selectively
acknowledged, the sender knows the amount of data that was delivered to the
receiver (not lost), and the sender can compute an estimate of a bottleneck
delivery rate over that time interval.</t>
            <section anchor="ack-rate">
              <name>ACK Rate</name>
              <t>First, consider the rate at which data is acknowledged by the receiver. In
this algorithm, the computation of the ACK rate models the average slope
of a hypothetical "delivered" curve that tracks the cumulative quantity of
data delivered so far on the Y axis, and time elapsed on the X axis. Since
ACKs arrive in discrete events, this "delivered" curve forms a step function,
where each ACK produces a discrete increase in the "delivered" count, a
vertical step up in the curve. This "ack_rate" computation is the
average slope of the "delivered" step function, as measured from the "knee"
of the step (ACK) preceding the transmit to the "knee" of the step (ACK)
for packet P.</t>
              <t>Given this model, the ack rate sample "slope" is computed as the ratio between
the amount of data marked as delivered over this time interval, and the time
over which it is marked as delivered:</t>
              <artwork><![CDATA[
  ack_rate = data_acked / ack_elapsed
]]></artwork>
<t>To calculate the amount of data ACKed over the interval, the sender records in
per-packet state ("P.delivered") the amount of data that had been marked
delivered before transmitting packet P, then records how much data had been
marked delivered by the time the ACK for the packet arrives (in "C.delivered"),
and computes the difference:</t>
              <artwork><![CDATA[
  data_acked = C.delivered - P.delivered
]]></artwork>
              <t>To compute the time interval, "ack_elapsed", one might imagine that it would
be feasible to use the round-trip time (RTT) of the packet. But it is not
safe to simply calculate a bandwidth estimate by using the time between the
transmit of a packet and the acknowledgment of that packet. Transmits and
ACKs can happen out of phase with each other, clocked in separate processes.
In general, transmissions often happen at some point later than the most
recent ACK, due to processing or pacing delays. Because of this effect, drastic
over-estimates can happen if a sender were to attempt to estimate bandwidth
by using the round-trip time.</t>
              <t>The following approach computes "ack_elapsed". The starting time is
"P.delivered_time", the time of the delivery curve "knee" from the ACK
preceding the transmit.  The ending time is "C.delivered_time", the time of the
delivery curve "knee" from the ACK for P. Then we compute "ack_elapsed" as:</t>
              <artwork><![CDATA[
  ack_elapsed = C.delivered_time - P.delivered_time
]]></artwork>
              <t>This yields our equation for computing the ACK rate, as the "slope" from
the "knee" preceding the transmit to the "knee" at ACK:</t>
              <artwork><![CDATA[
  ack_rate = data_acked / ack_elapsed
  ack_rate = (C.delivered - P.delivered) /
             (C.delivered_time - P.delivered_time)
]]></artwork>
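              <t>As a concrete illustration (with hypothetical numbers), suppose the
sender transmits packet P at time t = 100 ms, when C.delivered = 50,000 bytes
and the most recent ACK before the transmit arrived at C.delivered_time = 95 ms,
so P.delivered = 50,000 and P.delivered_time = 95 ms. The ACK for P arrives at
t = 145 ms, by which point C.delivered = 80,000 bytes:</t>
              <artwork><![CDATA[
  data_acked  = 80,000 - 50,000 = 30,000 bytes
  ack_elapsed = 145 ms - 95 ms  = 50 ms
  ack_rate    = 30,000 bytes / 50 ms = 600,000 bytes/sec
]]></artwork>
              <t>Note that naively using the round-trip time of P instead
(145 ms - 100 ms = 45 ms) would yield a higher rate estimate, since it omits
the 5 ms that elapsed between the previous ACK and the transmit of P.</t>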
            </section>
            <section anchor="compression-and-aggregation">
              <name>Compression and Aggregation</name>
              <t>For computing the delivery_rate, the sender prefers ack_rate, the rate at which
packets were acknowledged, since this is usually the most reliable metric.
However, this approach of directly using "ack_rate" faces a challenge when used
with paths featuring aggregation, compression, or ACK decimation, which are
prevalent <xref target="A15"/>.  In such cases, ACK arrivals can temporarily make it appear
as if data packets were delivered much faster than the bottleneck rate. To
filter out such implausible ack_rate samples, we consider the send rate for
each flight of data, as follows.</t>
            </section>
            <section anchor="send-rate">
              <name>Send Rate</name>
              <t>The sender calculates the send rate, "send_rate", for a flight of data as
follows. Define "P.first_send_time" as the time of the first send in a flight
of data, and "P.send_time" as the time of the final send in that flight of data
(the send that transmits packet "P"). The elapsed time for sending the flight
is:</t>
              <artwork><![CDATA[
  send_elapsed = (P.send_time - P.first_send_time)
]]></artwork>
              <t>Then we calculate the send_rate as:</t>
              <artwork><![CDATA[
  send_rate = data_acked / send_elapsed
]]></artwork>
              <t>Using our "delivery" curve model above, the send_rate can be viewed as the
average slope of a "send" curve that traces the amount of data sent on the Y
axis, and the time elapsed on the X axis: the average slope of the transmission
of this flight of data.</t>
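              <t>As a hypothetical illustration, suppose a flight of data for which
30,000 bytes are eventually ACKed began transmission at
P.first_send_time = 85 ms, and the final packet P of the flight departed at
P.send_time = 100 ms:</t>
              <artwork><![CDATA[
  send_elapsed = 100 ms - 85 ms = 15 ms
  send_rate    = 30,000 bytes / 15 ms = 2,000,000 bytes/sec
]]></artwork>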
            </section>
            <section anchor="delivery-rate">
              <name>Delivery Rate</name>
              <t>Since it is physically impossible to have data delivered faster than it is
sent in a sustained fashion, when the estimator notices that the ack_rate
for a flight is faster than the send rate for the flight, it filters out
the implausible ack_rate by capping the delivery rate sample to be no higher
than the send rate.</t>
              <t>More precisely, over the interval between each transmission and corresponding
ACK, the sender calculates a delivery rate sample, "delivery_rate", using
the minimum of the rate at which packets were acknowledged or the rate at
which they were sent:</t>
              <artwork><![CDATA[
  delivery_rate = min(send_rate, ack_rate)
]]></artwork>
              <t>Since ack_rate and send_rate both have data_acked as a numerator, this can
be computed more efficiently with a single division (instead of two), as
follows:</t>
              <artwork><![CDATA[
  delivery_elapsed = max(ack_elapsed, send_elapsed)
  delivery_rate = data_acked / delivery_elapsed
]]></artwork>
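              <t>As a hypothetical illustration, consider a sample with
data_acked = 30,000 bytes, ack_elapsed = 50 ms, and send_elapsed = 15 ms.
The two formulations agree:</t>
              <artwork><![CDATA[
  delivery_rate = min(send_rate, ack_rate)
                = min(2,000,000, 600,000) = 600,000 bytes/sec

  delivery_elapsed = max(50 ms, 15 ms) = 50 ms
  delivery_rate    = 30,000 bytes / 50 ms = 600,000 bytes/sec
]]></artwork>
              <t>Conversely, on a path with ACK aggregation the ACKs for a flight may
arrive in a burst, making ack_elapsed implausibly small; in that case
send_elapsed is the longer interval and caps the sample at the send rate.</t>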
            </section>
          </section>
          <section anchor="tracking-application-limited-phases">
            <name>Tracking application-limited phases</name>
            <t>In application-limited phases the transmission rate is limited by the
sending application rather than the congestion control algorithm. Modern
transport protocol connections are often application-limited, either due
to request/response workloads (e.g., Web traffic, RPC traffic) or because the
sender transmits data in chunks (e.g., adaptive streaming video).</t>
            <t>Knowing whether a delivery rate sample was application-limited is crucial
for congestion control algorithms and applications to use the estimated delivery
rate samples properly. For example, congestion control algorithms likely
do not want to react to a delivery rate that is lower simply because the
sender is application-limited; for congestion control the key metric is the
rate at which the network path can deliver data, and not simply the rate
at which the application happens to be transmitting data at any moment.</t>
            <t>To track this, the estimator marks a bandwidth sample as application-limited
if there was some moment during the sampled flight of data packets when there
was no data ready to send.</t>
            <t>The algorithm detects that an application-limited phase has started when
the sending application requests to send new data, or the connection's
retransmission mechanisms decide to retransmit data, and the connection meets
all of the following conditions:</t>
            <ol spacing="normal" type="1"><li>
                <t>The transport send buffer has less than one C.SMSS of unsent data available
  to send.</t>
              </li>
              <li>
                <t>The sending flow is not currently in the process of transmitting a packet.</t>
              </li>
              <li>
                <t>The amount of data considered in flight is less than the congestion window
  (C.cwnd).</t>
              </li>
              <li>
                <t>All the packets considered lost have been retransmitted.</t>
              </li>
            </ol>
            <t>If these conditions are all met then the sender has run out of data to feed the
network. This would effectively create a "bubble" of idle time in the data
pipeline. This idle time means that any delivery rate sample obtained from this
data packet, and any rate sample from a packet that follows it in the next
round trip, is going to be an application-limited sample that potentially
underestimates the true available bandwidth. Thus, when the algorithm marks a
transport flow as application-limited, it marks all bandwidth samples for the
next round trip as application-limited (at which point, the "bubble" can be
said to have exited the data pipeline).</t>
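            <t>This marking step can be sketched as follows (the helper name is
illustrative, not part of this specification); the precise trigger conditions
are given in <xref target="detecting-application-limited-phases"/>:</t>
            <artwork><![CDATA[
  /* Sketch: mark the connection app-limited until the idle
   * "bubble" has exited the pipeline, i.e. until everything
   * sent so far (delivered plus in flight) has been delivered.
   */
  MarkConnectionAppLimited():
    C.app_limited = (C.delivered + C.inflight) ? : 1
]]></artwork>
            <t>Packets sent while C.app_limited is non-zero are marked with
P.is_app_limited = true, and C.app_limited is cleared once C.delivered exceeds
it, roughly one round trip after the idle moment.</t>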
            <section anchor="considerations-related-to-receiver-flow-control-limits">
              <name>Considerations Related to Receiver Flow Control Limits</name>
              <t>In some cases receiver flow control limits (such as the TCP <xref target="RFC9293"/>
advertised receive window, RCV.WND) are the factor limiting the
delivery rate. This algorithm treats cases where the delivery rate was constrained
by such conditions the same as it treats cases where the delivery rate is
constrained by in-network bottlenecks. That is, it treats receiver bottlenecks
the same as network bottlenecks. This has a conceptual symmetry and has worked
well in practice for congestion control and telemetry purposes.</t>
            </section>
          </section>
        </section>
        <section anchor="detailed-delivery-rate-sampling-algorithm">
          <name>Detailed Delivery Rate Sampling Algorithm</name>
          <section anchor="variables">
            <name>Variables</name>
            <section anchor="per-connection-c-state">
              <name>Per-connection (C) state</name>
              <t>This algorithm requires the following new state variables for each transport
connection:</t>
              <t>C.delivered: The total amount of data (measured in octets or in
packets) delivered so far over the lifetime of the connection.</t>
              <t>C.delivered_time: The wall clock time when C.delivered was last updated.</t>
              <t>C.first_send_time: If packets are in flight, then this holds the send time of
the packet that was most recently marked as delivered. Else, if the connection
was recently idle, then this holds the send time of the most recently sent packet.</t>
              <t>C.app_limited: The index of the last transmitted packet marked as
application-limited, or 0 if the connection is not currently
application-limited.</t>
              <t>We also assume that the transport protocol sender implementation tracks the
following state per connection. If the following state variables are not
tracked by an existing implementation, all the following parameters MUST
be tracked to implement this algorithm:</t>
              <t>C.pending_transmissions: The number of bytes queued for transmission on the
sending host at layers lower than the transport layer (i.e. network layer,
traffic shaping layer, network device layer).</t>
              <t>C.lost_out: The amount of data in the current outstanding window that
is marked as lost.</t>
              <t>C.retrans_out: The amount of data in the current outstanding window that
is being retransmitted.</t>
              <t>C.min_rtt: The minimum observed RTT over the lifetime of the connection.</t>
              <t>C.srtt: The smoothed RTT, an exponentially weighted moving average of the
observed RTT of the connection.</t>
            </section>
            <section anchor="per-packet-p-state">
              <name>Per-packet (P) state</name>
              <t>This algorithm requires the following new state variables for each packet that
has been transmitted but has not been acknowledged. As noted in the
<xref target="offload-mechanisms">Offload Mechanisms</xref> section, if a connection uses an
offload mechanism then it is RECOMMENDED that the packet state be tracked
for each packet "aggregate" rather than each individual packet.  For simplicity this
document refers to such state as "per-packet", whether it is per "aggregate" or
per IP packet.</t>
              <t>P.delivered: C.delivered when the packet was sent from transport connection
C.</t>
              <t>P.delivered_time: C.delivered_time when the packet was sent.</t>
              <t>P.first_send_time: C.first_send_time when the packet was sent.</t>
              <t>P.send_time: The pacing departure time selected when the packet was scheduled
to be sent.</t>
              <t>P.is_app_limited: true if C.app_limited was non-zero when the packet was
sent, else false.</t>
              <t>P.tx_in_flight: C.inflight immediately after the transmission of packet P.</t>
            </section>
            <section anchor="rate-sample-rs-output">
              <name>Rate Sample (rs) Output</name>
              <t>This algorithm provides its output in a RateSample structure rs, containing
the following fields:</t>
              <t>RS.delivery_rate: The delivery rate sample (in most cases RS.delivered /
RS.interval).</t>
              <t>RS.is_app_limited: The P.is_app_limited from the most recent packet delivered;
indicates whether the rate sample is application-limited.</t>
              <t>RS.interval: The length of the sampling interval.</t>
              <t>RS.delivered: The amount of data marked as delivered over the sampling interval.</t>
              <t>RS.prior_delivered: The P.delivered count from the most recent packet delivered.</t>
              <t>RS.prior_time: The P.delivered_time from the most recent packet delivered.</t>
              <t>RS.send_elapsed: Send time interval calculated from the most recent packet
delivered (see the "Send Rate" section above).</t>
              <t>RS.ack_elapsed: ACK time interval calculated from the most recent packet
delivered (see the "ACK Rate" section above).</t>
            </section>
          </section>
          <section anchor="transmitting-a-data-packet">
            <name>Transmitting a data packet</name>
            <t>Upon transmitting a data packet, the sender snapshots the current delivery
information in per-packet state. This will allow the sender
to generate a rate sample later, in the UpdateRateSample() step, when the
packet is (S)ACKed.</t>
            <t>If there are packets already in flight, then we need to start delivery rate
samples from the time we received the most recent ACK, to try to ensure that
we include the full time the network needs to deliver all in-flight data.
If there is no data in flight yet, then we can start the delivery rate
interval at the current time, since we know that any ACKs after now indicate
that the network was able to deliver that data completely in the sampling
interval between now and the next ACK.</t>
            <t>After each packet transmission, the sender executes the following steps:</t>
            <artwork><![CDATA[
  OnPacketSent(Packet P):
    if (C.inflight == 0)
      C.first_send_time  = C.delivered_time = P.send_time

    P.first_send_time = C.first_send_time
    P.delivered_time  = C.delivered_time
    P.delivered       = C.delivered
    P.is_app_limited  = (C.app_limited != 0)
    P.tx_in_flight    = C.inflight    /* includes data in P */
]]></artwork>
          </section>
          <section anchor="upon-receiving-an-ack">
            <name>Upon receiving an ACK</name>
            <t>When an ACK arrives, the connection first calls InitRateSample() to initialize
the per-ACK RateSample RS:</t>
            <artwork><![CDATA[
  /* Initialize the rate sample generated using the ACK being processed. */
  InitRateSample():
    RS.rtt           = -1
    RS.has_data      = false
    RS.prior_time    = 0
    RS.interval      = 0
    RS.delivery_rate = 0
]]></artwork>
            <t>Next, for each newly acknowledged packet, the connection calls
UpdateRateSample() to update the per-ACK rate sample based on a snapshot of
connection delivery information from the time at which the packet was
transmitted. The connection invokes UpdateRateSample() multiple times when a
stretched ACK acknowledges multiple data packets. The connection uses the
information from the most recently sent packet to update the rate sample:</t>
            <artwork><![CDATA[
  /* Update RS when a packet is acknowledged. */
  UpdateRateSample(Packet P):
    if (P.delivered_time == 0)
      return /* P already acknowledged */

    C.delivered += P.data_length
    C.delivered_time = Now()

    /* Update info using the newest packet: */
    if (!RS.has_data or IsNewestPacket(P, RS))
      RS.has_data         = true
      RS.prior_delivered  = P.delivered
      RS.prior_time       = P.delivered_time
      RS.is_app_limited   = P.is_app_limited
      RS.send_elapsed     = P.send_time - P.first_send_time
      RS.ack_elapsed      = C.delivered_time - P.delivered_time
      RS.last_end_seq     = P.end_seq
      C.first_send_time   = P.send_time

    /* Mark the packet as delivered once it's acknowledged. */
    P.delivered_time = 0

  /* Is the given Packet the most recently sent packet
   * that has been delivered? */
  IsNewestPacket(Packet P):
    return (P.send_time > C.first_send_time or
            (P.send_time == C.first_send_time and
             after(P.end_seq, RS.last_end_seq)))
]]></artwork>
            <t>Finally, after the connection has processed all newly acknowledged packets for this
ACK by calling UpdateRateSample() for each packet, the connection invokes
GenerateRateSample() to finish populating the rate sample, RS:</t>
            <artwork><![CDATA[
  /* Upon receiving ACK, fill in delivery rate sample RS. */
  GenerateRateSample():
    /* Clear app-limited field if bubble is ACKed and gone. */
    if (C.app_limited and C.delivered > C.app_limited)
      C.app_limited = 0

    if (RS.prior_time == 0)
      return /* nothing delivered on this ACK */

    /* Use the longer of the send_elapsed and ack_elapsed */
    RS.interval = max(RS.send_elapsed, RS.ack_elapsed)

    RS.delivered = C.delivered - RS.prior_delivered

    /* Normally we expect interval >= MinRTT.
     * Note that rate may still be overestimated when a spuriously
     * retransmitted packet was first (s)acked because "interval"
     * is under-estimated (up to an RTT). However, continuously
     * measuring the delivery rate during loss recovery is crucial
     * for connections that suffer heavy or prolonged losses.
     */
    if (RS.interval <  C.min_rtt)
      return  /* no reliable rate sample */

    if (RS.interval != 0)
      RS.delivery_rate = RS.delivered / RS.interval

    return    /* filled in RS with a rate sample */
]]></artwork>
          </section>
          <section anchor="detecting-application-limited-phases">
            <name>Detecting application-limited phases</name>
            <t>An application-limited phase starts when the connection decides to send more
data, at a point in time when the connection had previously run out of data.
Some decisions to send more data are triggered by the application writing
more data to the connection, and some are triggered by loss detection (during
ACK processing or upon the triggering of a timer) estimating that some sequence
ranges need to be retransmitted. To detect all such cases, the algorithm
calls CheckIfApplicationLimited() to check for application-limited behavior in
the following situations:</t>
            <ul spacing="normal">
              <li>
                <t>The sending application asks the transport layer to send more data; i.e.,
upon each write from the application, before new application data is enqueued
in the transport send buffer or transmitted.</t>
              </li>
              <li>
                <t>At the beginning of ACK processing, before updating the estimated
amount of data in flight, and before congestion control modifies C.cwnd or
C.pacing_rate.</t>
              </li>
              <li>
                <t>At the beginning of connection timer processing, for all timers that might
result in the transmission of one or more data packets. For example: RTO
timers, TLP timers, RACK reordering timers, or Zero Window Probe timers.</t>
              </li>
            </ul>
            <t>When checking for application-limited behavior, the connection checks all the
conditions previously described in the "Tracking application-limited phases"
section, and if all are met then it marks the connection as
application-limited:</t>
            <artwork><![CDATA[
  CheckIfApplicationLimited():
    if (NoUnsentData() and
        C.pending_transmissions == 0 and
        C.inflight < C.cwnd and
        C.lost_out <= C.retrans_out)
      C.app_limited = (C.delivered + C.inflight) ? : 1
]]></artwork>
          </section>
        </section>
        <section anchor="delivery-rate-sampling-discussion">
          <name>Delivery Rate Sampling Discussion</name>
          <section anchor="offload-mechanisms">
            <name>Offload Mechanisms</name>
            <t>If a transport sender implementation uses an offload mechanism (such as TSO,
GSO, etc.) to combine multiple C.SMSS of data into a single packet "aggregate"
for the purposes of scheduling transmissions, then it is RECOMMENDED that the
per-packet state described in the <xref target="per-packet-p-state">Per-packet (P) state</xref> section be
tracked for each packet "aggregate" rather than each IP packet.</t>
          </section>
          <section anchor="impact-of-ack-losses">
            <name>Impact of ACK losses</name>
            <t>Delivery rate samples are generated upon receiving each ACK; ACKs may contain
both cumulative and selective acknowledgment information. Losing an ACK results
in losing the delivery rate sample corresponding to that ACK, and generating a
delivery rate sample at a later time (upon the arrival of the next ACK). This
can underestimate the delivery rate due to the artificially inflated
"RS.interval". The impact of this effect is mitigated using the BBR.max_bw
filter.</t>
          </section>
          <section anchor="impact-of-packet-reordering">
            <name>Impact of packet reordering</name>
            <t>This algorithm is robust to packet reordering; it makes no assumptions about
the order in which packets are delivered or ACKed. In particular, for a
given packet P, it does not matter which packets are delivered between the
transmission of P and the ACK of packet P, since C.delivered will be
incremented appropriately in any case.</t>
          </section>
          <section anchor="impact-of-packet-loss-and-retransmissions">
            <name>Impact of packet loss and retransmissions</name>
            <t>There are several possible approaches for handling cases where a delivery
rate sample is based on a retransmitted packet.</t>
            <t>If the transport protocol supports unambiguous ACKs for retransmitted data
(as in QUIC <xref target="RFC9000"/>) then the algorithm is perfectly robust to retransmissions,
because the starting packet, P, for the sample can be unambiguously retrieved.</t>
            <t>If the transport protocol, like TCP <xref target="RFC9293"/>, has ambiguous ACKs for
retransmitted sequence ranges, then the following approaches MAY be used:</t>
            <ol spacing="normal" type="1"><li>
                <t>The sender MAY choose to filter out implausible delivery rate samples, as
  described in the GenerateRateSample() step in the "Upon receiving an ACK"
  section, by discarding samples whose RS.interval is lower than the minimum
  RTT seen on the connection.</t>
              </li>
              <li>
                <t>The sender MAY choose to skip the generation of a delivery rate sample for
  a retransmitted sequence range.</t>
              </li>
            </ol>
            <section anchor="connections-without-sack">
              <name>TCP Connections without SACK</name>
              <t>Whenever possible, TCP connections using BBR as a congestion controller SHOULD
use both SACK and timestamps. Failure to do so will cause BBR's RTT and
bandwidth measurements to be much less accurate.</t>
              <t>When using TCP without SACK (i.e., either or both ends of the connections do
not accept SACK), this algorithm can be extended to estimate approximate
delivery rates using duplicate ACKs (much like Reno and <xref target="RFC5681"/> estimates
that each duplicate ACK indicates that a data packet has been delivered).</t>
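              <t>A rough sketch of such an extension (illustrative only; not part of
the core algorithm): upon each duplicate ACK, the sender could credit one
C.SMSS of delivered data before generating the rate sample:</t>
              <artwork><![CDATA[
  /* Sketch: approximate delivery accounting without SACK.
   * Assumes each duplicate ACK signals one delivered packet. */
  OnDuplicateAck():
    C.delivered     += C.SMSS
    C.delivered_time = Now()
]]></artwork>
              <t>Such estimates are necessarily coarse, since duplicate ACKs can also
result from packet reordering or ACK duplication in the network, so bandwidth
samples derived this way should be expected to be noisier.</t>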
            </section>
          </section>
        </section>
      </section>
      <section anchor="rtt-samples">
        <name>RTT Samples</name>
        <t>Upon transmitting each packet, BBR or the associated transport protocol
stores in per-packet data the wall-clock scheduled transmission time of the
packet in P.send_time (see "Pacing Rate: C.pacing_rate" in
<xref target="pacing-rate-bbrpacingrate"/> for how this is calculated).</t>
        <t>For every ACK that newly acknowledges data, the sender's BBR implementation
or the associated transport protocol implementation attempts to calculate an
RTT sample. The sender MUST consider any potential retransmission ambiguities
that can arise in some transport protocols. If some of the acknowledged data
was not retransmitted, or some of the data was retransmitted but the sender
can still unambiguously determine the RTT of the data (e.g. QUIC or TCP with
timestamps <xref target="RFC7323"/>), then the sender calculates an RTT sample, RS.rtt,
as follows:</t>
        <artwork><![CDATA[
  RS.rtt = Now() - P.send_time
]]></artwork>
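        <t>For transports with potential retransmission ambiguity, the calculation
above can be guarded as in the following non-normative sketch (the
disambiguation predicate is a hypothetical name):</t>
        <artwork><![CDATA[
  if (P.retransmitted and !CanDisambiguateACK(P))
    return                     /* no RTT sample from this ACK */
  RS.rtt = Now() - P.send_time
]]></artwork>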
      </section>
    </section>
    <section anchor="detailed-algorithm">
      <name>Detailed Algorithm</name>
      <section anchor="state-machine">
        <name>State Machine</name>
        <t>BBR implements a state machine that uses the network path model to guide
its decisions, and the control parameters to enact its decisions.</t>
        <section anchor="state-transition-diagram">
          <name>State Transition Diagram</name>
          <t>The following state transition diagram summarizes the flow of control and
the relationship between the different states:</t>
          <artwork><![CDATA[
             |
             V
    +---> Startup  ------------+
    |        |                 |
    |        V                 |
    |     Drain  --------------+
    |        |                 |
    |        V                 |
    +---> ProbeBW_DOWN  -------+
    | ^      |                 |
    | |      V                 |
    | |   ProbeBW_CRUISE ------+
    | |      |                 |
    | |      V                 |
    | |   ProbeBW_REFILL  -----+
    | |      |                 |
    | |      V                 |
    | |   ProbeBW_UP  ---------+
    | |      |                 |
    | +------+                 |
    |                          |
    +---- ProbeRTT <-----------+
]]></artwork>
        </section>
        <section anchor="state-machine-operation-overview">
          <name>State Machine Operation Overview</name>
          <t>When starting up, BBR probes to try to quickly build a model of the network
path; to adapt to later changes to the path or its traffic, BBR must continue
to probe to update its model. If the available bottleneck bandwidth increases,
BBR must send faster to discover this. Likewise, if the round-trip propagation
delay changes, this changes the BDP, and thus BBR must send slower to get
C.inflight below the new BDP in order to measure the new BBR.min_rtt. Thus,
BBR's state machine runs periodic, sequential experiments, sending faster
to check for BBR.bw increases or sending slower to yield bandwidth, drain
the queue, and check for BBR.min_rtt decreases. The frequency, magnitude,
duration, and structure of these experiments differ depending on what's already
known (startup or steady-state) and application sending behavior (intermittent
or continuous).</t>
          <t>This state machine has several goals:</t>
          <ul spacing="normal">
            <li>
              <t>Achieve high throughput by efficiently utilizing available bandwidth.</t>
            </li>
            <li>
              <t>Achieve low latency and packet loss rates by keeping queues bounded and small.</t>
            </li>
            <li>
              <t>Share bandwidth with other flows in an approximately fair manner.</t>
            </li>
            <li>
              <t>Feed samples to the model estimators to refresh and update the model.</t>
            </li>
          </ul>
        </section>
        <section anchor="state-machine-tactics">
          <name>State Machine Tactics</name>
          <t>In the BBR framework, at any given time the sender can choose one of the
following tactics:</t>
          <ul spacing="normal">
            <li>
              <t>Acceleration: Send faster than the network is delivering data: to probe the
maximum bandwidth available to the flow</t>
            </li>
            <li>
              <t>Deceleration: Send slower than the network is delivering data: to reduce
the amount of data in flight, with a number of overlapping motivations:  </t>
              <ul spacing="normal">
                <li>
                  <t>Reducing queuing delay: to reduce queuing delay, to reduce latency for
request/response cross-traffic (e.g. RPC, web traffic).</t>
                </li>
                <li>
                  <t>Reducing packet loss: to reduce packet loss, to reduce tail latency for
request/response cross-traffic (e.g. RPC, web traffic) and improve
coexistence with Reno/CUBIC.</t>
                </li>
                <li>
                  <t>Probing BBR.min_rtt: to probe the path's BBR.min_rtt</t>
                </li>
                <li>
                  <t>Bandwidth convergence: to aid bandwidth fairness convergence, by leaving
unused capacity in the bottleneck link or bottleneck buffer, to allow other
flows that may have lower sending rates to discover and utilize the unused
capacity</t>
                </li>
                <li>
                  <t>Burst tolerance: to allow bursty arrivals of cross-traffic (e.g. short web
or RPC requests) to be able to share the bottleneck link without causing
excessive queuing delay or packet loss</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Cruising: Send at the same rate the network is delivering data: try to match
the sending rate to the flow's current available bandwidth, to try to achieve
high utilization of the available bandwidth without increasing queue pressure</t>
            </li>
          </ul>
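          <t>As an illustrative summary, the three tactics map directly onto the
BBR.pacing_gain values specified later in this document:</t>
          <artwork><![CDATA[
  Acceleration:  BBR.pacing_gain > 1   (e.g. 1.25 in ProbeBW_UP)
  Cruising:      BBR.pacing_gain = 1   (e.g. ProbeBW_CRUISE)
  Deceleration:  BBR.pacing_gain < 1   (e.g. 0.90 in ProbeBW_DOWN,
                                        0.35 in Drain)
]]></artwork>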
          <t>Throughout the lifetime of a BBR flow, it sequentially cycles through all
three tactics, to measure the network path and try to optimize its operating
point.</t>
          <t>BBR's state machine uses two control mechanisms: the BBR.pacing_gain and the
C.cwnd. Primarily, it uses BBR.pacing_gain (see the "Pacing Rate" section), which
controls how fast packets are sent relative to BBR.bw. A BBR.pacing_gain &gt; 1
decreases inter-packet time and increases C.inflight. A BBR.pacing_gain &lt; 1 has the
opposite effect, increasing inter-packet time while aiming to decrease
C.inflight. C.cwnd is kept sufficiently larger than the BDP to allow the higher
pacing gain to accumulate more packets in flight. Only when the state machine
needs to quickly reduce C.inflight to a particular absolute value does it use
C.cwnd.</t>
        </section>
      </section>
      <section anchor="algorithm-organization">
        <name>Algorithm Organization</name>
        <t>The BBR algorithm is event-driven, executing steps upon the following
events: connection initialization, the arrival of each ACK, the transmission
of each quantum, and loss detection events. All of the sub-steps invoked below
are described in the sections that follow.</t>
        <section anchor="initialization">
          <name>Initialization</name>
          <t>Upon transport connection initialization, BBR executes its initialization
steps:</t>
          <artwork><![CDATA[
  BBROnInit():
    InitWindowedMaxFilter(filter=BBR.max_bw_filter, value=0, time=0)
    BBR.min_rtt = C.srtt ? C.srtt : Infinity
    BBR.min_rtt_stamp = Now()
    BBR.probe_rtt_done_stamp = 0
    BBR.probe_rtt_round_done = false
    BBR.prior_cwnd = 0
    BBR.idle_restart = false
    BBR.extra_acked_interval_start = Now()
    BBR.extra_acked_delivered = 0
    BBR.full_bw_reached = false
    BBRResetCongestionSignals()
    BBRResetShortTermModel()
    BBRInitRoundCounting()
    BBRResetFullBW()
    BBRInitPacingRate()
    BBREnterStartup()
]]></artwork>
        </section>
        <section anchor="per-transmit-steps">
          <name>Per-Transmit Steps</name>
          <t>Before transmitting, BBR merely needs to check for the case where the flow
is restarting from idle:</t>
          <artwork><![CDATA[
  BBROnTransmit():
    BBRHandleRestartFromIdle()
]]></artwork>
        </section>
        <section anchor="per-ack-steps">
          <name>Per-ACK Steps</name>
          <t>On every ACK, the BBR algorithm executes the following BBRUpdateOnACK() steps
in order to update its network path model, update its state machine, and
adjust its control parameters to adapt to the updated model:</t>
          <artwork><![CDATA[
  BBRUpdateOnACK():
    GenerateRateSample()
    BBRUpdateModelAndState()
    BBRUpdateControlParameters()

  BBRUpdateModelAndState():
    BBRUpdateLatestDeliverySignals()
    BBRUpdateCongestionSignals()
    BBRUpdateACKAggregation()
    BBRCheckFullBWReached()
    BBRCheckStartupDone()
    BBRCheckDrainDone()
    BBRUpdateProbeBWCyclePhase()
    BBRUpdateMinRTT()
    BBRCheckProbeRTT()
    BBRAdvanceLatestDeliverySignals()
    BBRBoundBWForModel()

  BBRUpdateControlParameters():
    BBRSetPacingRate()
    BBRSetSendQuantum()
    BBRSetCwnd()
]]></artwork>
        </section>
        <section anchor="per-loss-steps">
          <name>Per-Loss Steps</name>
          <t>On every packet loss event, where some sequence range "packet" is marked
lost, the BBR algorithm executes the following BBRUpdateOnLoss() steps in
order to update its network path model:</t>
          <artwork><![CDATA[
  BBRUpdateOnLoss(packet):
    BBRHandleLostPacket(packet)
]]></artwork>
        </section>
      </section>
      <section anchor="state-machine-operation">
        <name>State Machine Operation</name>
        <section anchor="startup">
          <name>Startup</name>
          <section anchor="startup-dynamics">
            <name>Startup Dynamics</name>
            <t>When a BBR flow starts up, it performs its first (and most rapid) sequential
probe/drain process in the Startup and Drain states. Network link bandwidths
currently span a range of at least 11 orders of magnitude, from a few bps
to hundreds of Gbps. To quickly learn BBR.max_bw, given this huge range to
explore, BBR's Startup state does an exponential search of the rate space,
doubling the sending rate each round. This finds BBR.max_bw in O(log_2(BDP))
round trips.</t>
            <t>To achieve this rapid probing smoothly, in Startup BBR uses the minimum gain
values that will allow the sending rate to double each round: in Startup BBR
sets BBR.pacing_gain to BBR.StartupPacingGain (2.77) <xref target="BBRStartupPacingGain"/>
and BBR.cwnd_gain to BBR.DefaultCwndGain (2) <xref target="BBRStartupCwndGain"/>.</t>
            <t>When initializing a connection, or upon any later entry into Startup mode,
BBR executes the following BBREnterStartup() steps:</t>
            <artwork><![CDATA[
  BBREnterStartup():
    BBR.state = Startup
    BBR.pacing_gain = BBR.StartupPacingGain
    BBR.cwnd_gain = BBR.DefaultCwndGain
]]></artwork>
            <t>As BBR grows its sending rate rapidly, it obtains higher delivery rate
samples, BBR.max_bw increases, and the C.pacing_rate and C.cwnd both adapt by
smoothly growing in proportion. Once the pipe is full, a queue typically
forms, but the BBR.cwnd_gain bounds any queue to (BBR.cwnd_gain - 1) * estimated_BDP,
which is approximately (2 - 1) * estimated_BDP = estimated_BDP.
The immediately following Drain state is designed to quickly drain that queue.</t>
            <t>During Startup, BBR estimates whether the pipe is full using two estimators.
The first looks for a plateau in the BBR.max_bw estimate. The second looks
for packet loss. The following subsections discuss these estimators.</t>
            <artwork><![CDATA[
  BBRCheckStartupDone():
    BBRCheckStartupHighLoss()
    if (BBR.state == Startup and BBR.full_bw_reached)
      BBREnterDrain()
]]></artwork>
          </section>
          <section anchor="exiting-acceleration-based-on-bandwidth-plateau">
            <name>Exiting Acceleration Based on Bandwidth Plateau</name>
            <t>In phases where BBR is accelerating to probe the available bandwidth -
Startup and ProbeBW_UP - BBR runs a state machine to estimate whether an
accelerating sending rate has saturated the available per-flow bandwidth
("filled the pipe") by looking for a plateau in the measured
RS.delivery_rate.</t>
            <t>BBR tracks the status of the current full-pipe estimation process in the
boolean BBR.full_bw_now, and uses BBR.full_bw_now to exit ProbeBW_UP. BBR
records in the boolean BBR.full_bw_reached whether BBR estimates that it
has ever fully utilized its available bandwidth (over the lifetime of the
connection), and uses BBR.full_bw_reached to decide when to exit Startup
and enter Drain.</t>
            <t>The full pipe estimator works as follows: if BBR counts several (three)
non-application-limited rounds where attempts to significantly increase the
delivery rate actually result in little increase (less than 25 percent),
then it estimates that it has fully utilized the per-flow available bandwidth,
and sets both BBR.full_bw_now and BBR.full_bw_reached to true.</t>
            <t>Upon starting a full pipe detection process (either on startup or when probing
for an increase in bandwidth), the following steps are taken:</t>
            <artwork><![CDATA[
  BBRResetFullBW():
    BBR.full_bw = 0
    BBR.full_bw_count = 0
    BBR.full_bw_now = 0
]]></artwork>
            <t>While running the full pipe detection process, upon an ACK that acknowledges
new data, and when the delivery rate sample is not application-limited
(see <xref target="delivery-rate-samples"/>), BBR runs the "full pipe" estimator:</t>
            <artwork><![CDATA[
  BBRCheckFullBWReached():
    if (BBR.full_bw_now or !BBR.round_start or RS.is_app_limited)
      return  /* no need to check for a full pipe now */
    if (RS.delivery_rate >= BBR.full_bw * 1.25)
      BBRResetFullBW()       /* bw is still growing, so reset */
      BBR.full_bw = RS.delivery_rate  /* record new baseline bw */
      return
    BBR.full_bw_count++   /* another round w/o much growth */
    BBR.full_bw_now = (BBR.full_bw_count >= 3)
    if (BBR.full_bw_now)
      BBR.full_bw_reached = true
]]></artwork>
            <t>BBR waits three packet-timed round trips to have reasonable evidence that the
sender is not detecting a delivery-rate plateau that was temporarily imposed by
congestion or receive-window auto-tuning. This three-round threshold was
validated by experimental data to allow the receiver the chance to grow its
receive window.</t>
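            <t>As an illustrative (non-normative) trace of the estimator with hypothetical
delivery rate samples, where each line is one non-application-limited round:</t>
            <artwork><![CDATA[
  Round 1: rate = 10 Mbps >= 1.25 *  0 -> full_bw = 10, count = 0
  Round 2: rate = 20 Mbps >= 1.25 * 10 -> full_bw = 20, count = 0
  Round 3: rate = 22 Mbps <  1.25 * 20 -> count = 1
  Round 4: rate = 23 Mbps <  1.25 * 20 -> count = 2
  Round 5: rate = 23 Mbps <  1.25 * 20 -> count = 3
           -> full_bw_now = full_bw_reached = true
]]></artwork>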
          </section>
          <section anchor="exiting-startup-based-on-packet-loss">
            <name>Exiting Startup Based on Packet Loss</name>
            <t>A second method BBR uses for estimating whether the bottleneck is full in
Startup is by looking at packet losses. Specifically, BBRCheckStartupHighLoss()
checks whether all of the following criteria are met:</t>
            <ul spacing="normal">
              <li>
                <t>The connection has been in fast recovery for at least one full packet-timed
round trip.</t>
              </li>
              <li>
                <t>The loss rate over the time scale of a single full round trip exceeds
BBR.LossThresh (2%).</t>
              </li>
              <li>
                <t>There are at least BBRStartupFullLossCnt=6 discontiguous sequence ranges
lost in that round trip.</t>
              </li>
            </ul>
            <t>If these criteria are all met, then BBRCheckStartupHighLoss() takes the
following steps. First, it sets BBR.full_bw_reached = true. Then it sets
BBR.inflight_longterm to its estimate of a safe level of in-flight data suggested
by these losses, which is max(BBR.bdp, BBR.inflight_latest), where
BBR.inflight_latest is the max delivered volume of data (RS.delivered) over
the last round trip. Finally, it exits Startup and enters Drain.</t>
            <t>The algorithm waits until all three criteria are met to filter out noise
from burst losses, and to try to ensure the bottleneck is fully utilized
on a sustained basis, and the full bottleneck bandwidth has been measured,
before attempting to drain the level of in-flight data to the estimated BDP.</t>
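            <t>The steps described above can be summarized in the following non-normative
sketch (the per-round recovery and loss accounting variables are hypothetical
names):</t>
            <artwork><![CDATA[
  BBRCheckStartupHighLoss():   /* sketch only */
    if (in_recovery_for_a_full_round and
        round_loss_rate > BBR.LossThresh and
        round_lost_ranges >= BBRStartupFullLossCnt)
      BBR.full_bw_reached = true
      BBR.inflight_longterm = max(BBR.bdp, BBR.inflight_latest)
      /* BBRCheckStartupDone() then enters Drain,
         since BBR.full_bw_reached is now true */
]]></artwork>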
          </section>
        </section>
        <section anchor="drain">
          <name>Drain</name>
          <t>Upon exiting Startup, BBR enters its Drain state. In Drain, BBR aims to quickly
drain any queue at the bottleneck link that was created in Startup by switching
to a pacing_gain well below 1.0, until any estimated queue has been drained. It
uses a pacing_gain of BBR.DrainPacingGain = 0.35, chosen via analysis
<xref target="BBRDrainPacingGain"/> and experimentation to try to drain the queue in less
than one round-trip:</t>
          <artwork><![CDATA[
  BBREnterDrain():
    BBR.state = Drain
    BBR.pacing_gain = BBR.DrainPacingGain    /* pace slowly */
    BBR.cwnd_gain = BBR.DefaultCwndGain      /* maintain cwnd */
]]></artwork>
          <t>In Drain, when the amount of data in flight is less than or equal to the
estimated BDP, meaning BBR estimates that the queue at the bottleneck link
has been fully drained, then BBR exits Drain and enters ProbeBW. To implement
this, upon every ACK BBR executes:</t>
          <artwork><![CDATA[
  BBRCheckDrainDone():
    if (BBR.state == Drain and C.inflight <= BBRInflight(1.0))
      BBREnterProbeBW()  /* BBR estimates the queue was drained */
]]></artwork>
        </section>
        <section anchor="probebw">
          <name>ProbeBW</name>
          <t>Long-lived BBR flows tend to spend the vast majority of their time in the
ProbeBW states. In the ProbeBW states, a BBR flow sequentially accelerates,
decelerates, and cruises, to measure the network path, improve its operating
point (increase throughput and reduce queue pressure), and converge toward a
more fair allocation of bottleneck bandwidth. To do this, the flow sequentially
cycles through all three tactics: trying to send faster than, slower than, and
at the same rate as the network delivery process. To achieve this, a BBR flow
in ProbeBW mode cycles through the four ProbeBW states (DOWN, CRUISE, REFILL,
and UP) described below, in turn.</t>
          <section anchor="probebwdown">
            <name>ProbeBW_DOWN</name>
            <t>In the ProbeBW_DOWN phase of the cycle, a BBR flow pursues the deceleration
tactic, to try to send slower than the network is delivering data, to reduce
the amount of data in flight, with all of the standard motivations for the
deceleration tactic (discussed in "State Machine Tactics" in
<xref target="state-machine-tactics"/>). It does this by switching to a
BBR.pacing_gain of 0.90, sending at 90% of BBR.bw. The pacing_gain value
of 0.90 is derived from the ProbeBW_UP pacing gain of 1.25, as the minimum
pacing_gain value that allows bandwidth-based convergence to approximate
fairness, and has been validated through experiments.</t>
            <t>Exit conditions: The flow exits the ProbeBW_DOWN phase and enters CRUISE
when the flow estimates that both of the following conditions have been
met:</t>
            <ul spacing="normal">
              <li>
                <t>There is free headroom: If BBR.inflight_longterm is set, then BBR remains in
ProbeBW_DOWN at least until inflight is less than or
equal to a target calculated based on (1 - BBR.Headroom)*BBR.inflight_longterm.
The goal of this constraint is to ensure that in cases where loss signals
suggest an upper limit on C.inflight, then the flow attempts
to leave some free headroom in the path (e.g. free space in the bottleneck
buffer or free time slots in the bottleneck link) that can be used by
cross traffic (both for convergence of bandwidth shares and for burst tolerance).</t>
              </li>
              <li>
                <t>C.inflight is less than or equal to BBR.bdp, i.e. the flow
estimates that it has drained any queue at the bottleneck.</t>
              </li>
            </ul>
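            <t>As a non-normative sketch, the ProbeBW_DOWN exit check can be expressed
as:</t>
            <artwork><![CDATA[
  /* Sketch of the ProbeBW_DOWN exit check: */
  if (BBR.inflight_longterm is set and
      C.inflight > (1 - BBR.Headroom) * BBR.inflight_longterm)
    stay in ProbeBW_DOWN       /* no free headroom yet */
  else if (C.inflight <= BBR.bdp)
    enter ProbeBW_CRUISE       /* queue estimated drained */
]]></artwork>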
          </section>
          <section anchor="probebwcruise">
            <name>ProbeBW_CRUISE</name>
            <t>In the ProbeBW_CRUISE phase of the cycle, a BBR flow pursues the "cruising"
tactic (discussed in "State Machine Tactics" in
<xref target="state-machine-tactics"/>), attempting to send at the same rate the
network is delivering data. It tries to match the sending rate to the flow's
current available bandwidth, to try to achieve high utilization of the
available bandwidth without increasing queue pressure. It does this by
switching to a pacing_gain of 1.0, sending at 100% of BBR.bw. Notably, while
in this state it responds to concrete congestion signals (loss) by reducing
BBR.bw_shortterm and BBR.inflight_shortterm, because these signals suggest that
the available bandwidth and deliverable inflight have likely
reduced, and the flow needs to change to adapt, slowing down to match the
latest delivery process.</t>
            <t>Exit conditions: The connection adaptively holds this state until it decides
that it is time to probe for bandwidth (see "Time Scale for Bandwidth Probing",
in <xref target="time-scale-for-bandwidth-probing-"/>), at which time it enters
ProbeBW_REFILL.</t>
          </section>
          <section anchor="probebwrefill">
            <name>ProbeBW_REFILL</name>
            <t>The goal of the ProbeBW_REFILL state is to "refill the pipe", to try to fully
utilize the network bottleneck without creating any significant queue pressure.</t>
            <t>To do this, BBR first resets the short-term model parameters BBR.bw_shortterm and
BBR.inflight_shortterm, setting both to "Infinity". This is the key moment in the BBR
time scale strategy (see "Time Scale Strategy", <xref target="time-scale-strategy"/>)
where the flow pivots, discarding its conservative short-term BBR.bw_shortterm and
BBR.inflight_shortterm parameters and beginning to robustly probe the bottleneck's
long-term available bandwidth. During this time the estimated bandwidth and
BBR.inflight_longterm, if set, constrain the connection.</t>
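            <t>As a non-normative sketch of entry into ProbeBW_REFILL (the entry
function name here is illustrative):</t>
            <artwork><![CDATA[
  BBREnterProbeBW_REFILL():    /* sketch only */
    BBRResetShortTermModel()   /* bw_shortterm, inflight_shortterm
                                  are reset to Infinity */
    BBR.pacing_gain = 1.0      /* match estimated available bw */
]]></artwork>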
            <t>During ProbeBW_REFILL BBR uses a BBR.pacing_gain of 1.0, to send at a rate
that matches the current estimated available bandwidth, for one packet-timed
round trip. The goal is to fully utilize the bottleneck link before
transitioning into ProbeBW_UP and significantly increasing the chances of
causing loss. The motivating insight is that, as soon as a flow starts
accelerating, sending faster than the available bandwidth, it will start
building a queue at the bottleneck. And if the buffer is shallow enough,
then the flow can cause loss signals very shortly after the first accelerating
packets arrive at the bottleneck. If the flow were to neglect to fill the
pipe before it causes this loss signal, then these very quick signals of excess
queue could cause the flow's estimate of the path's capacity (i.e. BBR.inflight_longterm)
to significantly underestimate. In particular, if the flow were to transition
directly from ProbeBW_CRUISE to ProbeBW_UP, C.inflight
(at the time the first accelerating packets were sent) may often be still very
close to the C.inflight maintained in CRUISE, which may be
only (1 - BBR.Headroom)*BBR.inflight_longterm.</t>
            <t>Exit conditions: The flow exits ProbeBW_REFILL after one packet-timed round
trip, and enters ProbeBW_UP. This is because after one full round trip of
sending in ProbeBW_REFILL the flow (if not application-limited) has had an
opportunity to place as many packets in flight as its BBR.bw and BBR.inflight_longterm
permit. Correspondingly, at this point the flow starts to see bandwidth samples
reflecting its ProbeBW_REFILL behavior, which may be putting too much data
in flight.</t>
          </section>
          <section anchor="probebwup">
            <name>ProbeBW_UP</name>
            <t>After ProbeBW_REFILL refills the pipe, ProbeBW_UP probes for possible
increases in available bandwidth by raising the sending rate, using a
BBR.pacing_gain of 1.25, to send faster than the current estimated available
bandwidth. It also raises BBR.cwnd_gain to 2.25, to ensure that the flow
can send faster than it had been, even if C.cwnd was previously limiting the
sending process.</t>
            <t>If the flow has not set BBR.inflight_longterm, it implicitly tries to raise
C.inflight to at least BBR.pacing_gain * BBR.bdp = 1.25 *
BBR.bdp.</t>
            <t>If the flow has set BBR.inflight_longterm and encounters that limit, it then
gradually increases the upper volume bound (BBR.inflight_longterm) using the
following approach:</t>
            <ul spacing="normal">
              <li>
                <t>BBR.inflight_longterm: The flow raises BBR.inflight_longterm in ProbeBW_UP in a manner
that is slow and cautious at first, but increasingly rapid and bold over time.
The initial caution is motivated by the fact that a given BBR flow may be sharing
a shallow buffer with thousands of other flows, so that the buffer space
available to the flow may be quite tight (even just a single packet or
less). The increasingly rapid growth over time is motivated by the fact that
in a high-speed WAN the increase in available bandwidth (and thus the estimated
BDP) may require the flow to grow C.inflight by up to
O(1,000,000) packets; even a high-speed WAN BDP like
10Gbps * 100ms is around 83,000 packets (with a 1500-byte MTU). The additive
increase to BBR.inflight_longterm exponentially doubles each round trip;
in each successive round trip, BBR.inflight_longterm grows by 1, 2, 4, 8, 16,
etc, with the increases spread uniformly across the entire round trip.
This helps allow BBR to utilize a larger BDP in O(log(BDP)) round trips,
meeting the design goal for scalable utilization of newly-available bandwidth.</t>
              </li>
            </ul>
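            <t>As an illustrative (non-normative) sketch of this growth schedule, using
a hypothetical per-round increment counter:</t>
            <artwork><![CDATA[
  /* Once per round trip in ProbeBW_UP, when C.inflight is
     limited by BBR.inflight_longterm, grow the limit
     (with the increase spread uniformly across the round): */
  BBR.inflight_longterm += probe_up_cnt
  probe_up_cnt *= 2     /* 1, 2, 4, 8, ... per round trip */
]]></artwork>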
            <t>Exit conditions: The BBR flow ends ProbeBW_UP bandwidth probing and
transitions to ProbeBW_DOWN to try to drain the bottleneck queue when either
of the following conditions are met:</t>
            <ol spacing="normal" type="1"><li>
                <t>Bandwidth saturation: BBRIsTimeToGoDown() (see below) uses the "full pipe"
  estimator (see <xref target="exiting-acceleration-based-on-bandwidth-plateau"/>) to
  estimate whether the flow has saturated the available per-flow bandwidth
  ("filled the pipe"), by looking for a plateau in the measured
  RS.delivery_rate. If, during this process, C.inflight is constrained
  by BBR.inflight_longterm (the flow becomes cwnd-limited while cwnd is limited by
  BBR.inflight_longterm), then the flow cannot fully explore the available bandwidth,
  and so it resets the "full pipe" estimator by calling BBRResetFullBW().</t>
              </li>
              <li>
                <t>Loss: The current loss rate, over the time scale of the last round trip,
  exceeds BBR.LossThresh (2%).</t>
              </li>
            </ol>
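            <t>These exit conditions can be summarized in the following non-normative
sketch (the round-trip loss-rate variable is an illustrative name):</t>
            <artwork><![CDATA[
  /* Sketch: end ProbeBW_UP and enter ProbeBW_DOWN when: */
  if (C.inflight is limited by BBR.inflight_longterm)
    BBRResetFullBW()   /* cannot probe further; restart estimator */
  if (BBR.full_bw_now or round_loss_rate > BBR.LossThresh)
    enter ProbeBW_DOWN
]]></artwork>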
          </section>
          <section anchor="time-scale-for-bandwidth-probing-">
            <name>Time Scale for Bandwidth Probing</name>
            <t>Choosing the time scale for probing bandwidth is tied to the question of
how to coexist with legacy Reno/CUBIC flows, since probing for bandwidth
runs a significant risk of causing packet loss, and causing packet loss can
significantly limit the throughput of such legacy Reno/CUBIC flows.</t>
            <section anchor="bandwidth-probing-and-coexistence-with-renocubic">
              <name>Bandwidth Probing and Coexistence with Reno/CUBIC</name>
              <t>BBR has an explicit strategy for coexistence with Reno/CUBIC: to try to behave
in a manner such that Reno/CUBIC flows coexisting with BBR can continue to
work well in the primary contexts where they do today:</t>
              <ul spacing="normal">
                <li>
                  <t>Intra-datacenter/LAN traffic: the goal is to allow Reno/CUBIC to be able
to perform well in 100M through 40G enterprise and datacenter Ethernet:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>BDP = 40 Gbps * 20 us / (1514 bytes) ~= 66 packets</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>Public Internet last mile traffic: the goal is to allow Reno/CUBIC to be
able to support up to 25Mbps (for 4K Video) at an RTT of 30ms, typical
parameters for common CDNs for large video services:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>BDP = 25Mbps * 30 ms / (1514 bytes) ~= 62 packets</t>
                    </li>
                  </ul>
                </li>
              </ul>
              <t>The challenge in meeting these goals is that Reno/CUBIC need long periods
of no loss to utilize large BDPs. The good news is that in the environments
where Reno/CUBIC work well today (mentioned above), the BDPs are small, roughly
~100 packets or less.</t>
            </section>
            <section anchor="a-dual-time-scale-approach-for-coexistence">
              <name>A Dual-Time-Scale Approach for Coexistence</name>
              <t>The BBR strategy has several aspects:</t>
              <ol spacing="normal" type="1"><li>
                  <t>The highest priority is to estimate the bandwidth available to the BBR flow
  in question.</t>
                </li>
                <li>
                  <t>Secondarily, a given BBR flow adapts (within bounds) the frequency at which
  it probes bandwidth and knowingly risks packet loss, to allow Reno/CUBIC
  to reach a bandwidth at least as high as that given BBR flow.</t>
                </li>
              </ol>
              <t>To adapt the frequency of bandwidth probing, BBR considers two time scales:
a BBR-native time scale, and a bounded Reno-conscious time scale:</t>
              <ul spacing="normal">
                <li>
                  <t>T_bbr: BBR-native time-scale  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_bbr = uniformly randomly distributed between 2 and 3 secs</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>T_reno: Reno-coexistence time scale  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_reno_bound = pick_randomly_either({62, 63})</t>
                    </li>
                    <li>
                      <t>reno_bdp = min(BBR.bdp, C.cwnd)</t>
                    </li>
                    <li>
                      <t>T_reno = min(reno_bdp, T_reno_bound) round trips</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>T_probe: The time between bandwidth probe UP phases:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_probe = min(T_bbr, T_reno)</t>
                    </li>
                  </ul>
                </li>
              </ul>
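              <t>As an illustrative example with hypothetical values (converting T_reno
to wall-clock time using the flow's round-trip time):</t>
              <artwork><![CDATA[
  BBR.bdp = 100 packets, C.cwnd = 80 packets, RTT = 30 ms
  reno_bdp = min(100, 80)            = 80 packets
  T_reno   = min(80, 62) round trips = 62 * 30 ms = 1.86 sec
  T_bbr    = random in [2, 3] sec,   e.g.           2.4  sec
  T_probe  = min(2.4 sec, 1.86 sec)  =              1.86 sec
]]></artwork>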
              <t>This dual-time-scale approach is similar to that used by CUBIC, which has
a CUBIC-native time scale given by a cubic curve, and a "Reno emulation"
module that estimates the C.cwnd that would give the flow Reno-equivalent
throughput. At any given moment, CUBIC uses the C.cwnd implied by the more
aggressive strategy.</t>
              <t>We randomize both the T_bbr and T_reno parameters, for better mixing and
fairness convergence.</t>
            </section>
            <section anchor="design-considerations-for-choosing-constant-parameters">
              <name>Design Considerations for Choosing Constant Parameters</name>
              <t>We design the maximum wall-clock bounds of BBR-native inter-bandwidth-probe
wall clock time, T_bbr, to be:</t>
              <ul spacing="normal">
                <li>
                  <t>Higher than 2 sec, to avoid causing loss for a long enough time to
allow a Reno flow with RTT=30ms to get 25Mbps (4K video) throughput. For this
workload, given the Reno sawtooth that raises C.cwnd from roughly BDP to 2*BDP
at one C.SMSS per round trip, the inter-bandwidth-probe time must be at least:
BDP * RTT = (25Mbps * 0.030 sec) / (8 * 1514 bytes) * 0.030 sec ~= 1.9 secs</t>
                </li>
                <li>
                  <t>Lower than 3 sec, to ensure flows can start probing within a reasonable
amount of time to discover unutilized bandwidth on human-scale interactive
time-scales (e.g. perhaps traffic from a competing web page download is now
complete).</t>
                </li>
              </ul>
              <t>The maximum round-trip bounds of the Reno-coexistence time scale, T_reno,
are chosen to be 62-63 with the following considerations in mind:</t>
              <ul spacing="normal">
                <li>
                  <t>Choosing a value smaller than roughly 60 would imply that when BBR flows
coexist with Reno/CUBIC flows on public Internet broadband links, the
Reno/CUBIC flows would not be able to achieve enough bandwidth to sustain 4K
video.</t>
                </li>
                <li>
                  <t>Choosing a value that is too large would prevent BBR from reaching its goal
of tolerating 1% loss per round trip.
Given that the steady-state (non-bandwidth-probing) BBR response to
a non-application-limited round trip with X% packet loss is to
reduce the sending rate by X% (see "Updating the Model Upon Packet
Loss" in <xref target="updating-the-model-upon-packet-loss"/>), this means that the
BBR sending rate after N rounds of packet loss at a rate loss_rate
is reduced to (1 - loss_rate)^N.
A simplified model predicts that for a flow that encounters 1% loss
in N=137 round trips of ProbeBW_CRUISE, and then doubles its C.cwnd
(back to BBR.inflight_longterm) in ProbeBW_REFILL and ProbeBW_UP, we
expect that it will be able to restore and reprobe its original
sending rate, since: (1 - loss_rate)^N * 2^2 = (1 - .01)^137 * 2^2
~= 1.01.
That is, we expect the flow will be able to fully respond to packet
loss signals in ProbeBW_CRUISE while also fully re-measuring its
maximum achievable throughput in ProbeBW_UP.
However, with a larger number of round trips of ProbeBW_CRUISE, the
flow would not be able to sustain its achievable throughput.</t>
                </li>
              </ul>
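              <t>Expanding the arithmetic above (a non-normative check):</t>
              <artwork><![CDATA[
  (1 - loss_rate)^N * 2^2 = (1 - .01)^137 * 2^2
                         ~= 0.2523 * 4
                         ~= 1.01
]]></artwork>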
              <t>The resulting behavior is that for BBR flows with small BDPs, the bandwidth
probing will be on roughly the same time scale as Reno/CUBIC; flows with
large BDPs will intentionally probe more rapidly/frequently than Reno/CUBIC
would (roughly every 62 round trips for low-RTT flows, or 2-3 secs for
high-RTT flows).</t>
              <t>The considerations above for timing bandwidth probing can be implemented
as follows:</t>
              <artwork><![CDATA[
  /* Is it time to transition from DOWN or CRUISE to REFILL? */
  BBRIsTimeToProbeBW():
    if (BBRHasElapsedInPhase(BBR.bw_probe_wait) ||
        BBRIsRenoCoexistenceProbeTime())
      BBRStartProbeBW_REFILL()
      return true
    return false

  /* Randomized decision about how long to wait until
   * probing for bandwidth, using round count and wall clock.
   */
  BBRPickProbeWait():
    /* Decide random round-trip bound for wait: */
    BBR.rounds_since_bw_probe =
      random_int_between(0, 1); /* 0 or 1 */
    /* Decide the random wall clock bound for wait: */
    BBR.bw_probe_wait =
      2 + random_float_between(0.0, 1.0) /* 0..1 sec */

  BBRIsRenoCoexistenceProbeTime():
    reno_rounds = BBRTargetInflight()
    rounds = min(reno_rounds, 63)
    return BBR.rounds_since_bw_probe >= rounds

  /* How much data do we want in flight?
   * Our estimated BDP, unless congestion cut C.cwnd. */
  BBRTargetInflight():
    return min(BBR.bdp, C.cwnd)
]]></artwork>
            </section>
          </section>
          <section anchor="probebw-algorithm-details">
            <name>ProbeBW Algorithm Details</name>
            <t>BBR's ProbeBW algorithm operates as follows.</t>
            <t>Upon entering ProbeBW, BBR executes:</t>
            <artwork><![CDATA[
  BBREnterProbeBW():
    BBR.cwnd_gain = BBR.DefaultCwndGain
    BBRStartProbeBW_DOWN()
]]></artwork>
            <t>The core logic for entering each state:</t>
            <artwork><![CDATA[
  BBRStartProbeBW_DOWN():
    BBRResetCongestionSignals()
    BBR.probe_up_cnt = Infinity /* not growing BBR.inflight_longterm */
    BBRPickProbeWait()
    BBR.cycle_stamp = Now()  /* start wall clock */
    BBR.ack_phase  = ACKS_PROBE_STOPPING
    BBRStartRound()
    BBR.state = ProbeBW_DOWN

  BBRStartProbeBW_CRUISE():
    BBR.state = ProbeBW_CRUISE

  BBRStartProbeBW_REFILL():
    BBRResetShortTermModel()
    BBR.bw_probe_up_rounds = 0
    BBR.bw_probe_up_acks = 0
    BBR.ack_phase = ACKS_REFILLING
    BBRStartRound()
    BBR.state = ProbeBW_REFILL

  BBRStartProbeBW_UP():
    BBR.ack_phase = ACKS_PROBE_STARTING
    BBRStartRound()
    BBRResetFullBW()
    BBR.full_bw = RS.delivery_rate
    BBR.state = ProbeBW_UP
    BBRRaiseInflightLongtermSlope()
]]></artwork>
            <t>BBR executes the following BBRUpdateProbeBWCyclePhase() logic on each ACK
that acknowledges new data, to advance the ProbeBW state machine:</t>
            <artwork><![CDATA[
  /* The core state machine logic for ProbeBW: */
  BBRUpdateProbeBWCyclePhase():
    if (!BBR.full_bw_reached)
      return  /* only handling steady-state behavior here */
    BBRAdaptLongTermModel()
    if (!IsInAProbeBWState())
      return /* only handling ProbeBW states here: */

    switch (state)

    ProbeBW_DOWN:
      if (BBRIsTimeToProbeBW())
        return /* already decided state transition */
      if (BBRIsTimeToCruise())
        BBRStartProbeBW_CRUISE()

    ProbeBW_CRUISE:
      if (BBRIsTimeToProbeBW())
        return /* already decided state transition */

    ProbeBW_REFILL:
      /* After one round of REFILL, start UP */
      if (BBR.round_start)
        BBR.bw_probe_samples = 1
        BBRStartProbeBW_UP()

    ProbeBW_UP:
      if (BBRIsTimeToGoDown())
        BBRStartProbeBW_DOWN()
]]></artwork>
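            <t>As a non-normative illustration, a bulk flow in steady state cycles
through the ProbeBW phases along these lines:</t>
            <artwork><![CDATA[
  ProbeBW_DOWN:   drain the estimated excess in-flight data,
                  until BBRIsTimeToCruise() allows CRUISE
  ProbeBW_CRUISE: utilize the estimated bandwidth, leaving
                  headroom, until BBRIsTimeToProbeBW()
  ProbeBW_REFILL: send at the estimated bandwidth for one
                  round, to refill the pipe
  ProbeBW_UP:     probe for more bandwidth until
                  BBRIsTimeToGoDown(), then start DOWN again
]]></artwork>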
            <t>The ancillary logic to implement the ProbeBW state machine:</t>
            <artwork><![CDATA[
  IsInAProbeBWState()
    state = BBR.state
    return (state == ProbeBW_DOWN or
            state == ProbeBW_CRUISE or
            state == ProbeBW_REFILL or
            state == ProbeBW_UP)

  /* Time to transition from DOWN to CRUISE? */
  BBRIsTimeToCruise():
    if (C.inflight > BBRInflightWithHeadroom())
      return false /* not enough headroom */
    if (C.inflight <= BBRInflight(BBR.max_bw, 1.0))
      return true  /* C.inflight <= estimated BDP */
    return false

  /* Time to transition from UP to DOWN? */
  BBRIsTimeToGoDown():
    if (C.is_cwnd_limited and C.cwnd >= BBR.inflight_longterm)
      BBRResetFullBW()   /* bw is limited by BBR.inflight_longterm */
      BBR.full_bw = RS.delivery_rate
    else if (BBR.full_bw_now)
      return true  /* we estimate we've fully used path bw */
    return false

  BBRIsProbingBW():
    return (BBR.state == Startup or
            BBR.state == ProbeBW_REFILL or
            BBR.state == ProbeBW_UP)

  BBRHasElapsedInPhase(interval):
    return Now() > BBR.cycle_stamp + interval

  /* Return a volume of data that tries to leave free
   * headroom in the bottleneck buffer or link for
   * other flows, for fairness convergence and lower
   * RTTs and loss */
  BBRInflightWithHeadroom():
    if (BBR.inflight_longterm == Infinity)
      return Infinity
    headroom = max(1*SMSS, BBR.Headroom * BBR.inflight_longterm)
    return max(BBR.inflight_longterm - headroom,
               BBR.MinPipeCwnd)

  /* Raise BBR.inflight_longterm slope if appropriate. */
  BBRRaiseInflightLongtermSlope():
    growth_this_round = 1*SMSS << BBR.bw_probe_up_rounds
    BBR.bw_probe_up_rounds = min(BBR.bw_probe_up_rounds + 1, 30)
    BBR.probe_up_cnt = max(C.cwnd / growth_this_round, 1)

  /* Increase BBR.inflight_longterm if appropriate. */
  BBRProbeInflightLongtermUpward():
    if (!C.is_cwnd_limited or C.cwnd < BBR.inflight_longterm)
      return  /* not fully using BBR.inflight_longterm, so don't grow it */
    BBR.bw_probe_up_acks += RS.newly_acked
    if (BBR.bw_probe_up_acks >= BBR.probe_up_cnt)
      delta = BBR.bw_probe_up_acks / BBR.probe_up_cnt
      BBR.bw_probe_up_acks -= delta * BBR.probe_up_cnt
      BBR.inflight_longterm += delta
    if (BBR.round_start)
      BBRRaiseInflightLongtermSlope()

  /* Track ACK state and update BBR.max_bw window and
   * BBR.inflight_longterm. */
  BBRAdaptLongTermModel():
    if (BBR.ack_phase == ACKS_PROBE_STARTING and BBR.round_start)
      /* starting to get bw probing samples */
      BBR.ack_phase = ACKS_PROBE_FEEDBACK
    if (BBR.ack_phase == ACKS_PROBE_STOPPING and BBR.round_start)
      /* end of samples from bw probing phase */
      if (IsInAProbeBWState() and !RS.is_app_limited)
        BBRAdvanceMaxBwFilter()

    if (!IsInflightTooHigh())
      /* Loss rate is safe. Adjust upper bounds upward. */
      if (BBR.inflight_longterm == Infinity)
        return /* no upper bounds to raise */
      if (RS.tx_in_flight > BBR.inflight_longterm)
        BBR.inflight_longterm = RS.tx_in_flight
      if (BBR.state == ProbeBW_UP)
        BBRProbeInflightLongtermUpward()
]]></artwork>
          </section>
        </section>
        <section anchor="probertt">
          <name>ProbeRTT</name>
          <section anchor="probertt-overview">
            <name>ProbeRTT Overview</name>
            <t>To help probe for BBR.min_rtt, on an as-needed basis BBR flows enter the
ProbeRTT state to try to cooperate to periodically drain the bottleneck queue,
and thus improve their BBR.min_rtt estimate of the unloaded two-way propagation
delay.</t>
            <t>A critical point is that before BBR raises its BBR.min_rtt
estimate (which would in turn raise its maximum permissible C.cwnd), it first
enters ProbeRTT to try to make a concerted and coordinated effort to drain
the bottleneck queue and make a robust BBR.min_rtt measurement. This allows the
BBR.min_rtt estimates of ensembles of BBR flows to converge, avoiding feedback
loops of ever-increasing queues and RTT samples.</t>
            <t>The ProbeRTT state works in concert with BBR.min_rtt estimation. Up to once
every ProbeRTTInterval = 5 seconds, the flow enters ProbeRTT, decelerating
by setting its cwnd_gain to BBR.ProbeRTTCwndGain = 0.5 to reduce
C.inflight to half of its estimated BDP, to try to measure the unloaded
two-way propagation delay.</t>
            <t>There are two main motivations for making the MinRTTFilterLen roughly twice
the ProbeRTTInterval. First, this ensures that during a ProbeRTT episode
the flow will "remember" the BBR.min_rtt value it measured during the previous
ProbeRTT episode, providing a robust BDP estimate for the C.cwnd = 0.5*BDP
calculation, increasing the likelihood of fully draining the bottleneck
queue. Second, this allows the flow's BBR.min_rtt filter window to generally
include RTT samples from two ProbeRTT episodes, providing a more robust
estimate.</t>
            <t>The algorithm for ProbeRTT is as follows:</t>
            <t>Entry conditions: In any state other than ProbeRTT itself, if the
BBR.probe_rtt_min_delay estimate has not been updated (i.e., by getting a
lower RTT measurement) for more than ProbeRTTInterval = 5 seconds, then BBR
enters ProbeRTT and reduces the BBR.cwnd_gain to BBR.ProbeRTTCwndGain = 0.5.</t>
            <t>Exit conditions: After maintaining C.inflight at
BBR.ProbeRTTCwndGain*BBR.bdp for at least BBR.ProbeRTTDuration (200 ms) and at
least one packet-timed round trip, BBR leaves ProbeRTT and transitions to
ProbeBW if it estimates the pipe was filled already, or Startup otherwise.</t>
          </section>
          <section anchor="probertt-design-rationale">
            <name>ProbeRTT Design Rationale</name>
            <t>BBR is designed to have ProbeRTT sacrifice no more than roughly 2% of a flow's
available bandwidth. It is also designed to spend the vast majority of its
time (at least roughly 96 percent) in ProbeBW and the rest in ProbeRTT, based
on a set of tradeoffs. ProbeRTT lasts long enough (at least BBR.ProbeRTTDuration
= 200 ms) to allow diverse flows (e.g., flows with different RTTs or lower
rates and thus longer inter-packet gaps) to have overlapping ProbeRTT states,
while still being short enough to bound the throughput penalty of ProbeRTT's
cwnd capping to roughly 2%, with the average throughput targeted at:</t>
            <artwork><![CDATA[
  throughput = (200ms*0.5*BBR.bw + (5s - 200ms)*BBR.bw) / 5s
             = (.1s + 4.8s)/5s * BBR.bw = 0.98 * BBR.bw
]]></artwork>
            <t>As discussed above, BBR's BBR.min_rtt filter window, BBR.MinRTTFilterLen, and
time interval between ProbeRTT states, ProbeRTTInterval, work in concert.
BBR uses a BBR.MinRTTFilterLen equal to or longer than BBR.ProbeRTTInterval to allow
the filter window to include at least one ProbeRTT.</t>
            <t>To allow coordination with other BBR flows, each BBR flow MUST use the
standard BBR.ProbeRTTInterval of 5 secs.</t>
            <t>A BBR.ProbeRTTInterval of 5 secs is short enough to allow quick convergence if
traffic levels or routes change, but long enough so that interactive
applications (e.g., Web, remote procedure calls, video chunks) often have
natural silences or low-rate periods within the window where the flow's rate
is low enough for long enough to drain its queue in the bottleneck. Then the
BBR.probe_rtt_min_delay filter opportunistically picks up these measurements,
and the BBR.probe_rtt_min_delay estimate refreshes without requiring
ProbeRTT. This way, flows typically need only pay the 2 percent throughput
penalty if there are multiple bulk flows busy sending over the entire
BBR.ProbeRTTInterval window.</t>
            <t>As an optimization, when restarting from idle and finding that the
BBR.probe_rtt_min_delay has expired, BBR does not enter ProbeRTT; the idleness
is deemed a sufficient attempt to coordinate to drain the queue.</t>
            <t>The frequency of triggering ProbeRTT involves a tradeoff between the speed of
convergence and the throughput penalty of applying a cwnd cap during ProbeRTT.
The interval between ProbeRTTs is a subject of further experimentation.
A longer duration between ProbeRTT would reduce the throughput penalty for bulk
flows or flows on lower BDP links that are less likely to have silences or
low-rate periods, at the cost of slower convergence. Furthermore, some types
of links can switch between paths of significantly different base
RTT (e.g. LEO satellite or cellular handoff). If these path changes can be
predicted or detected, initiating a ProbeRTT immediately could conceivably
speed up the convergence to an accurate BBR.min_rtt, especially when it
has increased.</t>
          </section>
          <section anchor="probertt-logic">
            <name>ProbeRTT Logic</name>
            <t>On every ACK BBR executes BBRUpdateMinRTT() to update its ProbeRTT scheduling
state (BBR.probe_rtt_min_delay and BBR.probe_rtt_min_stamp) and its BBR.min_rtt
estimate:</t>
            <artwork><![CDATA[
  BBRUpdateMinRTT()
    BBR.probe_rtt_expired =
      Now() > BBR.probe_rtt_min_stamp + ProbeRTTInterval
    if (RS.rtt >= 0 and
        (RS.rtt < BBR.probe_rtt_min_delay or
         BBR.probe_rtt_expired))
       BBR.probe_rtt_min_delay = RS.rtt
       BBR.probe_rtt_min_stamp = Now()

    min_rtt_expired =
      Now() > BBR.min_rtt_stamp + MinRTTFilterLen
    if (BBR.probe_rtt_min_delay < BBR.min_rtt or
        min_rtt_expired)
      BBR.min_rtt       = BBR.probe_rtt_min_delay
      BBR.min_rtt_stamp = BBR.probe_rtt_min_stamp
]]></artwork>
            <t>Here BBR.probe_rtt_expired is a boolean recording whether the
BBR.probe_rtt_min_delay has expired and is due for a refresh, via either
an application idle period or a transition into ProbeRTT state.</t>
            <t>On every ACK BBR executes BBRCheckProbeRTT() to handle the steps related
to the ProbeRTT state as follows:</t>
            <artwork><![CDATA[
  BBRCheckProbeRTT():
    if (BBR.state != ProbeRTT and
        BBR.probe_rtt_expired and
        not BBR.idle_restart)
      BBREnterProbeRTT()
      BBRSaveCwnd()
      BBR.probe_rtt_done_stamp = 0
      BBR.ack_phase = ACKS_PROBE_STOPPING
      BBRStartRound()
    if (BBR.state == ProbeRTT)
      BBRHandleProbeRTT()
    if (RS.delivered > 0)
      BBR.idle_restart = false

  BBREnterProbeRTT():
    BBR.state = ProbeRTT
    BBR.pacing_gain = 1
    BBR.cwnd_gain = BBR.ProbeRTTCwndGain  /* 0.5 */

  BBRHandleProbeRTT():
    /* Ignore low rate samples during ProbeRTT: */
    MarkConnectionAppLimited()
    if (BBR.probe_rtt_done_stamp == 0 and
        C.inflight <= BBRProbeRTTCwnd())
      /* Wait for at least ProbeRTTDuration to elapse: */
      BBR.probe_rtt_done_stamp =
        Now() + ProbeRTTDuration
      /* Wait for at least one round to elapse: */
      BBR.probe_rtt_round_done = false
      BBRStartRound()
    else if (BBR.probe_rtt_done_stamp != 0)
      if (BBR.round_start)
        BBR.probe_rtt_round_done = true
      if (BBR.probe_rtt_round_done)
        BBRCheckProbeRTTDone()

  BBRCheckProbeRTTDone():
    if (BBR.probe_rtt_done_stamp != 0 and
        Now() > BBR.probe_rtt_done_stamp)
      /* schedule next ProbeRTT: */
      BBR.probe_rtt_min_stamp = Now()
      BBRRestoreCwnd()
      BBRExitProbeRTT()

  MarkConnectionAppLimited():
    /* Mark app-limited; use 1 if the sum is zero: */
    C.app_limited =
      (C.delivered + C.inflight) ? (C.delivered + C.inflight) : 1
]]></artwork>
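            <t>As a non-normative example of the resulting timeline, assume a flow with
BBR.min_rtt = 50 ms whose BBR.probe_rtt_min_delay expires at time t=0:</t>
            <artwork><![CDATA[
  t = 0:      enter ProbeRTT; BBR.cwnd_gain = 0.5
  t ~= 50ms:  C.inflight drains below BBRProbeRTTCwnd();
              probe_rtt_done_stamp = Now() + 200ms
  t ~= 100ms: one more round elapses;
              probe_rtt_round_done = true
  t ~= 250ms: Now() > probe_rtt_done_stamp, so restore
              C.cwnd and exit ProbeRTT
]]></artwork>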
          </section>
          <section anchor="exiting-probertt">
            <name>Exiting ProbeRTT</name>
            <t>When exiting ProbeRTT, BBR transitions to ProbeBW if it estimates the pipe
was filled already, or Startup otherwise.</t>
            <t>When transitioning out of ProbeRTT, BBR calls BBRResetShortTermModel() to reset
the short-term model, since any congestion encountered in ProbeRTT may have pulled
it far below the capacity of the path.</t>
            <t>But the algorithm is cautious in timing the next bandwidth probe: raising
C.inflight after ProbeRTT may cause loss, so the algorithm resets the
bandwidth-probing clock by starting the cycle in ProbeBW_DOWN. Then, as an
optimization, since the connection is exiting ProbeRTT, we know that C.inflight
is already below the estimated BDP, so the connection can proceed immediately
to ProbeBW_CRUISE.</t>
            <t>To summarize, the logic for exiting ProbeRTT is as follows:</t>
            <artwork><![CDATA[
  BBRExitProbeRTT():
    BBRResetShortTermModel()
    if (BBR.full_bw_reached)
      BBRStartProbeBW_DOWN()
      BBRStartProbeBW_CRUISE()
    else
      BBREnterStartup()
]]></artwork>
          </section>
        </section>
      </section>
      <section anchor="restarting-from-idle">
        <name>Restarting From Idle</name>
        <section anchor="actions-when-restarting-from-idle">
          <name>Actions when Restarting from Idle</name>
          <t>When restarting from idle in ProbeBW states, BBR leaves C.cwnd as-is and
paces packets at exactly BBR.bw, aiming to return as quickly as possible
to its target operating point of rate balance and a full pipe. Specifically, if
the flow's BBR.state is ProbeBW, and the flow is application-limited, and there
are no packets in flight currently, then before the flow sends one or more
packets BBR sets C.pacing_rate to exactly BBR.bw.</t>
          <t>Also, when restarting from idle BBR checks to see if the connection is in
ProbeRTT and has met the exit conditions for ProbeRTT. If a connection goes
idle during ProbeRTT then often it will have met those exit conditions by
the time it restarts, so that the connection can restore C.cwnd to its full
value before it starts transmitting a new flight of data.</t>
          <t>More precisely, the BBR algorithm takes the following steps in
BBRHandleRestartFromIdle() before sending a packet for a flow:</t>
          <artwork><![CDATA[
  BBRHandleRestartFromIdle():
    if (C.inflight == 0 and C.app_limited)
      BBR.idle_restart = true
      BBR.extra_acked_interval_start = Now()
      if (IsInAProbeBWState())
        BBRSetPacingRateWithGain(1)
      else if (BBR.state == ProbeRTT)
        BBRCheckProbeRTTDone()
]]></artwork>
        </section>
        <section anchor="previous-idle-restart">
          <name>Comparison with Previous Approaches</name>
          <t>The "Restarting Idle Connections" section of <xref target="RFC5681"/> suggests restarting
from idle by slow-starting from the initial window. However, that approach
assumed a congestion control algorithm with no estimate of the bottleneck
bandwidth and no pacing, which thus had to resort to slow-starting driven
by an ACK clock. The long (log_2(BDP)*RTT) delays required to reach full
utilization with that "slow start after idle" approach caused many large
deployments to disable this mechanism, resulting in a "BDP-scale line-rate
burst" approach instead. Instead of either of these approaches, BBR restarts by
pacing at BBR.bw, typically achieving approximate rate balance and a full pipe
after only one BBR.min_rtt has elapsed.</t>
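          <t>To illustrate the difference (a non-normative example): for a flow with a
BDP of roughly 640 packets and an initial window of 10 packets, "slow start
after idle" needs roughly log_2(640/10) = 6 round trips to return to full
utilization, whereas pacing at BBR.bw approximately fills the pipe after a
single BBR.min_rtt.</t>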
        </section>
      </section>
      <section anchor="updating-network-path-model-parameters">
        <name>Updating Network Path Model Parameters</name>
        <t>BBR is a model-based congestion control algorithm: it is based on an explicit
model of the network path over which a transport flow travels. The following
is a summary of each parameter, including its meaning and how the algorithm
calculates and uses its value. We can group the parameters into three groups:</t>
        <ul spacing="normal">
          <li>
            <t>core state machine parameters</t>
          </li>
          <li>
            <t>parameters to model the appropriate data rate</t>
          </li>
          <li>
            <t>parameters to model the appropriate inflight</t>
          </li>
        </ul>
        <section anchor="bbrroundcount-tracking-packet-timed-round-trips">
          <name>BBR.round_count: Tracking Packet-Timed Round Trips</name>
          <t>Several aspects of BBR depend on counting the progress of "packet-timed"
round trips, which start at the transmission of some packet, and then end
at the acknowledgment of that packet. BBR.round_count is a count of the number
of these "packet-timed" round trips elapsed so far. BBR uses this virtual
BBR.round_count because it is more robust than using wall clock time. In
particular, arbitrary intervals of wall clock time can elapse due to
application idleness, variations in RTTs, or timer delays for retransmission
timeouts, causing wall-clock-timed model parameter estimates to "time out"
or to be "forgotten" too quickly to provide robustness.</t>
          <t>BBR counts packet-timed round trips by recording state about a sentinel packet,
and waiting for an ACK of any data packet that was sent after that sentinel
packet, using the following pseudocode:</t>
          <t>Upon connection initialization:</t>
          <artwork><![CDATA[
  BBRInitRoundCounting():
    BBR.next_round_delivered = 0
    BBR.round_start = false
    BBR.round_count = 0
]]></artwork>
          <t>Upon sending each packet, the rate estimation algorithm in
<xref target="delivery-rate-samples"/> records the amount of data thus far
acknowledged as delivered:</t>
          <artwork><![CDATA[
  P.delivered = C.delivered
]]></artwork>
          <t>Upon receiving an ACK for a given data packet, the rate estimation algorithm
in <xref target="delivery-rate-samples"/> updates the amount of data thus far
acknowledged as delivered:</t>
          <artwork><![CDATA[
    C.delivered += P.size
]]></artwork>
          <t>Upon receiving an ACK for a given data packet, the BBR algorithm first executes
the following logic to see if a round trip has elapsed, and if so, increment
the count of such round trips elapsed:</t>
          <artwork><![CDATA[
  BBRUpdateRound():
    if (packet.delivered >= BBR.next_round_delivered)
      BBRStartRound()
      BBR.round_count++
      BBR.rounds_since_bw_probe++
      BBR.round_start = true
    else
      BBR.round_start = false

  BBRStartRound():
    BBR.next_round_delivered = C.delivered
]]></artwork>
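          <t>As a non-normative trace of this round counting, assume each packet
carries 1000 bytes and C.delivered starts at 0:</t>
          <artwork><![CDATA[
  send P1: P1.delivered = 0    (sentinel for round 1)
  send P2: P2.delivered = 0
  ACK P1:  C.delivered = 1000; P1.delivered (0) >=
           BBR.next_round_delivered (0), so a round ends:
           BBR.round_count = 1,
           BBR.next_round_delivered = 1000
  ACK P2:  C.delivered = 2000; P2.delivered (0) < 1000,
           so the connection is still in round 2
  send P3: P3.delivered = 2000 (ends round 2 when ACKed)
]]></artwork>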
        </section>
        <section anchor="bbrmaxbw-estimated-maximum-bandwidth">
          <name>BBR.max_bw: Estimated Maximum Bandwidth</name>
          <t>BBR.max_bw is BBR's estimate of the maximum bottleneck bandwidth available to
data transmissions for the transport flow. At any time, a transport
connection's data transmissions experience some slowest link or bottleneck. The
bottleneck's delivery rate determines the connection's maximum data-delivery
rate. BBR tries to closely match its sending rate to this bottleneck delivery
rate to help seek "rate balance", where the flow's packet arrival rate at the
bottleneck equals the departure rate. The bottleneck rate varies over the life
of a connection, so BBR continually estimates BBR.max_bw using recent signals.</t>
        </section>
        <section anchor="bbrmaxbw-max-filter">
          <name>BBR.max_bw Max Filter</name>
          <t>Delivery rate samples are often below the typical bottleneck bandwidth
available to the flow, due to "noise" introduced by random variation in
physical transmission processes (e.g. radio link layer noise) or queues
along the network path. To filter out these effects BBR uses a max filter: BBR
estimates BBR.max_bw using the windowed maximum recent delivery rate sample
seen by the connection over recent history.</t>
          <t>The BBR.max_bw max filter window covers a time period extending over the
past two ProbeBW cycles. The BBR.max_bw max filter window length is driven
by trade-offs among several considerations:</t>
          <ul spacing="normal">
            <li>
              <t>It is long enough to cover at least one entire ProbeBW cycle (see the
"ProbeBW" section). This ensures that the window contains at least some
delivery rate samples that are the result of data transmitted with a
super-unity pacing_gain (a pacing_gain larger than 1.0). Such super-unity
delivery rate samples are instrumental in revealing the path's underlying
available bandwidth even when there is noise from delivery rate shortfalls
due to aggregation delays, queuing delays from variable cross-traffic, lossy
link layers with uncorrected losses, or short-term buffer exhaustion (e.g.,
brief coincident bursts in a shallow buffer).</t>
            </li>
            <li>
              <t>It aims to be long enough to cover short-term fluctuations in the network's
delivery rate due to the aforementioned sources of noise. In particular, the
delivery rate for radio link layers (e.g., wifi and cellular technologies)
can be highly variable, and the filter window needs to be long enough to
remember "good" delivery rate samples in order to be robust to such
variations.</t>
            </li>
            <li>
              <t>It aims to be short enough to respond in a timely manner to sustained
reductions in the bandwidth available to a flow, whether this is because
other flows are using a larger share of the bottleneck, or the bottleneck
link service rate has reduced due to layer 1 or layer 2 changes, policy
changes, or routing changes. In any of these cases, existing BBR flows
traversing the bottleneck should, in a timely manner, reduce their BBR.max_bw
estimates and thus pacing rate and in-flight data, in order to match the
sending behavior to the new available bandwidth.</t>
            </li>
          </ul>
        </section>
        <section anchor="bbrmaxbw-and-application-limited-delivery-rate-samples">
          <name>BBR.max_bw and Application-limited Delivery Rate Samples</name>
          <t>Transmissions can be application-limited, meaning the transmission rate is
limited by the application rather than the congestion control algorithm.  This
is quite common because of request/response traffic. When there is a
transmission opportunity but no data to send, the delivery rate sampler marks
the corresponding bandwidth sample(s) as application-limited
<xref target="delivery-rate-samples"/>.  The BBR.max_bw estimator carefully decides which
samples to include in the bandwidth model to ensure that BBR.max_bw reflects
network limits, not application limits. By default, the estimator discards
application-limited samples, since by definition they reflect application
limits. However, the estimator does use application-limited samples if the
measured delivery rate happens to be larger than the current BBR.max_bw
estimate, since this indicates the current BBR.max_bw estimate is too low.</t>
        </section>
        <section anchor="updating-the-bbrmaxbw-max-filter">
          <name>Updating the BBR.max_bw Max Filter</name>
          <t>For every ACK that acknowledges some data packets as delivered, BBR invokes
BBRUpdateMaxBw() to update the BBR.max_bw estimator as follows:</t>
          <artwork><![CDATA[
  BBRUpdateMaxBw()
    BBRUpdateRound()
    if (RS.delivery_rate > 0 &&
        (RS.delivery_rate >= BBR.max_bw || !RS.is_app_limited))
        BBR.max_bw = UpdateWindowedMaxFilter(
                      filter=BBR.max_bw_filter,
                      value=RS.delivery_rate,
                      time=BBR.cycle_count,
                      window_length=MaxBwFilterLen)
]]></artwork>
          <t>UpdateWindowedMaxFilter() can be implemented using Kathleen Nichols' algorithm
for tracking the minimum/maximum value of a data stream over some measurement
window. The description of the algorithm and a sample implementation are
available in Linux <xref target="KN_FILTER"/>.</t>
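          <t>Since the BBR.max_bw filter only needs samples from two virtual time
slots, a non-normative implementation sketch can keep just two slots, where
bw[0] and bw[1] are hypothetical names for the max samples of the previous and
current slots:</t>
          <artwork><![CDATA[
  UpdateWindowedMaxFilter(filter, value, time, window_length):
    if (time has advanced past the last recorded slot)
      bw[0] = bw[1]  /* current slot becomes previous slot */
      bw[1] = 0
    bw[1] = max(bw[1], value)
    return max(bw[0], bw[1])
]]></artwork>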
        </section>
        <section anchor="tracking-time-for-the-bbrmaxbw-max-filter">
          <name>Tracking Time for the BBR.max_bw Max Filter</name>
          <t>BBR tracks time for the BBR.max_bw filter window using a virtual
(non-wall-clock) time tracked by counting the cyclical progression through
ProbeBW cycles. Each time through the ProbeBW cycle, one round trip after
exiting ProbeBW_UP (the point at which the flow has its best chance to measure
the highest throughput of the cycle), BBR increments BBR.cycle_count, the
virtual time used by the BBR.max_bw filter window. Note that BBR.cycle_count
only needs to be tracked with a single bit, since the BBR.max_bw filter only
needs to track samples from two time slots: the previous ProbeBW cycle and the
current ProbeBW cycle:</t>
          <artwork><![CDATA[
  BBRAdvanceMaxBwFilter():
    BBR.cycle_count++
]]></artwork>
        </section>
        <section anchor="bbrminrtt-estimated-minimum-round-trip-time">
          <name>BBR.min_rtt: Estimated Minimum Round-Trip Time</name>
          <t>BBR.min_rtt is BBR's estimate of the round-trip propagation delay of the path
over which a transport connection is sending. The path's round-trip propagation
delay determines the minimum amount of time over which the connection must be
willing to sustain transmissions at the BBR.bw rate, and thus the minimum
amount of data needed in flight, for the connection to reach full utilization
(a "Full Pipe"). The round-trip propagation delay can vary over the life of a
connection, so BBR continually estimates BBR.min_rtt using recent round-trip
delay samples.</t>
          <section anchor="round-trip-time-samples-for-estimating-bbrminrtt">
            <name>Round-Trip Time Samples for Estimating BBR.min_rtt</name>
            <t>For every data packet a connection sends, BBR calculates an RTT sample that
measures the time interval from sending a data packet until that packet is
acknowledged.</t>
            <t>The only divergence from RTT estimation for retransmission timeouts is in the
case where a given acknowledgment ACKs more than one data packet. In order to
be conservative and schedule long timeouts to avoid spurious retransmissions,
the maximum among such potential RTT samples is typically used for computing
retransmission timeouts; i.e., C.srtt is typically calculated using the data
packet with the earliest transmission time. By contrast, in order for BBR to
try to reach the minimum amount of data in flight to fill the pipe, BBR uses
the minimum among such potential RTT samples; i.e., BBR calculates the RTT
using the data packet with the latest transmission time.</t>
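            <t>The contrast between the two strategies can be sketched,
non-normatively, as follows (the helper name is assumed for illustration):</t>
            <artwork><![CDATA[
  # Non-normative sketch: RTT samples when one ACK covers several
  # data packets. RTO estimation conservatively uses the earliest
  # send time (largest sample); BBR.min_rtt uses the latest send
  # time (smallest sample).
  def rtt_samples(ack_time, send_times):
      samples = [ack_time - t for t in send_times]
      return max(samples), min(samples)  # (for RTO, for BBR.min_rtt)
]]></artwork>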
          </section>
          <section anchor="bbrminrtt-min-filter">
            <name>BBR.min_rtt Min Filter</name>
            <t>RTT samples tend to be above the round-trip propagation delay of the path,
due to "noise" introduced by random variation in physical transmission processes
(e.g. radio link layer noise), queues along the network path, the receiver's
delayed ack strategy, ack aggregation, etc. Thus to filter out these effects
BBR uses a min filter: BBR estimates BBR.min_rtt using the minimum recent
RTT sample seen by the connection over the past BBR.MinRTTFilterLen seconds.
(Many of the same network effects that can decrease delivery rate measurements
can increase RTT samples, which is why BBR's min-filtering approach for RTTs
is the complement of its max-filtering approach for delivery rates.)</t>
            <t>The length of the BBR.min_rtt min filter window is BBR.MinRTTFilterLen = 10 secs.
This is driven by trade-offs among several considerations:</t>
            <ul spacing="normal">
              <li>
                <t>The BBR.MinRTTFilterLen is longer than BBR.ProbeRTTInterval, so that it covers an
entire ProbeRTT cycle (see the "ProbeRTT" section below). This helps ensure
that the window can contain RTT samples that are the result of data
transmitted with C.inflight below the estimated BDP of the flow. Such RTT
samples are important for helping to reveal the path's underlying two-way
propagation delay even when the aforementioned "noise" effects can often
obscure it.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be long enough to avoid needing to reduce in-flight
data and throughput often. Measuring two-way propagation delay requires in-flight
data to be at or below the BDP, which risks some amount of underutilization, so BBR
uses a filter window long enough that such underutilization events can be
rare.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be long enough that many applications have a
"natural" moment of silence or low utilization that can reduce in-flight data below
the BDP and naturally serve to refresh the BBR.min_rtt, without requiring BBR to
force an artificial reduction in in-flight data. This applies to many popular
applications, including Web, RPC, and chunked audio/video traffic.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be short enough to respond in a timely manner to
real increases in the two-way propagation delay of the path, e.g. due to
route changes, which are expected to typically happen on longer time scales.</t>
              </li>
            </ul>
            <t>A BBR implementation MAY use a generic windowed min filter to track BBR.min_rtt.
However, a significant savings in space and improvement in freshness can
be achieved by integrating the BBR.min_rtt estimation into the ProbeRTT state
machine, so this document discusses that approach in the ProbeRTT section.</t>
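            <t>A generic windowed min filter of the kind mentioned above can be
sketched, non-normatively, as follows (names are assumed for illustration):</t>
            <artwork><![CDATA[
  # Non-normative sketch of a generic windowed min filter for
  # BBR.min_rtt over the past BBR.MinRTTFilterLen seconds.
  MIN_RTT_FILTER_LEN = 10.0  # BBR.MinRTTFilterLen, in seconds

  def update_min_rtt(samples, now, rtt):
      """samples: list of (time, rtt) pairs kept within the window."""
      samples.append((now, rtt))
      # discard samples older than the filter window
      samples[:] = [(t, r) for (t, r) in samples
                    if now - t <= MIN_RTT_FILTER_LEN]
      return min(r for (_, r) in samples)  # current BBR.min_rtt
]]></artwork>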
          </section>
        </section>
        <section anchor="bbroffloadbudget">
          <name>BBR.offload_budget</name>
          <t>BBR.offload_budget is the estimate of the minimum volume of data necessary
to achieve full throughput using sender (TSO/GSO) and receiver (LRO, GRO)
host offload mechanisms.  This varies based on the transport protocol and
operating environment.</t>
          <t>For TCP, BBR.offload_budget can be computed as follows:</t>
          <artwork><![CDATA[
    BBRUpdateOffloadBudget():
      BBR.offload_budget = 3 * C.send_quantum
]]></artwork>
          <t>The factor of 3 is chosen to allow maintaining at least:</t>
          <ul spacing="normal">
            <li>
              <t>1 quantum in the sending host's queuing discipline layer</t>
            </li>
            <li>
              <t>1 quantum being segmented in the sending host TSO/GSO engine</t>
            </li>
            <li>
              <t>1 quantum being reassembled or otherwise remaining unacknowledged due to
the receiver host's LRO/GRO/delayed-ACK engine</t>
            </li>
          </ul>
        </section>
        <section anchor="bbrextraacked">
          <name>BBR.extra_acked</name>
          <t>BBR.extra_acked is a volume of data that estimates the recent degree
of aggregation in the network path. For each ACK, the algorithm computes
a sample of the estimated extra ACKed data beyond the amount of data that
the sender expected to be ACKed over the timescale of a round-trip, given
the BBR.bw. Then it computes BBR.extra_acked as the windowed maximum sample
over the last BBRExtraAckedFilterLen=10 packet-timed round-trips. If the
ACK rate falls below the expected bandwidth, then the algorithm estimates
an aggregation episode has terminated, and resets the sampling interval to
start from the current time.</t>
          <t>The BBR.extra_acked thus reflects the recently-measured magnitude of data
and ACK aggregation effects such as batching and slotting at shared-medium
L2 hops (wifi, cellular, DOCSIS), as well as end-host offload mechanisms
(TSO, GSO, LRO, GRO), and end host or middlebox ACK decimation/thinning.</t>
          <t>BBR augments C.cwnd by BBR.extra_acked to allow the connection to keep
sending during inter-ACK silences, to an extent that matches the recently
measured degree of aggregation.</t>
          <t>More precisely, this is computed as:</t>
          <artwork><![CDATA[
  BBRUpdateACKAggregation():
    /* Find excess ACKed beyond expected amount over this interval */
    interval = (Now() - BBR.extra_acked_interval_start)
    expected_delivered = BBR.bw * interval
    /* Reset interval if ACK rate is below expected rate: */
    if (BBR.extra_acked_delivered <= expected_delivered)
        BBR.extra_acked_delivered = 0
        BBR.extra_acked_interval_start = Now()
        expected_delivered = 0
    BBR.extra_acked_delivered += RS.newly_acked
    extra = BBR.extra_acked_delivered - expected_delivered
    extra = min(extra, C.cwnd)
    if (BBR.full_bw_reached)
      filter_len = BBRExtraAckedFilterLen
    else
      filter_len = 1  /* in Startup, just remember 1 round */
    BBR.extra_acked =
      UpdateWindowedMaxFilter(
        filter=BBR.extra_acked_filter,
        value=extra,
        time=BBR.round_count,
        window_length=filter_len)
]]></artwork>
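          <t>For example, with assumed numbers: at a BBR.bw of 100 bytes per time
unit over an interval of 10 time units the sender expects 1000 bytes to be
ACKed; if 1500 bytes were ACKed in that interval, the "extra" sample is
500 bytes, bounded by C.cwnd. A non-normative sketch of this sample
computation:</t>
          <artwork><![CDATA[
  # Non-normative sketch of one "extra ACKed" sample, as in
  # BBRUpdateACKAggregation(): data ACKed in the current aggregation
  # interval beyond the amount expected at rate bw, bounded by cwnd.
  def extra_acked_sample(bw, interval, extra_acked_delivered, cwnd):
      expected_delivered = bw * interval
      extra = extra_acked_delivered - expected_delivered
      return min(extra, cwnd)
]]></artwork>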
        </section>
        <section anchor="updating-the-model-upon-packet-loss">
          <name>Updating the Model Upon Packet Loss</name>
          <t>In every state, BBR responds to (filtered) congestion signals, including
loss. The response to those congestion signals depends on the flow's current
state, since the information that the flow can infer depends on what the
flow was doing when the flow experienced the signal.</t>
          <section anchor="probing-for-bandwidth-in-startup">
            <name>Probing for Bandwidth In Startup</name>
            <t>In Startup, if the congestion signals meet the Startup exit criteria, the flow
exits Startup and enters Drain (see <xref target="exiting-startup-based-on-packet-loss"/>).</t>
          </section>
          <section anchor="probing-for-bandwidth-in-probebw">
            <name>Probing for Bandwidth In ProbeBW</name>
            <t>BBR searches for the maximum volume of data that can be sensibly placed
in flight in the network. A key precondition is that the flow is actually
trying robustly to find that operating point. To implement this, when a flow is
in ProbeBW, and an ACK covers data sent in one of the accelerating phases
(REFILL or UP), and the ACK indicates that the loss rate over the past round
trip exceeds the queue pressure objective, and the flow is not application
limited, and has not yet responded to congestion signals from the most recent
REFILL or UP phase, then the flow estimates that the volume of data it allowed
in flight exceeded what matches the current delivery process on the path, and
reduces BBR.inflight_longterm:</t>
            <artwork><![CDATA[
  /* Do loss signals suggest C.inflight is too high? */
  IsInflightTooHigh():
    return (RS.lost > RS.tx_in_flight * BBR.LossThresh)

  BBRHandleInflightTooHigh():
    BBR.bw_probe_samples = 0  /* only react once per bw probe */
    if (!RS.is_app_limited)
      BBR.inflight_longterm = max(RS.tx_in_flight,
                            BBRTargetInflight() * BBR.Beta)
    if (BBR.state == ProbeBW_UP)
      BBRStartProbeBW_DOWN()
]]></artwork>
            <t>Here RS.tx_in_flight is the C.inflight value
at the time the most recently ACKed packet was sent. The BBR.Beta (0.7x) bound
ensures that BBR does not react more dramatically than CUBIC's
0.7x multiplicative decrease factor.</t>
            <t>Some loss detection algorithms, including RACK <xref target="RFC8985"/> or QUIC loss
detection <xref target="RFC9002"/>, delay loss marking to wait for potential
reordering, so packets can be declared lost long after the loss itself
happened. In such cases, the tx_in_flight for the delivered sequence range
that allowed the loss to be detected may be considerably smaller than the
tx_in_flight of the lost packet itself, and using the former
tx_in_flight rather than the latter can cause BBR.inflight_longterm to be
significantly underestimated. To avoid such issues, BBR processes each loss
detection event to more precisely estimate C.inflight at
which loss rates cross BBR.LossThresh, noting that this may have happened
mid-way through some TSO/GSO offload burst (represented as a "packet" in
the pseudocode in this document). To estimate this threshold volume of data,
we can solve for "lost_prefix" in the following way, where inflight_prev
represents C.inflight preceding this packet, and lost_prev
represents the data lost among that previous in-flight data.</t>
            <t>First we start with:</t>
            <artwork><![CDATA[
  lost / C.inflight >= BBR.LossThresh
]]></artwork>
            <t>Expanding this, we get:</t>
            <artwork><![CDATA[
  (lost_prev + lost_prefix)
  ----------------------------- >= BBR.LossThresh
  (inflight_prev + lost_prefix)
]]></artwork>
            <t>Solving for lost_prefix, we arrive at:</t>
            <artwork><![CDATA[
  lost_prefix >= (BBR.LossThresh * inflight_prev - lost_prev) /
                    (1 - BBR.LossThresh)
]]></artwork>
            <t>In pseudocode:</t>
            <artwork><![CDATA[
  BBRNoteLoss():
    if (!BBR.loss_in_round)   /* first loss in this round trip? */
      BBR.loss_round_delivered = C.delivered
    BBR.loss_in_round = 1

  BBRHandleLostPacket(packet):
    BBRNoteLoss()
    if (!BBR.bw_probe_samples)
      return /* not a packet sent while probing bandwidth */
    RS.tx_in_flight = P.tx_in_flight /* C.inflight at transmit */
    RS.lost = C.lost - P.lost /* data lost since transmit */
    RS.is_app_limited = P.is_app_limited
    if (IsInflightTooHigh())
      RS.tx_in_flight = BBRInflightAtLoss(rs, packet)
      BBRHandleInflightTooHigh()

  /* At what prefix of packet did losses exceed BBR.LossThresh? */
  BBRInflightAtLoss(RS, packet):
    size = packet.size
    /* What was in flight before this packet? */
    inflight_prev = RS.tx_in_flight - size
    /* What was lost before this packet? */
    lost_prev = RS.lost - size
    lost_prefix = (BBR.LossThresh * inflight_prev - lost_prev) /
                  (1 - BBR.LossThresh)
    /* At what C.inflight value did losses cross BBR.LossThresh? */
    inflight_at_loss = inflight_prev + lost_prefix
    return inflight_at_loss
]]></artwork>
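            <t>For example, assuming a BBR.LossThresh of 2%: if a 500-byte packet
is lost when RS.tx_in_flight is 1500 bytes and RS.lost is 510 bytes, then
inflight_prev is 1000 bytes, lost_prev is 10 bytes, and losses crossed
BBR.LossThresh at roughly 1010 bytes in flight. A non-normative sketch of
this computation:</t>
            <artwork><![CDATA[
  # Non-normative sketch of BBRInflightAtLoss(), assuming
  # BBR.LossThresh = 2% for illustration.
  LOSS_THRESH = 0.02

  def inflight_at_loss(tx_in_flight, lost, size):
      inflight_prev = tx_in_flight - size  # in flight before packet
      lost_prev = lost - size              # lost before this packet
      lost_prefix = (LOSS_THRESH * inflight_prev - lost_prev) \
                    / (1 - LOSS_THRESH)
      return inflight_prev + lost_prefix
]]></artwork>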
          </section>
          <section anchor="when-not-probing-for-bandwidth">
            <name>When not Probing for Bandwidth</name>
            <t>When not explicitly accelerating to probe for bandwidth (Drain, ProbeRTT,
ProbeBW_DOWN, ProbeBW_CRUISE), BBR responds to loss by slowing down to some
extent. This is because loss suggests that the available bandwidth and safe
C.inflight may have decreased recently, and the flow needs
to adapt, slowing down toward the latest delivery process. BBR flows implement
this response by reducing the short-term model parameters, BBR.bw_shortterm and
BBR.inflight_shortterm.</t>
            <t>When encountering packet loss when the flow is not probing for bandwidth,
the strategy is to gradually adapt to the current measured delivery process
(the rate and volume of data that is delivered through the network path over
the last round trip). This applies generally: whether in fast recovery, RTO
recovery, TLP recovery; whether application-limited or not.</t>
            <t>There are two key parameters the algorithm tracks to measure the current
delivery process:</t>
            <ul spacing="normal">
              <li>
                <t>BBR.bw_latest: a 1-round-trip max of delivered bandwidth (RS.delivery_rate).</t>
              </li>
              <li>
                <t>BBR.inflight_latest: a 1-round-trip max of delivered volume of data
(RS.delivered).</t>
              </li>
            </ul>
            <t>Upon the ACK at the end of each round that encountered a newly-marked loss,
the flow updates its model (BBR.bw_shortterm and BBR.inflight_shortterm) as follows:</t>
            <artwork><![CDATA[
      bw_shortterm = max(       bw_latest, BBR.Beta *       BBR.bw_shortterm )
inflight_shortterm = max( inflight_latest, BBR.Beta * BBR.inflight_shortterm )
]]></artwork>
            <t>This logic can be represented as follows:</t>
            <artwork><![CDATA[
  /* Near start of ACK processing: */
  BBRUpdateLatestDeliverySignals():
    BBR.loss_round_start = 0
    BBR.bw_latest       = max(BBR.bw_latest,       RS.delivery_rate)
    BBR.inflight_latest = max(BBR.inflight_latest, RS.delivered)
    if (RS.prior_delivered >= BBR.loss_round_delivered)
      BBR.loss_round_delivered = C.delivered
      BBR.loss_round_start = 1

  /* Near end of ACK processing: */
  BBRAdvanceLatestDeliverySignals():
    if (BBR.loss_round_start)
      BBR.bw_latest       = RS.delivery_rate
      BBR.inflight_latest = RS.delivered

  BBRResetCongestionSignals():
    BBR.loss_in_round = 0
    BBR.bw_latest = 0
    BBR.inflight_latest = 0

  /* Update congestion state on every ACK */
  BBRUpdateCongestionSignals():
    BBRUpdateMaxBw()
    if (!BBR.loss_round_start)
      return  /* wait until end of round trip */
    BBRAdaptLowerBoundsFromCongestion()  /* once per round, adapt */
    BBR.loss_in_round = 0

  /* Once per round-trip respond to congestion */
  BBRAdaptLowerBoundsFromCongestion():
    if (BBRIsProbingBW())
      return
    if (BBR.loss_in_round)
      BBRInitLowerBounds()
      BBRLossLowerBounds()

  /* Handle the first congestion episode in this cycle */
  BBRInitLowerBounds():
    if (BBR.bw_shortterm == Infinity)
      BBR.bw_shortterm = BBR.max_bw
    if (BBR.inflight_shortterm == Infinity)
      BBR.inflight_shortterm = C.cwnd

  /* Adjust model once per round based on loss */
  BBRLossLowerBounds():
    BBR.bw_shortterm       = max(BBR.bw_latest,
                          BBR.Beta * BBR.bw_shortterm)
    BBR.inflight_shortterm = max(BBR.inflight_latest,
                          BBR.Beta * BBR.inflight_shortterm)

  BBRResetShortTermModel():
    BBR.bw_shortterm       = Infinity
    BBR.inflight_shortterm = Infinity

  BBRBoundBWForModel():
    BBR.bw = min(BBR.max_bw, BBR.bw_shortterm)

]]></artwork>
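            <t>As a non-normative illustration of this bound, with BBR.Beta of 0.7:
if BBR.bw_shortterm is 100 units and BBR.bw_latest is 80 units, Beta alone
would allow a cut to 70, but the max() keeps bw_shortterm at the measured 80;
conversely, an inflight_shortterm of 50 is cut to 35 when inflight_latest (30)
is lower still:</t>
            <artwork><![CDATA[
  # Non-normative sketch of BBRLossLowerBounds(), with BBR.Beta = 0.7
  BETA = 0.7

  def loss_lower_bounds(bw_shortterm, inflight_shortterm,
                        bw_latest, inflight_latest):
      # Never cut below the latest measured delivery process,
      # nor by more than a factor of Beta per round.
      return (max(bw_latest, BETA * bw_shortterm),
              max(inflight_latest, BETA * inflight_shortterm))
]]></artwork>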
          </section>
        </section>
      </section>
      <section anchor="updating-control-parameters">
        <name>Updating Control Parameters</name>
        <t>BBR uses three distinct but interrelated control parameters: pacing rate,
send quantum, and congestion window.</t>
        <section anchor="summary-of-control-behavior-in-the-state-machine">
          <name>Summary of Control Behavior in the State Machine</name>
          <t>The following table summarizes how BBR modulates the control parameters in
each state. In the table below, the semantics of the columns are as follows:</t>
          <ul spacing="normal">
            <li>
              <t>State: the state in the BBR state machine, as depicted in the "State
Transition Diagram" section above.</t>
            </li>
            <li>
              <t>Tactic: The tactic chosen from the "State Machine Tactics" in
<xref target="state-machine-tactics"/>: "accel" refers to acceleration, "decel" to
deceleration, and "cruise" to cruising.</t>
            </li>
            <li>
              <t>Pacing Gain: the value used for BBR.pacing_gain in the given state.</t>
            </li>
            <li>
              <t>Cwnd Gain: the value used for BBR.cwnd_gain in the given state.</t>
            </li>
            <li>
              <t>Rate Cap: the rate values applied as bounds on the BBR.max_bw value applied
to compute BBR.bw.</t>
            </li>
            <li>
              <t>Volume Cap: the volume values applied as bounds on the BBR.max_inflight value
to compute C.cwnd.</t>
            </li>
          </ul>
          <t>The control behavior can be summarized as follows. Upon processing each ACK,
BBR uses the values in the table below to compute BBR.bw in
BBRBoundBWForModel(), and C.cwnd in BBRBoundCwndForModel():</t>
          <artwork><![CDATA[
---------------+--------+--------+------+--------------+-----------------
State          | Tactic | Pacing | Cwnd | Rate         | Volume
               |        | Gain   | Gain | Cap          | Cap
---------------+--------+--------+------+--------------+-----------------
Startup        | accel  | 2.77   | 2    | N/A          | N/A
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
Drain          | decel  | 0.5    | 2    | bw_shortterm | inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_DOWN   | decel  | 0.90   | 2    | bw_shortterm | inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_CRUISE | cruise | 1.0    | 2    | bw_shortterm | 0.85*inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_REFILL | accel  | 1.0    | 2    |              | inflight_longterm
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
ProbeBW_UP     | accel  | 1.25   | 2.25 |              | inflight_longterm
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
ProbeRTT       | decel  | 1.0    | 0.5  | bw_shortterm | 0.85*inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
]]></artwork>
        </section>
        <section anchor="pacing-rate-bbrpacingrate">
          <name>Pacing Rate: C.pacing_rate</name>
          <t>To help match the packet-arrival rate to the bottleneck bandwidth available
to the flow, BBR paces data packets. Pacing enforces a maximum rate at which
BBR schedules quanta of packets for transmission.</t>
          <t>The sending host implements pacing by maintaining inter-quantum spacing at
the time each packet is scheduled for departure, calculating the next departure
time for a packet for a given flow (C.next_send_time) as a function
of the most recent packet size and the current pacing rate, as follows:</t>
          <artwork><![CDATA[
  C.next_send_time = max(Now(), C.next_send_time)
  P.send_time = C.next_send_time
  pacing_delay = packet.size / C.pacing_rate
  C.next_send_time = C.next_send_time + pacing_delay
]]></artwork>
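          <t>The departure-time computation above can be sketched, non-normatively,
in Python (time in seconds, rate in bytes per second; names are assumed for
illustration):</t>
          <artwork><![CDATA[
  def pace_next_departure(now, next_send_time,
                          packet_size, pacing_rate):
      # The packet departs at the later of now and its scheduled
      # time; the next departure is pushed out by the packet's
      # serialization time at the pacing rate.
      send_time = max(now, next_send_time)
      return send_time, send_time + packet_size / pacing_rate
]]></artwork>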
          <t>To adapt to the bottleneck, in general BBR sets the pacing rate to be
proportional to bw, with a dynamic gain, or scaling factor of proportionality,
called pacing_gain.</t>
          <t>When a BBR flow starts it has no bw estimate (bw is 0). So in this case it
sets an initial pacing rate based on the transport sender implementation's
initial congestion window ("C.InitialCwnd", e.g. from <xref target="RFC6928"/>), the
initial C.srtt after the first non-zero RTT sample, and the initial pacing_gain:</t>
          <artwork><![CDATA[
  BBRInitPacingRate():
    nominal_bandwidth = C.InitialCwnd / (C.srtt ? C.srtt : 1ms)
    C.pacing_rate =  BBR.StartupPacingGain * nominal_bandwidth
]]></artwork>
          <t>After initialization, on each data ACK BBR updates its pacing rate to be
proportional to bw, as long as it estimates that it has filled the pipe
(BBR.full_bw_reached is true; see the "Startup" section for details), or
doing so increases the pacing rate. Limiting the pacing rate updates in this way
helps the connection probe robustly for bandwidth until it estimates it has
reached its full available bandwidth ("filled the pipe"). In particular,
this prevents the pacing rate from being reduced when the connection has only
seen application-limited bandwidth samples. BBR updates the pacing rate on each
ACK by executing the BBRSetPacingRate() step as follows:</t>
          <artwork><![CDATA[
  BBRSetPacingRateWithGain(pacing_gain):
    rate = pacing_gain * BBR.bw * (100 - BBR.PacingMarginPercent) / 100
    if (BBR.full_bw_reached || rate > C.pacing_rate)
      C.pacing_rate = rate

  BBRSetPacingRate():
    BBRSetPacingRateWithGain(BBR.pacing_gain)
]]></artwork>
          <t>To help drive the network toward lower queues and low latency while maintaining
high utilization, the BBR.PacingMarginPercent constant of 1 aims to cause
BBR to pace at 1% below the bw, on average.</t>
        </section>
        <section anchor="send-quantum-bbrsendquantum">
          <name>Send Quantum: C.send_quantum</name>
          <t>In order to amortize per-packet overheads involved in the sending process (host
CPU, NIC processing, and interrupt processing delays), high-performance
transport sender implementations (e.g., Linux TCP) often schedule an aggregate
containing multiple packets (multiple C.SMSS) worth of data as a single quantum
(using TSO, GSO, or other offload mechanisms). The BBR congestion control
algorithm makes this control decision explicitly, dynamically calculating a
quantum control parameter that specifies the maximum size of these transmission
aggregates. This decision is based on a trade-off:</t>
          <ul spacing="normal">
            <li>
              <t>A smaller quantum is preferred at lower data rates because it results in
shorter packet bursts, shorter queues, lower queueing delays, and lower rates
of packet loss.</t>
            </li>
            <li>
              <t>A bigger quantum can be required at higher data rates because it results
in lower CPU overheads at the sending and receiving hosts, which can ship larger
amounts of data with a single trip through the networking stack.</t>
            </li>
          </ul>
          <t>On each ACK, BBR runs BBRSetSendQuantum() to update C.send_quantum as
follows:</t>
          <artwork><![CDATA[
  BBRSetSendQuantum():
    C.send_quantum = C.pacing_rate * 1ms
    C.send_quantum = min(C.send_quantum, 64 KBytes)
    C.send_quantum = max(C.send_quantum, 2 * C.SMSS)
]]></artwork>
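          <t>For example, assuming a C.SMSS of 1460 bytes: a 1 MByte/sec pacing rate
yields the 2*C.SMSS floor (2920 bytes), 10 MBytes/sec yields 10000 bytes
(1 ms of data), and 100 MBytes/sec hits the 64 KByte ceiling. A non-normative
sketch:</t>
          <artwork><![CDATA[
  # Non-normative sketch of BBRSetSendQuantum(), assuming
  # C.SMSS = 1460 bytes for illustration.
  SMSS = 1460

  def send_quantum(pacing_rate):  # pacing_rate in bytes per second
      q = pacing_rate / 1000      # C.pacing_rate * 1ms
      q = min(q, 64 * 1024)
      return max(q, 2 * SMSS)
]]></artwork>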
          <t>A BBR implementation MAY use alternate approaches to select a C.send_quantum,
as appropriate for the CPU overheads anticipated for senders and receivers,
and buffering considerations anticipated in the network path. However, for
the sake of the network and other users, a BBR implementation SHOULD attempt
to use the smallest feasible quanta.</t>
        </section>
        <section anchor="congestion-window">
          <name>Congestion Window</name>
          <t>The congestion window (C.cwnd) controls the maximum C.inflight:
the maximum volume of in-flight data
that the algorithm estimates is appropriate for matching the current
network path delivery process, given all available signals in the model,
at any time scale. BBR adapts C.cwnd based on its model of the network
path and the state machine's decisions about how to probe that path.</t>
          <t>By default, BBR grows C.cwnd to meet its BBR.max_inflight, which models
what's required for achieving full throughput, and as such is scaled to adapt
to the estimated BDP computed from its path model. But BBR's selection of C.cwnd
is designed to explicitly trade off among competing considerations that
dynamically adapt to various conditions. So in loss recovery BBR more
conservatively adjusts its sending behavior based on more recent delivery
samples, and if BBR needs to re-probe the current BBR.min_rtt of the path then
it cuts C.cwnd accordingly. The following sections describe the various
considerations that impact C.cwnd.</t>
          <section anchor="initial-cwnd">
            <name>Initial cwnd</name>
            <t>BBR generally uses measurements to build a model of the network path and
then adapts control decisions to the path based on that model. As such, the
selection of the initial cwnd is considered to be outside the scope of the
BBR algorithm, since at initialization there are no measurements yet upon
which BBR can operate. Thus, at initialization, BBR uses the transport sender
implementation's initial congestion window (e.g. from <xref target="RFC6928"/> for TCP).</t>
          </section>
          <section anchor="computing-bbrmaxinflight">
            <name>Computing BBR.max_inflight</name>
            <t>BBR.max_inflight is the upper bound on the volume of data BBR allows in
flight. This bound is always in place, and dominates when all other
considerations have been satisfied: the flow is not in loss recovery, does not
need to probe BBR.min_rtt, and has accumulated confidence in its model
parameters by receiving enough ACKs to gradually grow the current C.cwnd to meet
the BBR.max_inflight.</t>
            <t>On each ACK, BBR calculates the BBR.max_inflight in BBRUpdateMaxInflight()
as follows:</t>
            <artwork><![CDATA[
  BBRBDPMultiple(gain):
    if (BBR.min_rtt == Infinity)
      return C.InitialCwnd /* no valid RTT samples yet */
    BBR.bdp = BBR.bw * BBR.min_rtt
    return gain * BBR.bdp

  BBRQuantizationBudget(inflight_cap):
    BBRUpdateOffloadBudget()
    inflight_cap = max(inflight_cap, BBR.offload_budget)
    inflight_cap = max(inflight_cap, BBR.MinPipeCwnd)
    if (BBR.state == ProbeBW_UP)
      inflight_cap += 2*C.SMSS
    return inflight_cap

  BBRInflight(gain):
    inflight_cap = BBRBDPMultiple(gain)
    return BBRQuantizationBudget(inflight_cap)

  BBRUpdateMaxInflight():
    inflight_cap = BBRBDPMultiple(BBR.cwnd_gain)
    inflight_cap += BBR.extra_acked
    BBR.max_inflight = BBRQuantizationBudget(inflight_cap)
]]></artwork>
            <t>The estimated BDP term (BBR.bdp) tries to allow enough packets in flight to fully
utilize the estimated BDP of the path, by allowing the flow to send at BBR.bw
for a duration of BBR.min_rtt. Scaling up the BDP by BBR.cwnd_gain bounds
in-flight data to a small multiple of the BDP, to handle common network and
receiver behavior, such as delayed, stretched, or aggregated ACKs <xref target="A15"/>.
The quantization budget term allows enough quanta in flight on the sending and
receiving hosts to reach high throughput even in environments using
offload mechanisms.</t>
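            <t>For example, with assumed values of BBR.bw = 12500 bytes/ms
(100 Mbit/s) and BBR.min_rtt = 40 ms, the estimated BDP is 500000 bytes, and
a cwnd_gain of 2 yields a bound of 1000000 bytes. A non-normative sketch of
the BDP multiple:</t>
            <artwork><![CDATA[
  # Non-normative sketch of BBRBDPMultiple() once a valid
  # BBR.min_rtt exists (bw in bytes per ms, min_rtt in ms).
  def bdp_multiple(bw, min_rtt, gain):
      bdp = bw * min_rtt  # BBR.bdp
      return gain * bdp
]]></artwork>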
          </section>
          <section anchor="minimum-cwnd-for-pipelining">
            <name>Minimum cwnd for Pipelining</name>
            <t>For BBR.max_inflight, BBR imposes a floor of BBR.MinPipeCwnd (4 packets, i.e.
4 * C.SMSS). This floor helps ensure that even at very low BDPs, and with
a transport like TCP where a receiver may ACK only every alternate C.SMSS of
data, there are enough packets in flight to maintain full pipelining. In
particular BBR tries to allow at least 2 data packets in flight and ACKs
for at least 2 data packets on the path from receiver to sender.</t>
          </section>
          <section anchor="modulating-cwnd-in-loss-recovery">
            <name>Modulating cwnd in Loss Recovery</name>
            <t>BBR interprets loss as a hint that there may be recent changes in path behavior
that are not yet fully reflected in its model of the path, and thus it needs
to be more conservative.</t>
            <t>Upon a retransmission timeout (RTO), BBR conservatively reduces C.cwnd to a
value that will allow 1 C.SMSS to be transmitted. Then BBR gradually increases
C.cwnd using the normal approach outlined below in "cwnd Adjustment Mechanism"
in <xref target="cwnd-adjustment-mechanism"/>.</t>
            <t>When a BBR sender is in Fast Recovery it uses the response described in
"Updating the Model Upon Packet Loss" in
<xref target="updating-the-model-upon-packet-loss"/>.</t>
            <t>When BBR exits loss recovery it restores C.cwnd to the "last known good"
value that C.cwnd held before entering recovery. This applies equally whether
the flow exits loss recovery because it finishes repairing all losses or
because it executes an "undo" event after inferring that a loss recovery
event was spurious.</t>
            <t>The high-level design for updating C.cwnd in loss recovery is as follows:</t>
            <t>Upon retransmission timeout (RTO):</t>
            <artwork><![CDATA[
  BBROnEnterRTO():
    BBRSaveCwnd()
    C.cwnd = C.inflight + 1
]]></artwork>
            <t>Upon entering Fast Recovery:</t>
            <artwork><![CDATA[
  BBROnEnterFastRecovery():
    BBRSaveCwnd()
]]></artwork>
            <t>Upon exiting loss recovery (RTO recovery or Fast Recovery), either by repairing
all losses or undoing recovery, BBR restores the best-known cwnd value we
had upon entering loss recovery:</t>
            <artwork><![CDATA[
  BBRRestoreCwnd()
]]></artwork>
            <t>Note that exiting loss recovery happens during ACK processing, and at the
end of ACK processing BBRBoundCwndForModel() will bound the cwnd based on
the current model parameters. Thus the cwnd and pacing rate after loss recovery
will generally be smaller than the values entering loss recovery.</t>
            <t>The BBRSaveCwnd() and BBRRestoreCwnd() helpers remember and restore
the last-known good C.cwnd (the latest C.cwnd unmodulated by loss recovery or
ProbeRTT), and are defined as follows:</t>
            <artwork><![CDATA[
  BBRSaveCwnd():
    if (!InLossRecovery() and BBR.state != ProbeRTT)
      BBR.prior_cwnd = C.cwnd
    else
      BBR.prior_cwnd = max(BBR.prior_cwnd, C.cwnd)

  BBRRestoreCwnd():
    C.cwnd = max(C.cwnd, BBR.prior_cwnd)
]]></artwork>
          </section>
          <section anchor="modulating-cwnd-in-probertt">
            <name>Modulating cwnd in ProbeRTT</name>
            <t>If BBR decides it needs to enter the ProbeRTT state (see the "ProbeRTT" section
below), its goal is to quickly reduce C.inflight and drain
the bottleneck queue, thereby allowing measurement of BBR.min_rtt. To implement
this mode, BBR bounds C.cwnd to BBR.MinPipeCwnd, the minimal value that
allows pipelining (see the "Minimum cwnd for Pipelining" section, above):</t>
            <artwork><![CDATA[
  BBRProbeRTTCwnd():
    probe_rtt_cwnd = BBRBDPMultiple(BBR.ProbeRTTCwndGain)
    probe_rtt_cwnd = max(probe_rtt_cwnd, BBR.MinPipeCwnd)
    return probe_rtt_cwnd

  BBRBoundCwndForProbeRTT():
    if (BBR.state == ProbeRTT)
      C.cwnd = min(C.cwnd, BBRProbeRTTCwnd())
]]></artwork>
          </section>
          <section anchor="cwnd-adjustment-mechanism">
            <name>cwnd Adjustment Mechanism</name>
            <t>The network path and traffic traveling over it can make sudden dramatic
changes.  To adapt to these changes smoothly and robustly, and reduce packet
losses in such cases, BBR uses a conservative strategy. When C.cwnd is above the
BBR.max_inflight derived from BBR's path model, BBR cuts C.cwnd immediately
to the BBR.max_inflight. When C.cwnd is below BBR.max_inflight, BBR raises
C.cwnd gradually and cautiously, increasing C.cwnd by no more than the amount of
data acknowledged (cumulatively or selectively) upon each ACK.</t>
            <t>Specifically, on each ACK that newly acknowledges "RS.newly_acked" packets,
BBR runs the following BBRSetCwnd() steps to update C.cwnd:</t>
            <artwork><![CDATA[
  BBRSetCwnd():
    BBRUpdateMaxInflight()
    if (BBR.full_bw_reached)
      C.cwnd = min(C.cwnd + RS.newly_acked, BBR.max_inflight)
    else if (C.cwnd < BBR.max_inflight or C.delivered < C.InitialCwnd)
      C.cwnd = C.cwnd + RS.newly_acked
    C.cwnd = max(C.cwnd, BBR.MinPipeCwnd)
    BBRBoundCwndForProbeRTT()
    BBRBoundCwndForModel()
]]></artwork>
            <t>There are several considerations embodied in the logic above. If BBR has
measured enough samples to achieve confidence that it has filled the pipe
(see the description of BBR.full_bw_reached in the "Startup" section below), then
it increases C.cwnd based on the data delivered, while bounding
C.cwnd to be no larger than the BBR.max_inflight adapted to the estimated
BDP. Otherwise, if C.cwnd is below BBR.max_inflight, or the sender
has had so little data delivered (less than C.InitialCwnd) that it does not
yet judge its BBR.max_bw estimate and BBR.max_inflight to be useful, then BBR increases
C.cwnd without bounding it below BBR.max_inflight. Finally, BBR imposes
a floor of BBR.MinPipeCwnd in order to allow pipelining even with small BDPs
(see the "Minimum cwnd for Pipelining" section, above).</t>
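            <t>For illustration, the two branches of BBRSetCwnd() behave as
follows in a hypothetical scenario (values in packets, illustrative only):</t>
            <artwork><![CDATA[
  /* Case 1: BBR.full_bw_reached is true;
   *   C.cwnd = 98, BBR.max_inflight = 100, RS.newly_acked = 5:
   *     C.cwnd = min(98 + 5, 100) = 100
   * Case 2: BBR.full_bw_reached is false;
   *   C.cwnd = 98 < BBR.max_inflight = 100, RS.newly_acked = 5:
   *     C.cwnd = 98 + 5 = 103  (not bounded by BBR.max_inflight)
   * In both cases C.cwnd is then floored at BBR.MinPipeCwnd and
   * bounded by BBRBoundCwndForProbeRTT() and BBRBoundCwndForModel().
   */
]]></artwork>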
          </section>
          <section anchor="bounding-cwnd-based-on-recent-congestion">
            <name>Bounding cwnd Based on Recent Congestion</name>
            <t>Finally, BBR bounds C.cwnd based on recent congestion, as outlined in the
"Volume Cap" column of the table in the "Summary of Control Behavior in the
State Machine" section:</t>
            <artwork><![CDATA[
  BBRBoundCwndForModel():
    cap = Infinity
    if (IsInAProbeBWState() and
        BBR.state != ProbeBW_CRUISE)
      cap = BBR.inflight_longterm
    else if (BBR.state == ProbeRTT or
             BBR.state == ProbeBW_CRUISE)
      cap = BBRInflightWithHeadroom()

    /* apply BBR.inflight_shortterm (possibly infinite): */
    cap = min(cap, BBR.inflight_shortterm)
    cap = max(cap, BBR.MinPipeCwnd)
    C.cwnd = min(C.cwnd, cap)
]]></artwork>
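            <t>For illustration, a hypothetical evaluation of
BBRBoundCwndForModel() in ProbeBW_CRUISE (values in packets, illustrative
only):</t>
            <artwork><![CDATA[
  /* BBR.state = ProbeBW_CRUISE, C.cwnd = 100,
   * BBRInflightWithHeadroom() = 85,
   * BBR.inflight_shortterm = 90, BBR.MinPipeCwnd = 4:
   *   cap    = 85              (ProbeBW_CRUISE branch)
   *   cap    = min(85, 90) = 85
   *   cap    = max(85, 4)  = 85
   *   C.cwnd = min(100, 85) = 85
   */
]]></artwork>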
          </section>
        </section>
      </section>
    </section>
    <section anchor="implementation-status">
      <name>Implementation Status</name>
      <t>This section records the status of known implementations of the algorithm
defined by this specification at the time of posting of this Internet-Draft,
and is based on a proposal described in <xref target="RFC7942"/>.
The description of implementations in this section is intended to assist
the IETF in its decision processes in progressing drafts to RFCs. Please
note that the listing of any individual implementation here does not imply
endorsement by the IETF. Furthermore, no effort has been spent to verify
the information presented here that was supplied by IETF contributors. This
is not intended as, and must not be construed to be, a catalog of available
implementations or their features.  Readers are advised to note that other
implementations may exist.</t>
      <t>According to <xref target="RFC7942"/>, "this will allow reviewers and working groups to
assign due consideration to documents that have the benefit of running code,
which may serve as evidence of valuable experimentation and feedback that have
made the implemented protocols more mature.  It is up to the individual working
groups to use this information as they see fit".</t>
      <t>As of the time of writing, the following implementations of BBRv3 have been
publicly released:</t>
      <ul spacing="normal">
        <li>
          <t>Linux TCP
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://github.com/google/bbr/blob/v3/README.md</t>
                </li>
                <li>
                  <t>https://github.com/google/bbr/blob/v3/net/ipv4/tcp_bbr.c</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: dual-licensed: GPLv2 / BSD</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: November 22, 2023</t>
            </li>
          </ul>
        </li>
        <li>
          <t>QUIC
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://cs.chromium.org/chromium/src/net/third_party/quiche/src/quic/core/congestion_control/bbr2_sender.cc</t>
                </li>
                <li>
                  <t>https://cs.chromium.org/chromium/src/net/third_party/quiche/src/quic/core/congestion_control/bbr2_sender.h</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: October 21, 2021</t>
            </li>
          </ul>
        </li>
      </ul>
      <t>As of the time of writing, the following implementations of the delivery
rate sampling algorithm have been publicly released:</t>
      <ul spacing="normal">
        <li>
          <t>Linux TCP
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>GPLv2 license: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/tcp_rate.c</t>
                </li>
                <li>
                  <t>BSD-style license: https://groups.google.com/d/msg/bbr-dev/X0LbDptlOzo/EVgkRjVHBQAJ</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: dual-licensed: GPLv2 / BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: September 24, 2021</t>
            </li>
          </ul>
        </li>
        <li>
          <t>QUIC
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://github.com/google/quiche/blob/main/quiche/quic/core/congestion_control/bandwidth_sampler.cc</t>
                </li>
                <li>
                  <t>https://github.com/google/quiche/blob/main/quiche/quic/core/congestion_control/bandwidth_sampler.h</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: October 5, 2021</t>
            </li>
          </ul>
        </li>
      </ul>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>This proposal makes no changes to the underlying security of transport protocols
or congestion control algorithms. BBR shares the same security considerations
as the existing standard congestion control algorithm <xref target="RFC5681"/>.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions. Here we are using that phrase, suggested
by <xref target="RFC8126"/>, because BBR does not modify or extend the wire format of
any network protocol, nor does it add new dependencies on assigned numbers.
BBR involves only a change to the congestion control algorithm of a transport
sender, and does not involve changes in the network, the receiver, or any network
protocol.</t>
      <t>Note to RFC Editor: this section may be removed on publication as an RFC.</t>
    </section>
    <section anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors are grateful to Len Kleinrock for his work on the theory underlying
congestion control. We are indebted to Larry Brakmo for pioneering work on
the Vegas <xref target="BP95"/> and New Vegas <xref target="B15"/> congestion control algorithms,
which presaged many elements of BBR, and for Larry's advice and guidance during
BBR's early development. The authors would also like to thank Kevin Yang,
Priyaranjan Jha, Yousuk Seung, and Luke Hsiao for their work on TCP BBR; Jana Iyengar,
Victor Vasiliev, and Bin Wu for their work on QUIC BBR; and Matt Mathis for his
research work on the BBR algorithm and its implications <xref target="MM19"/>. We would also
like to thank C. Stephen Gunn, Eric Dumazet, Nandita Dukkipati, Pawel Jurczyk,
Biren Roy, David Wetherall, Amin Vahdat, Leonidas Kontothanassis,
and the YouTube, google.com, Bandwidth Enforcer, and Google SRE teams for
their invaluable help and support. We would like to thank Randall R. Stewart,
Jim Warner, Loganaden Velvindron, Hiren Panchasara, Adrian Zapletal, Christian
Huitema, Bao Zheng, Jonathan Morton, Matt Olson, Junho Choi, Carsten Bormann,
Pouria Mousavizadeh Tehrani, Amanda Baber, Frédéric Lécaille,
and Tatsuhiro Tsujikawa
for feedback, suggestions, and edits on earlier versions of this document.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC9293">
          <front>
            <title>Transmission Control Protocol (TCP)</title>
            <author fullname="W. Eddy" initials="W." role="editor" surname="Eddy"/>
            <date month="August" year="2022"/>
            <abstract>
              <t>This document specifies the Transmission Control Protocol (TCP). TCP is an important transport-layer protocol in the Internet protocol stack, and it has continuously evolved over decades of use and growth of the Internet. Over this time, a number of changes have been made to TCP as it was specified in RFC 793, though these have only been documented in a piecemeal fashion. This document collects and brings those changes together with the protocol specification from RFC 793. This document obsoletes RFC 793, as well as RFCs 879, 2873, 6093, 6429, 6528, and 6691 that updated parts of RFC 793. It updates RFCs 1011 and 1122, and it should be considered as a replacement for the portions of those documents dealing with TCP requirements. It also updates RFC 5961 by adding a small clarification in reset handling while in the SYN-RECEIVED state. The TCP header control bits from RFC 793 have also been updated based on RFC 3168.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="7"/>
          <seriesInfo name="RFC" value="9293"/>
          <seriesInfo name="DOI" value="10.17487/RFC9293"/>
        </reference>
        <reference anchor="RFC2018">
          <front>
            <title>TCP Selective Acknowledgment Options</title>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <author fullname="J. Mahdavi" initials="J." surname="Mahdavi"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="A. Romanow" initials="A." surname="Romanow"/>
            <date month="October" year="1996"/>
            <abstract>
              <t>This memo proposes an implementation of SACK and discusses its performance and related issues. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2018"/>
          <seriesInfo name="DOI" value="10.17487/RFC2018"/>
        </reference>
        <reference anchor="RFC7323">
          <front>
            <title>TCP Extensions for High Performance</title>
            <author fullname="D. Borman" initials="D." surname="Borman"/>
            <author fullname="B. Braden" initials="B." surname="Braden"/>
            <author fullname="V. Jacobson" initials="V." surname="Jacobson"/>
            <author fullname="R. Scheffenegger" initials="R." role="editor" surname="Scheffenegger"/>
            <date month="September" year="2014"/>
            <abstract>
              <t>This document specifies a set of TCP extensions to improve performance over paths with a large bandwidth * delay product and to provide reliable operation over very high-speed paths. It defines the TCP Window Scale (WS) option and the TCP Timestamps (TS) option and their semantics. The Window Scale option is used to support larger receive windows, while the Timestamps option can be used for at least two distinct mechanisms, Protection Against Wrapped Sequences (PAWS) and Round-Trip Time Measurement (RTTM), that are also described herein.</t>
              <t>This document obsoletes RFC 1323 and describes changes from it.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7323"/>
          <seriesInfo name="DOI" value="10.17487/RFC7323"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8126">
          <front>
            <title>Guidelines for Writing an IANA Considerations Section in RFCs</title>
            <author fullname="M. Cotton" initials="M." surname="Cotton"/>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <author fullname="T. Narten" initials="T." surname="Narten"/>
            <date month="June" year="2017"/>
            <abstract>
              <t>Many protocols make use of points of extensibility that use constants to identify various protocol parameters. To ensure that the values in these fields do not have conflicting uses and to promote interoperability, their allocations are often coordinated by a central record keeper. For IETF protocols, that role is filled by the Internet Assigned Numbers Authority (IANA).</t>
              <t>To make assignments in a given registry prudently, guidance describing the conditions under which new values should be assigned, as well as when and how modifications to existing values can be made, is needed. This document defines a framework for the documentation of these guidelines by specification authors, in order to assure that the provided guidance for the IANA Considerations is clear and addresses the various issues that are likely in the operation of a registry.</t>
              <t>This is the third edition of this document; it obsoletes RFC 5226.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="26"/>
          <seriesInfo name="RFC" value="8126"/>
          <seriesInfo name="DOI" value="10.17487/RFC8126"/>
        </reference>
        <reference anchor="RFC6298">
          <front>
            <title>Computing TCP's Retransmission Timer</title>
            <author fullname="V. Paxson" initials="V." surname="Paxson"/>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="J. Chu" initials="J." surname="Chu"/>
            <author fullname="M. Sargent" initials="M." surname="Sargent"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>This document defines the standard algorithm that Transmission Control Protocol (TCP) senders are required to use to compute and manage their retransmission timer. It expands on the discussion in Section 4.2.3.1 of RFC 1122 and upgrades the requirement of supporting the algorithm from a SHOULD to a MUST. This document obsoletes RFC 2988. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6298"/>
          <seriesInfo name="DOI" value="10.17487/RFC6298"/>
        </reference>
        <reference anchor="RFC5681">
          <front>
            <title>TCP Congestion Control</title>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="V. Paxson" initials="V." surname="Paxson"/>
            <author fullname="E. Blanton" initials="E." surname="Blanton"/>
            <date month="September" year="2009"/>
            <abstract>
              <t>This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods. This document obsoletes RFC 2581. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5681"/>
          <seriesInfo name="DOI" value="10.17487/RFC5681"/>
        </reference>
        <reference anchor="RFC7942">
          <front>
            <title>Improving Awareness of Running Code: The Implementation Status Section</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
            <abstract>
              <t>This document describes a simple process that allows authors of Internet-Drafts to record the status of known implementations by including an Implementation Status section. This will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature.</t>
              <t>This process is not mandatory. Authors of Internet-Drafts are encouraged to consider using the process for their documents, and working groups are invited to think about applying the process to all of their protocol specifications. This document obsoletes RFC 6982, advancing it to a Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="205"/>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>
        <reference anchor="RFC9438">
          <front>
            <title>CUBIC for Fast and Long-Distance Networks</title>
            <author fullname="L. Xu" initials="L." surname="Xu"/>
            <author fullname="S. Ha" initials="S." surname="Ha"/>
            <author fullname="I. Rhee" initials="I." surname="Rhee"/>
            <author fullname="V. Goel" initials="V." surname="Goel"/>
            <author fullname="L. Eggert" initials="L." role="editor" surname="Eggert"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>CUBIC is a standard TCP congestion control algorithm that uses a cubic function instead of a linear congestion window increase function to improve scalability and stability over fast and long-distance networks. CUBIC has been adopted as the default TCP congestion control algorithm by the Linux, Windows, and Apple stacks.</t>
              <t>This document updates the specification of CUBIC to include algorithmic improvements based on these implementations and recent academic work. Based on the extensive deployment experience with CUBIC, this document also moves the specification to the Standards Track and obsoletes RFC 8312. This document also updates RFC 5681, to allow for CUBIC's occasionally more aggressive sending behavior.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9438"/>
          <seriesInfo name="DOI" value="10.17487/RFC9438"/>
        </reference>
        <reference anchor="RFC8985">
          <front>
            <title>The RACK-TLP Loss Detection Algorithm for TCP</title>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <author fullname="N. Cardwell" initials="N." surname="Cardwell"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="P. Jha" initials="P." surname="Jha"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>This document presents the RACK-TLP loss detection algorithm for TCP. RACK-TLP uses per-segment transmit timestamps and selective acknowledgments (SACKs) and has two parts. Recent Acknowledgment (RACK) starts fast recovery quickly using time-based inferences derived from acknowledgment (ACK) feedback, and Tail Loss Probe (TLP) leverages RACK and sends a probe packet to trigger ACK feedback to avoid retransmission timeout (RTO) events. Compared to the widely used duplicate acknowledgment (DupAck) threshold approach, RACK-TLP detects losses more efficiently when there are application-limited flights of data, lost retransmissions, or data packet reordering events. It is intended to be an alternative to the DupAck threshold approach.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8985"/>
          <seriesInfo name="DOI" value="10.17487/RFC8985"/>
        </reference>
        <reference anchor="RFC9000">
          <front>
            <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9000"/>
          <seriesInfo name="DOI" value="10.17487/RFC9000"/>
        </reference>
        <reference anchor="RFC4340">
          <front>
            <title>Datagram Congestion Control Protocol (DCCP)</title>
            <author fullname="E. Kohler" initials="E." surname="Kohler"/>
            <author fullname="M. Handley" initials="M." surname="Handley"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <date month="March" year="2006"/>
            <abstract>
              <t>The Datagram Congestion Control Protocol (DCCP) is a transport protocol that provides bidirectional unicast connections of congestion-controlled unreliable datagrams. DCCP is suitable for applications that transfer fairly large amounts of data and that can benefit from control over the tradeoff between timeliness and reliability. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4340"/>
          <seriesInfo name="DOI" value="10.17487/RFC4340"/>
        </reference>
        <reference anchor="RFC6928">
          <front>
            <title>Increasing TCP's Initial Window</title>
            <author fullname="J. Chu" initials="J." surname="Chu"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <date month="April" year="2013"/>
            <abstract>
              <t>This document proposes an experiment to increase the permitted TCP initial window (IW) from between 2 and 4 segments, as specified in RFC 3390, to 10 segments with a fallback to the existing recommendation when performance issues are detected. It discusses the motivation behind the increase, the advantages and disadvantages of the higher initial window, and presents results from several large-scale experiments showing that the higher initial window improves the overall performance of many web services without resulting in a congestion collapse. The document closes with a discussion of usage and deployment for further experimental purposes recommended by the IETF TCP Maintenance and Minor Extensions (TCPM) working group.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6928"/>
          <seriesInfo name="DOI" value="10.17487/RFC6928"/>
        </reference>
        <reference anchor="RFC6675">
          <front>
            <title>A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP</title>
            <author fullname="E. Blanton" initials="E." surname="Blanton"/>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="L. Wang" initials="L." surname="Wang"/>
            <author fullname="I. Jarvinen" initials="I." surname="Jarvinen"/>
            <author fullname="M. Kojo" initials="M." surname="Kojo"/>
            <author fullname="Y. Nishida" initials="Y." surname="Nishida"/>
            <date month="August" year="2012"/>
            <abstract>
              <t>This document presents a conservative loss recovery algorithm for TCP that is based on the use of the selective acknowledgment (SACK) TCP option. The algorithm presented in this document conforms to the spirit of the current congestion control specification (RFC 5681), but allows TCP senders to recover more effectively when multiple segments are lost from a single flight of data. This document obsoletes RFC 3517 and describes changes from it. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6675"/>
          <seriesInfo name="DOI" value="10.17487/RFC6675"/>
        </reference>
        <reference anchor="RFC6937">
          <front>
            <title>Proportional Rate Reduction for TCP</title>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <date month="May" year="2013"/>
            <abstract>
              <t>This document describes an experimental Proportional Rate Reduction (PRR) algorithm as an alternative to the widely deployed Fast Recovery and Rate-Halving algorithms. These algorithms determine the amount of data sent by TCP during loss recovery. PRR minimizes excess window adjustments, and the actual window size at the end of recovery will be as close as possible to the ssthresh, as determined by the congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6937"/>
          <seriesInfo name="DOI" value="10.17487/RFC6937"/>
        </reference>
        <reference anchor="RFC9002">
          <front>
            <title>QUIC Loss Detection and Congestion Control</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="I. Swett" initials="I." role="editor" surname="Swett"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document describes loss detection and congestion control mechanisms for QUIC.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9002"/>
          <seriesInfo name="DOI" value="10.17487/RFC9002"/>
        </reference>
        <reference anchor="RFC3168">
          <front>
            <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
            <author fullname="K. Ramakrishnan" initials="K." surname="Ramakrishnan"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="September" year="2001"/>
            <abstract>
              <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="3168"/>
          <seriesInfo name="DOI" value="10.17487/RFC3168"/>
        </reference>
        <reference anchor="RFC9330">
          <front>
            <title>Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture</title>
            <author fullname="B. Briscoe" initials="B." role="editor" surname="Briscoe"/>
            <author fullname="K. De Schepper" initials="K." surname="De Schepper"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="G. White" initials="G." surname="White"/>
            <date month="January" year="2023"/>
            <abstract>
              <t>This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.</t>
              <t>The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9330"/>
          <seriesInfo name="DOI" value="10.17487/RFC9330"/>
        </reference>
        <reference anchor="RFC8511">
          <front>
            <title>TCP Alternative Backoff with ECN (ABE)</title>
            <author fullname="N. Khademi" initials="N." surname="Khademi"/>
            <author fullname="M. Welzl" initials="M." surname="Welzl"/>
            <author fullname="G. Armitage" initials="G." surname="Armitage"/>
            <author fullname="G. Fairhurst" initials="G." surname="Fairhurst"/>
            <date month="December" year="2018"/>
            <abstract>
              <t>Active Queue Management (AQM) mechanisms allow for burst tolerance while enforcing short queues to minimise the time that packets spend enqueued at a bottleneck. This can cause noticeable performance degradation for TCP connections traversing such a bottleneck, especially if there are only a few flows or their bandwidth-delay product (BDP) is large. The reception of a Congestion Experienced (CE) Explicit Congestion Notification (ECN) mark indicates that an AQM mechanism is used at the bottleneck, and the bottleneck network queue is therefore likely to be short. Feedback of this signal allows the TCP sender-side ECN reaction in congestion avoidance to reduce the Congestion Window (cwnd) by a smaller amount than the congestion control algorithm's reaction to inferred packet loss. Therefore, this specification defines an experimental change to the TCP reaction specified in RFC 3168, as permitted by RFC 8311.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8511"/>
          <seriesInfo name="DOI" value="10.17487/RFC8511"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="CCGHJ16" target="http://queue.acm.org/detail.cfm?id=3022184">
          <front>
            <title>BBR: Congestion-Based Congestion Control</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="C." surname="Gunn" fullname="C. Stephen Gunn">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2016" month="October"/>
          </front>
          <seriesInfo name="ACM Queue" value="Oct 2016"/>
        </reference>
        <reference anchor="CCGHJ17" target="https://cacm.acm.org/magazines/2017/2/212428-bbr-congestion-based-congestion-control/pdf">
          <front>
            <title>BBR: Congestion-Based Congestion Control</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="C." surname="Gunn" fullname="C. Stephen Gunn">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2017" month="February"/>
          </front>
          <seriesInfo name="Communications of the ACM" value="Feb 2017"/>
        </reference>
        <reference anchor="MM19">
          <front>
            <title>Deprecating the TCP Macroscopic Model</title>
            <author initials="M." surname="Mathis" fullname="M. Mathis">
              <organization/>
            </author>
            <author initials="J." surname="Mahdavi" fullname="J. Mahdavi">
              <organization/>
            </author>
            <date year="2019" month="October"/>
          </front>
          <seriesInfo name="Computer Communication Review, vol. 49, no. 5, pp. 63-68" value=""/>
        </reference>
        <reference anchor="BBRStartupCwndGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_startup_cwnd_gain.pdf">
          <front>
            <title>BBR Startup cwnd Gain: a Derivation</title>
            <author initials="I." surname="Swett" fullname="Ian Swett">
              <organization/>
            </author>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2018" month="July"/>
          </front>
        </reference>
        <reference anchor="BBRStartupPacingGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_startup_gain.pdf">
          <front>
            <title>BBR Startup Pacing Gain: a Derivation</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2018" month="June"/>
          </front>
        </reference>
        <reference anchor="BBRDrainPacingGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_drain_gain.pdf">
          <front>
            <title>BBR Drain Pacing Gain: a Derivation</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2021" month="September"/>
          </front>
        </reference>
        <reference anchor="draft-romo-iccrg-ccid5">
          <front>
            <title>Profile for Datagram Congestion Control Protocol (DCCP) Congestion Control ID 5</title>
            <author fullname="Nathalie Romo Moreno" initials="N." surname="Romo Moreno">
              <organization>Deutsche Telekom</organization>
            </author>
            <author fullname="Juhoon Kim" initials="J." surname="Kim">
              <organization>Deutsche Telekom</organization>
            </author>
            <author fullname="Markus Amend" initials="M." surname="Amend">
              <organization>Deutsche Telekom</organization>
            </author>
            <date day="25" month="October" year="2021"/>
            <abstract>
              <t>This document contains the profile for Congestion Control Identifier 5 (CCID 5), BBR-like Congestion Control, in the Datagram Congestion Control Protocol (DCCP). CCID 5 is meant to be used by senders who have a strong demand on low latency and require a steady throughput behavior.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-romo-iccrg-ccid5-00"/>
        </reference>
        <reference anchor="A15" target="https://www.ietf.org/mail-archive/web/aqm/current/msg01480.html">
          <front>
            <title>TCP ACK suppression</title>
            <author initials="M." surname="Abrahamsson" fullname="Mikael Abrahamsson">
              <organization/>
            </author>
            <date year="2015" month="November"/>
          </front>
          <refcontent>IETF AQM mailing list</refcontent>
        </reference>
        <reference anchor="Jac88" target="http://ee.lbl.gov/papers/congavoid.pdf">
          <front>
            <title>Congestion Avoidance and Control</title>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="1988" month="August"/>
          </front>
          <seriesInfo name="SIGCOMM 1988, Computer Communication Review, vol. 18, no. 4, pp. 314-329" value=""/>
        </reference>
        <reference anchor="Jac90" target="ftp://ftp.isi.edu/end2end/end2end-interest-1990.mail">
          <front>
            <title>Modified TCP Congestion Avoidance Algorithm</title>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="1990" month="April"/>
          </front>
          <seriesInfo name="end2end-interest mailing list" value=""/>
        </reference>
        <reference anchor="BP95">
          <front>
            <title>TCP Vegas: end-to-end congestion avoidance on a global Internet</title>
            <author initials="L." surname="Brakmo" fullname="Lawrence S. Brakmo">
              <organization/>
            </author>
            <author initials="L." surname="Peterson" fullname="Larry L. Peterson">
              <organization/>
            </author>
            <date year="1995" month="October"/>
          </front>
          <seriesInfo name="IEEE Journal on Selected Areas in Communications 13(8): 1465-1480" value=""/>
        </reference>
        <reference anchor="B15" target="https://docs.google.com/document/d/1o-53jbO_xH-m9g2YCgjaf5bK8vePjWP6Mk0rYiRLK-U/edit">
          <front>
            <title>TCP-NV: An Update to TCP-Vegas</title>
            <author initials="L." surname="Brakmo" fullname="Lawrence S. Brakmo">
              <organization/>
            </author>
            <date year="2015" month="August"/>
          </front>
        </reference>
        <reference anchor="WS95">
          <front>
            <title>TCP/IP Illustrated, Volume 2: The Implementation</title>
            <author initials="G." surname="Wright" fullname="Gary R. Wright">
              <organization/>
            </author>
            <author initials="W." surname="Stevens" fullname="W. Richard Stevens">
              <organization/>
            </author>
            <date year="1995"/>
          </front>
          <seriesInfo name="Addison-Wesley" value=""/>
        </reference>
        <reference anchor="HRX08">
          <front>
            <title>CUBIC: A New TCP-Friendly High-Speed TCP Variant</title>
            <author initials="S." surname="Ha">
              <organization/>
            </author>
            <author initials="I." surname="Rhee">
              <organization/>
            </author>
            <author initials="L." surname="Xu">
              <organization/>
            </author>
            <date year="2008"/>
          </front>
          <seriesInfo name="ACM SIGOPS Operating System Review" value=""/>
        </reference>
        <reference anchor="GK81" target="http://www.lk.cs.ucla.edu/data/files/Gail/power.pdf">
          <front>
            <title>An Invariant Property of Computer Network Power</title>
            <author initials="R." surname="Gail">
              <organization/>
            </author>
            <author initials="L." surname="Kleinrock">
              <organization/>
            </author>
            <date/>
          </front>
          <seriesInfo name="Proceedings of the International Conference on Communications" value="June 1981"/>
        </reference>
        <reference anchor="K79">
          <front>
            <title>Power and deterministic rules of thumb for probabilistic problems in computer communications</title>
            <author initials="L." surname="Kleinrock">
              <organization/>
            </author>
            <date/>
          </front>
          <seriesInfo name="Proceedings of the International Conference on Communications" value="1979"/>
        </reference>
        <reference anchor="KN_FILTER" target="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/lib/win_minmax.c?id=a4f1f9ac8153e22869b6408832b5a9bb9c762bf6">
          <front>
            <title>Linux implementation of Kathleen Nichols' windowed min/max algorithm</title>
            <author initials="K." surname="Nichols" fullname="Kathleen Nichols">
              <organization/>
            </author>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC8311">
          <front>
            <title>Relaxing Restrictions on Explicit Congestion Notification (ECN) Experimentation</title>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="January" year="2018"/>
            <abstract>
              <t>This memo updates RFC 3168, which specifies Explicit Congestion Notification (ECN) as an alternative to packet drops for indicating network congestion to endpoints. It relaxes restrictions in RFC 3168 that hinder experimentation towards benefits beyond just removal of loss. This memo summarizes the anticipated areas of experimentation and updates RFC 3168 to enable experimentation in these areas. An Experimental RFC in the IETF document stream is required to take advantage of any of these enabling updates. In addition, this memo makes related updates to the ECN specifications for RTP in RFC 6679 and for the Datagram Congestion Control Protocol (DCCP) in RFCs 4341, 4342, and 5622. This memo also records the conclusion of the ECN nonce experiment in RFC 3540 and provides the rationale for reclassification of RFC 3540 from Experimental to Historic; this reclassification enables new experimental use of the ECT(1) codepoint.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8311"/>
          <seriesInfo name="DOI" value="10.17487/RFC8311"/>
        </reference>
      </references>
    </references>
  </back>
  <!-- ##markdown-source:
reO+Fza88c1Qb02dkuAnTQGMZ1OrIayOTtDvajOZoEFHj8x7FXk2nJm6mATq
op1cr8p42FVq+7pJJSZOxOfssb1t12kbc06tvhCKOF7d1hoEh7zg2omUHGvT
0oPD+y39bWvxNLNvuLauZpY/yqQtMA3BuFY2xSw0VtqVT6KLVdQdWhLd/AAh
2bUhRAUsVxpx9RIVlLMnetZmIpHuJ9l7q1Ldq0l3egLz61LagM+KOl/cjrrK
kxOKpexxaKESvaLyBQ4TliabXkKU9S5y5NHbqJKvgG6FWaz2bGST2cr50jIy
4STOcncr7+KYA7UnnJ3uJcpVu/s0cgC36y7o5o5BKh7Y7eO60Q7X9HZzO6nV
ZolyCGXgdIfa41RyjprIYe8txFMdV9ubF9dqdLawD4DkpoTdzJHrnj15crbM
Pg4CcWkUkZxhDxwi+tQeMJSHoOxIUbgeh5s1iOF+xfzWXb51bg59xyj9Trui
7bNLvsJnd2fk4oRTE6pVX8vXsDS1ZC1B6+r1NVoJgk0ubQz+uqEJd100D4zI
SCSq1bye/ppPLRphlL47ObY/hpIP64Kfkk4GvCW+z642qw9uPK6YAVNdjViz
JSDCwQnwdv2yEuXJHKj9t3ObH5XNSdVmVmSLRJSBO6JcJdfUD1KHSn7XhZ9E
VmvJ34Df+6ewz/DdEyJ2d3GbzEt2UNygjlQY497eq8WpipNFbQc98C56YfFj
ugUC2N4HIjoiYptpcbs/QeOtV7a6QGriOEpZl1G2JBojxHTRuoPynXF3P6vW
vyy1fuyZleeUFP6Yx2lg4rZQkjYwElcjHZjDFgiZJ+xAISPM24KnI+jKaSsp
KUf8S9JWOANE45PUCOB9jtYbRzJY7qAi7Dq38oOYKuqOFVEMubC1i4kyN/vI
OEwYcN5qKxym/0k+h6Cg03j94cYjcXZVncDHaWK1s3TATci+ztpXf4obPVpm
GDa54No4oHcwwGnbaUTXrVjQEVxwwXhpANoDtZlYeSStkQ0s1FgVicBiQ5xW
ksBiQ1Sz86Th7slwLbHSVCExdXlZyS+7RaYloC1Jw34H9yfpkUVUKfoE43Lz
Tx9n7aEvgUXJSwZwnQeAFSc4J7k1YqsNpBlAtdo4A55V3rvIJcYlcYHmbDlh
46aaycSvBm8GGyN3ppspAZ2t/FxKTQ2oIstxNIX0EXIlrd1LktWkaH7bT7Wj
nrx8rZPghlnyf/wNv+rsnEGzIo4tdWEVH5skCFXjUtulpt1N2wFk7uKZPMom
0rKx9JpkA6jG4UTceawn/E+KXgViuL/6SqICZn2h6ZS9XJlTlviL3kg2FcgT
bDVszLWFEQ68PMrh32LGsuO1FtFZMXdqSP5Rwg71pK1jFBdKM/uS1pQQjHyX
i/+YBnhnntOfsEPLpXklCTycPxN8OK7kQ9RoMCflGKBxmSCSxCJiF9NqCQZ0
/llJbtNZNE1oYMGP2EArLyLJ5uyCg8ipg7iSne+O/zT59c0ziRtgsibdxlwU
aWT9VAtLHFfCEfN1FLHY1X1uJIgeWV24AjAli6HIX3HlQczDOJTsK4alKxSM
CkGzWFmSamAa0v4CqdbC0aEdQIMXk3AVWwZC/ESmaTbolIAi3/XtEgLFrfZ9
r1mMhMEtlzZMa2LlUEu3SSXMcXLE/WCQ9aYiPdlFVJC2rcH7XwyEgiYu78bx
FT2BUBb/9KesYqMjUPXa/v3ZsB7d0wM2ODgeqlNOmqf7R+OZq4m7LfAoZpvg
2zKUm9XVjwiydvwMh2guHJuA+yt6MjUKbc0cQJhxG26URZEeyC3LEZfQcgEY
kthu6r9yHBw8F1b11i6xs3UarGPCuMl2j6t1kj5f1HlPAgwLWO5LMJkvLyHd
2tObN0tE8lyJo8AMuZkfXaMKwCbgwrYXt+ikl2bTYe31pO+0ZZK+j2lRv+aa
/1TXm6UrMZDfEaHbyqPyMRaJxyrBKHT7DApspy/bMlsb8TRsKrGy+tNbjm/5
yJl/l62ZR8yk4gGDekDII0qCEv3EIdz3aRyKwiitqavnkd+wXQd7egterDWf
tOh7EJS0ijRsbqGDdEhk79Vhs40YwJLdNygmRNV9+Tn6bZRYP8aa9BZOL+af
gyira1Az/nXIGAap7rzcaAHvbg3E5ipoE7NpXCE8rRDfSP+X4J5gQB5ZBcT/
7uA4Yk2Kb0mcx3FzLmffkg4zErS+rY57iGbc/NwNUy9L+I7585Fg05pEfkti
vslBW9jIxPVCzCarHsd48p6pPHnWyzo4iUmzxpuu/5lkOaBxiavBFBIORHJe
ZbXmSMAEE5gA4z7a2OX/fKtVW147vex/Db7RUi5jr6wNLcZ/1Ol5K9FJsFi3
OkYJyRQTdFCK2BOaKMbE39ekvdsd16d+J7JX8TvIsb8u5pABLMyAbSJsFyg4
31okfStwrN5DqHUQgLTcVJ3u+DPbGTkDkFrQ6V/hKsoKQTLpyxNP4gPn6GHM
+0w0192wBcAlJXpaELCf43g8ZY8dj+y2kfnrDnftMNy7vw++PLvKfdQD0o04
24YrgnAo5LZNSs146VY6zf3QRX0esUNrqxxxyVSMHKvx3/Kq7Buf/RKjNCcW
TkLzgmsbnEyaj77e3mFYhrMg+XCOro8I5LxoLOa0FVkaBJLJ/Q7SYNJBRffg
LWeNI07dJ8KMq3os2eTdWy6FNjQ/X14SZwpG1oFJft7MGKpVzUY8LSaXxJTh
gmMODtEfKW7so63X+3TegbUkFzk+7ECU7mIg82eAieDP1tlg4PaJ9TdgUtC5
4X9MrPzF9nyUftOhrsTVvcca4j4ILtswCGUNt9bLpu6I29s64roqyqrdaiZs
Ni7xll8FkHBEf7Ha9/y3jBU6Kw7FAR4HUX+paZa2zPbbGdS5qHk7zp2+49K7
2LWqeBI4TQ45zOCfNrEFI/fM6xwroT0tDAVnf4p7OM7G3BFVHtLVfL8uV21z
XGQBCoxa9Yo2d1U2dSTbOGt8kBrG6mUrZNJsXUj1kw7YfuiEk9FWuZi8o/vA
AWkjE6jes7Lk6cRgyHGv3tpjyUU00+B0yFGeEzPfaSq4U6i0VFBbqbrJuXIh
s0Nu5xRRkcSZf+wQhXG48O5uEpu4Ol3RxHyluZnwOOYu0J4JG1fXsxgHE2xd
bywz92eswGsRW/WCuy0W3hDuraW3ebA72JpkZx3rhU/YVoHETpmry2qg0I1k
0nvTogR7MwdZcWclIXKJE2tsJ+wkUk+7bYZfUjsvIMvsSI/bSFDScTGvuDf6
3NkasQYkJPIiIrkwYGcRNucf85kLcw21sHwdOEjfrk54HLr7zeBEWeHwkCPD
iD8PAnb65Em6N9SQsa5k0RdP9yQNhApuftCNHuHvWr/pm63RemZov6iRbNGL
+k6Ln0loXPjLv7jtxSKFDejgQP/tfm9o7T2OJ+n3u5E3mEmPXBqtEgs69+mb
Df0+dr+Ps9WYwE6kihOk4yJeljbhJG8JLEJgR52+XBVNRCmg9NJvpOsUf5O0
F9AoI64qeLw79YdP23jpPuhwaiNY8yACFYOJQmeBuKRhfI9AwvZqBIWIbaCE
qf/vSTretyektJwz9PQJi3T20DNNebhnD9xVSVsP2n77PT2NN3R9Rl6fImUr
TiqdR7wgrIkCMCc9JBleW/41DYEcws7373RcBTarYHBHlkK2EtPcyK8ZSMGh
Js0CRWgIWl2XHwgre1a9RAHbtXpP1LmYJfCIo1r9XPAuzLV1H4ROyc6MVqQ3
6d3IVhNZC4gB8CL8lH3Q8ep6U8//YjWXkbCz6x6K1qErIV0jWGyqFSY+8aX2
QlShaRIhgJ7o/A50jqNERGRtv2B07k15MxjK535ngFpwwQg7UT5Tdnkou5J1
/0t4YwiZX9Zv+F3Z4uBkREAa2j46t4uvCrQu/0JL0E2ZXMdks+cmpq33PB2W
29kis/xy/KN/OQo0tJHvjDX034ah3F2ivzWe230P++s5hq3zv7q59e/tTK6P
pdFZviY9I7ylscIhYXnfeYz1CNvD5kC4lDYL75by9ydmBrrjTkl7IVcrko1B
bvQ/Kp1uoU18QRT/o5DPP/QAoqyi2PHo/Sc9HJ2zP8IvRJwaOJCP2mcytDiz
nxAJi3g8r8IH1Af7dKyIBcet9N08mkWdMBvjrBhWAHuIZcsg1eENSmeTF8ok
2/zhgjhxDVfoWrvdtancqM2JW+ICC9VcJKVY9ev5BDA50r41HBpqHi/yrArr
iIhNASRFfLMgpZItlnFBTHjaA7ITC0lSWs/j9h9iC44XD8OPFKNlvJig9NPe
VdlcacKNu0NixcfBGQkGzDR2CuUdxWZvArCjDezhD2iFbi0UJCQmsEWPRi0i
o4Q7sqa0s8O6RNWt9A1Yo9igubjSLKjk9IcnqbRwnAgkvk99AWjJjUUOPNcJ
mkpmr48VU65YrzcV2gUvbm2IyOCe1h+mrJyI/Dioh+py0dCuHVvLjn2OVAku
MuinGmhXmBXs48OgmhysV8VqE00vSaD9QbkaABXnswdxdDqEum1djKGWdpCo
njy75kIhdPf58CU9Hu5b+Xo3RDgH6d8DNdXx0EI7wTufDhLeNMO49nD/EmBv
jwAaW95ClEtCYisIIj1bcdkh60jMa2sNoXLxzCo03R1r+lWFnEitvCtMTDtV
OHtsJMS6+rx1rkVPE43nQukcybkrVoGPuEO/567XNaojxDFFk+QUwRhzrTQR
T6RxW5xkygXPfHpoGLx2U3FUReI/0RQrvwrt14apOsMxlioY4YwX5GUOEicJ
btal+fpc+TXOMMDeq2HYfF0wuWRrOmrszhAMyXU6zTAzzVseM+4txKtgJhdm
/EQxQInohsdX+ezDy4sjD4dXcqbCnWZ4LEkvPcfuWksUq5Y5ui4aSXyruaHC
2ZZ4waz+EIQoB27PzvH9mMIROqLrwOBjjosDC2yiwcAjyyqGwyyc0KoS5Cvx
1nInhNYKwqBA78tVd+T36ZHIVtP8slit9OziQ3azs9pipM3RR+TdbelCJ/il
X/dEoizLuVQW1Aq7LF0FRW45PaF/icFVYjyL1ssHrBa3ymqPc7pPqg1NIjAF
ThHESSLm1d0Zp/0FwceHxAjeIlyShx+lZ69O3L/fsUbsC5jY7/T1/wPvzq/i
JT6R2sv8cKImEEZOdn58AT+7Cju+rC1SIAkCnQIS06o7SczvK8L2d5I6JBVw
iMLMW+U+OtJF07UW1R/J4UW/O66qV1rflO85VvUZncVgGEnTW4IZWLRqvehM
WL83TIufWzxB+ntINkEMwDa5Lsou/V0wwzD9Y3qY7gcsa1sc1bOinm0E77YW
kpq7dyyAquvDpq+7PuzPbBrPWlSgG9aiDu2069B2QX5np29HyQv6f2nezCZC
RaXWpLeT+Mhivf8cYq+ZK13PduIKF2jsGb70bbfjpPbRl3zr3RoOEaZb1dn/
2Re+8L8GPeELQwRsWkDNb3LQh05yOa+XrqIp6IKIanRiUugUJZZQhEZ+/ox2
cV11R+KFAptkrC5Z7ZQfxVgPiVmdqgnnIgUVYSRTqb/sU1T/L31V1oHlVihm
nRAwF2XdL9qqsBalgom8kamfhJUs2QWPnfQOkFmlgkwrPjjpQnN7fZMj8Q1o
7bFkxh0fg2DiXvlbS4ZUDerBSmwMbi6zsZ1ATt1p16MN6h1wMRMthRqaiH1b
JE0K7kECRaKAP4S4oGjon3Zd7Gjx4dpsdEb7UYjxB6nDy8Fua41pn1ouodT0
KlatHLosSn+WZGyti+arripnDeqwBqVNiqAY2RLFHao7p7ir6NWJcwEB/4JY
BfNURUEnoh8mXCpI6+Rx9jqppJk5neDNippCdM6E5V2pQBfzk54Twruc2d96
V1KwrTI1lESE6riOPppSr0FORDXmTO6i+qu9SVA49sCyHqu4juK8vGhJfj6m
UesCogwtke5LqKxCLy64BGI4nORYZ1w6v1PoT2hxJHhrzJB2efXI2QLNKAnS
qXwtDrMynYxcHqyREsljDlbMReTQ6/Q68P32bFc7N7Qi00cSSN3ZftIyGKhi
klbaXsDtuFtuhE7t9dGfeZU1Szb7PmuGsB/PZldlWVvdXysUEKbz9lFBri3O
CZktia3X6Mb1oUyk6/W97XDiuYpxpNppMV5WbJTH3HC/rVDHLzqxnBqrSINx
dz9c3rKt18apQx0g1B9QwuXK8TNf5K6PGVywPtDG9/iEfJo4zvs4MJvAlgBo
n4r3MbCojPURCVrOAcmdd+SqFjBSYrTQCCN0Xku493W9QHn/05/fvn/1DNV4
JRGYp7a6alIFkhSJrFhspATOvEQtNiZgrv72d7UULiYB1aemaGkxKdAoajJX
qOBMKaukaHqELBUbiEDAobcuJbWsZIncf6wT80maf8n9NGjsfC0DDK0ySLtA
KXFiKesYVvIJ2orGnN5AOd+I3J/LTRzIfnBxXf8Uvr0PHqIRfNDPikUKFnui
IVIfgiXxC1G0TNctoEE2DGxf+xk18H3F524QTWQaBzYo0SJeW86KTKuJtwhS
gmZT0owkkFW1mJikFIwlpcDFEcaKaVgeyPyAq8hlxDFFOycSufiO4+QiPXpH
KsnLL6JlTKeV/MllFD4LT+Loj4Lr4/u4JoCK1V8+RI6AAoQ7DodaUxp9MAYh
M7cfibSO5Gtg1tZUtFqU9FLyRb1WCdMiPrGY7qDBhqu1Avbv0s5arEl5Arc/
S7SaIJooFFJ1kM1V3fXVHOjPD61tUuh5YS56k1lp1oB+sSkg/I7xQLIw2tHN
QSCVxPaAUsQcEbaxaikV1fIwgNtXbRU2LrV8mSQkQVFavmM/3DsgDjkMmF1P
8YZV6kE90gAHbgLWKT6gwQ/q+GVvpHcdmmbsE416M4ri1KFWL+VWC2VrR2QY
U/d3SHZddeOe29ZHilBgnnOLdGd2jbJz262YOeILMmT0iUTT6no5fI8tMaTu
Z3TRlm7pjXtEWj4/+tyuaqa9hP0Y+iKJc8sl4effLMYJMXdiErP8Lqn9iuQ/
rOmKmG5Yvts31ZC+wv7kgv/+Hv/5J/7zd+Px+A+uaXM6Dv77Hb/wd/d52v7v
7/ELf7rzBemmHE3wz5tCdqG9lc+fvf31jZvJpvjPL03x9y/tAi/YFMfv3r88
fZ7GU/z9nzrFu+c/vXz1Svfxf2aK9yfBgXz1FL/T97e+sPU/f1hj38/l9xE+
BGa2Vrf1tYmX2/uu2ythlXYWoJx2slkLk+f2OHUQ6mmtHqebYjFHweue3ufc
f/lH9hyiCgf3wWbThvUmU3eMdCGtpIug1f7gfivQpNTHyPVDtGGfix9ybfBc
0lmQNt3XQ9mq6WrbOZ6g1TocuoGr8DpJX5FAdlMEmYNBKUao2JnUcku4RKTv
ySYVbmyfV9xg2Yjppk7jyX338cu8CfvpcU8liw7iHs1EE8R+0eqFws9963BN
FU9EnI5ZQbVZ1a5b4iho9hj2fxr5ygcMmSTyHUnjZA/NNCgt5jfDZRPD3izc
VJ0pMztrBB7xqNb63PphatTZRSU6z+2ItnG5KprNPB8l802VedO8T6woraZB
2NBKaD6SWnSh8BBKV2mN90qki9agVuqOTXGberOKtoq4dPorIgGj0Q7VKMxf
BQ7y4cT6p0SHwZU41E7CDdLZvXakPYm5AUnQjJhU17BAkjTRlXy2brWAcKCo
S0/cdVp1Ehr6Q55z9iEfDv0CRNc4Cm44wCOecp9yf6PYYc1lW6Xfj9iaQu2H
FsqtfZcZKVcVD/JT7ioiOCogFMRVXdGWrNzEhZcQhAzKle8le2ec+V13aF0j
v0uevxorCatIlAGlGllBGIm4cnHqYSF2UeHZPSZ6iJdTdHA9ulm+ULqqaRqu
8lkeN+YoXKyY1aU5TD2R407s3b5Vve1gMfGzvDNx3bJffGFi0gk3M66C0s2n
CTyaGqLgs2VBLRdaiG1ZNsW1cxQjmOMdRjW0csV0gwnjB6PggWHshQaddcpH
oflEPbbsWZH1350co8ykKyOFixetI8D9cBXBz+EaII3/kxYiDkTpoMfDzErO
eWZbDkMViv8uN03VRYPrq9nFp82GOGJdrYMX5MunDmOIBtEBXXKla2bGxTzs
g0ZXcwUjSvAa28jQVA/xDljoZsX9lcJGuOyP9iyW23CVVbdBO8NSsmKYSvB4
Qilc62KuCaKVp4JeaXXEjpkGMMGTjcuaBI66Lt35pmIDLC6D2zMvYIont75I
KlSGnoPj9sU4OR6cNoU6ZFYJaWhlXvQKcsPyXmiY7QmGLQNk/pFd9NctpE99
WXvgH67zcbUp8JleZWsFg2IZldHBO++zyGlES2dXeqVD2IbUg9DHkmF6O657
qU8agwN5mTP1NHHvGcABQsUFx2JSrn1LDDsBa2QmV6rOH+Z9Z0KruZVI0cTN
qWe3M2mdJi2S6LdEuggrSR515aRA82VxTHZWrmk6oBZnbq7VSZdwGJN01+vI
UaJN3/j+y977fOj8YWqBuuS+PaJKJ+KBn9DlLpZcnZf3xcO1P/GZcoFlyyXL
DbVCcKIrkGL6YDeRy6mWTGh1gtJmRXybpEed+f6Q7rtW5LXEKZq1zoJ5A6nP
S6p9Y/3H79N9lm+w53K9LmtE91jJ9AAZ+qe54S49WbFUR6otKwln1WAGiFWb
QDDipsEB44Pk7KgAftBqoJrozMtl9FZncS7xLwZEx/wm6VvuIGN9KQJ0SFwu
m6lGykACcZ6DAgLXYTaty8WGk/8XEIcVCwxBxDwb9N2qLgm9/mZFtoMmW8ED
NaKwnT70UMFDjw4l43nFMk6rRbXLGeMsMRfSFgg50t/kMA6F1hQilcJ9JBc7
vN0gbe8mv2JNu1W4K1ftgDuZUIqbqchVb6ZjWaBEYc8lwR88SxvQqo+13ZSW
HZ7RauHVjH6IrN2tJP3OTgFeB7KCUSR8IWnl2tHrb1eY3yJ78O9ftTnp6+zj
T+waG4iH7In3o59bl1JGkCd7I74eTzT0NNSXELaDahjpH+0fhzQJYtGb2/bL
52z5dGkp9pRlCn4+JynXvbTX8wLrwPxalLwlLyEMmm9l+C3q6pwjNAHJme1v
8o8EdCm0em7ut3N7NV5k+GoYju1nQqYpQIdSl1f8LJrsHemFzbFzXmlDTT8D
Pz+FBHBGmhxqoC78Q051w96PIRzTrWh99xPN/fTX+H0h3KDb/vfn2KQaEQdh
ZX0uMWI5z6TaANWlwIhZxgX/YfDttllRqwmBhMiPo0dewWY7rm9j6rQHxFTk
zubD8Zc4rxb+2rIMh+nHn+HBz9/Jtz/Rdy/n8Mi2NwSHSbgXhPzYNt6uvFdl
5DSzoNNmfzIrvSS5G29X9KE6gTlCx5lHAhtR1949Ch9HhJwJUiKdlflpv+Xb
mbRYFpVKVzJ0BLZokQK3Ph+2AVReZ6w7Ws1ZtW0/1Ip3J241klm27ePD+OtX
kKsbC7PqYL+bYtv1kBdoN0HDB/+UIxrlEryT69d6pjj/jChH6wnbu+PfZS41
vh5D0jtBSGYHWpxB0RrOzKX+56P5NdSBLwDgKW73019/Kiu7+sndsHfwPc17
r/opZ1jP/024XfT7MRHJ7mV5BS4Y3haOtelcl9CCw5xypLc6ii6XEIF0R0vd
+DZUCSI+/7H7hgXahYNT9esuXN+l4JFkaW2aQo8sUU1f8HDaauneauD+7ExF
bNjj9/AvCyq1B89uV9nSmY/w03iuP7k0baeNWHoEbORErBECxD3esH/Ntsk4
4h05e9m6mA8D3SVhZrrLFlFXP1bValsNvhY3kHipJukbKxsWNb2uE1+Rtl5n
HCDFxw7lqYEiTyvY3xfKyFqvt6FazdOLnNRjIqCo0Ek3oMolKOLFFAEbZ16o
RU7ZKoj0G5nJjIvlbS4N49Df+eN6QRxqpDEdtiuNUC3FoxqU6SLoZNXMlX2R
OBg6fBh6y810YWGGkRrLoiSLI1qFg0SeeR0sEEB9O1iUl+cHA1IDhsOgdmst
lZ9Vq5U98EmxfYWN2VxWjDW0lduB670dxASxDpGwmGadvLvlQELlm7cUrv+w
NUVSQ/VoK1Wqvel7QnBesKZ4MPnhh2H66RNoS/spypLSnvElJLNoqGf5RbZZ
MDHSgeJR7MHnzxZe42RdKaoSZtNYNgxsqeLrobPlVHeazTYHeiD+l+3EJpaQ
2rUrOi846jER9Hpik3m5NADik34Yunc9jJ70gUgp0ZG4cS4rqQ9cx0fMeKSq
vdQirlXr7K+6Mopx1txUzv8eBbNoNiZL2RzEJNLI9DYxhOVliXLNbiqSZzmw
+a30Q8q55C635SBuTbOoRaa5XUvzkIRp2cgFYcRgYddAzaes35XpIH5lnO4P
0//43mfJnMMDpo0orL2ScxAMDvo/oAOI/p4kEpXsy3t5rAlIpVjEamLuEpRl
1EvILV9PXjdh9DNJSVRkUM3OlWMOa1mFELOo55sy8FjI4oT0L8rygzXDXeMm
ZBsj78E520QWuIO8Ffk07HQJHq9uMB8asZnWLlJNkiTM5xWsJ7gvXfHrsE8y
+5lQVBi8y0AJbtWTiDf1aFuWLmIXlM8kknG+SZ9/lGLHoackfWrBvd5ufaJg
+/RNLl+Ms+CLMYcDj/kf+sVYAS1OHs2EFKGIo2M4UFAHEJuSN6P32S3HSbjb
IAhgzOOxH7UTZxNEALq+EqskmjiiE+wARHOwzCpi+6WstVh10BVwsKPpooaR
O0PJVSx94lQb4awta6eUnBg2w/a52MzGh0OqVRiHPGb0D1pVx0JLMi1Lkg1W
EVKsYLJl847ZNoMnDCo62ACwE+Z7vjmpmtW7A5tubxCOL6121uSamRzVis/M
W4pYYthFe857sK3saFC9ZbhlQ7YiMVSi1YHkvOoejRvhWwToV0KtrI8jKEoI
XzpFyHl10PvsEHcR++QidN5pPGB79zBB9cS+LLZKSLUG1wcxhKCOSALJtJOB
thlu2iXIfX9snzi4KODq8B8NfMOCgwdAW1SoGI6SRhOXOmfDaN86FsZow/g+
R0Qi+TtNLTyvjU9bSJK4L5jWs33P2TmyAO7e6mhoPbAwYf1C4gH4UFU4lA5X
q6hDs1ushBK2q29JknH2IV9FskxkO/KijG6k17olpQj7ngAUvgSS9LonUrUy
8fmOTY9MePMhrmFwa+Kbf4iB3qLp+kLXtS51X38Udmp8+tST8pfXHITpqCt7
PtyCd/z1OOywtpbV4TBiXyFs6ND+Bb+JDVOsjPDxtSvY9FcJsDTtII86gCjG
jwsQxJUB/vAkOtjv0/3JwYOAZ0aIoEFgNO+UW49I8KvKdSPEzVd43SZs40xn
cowktJWjhcA/0XMBo7shZLP92Pa738lqspWEe4gedbNbSiQ+Fka3UofqYuSg
i74EjnvDbQc17O4qsOlyQSNBcWDLTcbBY+z60xQl0O950L2idt0ncFvLFROX
HOVaRSTWXErf5wjoO/fFFtIIXx2PdRXow26b3HmPqwgkQZ4EZxpxJcWxlsrO
Nk05bja4m6q+8hbGuuorBL+UCy6PC92yECsjooBcYBOxACtt4JVN12yBufhV
xjss+YjA/ZK4McWkJZg5JdRkMi0VxJYpL42ZhcQJYkFuGCpLmDy7JB5dzr3W
zAmlYUmEyHVuArZXhmGeDkQc12La6n6kp2viuBeit4y2C7SaJJ44say3vRBK
EFRFFvbAcRUPWtWHOJ0CvsFMqjJJOROmCWZ1gYdEyEOAlMjBd2g5scFdLJav
Uytll2lf6gfXfGIeMGjLgqCCfF4LMZxgs2eMOaRVfTu08TUnz63MK/igN/jo
eNU8eSjxFqtGc8RayWA0BfcTMiUq2obvJNQDQ42s3344DSdtNq32AjnnC0Gj
Ut9/0y92CUHQhtD6YsJ+J/W9nqNWDGLzuFYijeKkdIEsmqYvSKRilIgrkJKu
dYkrLI1UZIuCetagl62qH5mATeeiQwYTs8F5qObZpOeZtkXDGEF66HW52IgE
KokMYU2ZIeMIR1JyN4ngHFJXNQty10dsNdRiWPisI+nT236FisKztdBSCqCn
0XEupX5fkMu3KouaDg1mRI6vcbBhy0W3NmzfbfdSYFLGvUeJuIS9Vhn1eyN7
3YU0bQf5luwhU6HXwgjUBLD9uNWn48sfsd1BKhnwx5++4UHMYZzHZFNNCAJn
wDOwS3A2Mf8tb2XFMgwZSGRx3qyiUT/tyCLHc6SZ1jy0HKLXz03RQBu9TDTg
wFu+uEOOOMr3J3sjO2ua0G9Xpvb5YtLuh5beJFItIRqxvBATGd4KzJEkgU7u
PRghbJI4anoN9CG8vKXTTNi62Prg82dBT8/XpLKJwx9/brI+aCEksiauuZuP
y+7aCdUM0bUS8oM7bITtbYkEBKM0B1cS4gbyzhfMhk6cW9IfQO+U7XeuvJNH
Dd/ga0sQZtwjrpS29AvF3CTC3BF3S7PUzZYi5sHZj2m+7YTcUsUFT8yVwjyr
LL5J8V51enYduKykRJorSpSIy6QL7cBdsT5wCx72W6T83GGZkyfifpc/B4Tr
w7ZpSpcIOZvOpA0cgwtume46LsRlW4STDv+aInniFTGZ8YJLZZuXCIKhEMJ6
nSsVuwbNXmZ/Adm9VRmkqMIGeImNbp4fjVqOfx5F3qgwLs7Zm5DpMM/9HxJy
j9jC/O7guJFFqvZFxCHQ3dkLXHS61A5wIb0+vk+NJhZhStPeZKSFZFKZi4PD
IbvOXCRhH4WXQlhl6jt0draddMMB0ygckMMilRFEaR90jUZhuLQGAbSDLrM4
fc4pvqo/8xJDb1J4QIhNsOODB6Qduyhy6KbSykhTbQxTpwPkZ41SSaEapZLn
JMaQ9yfDdrATIxDpcCbUR0leDldJ6l/5WPjoHSn9ZhZArDFCszXx+I1ekXkQ
cp4IiMNo0fq3xaAHUdfJ10Sge/GdOxABo4Loc9fEMFyk4kE6UJO5sM6d3hQC
TRjuTyP4PAQ/FC8muw1Dtsu6WNJmKLTWvcnjPZ9YQ9j1eO9bY6CIxzy7yiPe
yu7ERD8UiFVMXFwpjPD03p+kYTwjfQbrwsiQ1ooXdCZQQ4/0ufTGdJkjiArn
bQVJ7RY5Lpfbq6eG0EHyDSEjlMuweav4M4BRwkK24GHAU+QCJI45yscxQ2PD
4B09Y30n1CRQ7KS3wAUIxVWezauyXHKDun4FAqaYUKEhnF2ye48liWgPTtsS
Ocv3p4nZN33mGDjJnwhebcKuGu64B/vie5j8rMsc/sf3vYtEMUwAGClFrnqP
a94oGkfUr4E7ZwcOE9ZGa4mOQfEMUYFgGtys0Q2JjWRYkme6YaUQNuGqsTnh
trpIJ9DglAjQrocuIkUk/p6fs+e/m2hAg/lKfvyiKMmLsqm7b7MQM0xdTrsW
KoFdJpXQ/9SF/jPqaOVRh/LgRb49KkL9pUQO3pvGeQaiaYdNiLbJaE5P5FZw
Lgov3WImN/HjDrXAFfVp5ds6ei8cv0vx9b3fQPN3ZpqYsJP884jpqKWk1dtS
HpLt7IMJMorj1C7voTfwwuD9nQub+cqsh205D8k/lPPQ4R9JzD/aSharawHr
2N+LecebsqEl3I4kbj4pNCRH3eGwT3FVMokILbGqJqpIqXc9HeDms0+x0oSp
RKY457QYpoDmaXFkxz0ahT3jUUFWRzX6wa1htvlbMaweKj9xN0nyg6ShvQgI
88AkAAwNgl01/EhCIkSoYxRBameIGIkaXzpC3BZeFRZ2xNDSztr6gzpQK6Vv
rEJuYlcZJJfzCs3tzDTEOx+5fskZ3jhlix9nxHpnuPicdkY42U+fMNKYLYNj
ei90gct7Y71U1kaCNYtGOakpF5oy3yEemknviEeVw+esiQWeo+TtD1zoBW1x
Rz7yburwQrEymYTJXN3mvz59CjYOKeV0Gzot2/eJQ7mcfsC0iyMx2Emi3m2g
6ZhRWLJNg5DePiRPtiE5jchLYp6B3Vqo/Y71KpcJP+S3NJOWF7TgjyQw64If
N/nlbff4T/UJAS46bvuCDjiJw7gJztclMreD6lYav4zulpL9E4DgN+w4BJQU
tLVCtBDZueLZ4jaIpvDHSFQWMonM2NvT/JkVC3cXJDK+ReSh36TLqfkskjkJ
RxhZWJRLp2mhrHNKZB0DkNHcgB1JXe5E8xYbrn8Wxkn4VfdyFNxoWKoiX0Bo
unX3S+5QZBntNQSKhTPxFVI0naoMlQL2mvc5+837Ig6imtvUSK5iEHJkKhUP
XJtgAxCwZlGXXOgWXlAfEpuEgTrtSgJeFewFUqGRkzxSwuUlxP22TfRJj6Qs
L/8soiGI8ZV4w/IVdBGNRHA3hRO6mU2Fcm7KbICRPuobKXQkDOFJfHpdxaVE
u6uyJsiY7iaXamer/BLFR9WALqWK2W+slupCkkVNJgjW5mVrWjMvk03GbuVI
quLU0kTANCs3i3nqyw1qmmfo9DChG9mfltcrzYl7L9kw6cSMRNVGO4Uyix4A
eERN5kUlBRPZc9CSRyMEHgVCdTKwvtVGKbqH49L2eFYkPw45y7ikA2UVQBzp
XGJyttC6fM1VlKlnJlqRac30IswUY01JLUcm4G9QyL6oAbdIk+Bfm16ImTsB
vRj1GFw5iMo4kIlifqS255BuvF1Ob52yBbjTG9BRbonlGLKGglL+aArMNTY3
K27DW8JDPmOT2ZKLj7VzKfGkaIzxdmVKgxyqG9Nck/Q4LK3LbWG0wbeYJL1F
UMLymW6H4qVGmJAEebFQp34P2H2J8fC0SQVqVDnRcAeub+YTQ9syFNFdJz9x
noH072tNJkJS7aSkUWTMkZo7HAiqFVQTn3sLm3ePDA25PSscaQ+Vn5EGrfaa
psRU1GMW/RKDSwJW/rKRJvNYgcbHReHmBzZJaHhw+i8XeGvPzjrwnC02I85z
kd7B7Dm5yaLy7oySunGH1l6oDwmyNc1G+Mw2kUKqlKKpNI3t9EreWtLK6Q18
6nEG9Pem66dPGML0S6K/9Kxo62r0onPQjCvnz9vldYI1JJdVNt9oPWfDkYZT
02CuUWcyx22L/6RL4H0l56Rb6ZVNZb3fBXQsOPces9kqRG7uiiwVYbg6QcZi
Rb3QOD4iXA2OFbC9EPf/NNKmEY7ImRosjiJGRqImiEaa7UvTFHSsFXvpRZjx
7UIuUL5Oq2VKIoszeujFh9FHqjdkTqhQMYPN0FBS6kxriAZ1cDhAyyG4fsFG
LQzVV8TFZiTODksFY9eAUZ5TEdvl5NlsCAPTUEt1d2GjIVkOMneDIEnlVGDm
GNdrBLr9evSG3wlDHHsjZ11hq0h4hx/v2YlwX9TPKEJtxeKRwnJXJFOg0xB9
93awP9rb28P/DY13/CgEoLNC5AnAOkCf7e8hYwkXb39vb8lcMBN+9+gexnJs
aKBVbPYf7O2Np7ewWJ29V0hmc7BoLnPh3VzlFrQOEpjgIOWsnjpI62FG+6PA
ln+tNzOrAeJfaMeM2OiSXEJw2R+lB6P0/ih9NEr3H6JtSt7MRoaCeXDp6zWK
WZFwViCFg32BYujEydA6qzwK20lFXLjKF+taI8g4Jrx0ikdm5RS0/hhnU0ku
VRhehzUt89zFdEn+hegz4GFQXxltWhY0LqI67i9g1SsyuRvKlXsDmuLx0dK3
uBCkkznrUK5k+3xfeEGgY6kLGJK3RAMnd7kWNDxGqmA/DcSOZmNVmeCTrqHk
n5UvymflzWowFM2fXXdDn1UWBL16uzAKjEno7G9NiEDd8jIwMEd5LZ7/fH0a
Qpr2JCKMvj4Tods1CxrTyBqEiVhnQcmxTd3p+DwMzdh/cwZuayQGl2h7CpnB
hcVLdRErHuL6y9xqEF2XO7a9GyStQIIQHV1THbdqtSD53GdKLbFmkOqNbQ77
E7ZjgqW6+CsuXXUWCGYueHC0LXqw6caKMRWR2MEtkYPag/0LRskUrdi/aJBM
kmOUUDPyEKzuQrq58VBBqUaYgzR8X8IxNIr2IrkS9qHls4QILvLLbHYbVNBy
bJj7Jdj4kdE1sRSewKpYFfUHLgyllpCoLJiKJu0HQIYk1o3FP8Yb9QESaPIi
Rct71zqxQu5dCHN4y/ZqYXQAXYijPUNQYYxrvo8JW8rZZlrMtGQwtwTgNFwW
db1BUjxhW2c8DGgnq0zscjCRzsk+nT3amNgUD8kpLdnKlfkEoZKO6rlEATN9
4UpJ0tgl/9jUvoDFLZt8y3l2yxLqyxWtfwzdbMaK8e4rSDDi4pOiTIF9Tbhd
sEJf2Ut8lprS7ZZCMsVr596+v/dClO81F8pm54WbN30O8rpibsA1yTiV8f5e
auLJwR4R+3Q3Hew/2L+fQgCph+l/PUkfPjQRBds5QdrzDLviweQCL0G5ftue
QH+sXpl0whBBKz148BoLGuC07/+S/qmY5+VQiiFaKe17e8gD1cRQGigwBQuO
LJdwAz97I3+zrJAirB66ZHVdzPI6AoJOSTC4t5cue2Fw4GFwJvbJBfors/gZ
iBi17Lw2g2S4bc7TAPHWSqc1WPeqlOsaCDeyXFpZbebXktMj/KCKgvnquqjK
FYczqOE9mM5jLONiOsB7RK2gKU+JIms6EOaRklyopzlKGZMWt8l/7QeiKYAo
KquSg6P0GWl2Y5DhsZDhI9XKGOIhXfj0DWE/Xg7osalwTJiDKx1UinK3PixH
mtVoVlr7rh6cvozaYijxw2UA6yjfkaWn7YUqXSAUy8JGz4WjnXKqgBZD66hi
7G9Tmd1lHw+FF1thWOfswvCNGU9i5yJSmVQ/KtCosF3ysX15hApwjDktKhjL
FP5McrolxIZ+jdet1QWkUEy01iisQOn1SLP7pGC/VJbzTJKOgV0U45VWcXNP
hCtlrlwrlj/GMDNWoP2LEu1yPp1Wh+2RBFPklvIbdE+9AkEC9LzEP+aEOVVB
injQveiAp7+HZA+6rjwDeMyhrcPjpl+JTYQXz8Um8YSkoNmHc5vqXOTtwaeH
pPU8vPd5KJ/IB2xOWRYrH3AvBqFhOK6+Yl+MoukixUVXzRgjUhWv1DYYH1Se
wjLHqcWHNp38LvMx9GyyYaI1f1tX0llV2OZBkgKKwTXKMjVAJWUMNCvkFXoJ
yk9dDFC0o2+ylHk7xMJrhxg73E4kly5o5WonWZbosCGzhanu9KdZ1th9cMnT
mMDL5wkl/pqmZHuryTWT5CisnisOz5FresOWdZG0YVDzlgeO/uRucqwOJ0aF
UGciV6wDjRYvK30gqIld6Rl7XiTeNTozMN9l8dG0v77ypp6yPhMd9VgvXRA5
6GRVPGsgGPqiP9ylAR+OZ9GHQmD1Q36GD8d+kZ95Y6oYMwS0vG/QAEVLK0hk
h521VEiMRbw8wVepfAVkGKWKfMzy+bb/LAUnJDEXVzSMarkuCy/NMl8UnY25
pvjNLGIB8oOjjerSYXnw7OwJ5AM25hAlNXGCRIlrESUCLOEGotyEPWV+iX6L
vnyMdr2ps5umlPPmHthF7ZujsqdIWaaVdDz4j++5rkTK/g1tw7h22YliW1EL
SQ8MZYNciX3qk6MOxXAFGQVCUCizTPZIagEgW2ILPdpzz2DsfSwE8XtS1lz8
6z07ArV/q0ScaYqxU1IK6XHmshQlEhZ7vHAxJK4u7mblMqinN4jIu9qQEK6E
hjedSb/DgNLjCCTWjkB1la19GJyWAyKpbi1yFgoYr7PLnANouEUmp0be8CsL
QuuhphEZLgf18T0u2/luYQhGMEcJhCPNGRFp/OHB+OE9b+eKLC/hpYVoWKzm
jPbu8mYa4crilp2CYdDDPaVzIEu3gnA3VwH/BpRsvfOOyiWHh+TplpA+JdI+
B56xj15i1ZMeTUgmhxEhKCdssWZ6/zzzgegOvff+LzQWX65J71bNlA5HlQi3
Mg8cJSDbvDm+SHkm0WZwgkGOVvwqnR91/1shC/F1gsnwhV5aC9ELquSng1Vs
ghKcHmqkrNbLFpKS3lm0QFyUDPZ//zZSt3l7yKLUEt3tKD/iL/SFhNS8D1s2
c023lLO2JKeVBoHVA+GJ6adP1t55TO+OOUJojGSVKLH1s3UBQ1qNT6MRu1G8
DPG7vrESDO1OixZPwn+yDQyia627YsMHu5fd4+F/vgHwjyAyEMwuCqs4iMOd
F7NGl6NZ6ZIMDB7vPUd6pCKCv3myf++HKEManScjT7wLs1s50zbQRQgyiMg0
A/fZYhwf9niV41IqKJDOVqh1bo4YCwAJLgWCDFhW4AQTIdyclVIVl8h2RIBy
5Ohkm89hF3ysef/nAbLR8Wiytz/8T8BAfqZh/guke29fzOJ8kbBCvz6LZ2gt
UKyA6p1my4FhVxRfEkBDAx20yDGcpjbGWIykejMT3wNASINM6W1KkVsN6/6Z
+M01asaqh0ON975c/xcOXJBZttlHoTQ3U6qo9C1J+YFUCynC9hhFgKA+P4rX
yRSa9eNRrEcmxhQN5qWn4I2FBweyMKlintL+GEyReFVfxgJvZCWdPTeCVSyS
aumuXdXXGmEOq1AxFMgMbCGSzfbwIIItW0PKmzFECLVF0i8HY9GVuJcAO7Lc
c+OkLc7GBgRJZ5Fi2F0/h8a2u0y7fN7blmz3+/RlzR5jFSKC1lrME9gpAunX
heHIrf2jpDkG7guXPRdl5f2c1c8XJE7k85crqdWpQYZSOhj5xcP0776xFg8I
sAY2DB4ZkwzClD1Ob41JyaBVoYNLQgR/S9lf2fY7UybmrlGZdATmiuks8dJm
sT4J4MU43/cbjS2MQo5aCllwORQnjU/4awXYSaHFSH/NfOncXW7XgSpBouV0
hCaeEss5DBNMhZGcM3U7N6imTwwMPBQKKJ+r7jrYG6X7wx8x3R6Odd8G8/Pj
CukaAnXijjVEh+nmPkh/Z/Ojj3qwAoRQIv2SFzGZ7LMITOMlXzz9Qz1MUtqV
g3JS5xmnx7jUTkEC94LT+eUXGA+GIVJsB+MfnugoijI/wyNvkT6wMt8gG1Qi
eWXuPwqWvN1UaSvldrNaqM7peqRuTLmeONTo34mss9e8EVZRs1RCXx5e2gnW
Pu7ItxAcS1fB+rN1MrCvfe6/ZHjmYckpq5jEFu0geDYuf97Ntm7Rhq9Ijbb3
olsOYuRKxwld5MSkS6gpZeWXJW51CJ/RWnoGcwv6UhVwrXW+WZ/PuMySxXdL
FaDGVVfs9z3669K6/R4eyLGJ67Az0okeGNzE4OaRSHEuiTr0ydHxL6fnJ+/e
Pn1+fnr29uTk5ZsXERC5SnkwoSW8h/BIeuAklL8vVT5+o+9bI8sxlLfVUveU
hKDs7u9e72OuTxc+9LBQUMjcvxUK8lXfXt6fhDDoTGeQP3p39oU5e+vC31Gm
aeta35+4IWEVMbLxSpHudEE32F2X8IK29OandxTW1tsVdHFIOlXAUl8FjLNs
rjMrKRolp1tFxEj6cLc4Lpro77QOceho5NalernjXwKAtutQBsW7OF7XtZyP
VFYnoLJbx1+5I5juAeA2+vK0L+uXqyNdmRZbb03bmTVO3+fpDpUZppo/zO3n
SFmR38LreqiDq6zVkcRs8nB67WynyUjzbkdVV/mrNSq3H8rDQbdRinih8uv/
qaVGU8nltalohCMX5yyyGSk3miyvdPX9SWe/YRm4aK+e/lifOlINtwID5CJe
3fuTLUCwwJ/tkO0wPrphpLDAAS43pQmKanzNzevBVME4JTGO3IQSyMBV2IhC
prQDmv3XeUlVhy+9ptaAL732/mSo4tjZXSoL/S7zdhUVQ2RPL4Jwoj+EVUJ+
JTnIwvg7V5mVCeP+apxzKcxByb/tNUieBjXTw3okgfbCG41HiARLvQJ3QOM9
G8UBky4kDPEiSNTcqsVqHoZ1pbVWYU8iiMPadrlCV6rQx1PdLSF9FTPMAfk7
CgS2AXiTB3Fu+XfXuSVSwbHGaea+4mFbVWRwadCNl2DtRgRs2VdCbqFw9M7d
2N77qmJ8rzJtvXHiZYn0+IeOXPm71N43JVjez9qlxcSGaRHukqiP1HpRb9qJ
+p0OfxywgGw07Y74fRwGDbbe54VjXOPiIPLRu7OzWn+jFx3y9l7OuBRQF72e
eIm9hSVRTyS3tSdcv23/e3iPRlE+T7ol6j1W1rT4W3chYzfHKDp6Pf3Xxeqk
WOfH5rxmSwWiifpHqyHjYdvsP15XBcdNGqjuEgsFYBIPfg7ztUjc4Gm86fT3
v98ik98trjs1tfvwd4hZvre3VaUC0JTU7HZXBsOFQuSlxWBvySvYAg6+UG1w
vF+j8lAkPXaJIAxgsrDf300Co9KwjSMzWxVDzgmYl6vvGivFqXSoV9/5HdND
Do+WhldJgPOdt5Vch0DWZRJFpUv+pH+S3c5XSdp34vzy+IkO9n3nefRh95B+
p1+GW+hKXl/SbZT3oVw5lw4T4pn55sCex6ZWYZXdFNtusceWPlE/pjKBDtir
BLoMth6J8n93963bbRxXuv/rKTr0mjEpAaCoWI5NR5NFUnIsW7IUkrZPZs1a
Wk2gSXYEoDFogBQt64HyHHmxs++1q7oBybmcNXP8I6GARnVdd+3Lt7+tlr1k
TGOoHLqo7kVVMJNLcYPJ+fXTp0+O0S77uK6xZ+ADXatYV9ZuMJVj7B63nCjO
PQolveM3G1mUfQmmF+Xb41speSdasxlUvEDnTYPQhaiF7d9j9tmlMAwgX+eo
OOIiXZziJMHmNZ3wUVfR/6hbwhsk8yZtWbO/0qZhxKu3r+v5a69SbhYZmw4I
nXXfUNb7Xj3BNblV2nVo6zDkIG7C5Wr13rsU8auXNwjcBDM/PjNs5LP3BGrD
dBFHY0HnjosLDgpmEC/bIWIwlcLTxXsYILtSkjt8odh6hk0ZN+KOFBQulaov
jYGwN0+DGdEoCUmZ61ZEq+c610mAXhOkAYOtt83wtqQw0KLksmaBSvOOkMkY
KVCxA5JqqnEsSdumyDYjVTShVUsx2/t2GcclgAOmaJPthL/RGB/lunKup3p9
KcWPcuCCz/XFaYsTNivfVFyKZwyrJYo8zCKyQJD1UF1eIupXM11Cb6YL/kha
YlKHZO4keQONTskxFs4wXMq+Sea09HlbzShwzJAmJUVsTBccMBSJAi2wYzCi
HKawAfjn8MgwJ9FhPZE2DsssCZxl+4lLKNRznRYDnOc9pco4PxAgGp8MHM7T
1p6JFg1n9JFwWnsCwmxRBo4Vb36FjMFKFkJcHHn1Jf2ZkYQiceojx4SXpppe
l9NLoqx1DMYSc3CbwVE66gYPPRu80A1+bvTQCDrF3PcOkx5sCwNRUL09lt/P
KxeNva2FuS+fOaNwJuAEg5/kCEnCTxnXrlrULdy/IQ227+DGwwj2jmKJbREZ
91LbBp3ELKLK8oND3v4AZ+KmFnIJ3e5gY9uR1YqZogvSwhDaLCg1G7FbZGQa
RFNUXyOMHNbJ0abqA47LjGsQCfhZZieeKSVt8GMV3mVRbkiVwGKSSKcD/Ziu
J5U/FnyV45rmo2/T4XPYm+bARFaHHdr8s9hOndQmCeEplfnyqXrPmMiYj2Kz
MhxibGLVVtNLJYoIUQ/FCrM4YC6NbuuhSdvER6sVN4mxghLPruSYlYEZJ/EV
TmTt8TbmLC3XjY2nm2KFHZEbOU5b24gfeaj78hnZZalsE4Q3jee9XIW+1phh
AlHQCdW8f/SJZBwWuw8fYJIFa2fQXmSl31AlgWN75APIRp3mURpBbX2ZF3iR
SlmYnS8pguLaJfyDuU1wR9zC/ZdSJ+DbBJV7WjI4o/JqiABvl/qdpDFl9b6I
MCzeBiXc35cgnFCliztA5dbDf2MOeCGF6yUpekaXPuF0/Hu2UPoidodgFru2
QvrCLz+3EjkOvmOEZsuqjbAeulEos1OY0StKI4PFmFTN5WU7isPE5KA2Qe/u
bt0c4XGhu8NyHyboe2sVmUoQ0YGH6UzqS6q3vWJ3DSNcqmWQ0LEqYNgJ5OHx
hd2vygW/iRYH1ckp2AkWFbFru6UicVOlabmoyETCIKKBkhsBKGTpdbAY5ZRn
X5tEkr9bStrjVxFdlS67S58uMeXlKmmNaThx966iJ919/5gO16y9B0ebT+Rt
cb/YfQRmckFf7Mmne2BkP2pT9w/8eHSAPpLPRl+0e/vwIzWpSVZ8+YX9OxYb
jDSLhAjSupobr4eBupj8bc3ExsI0LZJPMx3ydejIyAHnOUWNakSHzxFo5bqB
cV7STqFNQWfP70eTwLoL+erPbzq935LaGtqGpNrQHja1FwUgrTDfPqZ8Dji6
qf8uXvxwdl4IU1IwLuHeLsLWoluiJbNg+yNMQ5XuW+4hczd5X2h9GRSBTWUJ
6GjBVsNTxZSGTIDhT7fmWDqMd3BgWju+P1UXA+SobVYV51RPUDlEcwpmghP1
xtfr+Rs4nUyRRNmcc0oGnxYtHEVmB2M0Gxe/4Zy6QhKycL1koVJKuk/ZYg9E
ZqVkXAKMc0kGmn6PWq2x+6e6kpT22KYoaE0KZSFqV2IzYmZRSwmPlDPolIJW
jcaq+KD6sawuMTW6ao2ZkCktUMu3bch2EejZKjWtoCZnJFI4eFEydPGhXgNO
rAQVYqwViVo+QyQlVtW6WE/fSMugrN0Z7NWSvpnhIfTuTKv2c0Q5v80C8YQ/
ixpLyPe+KvDMd1vzewzuvGm6qObe20WNBTjohBHFKQfNjAAJOvUV52RMcHXb
NtD9XaEqgsU/8BzURMzKrLDiEhBjNqVr0Dqe53mCHWgzV1cOt0Qq5/ymmd6Q
tNIb1KQfgUmJUwR58LI4xeabBk/cHWvRetOo/RGF0/n1FnHLLCVY0fMvCDIm
q2FJEiurhjECkSNCdKIKXkd2s4fBIeN7us3sxdM3QXIYljGZgXVnZlSZvxEj
DbcgAdqYg9UucScbQi4bBkqKN8aCQehe5KZ9HlbxNQ90RmWiiRsazktFubr8
esqNYVyEjhQjeOQaSBPuo2aC2lLAqeAMl+dPXyKzRTWdIq0OZizDn0hRR7AM
2AF7StbXCgs1C9xWULVBwPUcGOCCYLi5mV5olVmvvjStkPDBVckg6bvA+4sl
UYdafY4sdkTAkfrTKszDZXqZWy5vRGU5lPFlMsr15+cEEnBKM6EGfPH4vO5G
RNrwBb5Lipqr5x5VA0TXrBHMEiTdY5MkUJ9z+h2FJtkc2eQrO+yWitdeZbEk
bFSEjSFRfTS059WgcuVy0RzQp2cjVKL+AzQwjRvQf/rF7zdeEj6w29u9iPbY
1AQ5gHEaNj6XoAXZbS5Tt3US9BkdfqaiJYGEvo79PtEw3Uizl/t6ffq0Krsb
2u7+wga5YfiiDX8jNX179gHJUq0Zy+UOKXsgEt58zN3FGxQTdytJp5Hrf0Cl
jIQNCI9sVLj4tmTxV9BvHDojkrKagj36wIGk+jf6k10xneaTaaU5VwvMFaIK
AUFS7DMHaF8OQLfhvro6v3mcGP9J8KI75/4JvOopxAEdfS36RG/pnXigGQEF
Fwp6OPxn7lUT0PZtczz4cJwsAcX2gzP7QyvQLdeDb2jCs/6mpT1h/P9RPPC9
9kOHXnlwSTb8DShb+C5KuqQm1YF9niGqOy4nxts/ilD7fCiWifDsas7A6ltf
wLXNtRjLAnhRLt+cGLXy0WLxnEN92bT2r10uWzt4KT+QJP6HCOrU49XxdmFm
LSFmDtNYan9XrA8sLe932tv27ogy/PBLOeqKr7bdsHlXJmin3o7D2bTt9kEU
44ZeWK5M33rFJ5O2ErHBJblCj0DpK9a1cRzJXui/uuMvfDSd1RAkjn+76m7R
D1+f9tQp5xbmogc9te7Yh23bnsd6MnJhb9tduydOTtx3232v+ENxCOfZ53Ro
AVYXo1WmNxer/Yn46LJH2djq57rb6KMNv8JHS29NicYbprRKu0B+BQMH5uB/
yeasuAhEzsevZF0YN3C5M5a/ymzMNjszurNvkJwXhxDwjJZLKYNFGrbyWju6
a6x5v2bLJIY2kFZOGUfRruQ8O3yG9ldGAXKoFLs+OFdeevOW+hYZxoUM1L8y
ssCFTrK0ZmHdOdTItRRmQRGkGAAGCo8KGRATsSMPszfseUrF4NAKFhiG6+yg
24oYctSrRAODJ4Mis+PEZqFHGZxrH00n8jXhkjmTKAYOBCjMjrt2PZuVy/rn
iuOqLscnPxF53MmlHiUn9iNyUHJIaZY6cNyPyu7/0sDwKsIzjUdOlIdiFKfR
2fI1OlueTSjUEX0wQ/TBDFGbYIRGcTTmo01m4Gnmq5Gfl/zMEJ8Zbmjrp42+
HheOUCewiwdJJLRshzU5+5EGv2qNuApTzd+WROrOvusBFjhVt7uAT1stdop/
Gq/1iovySukrqzYoeAtMWqY8jXJaqkfGlV3PK0CjMzUNn1plEhlcVj0GR9Nl
IbCHlhXRUsybHhJzIWDE19IJFEiINd0SSSleuhJ9tNIBx0QXQMn0ouMp4CWd
RPTXTdtmm4eO5C4Vt1bWcyHfT098PY9RcBwa2jtUzxdPdBqZTIK95B8pfWNX
TdUGenWmJPIksBtZE/lJRPN7kA8of9PFHa2WlqiRAWb8yZloUUIA2ZCye3BD
BEYDxIIKygaPNxcsq/hrMIdJVlBQ0DDNL/AniyXmB1eyoDS1UWhbkeoiK1KN
c2s6tpxMPNR4KOHWk+5Y+SYlgohMDYko29BMbwqDatSp/rHFHHGqH34L99uy
ZHzpa3VRvtZnva60AfeXprFUKy6WewrfIWQbTZHdA30m0W03ml0blU2HYjtp
Zgu4MFqN8LwSmIcx41WcB8ufktxTYSi0dztOfpLojLpdu4PxG+Unfvfu9OuT
R59/cfD+vZaxat0xDPEY4n2NbtD0hDL3EFORiwc+cjQw5EPZwG6R5auFy1D8
yS59uJmvls3Ub0U8GkiNP286qDYHMUkJ8FiEEdmcxWphKA1BxkhIsy8b64El
I5lgmZM5opng9KGzghPdpXA9PICc0K8fEiv0f93DxWRwUau835PIpEfH1JNA
SxQWxrPTWgEHUahwYnfiDJFKNeGKEsTnECbVYtrcUTBHaJGEjoIoWtCVW7ez
gWOjIHKlHeioUCRNsRQdRamooJ97WT2nLEGsbEJ/yPS2DJMqbafxHamCCwWa
1OEsV3YRxjgQc2YYt/5biS5tud+CFvCAX+NN4j1s5LPiLBEGUhRGePO91NR6
hT5tZr5JOMyM7UaKbw1RPRbam4SzTAAWJSvpsTDoxs15KPXOrG6lY5gNzFgj
O9VXHOYgFuMkxXdGTKVMZLMsMSzKO86Eb5DIyYzYYREuiOtmnR9I0FjRd1oJ
my4/UWWt04bqEiDDWvGcdKOMip8qunquwDpeiDUhr2HnHhcapq9bYqHqSXSN
04oPOCZVRO3RvFCXYjoDJ+jQ7vzIH1ilHBaU0TlANtQho+fpymYA0DkBgMgD
Ad8hb8m7Ty4ulvQb+slwJb8YesjQMDJW4BY5SzlDFesJh5PA5XPmyYigvIZI
//CxHd/qjiuH1Sr3oYgDYVPme7xtRTpT0IibcFxFVSzeHFOXKVuS9h18xT8Z
5fPD+3yshYdpixJxTrCzn3Y4YXyRg4iaCxii1HgRyznd1GAClNOQv1LL9PCZ
cVA8hkpwPknG9YcyKfhqS+Xyooa5Wd5ZnJFmN/sZ7WDuJbm2wRzLHdhc0fem
xN2kxGoYpCTHALaxVMmOygto9W5BCFfSrFF5U17ByGwoULOs8J73SjTFDnUS
WtgJzVII4HbgPVdwoVXzHSI1U/Nh1QiEUadrzqS5pAzjxLabMG4tV7fU0ID4
yokNpqR6UXBUp7qpAnO7sBlK+hpfgBj6hUuIzqdoc8wgRwy685XVDytX1mbQ
jWqVVpweCWuynjRjmJ5D4d3w2jsrEHJjJsriM/iKju+JHDHvVkb3hfrzzA3l
iRSc4zBxTuabFH/D+hd1TTVZkbc8KuaSWVUOVu19LFgAVrM76cIdiqcZFCte
DVats7rfpKTAcQqOhABt0MIGFKfj1ciP0rnefN/hXVXNFzAvJWvhTETp1vMD
A+JCnJsGxLHTf3hARZH4D0FZHrX1z9XfP5zUnuHiaRp0CumGtFxzMShLT8vn
9A6Wu1iCsRE0NErawGabDJxo73tEZU+sV5zh0dgRUe3CLY837u3cP+M9651N
ff9+/nlOCtR94nXHjEq9Pb0HKnQ69MET2t27dp1zgtth8dSccC8keyTWDaA7
HB68uB2ar24oSSaRnJF1O82Y4ySdniqBmpziM397ab4D7293I7SGok/1uVEh
lMFMXOvUvRClHnSlp0EGxRBqgi5/wpW0K05BRmBLihkLvhZpLPZLxxrBHEvQ
pLWMp3+zDhp7MNSfBa5Uwq52yZemEoJwH3FhYVQYuxWnUReOk5e0RmFdzKaC
U/am2PF2wM6gC6eTq4bqToK+xTyTDMlybyDYJQ8KVDB0PS5ZkrH+7J6kBvC2
RxSPgsim9WUVmtTnQ/4YvlupUAPZMvHydvtI6NEqQrYJ9eEo3764ZwsGIfjN
Cv87ZBwf7M0nyWJpWBJdcVrKUf3RYl317tDQW2xrIPpPsTNv6rZCAlCwYZh4
80Ipz6MahLfX4vqupbckOuiCq9RUAreEX07qhjcjaElIvIjt7+HOlNylBiQ/
M85lBtCoKM4bhTGyslldXpI+7WC2MEXyzCFnKGxeg4jLJJtZ+Hl5YSY9kxva
innEM5cbbQz5HezlVbOUrCH/ytgvxYISPzFh7VCrE2AEiLsMswhaEeq6tzFe
RYEOsfW2vgGrQnCpmOigIGDfELHxePGifie2SUqryOVCSOXOsKhMq5xEeqVu
VtI/5pddEVvmjnxjjqM9QYImuU4OJouHqKznbXwNyrJQ9C6LQ+GtjFgzqhPq
2VSOYkyybtdYtIlLcXoEwW5aV164QcnOQE6SUXG2phJl9uuNXcLeoINkuSZ8
ItVHQYrhcmpWHheXpSqx5FYKRW/xOCrqdivo3mXF/NKYGEnus+ztGMi5xBAj
doxPMDHI+4w2MD/wqJHfSkyVpR5mfDXVQxsK3npAATocZzyzkuywno+x5ChB
//Chio0gF7UUAozq7TWYO5xwQ5hraO0CROolrDPoE7Dp0MxD7xLX7cwKCe6N
ZC+WNVOpX1T9e9K9+XK6HoM1aSaakyWftp1Fk5kiZRQ90bE6Sdusl1z7mSe9
U8aXN3jaHFl+mZwzuPltfVlz+qniLFfV+HreoDpZtaiKCVspkp/CHaLL4mIy
yRmfU32qvlkJmK3NWYHFDhZu2dmwV7HE6XJSqUWp1nVDaim0Eq3dnpXIIfzK
8ksLiZKN7n6qehTZcYmsgUC4yQptqI5SypUUQWpJAV9oyjGq0MmTSq56gLFI
ZY/3l2321CEs+1xK8/A8oTKvrNOyVfjuOiDUP/35UGGxg2LRTOsxnhj7RJIV
KHTNn400/8/8JuOSDpBVf7J0DEyrQe/e0u4sd43D7K+nhLbNZ3vgQM6a5U33
RCicXmJubnHJLpUxop4PJYLCXHZ+j7AqxztflTnjiJNzhBGk/qKFuaaDbzvq
ITk39QaDJcWZ7FWnDGEBr5645LDf5oQLOdGT5ZT1BjbVEdrxqAm/QnAETuJg
NCcRPGK5nKInbHQEj7i6JLppuaqpVItSlxcGdiuqBLQfqeFZLo+Kn5IroQyp
68/VmsbEmHkjtyFXMB6I7tuVBphtuXzTinnqKkp3KkXvYtpkb2R4syNjVORK
SyzrN4YzKvnARHHXSr0iu+RjjlNHXIivN62c7N4iNa3boPok9RSOW1a3Wz4H
EwZ7QRysA4/oQIR73UJPJ23oI+aXviqq5IJaIedUQ12+0574lwZ9qQt7JW/E

-->

</rfc>
