<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.20 (Ruby 3.3.5) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

<!ENTITY RFC9000 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9000.xml">
<!ENTITY I-D.ietf-moq-transport SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-moq-transport.xml">
<!ENTITY RFC9438 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml">
<!ENTITY I-D.ietf-ccwg-bbr SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-ccwg-bbr.xml">
<!ENTITY RFC6817 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6817.xml">
<!ENTITY RFC6582 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml">
<!ENTITY RFC3649 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3649.xml">
<!ENTITY I-D.irtf-iccrg-ledbat-plus-plus SYSTEM "https://bib.ietf.org/public/rfc/bibxml3/reference.I-D.irtf-iccrg-ledbat-plus-plus.xml">
<!ENTITY RFC9331 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9331.xml">
]>


<rfc ipr="trust200902" docName="draft-huitema-ccwg-c4-design-01" category="info" consensus="true" submissionType="IETF">
  <front>
    <title abbrev="C4 Design">Design of Christian's Congestion Control Code (C4)</title>

    <author initials="C." surname="Huitema" fullname="Christian Huitema">
      <organization>Private Octopus Inc.</organization>
      <address>
        <email>huitema@huitema.net</email>
      </address>
    </author>
    <author initials="S." surname="Nandakumar" fullname="Suhas Nandakumar">
      <organization>Cisco</organization>
      <address>
        <email>snandaku@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Jennings" fullname="Cullen Jennings">
      <organization>Cisco</organization>
      <address>
        <email>fluffy@iii.ca</email>
      </address>
    </author>

    <date year="2025" month="October" day="19"/>

    <area>Web and Internet Transport</area>
    
    <keyword>C4</keyword> <keyword>Congestion Control</keyword> <keyword>Realtime Communication</keyword> <keyword>Media over QUIC</keyword>

    <abstract>



<t>Christian's Congestion Control Code is a new congestion control
algorithm designed to support Real-Time applications such as
Media over QUIC. It is designed to drive towards low delays,
with good support for the "application limited" behavior
frequently found when using variable rate encoding, and
with fast reaction to congestion to avoid the "priority
inversion" happening when congestion control overestimates
the available capacity. It pays special attention to the
high jitter conditions encountered in Wi-Fi networks.
The design emphasizes simplicity and
avoids making too many assumptions about the "model" of
the network. The main control variables are the estimate
of the data rate and of the maximum path delay in the
absence of queues.</t>



    </abstract>



  </front>

  <middle>



<section anchor="introduction"><name>Introduction</name>

<t>Christian's Congestion Control Code (C4) is a new congestion control
algorithm designed to support Real-Time multimedia applications, specifically
multimedia applications using QUIC <xref target="RFC9000"/> and the Media
over QUIC transport <xref target="I-D.ietf-moq-transport"/>. These applications
require low delays, and often exhibit a variable data rate as they
alternate between high bandwidth requirements when sending reference frames
and lower bandwidth requirements when sending differential frames.
We translate that into 3 main goals:</t>

<t><list style="symbols">
  <t>Drive towards low delays (see <xref target="react-to-delays"/>),</t>
  <t>Support "application limited" behavior (see <xref target="limited"/>),</t>
  <t>React quickly to changing network conditions (see <xref target="congestion"/>).</t>
</list></t>

<t>The design of C4 is inspired by our experience using different
congestion control algorithms for QUIC,
notably Cubic <xref target="RFC9438"/>, Hystart <xref target="HyStart"/>, and BBR <xref target="I-D.ietf-ccwg-bbr"/>,
as well as the study
of delay-oriented algorithms such as TCP Vegas <xref target="TCP-Vegas"/>
and LEDBAT <xref target="RFC6817"/>. In addition, we wanted to keep the algorithm
simple and easy to implement.</t>

<t>C4 assumes that the transport stack is
capable of signaling to the congestion algorithms events such
as acknowledgements, RTT measurements, ECN signals or the detection
of packet losses. It also assumes that the congestion algorithm
controls the transport stack by setting the congestion window
(CWND) and the pacing rate.</t>

<t>C4 tracks the state of the network by keeping a small set of
variables, the main ones being 
the "nominal rate", the "nominal max RTT",
and the current state of the algorithm. The details on using and
tracking the min RTT are discussed in <xref target="react-to-delays"/>.</t>

<t>The nominal rate is the pacing rate corresponding to the most recent
estimate of the bandwidth available to the connection.
The nominal max RTT is the best estimate of the maximum RTT
that can occur on the network in the absence of queues. When we
do not observe delay jitter, this coincides with the min RTT.
In the presence of jitter, it should be the sum of the
min RTT and the maximum jitter. C4 will compute a pacing
rate as the nominal rate multiplied by a coefficient that
depends on the state of the protocol, and set the CWND for
the path to the product of that pacing rate and the max RTT.
The design of these mechanisms is
discussed in <xref target="congestion"/>.</t>
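<t>As an illustration, the control settings described above can be
sketched as follows (the variable names are ours, for illustration
only):</t>

<figure><artwork><![CDATA[
# alpha is the state dependent coefficient, e.g.,
# 1.0 when "cruising", 1.25 when "pushing"
pacing_rate = alpha * nominal_rate
cwnd = pacing_rate * nominal_max_rtt
]]></artwork></figure>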

</section>
<section anchor="react-to-delays"><name>Studying the reaction to delays</name>

<t>The current design of C4 is the result of a series of experiments.
Our initial design was to monitor delays and react to
delay increases in much the same way as
congestion control algorithms like TCP Vegas or LEDBAT:</t>

<t><list style="symbols">
  <t>monitor the current RTT and the min RTT</t>
  <t>if the current RTT sample exceeds the min RTT by more than a preset
margin, treat that as a congestion signal.</t>
</list></t>

<t>The "preset margin" is set by default to 10 ms in TCP Vegas and LEDBAT.
That was adequate when these algorithms were designed, but it can be
considered excessive in high speed low latency networks.
For the initial C4 design, we set it to the lower of 1/8th of the min RTT and 25 ms.</t>
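<t>A sketch of that initial delay test, using illustrative names:</t>

<figure><artwork><![CDATA[
margin = min(min_rtt / 8, 25ms)
if rtt_sample > min_rtt + margin:
    treat_as_congestion_signal()
]]></artwork></figure>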

<t>The min RTT itself is measured over time. The detection of congestion by comparing
delays to min RTT plus margin works well, except in two conditions:</t>

<t><list style="symbols">
  <t>if the C4 connection is competing with another connection that
does not react to delay variations, such as a connection using Cubic,</t>
  <t>if the network exhibits a lot of latency jitter, as happens on
some Wi-Fi networks.</t>
</list></t>

<t>We also know that if several connections using delay-based algorithms
compete, the competition is only fair if they all have the same
estimate of the min RTT. We handle that by using a "periodic slow down"
mechanism.</t>

<section anchor="vegas-struggle"><name>Managing Competition with Loss Based Algorithms</name>

<t>Competition between Cubic and a delay based algorithm leads to Cubic
consuming all the bandwidth and the delay based connection starving.
This phenomenon forces TCP Vegas to only be deployed in controlled
environments, in which it does not have to compete with
TCP Reno <xref target="RFC6582"/> or Cubic.</t>

<t>We handled this competition issue by using a simple detection algorithm.
If C4 detected competition with a loss based algorithm, it switched
to a "pig war" mode and stopped reacting to changes in delays -- it would
instead only react to packet losses and ECN signals. In that mode,
we used another algorithm to detect when the competition had ceased,
and switched back to the delay responsive mode.</t>

<t>In our initial deployments, we detected competition when delay based
congestion notifications led to CWND and rate
reductions for more than 3
consecutive RTTs. The assumption is that if the competing flow reacted to delay
variations, it would have reacted to the delay increases within
3 RTTs. However, that simple test caused many "false positive"
detections.</t>

<t>We refined this test to start the pig war
if we observed 4 consecutive delay-based rate reductions
and the nominal CWND was less than half the max nominal CWND
observed since the last "initial" phase, or if we observed
at least 5 reductions and the nominal CWND is less than 4/5th of
the max nominal CWND.</t>
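<t>That refined trigger can be sketched as follows (the variable names
are illustrative):</t>

<figure><artwork><![CDATA[
ratio = nominal_cwnd / max_nominal_cwnd
pig_war = (reductions >= 4 and ratio < 1/2) or
          (reductions >= 5 and ratio < 4/5)
]]></artwork></figure>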

<t>We validated this test by comparing the
ratio <spanx style="verb">CWND/MAX_CWND</spanx> for "valid" decisions, when we are simulating
a competition scenario, and "spurious" decisions, when the
"more than 3 consecutive reductions" test fires but we are
not simulating any competition:</t>

<texttable>
      <ttcol align='left'>Ratio CWND/Max</ttcol>
      <ttcol align='left'>valid</ttcol>
      <ttcol align='left'>spurious</ttcol>
      <c>Average</c>
      <c>30%</c>
      <c>75%</c>
      <c>Max</c>
      <c>49%</c>
      <c>100%</c>
      <c>Top 25%</c>
      <c>37%</c>
      <c>91%</c>
      <c>Median</c>
      <c>35%</c>
      <c>83%</c>
      <c>Bottom 25%</c>
      <c>20%</c>
      <c>52%</c>
      <c>Min</c>
      <c>12%</c>
      <c>25%</c>
      <c>&lt;50%</c>
      <c>100%</c>
      <c>20%</c>
</texttable>

<t>Note that this validation was based on simulations, and that we cannot
claim that our simulations perfectly reflect the real world. We will
discuss in <xref target="simplify"/> how these imperfections led us to change
our overall design.</t>

<t>Our initial competition exit algorithm was simple: C4 would exit the
"pig war" mode if the available bandwidth increased.</t>

</section>
<section anchor="handling-chaotic-delays"><name>Handling Chaotic Delays</name>

<t>Some Wi-Fi networks exhibit spikes in latency. These spikes are
probably what caused the delay jitter discussed in
<xref target="Cubic-QUIC-Blog"/>. We discussed them in more detail in
<xref target="Wi-Fi-Suspension-Blog"/>. We are not sure about the
mechanism behind these spikes, but we have noticed that they
mostly happen when several adjacent Wi-Fi networks are configured
to use the same frequencies and channels. In these configurations,
we expect the hidden node problem to result in some collisions.
The Wi-Fi layer 2 retransmission algorithm takes care of these
losses, but apparently uses an exponential back off algorithm
to space retransmission delays in case of repeated collisions.
When repeated collisions occur, the exponential backoff mechanism
can cause large delays. The Wi-Fi layer 2 algorithm will also
try to maintain delivery order, and subsequent packets will
be queued behind the packet that caused the collisions.</t>

<t>In our initial design, we detected the advent of such "chaotic delay jitter" by computing
a running estimate of the max RTT. We measured the max RTT observed
in each round trip, to obtain the "era max RTT". We then computed
an exponentially averaged "nominal max RTT":</t>

<figure><artwork><![CDATA[
nominal_max_rtt = (7 * nominal_max_rtt + era_max_rtt) / 8;
]]></artwork></figure>

<t>If the nominal max RTT was more than twice the min RTT, we set the
"chaotic jitter" condition. When that condition was set, we stopped
considering excess delay as an indication of congestion,
and we changed
the way we computed the "current CWND" used for the controlled
path. Instead of simply setting it to "nominal CWND", we set it
to a larger value:</t>

<figure><artwork><![CDATA[
target_cwnd = alpha*nominal_cwnd +
              (max_bytes_acked - nominal_cwnd) / 2;
]]></artwork></figure>
<t>In this formula, <spanx style="verb">alpha</spanx> is the amplification coefficient corresponding
to the current state, such as for example 1 if "cruising" or 1.25
if "pushing" (see <xref target="congestion"/>), and <spanx style="verb">max_bytes_acked</spanx> is the largest
number of bytes in flight that was successfully acknowledged since
the last initial phase.</t>

<t>The increased <spanx style="verb">target_cwnd</spanx> enabled C4 to keep sending data through
most jitter events. There is of course a risk that this increased
value will cause congestion. We limit that risk by only using half
the value of <spanx style="verb">max_bytes_acked</spanx>, and by setting a
conservative pacing rate:</t>

<figure><artwork><![CDATA[
target_rate = alpha*nominal_rate;
]]></artwork></figure>
<t>Using the pacing rate that way prevents the larger window from
causing big spikes in traffic.</t>

<t>The network conditions can evolve over time. C4 will keep monitoring
the nominal max RTT, and will reset the "chaotic jitter" condition
if nominal max RTT decreases below a threshold of 1.5 times the
min RTT.</t>

</section>
<section anchor="slowdown"><name>Monitoring the min RTT</name>

<t>Delay based algorithms rely on a correct estimate of the
min RTT. They will naturally discover a reduction in the min
RTT, but detecting an increase in the min RTT is difficult.
There are known failure modes when multiple delay based
algorithms compete, in particular the "late comer advantage".</t>

<t>In our initial design, the connections ensured that their min RTT is valid by
occasionally entering a "slowdown" period, during which they set
CWND to half the nominal value. This is similar to
the "Probe RTT" mechanism implemented in BBR, or the
"initial and periodic slowdown" proposed as extension
to LEDBAT in <xref target="I-D.irtf-iccrg-ledbat-plus-plus"/>. In our
implementation, the slowdown occurs if more than 5
seconds have elapsed since the previous slowdown, or
since the last time the min RTT was set.</t>

<t>The measurement of min RTT in the period
that follows the slowdown is considered a "clean"
measurement. If two consecutive slowdown periods were
followed by clean measurements larger than the current
min RTT, we detect an RTT change and reset the
connection. If the measurement results in the same
value as the previous min RTT, C4 continues normal
operation.</t>

<t>Some applications exhibit periods of natural slow down. This
is the case, for example, of multimedia applications, when
they only send differentially encoded frames. Natural
slowdown was detected if an application sent less than
half the nominal CWND during a period, and more than 4 seconds
had elapsed since the previous slowdown or the previous
min RTT update. The measurement that follows a natural
slowdown was also considered a clean measurement.</t>
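<t>The slowdown triggers of this initial design can be summarized as
follows (names are illustrative):</t>

<figure><artwork><![CDATA[
elapsed = now - max(last_slowdown, last_min_rtt_update)
forced_slowdown = (elapsed > 5 seconds)
natural_slowdown = (bytes_sent_in_period < nominal_cwnd / 2
                    and elapsed > 4 seconds)
# the min RTT measured in the period that follows
# either slowdown is considered "clean"
]]></artwork></figure>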

<t>A slowdown period corresponds to a reduction in offered
traffic. If multiple connections are competing for the same
bottleneck, each of these connections may experience cleaner
RTT measurements, leading to equalization of the min RTT
observed by these connections.</t>

</section>
</section>
<section anchor="simplify"><name>Simplifying the initial design</name>

<t>After extensive testing of our initial design, we felt we had
drifted away from our initial "simplicity" tenet. The algorithms
used to detect "pig war" and "chaotic jitter" were difficult
to tune, and despite our efforts they resulted in many
false positives or false negatives. The "slowdown" algorithm
made C4 less friendly to "real time" applications that
prefer using stable estimated rates. These algorithms
interacted with each other in ways that were sometimes
hard to predict.</t>

<section anchor="chaotic-jitter-and-rate-control"><name>Chaotic jitter and rate control</name>

<t>As we observed the chaotic jitter behavior, we came to the
conclusion that only controlling the CWND did not work well.
We had a dilemma: either use a small CWND to guarantee that
RTTs remain small, or use a large CWND so that transmission
would not stall during peaks in jitter. But if we use a large
CWND, we need some form of pacing to prevent senders from
sending a large amount of packets too quickly. And then we
realized that if we do have to set a pacing rate, we can simplify
the algorithm.</t>

<t>Suppose that we compute a pacing rate that matches the network
capacity, just like BBR does. Then, to a first approximation,
setting the CWND too high does not matter much.
The number of bytes in flight will be limited by the product
of the pacing rate and the actual RTT. We are thus free to
set the CWND to a large value.</t>

</section>
<section anchor="monitoring-the-nominal-max-rtt"><name>Monitoring the nominal max RTT</name>

<t>The observation of chaotic jitter led us to the idea of monitoring
the maximum RTT. There is some difficulty here, because the
observed RTT has three components:</t>

<t><list style="symbols">
  <t>The minimum RTT in the absence of jitter</t>
  <t>The jitter caused by access networks such as Wi-Fi</t>
  <t>The delays caused by queues in the network</t>
</list></t>

<t>We cannot merely use the maximum value of the observed RTT,
because of the queuing delay component. In pushing periods, we
are going to use a data rate slightly higher than the measured
value. This will create a bit of queuing, pushing the queuing
delay component ever higher -- and eventually resulting in
"buffer bloat".</t>

<t>To avoid that, we can schedule periodic intervals in which the
endpoint deliberately sends data slower than the
rate estimate. This enables us to get a "clean" measurement
of the Max RTT.</t>

<t>If we are dealing with jitter, the clean Max RTT measurements
will include whatever jitter was happening at the time of the
measurement. It is not sufficient to measure the Max RTT once;
we must keep the maximum value of a long enough series of measurements
to capture the maximum jitter that the network can cause. But
we are also aware that jitter conditions change over time, so
we have to make sure that if the jitter diminishes, the
Max RTT also diminishes.</t>

<t>We solved that by measuring the Max RTT during the "recovery"
periods that follow every "push". These periods occur about every 6 RTT,
giving us reasonably frequent measurements. During these periods, we
try to ensure clean measurements by
setting the pacing rate a bit lower than the nominal rate -- 6.25%
slower in our initial trials. We apply the following algorithm:</t>

<t><list style="symbols">
  <t>compute the <spanx style="verb">max_rtt_sample</spanx> as the maximum RTT observed for
packets sent during the recovery period.</t>
  <t>if the <spanx style="verb">max_rtt_sample</spanx> is more than <spanx style="verb">max_jitter</spanx> above
<spanx style="verb">running_min_rtt</spanx>, reset it to <spanx style="verb">running_min_rtt + max_jitter</spanx>
(by default, <spanx style="verb">max_jitter</spanx> is set to 250ms).</t>
  <t>if <spanx style="verb">max_rtt_sample</spanx> is larger than <spanx style="verb">nominal_max_rtt</spanx>, set
<spanx style="verb">nominal_max_rtt</spanx> to that value.</t>
  <t>else, set <spanx style="verb">nominal_max_rtt</spanx> to
<spanx style="verb">gamma*max_rtt_sample + (1-gamma)*nominal_max_rtt</spanx>.
The <spanx style="verb">gamma</spanx> coefficient is set to <spanx style="verb">1/8</spanx> in our initial trials.</t>
</list></t>
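<t>The steps above can be put together as follows (illustrative
pseudocode; <spanx style="verb">max_jitter</spanx> defaults to 250 ms,
<spanx style="verb">gamma</spanx> to 1/8):</t>

<figure><artwork><![CDATA[
max_rtt_sample = min(max_rtt_sample,
                     running_min_rtt + max_jitter)
if max_rtt_sample > nominal_max_rtt:
    nominal_max_rtt = max_rtt_sample
else:
    nominal_max_rtt = gamma * max_rtt_sample +
                      (1 - gamma) * nominal_max_rtt
]]></artwork></figure>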

<section anchor="preventing-runaway-max-rtt"><name>Preventing Runaway Max RTT</name>

<t>Computing Max RTT the way we do bears the risk of a "runaway increase"
of Max RTT:</t>

<t><list style="symbols">
  <t>C4 notices high jitter, increases the Nominal Max RTT accordingly, and sets CWND to the
product of the increased Nominal Max RTT and the Nominal Rate.</t>
  <t>If Nominal rate is above the actual link rate, C4 will fill the pipe, and create a queue.</t>
  <t>On the next measurement, C4 finds that the max RTT has increased because of the queue,
interprets that as "more jitter", increases Max RTT and fills the queue some more.</t>
  <t>Repeat until the queue becomes so large that packets are dropped and cause a
congestion event.</t>
</list></t>

<t>Our proposed algorithm limits the Max RTT to at most <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but that is still risky. If congestion causes queues, the running measurements of <spanx style="verb">min RTT</spanx>
will increase, causing the algorithm to allow for corresponding increases in <spanx style="verb">max RTT</spanx>.
This would not happen as fast as without the capping to <spanx style="verb">running_min_rtt + max_jitter</spanx>,
but it would still increase.</t>

</section>
<section anchor="initial-phase-and-max-rtt"><name>Initial Phase and Max RTT</name>

<t>During the initial phase, the nominal max RTT and the running min RTT are
set to the first RTT value that is measured. This is not great in the presence
of high jitter, which causes C4 to exit the Initial phase early, leaving
the nominal rate way too low. If C4 is competing on the Wi-Fi link
against another connection, it might remain stalled at this low data rate.</t>

<t>We considered updating the Max RTT during the Initial phase, but that
prevents any detection of delay based congestion. The Initial phase
would continue until path buffers are full, a classic case of buffer
bloat. Instead, we adopted a simple workaround:</t>

<t><list style="symbols">
  <t>Maintain a flag "initial_after_jitter", initialized to 0.</t>
  <t>Get a measure of the max RTT after exit from initial.</t>
  <t>If C4 detects a "high jitter" condition and the
"initial_after_jitter" flag is still 0, set the
flag to 1 and re-enter the "initial" state.</t>
</list></t>

<t>Empirically, we detect high jitter in that case if the "running min RTT"
is less than 2/5th of the "nominal max RTT".</t>
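<t>The workaround can be sketched as follows (names are illustrative):</t>

<figure><artwork><![CDATA[
# evaluated after a max RTT measurement
# following the exit from Initial
if (running_min_rtt < (2/5) * nominal_max_rtt
        and initial_after_jitter == 0):
    initial_after_jitter = 1
    enter_initial_state()
]]></artwork></figure>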

</section>
</section>
<section anchor="monitor-rate"><name>Monitoring the nominal rate</name>

<t>The nominal rate is measured on each acknowledgement by dividing
the number of bytes acknowledged since the packet was sent
by the RTT measured with the acknowledgement of the packet,
protecting against delay jitter as explained in
<xref target="rate-measurement"/>, without additional filtering
as discussed in <xref target="not-filtering"/>.</t>

<t>We only use the measurements to increase the nominal rate,
replacing the current value if we observe a greater filtered measurement.
This is a deliberate choice, as decreases in measurements are ambiguous.
They can result from the application being rate limited, or from
measurement noise. Following those decreases causes the rate to drift down
randomly over time, which can be detrimental for rate limited applications.
If the network conditions have changed, the rate will
be reduced if congestion signals are received, as explained
in <xref target="congestion"/>.</t>

<section anchor="rate-measurement"><name>Rate measurement</name>

<t>This simple algorithm protects against underestimating the
delay by observing that
delivery rates cannot be larger than the rate at which the
packets were sent, and thus keeping the lower of the estimated
receive rate and the send rate.</t>

<t>The algorithm uses four input variables:</t>

<t><list style="symbols">
  <t><spanx style="verb">current_time</spanx>: the time when the acknowledgment is received.</t>
  <t><spanx style="verb">send_time</spanx>: the time at which the highest acknowledged
packet was sent.</t>
  <t><spanx style="verb">bytes_acknowledged</spanx>: the number of bytes acknowledged
 by the receiver between <spanx style="verb">send_time</spanx> and <spanx style="verb">current_time</spanx></t>
  <t><spanx style="verb">first_sent</spanx>: the time at which the packet containing
the first acknowledged bytes was sent.</t>
</list></t>

<t>The computation goes as follows:</t>

<figure><artwork><![CDATA[
ack_delay = current_time - send_time
send_delay = send_time - first_sent
measured_rate = bytes_acknowledged /
                max(ack_delay, send_delay)
]]></artwork></figure>

<t>This is in line with the specification of rate measurement
in <xref target="I-D.ietf-ccwg-bbr"/>.</t>

<t>We use the data rate measurement to update the
nominal rate, but only if not congested (see <xref target="congestion-bounce"/>)</t>

<figure><artwork><![CDATA[
if measured_rate > nominal_rate and not congested:
    nominal_rate = measured_rate
]]></artwork></figure>

</section>
<section anchor="congestion-bounce"><name>Avoiding Congestion Bounce</name>

<t>In our early experiments, we observed a "congestion bounce"
that happened as follows:</t>

<t><list style="symbols">
  <t>congestion is detected, the nominal rate is reduced, and
C4 enters recovery.</t>
  <t>packets sent at the data rate that caused the congestion
continue to be acknowledged during recovery.</t>
  <t>if enough packets are acknowledged, they will cause
a rate measurement close to the previous nominal rate.</t>
  <t>if C4 accepts this new nominal rate, the flow will
bounce back to the previous transmission rate, erasing
the effects of the congestion signal.</t>
</list></t>

<t>Since we do not want that to happen, we specify that the
nominal rate cannot be updated during congested periods,
defined as:</t>

<t><list style="symbols">
  <t>C4 is in "recovery" state,</t>
  <t>The recovery state was entered following a congestion signal,
or a congestion signal was received since the beginning
of the recovery era.</t>
</list></t>
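<t>The resulting guard on rate updates can be sketched as:</t>

<figure><artwork><![CDATA[
congested = in_recovery and
            congestion_signal_in_recovery_era
if measured_rate > nominal_rate and not congested:
    nominal_rate = measured_rate
]]></artwork></figure>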

</section>
<section anchor="not-filtering"><name>Not filtering the measurements</name>

<t>There is some noise in the measurements of the data rate, and we
protect against that noise by retaining the maximum of the
<spanx style="verb">ack_delay</spanx> and the <spanx style="verb">send_delay</spanx>. During early experiments,
we considered smoothing the measurements for eliminating that
noise.</t>

<t>The best filter that we could define operated by
smoothing the inverse of the data rate, the "time per byte sent".
This works better because the data rate measurements are the
quotient of the number of bytes received by the delay.
The number of bytes received is
easy to assert, but the measurements of the delay are very noisy.
Instead of trying to average the data rates, we can average
their inverse, i.e., the quotients of the delay by the
bytes received, the times per byte. Then we can obtain
smoothed data rates as the inverse of these times per byte,
effectively computing a harmonic average of measurements
over time. We could for example 
compute an exponentially weighted moving average
of the time per byte, and use the inverse of that
as a filtered measurement of the data rate.</t>
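<t>For reference, such a filter -- which C4 does not adopt -- could be
sketched as:</t>

<figure><artwork><![CDATA[
# EWMA of the inverse rate, the "time per byte"
time_per_byte = (7 * time_per_byte +
                 1 / measured_rate) / 8
filtered_rate = 1 / time_per_byte
]]></artwork></figure>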

<t>We do not specify any such filter in C4, because while
filtering would reduce the noise, it would also delay
any observation, resulting in a somewhat sluggish
response to changes in network conditions. Experience
shows that the precaution of using the max of the
ack delay and the send delay as a divider is sufficient
for stable operation, and does not cause the response
delays that filtering would.</t>

</section>
</section>
</section>
<section anchor="competition-with-other-algorithms"><name>Competition with other algorithms</name>

<t>We saw in <xref target="vegas-struggle"/> that delay based algorithms required
a special "escape mode" when facing competition from algorithms
like Cubic. Relying on pacing rate and max RTT instead of CWND
and min RTT makes this problem much simpler. The measured max RTT
will naturally increase as algorithms like Cubic cause buffer
bloat and increased queues. Instead of being shut down,
C4 will just keep increasing its max RTT and thus its running
CWND, automatically matching the other algorithm's values.</t>

<t>We verified that behavior in a number of simulations. We also
verified that when the competition ceases, C4 will progressively
drop its nominal max RTT, returning to situations with very low
queuing delays.</t>

<section anchor="no-need-for-slowdowns"><name>No need for slowdowns</name>

<t>The fairness of delay based algorithms depends on all competing
flows having similar estimates of the min RTT. As discussed
in <xref target="slowdown"/>, this ends up creating variants of the
<spanx style="verb">latecomer advantage</spanx> issue, requiring a periodic slowdown
mechanism to ensure that all competing flows have a chance to
update the RTT value.</t>

<t>This problem is caused by the default algorithm of setting
min RTT to the minimum of all RTT sample values since the beginning 
of the connection. Flows that started more recently compute
that minimum over a shorter period, and thus discover a larger
min RTT than older flows. This problem does not exist with
max RTT, because all competing flows see the same max RTT
value. The slowdown mechanism is thus not necessary.</t>

<t>Removing the need for a slowdown mechanism allows for a
simpler protocol, better suited to real time communications.</t>

</section>
</section>
<section anchor="congestion"><name>React quickly to changing network conditions</name>

<t>Our focus is on maintaining low delays, and thus reacting
quickly to changes in network conditions. We can detect some of these
changes by monitoring the RTT and the data rate, but
experience with the early version of BBR showed that
completely ignoring packet losses can lead to very unfair
competition with Cubic. The L4S effort is promoting the use
of ECN feedback by network elements (see <xref target="RFC9331"/>),
which could well end up detecting congestion and queues
more precisely than the monitoring of end-to-end delays.
C4 will thus detect changing network conditions by monitoring
3 congestion control signals:</t>

<t><list style="numbers" type="1">
  <t>Excessive increase of measured RTT (above the nominal Max RTT),</t>
  <t>Excessive rate of packet losses (but not mere Probe Time Out, see <xref target="no-pto"/>),</t>
  <t>Excessive rate of ECN/CE marks</t>
</list></t>

<t>If any of these signals is detected, C4 enters a "recovery"
state. On entering recovery, C4 reduces the <spanx style="verb">nominal_rate</spanx>
by a factor "beta":</t>

<figure><artwork><![CDATA[
    // on congestion detected:
    nominal_rate = (1-beta)*nominal_rate
]]></artwork></figure>
<t>The coefficient <spanx style="verb">beta</spanx> differs depending on the nature of the congestion
signal. For packet losses, it is set to <spanx style="verb">1/4</spanx>, similar to the
value used in Cubic. For delay based signals, it is proportional to the
difference between the measured RTT and the target RTT divided by
the acceptable margin, capped to <spanx style="verb">1/4</spanx>. If the signal
is an ECN/CE rate, we may
use a proportional reduction coefficient in line with
<xref target="RFC9331"/>, again capped to <spanx style="verb">1/4</spanx>.</t>
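<t>The selection of <spanx style="verb">beta</spanx> can be sketched as follows
(the proportionality constant <spanx style="verb">k</spanx> and the variable
names are illustrative):</t>

<figure><artwork><![CDATA[
if signal == packet_loss:
    beta = 1/4
elif signal == excess_delay:
    beta = min(k * (rtt_sample - target_rtt) / margin, 1/4)
else:  # ECN/CE marks
    beta = min(ce_based_reduction, 1/4)
nominal_rate = (1 - beta) * nominal_rate
]]></artwork></figure>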

<t>During the recovery period, target CWND and pacing rate are set
to a fraction of the "nominal rate" multiplied by the
"nominal max RTT".
The recovery period ends when the first packet
sent after entering recovery is acknowledged. Congestion
signals are processed when entering recovery; further signals
are ignored until the end of recovery.</t>

<t>Network conditions may change for the better or for the worse. Worsening 
is detected through congestion signals, but increases can only be detected
by trying to send more data and checking whether the network accepts it.
Different algorithms have handled this in two ways: pursuing regular increases of
CWND until congestion finally occurs, like for example the "congestion
avoidance" phase of TCP Reno; or periodically probing the network
by sending at a higher rate, like the Probe Bandwidth mechanism of
BBR. C4 adopts the periodic probing approach, in particular
because it is a better fit for variable rate multimedia applications
(see details in <xref target="limited"/>).</t>

<section anchor="no-pto"><name>Do not react to Probe Time Out</name>

<t>QUIC normally detects losses by observing gaps in the sequence of acknowledged
packets. That is a robust signal. QUIC will also inject "probe time out" (PTO)
packets if the PTO timer elapses before the last sent packet has been acknowledged.
This is not a robust congestion signal, because delay jitter may also cause
PTO timeouts. When testing in "high jitter" conditions, we realized that we should
not change the state of C4 for losses detected solely by timers, and
only react to those losses that are detected by gaps in acknowledgements.</t>

</section>
<section anchor="rate-update"><name>Update the Nominal Rate after Pushing</name>

<t>C4 configures the transport with a larger rate and CWND
than the nominal values during "pushing" periods.
The peer will acknowledge the data sent during these periods in
the following round trip.</t>

<t>When we receive an ACK for a newly acknowledged packet,
we update the nominal rate as explained in <xref target="monitor-rate"/>.</t>

<t>This strategy is effectively a form of "make before break".
The pushing
only increases the rate by a fraction of the nominal values,
and only lasts for one round trip. That limited increase is not
expected to grow the queues by more than a small
fraction of the bandwidth-delay product. It might cause a
slight increase of the measured RTT for a short period, or
perhaps cause some ECN signalling, but it should not cause packet
losses -- unless competing connections have already built large queues.
If no extra
capacity was available, C4 does not increase the nominal CWND and
the connection continues with the previous value.</t>
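<t>A minimal sketch of this update, assuming a 25% push factor and
illustrative helper names (none of them are the implementation's API):</t>

<figure><artwork><![CDATA[
```python
# Hedged sketch of the "make before break" pushing update;
# alpha and all names here are illustrative assumptions.

def pushing_config(nominal_rate, nominal_max_rtt, alpha=0.25):
    # During pushing, the transport is configured above the nominal
    # values: pacing rate in bytes/s, CWND as rate * max RTT.
    rate = (1 + alpha) * nominal_rate
    cwnd = rate * nominal_max_rtt
    return rate, cwnd

def on_push_acked(nominal_rate, measured_rate):
    # The nominal rate is raised only if the acknowledgements show
    # that the network absorbed the extra traffic; otherwise the
    # connection continues with the previous value.
    return max(nominal_rate, measured_rate)
```
]]></artwork></figure>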

</section>
</section>
<section anchor="fairness"><name>Driving for fairness</name>

<t>Many protocols enforce fairness by tuning their behavior so
that large flows become less aggressive than smaller ones, either
by trying less hard to increase their bandwidth or by reacting
more to congestion events. We considered adopting a similar
strategy for C4.</t>

<t>The aggressiveness of C4 is driven by several considerations:</t>

<t><list style="symbols">
  <t>the frequency of the "pushing" periods,</t>
  <t>the coefficient <spanx style="verb">alpha</spanx> used during pushing,</t>
  <t>the coefficient <spanx style="verb">beta</spanx> used during response to congestion events,</t>
  <t>the delay threshold above a nominal value to detect congestion,</t>
  <t>the ratio of packet losses considered excessive,</t>
  <t>the ratio of ECN marks considered excessive.</t>
</list></t>

<t>We clearly want some or all of these parameters to depend
on how much resource the flow is using.
There are known limits to these strategies. For example,
consider TCP Reno, in which the growth rate of the CWND during the
"congestion avoidance" phase is inversely proportional to its size.
This drives very good long-term fairness, but in practice
it prevents TCP Reno from operating well on high speed or
high delay connections, as discussed in the "problem description"
section of <xref target="RFC3649"/>. In that RFC, Sally Floyd proposed
a growth rate inversely proportional to the
logarithm of the CWND, which would not be so drastic.</t>

<t>In the initial design, we proposed making the frequency of the
pushing periods inversely proportional to the logarithm of the
CWND, but that is in tension with our estimation of
the max RTT, which requires frequent "recovery" periods.
We would not want the max RTT estimate to work less well for
high speed connections! We resolved the tension in favor of
reliable max RTT estimates, and fixed at 4 the number
of cruising round trips between Recovery and Pushing. The whole
cycle takes about 6 RTT.</t>

<t>We also reduced the default rate increase during Pushing to
6.25%, which means that the default cycle is more or less on
par with the aggressiveness of RENO when
operating at low bandwidth (lower than 34 Mbps).</t>

<section anchor="absence-of-constraints-is-unfair"><name>Absence of constraints is unfair</name>

<t>Once we fixed the push frequency and the default increase rate, we were
left with responses that were mostly proportional to the amount
of resource used by a connection. Such a design makes the resource sharing
very dependent on initial conditions. We saw simulations where,
after some initial period, one of two competing connections on
a 20 Mbps path might settle at a 15 Mbps rate and the other at 5 Mbps.
Both connections would react to a congestion event by dropping
their bandwidth by 25%, to 11.25 or 3.75 Mbps. And then, once the condition
eased, both would increase their data rate by the same amount. If
everything went well, the two connections would share the bandwidth
without exceeding it, and the situation would be very stable --
but also very much unfair.</t>

<t>We also had some simulations in which a first connection would
grab all the available bandwidth, and a latecomer connection
would struggle to get any bandwidth at all. The analysis
showed that the second connection was
exiting the initial phase early, after encountering either
excess delay or excess packet loss. The first
connection was saturating the path, any additional traffic
caused queuing or losses, and the second connection had
no chance to grow.</t>

<t>This "second comer shut down" effect happend particularly often
on high jitter links. The established connections had tuned their
timers or congestion window to account for the high jitter. The
second connection was basing their timers on their first
measurements, before any of the big jitter events had occured.
This caused an imbalance between the first connection, which
expected large RTT variations, and the second, which did not
expect them yet.</t>

<t>These shutdown effects happened in simulations with the first
connection using Cubic, BBR, or C4. We had to design a response,
and we first turned to making the response to excess delay or
packet loss a function of the data rate of the flow.</t>

</section>
<section anchor="introducing-a-sensitivity-curve"><name>Introducing a sensitivity curve</name>

<t>In our second design, we attempted to fix the unfairness and
shutdown effects by introducing a sensitivity curve,
computing a "sensitivity" as a function of the flow data
rate. Our first implementation is simple:</t>

<t><list style="symbols">
  <t>set sensitivity to 0 if the data rate is lower than 50,000 B/s,</t>
  <t>interpolate linearly between 0 and 0.92 for rates
between 50,000 and 1,000,000 B/s,</t>
  <t>interpolate linearly between 0.92 and 1 for rates
between 1,000,000 and 10,000,000 B/s,</t>
  <t>set sensitivity to 1 if the data rate is higher than
10,000,000 B/s.</t>
</list></t>

<t>The sensitivity index is then used to set the value of delay and
loss thresholds. For the delay threshold, the rule is:</t>

<figure><artwork><![CDATA[
    delay_fraction = 1/16 + (1 - sensitivity)*3/16
    delay_threshold = min(25ms, delay_fraction*nominal_max_rtt)
]]></artwork></figure>

<t>For the loss threshold, the rule is:</t>

<figure><artwork><![CDATA[
loss_threshold = 0.02 + 0.50 * (1-sensitivity);
]]></artwork></figure>
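<t>Putting the curve and the two thresholds together gives the following
sketch, with rates in bytes per second and delays in seconds; the function
names are ours, not the implementation's:</t>

<figure><artwork><![CDATA[
```python
# Hedged sketch of the sensitivity curve and derived thresholds,
# using the values quoted in the text; names are illustrative.

def sensitivity(rate_bps):
    if rate_bps <= 50_000:
        return 0.0
    if rate_bps <= 1_000_000:
        # linear from 0 to 0.92 between 50,000 and 1,000,000 B/s
        return 0.92 * (rate_bps - 50_000) / 950_000
    if rate_bps <= 10_000_000:
        # linear from 0.92 to 1 between 1,000,000 and 10,000,000 B/s
        return 0.92 + 0.08 * (rate_bps - 1_000_000) / 9_000_000
    return 1.0

def delay_threshold(rate_bps, nominal_max_rtt):
    delay_fraction = 1/16 + (1 - sensitivity(rate_bps)) * 3/16
    return min(0.025, delay_fraction * nominal_max_rtt)  # 25 ms cap

def loss_threshold(rate_bps):
    return 0.02 + 0.50 * (1 - sensitivity(rate_bps))
```
]]></artwork></figure>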

<t>This very simple change allowed us to stabilize the results. In our
competition tests we see resources shared almost equitably between
C4 connections, and reasonably between C4 and Cubic or C4 and BBR.
We no longer observe the shutdown effects that we saw before.</t>

<t>There is no doubt that the current curve will have to be refined. We have
a couple of tests in our test suite with total capacity higher than
20 Mbps, and for those tests the dependency on initial conditions remains.
We will revisit the definition of the curve, probably to have the sensitivity
follow the logarithm of the data rate.</t>

</section>
<section anchor="cascade"><name>Cascade of Increases</name>

<t>We sometimes encounter networks in which the available bandwidth changes rapidly.
For example, when a competing connection stops, the available capacity may double.
With low Earth orbit satellite constellations (LEO), it appears
that ground stations constantly check availability of nearby satellites, and
switch to a different satellite every 10 or 15 seconds depending on the
constellation (see <xref target="ICCRG-LEO"/>), with the bandwidth jumping from 10 Mbps to
65 Mbps.</t>

<t>Because we aim for fairness with RENO or Cubic, the cycle of recovery, cruising
and pushing will only result in slow increases, maybe 6.25% after 6 RTT.
This means we would only double the bandwidth after about 68 RTT, or increase
from 10 to 65 Mbps after 185 RTT -- by which time the LEO station might
have connected to a different orbiting satellite. To go faster, we implement
a "cascade": if the previous push at 6.25% was successful, the next
push will use 25% (see <xref target="variable-pushing"/>). If three successive pushes
all result in increases of the
nominal rate, C4 will reenter the "startup" mode, during which each RTT
can result in a 100% increase of the rate and CWND.</t>
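<t>The 68 RTT figure can be checked numerically, and the cascade itself fits
in a few lines. This is a sketch under our own naming, not the implementation:</t>

<figure><artwork><![CDATA[
```python
import math

# Doubling time without the cascade: one 6.25% increase per
# 6-RTT cycle (checks the "about 68 RTT" figure).
rtts_to_double = 6 * math.log(2) / math.log(1.0625)

def next_push_alpha(last_push_succeeded):
    # Cascade step: a successful 6.25% push escalates the next
    # push to 25%; otherwise stay at 6.25%.
    return 0.25 if last_push_succeeded else 0.0625

def should_reenter_startup(push_results):
    # Three consecutive successful pushes signal that the
    # underlying network conditions have changed.
    return len(push_results) >= 3 and all(push_results[-3:])
```
]]></artwork></figure>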

</section>
</section>
<section anchor="limited"><name>Supporting Application Limited Connections</name>

<t>C4 is specially designed to support multimedia applications,
which very often operate in application limited mode.
After testing and simulating application limited workloads,
we incorporated a number of features.</t>

<t>The first feature is the design decision to only lower the nominal
rate if congestion is detected. This is in contrast with the BBR design,
in which the estimate of the bottleneck bandwidth is also lowered
if the bandwidth measured after a "probe bandwidth" attempt is
lower than the current estimate while the connection was not
"application limited". We found that detection of the application
limited state was somewhat error prone. Occasional errors result
in a spurious reduction of the estimate of the bottleneck bandwidth.
These errors can accumulate over time, causing the bandwidth
estimate to "drift down" and the multimedia experience to suffer.
Our strategy of only reducing the nominal values in
reaction to congestion notifications greatly reduces that risk.</t>

<t>The second feature is the "make before break" nature of the rate
updates discussed in <xref target="rate-update"/>. This reduces the risk
of using rates that are too large and would cause queues or losses,
and thus makes C4 a good choice for multimedia applications.</t>

<t>C4 adds two more features to handle multimedia
applications well: coordinated pushing (see <xref target="coordinated-pushing"/>),
and variable pushing rate (see <xref target="variable-pushing"/>).</t>

<section anchor="coordinated-pushing"><name>Coordinated Pushing</name>

<t>As stated in <xref target="fairness"/>, the connection will remain in "cruising"
state for a specified interval, and then move to "pushing". This works well
when the connection is almost saturating the network path, but not so
well for a media application that uses little bandwidth most of the
time, and only needs more bandwidth when it is refreshing the state
of the media encoders and sending new "reference" frames. If that
happens, pushing will only be effective if the pushing interval
coincides with the sending of these reference frames. If pushing
happens during an application limited period, there will be no data to
push with and thus no chance of increasing the nominal rate and CWND.
If the reference frames are sent outside of a pushing interval, the
rate and CWND will be kept at the nominal value.</t>

<t>To address that issue, one could imagine sending "filler" traffic during
the pushing periods. We tried that in simulations, and the drawback became
obvious. The filler traffic would sometimes cause queues and packet
losses, which degrade the quality of the multimedia experience.
We could reduce this risk of packet losses by sending redundant traffic,
for example creating the additional traffic using a forward error
correction (FEC) algorithm, so that individual packet losses are
immediately corrected. However, this is complicated, and FEC does
not always protect against long batches of losses.</t>

<t>C4 uses a simpler solution. When the time comes to enter pushing, it
checks whether the connection is "application limited", which is
simply defined as testing whether the application sent a "nominal CWND"
worth of data during the previous interval. If it is, C4 will remain
in the cruising state until the application finally sends more data, and
will only enter the pushing state when the last period was
not application limited.</t>
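<t>The gate into pushing can be sketched as follows; the helper names are
illustrative assumptions, not the implementation's:</t>

<figure><artwork><![CDATA[
```python
# Hedged sketch of the app-limited check that gates entry into
# pushing; all names are illustrative.

def was_app_limited(bytes_sent_last_interval, nominal_cwnd):
    # The last interval is "application limited" if the application
    # sent less than a nominal CWND worth of data during it.
    return bytes_sent_last_interval < nominal_cwnd

def next_state(cruising_rtts, bytes_sent_last_interval, nominal_cwnd):
    # Remain in cruising until at least 4 RTTs have elapsed and the
    # last interval was not application limited.
    if (cruising_rtts >= 4 and
            not was_app_limited(bytes_sent_last_interval, nominal_cwnd)):
        return "pushing"
    return "cruising"
```
]]></artwork></figure>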

</section>
<section anchor="variable-pushing"><name>Variable Pushing Rate</name>

<t>C4 tests for available bandwidth at regular pushing intervals
(see <xref target="fairness"/>), during which the rate and CWND are set 25% above
the nominal values. This mimics what BBR
does, but may be less than ideal for real time applications.
When in the pushing state, the application is allowed to send
more data than the nominal CWND, which causes temporary queues
and degrades the experience somewhat. On the other hand, not pushing
at all would not be a good option, because the connection could
end up stuck using only a fraction of the available
capacity. We thus have to find a compromise between operating at
low capacity and risking building queues.</t>

<t>We manage that compromise by adopting a variable pushing rate:</t>

<t><list style="symbols">
  <t>If pushing at 25% did not result in a significant increase of
the nominal rate, the next pushing will happen at 6.25%</t>
  <t>If pushing at 6.25% did result in some increase of the nominal CWIN,
the next pushing will happen at 25%, otherwise it will
remain at 6.25%</t>
</list></t>

<t>As explained in <xref target="cascade"/>, if three consecutive pushing attempts
result in significant increases, C4 detects that the underlying network
conditions have changed, and will reenter the startup state.</t>

<t>The "significant increase" mentioned above is a matter of debate.
Even if capacity is available,
increasing the send rate by 25% does not always result in a 25%
increase of the acknowledged rate. Delay jitter, for example,
may result in lower measurement. We initially computed the threshold
for detecting "significant" increase as 1/2 of the increase in
the sending rate, but multiple simulation shows that was too high and
and caused lower performance. We now set that threshold to 1/4 of the
increase in he sending rate.</t>
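<t>As a sketch, the success test can be written as follows (the function
name is ours):</t>

<figure><artwork><![CDATA[
```python
def push_succeeded(rate_before, rate_after, push_fraction):
    # A push is "significant" if the measured rate increase reaches
    # 1/4 of the attempted increase in the sending rate.
    attempted_increase = push_fraction * rate_before
    measured_increase = rate_after - rate_before
    return measured_increase >= attempted_increase / 4
```
]]></artwork></figure>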

</section>
<section anchor="pushing-rate-and-cascades"><name>Pushing rate and Cascades</name>

<t>The choice of a 25% push rate was motivated by discussions of
the BBR design. Pushing has two parallel functions: discovering the available
capacity, if any, and pushing back against other connections
in case of competition. Consider for example competition with Cubic.
The Cubic connection will only back off if it observes packet losses,
which typically happen when the bottleneck buffers are full. Pushing
at a high rate increases the chance of building queues,
overfilling the buffers, causing losses, and thus causing Cubic to back off.
Pushing at a lower rate like 6.25% would not have that effect, and C4
would keep using a lower share of the network. This is why we always
push at 25% in the "pig war" mode.</t>

<t>The computation of the interval between pushes is tied to the need to
compete nicely, and follows the general idea that
the average growth rate should mimic that of RENO or Cubic in the
same circumstances. If we pick a lower push rate, such as 6.25% or
maybe 12.5%, we might be able to use shorter intervals. This could be
a nice compromise: in normal operation, push frequently, but at a
low rate. This would not create large queues or disturb competing
connections, but it would let C4 discover capacity more quickly. Then,
we could use the "cascade" algorithm to push at a higher rate,
and then maybe switch to startup mode if a lot of capacity is
available. This is something that we intend to test, but have not
implemented yet.</t>

</section>
</section>
<section anchor="state-machine"><name>State Machine</name>

<t>The state machine for C4 has the following states:</t>

<t><list style="symbols">
  <t>"startup": the initial state, during which the CWND is
set to twice the "nominal_CWND". The connection
exits startup if the "nominal_cwnd" does not
increase for 3 consecutive round trips. When the
connection exits startup, it enters "recovery".</t>
  <t>"recovery": the connection enters that state after
"startup", "pushing", or a congestion detection in
a "cruising" state. It remains in that state for
at least one round trip, until the first packet sent
in "recovery" is acknowledged. Once that happens,
the connection goes back
to "startup" if the last 3 pushing attemps have resulted
in increases of "nominal rate", or enters "cruising"
otherwise.</t>
  <t>"cruising": the connection is sending using the
"nominal_rate" and "nominal_max_rtt" value. If congestion is detected,
the connection exits cruising and enters
"recovery" after lowering the value of
"nominal_cwnd".
Otherwise, the connection will
remain in the "cruising" state until at least 4 RTTs have
elapsed and the connection is not "app limited". At that
point, it enters "pushing".</t>
  <t>"pushing": the connection is using a rate and CWND 25%
larger than "nominal_rate" and "nominal_CWND".
It remains in that state
for one round trip, i.e., until the first packet
sent while "pushing" is acknowledged. At that point,
it enters the "recovery" state.</t>
</list></t>

<t>These transitions are summarized in the following state
diagram.</t>

<figure><artwork><![CDATA[
                    Start
                      |
                      v
                      +<-----------------------+
                      |                        |
                      v                        |
                 +----------+                  |
                 | Startup  |                  |
                 +----|-----+                  |
                      |                        |
                      v                        |
                 +------------+                |
  +--+---------->|  Recovery  |                |
  ^  ^           +----|---|---+                |
  |  |                |   |     Rapid Increase |
  |  |                |   +------------------->+
  |  |                |
  |  |                v
  |  |           +----------+
  |  |           | Cruising |
  |  |           +-|--|-----+
  |  | Congestion  |  |
  |  +-------------+  |
  |                   |
  |                   v
  |              +----------+
  |              | Pushing  |
  |              +----|-----+
  |                   |
  +<------------------+

]]></artwork></figure>
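<t>As a cross-check of the diagram, the transitions can be sketched as a
lookup table; the event names are ours, chosen to match the prose above:</t>

<figure><artwork><![CDATA[
```python
# Hedged sketch of the C4 state machine; event names are
# illustrative labels for the transitions described above.

TRANSITIONS = {
    ("startup", "no_growth_3_rtt"): "recovery",
    ("recovery", "first_packet_acked"): "cruising",
    ("recovery", "rapid_increase"): "startup",
    ("cruising", "congestion"): "recovery",
    ("cruising", "4_rtt_not_app_limited"): "pushing",
    ("pushing", "first_packet_acked"): "recovery",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```
]]></artwork></figure>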

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>We do not believe that C4 introduces new security issues. Or maybe there are,
such as what happens if applications can be fooled into going too fast and
overwhelming the network, or going too slow and starving the application.
Discuss!</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>




    <references title='Informative References' anchor="sec-informative-references">

&RFC9000;
&I-D.ietf-moq-transport;
&RFC9438;
&I-D.ietf-ccwg-bbr;
&RFC6817;
&RFC6582;
&RFC3649;
<reference anchor="TCP-Vegas" target="https://ieeexplore.ieee.org/document/464716">
  <front>
    <title>TCP Vegas: end to end congestion avoidance on a global Internet</title>
    <author initials="L. S." surname="Brakmo">
      <organization></organization>
    </author>
    <author initials="L. L." surname="Peterson">
      <organization></organization>
    </author>
    <date year="1995" month="October"/>
  </front>
  <seriesInfo name="IEEE Journal on Selected Areas in Communications ( Volume: 13, Issue: 8, October 1995)" value=""/>
</reference>
<reference anchor="HyStart" target="https://doi.org/10.1016/j.comnet.2011.01.014">
  <front>
    <title>Taming the elephants: New TCP slow start</title>
    <author initials="S." surname="Ha">
      <organization></organization>
    </author>
    <author initials="I." surname="Rhee">
      <organization></organization>
    </author>
    <date year="2011" month="June"/>
  </front>
  <seriesInfo name="Computer Networks vol. 55, no. 9, pp. 2092-2110" value=""/>
</reference>
<reference anchor="Cubic-QUIC-Blog" target="https://www.privateoctopus.com/2019/11/11/implementing-cubic-congestion-control-in-quic/">
  <front>
    <title>Implementing Cubic congestion control in Quic</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2019" month="November"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
<reference anchor="Wi-Fi-Suspension-Blog" target="https://www.privateoctopus.com/2023/05/18/the-weird-case-of-wifi-latency-spikes.html">
  <front>
    <title>The weird case of the wifi latency spikes</title>
    <author initials="C." surname="Huitema">
      <organization></organization>
    </author>
    <date year="2023" month="May"/>
  </front>
  <seriesInfo name="Christian Huitema's blog" value=""/>
</reference>
&I-D.irtf-iccrg-ledbat-plus-plus;
&RFC9331;
<reference anchor="ICCRG-LEO" target="https://datatracker.ietf.org/meeting/122/materials/slides-122-iccrg-mind-the-misleading-effects-of-leo-mobility-on-end-to-end-congestion-control-00">
  <front>
    <title>Mind the Misleading Effects of LEO Mobility on End-to-End Congestion Control</title>
    <author initials="Z." surname="Lai">
      <organization></organization>
    </author>
    <author initials="Z." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Wu">
      <organization></organization>
    </author>
    <author initials="H." surname="Li">
      <organization></organization>
    </author>
    <author initials="Q." surname="Zhang">
      <organization></organization>
    </author>
    <date year="2025" month="March"/>
  </front>
  <seriesInfo name="Slides presented at ICCRG meeting during IETF 122" value=""/>
</reference>


    </references>



<?line 1074?>

<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO acknowledge.</t>

</section>


  </back>

<!-- ##markdown-source:
H4sIAAAAAAAAA7V9eZMbx5Xn//kpasBwDCkB6IOkDno0YbJFjTkjkjLJGW1M
bCy7ABTQJRZQcB0NwRT92ff93pFHoZq2N2YZtrobqMrj5buvnM1mriu7qniS
Tb4v2nKzy+p1dnXTlG1X5rt/brOrercp6I96h1+7pq7o56rI7l89ejBx+WLR
FLf08tWjTN6fuGXeFZu6OT7Jyt26dm5VL3f5lmZYNfm6m930ZVds89lyedjM
lo9mK35tdn7h2n6xLduWpuqOe3r+xfN3P7hdv10UzRO3olGfuGW9a4td27dP
sq7pC1fuG/6t7S7Pz789v3R5U+S0nJ+LRZbvVtmLXVc0u6LL3jX5rt3XTTdx
H4rjoW5WT1w2y64e8X9P9ohP3xR51ZXbgj7bbvtdSfuiJ/DNy2JV5ll9WzTZ
n/7zxZVzed/d1A1GdFlG26blXc2zP8pO8ZEAwMM1/qpuNk+yn5ryljaYvV52
9b5vad3LOb6kZ8rqSaYw+4P+nNOO4rnezrNXtNv8Q7/NmzDd2/4mbwff0Gz5
rvwLb4UWVLbLOpqn3cnDf1jii/my3g629O/FblfuNm20p76qil3yxefnWFf9
en38Q1mW82Xu3K5utvTkLR0u0MX/QS+8+eHq2/Pzc/79xez7eVl069m2/vOs
s8P0jz16+E36GGMXIac98dU3F1/73x9/c2m/P/zq0bdPHP54d/XT7L+KTd7y
V1mXN5uiI9B33b59cnZWFkXx676qm2KOX+e0yTPC7H5b7LqzR189+vriK3lP
qYmGy2S4rCBE7Gr+sQyYlt/W5SrfLYsMf2Sbql7klUfYCQ/GSJ9NHl4wYhAd
ZBfffvtYvmuLpiTSIZjREy+eP3+e/XvdNzsag8Z7W1TFsitW2VOih5aOL0Xi
Nruf/Vdd9TjAi4fT7EXb9vTrN9NkngcykUdu/JvJD8WIH+eEes+a/MO2Hv/6
x3n2U0E7aolw8NUfj28JsJ0ONoTxqi4Zrhfn84vzi6/OfgEGEjDml+cXF/Nz
/O+RvmlQzreEdVl3U2S04/1Nvuto3lfFAceZtVV9yFpMKDsxeP57vysyjKkf
x6AkOO17WjEN0hGb+NBmt3U1zx4/nma7ep59O832+zm9/O3l7PLi4lwGSEBE
DMKg4Mnzj7l9l371Yp69uSkKhs1VvyiXMzCU2bOq3oyj4eFwmO+FWdTCKwCj
M9rMt2cXF/hfud1XBbCSADNb8pgB6/Ar+Nus3M3+3JfLswRnX0SvynJihNVX
gUx/olcTDH1V32YXhD5Yxwh6njA+EiwL2uMpghl8hjwUH/9czn4oZ2/7dk8i
AJv5fwDT5cOz88dnF9+cEcrMDkXZrGbLvC1m9Xp2KNflrKI3dsvjrN2XH4p2
ftNtq5SsCdP4tQyvQVgC9/Bqpq9m8moCnpf5UcFz+fD/G3iY9zXE+8rlstnM
qmK1yLvZvupb/o9wOTDLhw8vhFleXb35t9mPz1+Pw5DWnhOnXX4oGuapTJrb
ogB2nF1cXp4Rq6Zd5FV71lYlifEZfaiTE1WuZgAxCfSqyFfAxWK9Jp7UAtRV
URMjX5RV2R1ndJAFnq75xwiunp8nR/CyBD8lqL/0Y2fPZWycB+0ne6ljgxU+
l7Hpx4icH5xSs7zJLr7mcxrjsm95m9m+KUgJAXfNOwFipmDJVn2DH9BcMoLG
3zjA/55nP+bl+OcjH/9pnv3cn378xzuf/m/ihxvnZrNZli9anGXn3N+j3JUt
SaQdcdFT8nd5Rcpd2d1sM9HcChZubb+HQGalafYOWlO+31de3LQ9QTZv3UBz
mmcvOkwWj7Qisi3ol0PerNoMHHxVVPmxnboDzZpt6nrlZyN9gTFhEk2WVeWW
iGI1yRbFTX5b1o1bN8Wfezqx6khv9IQIhxtSWfoWR3WbEwovqiJroH8RAddA
qCm0R5lwnbddRlJ0yYPTAiOY0F8sxmURxHAAmSMpMrRBcKhJdkMrK6AZyZwj
7BTAwEegptZhoPyWFCVe0zLf50sakeG0JyAQcymWRHKEeR34tKyBXnI35eYm
+6XsILpo6FUpgMeGeugUBFxi3MxC6WRFtM0d2JkAn9QzEp9t+RdC8BYypMTE
DAbeYptt8w8sa+uaft3RV6Q0bPeiyyzqvhMgbAl/qgkRIm9FZ5pnmIj0v7Bt
gzthWlOIAFcgOGWq4D9yKtDk9cNt/mu57bcEDDoaRgxsCwAgDC9Ym1pndNg9
MW/B/G25WlUkYe9Bt2rqVc8H+ffRAYyc/xFi2PZsSgD5Y7qYynmu6e+qOro7
nlJMBcFkHz+qVvzpE4OF+SDecJ6oMq8e09PjevOnT3wibUqlDmRS0mlERKew
J2TLil9vykXZETA8zURH1GIpRwII9Fd8wgi5oNcP5YrOSseGdtEKLdBxMe9u
ijWhJ45u3ZBJ0TpMSUug7cjrn317Va759Q5UIQPM3c+FAAEimdZFbLrc0bk8
FBTc1CSzSBzOsu/v4DXZ/bYoCHpM9pAe8vGnTw+m9NZbPd7Pcx0bQ7/Qd99g
xAyq1wfiRuAm4NHYiJJKTLw6QsA7GoSwOiJaGOuPgKHE8vcliHxBQq9v6LD2
kFyAqmCPh5Mb4UEejVtmqcCiKVlmHR3yURVBQTyysz59mpIaz1o1fagKPT7E
sT179iZGOrPC6GtHGHIoqkoxhdTyfnUEqTNoZzUWyyI1LEWFRuZNKRraW2mf
PjGe/Pj8+2dP38nqYOMBs18QQ1oJDKc0Z3bIeWQC9oei2PPsfhbHvE5YDFlK
fCRegyZYE3SZzxWt4BFeDvRFUFh+IPA7MGoQBO0H55JXwin58djmC5srbhmX
sUeAhsbZ1QfS2TaC5NPszbt3pFbkbd/YJ8+vXunopOiI4FuRcSX8jGbeQ1fr
CI3blogAIoOerE/XP7Ygp6jQju6QkKotus5MrWiAAylj9cHdv/r51fcPPEeC
1AJlE/kJDFmPtIMHUSo7N6SnCXA2eCnP2i1xQ0wIKeLlxFT5PxFwvaPtLAo8
zVJmsqtJ4ST6x4QTedB/RhIDsJxMna1u2TcghHQlHhIiqwiwJIRbKJBCPxCE
vAsDAo3OZwTxtSrbZU9QZxk7wjaUaONlgmgHoCK40sII7sLZFH22NesfS5Cu
iUhbdGCvQWcIWLcT1Jgncys4bPoFDZkNhzUhS885xpolmSb1kuAGeMTnJrI3
O5W92c9g0ofCrWqamk6SnmhuCxXZoqfgoGgZy7rcLVmxZlYfwXbuXsj4onHL
BPYuSaL2pu4rYnmiPxCW6wacPxs9cduQvDsHzzyUhGNLMfYJ5+QYXCTL0tNi
0UzcXjhsTm+SOUMqEvAIIHKrgvS8VWsASnBr39Rdvawr4ZFAbHwKkgG7dYIG
2Hptj0NLkbfzLkERmlw3JABKZUHHMn1bQKiULXEZYk0D3IyFyRxa0VswYkPq
WM9VUfjx3hCdBZmNioaSSIZpCV74LFcLCr+LTGJmNnevCZnKXcliW4c4APCk
XNb0MfE3nR8g4xXQdwTlSlS+JfxaBXu2thATDHMS/jQINNO/IeQqstAjsUKT
iRxhpcDmj1lFgkyCXPRkuT55iNYAcVL8uiyK5HGc3LZmVTeHv49RuiODbUtm
d0mCqqMdCS4BAfOYyQrTVyYykTf1vQlAjj9p+FWxzgF2guHFebZl4IRNBmkJ
rKFZAO58RYoV8Ip1KkGfCE6kgxVes51mC1LyS+EGi4J98US2UDqw3baFLkVT
suJHem3BWpx3iwSj4wcFrh0/4Y3MwdIamyk7owXogS1j0sXZN0QixqEiAr98
vG0VNvZx2bVFtQZoVIKuxOaEdu35u3BHjBiBmsAIppDDjHeKgcBJHRiOFIV8
Jt5B6DRTBsC+Y3Z4qCMVjhFK0YS2GZhyxoxvuxe3ATM+UvqJU96I/WaPMW/J
slVNuA4+apSgjJSlo5kSqi7l8fsivFiFm4alGPtWlR7vVDVD2U7LuCyNJyYs
OBstpK2JxoZWJPRtVjWgw6i+TYoQqThNXp2uRjS+Rd4m6p4TcBRTFV8MGwNV
vYPtnpeN7oBonNg3KdqFp/wT4WhCJKPVEc2tKjUF6IhVpBMtEUMig38prmLS
ZHYT59kn2OO97GW+y1lBv4qWxAf2Iyla2TPextNAMx/v3YLeZm3X9JtNVRC7
jN9cENQKojXRqoG/uZ7lACAZfFuMfPwok1vP3m5sfSD9lTXFA0Vgh7J+S2+C
8AmYe6J1OsYdfUPiZxlzQpqNQb3AWPuqPorYUBZK6qkrdrdlU+9UJQUZ3JSE
d0SxHkflWGo9w4KB5TDHG5pTVfXH31ySBUuMgDc3zxiH5JRWphbEGND2RXxw
qrUHKg7qm3uxFo7SSQRkOTy2nDXkIbhFoaAHlje0Szh2CDtKIs28mWRwaojs
7moiBhVIqqSxBSeiSBnGbIbBDlBOHBlmHR2kgNVTb6Kq88iRcs8WDKMq5p26
A2w4rFXZQ0AR5gPYp+ffyXYR/ltCTq5E+ZXt0c5JpVf+KhgjWiczcExJiE8r
qBMJDWTQQz8Ud0AXa4hQMBbCtHR2c4hhG1AbOhBLeDh+iE+Ld4at0CAtH0rc
d9kjNCg0DR4eOaDK1rOdIRAY5upa5ONxMdO0cxKkjZ4NwAnKxqKgdRXuoSzh
jySabkWLpZkVIztIq2XO58VessmaTpR0urotsfqJ8zirjLMp1uXOkJ5fhweJ
zWtWBgUHHe2M4K5q9CpjWeJBEnNUVhM9JFtv9Zg6yyCH7CebqhX43uSVV/yT
55yfj+huKby2gj90oogxyeA0JJ5dM2eOVugIJnTM9OzjaDXZ6GrKeDGPzh6z
nHdjCxKQ3ZJ9DZ99DLRYcLMN0OCIs2u8dfby6f96j1+uGbEm/P6EoLYsW0GD
g1grbMzRSfYkBqEA5AkmtWSD0fi1qPGTdt/TH317OhCmn0Tom5xVAMZEVr4u
G+AW6VayADheokVkwKJoGaRSvOGtyc4IQL8JQOinLQmOz/jfb4Of9M89hXTe
FPTWw/Pf0X+/fvw7J6M9+hZ/X5yf/869q/ekX+HPh1/jv99e/E59+PQJf/7N
w9+5Z3XX1Vt98JJHe3xJDxJHpHEu+VMa/V8en9vA8pxzr2pz0PFJ6sEyM8mN
RbMCLNBgEAsK5QwuUkUJWm5Z5eVWPgTXih7PSMIjNMTMd42guNk5FRS4asXq
AYxBM5TETBIX+PpIQuqGdRr42LY6mOdhxJdBrUTtKgUcpq9Z7zGzhlA2tnWK
X6FBR1gVuDm2LGwkWKj8OONTKoyU0QWzPygDnl+J/vJHCFXWX25yYsLL7Hvh
gu7tqSrnfbwSwwQoVCE0f7F+ATQlM3XBDsKDuAiY5wW2qbGI2Px0Hz8Ogtxw
1/0c+0/o/S0bdXUTnDD85mj0V98H2TLVkKofghFBk4NTVqOGfg9TIznm/JBP
y2LlnWRHB78LbU60X/M5i0abr37J4Y8ZQg/LIFJflxuYHE4xw5umGoZaliry
sbhd4eU9lmZvK65D9MNqVqy9KVerArJ0xU4COnZWAdTaJqixdr4kRU3YkbgH
ZJF0JHQal/QwO/c00SrWJXIc7BJ7ME+CE/1EIEVwyBsJovWitGBp9U5d76xU
1Ot15FKEHCNFpxjOqVoS9EqNoTfFvshFnQhrZw/SyDfiiRIzYbgCLMCfuoOh
ynhJu282ipitKA8pVCIiBNXBmHFdw95geBy7XJQ74t/0Yd2s2DSCRtWTuOPY
oqp0rfAS0p/ZEbaKMM+Uvm5ALvGeT/Uubxd7lYvpfgX3MbubYfVNlkrbMelN
TCb2KsuannO0xtx93lDy9nL0RZDpBAXSkG6yhqOoXVPup2wyLBhA7HYlAvEu
Vx6xk7gnO9pIK0iwhnApFzG0OnXYkqD761//6vTj9/Tx+6brsu+y+19nX2TD
j7/MaBz760F2ln3ze34d5kCscNiewGyDiO4Opao3ajV6VwTzXgOvAdYb+Orn
lCO1D4WRF52MIRaD95bwAbC7RA+LXTN04CuLJCUOCVHcDyZfVqwVwcd1KDxQ
BfDmhYJaMBGDwcLjkfkGPyP4jRolaxE4wb0vnpdJrHNNIreMGEZMTQ2kdV/o
KUnyyPvlgVb7HdEPqYVf2BHxh1/6xCf5dx9HtTiSBvQeZLHKZln8PE7wUk6Q
mWPJkSmI9Wl2zcNfm7MxZ0lt0Isds4k/3ZljPPb+B68JYFX8Kt67C4jXybLp
S5ibEyi3F/PLx9DBJ/u+veEPx4JzwhSuB1vzK2W4tZ3Lt4jJA/r8FDjhuio3
N8obGH36JeHIumcSCbEhVcSdV8SNUbAarl4wUwBoJdGxXGekvS5gXiMeo7Ew
H0VFILe7Ibre3LDkM/EtYSpmmQ3HLBg7+wZ+wqwp2w+R9ubndYwZ6mJn/hug
xDyBY6LyJo+BoOWuMvMe5gjvUIahGVOIXguY1RVumJuLkdjccgJp7DRPUZTt
oyGK4kPBtv9szRceu931XI5w20rgzp9nozEwuKdlswvS1IICRaIPCGkhoNMo
L6RUcVtXtOjISWkKIB+TuqQZi095mYCDnxbXsPCDu3gW0HjIDcmC8TYuPGGM
DEV7U1fMJC7mj3lZbRxdUfeYesvNQ/rxHlxp8KR9cu77UcdWU1RHSXplAl2e
BKD8DEC7o+xsl3d9wxIDyiIDKg+2lAWi6EXHEIHGopY2m1AeOf2TIQ6G0Hi5
JBWK9aVG1EmQ3A4uxwo6JXRuTTzQSFDibHORz9y7MWki0pg6jJxrmlIlUb4t
Fr+6zUmv2BSTu6V+GsdDMo/JZlFSywB2M56IKhzpRzn0CYYWouqNejvtZCaZ
+D2nlq8mPjz2qyIowUY5sQjvFzB0YYLEqYDa2VgpeW+1hGF/Ip2U/TOToISF
aLq4Ep89ezPV6LUzLwLjb+KK1VU29b5m3IFl0oneDz6uUX+21P5G2qMmBBB4
nV9JLqkBzD10MlEqWzD+oBU8dmS114jqsY1A571vE18IuAHsbT8MtuYGvhKu
IYiDFqoeWMwixPiB/v5ANfLJUJEo7JpEeH1o03Wzp9QHYuiQl2SYshfbD0sA
WFtcwrsh/AAyg8R6nEwhQU4eKElBMIYnKlOQpS5WmtQhmcs2RG3RIJ7pU1Fs
OlP1LIaC2DOtgYCd+yIKNDLrwe7nldgK0XoPZG22eeVq2lgu8W8xdZOUKjN0
bfcEeeUwIRQgiO5UeLOtEmsJOKy70rrAKRzTE8s1iNkkU4kJc0lMZWU5S9kr
md75gwGaeKWf8BJhwyjhCCmowW3mTmiViVjpO/cEj4MICP4oUwSn11d/D35b
3ol94SPt/R4OOc3zi84yQdzcYJxukkNHCRKfoB6d4dMhykbqHXthBuKgZmhz
0gbLX2Ca590xVxWr3UJxpjMz1i3qrqsKevLDVAwfH2OPB9iSGIjyrXjxReNO
E3gsWxnFIH/uiV3/xWv8cWDZO11Fw0lnk5i9eqdMVRkE0kkGm/uK4LZmPU7Y
5614qPEeTXqHpbkuKnWMrNyqKdeclgXlZ93U2+SlScgUhTMTpRrimQ9RPTFz
fZwiuLHYhzpUUiTgbPKYVfZ+Vwja0gr3JXQEpLet6ZxECzsqvxDxAp+7S13u
wFn5ZFdsWDmURUbiMPgstvmKI7VMWGuc6Ery9CbsMgS1T1JOwhHaPScxqv7a
duyRM6VGXPKtz7gMsCkhmyXiwJEpQTGO8SCwxrFncXTCLU08jHUwotWGQUqT
krzsRA+7SiDpYyo+U9U9bZMAArO09B3LXJyKZ3VruUTg10sSpRaPFp5mRqXh
oPAb0kDgiGMdF6HxuRNEQpSzJOm7zZ9kRclb7NmGkGQv0zg2fd4gW080bpBQ
S8fLSV/8IKsO8qK4dPjFtlaVKPIyOQnrsFewY2+ssMJ9kX9g0WK5QM/6TkMX
0bisAjEcdshjYL8a7M9M0uyUiNUYYPZeNC3ThzOTylYYjD3zECGDWjNA59lT
cQ5xrhRQrPyLKXiyqFXt46kQn3lsluhB7TIjd0kfD6FQx6mqrdkv3mWQDqOx
xhyhzzbOD3CWfz7NfulJk+G0GaR4ItTL+LxjNXddNi07CJv6V6C8aVcuThrU
I64lP8QHi7c54x5nlZMpPs/ETOKqzzELmY2BRWH5tmYFas6U5Y6PpEwRnRHL
9Y4uyTrvcWjAttolaVnBzaE6b2zr2I4GRpSoc0Jgytd3QxLzgU/m2qsiZy0i
te6i7LvI7mYU9JzxmOFzMnMKMTkBbE/aEDw3rCphazhy9rchGeULEdDEv3WG
kQw+Wao+ahUF4q5E6tuSfVfe523+E/an6kvq4A0vSVKgzWXYhVCeBHBITrJR
aO5yA4F3AHQ3EeuCxuds5/olzeBTS8KWWfVXj42peiAalAlnm1rJGMOETPaW
8Qyef/oZK7vmGnWxFSQ+DqRugaigUWoOJBeR2NRhhW6wQjhYGpsKdTrIQwZT
6VlJFNHGjrmdmyx66DQoDcs7mI3vQvlJ3nlmwOzCW1Om4PpMDaAKsag97V44
Vyubzzv2cC+gNeMsWknBt+1LaqTJNN29eJRaDYNtmD+pARKrPkaVLy1rEU5Z
jbauCkmWZgEY8kJVjbJXEkXKMdBLyKRVwdEnBqKi6sGnLDEX1pRt2GDmXEgs
I64/kthRSOisbb542UTOS8nG2IIZ+lzyE2RFggm8vDt406L8xxgiyBvJ951N
kSaoBpzz7iKLZLC8cgo7Se8+CCvLu5HyHzW/vF9pSnzEWcyLgxsfikw3GtIn
fOgOjKK9QeofAGdw4Gn9l60m77TwYKnoQq4jb9aQ315dhY9IoWI3znHiDEcj
Y4Hp4iju1ompTt5Y41RkifPJc18JU9iUSHMCOsLXU+84PGnVXwkOzbPv/VLC
yMwbNO4jzpYxK3hxTARbLGuEBaSUkyYSE41/NUc0XOmrTF0/HRdUioTawzGP
AQQkkvqlsp15uUlzPHOtoY/3koF6bbZyJE0CB0XWsWkjbEdG52LHojCZ0zyK
FidTlHEIhb8VzLnG2dwW7lojTu9p/3jxeqpOAIkyDL/OvsyiMdz9kNI6TUfX
lFca4vLx+bZ9oGscW1/ssbgexIxoOfB2nXws8plQUUX/F2QZI8cFc449/IQd
x1l2EpP6LtvkpPF+ka6LtjmIhVhI5GLGzz/4YjAQjw/Bes3fXydBjgCM64uz
b67vQCjoL/eyn0RhxVG/6Xds07005eXKIoWeWKNQE2mhiyJvNLsbLnviZxM6
PzEMzbk6AafX1zn1lSwpiay3WVSgOI1yql4pcXjmsiSzHgp0dRSImz4GDpSl
ufFxqONknF347A2Sy2ZwALwaVGAwnsbqIcmiD6pbmwd+XWq+5b7cqynqBT4r
NnMa+7Xx618TPsOjrMudMbfY83yTRyGT7FSf6UnYoJaXAEaGRqcj0EuSWqQm
cwzLeO9YdRuGEv0RL865Cg1R9YzskrKKnqE14Cni7qL6WvUB8wkW1o2kPzIM
eLnoaxCl+TF6acZL8N6GlFYo7W0iEaBpd1Lk8nl+QCpfr/4koHzH4Q7CxCN7
duJ8/5zzE0TnFF3CIt8JG+e4krhcrr1KwZCc8hjGEJN0y5yFE5xEabFOUpFw
rUd8rQm3wRTVbBLEG+EazqXmxYpnSSPYq07698DC5y4KMGwJSusvlAH8hMgg
n5gn9SD60vjhdMys8Sl7Hoih8Mkp52EhxUYgvhBFyE7K1OYQNgAgNlztgBCJ
lvaAcyQsQrRVPUwJWVoylN8brzojxgRmQYL6dhghYzoHh4J9SSfHuCJ1KsHl
pyU7mhFCDMDlmxx5uyMJ+ZwxumVL1PwS8C5ILwCOgrL32KwJyVeMfJvsJ/2M
VvQiPQ9DeefjjsgGTIoXBjnfPsz6bjiaekS8n1yonwuPxLAQGkfMeco+2Lxt
0QFEc3TkGcfGh88gYKMjX9V79hFaEixU1pxzRFhNeWkZNDkZ8fnGJ4++z+Ga
fB8xMv5YPCB1dg7J+29sU5g2niasZLm6NulI2DWpA+C9OAkcjudJhFtRLNSQ
m7jY+KpkxZ7hnE99UkgmX6HYRsMbM461iXLr82M5zYDQ4Pl2XzZS5x2HSeKy
/dISSThMuVYtOaW6iYuSZbvsUpNl5dlhBs1nvRZMGh/vqfNhhj8/jZcphhoa
Tf8ZFKty7RFp3itPfQPvzWkCgynOyIeScBjZROqoicy9VSgJHM4ZHD00xhTJ
iD7Yq8SbpCFyBHFf5ZxqzfmE2N0sEggoYTZmbOXDOUt/CaCiSndQS0fcYea/
53K6nwtLZCiGYS02kX0UengSU9cUtLylL7HVLBXhpUlqNZpFsf7R6OKQah6H
SozP5pFBT5ZgTWoYF/SEYH+ZWDZiU24X5aave0kePLLlqQmGTGN8FlEYSspw
eQp1ybGPll2h8di7uuSq5B+8LUOgRmhDGHxDJFRv/dIii9WZINhJVUonJYQ4
Gponnjjxy8994tdpxgVbv5pQpQoCywnN3OMwksTcTorwhEeiHLe8xcsxXrnR
AkuiwDdcQRoB4+O9E+wT0rNqdK9yKF6Lb5lY9so3CwmBI/UpIYmGMUSAyzWp
mrHIAQjztS2Kkziu2K5d5CHy2YwcemBNlr2lVqWNt8SAVUL00Q6nwAmtOzic
VmhAQqPeYYd8/GsxWcgACY1BWHZcKyG8By5cPwnOHF/w4hnDVq0hOxyIgWvM
e/JuvFPxvUHWR0wKhkbKnHgwn4HkH9RRP8fwYOspZ9OVNb4ILFqeZI4lu8WU
rFW9xwru3ICuFHKd0BCcKosUsoT3yurCnqSQlw0/QagN3PKcDQcq1cQpGuK9
oNh3WbzAbJb59XPYwz/lP6ZHwg6MH6wsB+sUntnZiWlMwuy+X8E0C/M8kPxO
43bIVScqDBLDt1YxUmkGZOhC/kjarEIYufHw4BpOAtu1xryZXBJWzlobiwHO
tOqMi9D2TpIGZwtSk5bFp08PBNaldxMqlP41i3PUGEuSIaWtVPLMd+kQAibw
oadwFkspo2drz3h+4kina/J5Saxgx+Xb0ySUCJ9vVEHLL08kZUXsHcng8Tj1
RcxWy5Dk4M2PWPlQZiwNmTIodaxltd5PBcpMfFlqZodjO024ttnFeBV1uIOL
IyUX1cvjmeh81K8b28XxW7yLY5T8SJOM4M+y4ricVfxrnkWMRzodOpAsUV3c
in2BTkQRjARoa1gdLL8yPYCkys+Pn+Thy+ukHrSBaRShkVoKqlCE/pYVOHEK
caw3tzwPThnDgUvGMNPf0Ts+EiKJhJGQkYd2IBZzypIckxK5XGSCdbyJXMia
zavxJ+/ClC4M4HeF9sCKXKmnm4O/pW7GvuExTLBEKuyi2JQ75bkKMj85AVbl
/6u6C3rkqV748V6qSDqXRv1YdfJ5iwMXRoLqmglamDrsdWE+AxlngaiSSorE
Q6zKxLVnttdeel8HrnvtPeenXMEdElu33dZkPo9umfOooLbtzBomdYXXp0Jp
IZVxlUZDNHgN+9WQQfK7WKi5dCZpv1aMQIetJJZLe0jhYye6zcT7aRDRJOEs
2RA+tDouAnznMvfnvu7KyCgZqgMecVQPYEDORwPd/tGyddaPiOzwounMGXCS
MehHlAUx7gGUR7RP8Xn+XXNU15JWXKQba334UL+GIYdye4ElGefzYj5VX6Hs
tk0m1725dBtTr7O0HuSSOGDTSe2IniC4gF+RRTDS42yHw02d8CyarorqXTKO
hDYwb5d+y2kQrnVRvvXPhl9xip/z+RLDspVDAR8QLK+a9W2DmoIkwTGhSsOl
ZDuE9dyxYcyQO8FfUUuU6xp3hT+IY/BKLGj1+ygkBpCaWBUuMB/NEodMVWlb
tpJH4guf5DwdBo6SGaZJLJrzI8CcuPavrfrNpmxvnNaRF6EsHus5tcDm2XOf
L+faG0lqVblNsoqWbjpbcMPCqaEsCpJNi2di8yLU04g7AtBoo7iuw9FqcpbP
DtXcMktHCTRve/FdQDg4GQAJZOFkvJO+EIMq/VbCo/lBnAaD7hCfZODRFhCt
9b1budx3fpwU7TLfS0r6RMygtfgN4qpSNhejJXDqjvZaeEOEon7PJHaJ1FDL
ig+Mg+vA+Tv1+265UJB1EStC5BY8Yrs2SQ6oH9ENkvi9I4TzP9PGPNp5WGop
Iq8jrzBETKzTVMTkxBnR3iD5H9nYzmI4v/iIvb4vtU7twMlNGhI+VJ+bJoER
Mtawt9l3J2lShpODk/7nVvw1WtlPdE72h4+JW19AdoQGvh8VK0vYF7WH6auj
zR24sUMbwlR0FptGuvBUR4dIDe/lpFaEhH/f7FQWtGXXax4joy4LD1KQnKaw
WMkkOxJf1ZINx2SkuZOtiGu0ZtnBKzlwSMf9MX1/rFw7b7H33a05NfiGffe+
osA3RD1p5vI08sKJAefrTT5pQzGep99LlK60Hq9BYLlr1GEMyjCupcnIVEku
zpqOahKikuKQIiBxuXhToo97N9OSE8yCuRiiJMideBdTUhmnTYlwlaZOAZLA
GUk/8NnX1ipO87qQg1JVcT8qQcsxxTVzQdX3Sfk/VJ4lcyeKQpPGpQ2dF7OF
GHl+WinKQfIL/RJnnDNlRXU74n8Ky4cfqq7AsBkbNFRkMPHMufi1bDtpJ+PR
2QTdKfxhDkbV18aIfPZWVAMRVau0slhMR+AglM5h+Lk3hQp6cSkqFeRjQ+SS
7M7fa3vJJmoBp+pl25faacQnFmP9UY9+Fi7/UL/Q2Ij/JEHYdb3sW+mg5CuZ
8fawxStv2rrauOF84ioek+OSwmfRDDZXfPW4vcqdz5IYRBxWjNRzUnBdlEjv
PTliaWhPZYyPHFSoDcogWU2rCk5bI3NN5klb7GCR3LGBtsRMrt+BZ7mT7kAq
I4EePz56q4nmmWDjtvahO9j1tBB07VkXKDmSLpm+sValJoI6fLTtOjeBVX82
K5vcERW6C7GrUK4WN+jcmaBzTIHQjkhj48Qgy0sMsEWTPd9K3fNukxBChXJS
n0Oj5Lzcw7Gm1eoMJ3P8Arpc6P5m7vvgxuLDvh8SLXZpogYB5DIeotEywPT4
7sP2sUTRTOrMuLHy676bZgLjXT3bdzWD+OHYiHRWZ1fP0b3tQ8sZiKzhmlFh
7v3EHxW8TXmcryYxPeR8+Oo6+5LfEf1arJfr2DV37bh5JOlrKJmcECvIrcwd
Xryzs6xOeoTbSkadfPcvZhjgQVLE6lOFlrXPELrGY9daf9SqJI4C36yVFacu
H6cunwwN+5Lz4Ch4knj0CMlUvhyQpayErnqNlild/WCNHVVBSMfjlJFGI286
jJVNwaWlbvMuVjBjXiIlvhJTZwuAvQMSJYALjVV/a7iIbAvhwLx+X4omu0ac
lQhMkcZn22/JMpI6gWStofIoSc2KXNIuYgJT8cucrCDJzBjk4U1tc75lVqK7
c5RGy/PXTe6zA5K4MLfHHbQyBYhHAsfvTlcgepXXRSW8IGjhxPEq4fghSXAo
MnKQziP3s4ujagRR0Gyh/flPBvp9tu4b1rj1LU7jZoaPzAqfzFRIu/jgtnWv
TpkcirbURLWSL5XMiF7qJ/QSUm5/xg9RliL2kGm9/EiQUFtl+pwgdnT4vnry
Ooe7vVOGDVhpesPJ2NweppBuwwSN7kYTC4xdm0u47ObueyssjC0pVj5XNWEf
Kj9RS/Qk2/dN2wtEN1yUHFZYr6XqV6AY7WhdSh2x1MdOxUKLfSSMYRHT8Pca
aW8wHAU3/3v+6vXvAVxTqnnYPbPyaGducfRdCaBWW268UCDPjqdFBDzzTY+C
+kU7IfWAC+g5MUWcCj4dnt7joVGski9vBmXavqxA+FFuOLEu5bKJ9LaIO8o/
HQv90LwoaQIvptT3ddrNM5Vo7BFmYeYcd/SXktbKMn9MKiYR302+DyWz0mhI
zKckGCn0Cv0m7/4ZG6SJYRsbq+fpgiuo3P3CVXuyPsmi77uJDw9rlspP717z
l5yQzVWk1ixPAsXIdWtDkxxOfRTvP5LhYt7g4iQxv7pTT73X+5MsDxC1VJNy
4CValnWktupHRA/GU4LEE5qWYiGawb2muTebsg0GtLV5Rpon4YcejGcRbV1B
V/ONzLAaaR3k0oaQkgyhr4tB2UQ9f+ik7XyHneIFof4zGJdx1qvy5J+0DkXz
DsQS/cSd2X2/qmHnd+uVKakC3kPE7qCT1Pa42Di0SNEAjkiTfcGtMoBZYQfB
AhikoUfZ/iVXUkc9h7KoTgAo47TXuHmdIbefXv2HWmi74jDsoWJJQ4ciiuGm
aU+DfCEi4SRRSjq6c1oY/t6wkIv90LkvF5xwjYVSw4IO/IOKVwWTIEKSGGRl
a6eiPGnE0EpzIH4fJCY2J1h+gJVQus+QCV0wmMKcNBcTHSRZAi6CCe3ch82r
uRbTDRfnW9D97y+EKjVZm6tsJGfSkoalyCqxF06UOjWwb4CMpgOhbqFobkAK
MhRbnKF3asVlV5ocq+3hg1tXlRWlstmMpB2n0gW/QVzWLe4bccZIPrTdKyOK
YiORxR3yUgkLfLFkaMrHxoD3Xozmfpk251IXjA9LR03xfRDXVyTy/SWlFq17
F9zHe/YrUfhLGDnmfIBrTDr++oehhPQWCixDDTDqhJjMZOfiT9H8cIZZvjFv
oyAFo0TB6EccVEp8IxWH37HC5RgSZRO1LqwbCU+qG0JQLrlxyfoRpVm1LOhL
6wwMI8R5wgRsrh5Z2pFftXkrJZjMl07t5J4L37WaR8+tj/cXovVqD7+jV66H
3G6qT8aGgHWsYlyyQmR5b/RxMdjip5PIyhAcNoZQXejcI0Z3nnKNqBg/bjP2
hXGesj41v8davZ+8Aipk43r0cc2ErsSXIzkDWtosXqOGXXjeHiedLN/y1ZFq
sDo0NK4PEmugHdZ9o75MdraWek3SsIePrzmozdAXxCglGdHrsVPfoS2zVtVR
e2tMs2nqA64iMokfiTw2omKvzUAHnkjCAgf+ROlNzFysDwxXtR9GxlbcVHzd
GZcTEii2nm7NwKChQCvLwqGXieWI+17b0rBBQl0wJOBtqpMO/cRQpR5by1I9
+5NM0TjfVbDdfLJFuyTxgkcn6JJjcoCtXFyqqn13mIfQR9PsLSuwP1T1ccV8
U8pDQObW1DsG8N3AAqyrepN7ZzjWpeX6fFah2mLBtSyrhmQjN+DS20RGWl74
WhW75GyE1N2gkPjza8yGa9Roki9i2RTa4kbaGmnosPexDwGo74Is7XV4gxoU
bENlY5QI43UutLX1kNAcnVBy4Dtu0WrZoGQGzQiyNpwQDIlw4p+yuM6z8EtH
dT6hPMJZrimqUt0s6UzqaF6Xv4q68SgLqRJwpV5puz0PXvP2vDE/Al5XTVYc
tAficoVbHpcwQzkqKTWhX2mVsV1LYAm9cTxFsUwFkdKx6cld7bhI0yBOesku
ClHbGDKzVUHiEjAWKjsyj5ooc/1E5sAUlhZBgTZzLhuNZOH9qIr04aPs5WLf
qvH4NNTqg2cRPytB9eCA4tR2rzVDS4GtymaE0eHCANmJB4R3dHE7qKpYqyFg
8ifuSKIdcseQX5peOPbCKKf2DQSSSNNbcHNtW2OB5SK81N7ITRx8/iIHODNi
54l4EIxAmD1u/3yAMHBiBbGc8QVPplHuRPs81HeogXBpZJfnfABSLSOqLGJw
VSE+iovH8nWS8qzBYTRAx5dztMm+SYYW+vQ2YH4i2bmyAjV3WlmRKEv0HaMo
KlAeQ34+nH+tU4WuIrUF/EL7P7kQIFtgNbKCgT4Wsp2sxSLiZ3Kk8JI6rrmW
fKsDVslsgxmCNBkzJZatPRxhkdoHzuos5JocicVPQzKHBaV1eQtNatLUjdmM
q9+Ysvlz1ggE9SOiR9MZPvEYHbxAz9V/OVit2zT5wl+wMdJde6pXdkSd/MIQ
zsrxJLfD90TYHeO7OjhWrH2SiGSOLRGui6JZ6sHBgSXLy1uHUqfRyj2rgTMX
rF76yWl6ooonPWdZ7eG/Iz1PlsRgcem8hABd3+RRxbsA4hgXymifLYcWQGJu
WQ6B94tER3yyPTSb2tUhXM7agFnZE/88AO7zOyZqcmvW6Spy5MFjiVsrnak7
1oCl3H3QnRaMT9zfYGD3rbjr1ErIwbHPhi+JOrn3jql2ycD2PuNoNp7HjR4l
XELB6rIpdvq3HELaPEw9CCFoxX1GkzatvHR21HpPmtqvaIC5XeRVPgyjDKlA
JV5wDIj9JwkL4dqM9BxNTGr3Jxdalm+zozU6hOpNB8fBcssy9unh5S7l2iY7
T7BRdEXtICXXGnE0WIw8uepH+42xTMm95PKdlGXLSIIRRSRS+WIra0AwLqIU
cI9+l/g+As/UD2CUiLi2a2fVPoXO1JHZ3h1RTHFb+Cx7xZNIK0VrpO1e3TMk
yyXyvPPGO/wGBlNzP4Fnl5+fceqiLEnQlv9+Itlzw82traaVm8DMM04sYCim
LTW1Jyh9xPYyooPx5KjphMc4wErKZU3LeXxO/56dtfQqwmYcn0Dxe11ZqZng
7Tmj3/n820v1ycMXhsR3/f7x+ZQG4ocu8NtUhp3/zXExIr81Pq4fTB46H4w9
st2Lk+1GTYVo5GQMrQCLRiAeU/yqzaN3mXXSs0ZVvt+MT4Vkz1aw/9XAHfEM
WFU8a69RAJofe+/9et9lF2cXX2VfZvcvpNbHVvbgi4f0RfRK8Dl8h3Sg+7iT
bToYbtjVQut3bInp2sdWiCeSmc7n55e0uvP54/PsC8TD4yX+PioPEvVBauus
I6k2OpXWRRAFJRz+xgbQgNQ3jo2TRBBAaKUhuldPs1jPzStuZgAbTe7PVfRx
yf1vykOjNjX+YrBH4mfnJEhma3az7jxK/7VaUGbCQ7bqYxakDIvYmEc1BSRl
V3W/sJqNm1BsytxBtDZrELTwFxQpcyWGBT211w6onHosMNHGI3ynDWc3KRuv
UavpnaMxAVyeQ1tVu5DRgGtieDTBWlH4l8dxjV9L8MXWlcTmW7IhvZHGbwQ2
Jswv8/eWmP+pS8lOm+CeGvFxIjY6LubtEt0i6ZsXPo768d5SPv6kTZG0aWPQ
yULrtMS7NHaTiyVQNfm+XFVHuTnR/FUSJM9HrRa+dUB7X5zeJs9hMqBARTv5
GWeE7T4n3QkeWL7+BS3AqlLaR7b4XQXz/R+fv37A2RqQ3HnTipN4IwGHttPH
+K1cMgURxbZFEIl1rL6AC8PZavMIEji9oIxtId8uN1qNNHu6OOeO/I+tee1J
RotLVm05WC+urt7824w2wI36vYoRwP1Lv+XaVvaaXTBusifgsZhv7pnl1BNQ
y23qdOfh2K6vvV7COMcugigbYZrZvQKsj5hPidFXQ4L+Nhf2a0aNRfS3KQ6Q
CJNdFKryf2XXwUq9/o5ZlJgjPKoc92DD8qr6TL4RF1MdsgKcAgLn8ZVauPLK
xTePWSWczaBuKBJbm2uCsGGCmMpOIimCnSLD4uNllOPkXztn0ptJ/6+5R0oh
7Ui9ouFQbSgkNnli8WcfGjFwEk4KdMJdCrhMQSsMi187lwAep4qnFVMsyj/T
hxC2l+wgNFTU4fiSAfm+dbn03teDi9MqGB/T8lDLx6PBQsMIzrPt93K906Ax
OzddQO5qVIzPWeR8lVYcREvCtNIqWC6rx2BPo7L9HzUgeBUZPh/vWZoCR4ah
zEnNASceyCWwLCxlyDu7X2uGo9yVAyPMarV40dEiLCopdw5Ko2IL0PPtOpFJ
gESGkVcHE7NLrybVTkrD4jz7dcFpbnZVrOiv+qFd0KFGg93n5q/DNCXVx+2k
G2LaICBKDQq9buzyzFwTl3kQbqEqar5LhEB8D0LoPR3f7aXdsnlBSIAf3gPu
I6hK2+Kxj56YmFWBCrNBozrTA/wyuHrI3Eex9QorbzJyHnLlz1rCz1LXErWq
YXEUXnJ2iKFM05cUFU1Tc9b0DvaGv9BAPm81ZdZpjoJdexel4Q26EniTeQSo
c7VOdWiugSMbmlEvaZ0YN4UKvqzYkz7hdtnimwhGckQnUXYzExJ4oNyF7eOV
6MstckAtuNO4P1Ii4pu677xnU2Nllo2q163MzdJge3NAAiPpCoMEUU4zlbyJ
k44ocXrJJ6WCOBsW0ztf1iVlfj7bhXs0Sc9kWOrSrch7koo2ciQ5n7MuTmMo
yRIrkz4ncoHoOH+aM3fLV+jJdqjFd2/MQXRCvis4vO2Sft9wdj4hkHOXOuYy
Jkp8cb//KhIgsmSfP2bvMB/5jNgRTTOaLKTyjM3Dbb6ZnPRAfCrAp+F1IiaD
uIkV0qH8fUeS32z5F9JFgcfrUAxYecTGLX1iJfgguHXF9Vdju6h2Kb77Wq2k
gWPRshvFwWg539yyVAJT3AtqcKCCQNxGhJSHLtGgeRIVwkLEPmMGRRwauQnP
82pLbSOyhqVpS2OYWK2MEjPf4NDIZX6Ws4jy/Al3gy84/mvXO7zQck+9TXs6
ovktipBC5DUbfcqAT6otSbhyFSeG2NQ+du6nj2e3dCO7ztuuhhgXyT7pWPJc
tOk2TEe+Kapm/UlzxIwUg/uWVhLV2cUMbKCgvLCa+XTBms8M5bDvEJSXtrpD
YEhb2mRIv9QPuI9d7dv0+hrum8ycLdMedVz3hUCQFGUQO98gbdvgOkEvReQI
qotbQefi87GwK9941/jKvXL07lI0UswPUjNSoNG+qxesv5oLnjNpbDYNKnhL
MmGJmgge0pq8L7bYNLBOMRvfNdF5v/GoPJo7XwHty4NBBdrrM80HiXJ08fBu
xeFlWe/UxdnBvgaPZf9JtMBf6E3vHJAdxELY6b1QbLz98PzqQXxNtzX7x415
t+UKTTsHt2k3hSu3vL9OSsJ5MGhl0ZXNop5x4RBwXzuMZDQbZ2xxomde8R0M
w2YKnI6x0Hb5BBuZV8SKXI1ptbAIlvcS6HwRFYbzvdy13K8gJoAlA+GKPekc
wmZznPmd8s9R5cvOnjQ7vdYvdM7winU85uA+GW7lklz954gdS4M5pvuoUaE3
uYwYeYvMOmMLB9IFOq4JF9X1QrJ+vATLNZeu5D4bXnwDgU8GqymmQFUiTdxw
zrEWLiB0xud5CjMRsP9lYtmk6xtpjncikfmMxUHF0mjEaQMlS9Prh/xKM8Nj
qfzg9A6ulEdamQ2a/ZGFCqCcpt9q1bEI3y3tbNnKrbxkaXDJQu3zIuH7WRTR
fdu4hUB7qflCxFRd4vTacpdCenpyeizVxamqBQ0uFDSMZgwPmnzCLCHDrbH7
Apzc+sJ8TNTHSH02S2FurXcl3A7lbcpag8k7rc1NsoJUV6z3EvKKW30kOZjI
+NbqvLbriSCFXTEWnibnemzwqaB6AWrfen8qegGr064hSLQhGhengsAwC+46
dhQTH8Z3i76smO9a9Tu7Gbf5Lrd+vfHQxzgzclT35C7NQTswNLN7XGJnA+xV
ti12SeKuNg5KXRzmZkm1HOt+q86Zk5nFZYO5B1caD/OEAxK9eDW1BXxmOs6T
YAQ5lFLcob2SVP31K4L6PMj7Nn8uKdCluYDii9TC+tmsbl209hGICXe0xqTe
Bc+d/aQtg7985a6GhdFtj4EVqv/I9xt9x06lkQXgZoYdhi0sQZQLXfQeFg4n
LXiI58iIhYvD8BDP+dRmN1DwfIM/zUsJec8qRWNUAqiHZ5pk6Euc8fuotmMa
Fx1NHdhYGFH8GMnFDj/7DIlQrq5ZaxZBYkUlVN3G0JokDSouzi6HDcetKsGr
Qb7xm79jLCh+WdTiBF4OfwsO5Jrvpb3SbeCSedT7QCnDNpDE2hY+XGPhL4QX
zx6ZcRNfbzlYloi4n2KDk4WL4LWGHdVsZjUbp8fqfWNuGRQ+32qzJTP51Svn
gjNr7ifh22fItkYSL+mylY8ot09CG4BxljmVq+6Ov5dEm6qtxVSSfmamgg3b
Mrcuuk08CtdxraFk9SZa6XjVN0NCu48M7GQx0OyO85I1HY3BtYPaWHV/dse9
1rjFV8gPnVCDxssehM7XvqWZinoXoTezBvJgyi2NYEB4R5XMELxXaSpO3/ov
ZOOI+ek25+6nwJtzxU5tuPrBwg9xb/NbFUFixcoUV480K4q7r5i6L2NJapjx
dGF7wXV6uDlGfYnAQ8ToVBHlM5LtQjnxIp+0tPSEKzqYl7cYq+CJulJUFlkF
/64hX/qbqIITq3bWO05OYEPnhzIBvsOJDXtBZ+kzFWcyaxUKa2UCHssAtUiR
bsVxmt2ybJb9FtGzpRruSE8ugfvGH4w0w+XVchRkOElg6OJyzrmrhaYqQuNZ
SD4a18ygpKZoglqqMF9qtp3LeduRJvGE2z9wEWLcOSlOKu0AJc7Lo/+x/iI8
fNAAX29OiOtpAAdiCl3fLKK2MEmo3LrdAxUq1EA/CmwkRDWhbPp71d7hcjLn
u8aZeudjR2lbf0OstNzUBUcXwzWEJ03aAueYYdHZ8MFGwtJ55hbdmAsrXn1K
EqPHIewE/ciskK0yJcHFHt+dK4lU97K3bOe8zNGFqFBPLn+0lY+04EVvAIsv
j+HHpJTFh5ueKHFIaF01+xOLRA0RUpis4f+h1LxSsxffs70oDowoIzLj9uyt
B5i1NY/vep94XYHvu1BBhm08TDStUNXmqzm5D3vEq5PJOE6t/RtChjxSdcJf
T4ZKvz5vvW+shBIN4g1m0+DvnJ70iwwBj3LHnT+je+S1b8QLuzug9S3fvccV
r+BaV1ivvpIPW55GNnNcei+t07NBO8yTkvvXkgfsW7K2pjZHW+fmv2D++KqO
QpJ6aGxSPxwovKqc2h2cspIkAJq2H2CI2aEEl3MW1HM+IP/NyQGxPSzajQ/H
4HTiLhh6u+gg4WhiPZde3BW3GwGKYJR3X/D9bLx6zBkALuE2ZtAmeS0/K14c
o/ucPnltux11ywfLJHHMJ84TjyaPrAXG6eK1jhrOoihC91TvlMgyvgQuIRPv
yMch2B9jZ2CSPHVWQLHPkvbinzsXYRn0wl0UgUsWTkparRflOD0wj4KBxKHL
UJ13QhEKBYUB8LYLxF+c9Jb1SaxcIa12Gbup++2WLOu/hCKpAcd1qzLfNDmu
47Q8u+G/t6C08Uuhst/u+Pz2js+//JfZ+L8v75pg/OPPzPwPvPBlNP/f9cJv
AgwSFWMLu2uG3/6BGXSaO/79D296ZFF4gZ6IHvpXWo2vcjpdGl74P/y/dIbf
9P+jM/w2NlBmH75BRplPW/vsC8lmbMFf3vXCHZ/fnn4eY8bpt7+FerCRMb/E
vn9L3416mvMH8nm6+i/95yf/7vr89vTz05WnMDOraWzMGFs/s5IxIv7S+Adr
gaQUNdAzzb7V1iNRiuiiqMrCTDLk82hWeMExytYG4PAXKVSvG9VyO6ufnTqz
Lw5BcWBtN46H68Uc67quhAX6e1XliitU7hJik/VbbQdxXtYF/OOc7sadcuEL
S56PJkS7G/Y//BPA8OLpq6cnIJAi2prMKEQQpdeIPJn7O9RnSF2DqkODPPVy
QboCf3wiWUPF6rsJXxw+QUvu19+/jiUIjSH//i+yTPHkQ7MAAA==

-->

</rfc>

