<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.1 (Ruby 3.2.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wang-ppm-differential-privacy-00" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.18.2 -->
  <front>
    <title abbrev="DP-PPM">Differential Privacy Mechanisms for DAP</title>
    <seriesInfo name="Internet-Draft" value="draft-wang-ppm-differential-privacy-00"/>
    <author fullname="Junye Chen">
      <organization>Apple Inc.</organization>
      <address>
        <email>junyec@apple.com</email>
      </address>
    </author>
    <author fullname="Audra McMillan">
      <organization>Apple Inc.</organization>
      <address>
        <email>audra_mcmillan@apple.com</email>
      </address>
    </author>
    <author fullname="Christopher Patton">
      <organization>Cloudflare</organization>
      <address>
        <email>chrispatton+ietf@gmail.com</email>
      </address>
    </author>
    <author fullname="Kunal Talwar">
      <organization>Apple Inc.</organization>
      <address>
        <email>ktalwar@apple.com</email>
      </address>
    </author>
    <author fullname="Shan Wang">
      <organization>Apple Inc.</organization>
      <address>
        <email>shan_wang@apple.com</email>
      </address>
    </author>
    <date year="2023" month="October" day="23"/>
    <area>Security</area>
    <workgroup>Privacy Preserving Measurement</workgroup>
    <abstract>
      <?line 246?>

<t>Differential Privacy (DP) is a property of a secure aggregation mechanism that
ensures that no single input measurement significantly impacts the distribution
of the aggregate result. This is a stronger property than what systems like the
Distributed Aggregation Protocol (DAP) are designed to achieve. This document
describes a variety of DP mechanisms applicable to DAP and, for a number of
common use cases, lifts DAP to a protocol that also achieves DP.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://wangshan.github.io/draft-wang-ppm-differential-privacy/draft-wang-ppm-differential-privacy.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-wang-ppm-differential-privacy/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Privacy Preserving Measurement Working Group mailing list (<eref target="mailto:ppm@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/ppm/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/ppm/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/wangshan/draft-wang-ppm-differential-privacy"/>.</t>
    </note>
  </front>
  <middle>
    <?line 255?>

<section anchor="introduction">
      <name>Introduction</name>
<t>[TO BE REMOVED BY RFC EDITOR: The source for this draft and the reference
implementations of mechanisms and policies can be found at
https://github.com/wangshan/draft-wang-ppm-differential-privacy.]</t>
      <t>Multi-party computation systems like the Distributed Aggregation Protocol
<xref target="DAP"/> enable secure aggregation of measurements
generated by individuals without handling the measurements in the clear. This
is made possible by using a Verifiable Distributed Aggregation Function
<xref target="VDAF"/>, the core cryptographic component of DAP.
Execution of a VDAF involves: a large set of "Clients" who produce
cryptographically protected measurements, called "reports"; a small number of
"Aggregators" who consume reports and produce the cryptographically protected
aggregate; and a "Collector" who consumes the plaintext aggregate result.
Distributing the computation of the aggregate in this manner ensures that, as
long as one Aggregator is honest, no attacker can learn an honest Client's
measurement.</t>
      <t>Depending on the application, protecting the measurements may not be sufficient
for privacy, since the aggregate itself can reveal privacy-sensitive
information. As an illustrative example, consider using DAP/VDAF to summarize
the distribution of the heights of respondents to a survey. If one of the
respondents is especially short or tall, then their contribution is likely to
skew the summary statistic in a way that reveals their height. Ideally, no
individual measurement would have such a significant impact on the aggregate
result, but in general such leakage is inevitable for exact aggregates.
However, adding carefully chosen noise to the aggregate can help hide the
contribution of any one respondent.</t>
      <t>This intuition can be formalized by the notion of differential privacy <xref target="DMNS06"/>.
Differential privacy is a property of an algorithm or protocol that computes
some function of a set of measurements. We say the algorithm or protocol is
"differentially private", or "DP", if the probability of observing a particular
output does not change significantly as a result of removing one of the
measurements (or substituting it with another).</t>
      <t>VDAFs are not DP on their own, but they can be composed with a variety of
mechanisms that endow them with this property. All such mechanisms work by
introducing noise into the computation that is carefully calibrated for a
number of application-specific parameters, including the structure and number
of measurements, the desired aggregation function, and the degree of "utility"
required. Intuitively, a high degree of privacy can be achieved by adding a lot
of noise; but adding too much noise can reduce the usefulness of the aggregate
result.</t>
<t>Noise can be introduced at various steps of the computation, and by various
parties. Depending on the mechanism: the Clients might add noise to their own
measurements; and the Aggregators might add noise to their aggregate shares (the
values they produce for the Collector).</t>
      <t>In this document, we shall refer to the composition of DP mechanisms into a
scheme that provides (some notion of) DP as a "DP policy". For some policies,
noise is added only by the Clients or only by the Aggregators, but for others,
both Clients and Aggregators may participate in generating the noise.</t>
      <t>The primary goal of this document is to specify how DP policies are implemented
in DAP. It does so in the following stages:</t>
      <ol spacing="normal" type="1"><li>
          <t><xref target="overview"/> describes the notion(s) of DP that are compatible with DAP.
Security is defined in a few different "trust models" in which we assume
that some fraction of the parties execute the protocol honestly. Of course
in reality, whether such assumptions hold is usually outside of our control.
Thus our goal is to design DP policies that still provide some degree of
protection in more pessimistic trust models. (We call this "hedging".)</t>
        </li>
        <li>
          <t><xref target="mechanisms"/> specifies various mechanisms required for building DP
systems, including algorithms for sampling from discrete Laplace and
Gaussian distributions.</t>
        </li>
        <li>
<t><xref target="policies"/> defines DP policies, their implementation in terms
of DP mechanisms, and their composition with VDAFs. <xref target="run-vdaf-with-dp-policy"/> then demonstrates how to execute VDAFs with DP policies.</t>
        </li>
        <li>
          <t><xref target="use-cases"/> specifies compositions of concrete VDAFs with concrete DP
policies for achieving DP for specific DAP use cases.</t>
        </li>
      </ol>
      <t>The following considerations are out-of-scope for this document:</t>
      <ol spacing="normal" type="1"><li>
          <t>DP policies have a few parameters that need to be tuned in order to meet the
privacy/utility tradeoff required by the application. This document provides
no guidance for this process.</t>
        </li>
        <li>
          <t>This document describes a particular class of narrowly-scoped DP policies.
Other, more sophisticated policies are possible. [TODO: Add citations. Here
we're thinking of things like DPrio, which may be more appropriate to
specify as DAP report extensions.]</t>
        </li>
        <li>
          <t>The mechanisms described in <xref target="mechanisms"/> are intended for use beyond
DAP/VDAF. However, this document does not describe general-purpose DP
policies; those described in <xref target="policies"/> are tailored to specific VDAFs or
classes of VDAFs.</t>
        </li>
      </ol>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

<t>This document uses Python3 for describing algorithms.</t>
      <t>Let <tt>exp(EPSILON)</tt> denote raising the numeric constant <tt>e</tt> to the power of
<tt>EPSILON</tt>.</t>
    </section>
    <section anchor="overview">
      <name>Security Goals and Trust Model</name>
      <section anchor="differential-privacy">
        <name>Differential Privacy</name>
        <t>DP formalizes a property of any randomized algorithm that takes in a sequence
of measurements, aggregates them, and outputs an aggregate result. Intuitively,
this property guarantees that no single measurement significantly impacts the
value of the aggregate result. This intuition is formalized by <xref target="DMNS06"/> as
follows.</t>
        <ul empty="true">
          <li>
<t>CP: The following might be too jargony for an informative RFC, but for now
I think we're all just trying to agree on the definition. Once we have
consensus among ourselves, we can punt this to the appendix and leave a less
formal description here.</t>
          </li>
        </ul>
<t>DP requires specifying a notion of "neighboring" datasets, which determines
what information is being hidden. The most common notion for our setting would
be the following:</t>
        <t>We say that two batches of measurements <tt>D1</tt> and <tt>D2</tt> are "neighboring" if they
are the same length and contain all the same measurements except one (i.e., the
symmetric difference between the multisets contains two elements). We denote
this definition as "replacement-DP" (or "substitution-DP").
<xref target="neighboring-batch"/> discusses other notions of adjacency that may be
appropriate in some settings.</t>
        <t>Let <tt>p(S, D, r)</tt> denote the probability that randomized algorithm <tt>S</tt>, on input
of measurements <tt>D</tt>, outputs aggregate result <tt>r</tt>.</t>
        <t>A randomized algorithm <tt>S</tt> is called "<tt>EPSILON</tt>-DP" if for all neighboring <tt>D1</tt>
and <tt>D2</tt> and all possible aggregate results <tt>r</tt>, the following inequality
holds:</t>
        <artwork><![CDATA[
p(S, D1, r) <= exp(EPSILON) * p(S, D2, r)
]]></artwork>
        <t>In other words, the probability that neighboring inputs produce a given
aggregate result differs by at most a constant factor, <tt>exp(EPSILON)</tt>.</t>
        <t>One can think of <tt>EPSILON</tt> as a measure of how much information about the
measurements is leaked by the aggregate result: the smaller the <tt>EPSILON</tt>, the
less information is leaked by <tt>S</tt>. For most DP applications, <tt>EPSILON</tt> will
be a small constant, e.g. 0.1 or 0.5. See <xref target="dp-explainer"/> for details.</t>
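        <t>To make the definition concrete, the pure-DP inequality can be
checked exhaustively for a simple mechanism: binary randomized response over a
single bit, a building block of <xref target="symmetric-rappor"/>. The
following sketch is illustrative only and not part of the protocol:</t>
        <artwork><![CDATA[
import math

# Binary randomized response: flip the input bit with
# probability 1 / (exp(EPSILON) + 1).
EPSILON = 0.5
p_flip = 1.0 / (math.exp(EPSILON) + 1.0)

def p(measurement, output):
    """Probability that the mechanism maps `measurement` to
    `output`."""
    return p_flip if output != measurement else 1.0 - p_flip

# Exhaustively check the pure-DP inequality for all neighboring
# inputs and all outputs. The worst-case ratio is exactly
# exp(EPSILON).
for (d1, d2) in [(0, 1), (1, 0)]:
    for r in [0, 1]:
        assert p(d1, r) <= math.exp(EPSILON) * p(d2, r)
]]></artwork>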
        <t>This notion of <tt>EPSILON</tt>-DP is sometimes referred to as "pure-DP". The
following is a relaxation of pure-DP, called "approximate-DP", from <xref target="DR14"/>. A
randomized algorithm <tt>S</tt> is called "<tt>(EPSILON, DELTA)</tt>-DP" if for all
neighboring <tt>D1</tt> and <tt>D2</tt> and all possible aggregate results <tt>r</tt>, the following
inequality holds:</t>
        <artwork><![CDATA[
p(S, D1, r) <= exp(EPSILON) * p(S, D2, r) + DELTA
]]></artwork>
        <t>Compared to pure-DP, approximate-DP loses an additive factor of <tt>DELTA</tt> in the
bound. <tt>DELTA</tt> can intuitively be understood as the probability that a piece of
information is leaked (e.g. a Client measurement is leaked), so <tt>DELTA</tt> is
typically taken to be polynomially small in the batch size or smaller, i.e.,
some value much smaller than <tt>1 / batch_size</tt>. Allowing for a small <tt>DELTA</tt> can
in many cases allow for much smaller noise compared to pure-DP mechanisms. See
<xref target="mechanisms"/> for details.</t>
        <t>Other variants of DP are possible; see the literature review in
<xref target="dp-explainer"/> for details.</t>
      </section>
      <section anchor="sensitivity">
        <name>Sensitivity</name>
        <t>Differential privacy noise sometimes needs to be calibrated based on the
<tt>SENSITIVITY</tt> of the aggregation function used to compute the aggregate result
over Client measurements. Sensitivity characterizes how much a change in a
Client measurement can affect the aggregate result. In this document, we focus
on the L1 and L2 sensitivity. We define them as a function over two neighboring
Client measurements:</t>
        <ul spacing="normal">
          <li>
            <t>L1 Sensitivity: the sum of the absolute values of differences at all coordinates
of the neighboring Client measurements.</t>
          </li>
          <li>
<t>L2 Sensitivity: the square root of the sum of the squares of the
differences at all coordinates of the neighboring Client measurements.</t>
          </li>
        </ul>
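        <t>As an illustrative, non-normative sketch, these two quantities can
be computed for a pair of neighboring measurement vectors as follows:</t>
        <artwork><![CDATA[
import math

def l1_sensitivity(d1, d2):
    """Sum of the absolute coordinate-wise differences."""
    return sum(abs(a - b) for (a, b) in zip(d1, d2))

def l2_sensitivity(d1, d2):
    """Square root of the sum of the squared coordinate-wise
    differences."""
    return math.sqrt(sum((a - b) ** 2 for (a, b) in zip(d1, d2)))

# Two neighboring one-hot measurements (e.g., histogram
# contributions) differ at exactly two coordinates.
d1 = [1.0, 0.0, 0.0]
d2 = [0.0, 1.0, 0.0]
assert l1_sensitivity(d1, d2) == 2.0
assert l2_sensitivity(d1, d2) == math.sqrt(2.0)
]]></artwork>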
      </section>
      <section anchor="trust-models">
        <name>Trust Models</name>
        <t>When considering whether a given DP policy is sufficient for DAP, it is not
enough to consider the mechanisms used in isolation. It is also necessary to
consider the process by which the policy is executed. In particular, our threat
model for DAP considers an attacker that participates in the upload,
aggregation, and collection protocols and may use its resources to attack the
underlying cryptographic primitives (VDAF <xref target="VDAF"/>, HPKE <xref target="RFC9180"/>, and TLS
<xref target="RFC8446"/>).</t>
        <t>To account for such attackers, our goal for DAP is "computational-DP" as
described by <xref target="MPRV09"/> (Definition 4, "SIM-CDP"). This formalizes the amount
of information a computationally bounded attacker can glean about the
measurements generated by honest Clients over the course of its attack on the
protocol. We consider an attacker that controls the network and statically
corrupts a subset of the parties.</t>
        <t>We say the protocol is pure-DP (resp. approximate-DP) if the view of any
computationally bounded attacker can be efficiently simulated by a simulator
that itself is pure-DP (or approximate-DP) as defined above. (The simulator
takes the measurements as input and outputs a value that should be
computationally indistinguishable from the transcript of the protocol's
execution in the presence of the attacker.)</t>
        <t>Whether this property holds for a given DP policy depends on which parties can
be trusted to execute the protocol correctly (i.e., which parties are not
corrupted by the attacker). We consider three, increasingly pessimistic trust
models.</t>
        <ul empty="true">
          <li>
            <t>KT(issue#28): Here we seem to be assuming corrupted = malicious. Is there any
benefit to a more refined distinction (i.e. honest-but-curious vs malicious).
I suspect we would always want secure against malicious, but perhaps there are
settings where we are OK with security against bad behavior that is not
catchable during an audit.</t>
          </li>
        </ul>
        <section anchor="oamc-one-aggregator-most-clients">
          <name>OAMC: One-Aggregator-Most-Clients</name>
          <t>Assume that most Clients and one Aggregator are honest and that the other
Aggregator (DAP involves just two Aggregators) and the Collector are controlled
by the attacker. When all Clients are honest, this corresponds to the same
trust model as the base DAP protocol. The degree of privacy provided (i.e., the
value of <tt>EPSILON</tt> for pure-DP) for most protocols in this setting would
degrade gracefully as the number of honest Clients decreases.</t>
        </section>
        <section anchor="oaoc-one-aggregator-one-client">
          <name>OAOC: One-Aggregator-One-Client</name>
          <t>Assume that at least one Aggregator is honest. The other Aggregator, the
Collector, and all but one Client are controlled by the attacker. The goal is
to provide protection for the honest Client's measurement.</t>
          <ul empty="true">
            <li>
              <t>TODO(issue#15) For the simple Aggregator-noise mechanism, if the malicious
Aggregator cheats by not adding noise, then the aggregate result is not DP
from the point of view of the honest Aggregator, unless the honest
Aggregator "forgets" the randomness it used. Describe this "attack" in
"Security Considerations" and say why it's irrelevant.</t>
            </li>
          </ul>
        </section>
        <section anchor="oc-one-client">
          <name>OC: One-Client</name>
          <t>Assume that all parties, including all but one Client, both Aggregators, and
the Collector are controlled by the attacker. The best the honest Client can
hope for is that its measurement has "local-DP". This property is defined the
same way as pure- or approximate-DP, but the bound on <tt>EPSILON</tt> that we would
aim to achieve for local-DP would typically be larger than that in a more
optimistic trust model.</t>
        </section>
      </section>
      <section anchor="hedging">
        <name>Hedging</name>
        <t>If a trust model's assumptions turn out to be false in practice, then it is
desirable for a DP policy to maintain some degree of privacy in a more
pessimistic trust model. For example, a deployment of DAP might provide some
mechanism to ensure that all reports that are consumed were generated by
trusted Clients (e.g., a Trusted Execution Environment (TEE) at each Client).
This gives us confidence that our assumptions in the OAMC trust model hold. But
if this mechanism is broken (e.g., a flaw is found in the TEE), then it is
desirable if the policy remains DP in the OAOC or OC model, perhaps with a
weaker bound.</t>
      </section>
    </section>
    <section anchor="mechanisms">
      <name>DP Mechanisms</name>
      <t>This section describes various mechanisms required for implementing DP
policies. The algorithms are designed to securely expand a short, uniform
random seed into a sample from the given target distribution.</t>
      <ul empty="true">
        <li>
          <t>TODO For now we don't actually expand a seed into a sample; we just use coin
flips that are local to the relevant algorithm. Update the API to take a
random seed instead so that we can derive test vectors.</t>
        </li>
      </ul>
<t>Each mechanism has internal parameters that determine how much noise will be
added to its input data. Note that a mechanism initialized with fixed internal
parameters can satisfy multiple combinations of DP parameters at once, e.g.,
both <tt>(EPSILON, DELTA)</tt>-DP and <tt>(EPSILON', DELTA')</tt>-DP, where <tt>EPSILON &lt; EPSILON'</tt>
and <tt>DELTA &gt; DELTA'</tt>: accepting a larger <tt>EPSILON</tt> (i.e., weaker privacy in
that parameter) may permit a smaller <tt>DELTA</tt> (i.e., a stronger guarantee in
the other).</t>
      <t>A concrete DP mechanism implements the following methods:</t>
      <ul spacing="normal">
        <li>
          <t><tt>DpMechanism.add_noise(data: DataType) -&gt; DataType</tt> adds noise to the input
<tt>data</tt> (i.e., a measurement or an aggregate share). Some DP mechanisms apply
noise based on the input data.</t>
        </li>
        <li>
<t><tt>DpMechanism.sample_noise(dimension: int) -&gt; DataType</tt> samples noise of
length <tt>dimension</tt> with the DP mechanism. The interpretation of <tt>dimension</tt>
generally depends on the data type. This method may be called by <tt>DpMechanism.add_noise()</tt>.</t>
        </li>
        <li>
          <t><tt>DpMechanism.debias(data: DataType, meas_count: int) -&gt; DebiasedDataType</tt>
debiases the noised <tt>data</tt> based on the number of measurements <tt>meas_count</tt>.
Note that not all mechanisms will implement this method. Some do, such as
<xref target="symmetric-rappor"/>.</t>
        </li>
      </ul>
<t>Putting this together, a DP mechanism is a concrete subclass of <tt>DpMechanism</tt>
defined below:</t>
      <artwork><![CDATA[
class DpMechanism:
    # The data type applicable to this `DpMechanism`. The type is the
    # same for the noised data and the un-noised data.
    DataType = None
    # Debiased data type after removing bias added by the noise. For
    # most of the mechanisms, `DebiasedDataType == DataType`.
    DebiasedDataType = None

    def add_noise(self, data: DataType) -> DataType:
        """Add noise to a piece of input data. """
        raise NotImplementedError()

    def sample_noise(self, dimension: int) -> DataType:
        """
        Sample noise with the initialized `DpMechanism`. `dimension`
        is used to determine the length of the output if `DataType` is
        a list.
        """
        raise NotImplementedError()

    def debias(self,
               data: DataType,
               meas_count: int) -> DebiasedDataType:
        """
        Debias the data due to the added noise, based on the number of
        measurements `meas_count`. This doesn't apply to all DP
        mechanisms. Some Client randomization mechanisms need this
        functionality.
        """
        return data
]]></artwork>
      <section anchor="discrete-laplace">
        <name>Discrete Laplace</name>
        <ul empty="true">
          <li>
            <t>TODO: Specify a Laplace sampler from Algorithm 2 of <xref target="CKS20"/> (#10).</t>
          </li>
        </ul>
      </section>
      <section anchor="discrete-gaussian">
        <name>Discrete Gaussian</name>
        <ul empty="true">
          <li>
            <t>TODO: Specify a Gaussian sampler from Algorithm 3 of <xref target="CKS20"/> (#10).</t>
          </li>
        </ul>
      </section>
      <section anchor="symmetric-rappor">
        <name>Symmetric RAPPOR</name>
        <t>This section describes Symmetric RAPPOR, a DP mechanism first proposed in
<xref target="EPK14"/>, and refined in Appendix C.1 of <xref target="MJTB_22"/>. (The specification here
reflects the refined version.) It is initialized with a parameter <tt>EPSILON_0</tt>.
It takes in a bit vector and outputs a noisy version with randomly flipped
bits.</t>
<t>Symmetric RAPPOR applies the "binary randomized response" mechanism at each
coordinate. Binary randomized response takes in a single bit <tt>x</tt>. The bit is
flipped to <tt>1 - x</tt> with probability <tt>1 / (exp(EPSILON_0) + 1)</tt>. For example, if
<tt>EPSILON_0</tt> is configured to be 3, and the input to binary randomized response
is a <tt>0</tt>, the bit will be flipped to a <tt>1</tt> with probability <tt>1 / (exp(3) + 1)</tt>,
otherwise, it will stay as a <tt>0</tt>.</t>
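        <t>The flip probability for this example can be checked directly (an
illustrative sketch, not part of the mechanism's specification):</t>
        <artwork><![CDATA[
import math

# With EPSILON_0 = 3, binary randomized response flips each bit
# with probability 1 / (exp(3) + 1), i.e., roughly 4.7% of the
# time; the bit is reported faithfully otherwise.
EPSILON_0 = 3.0
p_flip = 1.0 / (math.exp(EPSILON_0) + 1.0)
assert abs(p_flip - 0.0474259) < 1e-6
]]></artwork>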
<t>Under the OC trust model, by applying binary randomized response with the <tt>EPSILON_0</tt>
parameter to its measurement, the Client gets <tt>EPSILON_0</tt>-DP with deletion
(Definition II.4 of <xref target="EFMR_20"/>, and Definition C.1 of <xref target="MJTB_22"/>). A formal
definition of deletion-DP is elaborated in <xref target="rappor-deletion-dp"/>.</t>
<t>Symmetric RAPPOR generalizes the binary randomized response mechanism by applying
it independently at all coordinates of a Client's bit vector. Under the OAMC trust
model, it is proven in Appendix C.1.3 of <xref target="MJTB_22"/> that strong <tt>(EPSILON,
DELTA)</tt>-DP can be achieved by aggregating a batch of noisy Client measurements,
each of which is a bit vector with exactly one bit set, and is noised with
symmetric RAPPOR. The final aggregate result needs to be "debiased" due to the
noise added by the Clients. The debiased data type is expressed as a vector of
floats, because of floating point arithmetic. [CP: "because of floating point
arithmetic" is a bit vague.]</t>
        <t>Since the noise generated by each Client at each coordinate is independent, and
as the number of Clients <tt>n</tt> grows, the noise distribution at each coordinate
approximates a Gaussian distribution, with mean 0, and standard deviation
<tt>sqrt(n * exp(EPSILON_0) / (exp(EPSILON_0) - 1)^2)</tt>, as proved by Theorem C.2 of
<xref target="MJTB_22"/>.</t>
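        <t>This can be checked empirically. The following sketch (with
illustrative parameters only) aggregates the debiased noise of <tt>n</tt>
Clients at a single coordinate and compares its magnitude against the standard
deviation above:</t>
        <artwork><![CDATA[
import math
import random

# Empirically check the standard deviation of the aggregated,
# debiased RAPPOR noise at a single coordinate. The parameters
# below are illustrative only.
EPSILON_0 = 1.0
n = 100000
p_flip = 1.0 / (math.exp(EPSILON_0) + 1.0)

# Suppose every Client holds a 0 at this coordinate, so the
# aggregate is pure noise: a sum of n independent coin flips.
total = sum(1 if random.random() < p_flip else 0
            for _ in range(n))

# Debias the aggregate (Appendix C.1.2 of [MJTB+22]).
exp_eps = math.exp(EPSILON_0)
debiased = (total * (exp_eps + 1) / (exp_eps - 1)
            - n / (exp_eps - 1))

# The debiased noise has mean 0 and standard deviation
# sqrt(n * exp(EPSILON_0) / (exp(EPSILON_0) - 1)^2).
expected_std = math.sqrt(n * exp_eps / (exp_eps - 1) ** 2)
assert abs(debiased) < 6 * expected_std
]]></artwork>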
        <section anchor="rappor-deletion-dp">
          <name><tt>EPSILON_0</tt>-DP in Deletion-DP</name>
          <ul empty="true">
            <li>
              <t>JC: We only add a definition of deletion-DP here since this is likely the only
mechanism that provides deletion-DP in the OC trust model. Putting it in
overview might confuse readers early on, because we said only replacement-DP
applies to DAP.</t>
            </li>
          </ul>
          <t>Let <tt>P1(S, D, E)</tt> denote the probability that a randomized algorithm <tt>S</tt>, on an
input measurement <tt>D</tt>, outputs a noisy measurement <tt>E</tt>. Let <tt>P2(R, E)</tt> denote
the probability that a sample from a reference distribution <tt>R</tt> is equal to <tt>E</tt>. A randomized
algorithm <tt>S</tt> is said to be <tt>EPSILON_0</tt>-DP in the deletion-DP model, if there
exists a reference distribution <tt>R</tt>, such that for all possible Client
measurements <tt>D</tt>, and all possible noisy outputs <tt>E</tt>, we have:</t>
          <artwork><![CDATA[
-EPSILON_0 <= ln(P1(S, D, E) / P2(R, E)) <= EPSILON_0
]]></artwork>
          <t>Intuitively, if we think of the reference distribution <tt>R</tt> as an average, noisy
Client measurement, deletion-DP makes it hard to distinguish the Client
measurement <tt>D</tt> from the measurement from an average Client.</t>
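          <t>For intuition, one can check this definition numerically for
binary randomized response, taking the uniform distribution over <tt>{0, 1}</tt>
as the reference distribution <tt>R</tt>. This choice of reference distribution
is made here purely for illustration:</t>
          <artwork><![CDATA[
import math

# Check deletion-DP for binary randomized response, using the
# uniform distribution over {0, 1} as the reference
# distribution R (illustrative choice only).
EPSILON_0 = 1.0
p_flip = 1.0 / (math.exp(EPSILON_0) + 1.0)

def p1(d, e):
    """Probability that randomized response maps bit d to e."""
    return p_flip if e != d else 1.0 - p_flip

def p2(e):
    """Probability that a sample from R is equal to e."""
    return 0.5

# The log-likelihood ratio stays within [-EPSILON_0, EPSILON_0]
# for every measurement d and noisy output e.
for d in (0, 1):
    for e in (0, 1):
        assert -EPSILON_0 <= math.log(p1(d, e) / p2(e)) <= EPSILON_0
]]></artwork>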
        </section>
        <section anchor="reference-implementation">
          <name>Reference Implementation</name>
          <t>Symmetric RAPPOR is specified by the DP mechanism <tt>SymmetricRappor</tt> below.</t>
          <ul empty="true">
            <li>
              <t>TODO: We could make the sampler more efficient if we use binomial.</t>
            </li>
          </ul>
          <artwork><![CDATA[
import math
import random

class SymmetricRappor(DpMechanism):
    DataType = list[int]
    # Debiasing produces an array of floats.
    DebiasedDataType = list[float]

    def __init__(self, eps0: float):
        self.eps0 = eps0
        self.p = 1.0 / (math.exp(eps0) + 1.0)

    def add_noise(self, data: DataType) -> DataType:
        # Apply binary randomized response at each coordinate, based
        # on Appendix C.1.1 of [MJTB+22].
        return list(map(
            lambda x: 1 - x if self.coin_flip() else x,
            data
        ))

    def sample_noise(self, dimension: int) -> DataType:
        # Sample binary randomized response at each coordinate on an
        # all zero vector.
        return [int(self.coin_flip()) for coord in range(dimension)]

    def debias(self,
               data: DataType,
               meas_count: int) -> DebiasedDataType:
        # Debias the data based on Appendix C.1.2 of [MJTB+22].
        exp_eps = math.exp(self.eps0)
        return list(map(
            lambda x: (x * (exp_eps + 1) / (exp_eps - 1) -
                       meas_count / (exp_eps - 1)),
            data
        ))

    def coin_flip(self):
        return random.random() < self.p
]]></artwork>
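          <t>The following non-normative sketch exercises the mechanism end to
end, mirroring <tt>SymmetricRappor.add_noise()</tt> and
<tt>SymmetricRappor.debias()</tt> with the logic inlined so that the example is
self-contained. The batch size, dimension, and <tt>EPSILON_0</tt> are
illustrative only:</t>
          <artwork><![CDATA[
import math
import random

# End-to-end sketch of Symmetric RAPPOR with the logic of
# `add_noise()` and `debias()` inlined.
EPSILON_0 = 2.0
p_flip = 1.0 / (math.exp(EPSILON_0) + 1.0)
DIM = 4
N = 50000

# 60% of Clients report bucket 0, the rest bucket 1; each
# report is a one-hot bit vector noised coordinate-wise with
# binary randomized response, then summed into the aggregate.
true_counts = [int(0.6 * N), N - int(0.6 * N), 0, 0]
agg = [0] * DIM
for (bucket, count) in enumerate(true_counts):
    for _ in range(count):
        for coord in range(DIM):
            bit = 1 if coord == bucket else 0
            if random.random() < p_flip:
                bit = 1 - bit
            agg[coord] += bit

# Debias the aggregate (Appendix C.1.2 of [MJTB+22]).
exp_eps = math.exp(EPSILON_0)
debiased = [x * (exp_eps + 1) / (exp_eps - 1) - N / (exp_eps - 1)
            for x in agg]

# Each debiased coordinate concentrates around the true count.
for (est, true) in zip(debiased, true_counts):
    assert abs(est - true) < 0.05 * N
]]></artwork>
          <t>After debiasing, each coordinate of the aggregate concentrates
around the true count, with the standard deviation given by Theorem C.2 of
<xref target="MJTB_22"/>.</t>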
        </section>
      </section>
    </section>
    <section anchor="policies">
      <name>DP Policies for VDAFs</name>
<t>This section defines a generic interface for DP policies for VDAFs. A DP policy
composes the following operations with execution of a VDAF:</t>
      <ol spacing="normal" type="1"><li>
          <t>An optional Client randomization mechanism that adds noise to Clients'
measurements prior to sharding.</t>
        </li>
        <li>
          <t>An optional Aggregator randomization mechanism that adds noise to an
Aggregator's aggregate share prior to unsharding.</t>
        </li>
        <li>
          <t>An optional debiasing step that removes the bias in DP mechanisms (i.e.
  <tt>DpMechanism.debias</tt>) after unsharding.</t>
        </li>
      </ol>
      <t>The composition of Client and Aggregator randomization mechanisms defines the
DP policy for a VDAF, and enforces the DP guarantee. In particular, a concrete
DP policy is a subclass of <tt>DpPolicy</tt>:</t>
      <artwork><![CDATA[
class DpPolicy:
    # Client measurement type.
    Measurement = None
    # Aggregate share type, owned by an Aggregator.
    AggShare = None
    # Aggregate result type, unsharded result from all Aggregators.
    AggResult = None
    # Debiased aggregate result type.
    DebiasedAggResult = None

    def add_noise_to_measurement(self,
                                 meas: Measurement,
                                 ) -> Measurement:
        """
        Add noise to measurement, if required by the Client
        randomization mechanism. The default implementation is to do
        nothing.
        """
        return meas

    def add_noise_to_agg_share(self,
                               agg_share: AggShare,
                               ) -> AggShare:
        """
        Add noise to aggregate share, if required by the Aggregator
        randomization mechanism. The default implementation is to do
        nothing.
        """
        return agg_share

    def debias_agg_result(self,
                          agg_result: AggResult,
                          meas_count: int,
                          ) -> DebiasedAggResult:
        """
        Debias aggregate result, if either of the Client or
        Aggregator randomization mechanism requires this operation,
        based on the number of measurements `meas_count`. The default
        implementation is to do nothing.
        """
        return agg_result
]]></artwork>
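      <t>As a non-normative example, a hypothetical Client-side RAPPOR policy
might implement this interface as follows. The class is shown standalone
(rather than as a subclass of <tt>DpPolicy</tt> delegating to
<tt>SymmetricRappor</tt>) so that the sketch is self-contained; the name
<tt>RapporClientPolicy</tt> is illustrative:</t>
      <artwork><![CDATA[
import math
import random

# Hypothetical Client-side RAPPOR policy. In this document it
# would be a subclass of `DpPolicy` delegating to
# `SymmetricRappor`; the logic is inlined here so the sketch is
# self-contained.
class RapporClientPolicy:
    Measurement = list[int]
    AggShare = list[int]
    AggResult = list[int]
    DebiasedAggResult = list[float]

    def __init__(self, eps0: float):
        self.eps0 = eps0
        self.p = 1.0 / (math.exp(eps0) + 1.0)

    def add_noise_to_measurement(self, meas):
        # Client randomization mechanism: binary randomized
        # response at each coordinate.
        return [1 - x if random.random() < self.p else x
                for x in meas]

    def add_noise_to_agg_share(self, agg_share):
        # No Aggregator randomization mechanism in this policy.
        return agg_share

    def debias_agg_result(self, agg_result, meas_count):
        # Remove the bias introduced by the Clients' noise.
        exp_eps = math.exp(self.eps0)
        return [x * (exp_eps + 1) / (exp_eps - 1)
                - meas_count / (exp_eps - 1)
                for x in agg_result]

policy = RapporClientPolicy(eps0=2.0)
noisy = policy.add_noise_to_measurement([0, 1, 0, 0])
assert len(noisy) == 4 and all(b in (0, 1) for b in noisy)
]]></artwork>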
      <section anchor="run-vdaf-with-dp-policy">
        <name>Executing DP Policies with VDAFs</name>
        <t>The execution of <tt>DpPolicy</tt> with a <tt>Vdaf</tt> can thus be summarized by the
following computational steps (these are carried out by DAP in a distributed
manner):</t>
        <artwork><![CDATA[
def run_vdaf_with_dp_policy(
    dp_policy: DpPolicy,
    Vdaf: Vdaf,
    agg_param: Vdaf.AggParam,
    measurements: list[DpPolicy.Measurement],
):
    """Run the DP policy with VDAF on a list of measurements."""
    nonces = [gen_rand(Vdaf.NONCE_SIZE)
              for _ in range(len(measurements))]
    verify_key = gen_rand(Vdaf.VERIFY_KEY_SIZE)

    out_shares = []
    for (nonce, measurement) in zip(nonces, measurements):
        assert len(nonce) == Vdaf.NONCE_SIZE

        # Each Client adds Client randomization noise to its
        # measurement.
        noisy_measurement = \
            dp_policy.add_noise_to_measurement(measurement)
        # Each Client shards its measurement into input shares.
        rand = gen_rand(Vdaf.RAND_SIZE)
        (public_share, input_shares) = \
            Vdaf.shard(noisy_measurement, nonce, rand)

        # Each Aggregator initializes its preparation state.
        prep_states = []
        outbound = []
        for j in range(Vdaf.SHARES):
            (state, share) = Vdaf.prep_init(verify_key, j,
                                            agg_param,
                                            nonce,
                                            public_share,
                                            input_shares[j])
            prep_states.append(state)
            outbound.append(share)

        # Aggregators recover their output shares.
        for i in range(Vdaf.ROUNDS-1):
            prep_msg = Vdaf.prep_shares_to_prep(agg_param,
                                                outbound)

            outbound = []
            for j in range(Vdaf.SHARES):
                out = Vdaf.prep_next(prep_states[j], prep_msg)
                (prep_states[j], out) = out
                outbound.append(out)

        # The final outputs of the prepare phase are the output shares.
        prep_msg = Vdaf.prep_shares_to_prep(agg_param,
                                            outbound)
        outbound = []
        for j in range(Vdaf.SHARES):
            out_share = Vdaf.prep_next(prep_states[j], prep_msg)
            outbound.append(out_share)

        out_shares.append(outbound)

    num_measurements = len(measurements)
    # Each Aggregator aggregates its output shares into an
    # aggregate share, and adds any Aggregator randomization
    # mechanism to its aggregate share. In a distributed VDAF
    # computation, the aggregate shares are sent over the network.
    agg_shares = []
    for j in range(Vdaf.SHARES):
        out_shares_j = [out[j] for out in out_shares]
        agg_share_j = Vdaf.aggregate(agg_param, out_shares_j)
        # Each Aggregator independently adds noise to its aggregate
        # share.
        noised_agg_share_j = \
            dp_policy.add_noise_to_agg_share(agg_share_j)
        agg_shares.append(noised_agg_share_j)

    # Collector unshards the aggregate.
    agg_result = Vdaf.unshard(agg_param, agg_shares,
                              num_measurements)
    # Debias aggregate result.
    debiased_agg_result = dp_policy.debias_agg_result(
        agg_result, num_measurements
    )
    return debiased_agg_result
]]></artwork>
      </section>
      <section anchor="dp-in-dap">
        <name>Executing DP Policies in DAP</name>
        <ul empty="true">
          <li>
            <t>TODO: Specify integration of a <tt>DpPolicy</tt> into DAP.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="use-cases">
      <name>Use Cases</name>
      <section anchor="histograms">
        <name>Histograms</name>
        <t>Many applications require aggregating histograms in which each Client submits a
bit vector with exactly one bit set, also known as a "one-hot vector".</t>
        <t>We describe two policies that achieve <tt>(EPSILON, DELTA)</tt>-DP for this use case.
The first uses only a Client randomization mechanism and targets the OAMC trust
model. The second uses only an Aggregator randomization mechanism and targets
the more stringent OAOC trust model. We find that the two policies provide
comparable utility across different settings of <tt>EPSILON</tt> and <tt>DELTA</tt>, except
that the policy in the OAOC trust model requires all Aggregators to
independently add noise, so some utility is lost when more than one Aggregator
is honest.</t>
        <section anchor="prio3multihothistogram-with-client-randomization">
          <name>Prio3MultiHotHistogram with Client Randomization</name>
          <t>Client randomization allows Clients to protect their privacy by adding noise to
their measurements directly, as described in <xref target="levels"/>. Analyses (<xref target="FMT20"/> and
<xref target="FMT22"/>) have shown that, in the OAMC trust model, aggregating Clients' noisy
measurements yields good approximate-DP. In this policy, we describe how to
achieve approximate-DP by having each Client apply symmetric RAPPOR to its
measurement.</t>
          <t>Our target VDAF is Prio3Histogram as specified in <xref target="VDAF"/>, which uses the
<tt>Histogram</tt> circuit to enforce one-hotness of the measurement. Due to the
noising mechanism, a less strict circuit is required, one that tolerates a
bounded number of non-<tt>0</tt> entries in the vector. We call this
Prio3MultiHotHistogram.</t>
          <ul empty="true">
            <li>
              <t>JC: Specify Prio3MultiHotHistogram. This may end up in the base VDAF draft.
In the meantime, there is a reference implementation here:
https://github.com/cfrg/draft-irtf-cfrg-vdaf/blob/main/poc/vdaf_prio3.py</t>
            </li>
          </ul>
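          <t>As a plaintext illustration of the relaxed validity check (the actual
circuit operates on secret-shared field elements; the function name here is
hypothetical), a multi-hot measurement is valid if every entry is a bit and at
most <tt>m</tt> bits are set:</t>
          <artwork><![CDATA[
def is_valid_multi_hot(meas: list[int], m: int) -> bool:
    """
    Plaintext analogue of the MultiHotHistogram validity check:
    every entry must be a bit, and at most `m` bits may be set.
    """
    return all(b in (0, 1) for b in meas) and sum(meas) <= m
]]></artwork>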
          <t>The Client randomization we use here is the symmetric RAPPOR mechanism of
<xref target="symmetric-rappor"/>, which is initialized with an <tt>EPSILON_0</tt> parameter. We get
<tt>(EPSILON, DELTA)</tt>-DP in the aggregate result as long as there are at least
<tt>batch_size</tt> honest Clients, each of which applies symmetric RAPPOR to its
measurement and contributes the noisy measurement to the batch. The guarantee
degrades gracefully as the number of honest Clients decreases, i.e., we can
still achieve <tt>(EPSILON', DELTA)</tt>-DP, where <tt>EPSILON'</tt> is larger than
<tt>EPSILON</tt>.</t>
          <ul empty="true">
            <li>
              <t>TODO(junyechen1996): Justify why RR with <tt>EPSILON_0</tt> + <tt>batch_size</tt> can
achieve <tt>(EPSILON, DELTA)</tt>-DP in the aggregate result.</t>
            </li>
          </ul>
          <t>Because applying symmetric RAPPOR to a one-hot Client measurement can cause the
noisy measurement to have multiple bits set, we need to check that the noisy
measurement has at most <tt>m</tt> 1s, per Section 4.5 of <xref target="TWMJ_23"/>, to ensure
robustness against malicious Clients who attempt to bias the final histogram by
setting many coordinates to 1.</t>
          <t>Assume the length of the Client measurement is <tt>d</tt>, and there is exactly one bit
set. For the <tt>d - 1</tt> coordinates with 0s, the probability <tt>p_0</tt> of changing a
coordinate from 0 to 1 is <tt>1 / (exp(EPSILON_0) + 1)</tt> per <xref target="symmetric-rappor"/>,
so we can model the number of 1s in the noisy measurement as a binomial random
variable <tt>C</tt> with number of trials <tt>d - 1</tt>, and probability <tt>p_0</tt>, plus the one
bit that is already set. Our goal is to ensure the probability <tt>p</tt> of <tt>1 + C</tt>
exceeding <tt>m</tt> is small enough, i.e., the false positive rate of a noisy
measurement from an honest Client having more than <tt>m</tt> bits is at most <tt>p</tt>. This
is equivalent to finding <tt>m</tt> and <tt>p</tt>, such that the cumulative distribution
function (CDF) satisfies <tt>Pr(C &lt;= m - 1) &gt;= 1 - p</tt>.</t>
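          <t>The search for <tt>m</tt> can be sketched as follows (an illustrative helper,
not part of the protocol), computing the binomial CDF directly:</t>
          <artwork><![CDATA[
import math
from math import comb

def smallest_m(d: int, eps0: float, p: float) -> int:
    """
    Find the smallest `m` such that Pr(C <= m - 1) >= 1 - p, where
    C ~ Binomial(d - 1, p_0) counts the 0 -> 1 flips and one bit is
    already set, so an honest noisy measurement has 1 + C bits set.
    """
    p0 = 1.0 / (math.exp(eps0) + 1.0)
    cdf = 0.0
    for k in range(d):  # k flipped zero-coordinates
        cdf += comb(d - 1, k) * p0**k * (1.0 - p0)**(d - 1 - k)
        if cdf >= 1.0 - p:
            return k + 1  # at most k + 1 bits set in total
    return d
]]></artwork>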
          <t>Once we find <tt>m</tt>, we will use it to instantiate <tt>Prio3MultiHotHistogram</tt> to
perform verification and aggregation. The final aggregate result is debiased
based on the number of measurements according to <xref target="symmetric-rappor"/>, in order
to reduce the bias introduced during Client randomization.</t>
          <artwork><![CDATA[
class MultiHotHistogramWithClientRandomization(DpPolicy):
    Field = MultiHotHistogram.Field
    Measurement = MultiHotHistogram.Measurement
    AggShare = list[Field]
    AggResult = MultiHotHistogram.AggResult
    DebiasedAggResult = SymmetricRappor.DebiasedDataType

    def __init__(self, eps0: float):
        # TODO(junyechen1996): Justify how eps0 + batch_size can
        # achieve `(EPSILON, DELTA)`-DP.
        self.rappor = SymmetricRappor(eps0)

    def add_noise_to_measurement(self,
                                 meas: Measurement,
                                 ) -> Measurement:
        return self.rappor.add_noise(meas)

    def debias_agg_result(self,
                          agg_result: AggResult,
                          meas_count: int,
                          ) -> DebiasedAggResult:
        return self.rappor.debias(agg_result, meas_count)
]]></artwork>
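          <t>For concreteness, a minimal sketch of the symmetric RAPPOR mechanism
assumed above (the class and method names mirror the policy code but are
illustrative): each bit is flipped independently with probability
<tt>q = 1 / (exp(EPSILON_0) + 1)</tt>, and debiasing inverts the expected noisy count
<tt>n * q + true_count * (1 - 2q)</tt> at each coordinate:</t>
          <artwork><![CDATA[
import math
import random

class SymmetricRappor:
    def __init__(self, eps0: float):
        # Flip probability for each bit, matching p_0 above.
        self.q = 1.0 / (math.exp(eps0) + 1.0)

    def add_noise(self, meas: list[int]) -> list[int]:
        # Flip each bit independently with probability q.
        return [1 - b if random.random() < self.q else b
                for b in meas]

    def debias(self,
               agg: list[float],
               meas_count: int) -> list[float]:
        # E[noisy count] = n*q + true_count*(1 - 2q); invert it.
        return [(c - meas_count * self.q) / (1.0 - 2.0 * self.q)
                for c in agg]
]]></artwork>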
          <section anchor="utility">
            <name>Utility</name>
            <t>As discussed in <xref target="symmetric-rappor"/>, as the number of Clients <tt>n</tt> increases,
the noise at each coordinate of the debiased aggregate result approximates a
Gaussian distribution with mean 0 and standard deviation
<tt>sqrt(n * exp(EPSILON_0) / (exp(EPSILON_0) - 1)^2)</tt>. The table below shows the
standard deviation of the noise generated by symmetric RAPPOR across <tt>n</tt>
Clients for various combinations of <tt>(EPSILON, DELTA)</tt>-DP.</t>
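            <t>The standard-deviation column can be reproduced from the formula above
with a small helper (the function name is hypothetical):</t>
            <artwork><![CDATA[
import math

def rappor_aggregate_stddev(n: int, eps0: float) -> float:
    """
    Per-coordinate standard deviation of the debiased aggregate
    result under symmetric RAPPOR with n Clients.
    """
    e = math.exp(eps0)
    return math.sqrt(n * e / (e - 1.0) ** 2)
]]></artwork>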
            <table anchor="histogram-client-dp">
              <name>Utility of Pure Client Randomization for histogram use case.</name>
              <thead>
                <tr>
                  <th align="left">EPSILON</th>
                  <th align="left">DELTA</th>
                  <th align="left">Standard deviation</th>
                  <th align="left">Internal Parameters</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td align="left">0.317</td>
                  <td align="left">1e-9</td>
                  <td align="left">26.1337</td>
                  <td align="left">n = 100000, EPSILON_0 = 5.0</td>
                </tr>
                <tr>
                  <td align="left">0.906</td>
                  <td align="left">1e-9</td>
                  <td align="left">12.2800</td>
                  <td align="left">n = 100000, EPSILON_0 = 6.5</td>
                </tr>
                <tr>
                  <td align="left">1.528</td>
                  <td align="left">1e-9</td>
                  <td align="left">9.5580</td>
                  <td align="left">n = 100000, EPSILON_0 = 7.0</td>
                </tr>
              </tbody>
            </table>
          </section>
        </section>
        <section anchor="prio3histogram-with-aggregator-randomization">
          <name>Prio3Histogram with Aggregator Randomization</name>
          <t>Aggregator randomization requires Aggregators to add noise to their aggregate
shares before outputting them. Under the OAOC trust model, we can achieve good
<tt>EPSILON</tt>-DP, or <tt>(EPSILON, DELTA)</tt>-DP, as long as at least one of the
Aggregators is honest. The amount of noise needed by the Aggregator randomizer
typically depends on the target DP parameters <tt>EPSILON</tt> and <tt>DELTA</tt>, as well as
the <tt>SENSITIVITY</tt> <xref target="sensitivity"/> of the aggregation function.</t>
          <t>In this section, we describe how to achieve <tt>(EPSILON, DELTA)</tt>-DP for
<tt>Prio3Histogram</tt> <xref section="7.4.4" sectionFormat="of" target="VDAF"/> by asking each Aggregator to
independently add discrete Gaussian noise to its aggregate share.</t>
          <t>We use the discrete Gaussian mechanism described in <xref target="discrete-gaussian"/>, which
has a mean of 0 and is initialized with a <tt>SIGMA</tt> parameter, the standard
deviation of the Gaussian distribution. In order to achieve
<tt>(EPSILON, DELTA)</tt>-DP in the OAOC trust model, all Aggregators need to
independently add discrete Gaussian noise to all coordinates of their aggregate
shares.</t>
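          <t>An exact, secure sampler is given by Algorithm 3 of <xref target="CKS20"/>. For
illustration only, the distribution can be approximated by sampling from the
discrete Gaussian probability mass function truncated to a wide interval (the
function name is hypothetical; truncating at 10 standard deviations loses
negligible mass but this sketch is not suitable for production use):</t>
          <artwork><![CDATA[
import math
import random

def sample_discrete_gaussian(sigma: float, length: int) -> list[int]:
    """
    Illustrative truncated sampler: draws `length` values from the
    pmf proportional to exp(-k^2 / (2 * sigma^2)) over the integers
    in [-10*sigma, 10*sigma].
    """
    bound = int(math.ceil(10.0 * sigma))
    support = list(range(-bound, bound + 1))
    weights = [math.exp(-(k * k) / (2.0 * sigma * sigma))
               for k in support]
    return random.choices(support, weights=weights, k=length)
]]></artwork>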
          <t>Theorem 8 in <xref target="BW18"/> shows how to compute the <tt>SIGMA</tt> parameter for continuous
Gaussian noise, based on the given <tt>(EPSILON, DELTA)</tt>-DP parameters and
L2-sensitivity, and Theorem 7 in <xref target="CKS20"/> shows a similar result for discrete
Gaussian noise. For the current use case, the L2-sensitivity is <tt>sqrt(2)</tt>,
because transforming one one-hot vector into another affects two coordinates,
e.g., transforming the one-hot vector <tt>[1, 0]</tt> to <tt>[0, 1]</tt> corresponds to an L2
distance of <tt>sqrt((1 - 0)^2 + (0 - 1)^2) = sqrt(2)</tt>. Algorithm 1 in <xref target="BW18"/>
elaborates on how to compute <tt>SIGMA</tt>, and we refer to the calculation in the
open-source code <xref target="AGM"/>.</t>
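          <t>As a sketch of this calibration (not the <xref target="AGM"/> code itself, and with
illustrative function names), Theorem 8 of <xref target="BW18"/> characterizes the exact
<tt>DELTA</tt> achieved by Gaussian noise with a given <tt>SIGMA</tt>, so <tt>SIGMA</tt> can be
found by bisection:</t>
          <artwork><![CDATA[
import math

def gaussian_delta(sigma: float, eps: float, sens: float) -> float:
    """
    Exact delta achieved by N(0, sigma^2) noise with L2-sensitivity
    `sens` at privacy parameter `eps` ([BW18], Theorem 8).
    """
    def Phi(x):
        # Standard normal CDF, via erfc for accuracy in the tails.
        return 0.5 * math.erfc(-x / math.sqrt(2.0))
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return Phi(a - b) - math.exp(eps) * Phi(-a - b)

def calibrate_sigma(eps: float, delta: float, sens: float) -> float:
    """
    Smallest sigma achieving (eps, delta)-DP, found by bisection;
    gaussian_delta is decreasing in sigma for fixed eps.
    """
    lo, hi = 1e-3, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gaussian_delta(mid, eps, sens) > delta:
            lo = mid
        else:
            hi = mid
    return hi
]]></artwork>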
          <ul empty="true">
            <li>
              <t>JC: We will need to provide an explanation of the parameter calculation in
the draft itself, instead of merely referring to the code.</t>
            </li>
          </ul>
          <ul empty="true">
            <li>
              <t>KT: <xref target="CKS20"/> notes in Theorem 7 a difference between the approximate-DP
achieved by the discrete <xref target="CKS20"/> and continuous <xref target="BW18"/> Gaussian mechanisms:
the two attain almost identical guarantees for large <tt>SIGMA</tt>, but the
discretization creates a small difference that becomes apparent for small
<tt>SIGMA</tt>. Therefore, if we use Algorithm 1 from <xref target="BW18"/>, we will need to be
careful when <tt>SIGMA</tt> is small, i.e., when the desired <tt>(EPSILON, DELTA)</tt>-DP
guarantee is weak.</t>
            </li>
          </ul>
          <ul empty="true">
            <li>
              <t>JC: It's also worth exploring the utility of the discrete Gaussian via
Definition 3 in <xref target="CKS20"/>, which defines the concentrated-DP achieved by the
discrete Gaussian. Concentrated-DP can then be converted to approximate-DP,
which is our target here. As a first draft, we won't overwhelm readers with
other types of DP.</t>
            </li>
          </ul>
          <artwork><![CDATA[
class HistogramWithAggregatorRandomization(DpPolicy):
    Field = Histogram.Field
    # A measurement is an unsigned integer, indicating an index less
    # than `Histogram.length`.
    Measurement = Histogram.Measurement
    AggShare = list[Field]
    AggResult = Histogram.AggResult
    # The final aggregate result should be a vector of signed
    # integers, because discrete Gaussian could produce negative
    # noise that may have a larger absolute value than the count
    # before noise.
    DebiasedAggResult = list[int]

    def __init__(self, epsilon: float, delta: float):
        # TODO(junyechen1996): Consider using fixed precision or large
        # decimal for parameters like `epsilon` and `delta`. (#23)
        # Transforming one one-hot vector into another affects two
        # coordinates, e.g. from [1, 0, 0] to [0, 1, 0], so the
        # L2-sensitivity is sqrt(1 + 1) = sqrt(2).
        dgauss_sigma = agm.calibrateAnalyticGaussianMechanism(
            epsilon, delta, math.sqrt(2.0)
        )
        self.dgauss_mechanism = \
            DiscreteGaussianWithZeroMean(dgauss_sigma)

    def add_noise_to_agg_share(self,
                               agg_share: AggShare,
                               ) -> AggShare:
        """
        Sample discrete Gaussian noise, and merge it with the
        aggregate share.
        """
        noise_vec = self.dgauss_mechanism.sample_noise(len(agg_share))
        result = []
        for (agg_share_i, noise_vec_i) in zip(agg_share, noise_vec):
            if noise_vec_i < 0:
                noise_vec_i = self.Field.MODULUS + noise_vec_i
            result.append(agg_share_i + self.Field(noise_vec_i))
        return result

    def debias_agg_result(self,
                          agg_result: AggResult,
                          meas_count: int,
                          ) -> DebiasedAggResult:
        # TODO(junyechen1996): Interpret large unsigned integers as
        # negative values or 0 properly. For now, directly return it,
        # since we haven't fully implemented discrete Gaussian
        # mechanism (#10).
        return agg_result
]]></artwork>
          <section anchor="utility-1">
            <name>Utility</name>
            <t>The table below demonstrates the utility of this policy in terms of the
standard deviation <tt>SIGMA</tt> of the discrete Gaussian needed to achieve various
combinations of <tt>(EPSILON, DELTA)</tt>-DP, based on the open-source code in
<xref target="AGM"/>.</t>
            <t>It's worth noting that if more than one Aggregator is honest, we lose some
utility because each Aggregator independently adds noise. The standard
deviation in the aggregate result thus becomes <tt>SIGMA * sqrt(c)</tt>, where <tt>c</tt> is
the number of honest Aggregators. In the table below, the numbers in the
"Standard deviation" column assume <tt>c = 2</tt>, while the numbers in the
"Standard deviation (OAOC)" column assume only one Aggregator is honest.</t>
            <table anchor="histogram-aggregator-dp">
              <name>Utility of Pure Aggregator Randomization for histogram use case.</name>
              <thead>
                <tr>
                  <th align="left">EPSILON</th>
                  <th align="left">DELTA</th>
                  <th align="left">Standard deviation</th>
                  <th align="left">Standard deviation (OAOC)</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td align="left">0.317</td>
                  <td align="left">1e-9</td>
                  <td align="left">33.0788</td>
                  <td align="left">23.3903</td>
                </tr>
                <tr>
                  <td align="left">0.906</td>
                  <td align="left">1e-9</td>
                  <td align="left">12.0777</td>
                  <td align="left">8.5402</td>
                </tr>
                <tr>
                  <td align="left">1.528</td>
                  <td align="left">1e-9</td>
                  <td align="left">7.3403</td>
                  <td align="left">5.1904</td>
                </tr>
              </tbody>
            </table>
          </section>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TODO Security</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="DAP">
          <front>
            <title>Distributed Aggregation Protocol for Privacy Preserving Measurement</title>
            <author fullname="Tim Geoghegan" initials="T." surname="Geoghegan">
              <organization>ISRG</organization>
            </author>
            <author fullname="Christopher Patton" initials="C." surname="Patton">
              <organization>Cloudflare</organization>
            </author>
            <author fullname="Eric Rescorla" initials="E." surname="Rescorla">
              <organization>Mozilla</organization>
            </author>
            <author fullname="Christopher A. Wood" initials="C. A." surname="Wood">
              <organization>Cloudflare</organization>
            </author>
            <date day="14" month="September" year="2023"/>
            <abstract>
              <t>   There are many situations in which it is desirable to take
   measurements of data which people consider sensitive.  In these
   cases, the entity taking the measurement is usually not interested in
   people's individual responses but rather in aggregated data.
   Conventional methods require collecting individual responses and then
   aggregating them, thus representing a threat to user privacy and
   rendering many such measurements difficult and impractical.  This
   document describes a multi-party distributed aggregation protocol
   (DAP) for privacy preserving measurement (PPM) which can be used to
   collect aggregate data without revealing any individual user's data.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-ppm-dap-07"/>
        </reference>
        <reference anchor="VDAF">
          <front>
            <title>Verifiable Distributed Aggregation Functions</title>
            <author fullname="Richard Barnes" initials="R." surname="Barnes">
              <organization>Cisco</organization>
            </author>
            <author fullname="David Cook" initials="D." surname="Cook">
              <organization>ISRG</organization>
            </author>
            <author fullname="Christopher Patton" initials="C." surname="Patton">
              <organization>Cloudflare</organization>
            </author>
            <author fullname="Phillipp Schoppmann" initials="P." surname="Schoppmann">
              <organization>Google</organization>
            </author>
            <date day="31" month="August" year="2023"/>
            <abstract>
              <t>   This document describes Verifiable Distributed Aggregation Functions
   (VDAFs), a family of multi-party protocols for computing aggregate
   statistics over user measurements.  These protocols are designed to
   ensure that, as long as at least one aggregation server executes the
   protocol honestly, individual measurements are never seen by any
   server in the clear.  At the same time, VDAFs allow the servers to
   detect if a malicious or misconfigured client submitted a
   measurement that would result in an invalid aggregate result.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-irtf-cfrg-vdaf-07"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC9180">
          <front>
            <title>Hybrid Public Key Encryption</title>
            <author fullname="R. Barnes" initials="R." surname="Barnes"/>
            <author fullname="K. Bhargavan" initials="K." surname="Bhargavan"/>
            <author fullname="B. Lipp" initials="B." surname="Lipp"/>
            <author fullname="C. Wood" initials="C." surname="Wood"/>
            <date month="February" year="2022"/>
            <abstract>
              <t>This document describes a scheme for hybrid public key encryption (HPKE). This scheme provides a variant of public key encryption of arbitrary-sized plaintexts for a recipient public key. It also includes three authenticated variants, including one that authenticates possession of a pre-shared key and two optional ones that authenticate possession of a key encapsulation mechanism (KEM) private key. HPKE works for any combination of an asymmetric KEM, key derivation function (KDF), and authenticated encryption with additional data (AEAD) encryption function. Some authenticated variants may not be supported by all KEMs. We provide instantiations of the scheme using widely used and efficient primitives, such as Elliptic Curve Diffie-Hellman (ECDH) key agreement, HMAC-based key derivation function (HKDF), and SHA2.</t>
              <t>This document is a product of the Crypto Forum Research Group (CFRG) in the IRTF.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9180"/>
          <seriesInfo name="DOI" value="10.17487/RFC9180"/>
        </reference>
        <reference anchor="RFC8446">
          <front>
            <title>The Transport Layer Security (TLS) Protocol Version 1.3</title>
            <author fullname="E. Rescorla" initials="E." surname="Rescorla"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>This document specifies version 1.3 of the Transport Layer Security (TLS) protocol. TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.</t>
              <t>This document updates RFCs 5705 and 6066, and obsoletes RFCs 5077, 5246, and 6961. This document also specifies new requirements for TLS 1.2 implementations.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8446"/>
          <seriesInfo name="DOI" value="10.17487/RFC8446"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="FMT20" target="https://arxiv.org/abs/2012.12803">
          <front>
            <title>Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling</title>
            <author initials="V." surname="Feldman">
              <organization/>
            </author>
            <author initials="A." surname="McMillan">
              <organization/>
            </author>
            <author initials="K." surname="Talwar">
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="FMT22" target="https://arxiv.org/abs/2208.04591">
          <front>
            <title>Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy</title>
            <author initials="V." surname="Feldman">
              <organization/>
            </author>
            <author initials="A." surname="McMillan">
              <organization/>
            </author>
            <author initials="K." surname="Talwar">
              <organization/>
            </author>
            <date year="2022"/>
          </front>
        </reference>
        <reference anchor="MJTB_22" target="https://arxiv.org/abs/2211.10082">
          <front>
            <title>Private Federated Statistics in an Interactive Setting</title>
            <author initials="A." surname="McMillan">
              <organization/>
            </author>
            <author initials="O." surname="Javidbakht">
              <organization/>
            </author>
            <author initials="K." surname="Talwar">
              <organization/>
            </author>
            <author initials="E." surname="Briggs">
              <organization/>
            </author>
            <author initials="M." surname="Chatzidakis">
              <organization/>
            </author>
            <author initials="J." surname="Chen">
              <organization/>
            </author>
            <author initials="J." surname="Duchi">
              <organization/>
            </author>
            <author initials="V." surname="Feldman">
              <organization/>
            </author>
            <author initials="Y." surname="Goren">
              <organization/>
            </author>
            <author initials="M." surname="Hesse">
              <organization/>
            </author>
            <author initials="V." surname="Jina">
              <organization/>
            </author>
            <author initials="A." surname="Katti">
              <organization/>
            </author>
            <author initials="A." surname="Liu">
              <organization/>
            </author>
            <author initials="C." surname="Lyford">
              <organization/>
            </author>
            <author initials="J." surname="Meyer">
              <organization/>
            </author>
            <author initials="A." surname="Palmer">
              <organization/>
            </author>
            <author initials="D." surname="Park">
              <organization/>
            </author>
            <author initials="W." surname="Park">
              <organization/>
            </author>
            <author initials="G." surname="Parsa">
              <organization/>
            </author>
            <author initials="P." surname="Pelzl">
              <organization/>
            </author>
            <author initials="R." surname="Rishi">
              <organization/>
            </author>
            <author initials="C." surname="Song">
              <organization/>
            </author>
            <author initials="S." surname="Wang">
              <organization/>
            </author>
            <author initials="S." surname="Zhou">
              <organization/>
            </author>
            <date year="2022"/>
          </front>
        </reference>
        <reference anchor="Mir17" target="https://arxiv.org/abs/1702.07476">
          <front>
            <title>Rényi Differential Privacy</title>
            <author initials="I." surname="Mironov">
              <organization/>
            </author>
            <date year="2017"/>
          </front>
        </reference>
        <reference anchor="MPRV09" target="https://link.springer.com/chapter/10.1007/978-3-642-03356-8_8">
          <front>
            <title>Computational Differential Privacy</title>
            <author initials="I." surname="Mironov">
              <organization/>
            </author>
            <author initials="O." surname="Pandey">
              <organization/>
            </author>
            <author initials="O." surname="Reingold">
              <organization/>
            </author>
            <author initials="S." surname="Vadhan">
              <organization/>
            </author>
            <date year="2009"/>
          </front>
        </reference>
        <reference anchor="CKS20" target="https://arxiv.org/abs/2004.00010">
          <front>
            <title>The Discrete Gaussian for Differential Privacy</title>
            <author initials="C. L." surname="Canonne">
              <organization/>
            </author>
            <author initials="G." surname="Kamath">
              <organization/>
            </author>
            <author initials="T." surname="Steinke">
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="KM11" target="https://dl.acm.org/doi/abs/10.1145/1989323.1989345">
          <front>
            <title>No free lunch in data privacy</title>
            <author initials="D." surname="Kifer">
              <organization/>
            </author>
            <author initials="A." surname="Machanavajjhala">
              <organization/>
            </author>
            <date year="2011"/>
          </front>
        </reference>
        <reference anchor="KOV15" target="http://proceedings.mlr.press/v37/kairouz15.pdf">
          <front>
            <title>The Composition Theorem for Differential Privacy</title>
            <author initials="P." surname="Kairouz">
              <organization/>
            </author>
            <author initials="S." surname="Oh">
              <organization/>
            </author>
            <author initials="P." surname="Viswanath">
              <organization/>
            </author>
            <date year="2015"/>
          </front>
        </reference>
        <reference anchor="DMNS06" target="https://link.springer.com/chapter/10.1007/11681878_14">
          <front>
            <title>Calibrating Noise to Sensitivity in Private Data Analysis</title>
            <author initials="C." surname="Dwork">
              <organization/>
            </author>
            <author initials="F." surname="McSherry">
              <organization/>
            </author>
            <author initials="K." surname="Nissim">
              <organization/>
            </author>
            <author initials="A." surname="Smith">
              <organization/>
            </author>
            <date year="2006"/>
          </front>
        </reference>
        <reference anchor="DR14" target="https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf">
          <front>
            <title>The Algorithmic Foundations of Differential Privacy</title>
            <author initials="C." surname="Dwork">
              <organization/>
            </author>
            <author initials="A." surname="Roth">
              <organization/>
            </author>
            <date year="2014"/>
          </front>
        </reference>
        <reference anchor="BS16" target="https://arxiv.org/abs/1605.02065">
          <front>
            <title>Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds</title>
            <author initials="M." surname="Bun">
              <organization/>
            </author>
            <author initials="T." surname="Steinke">
              <organization/>
            </author>
            <date year="2016"/>
          </front>
        </reference>
        <reference anchor="EFMR_20" target="https://arxiv.org/abs/2001.03618">
          <front>
            <title>Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation</title>
            <author initials="Ú." surname="Erlingsson">
              <organization/>
            </author>
            <author initials="V." surname="Feldman">
              <organization/>
            </author>
            <author initials="I." surname="Mironov">
              <organization/>
            </author>
            <author initials="A." surname="Raghunathan">
              <organization/>
            </author>
            <author initials="S." surname="Song">
              <organization/>
            </author>
            <author initials="K." surname="Talwar">
              <organization/>
            </author>
            <author initials="A." surname="Thakurta">
              <organization/>
            </author>
            <date year="2020"/>
          </front>
        </reference>
        <reference anchor="TWMJ_23" target="https://arxiv.org/abs/2307.15017">
          <front>
            <title>Samplable Anonymous Aggregation for Private Federated Data Analysis</title>
            <author initials="K." surname="Talwar">
              <organization/>
            </author>
            <author initials="S." surname="Wang">
              <organization/>
            </author>
            <author initials="A." surname="McMillan">
              <organization/>
            </author>
            <author initials="V." surname="Jina">
              <organization/>
            </author>
            <author initials="V." surname="Feldman">
              <organization/>
            </author>
            <author initials="B." surname="Basile">
              <organization/>
            </author>
            <author initials="A." surname="Cahill">
              <organization/>
            </author>
            <author initials="Y." surname="Chan">
              <organization/>
            </author>
            <author initials="M." surname="Chatzidakis">
              <organization/>
            </author>
            <author initials="J." surname="Chen">
              <organization/>
            </author>
            <author initials="O." surname="Chick">
              <organization/>
            </author>
            <author initials="M." surname="Chitnis">
              <organization/>
            </author>
            <author initials="S." surname="Ganta">
              <organization/>
            </author>
            <author initials="Y." surname="Goren">
              <organization/>
            </author>
            <author initials="F." surname="Granqvist">
              <organization/>
            </author>
            <author initials="K." surname="Guo">
              <organization/>
            </author>
            <author initials="F." surname="Jacobs">
              <organization/>
            </author>
            <author initials="O." surname="Javidbakht">
              <organization/>
            </author>
            <author initials="A." surname="Liu">
              <organization/>
            </author>
            <author initials="R." surname="Low">
              <organization/>
            </author>
            <author initials="D." surname="Mascenik">
              <organization/>
            </author>
            <author initials="S." surname="Myers">
              <organization/>
            </author>
            <author initials="D." surname="Park">
              <organization/>
            </author>
            <author initials="W." surname="Park">
              <organization/>
            </author>
            <author initials="G." surname="Parsa">
              <organization/>
            </author>
            <author initials="T." surname="Pauly">
              <organization/>
            </author>
            <author initials="C." surname="Priebe">
              <organization/>
            </author>
            <author initials="R." surname="Rishi">
              <organization/>
            </author>
            <author initials="G." surname="Rothblum">
              <organization/>
            </author>
            <author initials="M." surname="Scaria">
              <organization/>
            </author>
            <author initials="L." surname="Song">
              <organization/>
            </author>
            <author initials="C." surname="Song">
              <organization/>
            </author>
            <author initials="K." surname="Tarbe">
              <organization/>
            </author>
            <author initials="S." surname="Vogt">
              <organization/>
            </author>
            <author initials="L." surname="Winstrom">
              <organization/>
            </author>
            <author initials="S." surname="Zhou">
              <organization/>
            </author>
            <date year="2023"/>
          </front>
        </reference>
        <reference anchor="BW18" target="https://arxiv.org/abs/1805.06530">
          <front>
            <title>Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising</title>
            <author initials="B." surname="Balle">
              <organization/>
            </author>
            <author initials="Y." surname="Wang">
              <organization/>
            </author>
            <date year="2018"/>
          </front>
        </reference>
        <reference anchor="AGM" target="https://github.com/BorjaBalle/analytic-gaussian-mechanism">
          <front>
            <title>analytic-gaussian-mechanism</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="EPK14" target="https://arxiv.org/abs/1407.6981">
          <front>
            <title>RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response</title>
            <author initials="Ú." surname="Erlingsson">
              <organization/>
            </author>
            <author initials="V." surname="Pihur">
              <organization/>
            </author>
            <author initials="A." surname="Korolova">
              <organization/>
            </author>
            <date year="2014"/>
          </front>
        </reference>
      </references>
    </references>
    <?line 1107?>

<section anchor="contributors">
      <name>Contributors</name>
      <t>Pierre Tholoniat
Columbia University
pierre@cs.columbia.edu</t>
    </section>
    <section anchor="dp-explainer">
      <name>Overview of Differential Privacy</name>
      <t>Differential privacy is a set of techniques used to protect the privacy of
individuals when analyzing users' data. It provides a mathematical framework
that ensures the analysis of a dataset does not reveal identifiable information
about any specific individual. The advantage of differential privacy is that it
provides a strong, quantifiable, and composable privacy guarantee. The main idea
of differential privacy is to add carefully calibrated noise to the results,
making it difficult to determine with high certainty whether a specific
individual's data was included in the results.</t>
      <section anchor="levels">
        <name>Differential privacy levels</name>
        <ul empty="true">
          <li>
            <t>KT: I think we should distinguish between the randomizer and the DP guarantee.
So I have attempted to use Client randomizer and Aggregator randomizer to
describe the noise addition by those two, and Local DP and Aggregator DP to
refer to the privacy guarantee. The distinction is important because Client
randomizer + aggregate gives an aggregator DP guarantee.</t>
          </li>
        </ul>
        <t>There are two levels of privacy protection: local differential privacy (local
DP) and aggregator differential privacy (aggregator DP).</t>
        <ul empty="true">
          <li>
            <t>OPEN ISSUE: or call it secure aggregator dp, or central dp.</t>
          </li>
        </ul>
        <t>In the local-DP setting, Clients apply noise to their own measurements. In
this way, Clients retain some control over protecting the privacy of their own
data. Any measurement uploaded by a Client has local DP, so the Client's
privacy is protected even if none of the Aggregators is honest (although this
protection may be weak). Furthermore, one can analyze the aggregator DP
guarantee via privacy amplification by aggregation, assuming each Client has
added the required amount of local DP noise and at least the minimum batch
size of Clients participate in the aggregation.</t>
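        <t>As a sketch of how a Client randomizer works, the following illustrates
one-bit randomized response (the mechanism underlying RAPPOR-style reports
<xref target="EPK14"/>). The function names are illustrative only and are not
part of DAP:</t>
        <sourcecode type="python"><![CDATA[
```python
import math
import random

def randomize(bit: int, epsilon: float) -> int:
    """Client randomizer: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report its flip. Each individual
    report then satisfies epsilon-local-DP."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias(noisy_sum: float, n: int, epsilon: float) -> float:
    """Collector-side correction: unbiased estimate of the true sum
    computed from n randomized reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (noisy_sum - n * (1.0 - p)) / (2.0 * p - 1.0)

random.seed(1)
n = 10_000
reports = [randomize(1, epsilon=1.0) for _ in range(n)]
estimate = debias(sum(reports), n, epsilon=1.0)  # close to the true sum
```
]]></sourcecode>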
        <t>In the Aggregator randomization setting, an Aggregator applies noise to the
aggregate. This approach relies on the server being secure and trustworthy.
Aggregators built using the DAP protocol are ideal for this setting because DAP
ensures that no server can access any individual measurement, only the
aggregate.</t>
        <t>If Clients add no local DP noise, then the noise added to the aggregate
alone provides the privacy guarantee for the aggregation.</t>
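        <t>A minimal sketch of an Aggregator randomizer, assuming integer-valued
aggregates and a single trusted point of noise addition (in DAP, each
Aggregator would instead add an independent share of the noise to its own
aggregate share); the sampler and names are illustrative only:</t>
        <sourcecode type="python"><![CDATA[
```python
import math
import random

def sample_geometric(t: float) -> int:
    """Geometric sample with success probability 1 - t (support 0, 1, ...),
    via inverse-CDF sampling."""
    return int(math.log(1.0 - random.random()) / math.log(t))

def discrete_laplace(scale: float) -> int:
    """Discrete Laplace noise: the difference of two i.i.d. geometric
    variables with parameter t = exp(-1/scale)."""
    t = math.exp(-1.0 / scale)
    return sample_geometric(t) - sample_geometric(t)

def noisy_aggregate(measurements, epsilon: float, sensitivity: int = 1) -> int:
    """Aggregator randomizer: add discrete Laplace noise calibrated to the
    query sensitivity, yielding epsilon-DP for the aggregate."""
    return sum(measurements) + discrete_laplace(sensitivity / epsilon)

random.seed(7)
agg = noisy_aggregate([1] * 1000, epsilon=1.0)  # close to the true sum 1000
```
]]></sourcecode>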
        <ul empty="true">
          <li>
            <t>JC: For now, we have been assuming that either Client randomization or
Aggregator randomization alone gives the target DP parameters. In principle,
one can combine Aggregator randomization with Client randomization to achieve
DP. For example, if the DP guarantee can be achieved with Client randomization
from a batch-size number of Clients, and the batch size is not reached when a
data collection task expires, each Aggregator can "compensate" for the missing
noise by applying the same Client randomizer once for each missing Client,
i.e., once for each unit of the gap between the actual number of Clients and
the target batch size.</t>
          </li>
        </ul>
      </section>
      <section anchor="neighboring-batch">
        <name>How to define neighboring batches</name>
        <t>There are primarily two models in the literature for defining two "neighboring
batches": deletion (or removal) of one measurement, and replacement (or
substitution) of one measurement with another <xref target="KM11"/>. In the DAP setting, the
protocol leaks the number of measurements in each batch collected and the
appropriate version of deletion-DP considers substitution by a fixed value
(e.g. zero). In other words, two batches of measurements <tt>D1</tt> and <tt>D2</tt> are
"neighboring" for deletion-DP if <tt>D2</tt> can be obtained from <tt>D1</tt> by replacing one
measurement by a fixed reference value.</t>
        <t>In some cases, a weaker notion of adjacency may be appropriate. For example, we
may be interested in hiding single coordinates of the measurement, rather than
the whole vector of measurements. In this case, neighboring datasets differ in
one coordinate of one measurement.</t>
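        <t>The deletion-DP adjacency defined above can be made concrete with a small
check. The helper below is illustrative only (it treats adjacency as symmetric,
so replacement in either direction counts):</t>
        <sourcecode type="python"><![CDATA[
```python
def deletion_neighbors(d1, d2, reference=0) -> bool:
    """True if one batch can be obtained from the other by replacing
    exactly one measurement with the fixed reference value. Batch sizes
    must match, since DAP leaks the batch size."""
    if len(d1) != len(d2):
        return False
    diffs = [(a, b) for a, b in zip(d1, d2) if a != b]
    return len(diffs) == 1 and reference in diffs[0]
```
]]></sourcecode>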
      </section>
      <section anchor="protected-entity">
        <name>Protected entity</name>
        <ul empty="true">
          <li>
            <t>TODO: Chris P to fill: user or report, given time</t>
          </li>
        </ul>
      </section>
      <section anchor="budget">
        <name>Privacy budget and accounting</name>
        <t>There are various types of DP guarantees and budgets that can be enforced.
Many applications need to query the Client data multiple times, for example:</t>
        <ul spacing="normal">
          <li>
            <t>Federated machine learning applications require multiple aggregates to be
computed over the same underlying data, but with different machine learning
model parameters.</t>
          </li>
          <li>
            <t><xref target="MJTB_22"/> describes an interactive approach of building histograms over
multiple iterations, and Section 4.3 describes a way to track Client-side
budget when the Client data is queried multiple times.</t>
          </li>
        </ul>
        <ul empty="true">
          <li>
            <t>TODO: have citations for machine learning</t>
          </li>
        </ul>
        <t>It is hard for Aggregators to keep track of the privacy budget over time,
because different Clients can participate in different data collection tasks,
and only the Clients know when their data is queried. Therefore, Clients must
enforce the privacy budget.</t>
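        <t>A hypothetical sketch of such client-side enforcement, using simple
additive (basic-composition) accounting; the class and method names are
illustrative, not part of DAP:</t>
        <sourcecode type="python"><![CDATA[
```python
class ClientBudget:
    """Hypothetical client-side accountant: track the cumulative
    (EPSILON, DELTA) spent across tasks via basic composition and
    refuse to contribute once the budget would be exceeded."""

    def __init__(self, epsilon_max: float, delta_max: float):
        self.epsilon_max = epsilon_max
        self.delta_max = delta_max
        self.epsilon_spent = 0.0
        self.delta_spent = 0.0

    def try_spend(self, epsilon: float, delta: float) -> bool:
        """Return True (and record the cost) if the task fits within the
        remaining budget; otherwise decline to upload a report."""
        if (self.epsilon_spent + epsilon > self.epsilon_max
                or self.delta_spent + delta > self.delta_max):
            return False
        self.epsilon_spent += epsilon
        self.delta_spent += delta
        return True
```
]]></sourcecode>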
        <t>There could be multiple ways to compose DP guarantees, based on different
DP composition theorems. In the various example DP guarantees below,
we describe the following:</t>
        <ul spacing="normal">
          <li>
            <t>A formal definition of the DP guarantee.</t>
          </li>
          <li>
            <t>Composition theorems that apply to the DP guarantee.</t>
          </li>
        </ul>
      </section>
      <section anchor="adp">
        <name>Pure <tt>EPSILON</tt>-DP, or <tt>(EPSILON, DELTA)</tt>-approximate DP</name>
        <t>Pure <tt>EPSILON</tt>-DP was first proposed in <xref target="DMNS06"/>, and a formal definition of
<tt>(EPSILON, DELTA)</tt>-DP can be found in Definition 2.4 of <xref target="DR14"/>.</t>
        <t>The <tt>EPSILON</tt> parameter quantifies the "privacy loss" from observing the
outcomes of querying two databases that differ by one element. The smaller
<tt>EPSILON</tt> is, the stronger the privacy guarantee: the outcome distributions
of querying two adjacent databases are nearly indistinguishable.
The <tt>DELTA</tt> parameter bounds the small probability with which the privacy
loss may exceed <tt>EPSILON</tt>.</t>
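        <t>For instance, the Laplace mechanism of <xref target="DMNS06"/> achieves pure
<tt>EPSILON</tt>-DP by adding noise scaled to the query's sensitivity. The following
is a minimal sketch with illustrative names:</t>
        <sourcecode type="python"><![CDATA[
```python
import math
import random

def laplace_mechanism(true_value: float, epsilon: float,
                      sensitivity: float = 1.0) -> float:
    """Add Lap(sensitivity/epsilon) noise via inverse-CDF sampling;
    this yields pure epsilon-DP for a query with the given sensitivity."""
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

random.seed(0)
samples = [laplace_mechanism(100.0, epsilon=1.0) for _ in range(20_000)]
mean = sum(samples) / len(samples)  # concentrates around the true value
```
]]></sourcecode>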
        <t>One can compose multiple <tt>(EPSILON, DELTA)</tt>-approximate DP guarantees, per
Theorem 3.4 of <xref target="KOV15"/>.
One can also compose the guarantees in other types of guarantee first, such as
Rényi DP <xref target="rdp"/>, and then convert the composed guarantee to approximate
DP guarantee.</t>
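        <t>As a worked sketch: the simplest composition bound just adds the
parameters, while the advanced composition theorem (Theorem 3.20 of
<xref target="DR14"/>) trades a small extra <tt>DELTA</tt> for a much smaller total
<tt>EPSILON</tt>; the optimal bound of <xref target="KOV15"/> is tighter still. The
function names are illustrative:</t>
        <sourcecode type="python"><![CDATA[
```python
import math

def basic_composition(guarantees):
    """Simplest bound: mechanisms with (eps_i, delta_i)-DP compose to
    (sum eps_i, sum delta_i)-DP."""
    return (sum(e for e, _ in guarantees),
            sum(d for _, d in guarantees))

def advanced_composition(epsilon, delta, k, delta_prime):
    """Advanced composition (Theorem 3.20 of [DR14]) for k runs of one
    (epsilon, delta)-DP mechanism, paying an extra delta_prime."""
    eps_total = (epsilon * math.sqrt(2.0 * k * math.log(1.0 / delta_prime))
                 + k * epsilon * (math.exp(epsilon) - 1.0))
    return eps_total, k * delta + delta_prime

eps_basic, _ = basic_composition([(0.1, 0.0)] * 100)
eps_adv, _ = advanced_composition(0.1, 0.0, k=100, delta_prime=1e-6)
```
]]></sourcecode>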
        <section anchor="rdp">
          <name><tt>(ALPHA, TAU)</tt>-Rényi DP</name>
          <t>A formal definition of Rényi DP can be found in Definitions 3 and 4 of
<xref target="Mir17"/>.</t>
          <t>The intuition behind Rényi DP is to use the <tt>TAU</tt> parameter to measure
the divergence between the output distributions of querying two adjacent
databases, given the Rényi order parameter <tt>ALPHA</tt>. The smaller the <tt>TAU</tt>
parameter, the harder it is to distinguish the outputs of querying two adjacent
databases, and thus the stronger the privacy guarantee.</t>
          <t>One can compose multiple Rényi DP guarantees based on Proposition 1 of
<xref target="Mir17"/>.
After composition, one can convert the <tt>(ALPHA, TAU)</tt>-Rényi DP guarantee to
<tt>(EPSILON, DELTA)</tt>-approximate DP, per Proposition 12 of <xref target="CKS20"/>.</t>
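          <t>A sketch of this accounting workflow, shown here with the simpler
conversion from Proposition 3 of <xref target="Mir17"/> (the
<xref target="CKS20"/> conversion is strictly tighter); names are illustrative:</t>
          <sourcecode type="python"><![CDATA[
```python
import math

def compose_rdp(taus):
    """Proposition 1 of [Mir17]: at a fixed order ALPHA, the TAU
    parameters simply add up under composition."""
    return sum(taus)

def rdp_to_approx_dp(alpha: float, tau: float, delta: float) -> float:
    """Classic conversion (Proposition 3 of [Mir17]) from
    (ALPHA, TAU)-Renyi DP to (EPSILON, DELTA)-approximate DP."""
    return tau + math.log(1.0 / delta) / (alpha - 1.0)

tau_total = compose_rdp([0.05] * 10)   # ten queries at order ALPHA = 10
epsilon = rdp_to_approx_dp(alpha=10.0, tau=tau_total, delta=1e-6)
```
]]></sourcecode>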
        </section>
        <section anchor="zcdp">
          <name>Zero Concentrated-DP</name>
          <t>A formal definition of zero Concentrated-DP can be found in Definition 1.1
of <xref target="BS16"/>.</t>
          <t>Zero Concentrated-DP uses different parameters from Rényi DP, but is based
on a similar idea of measuring the divergence between the output distributions
of querying two adjacent databases.</t>
          <t>One can compose multiple zCDP guarantees, per Lemma 1.7 of <xref target="BS16"/>.</t>
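          <t>A sketch of zCDP accounting, using the conversion to approximate DP from
Proposition 1.3 of <xref target="BS16"/>; names are illustrative:</t>
          <sourcecode type="python"><![CDATA[
```python
import math

def compose_zcdp(rhos):
    """Lemma 1.7 of [BS16]: RHO parameters add up under composition."""
    return sum(rhos)

def zcdp_to_approx_dp(rho: float, delta: float) -> float:
    """Proposition 1.3 of [BS16]: RHO-zCDP implies
    (RHO + 2*sqrt(RHO * ln(1/DELTA)), DELTA)-approximate DP."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

rho_total = compose_zcdp([0.01] * 10)
epsilon = zcdp_to_approx_dp(rho_total, delta=1e-6)
```
]]></sourcecode>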
        </section>
      </section>
      <section anchor="data-type-and-noise-type">
        <name>Data type and Noise type</name>
        <t>A Differential Privacy guarantee can only be achieved if the correct noise
type is applied to the given data type.</t>
        <ul empty="true">
          <li>
            <t>TODO: Junye to fill, mention DAP is expected to ensure the right pair of VDAF and DP mechanism</t>
          </li>
        </ul>
        <ul empty="true">
          <li>
            <t>TODO: Chris P: we will mention Prio3SumVec because that's what we use to describe aggregator DP with amplification</t>
          </li>
        </ul>
      </section>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA9V923bbVpbgO74CLT1YqpAwKfmqVNwlS3Ki2LI0kuysdCYj
guSRCAsEGFwk067Umt+YeZrX/o3pP5kvmX07NwCU5dRUT3dWdyUCgXPZe599
3/v0+/2gSqpU7YRr+8nlpSpUViVxGp4UyU08WYZHajKLs6Scl+FlXoT7uydr
QTweF+oGvzjpn5wcrQWTuFJXebHcCZPsMg+CaT7J4jmMOS3iy6p/G2dX/cVi
3p86M/QXPEN/MAjKejxPyjLJs2q5gM8OD85fheF6GKdlDtMk2VQtFPxPVq31
wrXD3ZfwL1jM2uHp+au1IKvnY1XsBFNYxU4wybNSZWVd7oRVUasA1rkdxIWK
YaAzNamLpFquBbd5cX1V5PUCnuqtnhSqVMVNkl3BruOyLtQcZwyu1RJen+4E
YT/M1McqvFKZKuIKlouP6iyZ5AX9Z7mIi+sUB5gmZVUk47pS0zBV0ytVBDcq
q2F9YXjfecOQobH2EywWf/0eP8Tn8zhJ4TmA9C+Jqi6jvLjCx3ExmcHjWVUt
yp2HD/EtfJTcqEi/9hAfPBwX+W2pHsL3D/G7q6Sa1WP4EhFVArof3gNv+GEK
EC8rZ0o9QMRDRkl+n6Hu8040q+bpWhDEdTXLC8QFzB+Gl3WaMqn9WGdLFe7N
VEY/wF6BbD8RlnbC3cUiVeFhNonoRyUA/IDfTP4S46/RJJ+vtYfdrWFx4dHk
KEnT+CuGjvG7i/lkTt/dOcXerABqyRczVYQncVXlXdPspXk9vQR8Km+aCX67
oI++QRz/5Qp/WDHR6zqDk30ep7dxcf+dXFf0wZ1bOAOchz8B/u4/LJLJBaLc
HTgIsryYw4c3cFICZCb2rzB8dXS+NdihUTTP+iGZ4snYnefwv9VMIZwyBWd/
NzxL5jhxnE3Dtyou0mV4vKiSOUBgF8CwLJMyzC8Nn9uFl5PLZEKLDsdL2FJ9
eYlneY0mNHRH//SB0cEk76PwlUqncyEM83w38inG/PA6csFPDCvcGmwNeE9x
caXgNOnDFBcfkxs+tOPy4dZguBUNt54NtjUotnxQnFUFAAFp6ItbIlZ++m//
mi0Tgg+gqMg/AmwqFXZJgX8HEGzdBwRbg2fR4NHj50MEwdGP5y+/aQKBFgy7
eKWmyKKB+55VsP+ySiYlzA+7BUqs4KcJUlV4pqrqbhSv3MdxFP4Y3yTTcXw9
q+7aonl8EIUvi+TqqvQfH0XAAeLqUzKNr5PGbz9GlqO5D/drYOv3Q8TPUfh9
XjTHgEl/UGWpWmP8mGRxCwKvgcEkradvktp/tgfPlkBa09Z6j9RSFa0BTuJ0
3ny8j4+La//hT10Pv6eHZWO1J/BUpZ9S/+lpFJ4mZRNksN6zXHiWeXgWWUbm
PvyXWV7/EZIdDqPhYPBsi0g2KYZPfYKVU/iVp+4wwrHyLL/xljR8eo8lDZ8O
tqLB00dPn9CSTk7fD577a9rL54u6IrYBq/n7luaelxPgNWrZenyq4Ajm6bQF
8vfxdCbk3NwScLHrqATdADkeyo6HoKYu4GA/HA4Q3k8fPn/6rL/df/Joqz/Y
3n78pP/s4hnud+/1WVOGnM+Q65WTQgHj+D6uQREFNkHa7tdtHekfDmyc5Vmm
WsT6Ogb+OvMfnwMFVrD9a/VHBMLgUTQYDIYD3Nfro+HQ39bbPLwslArTOpvM
kPfB6HG4+OIu4AS+Ti47jutRjKZAfBN/+DCL09invGHniqdpFE/mtORpnjD5
AX6Gjx4/HD5/9nx7azuifz96THs4fj983MYNkmNeJiTF4G9gZvM/gp0TRAFQ
Zv2pRWjHs9ar75MSlBONMLPNx61twi5BeE6UQkWkjOZpES1AoS8f3mw/fXjN
Ew4fR4vpJW5x/+jt2eBJ47jFaTJGewIE89s8KVVY5SCZMtzyDVgriDst1/YR
h1p/uZsU99HG8Z++QmF2Bnpm0TiFILHeogE2byH9bJ40YDB48gdP5HD45Nnw
2dNnF8NHBIrT4aM2sndTMCRhynkyCV/ldTYlNkSq2tefxg4QwJZO8yZWH3Xu
6Pb2NpokZVSD7ZlFalo//FscF/Dxw5N4oYryoZylcZ5fa/y+PBs2sZtnE1gz
KyNdW9hhTdVoamUvPPhYIfrpv1E/e5Pfgl73EsFxF9JBqL+sG4J+BYcZdiOx
ISmeDB5HwIye0Ok8eHV0+k2Tdx5kk3yqeqJawn8QbX5SRgk9VTcJELKaAvWh
Mp+KXVDSxg7mi6SAfafhwU2c1vTLHRv8t/8ZhQcFarBlmTc2ukr/WSWPkBDi
q1mNZ7z5zVmXYrBCq4NxzmfxdV1UPke8Lw8fRoPtJ0OSTec/Hf34zdZ2Q62P
gTjiMVgyuyBXlvO8LsPdq6tCXbFej5ywrfXel0+s2FSnErRSFe5UG1fh4yUQ
aVwmqWoNvhfPYPCW9rrXQs9Xa8zH+DCZXHcMk1RZcwjY/PdxVsWtlXTo0cBR
vy/i7Dcg8rYR8H2dt97+MZ7k47K1vFWmRJeaDcosMISW0D6KS2A0yXVrM0eg
e5et1/8uLfscn9ZpQ4oAxwVKVGPVWm+H8v09M+JxWs9baDmbxEXSmPJN15ns
1OCJpovmKlCfzK+q1qA/wX+A5Txvvd1W+Lfvc6K3B0+j4WNUxVEc/DR85h/n
wzloC+TrQ2+F0TaNm3WlZrPDx7kibmlUhjwjPqo9G/sqAwXibouWDmDaPH8/
OyfeCIln9xESz1BIPHm8TWro7vdH/n5jWXT/Srban+utrnWOLp5DVCBe5sWH
mNb68I5hSDadvG4qE6e7JyfHp0B9AJ98nnwCpqjZJnFTAWvf8b4eF6DDARRP
VblAL/Ifl0Qnyaxuy4nXeZGn+U1Dc+5WPhpAfgRU9eT5s2EQBP1+P4RnFfow
gqDTab+xf7IZJiBiQd3PQVUBHRL0pzgs0f+twtgRHwaMQI9xFaDnHABCf4RZ
HiItAbCSDAxCeNd4p+GHq4xUlqxKQUOdL2A1JdG0cX2jfxymxWd6RhXC4HVa
ocyE9dESS+23MmtFgRze4grKZVmpeRmmybXCgWC71q/uSsET0MvySZ7C1ndh
7zHscqpwjfAeaNNguSTqRsm003xS4yYCeGUCoylcxg2wHMWA2j8RsJQU90D/
JGwUiQaGgvFJKevRSXW/C4Bm57CWGjT4SVwqUN5AqwOowCe0CNwhr5LAi9EN
vTJ46SRi3M6T6RROZ7COnqoin9YTgmTwy/lx+PIgPD04On5/sB++/Dk8fbUX
HuwfniOZo+5c5nUxUbSuivaJbnXiD/j/iIdCEbFMVEDOUQSC1a/nNtSD7y9y
2HUCKwMch2MctsaRqqDjpH5N4CD6NQiOgAaS/iJGbE+st6GF7/BL+A4+f/4n
AO93h/39iOdGLzjPHS/6g6e//x6qjHDXQfy0aUPTZSCBHZhrjFbXFMyvaQ1o
Cm9hqzmcANjiNNXM2/0UbTR8NklVXDCZBYCBeTxVAEjgWLgCGLTGAwWU8F4V
cHpoXau2+ArsdsI8bPH9/u4rd48F7HFyWVz1b6bxJe2yx9ODhhJOiuWiyq+K
eAE6D4E3z/DIImXvApUdfARI6P3vhjg2LP8mT2/QcR6HKbIiABd9sbaXJrjB
NTiQORIw0KMKvCmAQS+JtNUE9+CCpRfir/BwrVCLvIBhvsUTD6IqDTlsh+dm
TW87L2QeDOLBEQ3lK6ZHnpv3uXr+wPCab5n00QSDNUxgdG9w5lagWydZhVG9
Fo+yzEZj3CXVFmcjCiCcZ0BFoctJgWGUQYohihhOWqZCu2HkgTMMWcBLwG/j
qoon1/A5HjqkJXJZ8wsho+JBGTggBqaxT8FRXGPORCgMC5fZ04DpJNp5vIRJ
KzzdJdhveNyBKyL/kLPaQwEgMHd2WpUqvaQlFsC7QPDoWG4pXgtlYzd5FoW7
iMEQlPsaxRZ53tVHtGvAYERsJGC2yNEACn1IFAkME7A0B+76SQVNuaKhP1PJ
1awi7lWQ0J7SvojZwjZv1DIKDy8J5PxF4L4GoIe/1CQhCipBylcYVq7gTzpO
BM2kwCXamRPmTvBBlQfltbqlhfBSYRAdbaBgQ3gbL5nXM5xKGZCXDUubKpwa
MR9YduPJ2du8TqfAd25wjskM92Ulr8hdg3eNoYApuBfCmnEhzNdSHgGo6jqG
A47yNwPrnNUhRDrgZOKcgxIQN0W6Csp8jiINpEeNkJrMckB0mGlflTc3y4tZ
fgs7xp2mi3AG+CXge4DMGS8WIUDKrBVkVc3ePiN52HHAfBlnA6KVIVwJo8kw
/PyZ/Wy//x55ChLxCX6lrR0BvrTvKaQT4MpqPvmqZFBcCmvWWlXVlCSgTAO+
Yl5s97AgINamHWurlCQ37J/AfyRM5/DVOB4nacJrBQtSVFbYAwjRZFID0w5A
QqGaNs0BCXisUZwjJ/dUtRg3zvTBx2bO5ohzRjwesQFrKWvQN5OKGWFSkTgE
gIH5popNQBse2JKULpwW1KdcH538NmMqhD+XGp8kkkrAJo/jqlCODkJwB8aW
0wmb88vEYTXagD5TIWrnO/T5AZ3AgWLtCdfMpApP8hYfp2mS0iVvMa5ggaTi
BUZSuZy1T5wDwIoYiOeqAiMb8JVN0nqqmS0wLNDeSOsAScTDBA1SYcGNymoB
E7raiaaynlHfpuoKPfoolgEXKWWyFOq3Gj+NUF2sifkiR4nh1F3NnC804QsO
RO+kExXTKUfJn1e4PILWt4Q2+anK83COcGZAMuc30hgUXoAciKiyJRUDLUuD
t+bLMWGCpTkqlIR/dGuB7rco8UEDRwwAWKi8GBDNI39qiT5DBzuSDZCwoEN+
i5vxmBbTp0fu3xpYOzrJ6s+tTATtF6X9Bp4fdGWyerE0Wgsr5RjMEF0Ez82h
qAzaIOmFtzQSUDVp6qFDrzoC4tonRO9E1XFQTuCQKCZncjFMcTnErgy33MRP
iQEAd2EFf7kWoWM2pBe1yt8L5MCUuGlAUp7BuRDeq2EKH7mPHXjxiccdE4eA
4cbwH+ZDyjhwoQt8krlYshBFSmdYyTmi1ZB4QFaYkKS9yoHhE7k5AMQVo+JA
R3OJUijUG0VLBjmUMXxAVYSZUCMOD4Vpgj0mevwloCm/xflBol+BXhwEwwjk
Sn6DjFfdgllhjUcrkTbKTUEQW3gF4w52gjKWOBip4GDw61w0XPJUXSZoqZLO
cAkqhREM4RqwEFD95vlUpaAcJ2gYJ3AQgVDiEtVYHIsmY8lEmQ1WQZKTAqId
dX6lhQmLIFYrU+Cjx6DNgfHI+QAJHu4YuQvQ40whCkX3wAkXbC/OclBLYOl1
WZPsAtmDahxJp1pUpjylnZ7P4GzjQ0IZY4iNcw85vAlga6mmX96SYWE4ltZm
URHLACoA4IXCwBUrXS6wonDjJ0X2B9PI2kxNr9A1Fm0KMu0hAnQKO4eFaHbk
nDHNZImox3WSEs/ZP8Elic3q8n4j8TlpskRVl9JuinyOeiyHmt/EYHxMSDbg
OMYX6Cq6ZSRr1WAiwkNqKdvAa5C30JvLLHpGo7UMhd4iCW4ETUK+K2XsxFKB
UUMpNJIDiisq6ozNTxygP130mZ/AAkl1noJikZG6r0o6iIB1TYWsL+jl6U3o
rYI46ZP/xMOKs2QSM0BhDEZnMPOMMWOgQ2KcBB6jjbGi5Te6Z4zPRriMPf/a
PtFRK4Ax0Ho/v+yXE1BDHG+L8CBmFS5ySHvng211BXGxKXZQgUysamEBeTFl
1j9XimQhEz6nSorcB0qPpyq/vLSkKXzYUVEa7i4jFnA8sDWv6mQaZ67DiALZ
pSDC/9Z1lVmVM5ykMUv9LC6K/DZdMlSmPl5hvmNkIz0+sWUOZjueV9KxPO6s
fSVR+Mv58f7xDlog4SQRLxUmLXEC5K16UCA3SzLKjGU5kF2J52j/BE5wTzgl
ihcAL80cY6IbgBLFTEXBGS0qYnbTscMBCFUHYNFbRcBQLj/Q0CB8NRgJnUI4
f9lU+AXS1lgtcz7m2sCFvbCN1GuIMKO+60m0+dZf1AXqzQ3q/ha+x6eNNTkM
A1dUxUmaF0xshvL56OR02gmTinBJjyP0QO7l2Q2aJzpeu4+sh88gn5NrUHEw
ORrY69G7s3NMz8Z/h2+P6b9PD/7Lu8PTg33877Mfdt+8Mf8RyBtnPxy/e7Nv
/8t+uXd8dHTwdp8/hqeh9yhYO9r9eY051trxyfnh8dvdN2vGCWOgSXvPRedU
xQLZwxTdMR64Xu6d/O//NXwEYPun01d7W8PhcwAb//Fs+PQR/AGCUPRQUnr4
T9TwAqApBScBZTeImkm8AGpNkZeW6FK4BVsYaBag+adfEDK/7oR/Hk8Ww0cv
5AFu2HuoYeY9JJi1n7Q+ZiB2POqYxkDTe96AtL/e3Z+9vzXcnYd//meQdCrs
D5/984tADHqDjBoJ7GQJ9Jpt09EQJPgyE2D1BhjfSH1cbBycnB2+OX67OYJX
4UyosIgpvsU6F4xakIsTBA26Q0ZqpFXmBaVMgNIwkiFGRNBG6/o+R3cM4vOc
dIYj1BnCz+tGwYO31zsDcUHAAoRdEm03whLWaEJO1vYndl/F14rzUEGm/laT
J75lDzquFDR8herItic/WjuY4lp+gWciA5MHiQOU347p3Cuaw7ZM29nph3GM
wyYpG84a64nBM8dSFRH8Itw74aCFlbSJNrRQGILB+SEurnIAJ0lv1PdMNjhG
PqyRkeW3MN4hSwORDHgUPyBeq2LJBiwsntTITCxpzchA+UUZCPo0imkYyBSQ
hDFllZNejL5xss7QgF3UWcVsRju/FmSGfiRMpYrFfQqSFIZjeAilk/KsGcL+
iRbepRZD1uDRSvxahr7CcY55VWuUwlcqdhrEKCGApc1RGQwoXOY4XRGaY0ys
RO8bnB0RYXlJviwMU8kcZKaBcl5yKjQ7HIOx8s0gUGqMRwvp+BZYalyByVk2
XV/haH84IkCM9rdGxH/9PbBLaxkQZ0YnCShEAKzsinxKUzIdYuGm5ndvBvVx
ohYVeaw2kkhFxIiDcjkHzQr5gbaeJih3q1ulxDeAQSeEnp6ipH0o1pfLTXLa
MZvhQ2SJBJk5hjBQX8eX+/sna+QZW7OusTzDp2DWf/7s7LdPYEKtHdT+miUs
WVQMfgJfPP0A42YTAS4rLIGrqwA4yBYSJBkWudg464X7vbCwDLLpLmT3cxdH
Gp2NeiEZUsBZmmwIsIe/aq7TOPjhqEB+urtyYPaocejHcGCCGqCfDjRGgCyY
iGoCSzUYuEFDUIfOmgsocQW9hqkOB+G3mgzXAM1TNNr/9re/BQykIUIp/PN3
oStWwj+F/OsW/kpvo1OGMUR6Ta8boO7SCX6l8fTE4RWwqCxowYzJsiSPW8VH
Mbay6zJGr1CvIfYAyMcZcx1mb4AmA0925QjW8Bc0s8hN5zKCeJyz/9X362Ic
Q4E4soZDY73sQaNYnWLflZmYDxwyuCbLsUMCEbBniTaKfidrmABU7SZuMekL
PZISF9QQ6YUquorCQTREV9MgehyB9FYgUcDUBBhh3E4VcLBYj0D9ttQRBMs+
XdrD9eEpqhIM/pGDTRRiPN2gXCskUGKUgSeV0F+exh9N3E9etcHN2NbP9Mlt
T4Y+yL7TIeiOUbgb3OucaLQDPR68Od/dbB6YoHlgwr/vwAT2wIR/6MCE3/BK
+eBgnnYsEDUg8kETpjlyQFRiplPSWITuCVc01khccMEY8w0i83RCKoDRc1BJ
gN/hPIGqgAp99zkF5SxRE/IedVPqBhFZLM5JTykyL2320DNollcG1XIhYWdU
6DIxMMDiWmaAZI4mEi2LN5FkAChZnxSSshypXkiyiyNKrGbR2bUnDnY8GoYP
+fML/HxEIQ8mTE4/4XkcIKFTc45aKLkzkCiAKeC73uDix29jzLFy6bwFDevW
P21k1ZPDLM44DIsH3bHjvwWRxTIJcIJOFORUhUING4ATfOEwr6+7efCNZCcd
0OCt2IONLpVSUOKEcsZxSY5soq3R2cHbs8Pzw/eH5z+PmtqtG3tBk4XAIwHA
Tk4ZoNHQQUEEQZvGD1BE76wqyGwwvDrWcTrUeYIOOkTKj2Hjk2qFEt4ZRriE
P8pA1N03Q84j3wpLuyDRd9CZyBE2Eic2tombQgXJ4Tody0Om8SecwNnqjo6J
G9COyzxF8EloxAncThSFfJjz55SBV5GLSj51eV4XiHHyrTsnL4HHFcrEpu6e
mP3M95oYydMxHktQkdHxqR2GpEyL+1x0AuMVI6+/TbnQbtUeBldZfgWgzdVX
MyY9yZDwAlwlk2aCzCxPxd93SJ9TYlmm0JWHkZIqD7whxMuHQpo9ZGwv62WJ
k5YCio6rr0dWQjUrVFwF5GHXqzYLZMauM1g4EmXjOiZHql6keTztBc5564nm
T7ExJD4do2ATHTVi9KIlFcptTnPjLA+ajI40CYOULCg/AQoDRiQ0ynCDMksk
nwpzpn44eX0gfp7nw2cDfEQugTdngXh/Hj0C2xVjdeeYqzfJa8EWB0Rkr2XP
xjc0UDDiMHEL6UiYe44nso+5AA9Y34Z1roWP0B92eNTfI4uCrWzH5UBHao5r
QbXdU/ZCb06UkyhGKc7qZBaB/R+v1Ay9LDgv9agUpkAhSTSL8UwhUgQPwl01
8ojBGNJr0YbEiCR+BoYaBu4R/JREQ/IVCLco6kVFeaJgaXGmhRPYihy7VLnJ
FUaebWCCSdTQQzZ1YgUJInbbBPeCHAgVpY8tivkErEoNq1j/mRcBpxVwopS7
GhTajaXENgAIGMEs1Q3K5rRjkdeolb0Vl5KX6/mHRJXgaNqMsofGqrU3zDYq
0ZSsk3LGCUCosuIcFeiq7KswsBawPigDGxOSw4y1bmRpa04vwMIg20/C/XyP
FGmaor00uSL3+cAcOWFNOn6Jig36JJDdskDujGgiuQAHgQ2KY8AfRTJUNFU5
to+setMnWeR2igJ7wPXIbbZshxsDCTeiU+v1+UZSlrVa33q2uUPBCoroKxCu
rJBQCJXDSnoJ34V4rCcYcgSWS3imhJEljDeGo3gJMoES2iiCUQilMPqYWdJe
5aT2x3XVRx8nRjBvSjs28DB0kpU1upoqXBanlmH1z7IMb9EGNSmy6Byp7Lfs
awP0zeKFWWCBzjLtkEBJx5tFGB+/5nhcqb2tesRxjNQ4i2+SvDCZN4iRF4Bh
UHKJEqc1yU7kFzUYCSRo18Pj3aO9nRCM4b7NHegfgW3ZF94UBLsUEBcnSu5w
LXbbe2mXuEzhbRz0lLQTMv0D58UNYuaSHis+RVCKnASGTZMzYpI7JPJP/A1s
u6BBZkBlqCmg6mGWaNYj8SAiZUqNM25G9IQFTohbWz2o2pLMsYz33MsW0qqy
BACnrtvMOHitRU4JoMywNtl0QGBakayDLL7LEKfDdGf434lkUsn6bP5UQ5pM
FR0sVRocH7dxjH/yBz6G4f9AipXVyoRahgL7cuzvvGuDqZ4xnZHEcSgXIxaF
TU7Bg0tGQ1DlJmfByVDQOT+N9N3QT999EWKkU/jG8PEmOU0qFgDYwMQBBZs6
RgU06YHmoMJgDiQmM9DVSM/DcKKkctEYNr217drjE4lBxhdWKCzyhDPItcB0
9uXCts7IK2R/9Ve0BiC5UphNjm+wU4TyxhIKDk0xo0uCnpyzweDGuB4MZPo3
YVzSCcqvsdYQoz4Log1BnMDhSdVNnDH7CDVddRIS+k1YQvhJHE2aADaIeUxe
phMmb9x19LvpZoxwa5EGSbmZTipIJGKD+pVrDs7QW5XmE9YoRTk00tXJJiKn
ODrPMQs5Fh0kbCkgJj+TFR6UvZYV0Aq0rAjiZO5U09Aq9UJEmljPCKCQCgnE
kcFbyUSMBTkWrLVydhhXP3CeThAcYnat8/OD0ktBquoiC0mDJdF6CWYPGdEL
yoGaaCInkyqgBEuT6Rw7OgcmWmANQKyd7G2+ade9ItmIPZ0mpz1GRSbNl3Nb
diGhLTezKXBqr3KpF7AUqUsfnEQyKluYAjoK5SnpgVaLNOsipxYu41x+sCUf
B9kNVkPTyjbODw42kY8qwKh8DGoCEdQVGUw1xUouYcVcBQDvoqHjYkH0QBTP
LkRIzcN69CpIJEnPbhdjU0WOnjOz0ss0vuXwIdKgDIrrW4FFnRjNOCywq1RG
KVFmPcd7SOvwv7SenlFfOOk4uEXXXsE0T6Fh+NZpdvd53XF7iVu5FL5uc2G+
lCpmsrEkXcykxBAXcNLEmgVrrInBMVIfF1zFQuUJyGATtPbEoYyq5VQyQDnN
zFXkqaRQdGw3qcwIHaLaLL/FEz7NswdAaJOK8/nsvK0ZvsXXSRGirKmcePNl
miwcYiW2oPUWzYrthqPw3WIai/a+e3KI/lUTIAfkvAj9/QERg+aoX5IwLPpY
gAthC7rwhngv6hEHsZsOTtySMj+wsLOZfWXip9YbxyIWYxIUg6PsV9hGUmlj
C0OwUfiWQ23kYfZLKLmuIkEXJbn7idzg86BrFeTaE25qcz7BWBujN8o0vjjx
cszxyASdwQIqHDC/PJCfHshvrKJr3h7+OdTv6cAbvhy+kI9GIBfUJCa3yyWC
fI6osZJBmLu2s+g0BcIwN3v8wdJsLjaeZ+2qFp25UGw4y6/oX0DuB2eFpZGM
7xSJ8gwUfXTy/Vzuog9d2YgPAgBn+ZQdlqP9hTntEaD5ghC/gdjdoe4J58uF
2gz7L8wfI9SgSr/qhUOnYTjC70Z6sbEnrzl7oZEpvomF63PVSOXGEBnW1PMc
rsfaJb7W8vlY6h0kc05dw2acVWMH/KbeBLk6JfI+Mt8B4qXUwl8e8yyTRWWi
Yc6XMJzkqqWeLU9uV+xIgQ0tyUspSXkS+sJwYTc+KALa2O5UjZO4bKCqRzC/
IAeds3N6VU0NBGCF/LlJ204QyII/D+LWavHj4naeEWY2Wk5ASnaaerUoyEcM
PWoxiFQo+J/mPZ1bDWN9/mySGPoFEENeYAVTcFJXkgpPKSdX2qfsU33J0WQ+
EGU9NqmZo/0TA7xRoLXDsYJTIQE/ftWBMVe7r7MZqRHXKISm1biIGTGF0KsJ
Zw/xKKSEamNIIE6jaru5zvrOY+4TqVEWfgcgzvRQGqHuqi6BIm0dE/4uVQum
WAyrB1DWySBkzooN4+ZGj5rkEn73nT08sqzWK7w8rulXmMyhaRd9f73wDo6i
ewqE4dra2q5bXWLDlp7UgdfMJ5gIp5D6Dm3G90FR5MXGpl2MxxlkPav5g7ce
899nrFZowSicwRVxDSLwGQL/k5QmkmaFLoUFmf0INqSCLUGiNUzL9GcJMasK
9Jioc533AojwDgKF+VL+aTCU5s/34S/dEOS3LBec1rZskghVbPJu9hO4C+hm
QzpXW5Wkw6EUISpiQWq/dyK7yHzE6tS5CY2GEKUkp88c+OvgIOUMrECDIrsM
N8ppAZRG6Zc8aP1zJzzT6demGoJJtmA11nQDC7eQRD5/pvZ9GCxZHw42I39w
U0DxeV3XWJh2Ib93TWk+WDHn9so5z0yuGbcZgSlbnHul0dD8uNdk5ZdJwf42
rpKkODl1ONHxqcLWC+3q1MM9zJPB5UpfVMw74TiCZHvHJvcwgO/RWVGKds6D
geaFZzbalChiS4mNrRZq9MCLAfDFQy+xdZxodbwRmEAqX+ppRJPHcZn+gGTR
hligtzSh8GoLyCR/YJo11I4LL8+2kJYtFopr2rANbHQX7NHVX7q5uZwhizsZ
fRSxNmYLVNaIx2s0DPvhxxFvws08oaSNDSdt5mKAqTLDzVHDT5DY9GQAJOUC
obV9VRemKmTbVn+yJMDHKzcRkBYwGkimz5hqdcmUCZ2Fw1/Du1a9LavtBeQ8
vSXmpIcqq1iKiEeI++Adxl3RzHaM/x5Fw5ANsUReCXNahAOCwFKYGF0O0+M9

-->

</rfc>
