<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->
<?rfc docmapping="yes"?>
<?rfc comments="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-haynes-nfsv4-flexfiles-v2-02" category="std" consensus="true" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.30.2 -->
  <front>
    <title abbrev="Flex File Layout v2">Parallel NFS (pNFS) Flexible File Layout Version 2</title>
    <seriesInfo name="Internet-Draft" value="draft-haynes-nfsv4-flexfiles-v2-02"/>
    <author initials="T." surname="Haynes" fullname="Thomas Haynes">
      <organization>Hammerspace</organization>
      <address>
        <email>loghyr@gmail.com</email>
      </address>
    </author>
    <date/>
    <area>General</area>
    <workgroup>Network File System Version 4</workgroup>
    <keyword>Internet-Draft</keyword>
    <abstract>
      <?line 48?>

<t>Parallel NFS (pNFS) allows a file's metadata (on a metadata
server) to be separated from its data (on storage devices).  This
document defines the Flexible File Layout Type Version 2, an
extension to pNFS that allows the use of storage devices that
require only a limited degree of interaction with the metadata
server and that use already-existing protocols.  It also adds data
protection to provide integrity, using both client-side mirroring
and erasure coding.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 59?>

<t>Discussion of this draft takes place
on the NFSv4 working group mailing list (nfsv4@ietf.org),
which is archived at
<eref target="https://mailarchive.ietf.org/arch/search/?email_list=nfsv4"/>. Source
code and issues list for this draft can be found at
<eref target="https://github.com/ietf-wg-nfsv4/flexfiles-v2"/>.</t>
      <t>Working Group information can be found at <eref target="https://github.com/ietf-wg-nfsv4"/>.</t>
      <t>This draft is currently a work in progress.  It needs to be
determined whether we want to copy the v1 text into v2 or whether we
want just a diff of the new content.  For now, we are copying the v1
text and adding the new v2 text.  Also, expect sections to move as we
shift the emphasis from flex files to protection types.</t>
      <t><em>As a WIP, the XDR extraction may not yet work.</em></t>
    </note>
  </front>
  <middle>
    <?line 78?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>In Parallel NFS (pNFS) (see Section 12 of <xref target="RFC8881"/>), the metadata
server returns layout type structures that describe where file data is
located.  There are different layout types for different storage systems
and methods of arranging data on storage devices.  <xref target="RFC8435"/> defined
the Flexible File Version 1 Layout Type used with file-based data
servers that are accessed using the NFS protocols: NFSv3 <xref target="RFC1813"/>,
NFSv4.0 <xref target="RFC7530"/>, NFSv4.1 <xref target="RFC8881"/>, and NFSv4.2 <xref target="RFC7862"/>.</t>
      <t>To provide a global state model equivalent to that of the files
layout type, a back-end control protocol might be implemented between
the metadata server and NFSv4.1+ storage devices.  An implementation
can either define its own proprietary mechanism or it could define a
control protocol in a Standards Track document.  The requirements for
a control protocol are specified in <xref target="RFC8881"/> and clarified in
<xref target="RFC8434"/>.</t>
      <t>The control protocol described in this document is based on NFS.  It
does not provide for knowledge of stateids to be passed between the
metadata server and the storage devices.  Instead, the storage
devices are configured such that the metadata server has full access
rights to the data file system and then the metadata server uses
synthetic ids to control client access to individual data files.</t>
      <t>In traditional mirroring of data, the server is responsible for
replicating, validating, and repairing copies of the data file.  With
client-side mirroring, the metadata server provides a layout that
presents the available mirrors to the client.  The client then picks
a mirror to read from and ensures that all writes go to all mirrors.
The client only considers the write transaction to have succeeded if
all mirrors are successfully updated.  In case of error, the client
can use the LAYOUTERROR operation to inform the metadata server,
which is then responsible for the repairing of the mirrored copies of
the file.</t>
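The client-side mirrored write path described above can be sketched as follows.  This is an illustrative model only, not protocol text: the per-mirror write functions and the report_layouterror callback are hypothetical stand-ins for a client implementation's internals and for sending LAYOUTERROR to the metadata server.

```python
def mirrored_write(mirrors, offset, data, report_layouterror):
    """Write `data` at `offset` to every mirror; the transaction
    succeeds only if all mirrors are successfully updated."""
    failed = []
    for i, write_fn in enumerate(mirrors):
        try:
            write_fn(offset, data)
        except OSError as err:
            failed.append((i, err))
    for i, err in failed:
        # Inform the metadata server, which is then responsible for
        # repairing the mirrored copies of the file.
        report_layouterror(mirror=i, offset=offset, error=err)
    return not failed
```

A client would additionally pick a single mirror to read from, since all mirrors hold the same data.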
      <t>This client-side mirroring provides for replication of data but
does not provide for integrity of data.  In the event of an error, a
user would be able to repair the file by resilvering the mirror
contents, i.e., picking one of the mirror instances and replicating
it to the other instance locations.</t>
      <t>However, lacking integrity checks, silent corruption cannot be
detected, and choosing which copy is the good one is difficult.  This
document updates the Flexible File Layout Type to version 2 by adding
data integrity through erasure coding: data blocks are transformed
into a header and a chunk.  It also introduces new operations that
allow the client to roll back writes to the data file.</t>
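As a concrete, deliberately simplified illustration of the header-plus-chunk transformation, the sketch below uses single-parity XOR coding with a CRC32 header.  The actual FFv2 encoding and header format are not those shown here; encode, verify, and reconstruct are hypothetical names chosen only for this sketch.

```python
import zlib

def encode(block, k):
    """Split `block` into k equal-sized data chunks plus one XOR
    parity chunk; prefix each with a 4-byte CRC32 header."""
    size = -(-len(block) // k)                       # ceil(len/k)
    chunks = [bytearray(block[i * size:(i + 1) * size].ljust(size, b"\0"))
              for i in range(k)]
    parity = bytearray(size)
    for c in chunks:
        for i, b in enumerate(c):
            parity[i] ^= b
    chunks.append(parity)
    return [zlib.crc32(bytes(c)).to_bytes(4, "big") + bytes(c)
            for c in chunks]

def verify(stored):
    """Return the chunk body if the header checksum matches, else
    None (silent corruption is now detectable)."""
    header, body = stored[:4], stored[4:]
    return body if zlib.crc32(body).to_bytes(4, "big") == header else None

def reconstruct(survivors):
    """Rebuild the one missing chunk as the XOR of the k survivors."""
    out = bytearray(len(survivors[0]))
    for c in survivors:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)
```

With k data chunks plus one parity chunk, any single lost chunk is the XOR of the survivors, and a silently corrupted chunk is exposed by its header checksum rather than silently propagated during repair.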
      <t>Using the process detailed in <xref target="RFC8178"/>, the revisions in this
document become an extension of NFSv4.2 <xref target="RFC7862"/>.  They are built on
top of the external data representation (XDR) <xref target="RFC4506"/> generated
from <xref target="RFC7863"/>.</t>
      <section anchor="definitions">
        <name>Definitions</name>
        <dl>
          <dt>chunk:</dt>
          <dd>
            <t>One of the pieces of data to be exchanged with a data server after
a transformation has been applied to a data block.  The resulting chunk
may be a different size than the data block.</t>
          </dd>
          <dt>control communication requirements:</dt>
          <dd>
            <t>the specification for information on layouts, stateids, file metadata,
and file data that must be communicated between the metadata server and
the storage devices.  There is a separate set of requirements for each
layout type.</t>
          </dd>
          <dt>control protocol:</dt>
          <dd>
            <t>the particular mechanism that an implementation of a layout type would
use to meet the control communication requirement for that layout type.
This need not be a protocol as normally understood.  In some cases,
the same protocol may be used as a control protocol and storage protocol.</t>
          </dd>
          <dt>client-side mirroring:</dt>
          <dd>
            <t>a feature in which the client, not the server, is responsible for
updating all of the mirrored copies of a layout segment.</t>
          </dd>
          <dt>data block:</dt>
          <dd>
            <t>A block of data in the client's cache for a file.</t>
          </dd>
          <dt>data file:</dt>
          <dd>
            <t>The data portion of the file, stored on the data server.</t>
          </dd>
          <dt>replication of data:</dt>
          <dd>
            <t>Data replication is making and storing multiple copies of data in
different locations.</t>
          </dd>
          <dt>Erasure Coding:</dt>
          <dd>
            <t>A data protection scheme in which a block of data is divided into
fragments and additional redundant fragments are computed, such that the
original data can be reconstructed from a subset of the fragments.
The resulting chunks are stored in different locations.</t>
          </dd>
          <dt>Client Side Erasure Coding:</dt>
          <dd>
            <t>A file-based data protection method in which the client, not the
server, generates the erasure-coded chunks and maintains them in parallel.</t>
          </dd>
          <dt>(file) data:</dt>
          <dd>
            <t>that part of the file system object that contains the data to be read
or written.  It is the contents of the object rather than the attributes
of the object.</t>
          </dd>
          <dt>data server (DS):</dt>
          <dd>
            <t>a pNFS server that provides the file's data when the file system
object is accessed over a file-based protocol.</t>
          </dd>
          <dt>fencing:</dt>
          <dd>
            <t>the process by which the metadata server prevents the storage devices
from processing I/O from a specific client to a specific file.</t>
          </dd>
          <dt>file layout type:</dt>
          <dd>
            <t>a layout type in which the storage devices are accessed via the NFS
protocol (see Section 13 of <xref target="RFC8881"/>).</t>
          </dd>
          <dt>gid:</dt>
          <dd>
            <t>the group id, a numeric value that identifies to which group a file
belongs.</t>
          </dd>
          <dt>layout:</dt>
          <dd>
            <t>the information a client uses to access file data on a storage device.
This information includes specification of the protocol (layout type)
and the identity of the storage devices to be used.</t>
          </dd>
          <dt>layout iomode:</dt>
          <dd>
            <t>a grant of either read-only or read/write I/O to the client.</t>
          </dd>
          <dt>layout segment:</dt>
          <dd>
            <t>a sub-division of a layout.  That sub-division might be by the layout
iomode (see Sections 3.3.20 and 12.2.9 of <xref target="RFC8881"/>), a striping pattern
(see Section 13.3 of <xref target="RFC8881"/>), or requested byte range.</t>
          </dd>
          <dt>layout stateid:</dt>
          <dd>
            <t>a 128-bit quantity returned by a server that uniquely defines the
layout state provided by the server for a specific layout that describes
a layout type and file (see Section 12.5.2 of <xref target="RFC8881"/>).  Further,
Section 12.5.3 of <xref target="RFC8881"/> describes differences in handling between
layout stateids and other stateid types.</t>
          </dd>
          <dt>layout type:</dt>
          <dd>
            <t>a specification of both the storage protocol used to access the data
and the aggregation scheme used to lay out the file data on the underlying
storage devices.</t>
          </dd>
          <dt>loose coupling:</dt>
          <dd>
            <t>when the control protocol is a storage protocol.</t>
          </dd>
          <dt>(file) metadata:</dt>
          <dd>
            <t>the part of the file system object that contains various descriptive
data relevant to the file object, as opposed to the file data itself.
This could include the time of last modification, access time, EOF
position, etc.</t>
          </dd>
          <dt>metadata server (MDS):</dt>
          <dd>
            <t>the pNFS server that provides metadata information for a file system
object.  It is also responsible for generating, recalling, and revoking
layouts for file system objects, for performing directory operations,
and for performing I/O operations to regular files when the clients
direct these to the metadata server itself.</t>
          </dd>
          <dt>mirror:</dt>
          <dd>
            <t>a copy of a layout segment.  Note that if one copy of the mirror is
updated, then all copies must be updated.</t>
          </dd>
          <dt>non-systematic encoding:</dt>
          <dd>
            <t>an erasure coding scheme in which no output chunk is a verbatim
copy of the original data; every chunk is an encoded transformation of
the data block.</t>
          </dd>
          <dt>recalling a layout:</dt>
          <dd>
            <t>a graceful recall, via a callback, of a specific layout by the metadata
server to the client.  Graceful here means that the client would have
the opportunity to flush any WRITEs, etc., before returning the layout
to the metadata server.</t>
          </dd>
          <dt>revoking a layout:</dt>
          <dd>
            <t>an invalidation of a specific layout by the metadata server.
Once revocation occurs, the metadata server will not accept as valid any
reference to the revoked layout, and a storage device will not accept
any client access based on the layout.</t>
          </dd>
          <dt>resilvering:</dt>
          <dd>
            <t>the act of rebuilding a mirrored copy of a layout segment from a
known good copy of the layout segment.  Note that this can also be done
to create a new mirrored copy of the layout segment.</t>
          </dd>
          <dt>rsize:</dt>
          <dd>
            <t>the data transfer buffer size used for READs.</t>
          </dd>
          <dt>stateid:</dt>
          <dd>
            <t>a 128-bit quantity returned by a server that uniquely defines the set
of locking-related state provided by the server.  Stateids may designate
state related to open files, byte-range locks, delegations, or layouts.</t>
          </dd>
          <dt>storage device:</dt>
          <dd>
            <t>the target to which clients may direct I/O requests when they hold
an appropriate layout.  See Section 2.1 of <xref target="RFC8434"/> for further
discussion of the difference between a data server and a storage device.</t>
          </dd>
          <dt>storage protocol:</dt>
          <dd>
            <t>the protocol used by clients to do I/O operations to the storage
device.  Each layout type specifies the set of storage protocols.</t>
          </dd>
          <dt>systematic encoding:</dt>
          <dd>
            <t>an erasure coding scheme in which the original data chunks appear
unmodified in the output, accompanied by the computed redundant chunks.</t>
          </dd>
          <dt>tight coupling:</dt>
          <dd>
            <t>an arrangement in which the control protocol is one designed
specifically for control communication.  It may be either a proprietary
protocol adapted specifically to a particular metadata server or a
protocol based on a Standards Track document.</t>
          </dd>
          <dt>uid:</dt>
          <dd>
            <t>the user id, a numeric value that identifies which user owns a file.</t>
          </dd>
          <dt>write hole:</dt>
          <dd>
            <t>A write hole is a data corruption scenario in which chunks can be
left mutually inconsistent, e.g., when two clients write to the same
chunk concurrently or when a client is interrupted while overwriting an
existing chunk of data.</t>
          </dd>
          <dt>wsize:</dt>
          <dd>
            <t>the data transfer buffer size used for WRITEs.</t>
          </dd>
        </dl>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
    <section anchor="coupling-of-storage-devices">
      <name>Coupling of Storage Devices</name>
      <t>A server implementation may choose either a loosely coupled model or a
tightly coupled model between the metadata server and the storage devices.
<xref target="RFC8434"/> describes the general problems facing pNFS implementations.
This document details how the new flexible file layout type addresses
these issues.  To implement the tightly coupled model, a control protocol
has to be defined.  As the flexible file layout imposes no special
requirements on the client, the control protocol will need to provide:</t>
      <ol spacing="normal" type="1"><li>
          <t>management of both security and LAYOUTCOMMITs and</t>
        </li>
        <li>
          <t>a global stateid model and management of these stateids.</t>
        </li>
      </ol>
      <t>When implementing the loosely coupled model, the only control protocol
will be a version of NFS, with no ability to provide a global stateid
model or to prevent clients from using layouts inappropriately.  To enable
client use in that environment, this document will specify how security,
state, and locking are to be managed.</t>
      <section anchor="layoutcommit">
        <name>LAYOUTCOMMIT</name>
        <t>Regardless of the coupling model, the metadata server has the
responsibility, upon receiving a LAYOUTCOMMIT (see Section 18.42 of
<xref target="RFC8881"/>), to ensure that the semantics of pNFS are respected (see
Section 3.1 of <xref target="RFC8434"/>).  These include a requirement that data
written to a data storage device be stable before the occurrence of
the LAYOUTCOMMIT.</t>
        <t>It is the responsibility of the client to make sure the data file is
stable before the metadata server begins to query the storage devices
about the changes to the file.  If any WRITE to a storage device did not
result in stable_how being FILE_SYNC, a LAYOUTCOMMIT to the metadata
server <bcp14>MUST</bcp14> be preceded by a COMMIT to the storage devices written to.
Note that if the client has not done a COMMIT to the storage device, then
the LAYOUTCOMMIT might not be synchronized to the last WRITE operation
to the storage device.</t>
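The required ordering can be sketched as follows; writes, ds_commit, and mds_layoutcommit are hypothetical stand-ins for a client's per-device WRITE bookkeeping and for the COMMIT and LAYOUTCOMMIT operations.

```python
FILE_SYNC4 = 2   # stable_how4 value from RFC 8881
                 # (UNSTABLE4 = 0, DATA_SYNC4 = 1, FILE_SYNC4 = 2)

def commit_then_layoutcommit(writes, ds_commit, mds_layoutcommit):
    """Before LAYOUTCOMMIT, COMMIT to every storage device that
    acknowledged any WRITE with less than FILE_SYNC stability, so
    the data files are stable before the MDS queries the devices."""
    for ds, stable_hows in writes.items():
        if any(s != FILE_SYNC4 for s in stable_hows):
            ds_commit(ds)          # make the data file stable first
    mds_layoutcommit()             # only now is LAYOUTCOMMIT safe
```
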
      </section>
      <section anchor="sec-Fencing-Clients">
        <name>Fencing Clients from the Storage Device</name>
        <t>With loosely coupled storage devices, the metadata server uses synthetic
uids (user ids) and gids (group ids) for the data file, where the uid
owner of the data file is allowed read/write access and the gid owner
is allowed read-only access.  As part of the layout (see ffv2ds_user
and ffv2ds_group in <xref target="sec-ffv2_layout"/>), the client is provided
with the user and group to be used in the Remote Procedure Call
(RPC) <xref target="RFC5531"/> credentials needed to access the data file.
Fencing off of clients is achieved by the metadata server changing
the synthetic uid and/or gid owners of the data file on the storage
device to implicitly revoke the outstanding RPC credentials.  A
client presenting the wrong credential for the desired access will
get an NFS4ERR_ACCESS error.</t>
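A minimal sketch of the storage device's resulting access check, assuming plain owner/group mode-bit semantics; Nfs4ErrAccess is a hypothetical name modeling the NFS4ERR_ACCESS error.

```python
class Nfs4ErrAccess(Exception):
    """Models the NFS4ERR_ACCESS error from the storage device."""

def check_access(file_uid, file_gid, cred_uid, cred_gid, want_write):
    """The synthetic uid owner gets read/write; the synthetic gid
    owner gets read-only; any other credential is rejected.  Fencing
    works by changing file_uid/file_gid, which implicitly revokes
    every outstanding credential for the data file."""
    if cred_uid == file_uid:
        return True
    if cred_gid == file_gid and not want_write:
        return True
    raise Nfs4ErrAccess()
```

After the metadata server fences the file by changing its owners (e.g., from 19452:28418 to 19453:28419 in the example below), a previously valid credential raises Nfs4ErrAccess.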
        <t>With this loosely coupled model, the metadata server is not able to fence
off a single client; it is forced to fence off all clients.  However,
as the other clients react to the fencing, returning their layouts and
trying to get new ones, the metadata server can hand out a new uid and
gid to allow access.</t>
        <t>It is <bcp14>RECOMMENDED</bcp14> to implement common access control methods at the
storage device file system to allow only the metadata server root
(super user) access to the storage device and to set the owner of all
directories holding data files to the root user.  This approach provides
a practical model to enforce access control and fence off cooperative
clients, but it cannot protect against malicious clients; hence, it
provides a level of security equivalent to AUTH_SYS.  It is <bcp14>RECOMMENDED</bcp14>
that the communication between the metadata server and storage device
be secure from eavesdroppers and man-in-the-middle protocol tampering.
The security measure could be physical security (e.g., the servers
are co-located in a physically secure area), encrypted communications,
or some other technique.</t>
        <t>With tightly coupled storage devices, the metadata server sets the
user and group owners, mode bits, and Access Control List (ACL) of
the data file to be the same as the metadata file.  The client must then
authenticate with the storage device and go through the same authorization
process it would go through via the metadata server.  In the case of
tight coupling, fencing is the responsibility of the control protocol and
is not described in detail in this document.  However, implementations
of the tightly coupled locking model (see <xref target="sec-state-locking"/>) will
need a way to prevent access by certain clients to specific files by
invalidating the corresponding stateids on the storage device.  In such
a scenario, the client will be given an error of NFS4ERR_BAD_STATEID.</t>
        <t>The client need not know the model used between the metadata server and
the storage device.  It need only react consistently to any errors in
interacting with the storage device.  It should both return the layout
and error to the metadata server and ask for a new layout.  At that point,
the metadata server can either hand out a new layout, hand out no layout
(forcing the I/O through it), or deny the client further access to
the file.</t>
        <section anchor="implementation-notes-for-synthetic-uidsgids">
          <name>Implementation Notes for Synthetic uids/gids</name>
          <t>The selection method for the synthetic uids and gids to be used for
fencing in loosely coupled storage devices is strictly an implementation
issue.  That is, an administrator might restrict a range of such ids
available to the Lightweight Directory Access Protocol (LDAP) 'uid' field
<xref target="RFC4519"/>.  The administrator might also be able to choose an id that
would never be used to grant access.  Then, when the metadata server had
a request to access a file, a SETATTR would be sent to the storage device
to set the owner and group of the data file.  The user and group might
be selected in a round-robin fashion from the range of available ids.</t>
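One possible shape of such an allocator, purely as an illustration (the selection method is strictly an implementation issue); it also skips id 0, which under AUTH_SYS typically either grants super access or maps to an anonymous id.

```python
import itertools

def synthetic_id_allocator(id_range, restricted_id):
    """Hand out synthetic ids round-robin from the administrator's
    range, never yielding id 0 or the restricted id reserved for
    fencing (which must never be granted access)."""
    usable = [i for i in id_range if i not in (0, restricted_id)]
    return itertools.cycle(usable)
```
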
          <t>Those ids would be sent back as ffv2ds_user and ffv2ds_group to the
client, who would present them as the RPC credentials to the storage
device.  When the client is done accessing the file and the metadata
server knows that no other client is accessing the file, it can
reset the owner and group to restrict access to the data file.</t>
          <t>When the metadata server wants to fence off a client, it changes the
synthetic uid and/or gid to the restricted ids.  Note that using a
restricted id ensures that there is a change of owner and at least one
id available that never gets allowed access.</t>
          <t>Under an AUTH_SYS security model, synthetic uids and gids of 0 <bcp14>SHOULD</bcp14> be
avoided.  These typically either grant super access to files on a storage
device or are mapped to an anonymous id.  In the first case, even if the
data file is fenced, the client might still be able to access the file.
In the second case, multiple ids might be mapped to the anonymous ids.</t>
        </section>
        <section anchor="example-of-using-synthetic-uidsgids">
          <name>Example of using Synthetic uids/gids</name>
          <t>The user loghyr creates a file "ompha.c" on the metadata server, which
then creates a corresponding data file on the storage device.</t>
          <t>The metadata server entry may look like:</t>
          <figure anchor="fig-meta-ompha">
            <name>Metadata's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r--r--    1 loghyr  staff    1697 Dec  4 11:31 ompha.c
]]></sourcecode>
          </figure>
          <t>On the storage device, the file may be assigned some unpredictable
synthetic uid/gid to deny access:</t>
          <figure anchor="fig-data-ompha">
            <name>Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19452   28418    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>When the file is opened on a client and accessed, the client will
try to get a layout for the data file.  Since the layout knows nothing
about the user (and does not care), it does not matter whether the user
loghyr or garbo opens the file.  The client has to present a uid of 19452
to get write permission.  If it presents any other value for the uid,
then it must give a gid of 28418 to get read access.</t>
          <t>Further, if the metadata server decides to fence the file, it should
change the uid and/or gid such that these values neither match earlier
values for that file nor match a predictable change based on an earlier
fencing.</t>
          <figure anchor="fig-fenced-ompha">
            <name>Fenced Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19453   28419    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>The set of synthetic gids on the storage device should be selected such
that there is no mapping in any of the name services used by the storage
device, i.e., each group should have no members.</t>
          <t>If the layout segment has an iomode of LAYOUTIOMODE4_READ, then the
metadata server should return a synthetic uid that is not set on the
storage device.  Only the synthetic gid would be valid.</t>
          <t>The client is thus solely responsible for enforcing file permissions
in a loosely coupled model.  To allow loghyr write access, it will send
an RPC to the storage device with a credential of 19452:28418.  To allow
garbo read access, it will send an RPC to the storage device with a
credential whose gid is 28418.  The value of the uid does not matter as
long as it is not the synthetic uid granted when getting the layout.</t>
          <t>While pushing the enforcement of permission checking onto the client
may seem to weaken security, the client may already be responsible
for enforcing permissions before modifications are sent to a server.
With cached writes, the client is always responsible for tracking who is
modifying a file and making sure to not coalesce requests from multiple
users into one request.</t>
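The client-side check can be sketched as follows, assuming plain POSIX mode-bit semantics; the uid/gid values in the test are hypothetical, and the client issues the RPC with the synthetic credential only when this local check passes.

```python
import stat

def may_access(mode, file_uid, file_gid, uid, gids, want_write):
    """POSIX-style mode-bit check the client applies locally, using
    the metadata file's owners and mode, before issuing I/O with the
    synthetic credential from the layout."""
    if uid == file_uid:
        bit = stat.S_IWUSR if want_write else stat.S_IRUSR
    elif file_gid in gids:
        bit = stat.S_IWGRP if want_write else stat.S_IRGRP
    else:
        bit = stat.S_IWOTH if want_write else stat.S_IROTH
    return bool(mode & bit)
```

Because the storage device cannot distinguish users behind the synthetic credential, the client must also keep requests from different users in separate RPCs rather than coalescing them.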
        </section>
      </section>
      <section anchor="sec-state-locking">
        <name>State and Locking Models</name>
        <t>An implementation can always be deployed as a loosely coupled model.
There is, however, no way for a storage device to indicate over an NFS
protocol that it can definitively participate in a tightly coupled model:</t>
        <ul spacing="normal">
          <li>
            <t>Storage devices implementing the NFSv3 and NFSv4.0 protocols are
always treated as loosely coupled.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do not set the
EXCHGID4_FLAG_USE_PNFS_DS flag in the EXCHANGE_ID reply are indicating
that they are to be treated as loosely coupled.  From the locking
viewpoint, they are treated in the same way as NFSv4.0 storage devices.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do identify themselves by setting
the EXCHGID4_FLAG_USE_PNFS_DS flag in the EXCHANGE_ID reply can
potentially be tightly coupled.  They would use a back-end control
protocol to implement the global stateid model as described in <xref target="RFC8881"/>.</t>
          </li>
        </ul>
        <t>A storage device would have to be either discovered or advertised over
the control protocol to enable a tightly coupled model.</t>
        <section anchor="loosely-coupled-locking-model">
          <name>Loosely Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
When an NFSv4 version is used as the data access protocol, the metadata
server may make stateid-related requests of the storage devices.  However,
it is not required to do so, and the resulting stateids are known only
to the metadata server and the storage device.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with by the metadata server.  Stateids are
selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server may need to interact with the storage device to
locate the file to be opened, but no locking-related functionality
need be used on the storage device.</t>
            </li>
            <li>
              <t>OPEN_DOWNGRADE and CLOSE only require local execution on the
metadata server.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and only
used on the metadata server.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  No storage device
involvement is required.</t>
            </li>
            <li>
              <t>TEST_STATEID and FREE_STATEID are processed locally on the
metadata server, without storage device involvement.</t>
            </li>
          </ul>
          <t>All I/O operations to the storage device are done using the anonymous
stateid.  Thus, the storage device has no information about the openowner
and lockowner responsible for issuing a particular I/O operation.
As a result:</t>
          <ul spacing="normal">
            <li>
              <t>Mandatory byte-range locking cannot be supported because the
storage device has no way of distinguishing I/O done on behalf of
the lock owner from those done by others.</t>
            </li>
            <li>
              <t>Enforcement of share reservations is the responsibility of the
client.  Even though I/O is done using the anonymous stateid, the
client must ensure that it has a valid stateid associated with the
openowner.</t>
            </li>
          </ul>
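A sketch of the client-side bookkeeping this implies; OpenState and its methods are hypothetical names for a client implementation's internal state, not protocol elements.

```python
class OpenState:
    """Since all I/O to the storage device uses the anonymous
    stateid, the client itself must refuse I/O that is not backed by
    a valid open stateid for the issuing openowner."""

    def __init__(self):
        self.open_stateids = {}        # openowner -> (stateid, access)

    def record_open(self, openowner, stateid, access):
        self.open_stateids[openowner] = (stateid, access)

    def may_issue_io(self, openowner, want_write):
        entry = self.open_stateids.get(openowner)
        if entry is None:
            return False               # no valid open stateid: refuse
        _stateid, access = entry
        return access == "rw" or (access == "r" and not want_write)
```
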
          <t>In the event that a stateid is revoked, the metadata server is responsible
for preventing client access, since it has no way of being sure that
the client is aware that the stateid in question has been revoked.</t>
          <t>As the client never receives a stateid generated by a storage device,
there is no client lease on the storage device and no prospect of lease
expiration, even when access is via NFSv4 protocols.  Clients will
have leases on the metadata server.  In dealing with lease expiration,
the metadata server may need to use fencing to prevent revoked stateids
from being relied upon by a client unaware of the fact that they have
been revoked.</t>
        </section>
        <section anchor="tightly-coupled-locking-model">
          <name>Tightly Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
These stateids must be made known to the storage device using control
protocol facilities, the details of which are not discussed in this
document.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with primarily on the metadata server.  Stateids
are selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server needs to interact with the storage device to
locate the file to be opened and to make the storage device aware of
the association between the metadata-server-chosen stateid and the
client and openowner that it represents.  OPEN_DOWNGRADE and CLOSE
are executed initially on the metadata server, but the state change
made must be propagated to the storage device.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and are
available for use on the metadata server.  Because I/O operations
are allowed to present lock stateids, the metadata server needs the
ability to make the storage device aware of the association between
the metadata-server-chosen stateid and the corresponding open stateid
it is associated with.</t>
            </li>
            <li>
              <t>Mandatory byte-range locks can be supported when both the metadata
server and the storage devices have the appropriate support.  As in
the case of advisory byte-range locks, these are assigned by the
metadata server and are available for use on the metadata server.
To enable mandatory lock enforcement on the storage device, the
metadata server needs the ability to make the storage device aware
of the association between the metadata-server-chosen stateid and
the client, openowner, and lock (i.e., lockowner, byte-range, and
lock-type) that it represents.  Because I/O operations are allowed
to present lock stateids, this information needs to be propagated to
all storage devices to which I/O might be directed rather than only
to storage devices that contain the locked region.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  Because I/O operations
are allowed to present delegation stateids, the metadata server
requires the ability:  </t>
              <ol spacing="normal" type="1"><li>
                  <t>to make the storage device aware of the association between
the metadata-server-chosen stateid and the filehandle and
delegation type it represents</t>
                </li>
                <li>
                  <t>to break such an association.</t>
                </li>
              </ol>
            </li>
            <li>
              <t>TEST_STATEID is processed locally on the metadata server, without
storage device involvement.</t>
            </li>
            <li>
              <t>FREE_STATEID is processed on the metadata server, but the metadata
server requires the ability to propagate the request to the
corresponding storage devices.</t>
            </li>
          </ul>
          <t>Because the client will possess and use stateids valid on the storage
device, there will be a client lease on the storage device, and the
possibility of lease expiration does exist.  The best approach for the
storage device is to retain these locks as a courtesy.  However, if it
does not do so, control protocol facilities need to provide the means
to synchronize lock state between the metadata server and storage device.</t>
          <t>Clients will also have leases on the metadata server that are subject
to expiration.  In dealing with lease expiration, the metadata server
would be expected to use control protocol facilities enabling it to
invalidate revoked stateids on the storage device.  In the event the
client is not responsive, the metadata server may need to use fencing
to prevent revoked stateids from being acted upon by the storage device.</t>
        </section>
      </section>
    </section>
    <section anchor="xdr-description-of-the-flexible-file-layout-type">
      <name>XDR Description of the Flexible File Layout Type</name>
      <t>This document contains the External Data Representation (XDR)
<xref target="RFC4506"/> description of the flexible file layout type.  The XDR
description is embedded in this document in a way that makes it simple
for the reader to extract into a ready-to-compile form.  The reader can
feed this document into the shell script in <xref target="fig-extract"/> to produce
the machine-readable XDR description of the flexible file layout type.</t>
      <figure anchor="fig-extract">
        <name>extract.sh</name>
        <sourcecode type="shell"><![CDATA[
#!/bin/sh
grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'
]]></sourcecode>
      </figure>
      <t>That is, if the above script is stored in a file called "extract.sh"
and this document is in a file called "spec.txt", then the reader can
run the script as in <xref target="fig-extract-example"/>.</t>
      <figure anchor="fig-extract-example">
        <name>Example use of extract.sh</name>
        <sourcecode type="shell"><![CDATA[
sh extract.sh < spec.txt > flex_files2_prot.x
]]></sourcecode>
      </figure>
      <t>The effect of the script is to remove from each line the leading
white space and the sentinel sequence "///".</t>
      <t>XDR descriptions with the sentinel sequence are embedded throughout
the document.</t>
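      <t>As a non-normative illustration of the sentinel convention, the
extraction can also be expressed in a few lines of Python; the function
name is hypothetical, and the shell script above remains the extraction
method of record:</t>
      <figure>
        <name>Sketch: extracting sentinel lines in Python</name>
        <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: yield the XDR payload of every line that
# carries the "///" sentinel, mirroring the grep/sed pipeline above.
import re

def extract_xdr(lines):
    """Yield XDR lines from document lines marked with '///'."""
    for line in lines:
        # Leading white space, the sentinel, and one optional space
        # are stripped; everything after them is the XDR payload.
        m = re.match(r"^\s*/// ?(.*)$", line)
        if m:
            yield m.group(1)
```
]]></sourcecode>
      </figure>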
      <t>Note that the XDR code contained in this document depends on types
from the NFSv4.1 nfs4_prot.x file <xref target="RFC5662"/>.  This includes both NFS
types that end with a 4, such as offset4 and length4, as well as
more generic types such as uint32_t and uint64_t.</t>
      <t>While the XDR can be appended to that from <xref target="RFC7863"/>, the various
code snippets belong in their respective areas of that XDR.</t>
    </section>
    <section anchor="device-addressing-and-discovery">
      <name>Device Addressing and Discovery</name>
      <t>Data operations to a storage device require the client to know the
network address of the storage device.  The NFSv4.1+ GETDEVICEINFO
operation (Section 18.40 of <xref target="RFC8881"/>) is used by the client to
retrieve that information.</t>
      <section anchor="sec-ff_device_addr4">
        <name>ff_device_addr4</name>
        <t>The ff_device_addr4 data structure (see <xref target="fig-ff_device_addr4"/>)
is returned by the server as the layout-type-specific opaque field
da_addr_body in the device_addr4 structure by a successful GETDEVICEINFO
operation.</t>
        <figure anchor="fig-ff_device_versions4">
          <name>ff_device_versions4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ff_device_versions4 {
   ///         uint32_t        ffdv_version;
   ///         uint32_t        ffdv_minorversion;
   ///         uint32_t        ffdv_rsize;
   ///         uint32_t        ffdv_wsize;
   ///         bool            ffdv_tightly_coupled;
   /// };
   ///
]]></sourcecode>
        </figure>
        <figure anchor="fig-ff_device_addr4">
          <name>ff_device_addr4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ff_device_addr4 {
   ///         multipath_list4     ffda_netaddrs;
   ///         ff_device_versions4 ffda_versions<>;
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>The ffda_netaddrs field is used to locate the storage device.  It
<bcp14>MUST</bcp14> be set by the server to a list holding one or more of the device
network addresses.</t>
        <t>The ffda_versions array allows the metadata server to present choices
as to NFS version, minor version, and coupling strength to the
client.  The ffdv_version and ffdv_minorversion represent the NFS
protocol to be used to access the storage device.  This layout
specification defines the semantics for ffdv_version values 3 and 4.  If
ffdv_version equals 3, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to
0 and ffdv_tightly_coupled to false.  The client <bcp14>MUST</bcp14> then access the
storage device using the NFSv3 protocol <xref target="RFC1813"/>.  If ffdv_version
equals 4, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to one of the
NFSv4 minor version numbers, and the client <bcp14>MUST</bcp14> access the storage
device using NFSv4 with the specified minor version.</t>
        <t>Note that while the client might determine, when it gets the
device list from the metadata server, that it cannot use any of the
configured combinations of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled, there is no way for it to indicate to the
metadata server which device is version incompatible.  However, if the
client waits until it retrieves the layout from the metadata server, it
can at that time clearly identify the storage device in question (see
<xref target="sec-version-errors"/>).</t>
        <t>The ffdv_rsize and ffdv_wsize are used to communicate the maximum
rsize and wsize supported by the storage device.  As the storage
device can have a different rsize or wsize than the metadata server,
the ffdv_rsize and ffdv_wsize allow the metadata server to
communicate that information on behalf of the storage device.</t>
        <t>ffdv_tightly_coupled informs the client as to whether or not the
metadata server is tightly coupled with the storage devices.  Note
that even if the data protocol is at least NFSv4.1, it may still be
the case that there is loose coupling in effect.  If
ffdv_tightly_coupled is not set, then the client <bcp14>MUST</bcp14> commit writes
to the storage devices for the file before sending a LAYOUTCOMMIT to
the metadata server.  That is, the writes <bcp14>MUST</bcp14> be committed by the
client to stable storage via issuing WRITEs with stable_how ==
FILE_SYNC or by issuing a COMMIT after WRITEs with stable_how !=
FILE_SYNC (see Section 3.3.7 of <xref target="RFC1813"/>).</t>
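        <t>The version and coupling rules above can be sketched in a short,
non-normative Python helper; the dictionary keys and the function name
are illustrative only and mirror the ff_device_versions4 fields:</t>
        <figure>
          <name>Sketch: selecting a storage protocol version</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: client-side selection of one usable
# (version, minorversion, tightly_coupled) combination from the
# ffda_versions array. Dict keys mirror ff_device_versions4 fields.

def select_storage_protocol(versions):
    """Return the first usable combination, or None if none fits."""
    for v in versions:
        if v["version"] == 3:
            # NFSv3: the server MUST have set minorversion to 0 and
            # tightly_coupled to false; anything else is malformed.
            if v["minorversion"] == 0 and not v["tightly_coupled"]:
                return (3, 0, False)
        elif v["version"] == 4:
            # NFSv4: the client accesses the device with the specified
            # minor version, assuming here it implements 4.0-4.2.
            if v["minorversion"] in (0, 1, 2):
                return (4, v["minorversion"], v["tightly_coupled"])
    return None
```
]]></sourcecode>
        </figure>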
      </section>
      <section anchor="storage-device-multipathing">
        <name>Storage Device Multipathing</name>
        <t>The flexible file layout type supports multipathing to multiple
storage device addresses.  Storage-device-level multipathing is used
for bandwidth scaling via trunking and for higher availability of use
in the event of a storage device failure.  Multipathing allows the
client to switch to another storage device address that may be that
of another storage device that is exporting the same data stripe
unit, without having to contact the metadata server for a new layout.</t>
        <t>To support storage device multipathing, ffda_netaddrs contains an
array of one or more storage device network addresses.  This array
(data type multipath_list4) represents a list of storage devices
(each identified by a network address), with the possibility that
some storage device will appear in the list multiple times.</t>
        <t>The client is free to use any of the network addresses as a
destination to send storage device requests.  If some network
addresses are less desirable paths to the data than others, then the
metadata server <bcp14>SHOULD NOT</bcp14> include those network addresses in
ffda_netaddrs.  If less desirable network addresses exist to provide
failover, the <bcp14>RECOMMENDED</bcp14> method to offer the addresses is to provide
them in a replacement device-ID-to-device-address mapping or a
replacement device ID.  When a client finds no response from the
storage device using all addresses available in ffda_netaddrs, it
<bcp14>SHOULD</bcp14> send a GETDEVICEINFO to attempt to replace the existing
device-ID-to-device-address mappings.  If the metadata server detects
that all network paths represented by ffda_netaddrs are unavailable,
the metadata server <bcp14>SHOULD</bcp14> send a CB_NOTIFY_DEVICEID (if the client
has indicated it wants device ID notifications for changed device
IDs) to change the device-ID-to-device-address mappings to the
available addresses.  If the device ID itself will be replaced, the
metadata server <bcp14>SHOULD</bcp14> recall all layouts with the device ID and thus
force the client to get new layouts and device ID mappings via
LAYOUTGET and GETDEVICEINFO.</t>
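        <t>The failover sequence above can be sketched, non-normatively, as
follows; send_request and getdeviceinfo are hypothetical callables
standing in for the client's RPC machinery and a GETDEVICEINFO
round trip, and are not operations defined by this document:</t>
        <figure>
          <name>Sketch: multipath failover</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: try every address in ffda_netaddrs, and only
# after all of them fail, refresh the device-ID-to-address mapping.

def issue_io(netaddrs, send_request, getdeviceinfo):
    for addr in netaddrs:
        try:
            return send_request(addr)
        except ConnectionError:
            continue  # dead or undesirable path; try the next one
    # No address responded: SHOULD send GETDEVICEINFO to attempt to
    # replace the existing device-ID-to-device-address mappings.
    for addr in getdeviceinfo():
        try:
            return send_request(addr)
        except ConnectionError:
            continue
    raise ConnectionError("storage device unreachable on all paths")
```
]]></sourcecode>
        </figure>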
        <t>Generally, if two network addresses appear in ffda_netaddrs, they
will designate the same storage device.  When the storage device is
accessed over NFSv4.1 or a higher minor version, the two storage
device addresses will support the implementation of client ID or
session trunking (the latter is <bcp14>RECOMMENDED</bcp14>) as defined in <xref target="RFC8881"/>.
The two storage device addresses will share the same server owner or
major ID of the server owner.  It is not always necessary for the two
storage device addresses to designate the same storage device with
trunking being used.  For example, the data could be read-only and
consist of exact replicas.</t>
      </section>
    </section>
    <section anchor="flexible-file-version-2-layout-type">
      <name>Flexible File Version 2 Layout Type</name>
      <t>The original layouttype4 introduced in <xref target="RFC5662"/> is modified as shown in
<xref target="fig-orig-layout"/>.</t>
      <figure anchor="fig-orig-layout">
        <name>The original layout type</name>
        <sourcecode type="xdr"><![CDATA[
       enum layouttype4 {
           LAYOUT4_NFSV4_1_FILES   = 1,
           LAYOUT4_OSD2_OBJECTS    = 2,
           LAYOUT4_BLOCK_VOLUME    = 3,
           LAYOUT4_FLEX_FILES      = 4,
           LAYOUT4_FLEX_FILES_V2   = 5
       };

       struct layout_content4 {
           layouttype4             loc_type;
           opaque                  loc_body<>;
       };

       struct layout4 {
           offset4                 lo_offset;
           length4                 lo_length;
           layoutiomode4           lo_iomode;
           layout_content4         lo_content;
       };
]]></sourcecode>
      </figure>
      <t>This document defines structures associated with the layouttype4
value LAYOUT4_FLEX_FILES_V2.  <xref target="RFC8881"/> specifies the loc_body structure
as an XDR type "opaque".  The opaque layout is uninterpreted by the
generic pNFS client layers but is interpreted by the flexible file
layout type implementation.  This section defines the structure of
this otherwise opaque value, ffv2_layout4.</t>
      <section anchor="ffv2codingtype4">
        <name>ffv2_coding_type4</name>
        <figure anchor="fig-ffv2_coding_type4">
          <name>The coding type</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_coding_type4 {
   ///     FFV2_CODING_MIRRORED       = 0x1
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type4 (see <xref target="fig-ffv2_coding_type4"/>) establishes
a new IANA registry, the 'Flexible Files Version 2 Erasure Coding
Type Registry'.  I.e., instead of defining a new Layout Type for
each Erasure Coding algorithm, a new Erasure Coding Type is defined.
Except for FFV2_CODING_MIRRORED, each of the types is expected to
employ the new operations in this document.</t>
        <t>FFV2_CODING_MIRRORED offers replication of data and not integrity of
data.  As such, it does not need operations like CHUNK_WRITE (see
<xref target="sec-CHUNK_WRITE"/>).</t>
      </section>
      <section anchor="sec-ffv2_layout">
        <name>ffv2_layout4</name>
        <section anchor="sec-ffv2_flags4">
          <name>ffv2_flags4</name>
          <figure anchor="fig-ffv2_flags4">
            <name>The ffv2_flags4</name>
            <sourcecode type="xdr"><![CDATA[
   /// const FFV2_FLAGS_NO_LAYOUTCOMMIT   = FF_FLAGS_NO_LAYOUTCOMMIT;
   /// const FFV2_FLAGS_NO_IO_THRU_MDS    = FF_FLAGS_NO_IO_THRU_MDS;
   /// const FFV2_FLAGS_NO_READ_IO        = FF_FLAGS_NO_READ_IO;
   /// const FFV2_FLAGS_WRITE_ONE_MIRROR  = FF_FLAGS_WRITE_ONE_MIRROR;
   /// const FFV2_FLAGS_ONLY_ONE_WRITER   = 0x00000010;
   ///
   /// typedef uint32_t            ffv2_flags4;
]]></sourcecode>
          </figure>
          <t>The ffv2_flags4 in <xref target="fig-ffv2_flags4"/>  is a bitmap that allows the
metadata server to inform the client of particular conditions that
may result from more or less tight coupling of the storage devices.</t>
          <dl>
            <dt>FFV2_FLAGS_NO_LAYOUTCOMMIT:</dt>
            <dd>
              <t>can be set to indicate that the client is not required to send
LAYOUTCOMMIT to the metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_IO_THRU_MDS:</dt>
            <dd>
              <t>can be set to indicate that the client should not send I/O
operations to the metadata server.  That is, even if the client
could determine that there was a network disconnect to a storage
device, the client should not try to proxy the I/O through the
metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_READ_IO:</dt>
            <dd>
              <t>can be set to indicate that the client should not send READ
requests with the layouts of iomode LAYOUTIOMODE4_RW.  Instead, it
should request a layout of iomode LAYOUTIOMODE4_READ from the
metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_WRITE_ONE_MIRROR:</dt>
            <dd>
              <t>can be set to indicate that the client only needs to update one
of the mirrors (see <xref target="sec-CSM"/>).</t>
            </dd>
            <dt>FFV2_FLAGS_ONLY_ONE_WRITER:</dt>
            <dd>
              <t>can be set to indicate that the client is the only writer and thus
only needs to use CHUNK_WRITE to update the chunks in the data file,
i.e., keeping the ability to roll back in case of a write hole caused
by overwriting.  If this flag is not set, then the client <bcp14>MUST</bcp14>
write chunks with CHUNK_WRITE with the cwa_guard set in order to
prevent collisions across the data servers.</t>
            </dd>
          </dl>
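          <t>As a non-normative illustration, the bitmap tests implied by the
flag descriptions above can be sketched in Python; the numeric values
of the first four flags are those assigned to the corresponding
FF_FLAGS_* constants in <xref target="RFC8435"/>:</t>
          <figure>
            <name>Sketch: decoding ffv2_flags4</name>
            <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: decoding the ffv2_flags4 bitmap. The first
# four values are inherited from the FF_FLAGS_* constants of RFC 8435.

FFV2_FLAGS_NO_LAYOUTCOMMIT  = 0x00000001
FFV2_FLAGS_NO_IO_THRU_MDS   = 0x00000002
FFV2_FLAGS_NO_READ_IO       = 0x00000004
FFV2_FLAGS_WRITE_ONE_MIRROR = 0x00000008
FFV2_FLAGS_ONLY_ONE_WRITER  = 0x00000010

def must_layoutcommit(ffl_flags):
    # LAYOUTCOMMIT is required unless the server set NO_LAYOUTCOMMIT.
    return not (ffl_flags & FFV2_FLAGS_NO_LAYOUTCOMMIT)

def needs_cwa_guard(ffl_flags):
    # Per the flag description above, cwa_guard is required on
    # CHUNK_WRITE whenever ONLY_ONE_WRITER is not set.
    return not (ffl_flags & FFV2_FLAGS_ONLY_ONE_WRITER)
```
]]></sourcecode>
          </figure>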
        </section>
      </section>
      <section anchor="ffv2fileinfo4">
        <name>ffv2_file_info4</name>
        <figure anchor="fig-ffv2_file_info4">
          <name>The ffv2_file_info4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_file_info4 {
   ///     stateid4                fffi_stateid;
   ///     nfs_fh4                 fffi_fh_vers;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_file_info4 is a new structure to help with the stateid
issue discussed in Section 5.1 of <xref target="RFC8435"/>.  That is, in version 1
of the Flexible File Layout Type, the singleton ffds_stateid was
combined with the ffds_fh_vers array, forcing all NFSv4 versions to
share a single stateid.  In <xref target="fig-ffv2_file_info4"/>, each NFSv4
filehandle has a one-to-one correspondence with a stateid.</t>
      </section>
      <section anchor="ffv2dsflags4">
        <name>ffv2_ds_flags4</name>
        <figure anchor="fig-ffv2_ds_flags4">
          <name>The ffv2_ds_flags4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const FFV2_DS_FLAGS_ACTIVE        = 0x00000001;
   /// const FFV2_DS_FLAGS_SPARE         = 0x00000002;
   /// const FFV2_DS_FLAGS_PARITY        = 0x00000004;
   /// const FFV2_DS_FLAGS_REPAIR        = 0x00000008;
   /// typedef uint32_t            ffv2_ds_flags4;
]]></sourcecode>
        </figure>
        <t>The ffv2_ds_flags4 (in <xref target="fig-ffv2_ds_flags4"/>) flags detail the
state of the data servers.  Erasure Coding algorithms take either a
Systematic or a Non-Systematic approach.  In the Systematic approach,
the bits for integrity are placed among the data in the resulting
transformed chunk.  Such an implementation would typically see
FFV2_DS_FLAGS_ACTIVE and FFV2_DS_FLAGS_SPARE data servers.  The
FFV2_DS_FLAGS_SPARE ones allow the client to repair a payload without
engaging the metadata server.  I.e., if one of the FFV2_DS_FLAGS_ACTIVE
data servers did not respond to a WRITE_BLOCK, the client could fail
the chunk over to a FFV2_DS_FLAGS_SPARE data server.</t>
        <t>With the Non-Systematic approach, the data and integrity live on
different data servers.  Such an implementation would typically see
FFV2_DS_FLAGS_ACTIVE and FFV2_DS_FLAGS_PARITY data servers.  If the
implementation wanted to allow for local repair, it would also use
FFV2_DS_FLAGS_SPARE.</t>
        <t>The FFV2_DS_FLAGS_REPAIR flag can be used by the metadata server
to inform the client that the indicated data server is a replacement
data server as far as existing data is concerned.  <cref source="Tom">Fill in</cref></t>
        <t>See <xref target="Plank97"/> for further reference to storage layouts for coding.</t>
      </section>
      <section anchor="ffv2dataserver4">
        <name>ffv2_data_server4</name>
        <figure anchor="fig-ffv2_data_server4">
          <name>The ffv2_data_server4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_server4 {
   ///     deviceid4               ffv2ds_deviceid;
   ///     uint32_t                ffv2ds_efficiency;
   ///     ffv2_file_info4         ffv2ds_file_info<>;
   ///     fattr4_owner            ffv2ds_user;
   ///     fattr4_owner_group      ffv2ds_group;
   ///     ffv2_ds_flags4          ffv2ds_flags;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_server4 (in <xref target="fig-ffv2_data_server4"/>) describes a data
file and how to access it via the different NFS protocols.</t>
      </section>
      <section anchor="ffv2codingtypedata4">
        <name>ffv2_coding_type_data4</name>
        <figure anchor="fig-ffv2_coding_type_data4">
          <name>The ffv2_coding_type_data4</name>
          <sourcecode type="xdr"><![CDATA[
   /// union ffv2_coding_type_data4 switch
   ///         (ffv2_coding_type4 fctd_coding) {
   ///     case FFV2_CODING_MIRRORED:
   ///         void;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type_data4 (in <xref target="fig-ffv2_coding_type_data4"/>) carries
the erasure-coding-type-specific fields.  I.e., this is how a coding type
can communicate the needed counts of active, spare, parity, and repair
chunks.</t>
        <figure anchor="fig-ffv2_stripes4">
          <name>The stripes v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_striping {
   ///     FFV2_STRIPING_NONE = 0,
   ///     FFV2_STRIPING_SPARSE = 1,
   ///     FFV2_STRIPING_DENSE = 2
   /// };
   ///
   /// struct ffv2_stripes4 {
   ///         ffv2_data_server4       ffs_data_servers<>;
   /// };
]]></sourcecode>
        </figure>
      </section>
      <section anchor="ffv2mirror4">
        <name>ffv2_mirror4</name>
        <figure anchor="fig-ffv2_mirror4">
          <name>The ffv2_mirror4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_mirror4 {
   ///         ffv2_coding_type_data4  ffm_coding_type_data;
   ///         ffv2_key4               ffm_key;
   ///         ffv2_striping           ffm_striping;
   ///         uint32_t                ffm_striping_unit_size; // The minimum stripe unit size is 64 bytes.
   ///         uint32_t                ffm_client_id;
   ///         ffv2_stripes4           ffm_stripes<>; // Length of this array is the stripe count
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_mirror4 (in <xref target="fig-ffv2_mirror4"/>) describes the Flexible
File Layout Version 2 specific fields.  The ffm_client_id tells the
client which id to use when interacting with the data servers.</t>
        <t><cref source="Tom">Nuke ffm_client_id?</cref></t>
      </section>
      <section anchor="ffv2layout4">
        <name>ffv2_layout4</name>
        <figure anchor="fig-ffv2_layout4">
          <name>The ffv2_layout4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layout4 {
   ///     length4                 ffl_stripe_unit;
   ///     ffv2_mirror4            ffl_mirrors<>;
   ///     ffv2_flags4             ffl_flags;
   ///     uint32_t                ffl_stats_collect_hint;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layout4 (in <xref target="fig-ffv2_layout4"/>) describes the Flexible
File Layout Version 2.</t>
      </section>
      <section anchor="ffv2layouthint4">
        <name>ffv2_layouthint4</name>
        <figure anchor="fig-ffv2_layouthint4">
          <name>The ffv2_layouthint4</name>
          <sourcecode type="xdr"><![CDATA[
   /// union ffv2_mirrors_hint switch (ffv2_coding_type4 ffmh_type) {
   ///     case FFV2_CODING_MIRRORED:
   ///         void;
   /// };
   ///
   /// struct ffv2_layouthint4 {
   ///     ffv2_coding_type4 fflh_supported_types<>;
   ///     ffv2_mirrors_hint fflh_mirrors_hint;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layouthint4 (in <xref target="fig-ffv2_layouthint4"/>) describes the
layout_hint (see Section 5.12.4 of <xref target="RFC8881"/>) that the client can
provide to the metadata server.</t>
        <figure anchor="fig-ff_layout4">
          <name>The flex files layout type v1</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_data_server4 {
       deviceid4               ffds_deviceid;
       uint32_t                ffds_efficiency;
       stateid4                ffds_stateid;
       nfs_fh4                 ffds_fh_vers<>;
       fattr4_owner            ffds_user;
       fattr4_owner_group      ffds_group;
   };

   struct ff_mirror4 {
       ff_data_server4         ffm_data_servers<>;
   };

   struct ff_layout4 {
       length4                 ffl_stripe_unit;
       ff_mirror4              ffl_mirrors<>;
       ff_flags4               ffl_flags;
       uint32_t                ffl_stats_collect_hint;
   };
]]></sourcecode>
        </figure>
        <t>Note: In <xref target="fig-ffv2_mirror4"/>, ffm_coding_type_data is an
enumerated union (ffv2_coding_type_data4) with the payload of each arm
being defined by the protection type.</t>
        <t>The ff_layout4 structure (see <xref target="fig-ff_layout4"/>) specifies a layout
in that portion of the data file described in the current layout
segment.  It is either a single instance or a set of mirrored copies
of that portion of the data file.  When mirroring is in effect, it
protects against loss of data in layout segments.</t>
        <t>While not explicitly shown in <xref target="fig-ff_layout4"/>, each layout4
element returned in the logr_layout array of LAYOUTGET4res (see
Section 18.43.2 of <xref target="RFC8881"/>) describes a layout segment.  Hence,
each ff_layout4 also describes a layout segment.  It is possible
that the file is concatenated from more than one layout segment.
Each layout segment <bcp14>MAY</bcp14> represent different striping parameters.</t>
        <t>The ffl_stripe_unit field is the stripe unit size in use for the
current layout segment.  The number of stripes is given inside each
mirror by the number of elements in ffm_data_servers.  If the number
of stripes is one, then the value for ffl_stripe_unit <bcp14>MUST</bcp14> default
to zero.  The only supported mapping scheme is sparse and is detailed
in <xref target="sec-striping"/>.  Note that there is an assumption here that
both the stripe unit size and the number of stripes are the same
across all mirrors.</t>
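        <t>As a non-normative illustration of the sparse mapping scheme
referenced above, the following Python helper maps a file offset to a
stripe index; the function name is hypothetical, and the normative
details remain in the striping section:</t>
        <figure>
          <name>Sketch: sparse stripe mapping</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: map a file offset to a data-server index
# under sparse striping. stripe_count is the number of elements in
# ffm_data_servers; stripe_unit is ffl_stripe_unit.

def sparse_map(file_offset, stripe_unit, stripe_count):
    """Return (data_server_index, offset_in_data_file).

    With sparse striping, the data file is addressed with the same
    offset as the original file; only the data server choice varies.
    """
    if stripe_count == 1:
        # ffl_stripe_unit MUST default to zero in this case, so no
        # division is performed.
        return (0, file_offset)
    stripe_number = file_offset // stripe_unit
    return (stripe_number % stripe_count, file_offset)
```
]]></sourcecode>
        </figure>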
        <t>The ffl_mirrors field is the array of mirrored storage devices that
provide the storage for the current stripe; see <xref target="fig-parallel-fileystem"/>.</t>
        <t>The ffl_stats_collect_hint field provides a hint to the client on
how often the server wants it to report LAYOUTSTATS for a file.
The time is in seconds.</t>
        <figure anchor="fig-parallel-fileystem">
          <name>The Relationship between MDS and DSes</name>
          <artwork><![CDATA[
                +-----------+
                |           |
                |           |
                |   File    |
                |           |
                |           |
                +-----+-----+
                      |
     +-------------+-----+----------------+
     |                   |                |
+----+-----+       +-----+----+       +---+----------+
| Mirror 1 |       | Mirror 2 |       | Mirror 3     |
| MIRRORED |       | MIRRORED |       | REED_SOLOMON |
+----+-----+       +-----+----+       +---+----------+
     |                   |                |
+-----------+      +-----------+      +-----------+
|+-----------+     |+-----------+     |+-----------+
||+-----------+    ||+-----------+    ||+-----------+
+||  Storage  |    +||  Storage  |    +||  Storage  |
 +|  Devices  |     +|  Devices  |     +|  Devices  |
  +-----------+      +-----------+      +-----------+
]]></artwork>
        </figure>
        <t>The ffl_mirrors field represents an array of state information for
each mirrored copy of the current layout segment.  Each element is
described by an ffv2_mirror4 type.</t>
        <t>ffv2ds_deviceid provides the deviceid of the storage device holding
the data file.</t>
        <t>ffv2ds_file_info is an array describing the data file for each of
the available NFS versions on the given storage device.  There <bcp14>MUST</bcp14>
be exactly as many elements in ffv2ds_file_info as there are in
ffda_versions.  Each element of the array corresponds to a particular
combination of ffdv_version, ffdv_minorversion, and ffdv_tightly_coupled
provided for the device, with fffi_fh_vers holding the filehandle for
that combination.  The array allows for server implementations
that have different filehandles for different combinations of
version, minor version, and coupling strength.  See <xref target="sec-version-errors"/>
for how to handle versioning issues between the client and storage
devices.</t>
        <t>For tight coupling, fffi_stateid provides the stateid to be used
by the client to access the file.  For loose coupling and an NFSv4
storage device, the client will have to use an anonymous stateid
to perform I/O on the storage device.  With no control protocol,
the metadata server stateid cannot be used to provide a global
stateid model.  Thus, the server <bcp14>MUST</bcp14> set fffi_stateid to be
the anonymous stateid.</t>
        <t>This specification of fffi_stateid restricts both models for
NFSv4.x storage protocols:</t>
        <dl>
          <dt>loosely coupled</dt>
          <dd>
            <t>the stateid has to be an anonymous stateid</t>
          </dd>
          <dt>tightly coupled</dt>
          <dd>
            <t>the stateid has to be a global stateid</t>
          </dd>
        </dl>
        <t>In version 1 of this layout type, a number of issues stemmed from a
mismatch between the fact that ffds_stateid was defined as a single
item while ffds_fh_vers was defined as an array: it is possible for
each open file on the storage device to require its own open stateid.
Version 2 addresses these issues by pairing each filehandle with its
own stateid in the ffv2ds_file_info array (see <xref target="ffv2fileinfo4"/>).</t>
        <t>For loosely coupled storage devices, ffv2ds_user and ffv2ds_group
provide the synthetic user and group to be used in the RPC credentials
that the client presents to the storage device to access the data
files.  For tightly coupled storage devices, the user and group on
the storage device will be the same as on the metadata server; that
is, if ffdv_tightly_coupled (see <xref target="sec-ff_device_addr4"/>) is set,
then the client <bcp14>MUST</bcp14> ignore both ffv2ds_user and ffv2ds_group.</t>
        <t>The allowed values for both ffv2ds_user and ffv2ds_group are specified
as owner and owner_group, respectively, in Section 5.9 of <xref target="RFC8881"/>.
For NFSv3 compatibility, user and group strings that consist of
decimal numeric values with no leading zeros can be given a special
interpretation by clients and servers that choose to provide such
support.  The receiver may treat such a user or group string as
representing the same user as would be represented by an NFSv3 uid
or gid having the corresponding numeric value.  Note that if using
Kerberos for security, the expectation is that these values will
be a name@domain string.</t>
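        <t>The special NFSv3-compatibility interpretation described above can
be sketched, non-normatively, in Python; the helper name is
hypothetical, and support for this interpretation is optional:</t>
        <figure>
          <name>Sketch: numeric owner-string interpretation</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: a purely numeric owner/owner_group string with
# no leading zeros may be treated as an NFSv3 uid/gid; anything else
# (including Kerberos name@domain strings) gets normal processing.

def as_nfsv3_id(owner):
    """Return the numeric uid/gid, or None if not eligible."""
    if owner.isdigit() and (owner == "0" or not owner.startswith("0")):
        return int(owner)
    return None
```
]]></sourcecode>
        </figure>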
        <t>ffv2ds_efficiency describes the metadata server's evaluation as to
the effectiveness of each mirror.  Note that this is per layout and
not per device as the metric may change due to perceived load,
availability to the metadata server, etc.  Higher values denote
higher perceived utility.  The way the client can select the best
mirror to access is discussed in <xref target="sec-select-mirror"/>.</t>
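        <t>As a minimal, non-normative illustration of how a client might use
ffv2ds_efficiency, the helper below picks the mirror with the highest
value; the actual selection logic is a client implementation choice:</t>
        <figure>
          <name>Sketch: picking a mirror by efficiency</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: given the per-mirror ffv2ds_efficiency values
# from the current layout, pick the mirror with the highest perceived
# utility. Ties resolve to the lowest index here; a real client might
# also weigh locality or past error history.

def pick_mirror(efficiencies):
    """Return the index of the mirror with the highest efficiency."""
    return max(range(len(efficiencies)), key=lambda i: efficiencies[i])
```
]]></sourcecode>
        </figure>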
        <section anchor="error-codes-from-layoutget">
          <name>Error Codes from LAYOUTGET</name>
          <t><xref target="RFC8881"/> provides little guidance as to how the client is to
proceed with a LAYOUTGET that returns an error of either
NFS4ERR_LAYOUTTRYLATER, NFS4ERR_LAYOUTUNAVAILABLE, and NFS4ERR_DELAY.
Within the context of this document:</t>
          <dl>
            <dt>NFS4ERR_LAYOUTUNAVAILABLE</dt>
            <dd>
              <t>there is no layout available and the I/O is to go to the metadata
server.  Note that it is possible to have had a layout before a
recall and not after.</t>
            </dd>
            <dt>NFS4ERR_LAYOUTTRYLATER</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should continue
with I/O to the storage devices.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should not
continue with I/O to the storage devices.</t>
            </dd>
          </dl>
        </section>
        <section anchor="client-interactions-with-ffflagsnoiothrumds">
          <name>Client Interactions with FF_FLAGS_NO_IO_THRU_MDS</name>
<t>Even if the metadata server provides the FF_FLAGS_NO_IO_THRU_MDS
flag, the client can still perform I/O to the metadata server.  The
flag functions as a hint: it indicates to the client that
the metadata server prefers to separate the metadata I/O from the
data I/O, most likely for performance reasons.</t>
        </section>
      </section>
      <section anchor="layoutcommit-1">
        <name>LAYOUTCOMMIT</name>
        <t>The flexible file layout does not use lou_body inside the
loca_layoutupdate argument to LAYOUTCOMMIT.  If lou_type is
LAYOUT4_FLEX_FILES, the lou_body field <bcp14>MUST</bcp14> have a zero length (see
Section 18.42.1 of <xref target="RFC8881"/>).</t>
      </section>
      <section anchor="interactions-between-devices-and-layouts">
        <name>Interactions between Devices and Layouts</name>
        <t>The file layout type is defined such that the relationship between
multipathing and filehandles can result in either 0, 1, or N
filehandles (see Section 13.3 of <xref target="RFC8881"/>).  Some rationales for
this are clustered servers that share the same filehandle or allow
for multiple read-only copies of the file on the same storage device.
In the flexible file layout type, while there is an array of
filehandles, they are independent of the multipathing being used.
If the metadata server wants to provide multiple read-only copies
of the same file on the same storage device, then it should provide
multiple mirrored instances, each with a different ff_device_addr4.
The client can then determine that, since each of the ffv2ds_fh_vers
values is different, there are multiple copies of the file available
for the current layout segment.</t>
      </section>
      <section anchor="sec-version-errors">
        <name>Handling Version Errors</name>
        <t>When the metadata server provides the ffda_versions array in the
ff_device_addr4 (see <xref target="sec-ff_device_addr4"/>), the client is able
to determine whether or not it can access a storage device with any
of the supplied combinations of ffdv_version, ffdv_minorversion,
and ffdv_tightly_coupled.  However, due to the limitations of
reporting errors in GETDEVICEINFO (see Section 18.40 in <xref target="RFC8881"/>),
the client is not able to specify which specific device it cannot
communicate with over one of the provided ffdv_version and
ffdv_minorversion combinations.  Using ff_ioerr4 (<xref target="sec-ff_ioerr4"/>)
inside either the LAYOUTRETURN (see Section 18.44 of <xref target="RFC8881"/>)
or the LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/> and <xref target="sec-LAYOUTERROR"/>
of this document), the client can isolate the problematic storage
device.</t>
<t>The error code to return for LAYOUTRETURN and/or LAYOUTERROR is
NFS4ERR_MINOR_VERS_MISMATCH.  Whether the mismatch is in the major
version (e.g., the client can use NFSv3 but not NFSv4) or the minor
version (e.g., the client can use NFSv4.1 but not NFSv4.2), the
error indicates that, for all the supplied combinations of ffdv_version
and ffdv_minorversion, the client cannot communicate with the storage
device.  The client can retry the GETDEVICEINFO to see if the
metadata server can provide a different combination, or it can fall
back to doing the I/O through the metadata server.</t>
      </section>
    </section>
    <section anchor="sec-striping">
      <name>Striping via Sparse Mapping</name>
      <t>While other layout types support both dense and sparse mapping of
logical offsets to physical offsets within a file (see, for example,
Section 13.4 of <xref target="RFC8881"/>), the flexible file layout type only
supports a sparse mapping.</t>
      <t>With sparse mappings, the logical offset within a file (L) is also
the physical offset on the storage device.  As detailed in Section
13.4.4 of <xref target="RFC8881"/>, this results in holes across each storage
device that does not contain the current stripe index.</t>
      <figure anchor="fig-striping">
        <name>Stripe Mapping Math</name>
        <artwork><![CDATA[
L: logical offset within the file

W: stripe width
    W = number of elements in ffm_data_servers

S: number of bytes in a stripe
    S = W * ffl_stripe_unit

N: stripe number
    N = L / S
]]></artwork>
      </figure>
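<t>As a non-normative illustration, the mapping above can be sketched
in Python (the function name and return shape are inventions of this
sketch, not protocol elements):</t>
<figure>
  <name>Sparse Mapping Sketch (Non-Normative)</name>
  <artwork><![CDATA[
def map_offset(L, stripe_unit, W):
    # Map a logical file offset L to (stripe number, index into
    # ffm_data_servers, physical offset), where W is the number
    # of elements in ffm_data_servers.
    S = W * stripe_unit           # bytes in a full stripe
    N = L // S                    # stripe number
    idx = (L % S) // stripe_unit  # which storage device
    # Sparse mapping: the physical offset on the storage device
    # is the logical offset within the file.
    return (N, idx, L)
]]></artwork>
</figure>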
    </section>
    <section anchor="recovering-from-client-io-errors">
      <name>Recovering from Client I/O Errors</name>
      <t>The pNFS client may encounter errors when directly accessing the
storage devices.  However, it is the responsibility of the metadata
server to recover from the I/O errors.  When the LAYOUT4_FLEX_FILES
layout type is used, the client <bcp14>MUST</bcp14> report the I/O errors to the
server at LAYOUTRETURN time using the ff_ioerr4 structure (see
<xref target="sec-ff_ioerr4"/>).</t>
      <t>The metadata server analyzes the error and determines the required
recovery operations such as recovering media failures or reconstructing
missing data files.</t>
      <t>The metadata server <bcp14>MUST</bcp14> recall any outstanding layouts to allow
it exclusive write access to the stripes being recovered and to
prevent other clients from hitting the same error condition.  In
these cases, the server <bcp14>MUST</bcp14> complete recovery before handing out
any new layouts to the affected byte ranges.</t>
      <t>Although the client implementation has the option to propagate a
corresponding error to the application that initiated the I/O
operation and drop any unwritten data, the client should attempt
to retry the original I/O operation by either requesting a new
layout or sending the I/O via regular NFSv4.1+ READ or WRITE
operations to the metadata server.  The client <bcp14>SHOULD</bcp14> attempt to
retrieve a new layout and retry the I/O operation using the storage
device first and only retry the I/O operation via the metadata
server if the error persists.</t>
    </section>
    <section anchor="client-side-protection-modes">
      <name>Client-Side Protection Modes</name>
      <section anchor="sec-CSM">
        <name>Client-Side Mirroring</name>
        <t>The flexible file layout type has a simple model in place for the
mirroring of the file data constrained by a layout segment.  There
is no assumption that each copy of the mirror is stored identically
on the storage devices.  For example, one device might employ
compression or deduplication on the data.  However, the over-the-wire
transfer of the file contents <bcp14>MUST</bcp14> appear identical.  Note, this
is a constraint of the selected XDR representation in which each
mirrored copy of the layout segment has the same striping pattern
(see <xref target="fig-parallel-fileystem"/>).</t>
        <t>The metadata server is responsible for determining the number of
mirrored copies and the location of each mirror.  While the client
may provide a hint as to how many copies it wants (see <xref target="sec-layouthint"/>),
the metadata server can ignore that hint; in any event, the client
has no means to dictate either the storage device (which also means
the coupling and/or protocol levels to access the layout segments)
or the location of said storage device.</t>
        <t>The updating of mirrored layout segments is done via client-side
mirroring.  With this approach, the client is responsible for making
sure modifications are made on all copies of the layout segments
it is informed of via the layout.  If a layout segment is being
resilvered to a storage device, that mirrored copy will not be in
the layout.  Thus, the metadata server <bcp14>MUST</bcp14> update that copy until
the client is presented it in a layout.  If the FF_FLAGS_WRITE_ONE_MIRROR
is set in ffl_flags, the client need only update one of the mirrors
(see <xref target="sec-write-mirrors"/>).  If the client is writing to the layout
segments via the metadata server, then the metadata server <bcp14>MUST</bcp14>
update all copies of the mirror.  As seen in <xref target="sec-mds-resilvering"/>,
during the resilvering, the layout is recalled, and the client has
to make modifications via the metadata server.</t>
        <section anchor="sec-select-mirror">
          <name>Selecting a Mirror</name>
          <t>When the metadata server grants a layout to a client, it <bcp14>MAY</bcp14> let
the client know how fast it expects each mirror to be once the
request arrives at the storage devices via the ffv2ds_efficiency
member.  While the algorithms to calculate that value are left to
the metadata server implementations, factors that could contribute
to that calculation include speed of the storage device, physical
memory available to the device, operating system version, current
load, etc.</t>
          <t>However, what should not be involved in that calculation is a
perceived network distance between the client and the storage device.
The client is better situated for making that determination based
on past interaction with the storage device over the different
available network interfaces between the two; that is, the metadata
server might not know about a transient outage between the client
and storage device because it has no presence on the given subnet.</t>
          <t>As such, it is the client that decides which mirror to access for
reading the file.  The requirements for writing to mirrored layout
segments are presented below.</t>
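<t>One possible client policy, shown here purely as a sketch, is to
scale the server's ffv2ds_efficiency hint by the client's own measured
round-trip time to each mirror; none of the names below are defined
by this protocol:</t>
<figure>
  <name>Mirror Selection Sketch (Non-Normative)</name>
  <artwork><![CDATA[
def pick_read_mirror(mirrors):
    # mirrors: list of (efficiency, rtt_ms) pairs, where
    # efficiency is the server's ffv2ds_efficiency hint and
    # rtt_ms is the client's own measurement; an rtt_ms of
    # None marks a mirror the client currently cannot reach.
    # Returns the index of the mirror to read, or None.
    best, best_score = None, None
    for i, (eff, rtt) in enumerate(mirrors):
        if rtt is None:
            continue              # client-observed outage
        score = eff / rtt         # higher hint, lower RTT win
        if best_score is None or score > best_score:
            best, best_score = i, score
    return best
]]></artwork>
</figure>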
        </section>
        <section anchor="sec-write-mirrors">
          <name>Writing to Mirrors</name>
          <section anchor="single-storage-device-updates-mirrors">
            <name>Single Storage Device Updates Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is set, the
client only needs to update one of the copies of the layout segment.
For this case, the storage device <bcp14>MUST</bcp14> ensure that all copies of
the mirror are updated when any one of the mirrors is updated.  If
the storage device gets an error when updating one of the mirrors,
then it <bcp14>MUST</bcp14> inform the client that the original WRITE had an error.
The client then <bcp14>MUST</bcp14> inform the metadata server (see <xref target="sec-write-errors"/>).
The client's responsibility with respect to COMMIT is explained in
<xref target="sec-write-commits"/>.  The client may choose any one of the mirrors
and may use ffv2ds_efficiency as described in <xref target="sec-select-mirror"/>
when making this choice.</t>
          </section>
          <section anchor="client-updates-all-mirrors">
            <name>Client Updates All Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is not set, the
client is responsible for updating all mirrored copies of the layout
segments that it is given in the layout.  A single failed update
is sufficient to fail the entire operation.  If all but one copy
is updated successfully and the last one provides an error, then
the client needs to inform the metadata server about the error.
The client can use either LAYOUTRETURN or LAYOUTERROR to inform the
metadata server that the update failed to that storage device.  If
the client is updating the mirrors serially, then it <bcp14>SHOULD</bcp14> stop
at the first error encountered and report that to the metadata
server.  If the client is updating the mirrors in parallel, then
it <bcp14>SHOULD</bcp14> wait until all storage devices respond so that it can
report all errors encountered during the update.</t>
          </section>
          <section anchor="sec-write-errors">
            <name>Handling Write Errors</name>
            <t>When the client reports a write error to the metadata server, the
metadata server is responsible for determining if it wants to remove
the errant mirror from the layout, if the mirror has recovered from
some transient error, etc.  When the client tries to get a new
layout, the metadata server informs it of the decision by the
contents of the layout.  The client <bcp14>MUST NOT</bcp14> assume that the contents
of the previous layout will match those of the new one.  If it has
updates that were not committed to all mirrors, then it <bcp14>MUST</bcp14> resend
those updates to all mirrors.</t>
            <t>There is no provision in the protocol for the metadata server to
directly determine that the client has or has not recovered from
an error.  For example, if a storage device was network partitioned
from the client and the client reported the error to the metadata
server, then the network partition would be repaired, and all of
the copies would be successfully updated.  There is no mechanism
for the client to report that fact, and the metadata server is
forced to repair the file across the mirror.</t>
            <t>If the client supports NFSv4.2, it can use LAYOUTERROR and LAYOUTRETURN
to provide hints to the metadata server about the recovery efforts.
A LAYOUTERROR on a file is for a non-fatal error.  A subsequent
LAYOUTRETURN without a ff_ioerr4 indicates that the client successfully
replayed the I/O to all mirrors.  Any LAYOUTRETURN with a ff_ioerr4
is an error that the metadata server needs to repair.  The client
<bcp14>MUST</bcp14> be prepared for the LAYOUTERROR to trigger a CB_LAYOUTRECALL
if the metadata server determines it needs to start repairing the
file.</t>
          </section>
          <section anchor="sec-write-commits">
            <name>Handling Write COMMITs</name>
            <t>When stable writes are done to the metadata server or to a single
replica (if allowed by the use of FF_FLAGS_WRITE_ONE_MIRROR), it
is the responsibility of the receiving node to propagate the written
data stably, before replying to the client.</t>
            <t>In the corresponding cases in which unstable writes are done, the
receiving node does not have any such obligation, although it may
choose to asynchronously propagate the updates.  However, once a
COMMIT is replied to, all replicas must reflect the writes that
have been done, and this data must have been committed to stable
storage on all replicas.</t>
            <t>In order to avoid situations in which stale data is read from
replicas to which writes have not been propagated:</t>
            <ul spacing="normal">
              <li>
<t>A client that has outstanding unstable writes made to a single
node (metadata server or storage device) <bcp14>MUST</bcp14> do all reads from
that same node.</t>
              </li>
              <li>
                <t>When writes are flushed to the server (for example, to implement
close-to-open semantics), a COMMIT must be done by the client
to ensure that up-to-date written data will be available
irrespective of the particular replica read.</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="sec-mds-resilvering">
          <name>Metadata Server Resilvering of the File</name>
          <t>The metadata server may elect to create a new mirror of the layout
segments at any time.  This might be to resilver a copy on a storage
device that was down for servicing, to provide a copy of the layout
segments on storage with different storage performance characteristics,
etc.  As the client will not be aware of the new mirror and the
metadata server will not be aware of updates that the client is
making to the layout segments, the metadata server <bcp14>MUST</bcp14> recall the
writable layout segment(s) that it is resilvering.  If the client
issues a LAYOUTGET for a writable layout segment that is in the
process of being resilvered, then the metadata server can deny that
request with an NFS4ERR_LAYOUTUNAVAILABLE.  The client would then
have to perform the I/O through the metadata server.</t>
        </section>
      </section>
      <section anchor="erasure-coding">
        <name>Erasure Coding</name>
        <t>Erasure Coding takes a data block and transforms it to a payload
to send to the data servers (see <xref target="fig-encoding-data-block"/>).  It
generates a metadata header and transformed block per data server.
The header is metadata information for the transformed block.  From
now on, the metadata is simply referred to as the header and the
transformed block as the chunk.  The payload of a data block is the
set of generated headers and chunks for that data block.</t>
<t>The guard is a unique identifier generated by the client to describe
the current write transaction (see <xref target="sec-chunk_guard4"/>).  The
intent is to have a unique and non-opaque value for comparison.
The payload_id describes the position within the payload.  Finally,
the crc32 is the 32-bit CRC calculation over the header (with the
crc32 field being 0) and the chunk.  By combining the two parts of
the payload, integrity is ensured for both parts.</t>
<t>While the data block might have a length of 4kB, that does not
necessarily mean that the length of the chunk is 4kB.  That length
is determined by the erasure coding type algorithm.  For example,
Reed-Solomon might have 4kB chunks, with the data protection being
provided by parity chunks.  Another example would be the Mojette
Transform, which might have 1kB chunk lengths.</t>
        <t>The payload contains redundancy which will allow the erasure coding
type algorithm to repair chunks in the payload as it is transformed
back to a data block (see <xref target="fig-decoding-db"/>).  A payload is
consistent when all of the contained headers share the same guard.
It has integrity when it is consistent and the combinations of
headers and chunks all pass the crc32 checks.</t>
<t>The erasure coding algorithm itself might not be sufficient to detect
errors in the chunks.  The crc32 checks allow the data server
to detect chunks with issues, and then the erasure decoding algorithm
can reconstruct the missing chunk.</t>
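<t>The consistency and integrity checks above can be sketched as
follows (non-normative; the dictionary layout and byte packing are
inventions of the sketch, not the on-wire XDR encoding):</t>
<figure>
  <name>Payload Consistency and Integrity Sketch (Non-Normative)</name>
  <artwork><![CDATA[
import zlib

def payload_is_consistent(headers):
    # A payload is consistent when every header carries the
    # same guard (gen_id, client_id).
    guards = {(h["gen_id"], h["client_id"]) for h in headers}
    return len(guards) == 1

def chunk_has_integrity(header, chunk):
    # Recompute the CRC32 over the header, with its crc32
    # field taken as 0, followed by the chunk, and compare
    # with the stored value.
    packed = b"%d:%d:%d:%d" % (header["gen_id"],
                               header["client_id"],
                               header["payload_id"], 0)
    return (zlib.crc32(packed + chunk) & 0xffffffff
            == header["crc32"])
]]></artwork>
</figure>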
        <section anchor="encoding-a-data-block">
          <name>Encoding a Data Block</name>
          <figure anchor="fig-encoding-data-block">
            <name>Encoding a Data Block</name>
            <artwork><![CDATA[
                 +-------------+
                 | data block  |
                 +-------+-----+
                         |
                         |
   +---------------------+-------------------------------+
   |            Erasure Encoding (Transform Forward)     |
   +---+----------------------+---------------------+----+
       |                      |                     |
       |                      |                     |
   +---+------------+     +---+------------+     +--+-------------+
   | HEADER         | ... | HEADER         | ... | HEADER         |
   +----------------+     +----------------+     +----------------+
   | guard:         | ... | guard:         | ... | guard:         |
   |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
   |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
   | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
   | crc32   :      | ... | crc32   :      | ... | crc32   :      |
   +----------------+     +----------------+     +----------------+
   | CHUNK          | ... | CHUNK          | ... | CHUNK          |
   +----------------+     +----------------+     +----------------+
   | data: ....     | ... | data: ....     | ... | data: ....     |
   +----------------+     +----------------+     +----------------+
     Data Server 1          Data Server N          Data Server 6
]]></artwork>
          </figure>
<t>Each data block of the file resident in the client's cache will
be encoded into N different payloads to be sent to the
data servers as shown in <xref target="fig-encoding-data-block"/>.  As CHUNK_WRITE
(see <xref target="sec-CHUNK_WRITE"/>) can encode multiple write_chunk4 structures into a
single transaction, a more accurate description of a CHUNK_WRITE
is given in <xref target="fig-example-chunk-write-args"/>.</t>
          <figure anchor="fig-example-chunk-write-args">
            <name>Example of CHUNK_WRITE_args</name>
            <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 3     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x32ef89             |
  |         [1]:  0x56fa89             |
  |         [2]:  0x7693af             |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
          </figure>
<t>This describes a 3-block write of data from an offset of 1 block
in the file.  As each block shares the cwa_owner, it is only presented
once.  That is, the data server is able to construct the header
for the i'th chunk from the cwa_payload_id, the
cwa_owner, and the i'th crc32 from cwa_crc32s.  The cwa_chunks
are sent together as a byte stream to increase performance.</t>
          <t>Assuming that there were no issues, <xref target="fig-example-chunk-write-res"/>
illustrates the results.  The payload sequence id is implicit in
the CHUNK_WRITEargs.</t>
          <figure anchor="fig-example-chunk-write-res">
            <name>Example of CHUNK_WRITE_res</name>
            <artwork><![CDATA[
  +-------------------------------+
  | CHUNK_WRITEresok              |
  +-------------------------------+
  | cwr_count: 3                  |
  | cwr_committed: FILE_SYNC4     |
  | cwr_writeverf: 0xf1234abc     |
  | cwr_owners[0]:                |
  |        co_chunk_id: 1         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[1]:                |
  |        co_chunk_id: 2         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[2]:                |
  |        co_chunk_id: 3         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  +-------------------------------+
]]></artwork>
          </figure>
          <section anchor="calculating-the-crc32">
            <name>Calculating the CRC32</name>
            <figure anchor="fig-calc-before">
              <name>CRC32 Before Calculation</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id : 0     |
  | crc32   : 0        |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
        Data Server 1
]]></artwork>
            </figure>
<t>Assuming the header and payload as in <xref target="fig-calc-before"/>, the crc32
needs to be calculated in order to fill in the crc32 field.  In
this case, the crc32 is calculated over the 4 fields shown in
the header and over the chunk.  In this example, it is calculated to
be 0x21de8.  The resulting CHUNK_WRITE is shown in <xref target="fig-calc-crc-after"/>.</t>
            <figure anchor="fig-calc-crc-after">
              <name>CRC32 After Calculation</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x21de8              |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="decoding-a-data-block">
          <name>Decoding a Data Block</name>
          <figure anchor="fig-decoding-db">
            <name>Decoding a Data Block</name>
            <artwork><![CDATA[
    Data Server 1          Data Server N          Data Server 6
  +----------------+     +----------------+     +----------------+
  | HEADER         | ... | HEADER         | ... | HEADER         |
  +----------------+     +----------------+     +----------------+
  | guard:         | ... | guard:         | ... | guard:         |
  |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
  |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
  | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
  | crc32   :      | ... | crc32   :      | ... | crc32   :      |
  +----------------+     +----------------+     +----------------+
  | CHUNK          | ... | CHUNK          | ... | CHUNK          |
  +----------------+     +----------------+     +----------------+
  | data: ....     | ... | data: ....     | ... | data: ....     |
  +---+------------+     +--+-------------+     +-+--------------+
      |                     |                     |
      |                     |                     |
  +---+---------------------+---------------------+-----+
  |            Erasure Decoding (Transform Reverse)     |
  +---------------------+-------------------------------+
                        |
                        |
                +-------+-----+
                | data block  |
                +-------------+
]]></artwork>
          </figure>
          <t>When reading chunks via a CHUNK_READ operation, the client will
decode them into data blocks as shown in <xref target="fig-decoding-db"/>.</t>
          <t>At this time, the client could detect issues in the integrity of
the data.  The handling and repair are out of the scope of this
document and <bcp14>MUST</bcp14> be addressed in the document describing each
erasure coding type.</t>
          <section anchor="checking-the-crc32">
            <name>Checking the CRC32</name>
            <figure anchor="fig-example-chunk-read-crc">
              <name>CRC32 on the Wire</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_READresok                    |
  +------------------------------------+
  | crr_eof: false                     |
  | crr_chunks[0]:                     |
  |        cr_crc: 0x21de8             |
  |        cr_owner:                   |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  |        cr_chunk  :  ......         |
  +------------------------------------+
]]></artwork>
            </figure>
            <t>Assuming the CHUNK_READ results as in <xref target="fig-example-chunk-read-crc"/>,
the crc32 needs to be checked in order to ensure data integrity.
Conceptually, a header and payload can be built as shown in
<xref target="fig-example-crc-checked"/>.  The crc32 is calculated over the 4
fields as shown in the header and the cr_chunk.  In this example,
it is calculated to be 0x21de8.  Thus this payload for the data
server has data integrity.</t>
            <figure anchor="fig-example-crc-checked">
              <name>CRC32 Being Checked</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id  : 0    |
  | crc32    : 0       |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
       Data Server 1
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="write-modes">
          <name>Write Modes</name>
<t>There are two basic writing modes for erasure coding, and they depend
on the metadata server using FFV2_FLAGS_ONLY_ONE_WRITER in the
ffl_flags in the ffv2_layout4 (see <xref target="fig-ffv2_layout4"/>) to inform
the client whether or not it is the only writer to the file.  If
it is the only writer, then CHUNK_WRITE with the cwa_guard not set
can be used to write chunks.  In this scenario, there is no write
contention, but write holes can occur as the client overwrites old
data.  Thus the client does not need guarded writes, but it does
need the ability to roll back writes.  If it is not the only writer,
then CHUNK_WRITE with the cwa_guard set <bcp14>MUST</bcp14> be used to write chunks.
In this scenario, write holes can also be caused by multiple
clients writing to the same chunk.  Thus the client needs guarded
writes to prevent overwrites, and it also needs the ability to
roll back writes.</t>
<t>In both modes, clients <bcp14>MUST NOT</bcp14> overwrite payloads that are already
inconsistent.  This directly follows from <xref target="sec-reading-chunks"/>
and <bcp14>MUST</bcp14> be handled as discussed there.  Once consistency of the
payload has been verified, the client can use those chunks as a
basis for read/modify/update.</t>
          <t>CHUNK_WRITE is a two pass operation in cooperation with CHUNK_FINALIZE
(<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).
It writes to the data file and the data server is responsible for
retaining a copy of the old header and chunk. A subsequent CHUNK_READ
would return the new chunk. However, until either the CHUNK_FINALIZE
or CHUNK_ROLLBACK is presented, a subsequent CHUNK_WRITE <bcp14>MUST</bcp14> result
in the locking of the chunk, as if a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>)
had been performed on the chunk. As such, further CHUNK_WRITEs by
any client <bcp14>MUST</bcp14> be denied until the chunk is unlocked by CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
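<t>As a non-normative sketch, the data server's per-chunk bookkeeping
under this two-pass scheme might look as follows (the class and method
names are inventions of the sketch; here CHUNK_ROLLBACK restores the
retained chunk but leaves the chunk locked until CHUNK_UNLOCK, per the
text above):</t>
<figure>
  <name>Two-Pass CHUNK_WRITE Sketch (Non-Normative)</name>
  <artwork><![CDATA[
class ChunkState:
    # Data-server-side state for one chunk under CHUNK_WRITE /
    # CHUNK_FINALIZE / CHUNK_ROLLBACK / CHUNK_UNLOCK.
    def __init__(self, data=b""):
        self.data = data
        self.saved = None      # retained copy of the old chunk
        self.locked = False

    def chunk_write(self, new_data):
        if self.locked:
            # A prior write has not been finalized or rolled
            # back: deny further writes by any client.
            return "NFS4ERR_CHUNK_LOCKED"
        self.saved, self.data = self.data, new_data
        self.locked = True     # as if CHUNK_LOCK were performed
        return "NFS4_OK"

    def chunk_finalize(self):
        # The new chunk is consistent; drop the old copy and
        # allow the chunk to be overwritten again.
        self.saved, self.locked = None, False

    def chunk_rollback(self):
        # Restore the retained copy; the lock remains until a
        # CHUNK_UNLOCK arrives.
        self.data, self.saved = self.saved, None

    def chunk_unlock(self):
        self.locked = False
]]></artwork>
</figure>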
          <t>If the CHUNK_WRITE results in a consistent data block, then the
client will send a CHUNK_FINALIZE in a subsequent compound to inform
the data server that the chunk is consistent and can be overwritten
by another CHUNK_WRITE.</t>
          <t>If the CHUNK_WRITE results in an inconsistent data block or if the
data server returned NFS4ERR_CHUNK_LOCKED, then the client sends a
LAYOUTERROR to the metadata server with a code of
NFS4ERR_PAYLOAD_NOT_CONSISTENT. The metadata server then selects a
client (or data server) to repair the data block.</t>
          <t><cref source="Tom">Since we don't have all potential chunks available,
it can either choose the winner or pick a random client/data server.
If the client is the winner, then the process is to use CHUNK_WRITE_REPAIR
to overwrite the chunks which are not consistent. If it is a random
client, then the client should just CHUNK_ROLLBACK and CHUNK_UNLOCK
until it gets back to the original chunk.</cref></t>
          <section anchor="single-writer-mode">
            <name>Single Writer Mode</name>
          </section>
          <section anchor="repairing-single-writer-payloads">
            <name>Repairing Single Writer Payloads</name>
          </section>
          <section anchor="mutliple-writer-mode">
<name>Multiple Writer Mode</name>
          </section>
          <section anchor="repairing-multiple-writer-payloads">
            <name>Repairing Multiple Writer Payloads</name>
          </section>
        </section>
        <section anchor="sec-reading-chunks">
          <name>Reading Chunks</name>
          <t>The client reads chunks from the data file via CHUNK_READ.  The
number of chunks in the payload that need to be consistent depends
on both the Erasure Coding Type and the level of protection selected.
If the client has enough consistent chunks in the payload, then it
can proceed to use them to build a data block.  If it does not have
enough consistent chunks, then it can either return a LAYOUTERROR of
NFS4ERR_PAYLOAD_NOT_CONSISTENT to the metadata server or retry the
CHUNK_READ until there are enough consistent chunks in the payload.</t>
          <t>As another client might be writing to the chunks as they are being
read, it is entirely possible to read the chunks while they are not
consistent.  Indeed, the inconsistent chunks might be the very ones
containing the new data, in which case a better action than building
the data block is to retry the CHUNK_READ to see if the remaining
chunks have been overwritten with the new data.</t>
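<t>The consistency threshold above can be sketched as follows.  This is
an illustrative, non-normative sketch; the function names and the
treatment of erasure-coded payloads as a k-of-n decode are assumptions,
not definitions from this document.</t>

```python
# Illustrative sketch (not normative): decide whether a payload read via
# CHUNK_READ has enough consistent chunks to rebuild one data block.
# The coding-type handling below is a hypothetical example.

def chunks_needed(coding_type: str, data_chunks: int) -> int:
    """Minimum number of consistent chunks required to decode a block."""
    if coding_type == "FFV2_CODING_MIRRORED":
        return 1          # any single mirror copy suffices
    # For an MDS-style erasure code, any data_chunks of the n chunks decode.
    return data_chunks

def can_build_block(coding_type: str, data_chunks: int,
                    consistent_chunks: int) -> bool:
    """True if the client can proceed to build the data block."""
    return consistent_chunks >= chunks_needed(coding_type, data_chunks)
```

Under these assumptions, a mirrored payload decodes from a single
consistent chunk, while an 8+2 erasure-coded payload needs any 8 of
its 10 chunks to be consistent.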
        </section>
        <section anchor="whole-file-repair">
          <name>Whole File Repair</name>
          <t><cref source="Tom"> Describe how a repair client can be assigned
with missing FFV2_DS_FLAGS_ACTIVE data servers and a number of
FFV2_DS_FLAGS_REPAIR data servers.  Then the client will either
move chunks from FFV2_DS_FLAGS_SPARE data servers to the
FFV2_DS_FLAGS_REPAIR data servers or reconstruct the chunks for the
FFV2_DS_FLAGS_REPAIR based on the decoded data blocks. The client
indicates success by returning the layout.  </cref></t>
          <t><cref source="Tom"> For a slam dunk, introduce the concept of a
proxy repair client.  I.e., the client appears as a single
FFV2_CODING_MIRRORED file to other clients.  As it receives WRITEs,
it encodes them to the real set of data servers.  As it receives
READs, it decodes them from the real set of data servers.  Once the
proxy repair is finished, the metadata server will start pushing
out layouts for the real set of data servers.  </cref></t>
        </section>
      </section>
      <section anchor="mixing-of-coding-types">
        <name>Mixing of Coding Types</name>
        <t>Multiple coding types can be present in a Flexible File Version 2
Layout Type layout.  The ffv2_layout4 has an array of ffv2_mirror4,
each of which has a ffv2_coding_type4.  The main reason to allow
for this is to provide for either the assimilation of a non-erasure
coded file to an erasure coded file or the exporting of an erasure
coded file to a non-erasure coded file.</t>
        <t>Assume there is an additional ffv2_coding_type4 of FFV2_CODING_REED_SOLOMON
and it needs 8 active chunks.  The user wants to actively assimilate
a regular file.  As such, a layout might be as represented in
<xref target="fig-example_mixing"/>.  As this is an assimilation, most of the
data reads will be satisfied by READ (see Section 18.22 of <xref target="RFC8881"/>)
calls to index 0.  However, as this is also an active file, there
could also be CHUNK_READ (see <xref target="sec-CHUNK_READ"/>) calls to the other
indexes.</t>
        <figure anchor="fig-example_mixing">
          <name>Example of Mixed Coding Types in a Layout</name>
          <artwork><![CDATA[
 +-----------------------------------------------------+
 | ffv2_layout4:                                       |
 +-----------------------------------------------------+
 |     ffl_mirrors[0]:                                 |
 |         ffm_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: 0                     |
 |         ffm_coding: FFV2_CODING_MIRRORED            |
 +-----------------------------------------------------+
 |     ffl_mirrors[1]:                                 |
 |         ffm_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[1]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[2]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[3]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[4]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[5]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[6]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |             ffv2_data_server4[7]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |     ffm_coding: FFV2_CODING_REED_SOLOMON            |
 +-----------------------------------------------------+
]]></artwork>
        </figure>
        <t>When performing I/O via the FFV2_CODING_MIRRORED coding type, the
non-transformed data will be used, whereas with other coding types,
a metadata header and transformed block will be sent.  Further,
when reading data from the instance files, the client <bcp14>MUST</bcp14> be
prepared for one of the coding types to supply data and the other
type not to supply data.  I.e., the CHUNK_READ call to the data
servers in mirror 1 might return rlr_eof set to true (see
<xref target="fig-read_chunk4"/>), which indicates that there is no data, while
the READ call to the data server in mirror 0 might return eof set to
false, which indicates that there is data.  The client <bcp14>MUST</bcp14>
determine whether there is in fact data.  An example use case is the
active assimilation of a file to ensure integrity.  As the client
is helping to translate the file to the new coding scheme, it is
actively modifying the file.  As such, it might be sequentially
reading the file in order to translate it.  The READ calls to mirror
0 would return data and the CHUNK_READ calls to mirror 1 would
not.  As the client overwrites the file, the WRITE
and CHUNK_WRITE calls would send data to all of the
data servers.  Finally, if the client reads back a section which
had been modified earlier, both the READ and CHUNK_READ calls would
return data.</t>
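<t>The EOF disambiguation rule above can be sketched in one line.  This
is an illustrative sketch; the function name is hypothetical, and it
simply encodes the requirement that a byte range holds data whenever
any coding type supplies it.</t>

```python
# Illustrative sketch: with mixed coding types, a range is treated as
# holding data if either the READ path (mirror 0) or the CHUNK_READ
# path (mirror 1) returns data; EOF from one path alone is not
# sufficient to conclude the range is empty.
def range_has_data(read_eof: bool, chunk_read_eof: bool) -> bool:
    return not (read_eof and chunk_read_eof)
```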
      </section>
      <section anchor="handling-write-holes">
        <name>Handling Write Holes</name>
      </section>
    </section>
    <section anchor="nfsv42-operations-allowed-to-data-files">
      <name>NFSv4.2 Operations Allowed to Data Files</name>
      <t><cref source="Tom"> In Flexible File Version 1 Layout Type, the
emphasis was on NFSv3 DSes.  We limited the operations that clients
could send to data files to be COMMIT, READ, and WRITE.  We further
limited the MDS to GETATTR, SETATTR, CREATE, and REMOVE.  (Funny
enough, this is not mandated here.)  We need to call this out in
this draft and also we need to limit the NFSv4.2 operations.  Besides
the ones created here, consider: READ, WRITE, and COMMIT for mirroring
types and ALLOCATE, CLONE, COPY, DEALLOCATE, GETFH, PUTFH, READ_PLUS,
RESTOREFH, SAVEFH, SEEK, and SEQUENCE for all types.  </cref></t>
      <t><cref source="Tom"> Of special merit is SETATTR.  Do we want to
allow the clients to be able to truncate the data files?  Which
also brings up DEALLOCATE.  Perhaps we want CHUNK_DEALLOCATE?  That
way we can swap out chunks with the client file.  CHUNK_DEALLOCATE_GUARD.
Really need to determine capabilities of XFS swap!  </cref></t>
    </section>
    <section anchor="sec-layouthint">
      <name>Flexible File Layout Type Return</name>
      <t>layoutreturn_file4 is used in the LAYOUTRETURN operation to convey
layout-type-specific information to the server.  It is defined in
Section 18.44.1 of <xref target="RFC8881"/> (also shown in <xref target="fig-LAYOUTRETURN"/>).</t>
      <figure anchor="fig-LAYOUTRETURN">
        <name>Layout Return XDR</name>
        <sourcecode type="xdr"><![CDATA[
      /* Constants used for LAYOUTRETURN and CB_LAYOUTRECALL */
      const LAYOUT4_RET_REC_FILE      = 1;
      const LAYOUT4_RET_REC_FSID      = 2;
      const LAYOUT4_RET_REC_ALL       = 3;

      enum layoutreturn_type4 {
              LAYOUTRETURN4_FILE = LAYOUT4_RET_REC_FILE,
              LAYOUTRETURN4_FSID = LAYOUT4_RET_REC_FSID,
              LAYOUTRETURN4_ALL  = LAYOUT4_RET_REC_ALL
      };

   struct layoutreturn_file4 {
           offset4         lrf_offset;
           length4         lrf_length;
           stateid4        lrf_stateid;
           /* layouttype4 specific data */
           opaque          lrf_body<>;
   };

   union layoutreturn4 switch(layoutreturn_type4 lr_returntype) {
           case LAYOUTRETURN4_FILE:
                   layoutreturn_file4      lr_layout;
           default:
                   void;
   };

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           bool                    lora_reclaim;
           layouttype4             lora_layout_type;
           layoutiomode4           lora_iomode;
           layoutreturn4           lora_layoutreturn;
   };
]]></sourcecode>
      </figure>
      <t>If the lora_layout_type layout type is LAYOUT4_FLEX_FILES and the
lr_returntype is LAYOUTRETURN4_FILE, then the lrf_body opaque value
is defined by ff_layoutreturn4 (see <xref target="sec-ff_layoutreturn4"/>).  This
allows the client to report I/O error information or layout usage
statistics back to the metadata server as defined below.  Note that
while the data structures are built on concepts introduced in
NFSv4.2, the effective discriminated union (lora_layout_type combined
with ff_layoutreturn4) allows for an NFSv4.1 metadata server to
utilize the data.</t>
      <section anchor="sec-io-error">
        <name>I/O Error Reporting</name>
        <section anchor="sec-ff_ioerr4">
          <name>ff_ioerr4</name>
          <figure anchor="fig-ff_ioerr4">
            <name>ff_ioerr4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ff_ioerr4 {
   ///         offset4        ffie_offset;
   ///         length4        ffie_length;
   ///         stateid4       ffie_stateid;
   ///         device_error4  ffie_errors<>;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Recall that <xref target="RFC7862"/> defines device_error4 as in <xref target="fig-device_error4"/>:</t>
          <figure anchor="fig-device_error4">
            <name>device_error4</name>
            <sourcecode type="xdr"><![CDATA[
   struct device_error4 {
           deviceid4       de_deviceid;
           nfsstat4        de_status;
           nfs_opnum4      de_opnum;
   };
]]></sourcecode>
          </figure>
          <t>The ff_ioerr4 structure is used to return error indications for
data files that generated errors during data transfers.  These are
hints to the metadata server that there are problems with that file.
For each error, ffie_errors.de_deviceid, ffie_offset, and ffie_length
represent the storage device and byte range within the file in which
the error occurred; ffie_errors represents the operation and type
of error.  The use of device_error4 is described in Section 15.6
of <xref target="RFC7862"/>.</t>
          <t>Even though the storage device might be accessed via NFSv3 and
reports back NFSv3 errors to the client, the client is responsible
for mapping these to appropriate NFSv4 status codes as de_status.
Likewise, the NFSv3 operations need to be mapped to equivalent NFSv4
operations.</t>
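<t>A minimal sketch of such an error mapping follows.  It is
illustrative and intentionally partial: only a few well-known NFSv3
status codes (from RFC 1813) are covered, and the fallback to
NFS4ERR_IO is an assumption of this sketch, not a requirement of this
document.</t>

```python
# Illustrative sketch: map a few NFSv3 error names (RFC 1813) to the
# NFSv4 status codes a client might report as de_status.  The table is
# not exhaustive and is not mandated by this document.
NFS3_TO_NFS4 = {
    "NFS3ERR_IO":      "NFS4ERR_IO",
    "NFS3ERR_ACCES":   "NFS4ERR_ACCESS",
    "NFS3ERR_NOSPC":   "NFS4ERR_NOSPC",
    "NFS3ERR_STALE":   "NFS4ERR_STALE",
    "NFS3ERR_JUKEBOX": "NFS4ERR_DELAY",
}

def to_nfs4_status(nfs3_error: str) -> str:
    # Fall back to a generic I/O error when no specific mapping is known
    # (a choice made for this sketch only).
    return NFS3_TO_NFS4.get(nfs3_error, "NFS4ERR_IO")
```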
        </section>
      </section>
      <section anchor="sec-layout-stats">
        <name>Layout Usage Statistics</name>
        <section anchor="ffiolatency4">
          <name>ff_io_latency4</name>
          <figure anchor="fig-ff_io_latency4">
            <name>ff_io_latency4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ff_io_latency4 {
   ///         uint64_t       ffil_ops_requested;
   ///         uint64_t       ffil_bytes_requested;
   ///         uint64_t       ffil_ops_completed;
   ///         uint64_t       ffil_bytes_completed;
   ///         uint64_t       ffil_bytes_not_delivered;
   ///         nfstime4       ffil_total_busy_time;
   ///         nfstime4       ffil_aggregate_completion_time;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Both operation counts and bytes transferred are kept in the
ff_io_latency4 (see <xref target="fig-ff_io_latency4"/>).  As seen in ff_layoutupdate4
(see <xref target="sec-ff_layoutupdate4"/>), READ and WRITE operations are
aggregated separately.  READ operations are used for the ff_io_latency4
ffl_read.  Both WRITE and COMMIT operations are used for the
ff_io_latency4 ffl_write.  "Requested" counters track what the
client is attempting to do, and "completed" counters track what was
done.  There is no requirement that the client only report completed
results that have matching requested results from the reported
period.</t>
          <t>ffil_bytes_not_delivered is used to track the aggregate number of
bytes requested but not fulfilled due to error conditions.
ffil_total_busy_time is the aggregate time spent with outstanding
RPC calls. ffil_aggregate_completion_time is the sum of all round-trip
times for completed RPC calls.</t>
          <t>In Section 3.3.1 of <xref target="RFC8881"/>, nfstime4 is defined as the
number of seconds and nanoseconds since midnight or zero hour January
1, 1970 Coordinated Universal Time (UTC).  The use of nfstime4 in
ff_io_latency4 is to store time since the start of the first I/O
from the client after receiving the layout.  In other words, these
values are to be decoded as durations and not as a date and time.</t>
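<t>Accumulating such durations can be sketched as follows.  This is an
illustrative sketch; the tuple representation of nfstime4 and the
function name are assumptions for the example.</t>

```python
# Illustrative sketch: accumulate round-trip times into an
# nfstime4-style (seconds, nseconds) duration, carrying nanosecond
# overflow into the seconds field, as a client might do when computing
# ffil_aggregate_completion_time.
NSEC_PER_SEC = 1_000_000_000

def add_duration(total, rtt):
    """Add one RTT (sec, nsec) to a running (sec, nsec) total."""
    sec = total[0] + rtt[0]
    nsec = total[1] + rtt[1]
    return (sec + nsec // NSEC_PER_SEC, nsec % NSEC_PER_SEC)
```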
          <t>Note that LAYOUTSTATS are cumulative, i.e., not reset each time the
operation is sent.  If two LAYOUTSTATS operations for the same file
and layout stateid originate from the same NFS client and are
processed at the same time by the metadata server, then the one
containing the larger values contains the most recent time series
data.</t>
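<t>Because the counters are cumulative and only grow, ordering two
simultaneous reports reduces to a comparison.  The sketch below is
illustrative; the dictionary representation is an assumption, and only
ffil_ops_completed is compared for brevity.</t>

```python
# Illustrative sketch: of two cumulative LAYOUTSTATS reports for the
# same file and layout stateid from the same client, the one with the
# larger counters carries the more recent time series data.
def most_recent(report_a: dict, report_b: dict) -> dict:
    if report_a["ffil_ops_completed"] >= report_b["ffil_ops_completed"]:
        return report_a
    return report_b
```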
        </section>
        <section anchor="sec-ff_layoutupdate4">
          <name>ff_layoutupdate4</name>
          <figure anchor="fig-ff_layoutupdate4">
            <name>ff_layoutupdate4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ff_layoutupdate4 {
   ///         netaddr4       ffl_addr;
   ///         nfs_fh4        ffl_fhandle;
   ///         ff_io_latency4 ffl_read;
   ///         ff_io_latency4 ffl_write;
   ///         nfstime4       ffl_duration;
   ///         bool           ffl_local;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>ffl_addr differentiates which network address the client is connected
to on the storage device.  In the case of multipathing, ffl_fhandle
indicates which read-only copy was selected. ffl_read and ffl_write
convey the latencies for READ and WRITE operations, respectively.
ffl_duration is used to indicate the time period over which the
statistics were collected.  If true, ffl_local indicates that the
I/O was serviced by the client's cache.  This flag allows the client
to inform the metadata server about "hot" access to a file it would
not normally be allowed to report on.</t>
        </section>
        <section anchor="ffiostats4">
          <name>ff_iostats4</name>
          <figure anchor="fig-ff_iostats4">
            <name>ff_iostats4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ff_iostats4 {
   ///         offset4           ffis_offset;
   ///         length4           ffis_length;
   ///         stateid4          ffis_stateid;
   ///         io_info4          ffis_read;
   ///         io_info4          ffis_write;
   ///         deviceid4         ffis_deviceid;
   ///         ff_layoutupdate4  ffis_layoutupdate;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t><xref target="RFC7862"/> defines io_info4 as in <xref target="fig-io_info4"/>.</t>
          <figure anchor="fig-io_info4">
            <name>io_info4</name>
            <sourcecode type="xdr"><![CDATA[
   struct io_info4 {
           uint64_t        ii_count;
           uint64_t        ii_bytes;
   };
]]></sourcecode>
          </figure>
          <t>With pNFS, data transfers are performed directly between the pNFS
client and the storage devices.  Therefore, the metadata server has
no direct knowledge of the I/O operations being done and thus cannot
create on its own statistical information about client I/O to
optimize the data storage location.  ff_iostats4 <bcp14>MAY</bcp14> be used by the
client to report I/O statistics back to the metadata server upon
returning the layout.</t>
          <t>Since it is not feasible for the client to report every I/O that
used the layout, the client <bcp14>MAY</bcp14> identify "hot" byte ranges for which
to report I/O statistics.  The definition and/or configuration
mechanism of what is considered "hot" and the size of the reported
byte range are out of the scope of this document.  For client
implementation, providing reasonable default values and an optional
run-time management interface to control these parameters is
suggested.  For example, a client can define the default byte-range
resolution to be 1 MB in size and the thresholds for reporting to
be 1 MB/second or 10 I/O operations per second.</t>
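<t>The example defaults above can be sketched as a simple classifier.
This is an illustrative sketch only; the thresholds come from the
example in the preceding paragraph and the function name is
hypothetical.</t>

```python
# Illustrative sketch of the example defaults: a byte range is reported
# as "hot" when it sees at least 1 MB/second or 10 I/O operations per
# second over the measurement interval.
MB = 1024 * 1024

def is_hot(bytes_transferred: int, io_ops: int, interval_secs: float) -> bool:
    return (bytes_transferred / interval_secs >= 1 * MB
            or io_ops / interval_secs >= 10)
```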
          <t>For each byte range, ffis_offset and ffis_length represent the
starting offset of the range and the range length in bytes.
ffis_read.ii_count, ffis_read.ii_bytes, ffis_write.ii_count, and
ffis_write.ii_bytes represent the number of contiguous READ and
WRITE I/Os and the respective aggregate number of bytes transferred
within the reported byte range.</t>
          <t>The combination of ffis_deviceid and ffl_addr uniquely identifies
both the storage path and the network route to it.  Finally,
ffl_fhandle allows the metadata server to differentiate between
multiple read-only copies of the file on the same storage device.</t>
        </section>
      </section>
      <section anchor="sec-ff_layoutreturn4">
        <name>ff_layoutreturn4</name>
        <figure anchor="fig-ff_layoutreturn4">
          <name>ff_layoutreturn4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ff_layoutreturn4 {
   ///         ff_ioerr4     fflr_ioerr_report<>;
   ///         ff_iostats4   fflr_iostats_report<>;
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>When data file I/O operations fail, fflr_ioerr_report&lt;&gt; is used to
report these errors to the metadata server as an array of elements
of type ff_ioerr4.  Each element in the array represents an error
that occurred on the data file identified by ffie_errors.de_deviceid.
If no errors are to be reported, the size of the fflr_ioerr_report&lt;&gt;
array is set to zero.  The client <bcp14>MAY</bcp14> also use fflr_iostats_report&lt;&gt;
to report a list of I/O statistics as an array of elements of type
ff_iostats4.  Each element in the array represents statistics for
a particular byte range.  Byte ranges are not guaranteed to be
disjoint and <bcp14>MAY</bcp14> repeat or intersect.</t>
      </section>
    </section>
    <section anchor="sec-LAYOUTERROR">
      <name>Flexible File Layout Type LAYOUTERROR</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send error
information to the metadata server (see <xref target="sec-io-error"/>), it <bcp14>MAY</bcp14>
use LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTERROR4args is treated the same as ff_ioerr4.</t>
    </section>
    <section anchor="flexible-file-layout-type-layoutstats">
      <name>Flexible File Layout Type LAYOUTSTATS</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send I/O
statistics to the metadata server (see <xref target="sec-layout-stats"/>), it
<bcp14>MAY</bcp14> use LAYOUTSTATS (see Section 15.7 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTSTATS4args.lsa_layoutupdate is overloaded with the same
contents as in ffis_layoutupdate.</t>
    </section>
    <section anchor="flexible-file-layout-type-creation-hint">
      <name>Flexible File Layout Type Creation Hint</name>
      <t>The layouthint4 type is defined in <xref target="RFC8881"/> as in
<xref target="fig-layouthint4-v1"/>.</t>
      <figure anchor="fig-layouthint4-v1">
        <name>layouthint4 v1</name>
        <sourcecode type="xdr"><![CDATA[
   struct layouthint4 {
       layouttype4        loh_type;
       opaque             loh_body<>;
   };
]]></sourcecode>
      </figure>
      <t>The layouthint4 structure is used by the client to pass a hint about
the type of layout it would like created for a particular file.  If
the loh_type layout type is LAYOUT4_FLEX_FILES, then the loh_body
opaque value is defined by the ff_layouthint4 type.</t>
    </section>
    <section anchor="fflayouthint4">
      <name>ff_layouthint4</name>
      <figure anchor="fig-ff_layouthint4-v2">
        <name>ff_layouthint4 v2</name>
        <sourcecode type="xdr"><![CDATA[
   /// union ffv2_mirrors_hint switch (ffv2_protection_type ffmh_type) {
   ///     case FF2_PROTECTION_TYPE_MOJETTE:
   ///         void;
   ///     case FF2_PROTECTION_TYPE_MIRRORED:
   ///         void;
   /// };
   ///
   /// /*
   ///  * We could have this be simply ffv2_protection_type
   ///  * for the client to state what protection algorithm
   ///  * it wants.
   ///  */
   /// struct ffv2_layouthint4 {
   ///     ffv2_protection_type fflh_supported_types<>;
   ///     ffv2_mirrors_hint fflh_mirrors_hint;
   /// };

   union ff_mirrors_hint switch (bool ffmc_valid) {
       case TRUE:
           uint32_t    ffmc_mirrors;
       case FALSE:
           void;
   };

   struct ff_layouthint4 {
       ff_mirrors_hint    fflh_mirrors_hint;
   };
]]></sourcecode>
      </figure>
      <t>This type conveys hints for the desired data map.  All parameters
are optional so the client can give values for only the parameters
it cares about.</t>
    </section>
    <section anchor="recalling-a-layout">
      <name>Recalling a Layout</name>
      <t>While Section 12.5.5 of <xref target="RFC8881"/> discusses reasons independent
of layout type for recalling a layout, the flexible file layout
type metadata server should recall outstanding layouts in the
following cases:</t>
      <ul spacing="normal">
        <li>
          <t>When the file's security policy changes, i.e., ACLs or permission
mode bits are set.</t>
        </li>
        <li>
          <t>When the file's layout changes, rendering outstanding layouts
invalid.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the need to enforce
locking constraints.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the requirements
regarding resilvering as described in <xref target="sec-mds-resilvering"/>.</t>
        </li>
      </ul>
      <section anchor="cbrecallany">
        <name>CB_RECALL_ANY</name>
        <t>The metadata server can use the CB_RECALL_ANY callback operation
to notify the client to return some or all of its layouts.  Section
22.3 of <xref target="RFC8881"/> defines the allowed types of the "NFSv4 Recallable
Object Types Registry".</t>
        <figure anchor="fig-new-rca4">
          <name>RCA4 masks for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MIN     = 20;
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MAX     = 21;
   ///
]]></sourcecode>
        </figure>
        <figure anchor="fig-CB_RECALL_ANY4args">
          <name>CB_RECALL_ANY4args XDR</name>
          <sourcecode type="xdr"><![CDATA[
   struct  CB_RECALL_ANY4args      {
       uint32_t        craa_layouts_to_keep;
       bitmap4         craa_type_mask;
   };
]]></sourcecode>
        </figure>
        <t>Typically, CB_RECALL_ANY will be used to recall client state when
the server needs to reclaim resources.  The craa_type_mask bitmap
specifies the type of resources that are recalled, and the
craa_layouts_to_keep value specifies how many of the recalled
flexible file layouts the client is allowed to keep.  The mask flags
for the flexible file layout type are defined as in <xref target="fig-mask-flags"/>.</t>
        <figure anchor="fig-mask-flags">
          <name>Recall Mask Flags for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ff_cb_recall_any_mask {
   ///     PNFS_FF_RCA4_TYPE_MASK_READ = 20,
   ///     PNFS_FF_RCA4_TYPE_MASK_RW   = 21
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>The flags represent the iomode of the recalled layouts.  In response,
the client <bcp14>SHOULD</bcp14> return layouts of the recalled iomode that it
needs the least, keeping at most craa_layouts_to_keep flexible file
layouts.</t>
        <t>The PNFS_FF_RCA4_TYPE_MASK_READ flag notifies the client to return
layouts of iomode LAYOUTIOMODE4_READ.  Similarly, the
PNFS_FF_RCA4_TYPE_MASK_RW flag notifies the client to return layouts
of iomode LAYOUTIOMODE4_RW.  When both mask flags are set, the
client is notified to return layouts of either iomode.</t>
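<t>The selection a client might make in response to CB_RECALL_ANY can be
sketched as follows.  This is an illustrative sketch, not normative:
the treatment of the mask values 20 and 21 as bit positions in
craa_type_mask, the tuple layout representation, and the
least-recently-used policy are all assumptions of the example.</t>

```python
# Illustrative sketch: pick flexible file layouts to return for
# CB_RECALL_ANY.  Layouts whose iomode matches the recall mask are
# candidates; the least-recently-used ones are returned first, keeping
# at most craa_layouts_to_keep of them.
PNFS_FF_RCA4_TYPE_MASK_READ = 1 << 20
PNFS_FF_RCA4_TYPE_MASK_RW   = 1 << 21

def layouts_to_return(layouts, type_mask, layouts_to_keep):
    """layouts: list of (iomode, last_used), iomode is 'READ' or 'RW'."""
    wanted = set()
    if type_mask & PNFS_FF_RCA4_TYPE_MASK_READ:
        wanted.add("READ")
    if type_mask & PNFS_FF_RCA4_TYPE_MASK_RW:
        wanted.add("RW")
    candidates = sorted((l for l in layouts if l[0] in wanted),
                        key=lambda l: l[1])          # oldest first
    surplus = max(0, len(candidates) - layouts_to_keep)
    return candidates[:surplus]
```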
      </section>
    </section>
    <section anchor="client-fencing">
      <name>Client Fencing</name>
      <t>In cases where clients are uncommunicative and their lease has
expired or when clients fail to return recalled layouts within a
lease period, the server <bcp14>MAY</bcp14> revoke client layouts and reassign
these resources to other clients (see Section 12.5.5 of <xref target="RFC8881"/>).
To avoid data corruption, the metadata server <bcp14>MUST</bcp14> fence off the
revoked clients from the respective data files as described in
<xref target="sec-Fencing-Clients"/>.</t>
    </section>
    <section anchor="new-nfsv42-error-values">
      <name>New NFSv4.2 Error Values</name>
      <figure anchor="fig-errors-xdr">
        <name>Errors XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* Erasure Coding errors start here */
   ///
   /// NFS4ERR_XATTR2BIG      = 10096,/* xattr value is too big  */
   /// NFS4ERR_CODING_NOT_SUPPORTED
   ///    = 10097,/* Coding Type unsupported  */
   /// NFS4ERR_PAYLOAD_NOT_CONSISTENT = 10098,/* payload inconsistent  */
   /// NFS4ERR_CHUNK_LOCKED = 10099,/* chunk is locked  */
   /// NFS4ERR_CHUNK_GUARDED = 10100/* chunk is guarded  */
   ///
]]></sourcecode>
      </figure>
      <t>The new error codes are shown in <xref target="fig-errors-xdr"/>.</t>
      <section anchor="error-definitions">
        <name>Error Definitions</name>
        <table anchor="tbl-protocol-errors">
          <name>New Error Definitions</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Number</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">10097</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CODING_NOT_SUPPORTED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_NOT_CONSISTENT</td>
              <td align="left">10098</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_LOCKED</td>
              <td align="left">10099</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_LOCKED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_GUARDED</td>
              <td align="left">10100</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_GUARDED"/></td>
            </tr>
          </tbody>
        </table>
        <section anchor="sec-NFS4ERR_CODING_NOT_SUPPORTED">
          <name>NFS4ERR_CODING_NOT_SUPPORTED (Error Code 10097)</name>
          <t>The client requested an ffv2_coding_type4 which the metadata server
does not support.  I.e., if the client sends a layout_hint requesting
an erasure coding type that the metadata server does not support,
this error code can be returned.  The client might have to send the
layout_hint several times to determine the overlapping set of
supported erasure coding types.</t>
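<t>That probing loop can be sketched as follows.  This is an
illustrative sketch; send_layout_hint is a hypothetical stand-in for
issuing a LAYOUTGET carrying the layout hint, and the status strings
are used in place of real wire values.</t>

```python
# Illustrative sketch: probe the metadata server's supported coding
# types by retrying the layout hint, in preference order, until one is
# not rejected with NFS4ERR_CODING_NOT_SUPPORTED.
def negotiate_coding(preferred_types, send_layout_hint):
    for coding in preferred_types:
        status = send_layout_hint(coding)
        if status != "NFS4ERR_CODING_NOT_SUPPORTED":
            return coding
    return None          # no overlap between client and server
```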
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT">
          <name>NFS4ERR_PAYLOAD_NOT_CONSISTENT (Error Code 10098)</name>
          <t>The client encountered a payload in which the blocks were inconsistent
and remained inconsistent.  As the client cannot tell if another
client is actively writing, it informs the metadata server of this
error via LAYOUTERROR.  The metadata server can then arrange for
repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_LOCKED">
          <name>NFS4ERR_CHUNK_LOCKED (Error Code 10099)</name>
          <t>The client tried an operation on a chunk which resulted in the data
server reporting that the chunk was locked. The client will then
inform the metadata server of this error via LAYOUTERROR.  The
metadata server can then arrange for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_GUARDED">
          <name>NFS4ERR_CHUNK_GUARDED (Error Code 10100)</name>
          <t>The client tried a guarded CHUNK_WRITE on a chunk which did not match
the guard on the chunk in the data file. As such, the CHUNK_WRITE was
rejected and the client should refresh the chunk it has cached.</t>
          <t><cref source="Tom">This really points out either we need an array of
errors in the chunk operation responses or we need to not send an
array of chunks in the requests. The arrays were picked in order to
reduce the header to data cost, but really do not make sense.</cref></t>
          <t><cref source="Tom">Trying out an array of errors.</cref></t>
        </section>
      </section>
      <section anchor="operations-and-their-valid-errors">
        <name>Operations and Their Valid Errors</name>
        <t>The operations and their valid errors are presented in
<xref target="tbl-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-ops-and-errors">
          <name>Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left"> </td>
              <td align="left"> </td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="callback-operations-and-their-valid-errors">
        <name>Callback Operations and Their Valid Errors</name>
        <t>The callback operations and their valid errors are presented in
<xref target="tbl-cb-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-cb-ops-and-errors">
          <name>Callback Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Callback Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CB_CHUNK_REPAIR</td>
              <td align="left">NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DEADSESSION, NFS4ERR_DELAY, NFS4ERR_CODING_NOT_SUPPORTED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, NFS4ERR_NOTSUPP, NFS4ERR_OLD_STATEID, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="errors-and-the-operations-that-use-them">
        <name>Errors and the Operations That Use Them</name>
        <t>The errors and the operations that use them are presented in
<xref target="tbl-errors-and-ops"/>.  All operations not defined in this document
are defined in Section 18 of <xref target="RFC8881"/> and Section 15 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-errors-and-ops">
          <name>Errors and the Operations That Use Them</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Operations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">CB_CHUNK_REPAIR, LAYOUTGET</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="exchgid4flagusepnfsds">
      <name>EXCHGID4_FLAG_USE_ERASURE_DS</name>
      <figure anchor="fig-EXCHGID4_FLAG_USE_PNFS_DS">
        <name>The EXCHGID4_FLAG_USE_ERASURE_DS</name>
        <sourcecode type="xdr"><![CDATA[
   /// const EXCHGID4_FLAG_USE_ERASURE_DS      = 0x00100000;
]]></sourcecode>
      </figure>
      <t>When a data server connects to a metadata server, it can state its
pNFS role via EXCHANGE_ID (see Section 18.35 of <xref target="RFC8881"/>).
The data server can use EXCHGID4_FLAG_USE_ERASURE_DS (see
<xref target="fig-EXCHGID4_FLAG_USE_PNFS_DS"/>) to indicate that it supports the
new NFSv4.2 operations introduced in this document.  Section 13.1
of <xref target="RFC8881"/> describes the interaction of the various pNFS roles
masked by EXCHGID4_FLAG_MASK_PNFS.  However, that mask does not cover
EXCHGID4_FLAG_USE_ERASURE_DS; i.e., EXCHGID4_FLAG_USE_ERASURE_DS can
be used in combination with all of the pNFS flags.</t>
      <t>If the data server sets EXCHGID4_FLAG_USE_ERASURE_DS during the
EXCHANGE_ID operation, then it <bcp14>MUST</bcp14> support all of the operations
in <xref target="tbl-protocol-ops"/>.  Further, this support is orthogonal to the
Erasure Coding Type selected.  The data server is unaware of which type
is driving the I/O.</t>
    </section>
    <section anchor="new-nfsv42-attributes">
      <name>New NFSv4.2 Attributes</name>
      <section anchor="attribute-88-fattr4codingblocksize">
        <name>Attribute 88: fattr4_coding_block_size</name>
        <figure anchor="fig-fattr4_coding_block_size">
          <name>XDR for fattr4_coding_block_size</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef size_t                    fattr4_coding_block_size;
   ///
   /// const FATTR4_CODING_BLOCK_SIZE  = 88;
   ///
]]></sourcecode>
        </figure>
        <t>The new attribute fattr4_coding_block_size (see
<xref target="fig-fattr4_coding_block_size"/>) is an <bcp14>OPTIONAL</bcp14> NFSv4.2 attribute
that <bcp14>MUST</bcp14> be supported if the metadata server supports the Flexible
File Version 2 Layout Type.  By querying it, the client can determine
the data block size it is to use when coding the data blocks into
chunks.</t>
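        <t>As a non-normative sketch (the helper and type names here are
illustrative, not part of the protocol), a client that has queried
fattr4_coding_block_size can map a byte offset in the file to a
coding block index and an offset within that block:</t>
        <figure anchor="fig-example-coding-block-size">
          <name>Sketch: Using fattr4_coding_block_size</name>
          <sourcecode type="c"><![CDATA[
   #include <stdint.h>

   struct block_pos {
       uint64_t block_index;   /* which coding block the offset is in */
       uint64_t block_offset;  /* byte offset within that block */
   };

   /* Illustrative only: split a file byte offset using the value
    * returned for fattr4_coding_block_size. */
   static struct block_pos
   map_offset(uint64_t byte_offset, uint64_t coding_block_size)
   {
       struct block_pos p;

       p.block_index  = byte_offset / coding_block_size;
       p.block_offset = byte_offset % coding_block_size;
       return p;
   }
]]></sourcecode>
        </figure>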
      </section>
    </section>
    <section anchor="new-nfsv42-common-data-structures">
      <name>New NFSv4.2 Common Data Structures</name>
      <section anchor="sec-chunk_guard4">
        <name>chunk_guard4</name>
        <figure anchor="fig-chunk_guard4">
          <name>XDR for chunk_guard4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_guard4 {
   ///     uint32_t   cg_gen_id;
   ///     uint32_t   cg_client_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_guard4 (see <xref target="fig-chunk_guard4"/>) is effectively a 64-bit
value, with the upper 32 bits, cg_gen_id, being the current generation
id of the chunk on the data server and the lower 32 bits, cg_client_id, being
a unique id established when the client performed the EXCHANGE_ID operation
(see Section 18.35 of <xref target="RFC8881"/>) with the metadata server.  The
lower 32 bits are passed back in the LAYOUTGET operation (see
Section 18.43 of <xref target="RFC8881"/>) as the fml_client_id (see Section
2.9).</t>
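        <t>The packing described above can be sketched in C (non-normative;
the helper names are illustrative):</t>
        <figure anchor="fig-example-chunk-guard-pack">
          <name>Sketch: chunk_guard4 as a 64-bit Value</name>
          <sourcecode type="c"><![CDATA[
   #include <stdint.h>

   struct chunk_guard4 {
       uint32_t cg_gen_id;
       uint32_t cg_client_id;
   };

   /* Combine the two fields into the single 64-bit value described
    * above: generation id in the upper 32 bits, client id in the
    * lower 32 bits. */
   static uint64_t
   guard_pack(struct chunk_guard4 g)
   {
       return ((uint64_t)g.cg_gen_id << 32) | g.cg_client_id;
   }

   static struct chunk_guard4
   guard_unpack(uint64_t v)
   {
       struct chunk_guard4 g;

       g.cg_gen_id    = (uint32_t)(v >> 32);
       g.cg_client_id = (uint32_t)(v & 0xffffffffU);
       return g;
   }
]]></sourcecode>
        </figure>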
        <t><cref source="Tom">fix the section</cref></t>
      </section>
      <section anchor="chunkowner4">
        <name>chunk_owner4</name>
        <figure anchor="fig-chunk_owner4">
          <name>XDR for chunk_owner4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_owner4 {
   ///     chunk_guard4   co_guard;
   ///     uint32_t       co_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_owner4 (see <xref target="fig-chunk_owner4"/>) is used to determine
when and by whom a block was written.  The co_id is used to identify
the block and <bcp14>MUST</bcp14> be the index of the chunk within the file, i.e.,
the offset of the start of the chunk divided by the chunk
length.  The co_guard is a chunk_guard4 (see <xref target="sec-chunk_guard4"/>)
used to identify a given transaction.</t>
        <t>The co_guard is like the change attribute (see Section 5.8.1.4 of
<xref target="RFC8881"/>) in that each chunk write by a given client has to have
a unique co_guard.  I.e., it can be determined to which transaction
across all data files a chunk corresponds.</t>
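        <t>The co_id rule above can be sketched as (non-normative; the
helper name is illustrative):</t>
        <figure anchor="fig-example-co-id">
          <name>Sketch: Deriving co_id</name>
          <sourcecode type="c"><![CDATA[
   #include <stdint.h>

   /* co_id MUST be the index of the chunk within the file: the
    * offset of the start of the chunk divided by the chunk length. */
   static uint32_t
   chunk_co_id(uint64_t chunk_start, uint64_t chunk_len)
   {
       return (uint32_t)(chunk_start / chunk_len);
   }
]]></sourcecode>
        </figure>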
      </section>
    </section>
    <section anchor="new-nfsv42-operations">
      <name>New NFSv4.2 Operations</name>
      <figure anchor="fig-ops-xdr">
        <name>Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New operations for Erasure Coding start here */
   ///
   ///  OP_CHUNK_COMMIT        = 77,
   ///  OP_CHUNK_ERROR         = 78,
   ///  OP_CHUNK_FINALIZE      = 79,
   ///  OP_CHUNK_HEADER_READ   = 80,
   ///  OP_CHUNK_LOCK          = 81,
   ///  OP_CHUNK_READ          = 82,
   ///  OP_CHUNK_REPAIRED      = 83,
   ///  OP_CHUNK_ROLLBACK      = 84,
   ///  OP_CHUNK_UNLOCK        = 85,
   ///  OP_CHUNK_WRITE         = 86,
   ///  OP_CHUNK_WRITE_REPAIR  = 87,
   ///
]]></sourcecode>
      </figure>
      <table anchor="tbl-protocol-ops">
        <name>Protocol OPs</name>
        <thead>
          <tr>
            <th align="left">Operation</th>
            <th align="left">Number</th>
            <th align="left">Target Server</th>
            <th align="left">Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">CHUNK_COMMIT</td>
            <td align="left">77</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_COMMIT"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ERROR</td>
            <td align="left">78</td>
            <td align="left">MDS</td>
            <td align="left">
              <xref target="sec-CHUNK_ERROR"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_FINALIZE</td>
            <td align="left">79</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_FINALIZE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_HEADER_READ</td>
            <td align="left">80</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_HEADER_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_LOCK</td>
            <td align="left">81</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_LOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_READ</td>
            <td align="left">82</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_REPAIRED</td>
            <td align="left">83</td>
            <td align="left">MDS</td>
            <td align="left">
              <xref target="sec-CHUNK_REPAIRED"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ROLLBACK</td>
            <td align="left">84</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_ROLLBACK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_UNLOCK</td>
            <td align="left">85</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_UNLOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE</td>
            <td align="left">86</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE_REPAIR</td>
            <td align="left">87</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE_REPAIR"/></td>
          </tr>
        </tbody>
      </table>
      <section anchor="sec-CHUNK_COMMIT">
        <name>Operation 77: CHUNK_COMMIT - Activate Cached Chunk Data</name>
        <section anchor="arguments">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_COMMIT4args">
            <name>XDR for CHUNK_COMMIT4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cca_offset;
   ///     count4          cca_count;
   ///     chunk_owner4    cca_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_COMMIT4resok">
            <name>XDR for CHUNK_COMMIT4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4resok {
   ///     verifier4       ccr_writeverf;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_COMMIT4res">
            <name>XDR for CHUNK_COMMIT4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_COMMIT4res switch (nfsstat4 ccr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_COMMIT4resok   ccr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description">
          <name>DESCRIPTION</name>
          <t>CHUNK_COMMIT is COMMIT (see Section 18.3 of <xref target="RFC8881"/>) with
additional semantics over the chunk_owner activating the blocks.
As such, all of the normal semantics of COMMIT directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_COMMIT
works on blocks and not on a raw data stream.  As such, cca_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cca_count is a count of
blocks to activate, not bytes to activate.</t>
          <t>Further, while it may appear that the combination of cca_offset and
cca_count is redundant with cca_chunks, the purpose of cca_chunks
is to allow the data server to differentiate between potentially
multiple pending blocks.</t>
          <t><cref source="Tom">Describe how CHUNK_COMMIT and CHUNK_FINALIZE interact.
How does CHUNK_COMMIT interact with a locked chunk?</cref></t>
        </section>
      </section>
      <section anchor="sec-CHUNK_ERROR">
        <name>Operation 78: CHUNK_ERROR - Report Error on Cached Chunk Data</name>
        <section anchor="arguments-1">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-1">
          <name>RESULTS</name>
        </section>
        <section anchor="description-1">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_FINALIZE">
        <name>Operation 79: CHUNK_FINALIZE - XXX Error on Cached Chunk Data</name>
        <section anchor="arguments-2">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-2">
          <name>RESULTS</name>
        </section>
        <section anchor="description-2">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_HEADER_READ">
        <name>Operation 80: CHUNK_HEADER_READ - Read Chunk Header from File</name>
        <section anchor="arguments-3">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4args">
            <name>XDR for CHUNK_HEADER_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    chra_stateid;
   ///     offset4     chra_offset;
   ///     count4      chra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-3">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4resok">
            <name>XDR for CHUNK_HEADER_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4resok {
   ///     bool            chrr_eof;
   ///     bool            chrr_locked<>;
   ///     chunk_owner4    chrr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <t><cref source="Tom">Do we want to have a chunk_owner for reads versus writes?
Instead of co-arrays, have one single in the responses?</cref></t>
          <figure anchor="fig-CHUNK_HEADER_READ4res">
            <name>XDR for CHUNK_HEADER_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_HEADER_READ4res switch (nfsstat4 chrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_HEADER_READ4resok     chrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-3">
          <name>DESCRIPTION</name>
          <t>CHUNK_HEADER_READ differs from CHUNK_READ in that it only reads chunk
headers in the desired data range.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_LOCK">
        <name>Operation 81: CHUNK_LOCK - Lock Cached Chunk Data</name>
        <section anchor="arguments-4">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-4">
          <name>RESULTS</name>
        </section>
        <section anchor="description-4">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_READ">
        <name>Operation 82: CHUNK_READ - Read Chunks from File</name>
        <section anchor="arguments-5">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_READ4args">
            <name>XDR for CHUNK_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    cra_stateid;
   ///     offset4     cra_offset;
   ///     count4      cra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-5">
          <name>RESULTS</name>
          <figure anchor="fig-read_chunk4">
            <name>XDR for read_chunk4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct read_chunk4 {
   ///     uint32_t        cr_crc;
   ///     uint32_t        cr_effective_len;
   ///     chunk_owner4    cr_owner;
   ///     uint32_t        cr_payload_id;
   ///     bool            cr_locked<>;  // TDH - make a flag
   ///     nfsstat4        cr_status<>;
   ///     opaque          cr_chunk<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4resok">
            <name>XDR for CHUNK_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4resok {
   ///     bool        crr_eof;
   ///     read_chunk4 crr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4res">
            <name>XDR for CHUNK_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_READ4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///          CHUNK_READ4resok     crr_resok4;
   ///     default:
   ///          void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-5">
          <name>DESCRIPTION</name>
          <t>CHUNK_READ is READ (see Section 18.22 of <xref target="RFC8881"/>) with additional
semantics over the chunk_owner.  As such, all of the normal semantics
of READ directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_READ
works on blocks and not on a raw data stream.  As such, cra_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cra_count is a count of
blocks to read, not bytes to read.</t>
          <t>When reading a set of blocks across the data servers, it can be the
case that some data servers do not have any data at that location.
In that case, the server either returns crr_eof if the cra_offset
exceeds the number of blocks that the data server is aware of, or it
returns an empty block for that location.</t>
          <t>For example, in <xref target="fig-example-CHUNK_READ4args"/>, the client asks
for 4 blocks starting with the 3rd block in the file.  The second
data server responds as in <xref target="fig-example-CHUNK_READ4resok"/>.  The
client would read this as: there is valid data for blocks 2 and 4,
there is a hole at block 3, and there is no data for block 5.  The
data server <bcp14>MUST</bcp14> calculate a valid cr_crc for block 3 based on the
generated fields.</t>
          <figure anchor="fig-example-CHUNK_READ4args">
            <name>Example: CHUNK_READ4args parameters</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4args                |
  +--------------------------------+
  | cra_stateid: 0                 |
  | cra_offset: 2                  |
  | cra_count: 4                   |
  +--------------------------------+
]]></artwork>
          </figure>
          <figure anchor="fig-example-CHUNK_READ4resok">
            <name>Example: Resulting CHUNK_READ4resok reply</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4resok               |
  +--------------------------------+
  | crr_eof: true                  |
  | crr_chunks[0]:                 |
  |     cr_crc: 0x3faddace         |
  |     cr_owner:                  |
  |         co_id: 2               |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  | crr_chunks[1]:                 |
  |     cr_crc: 0xdeade4e5         |
  |     cr_owner:                  |
  |         co_id: 3               |
  |         co_guard:              |
  |             cg_gen_id   : 0    |
  |             cg_client_id: 0    |
  |     cr_payload_id: 1           |
  |     cr_chunk: 0000...00000     |
  | crr_chunks[2]:                 |
  |     cr_crc: 0x7778abcd         |
  |     cr_owner:                  |
  |         co_id: 4               |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  +--------------------------------+
]]></artwork>
          </figure>
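          <t>A client-side sketch (non-normative) of classifying such a
reply, assuming, as in the example above, that a chunk that was never
written is returned with a zeroed co_guard:</t>
          <figure anchor="fig-example-hole-check">
            <name>Sketch: Detecting a Hole in a CHUNK_READ Reply</name>
            <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   struct chunk_guard4 { uint32_t cg_gen_id; uint32_t cg_client_id; };
   struct chunk_owner4 { struct chunk_guard4 co_guard; uint32_t co_id; };

   /* Assumption (matches block 3 in the example reply): a chunk
    * whose guard was never established is a hole. */
   static bool
   chunk_is_hole(const struct chunk_owner4 *owner)
   {
       return owner->co_guard.cg_gen_id == 0 &&
              owner->co_guard.cg_client_id == 0;
   }
]]></sourcecode>
          </figure>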
        </section>
      </section>
      <section anchor="sec-CHUNK_REPAIRED">
        <name>Operation 83: CHUNK_REPAIRED - Error Repaired on Cached Chunk Data</name>
        <section anchor="arguments-6">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-6">
          <name>RESULTS</name>
        </section>
        <section anchor="description-6">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_ROLLBACK">
        <name>Operation 84: CHUNK_ROLLBACK - Rollback Changes on Cached Chunk Data</name>
        <section anchor="arguments-7">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-7">
          <name>RESULTS</name>
        </section>
        <section anchor="description-7">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_UNLOCK">
        <name>Operation 85: CHUNK_UNLOCK - Unlock on Cached Chunk Data</name>
        <section anchor="arguments-8">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-8">
          <name>RESULTS</name>
        </section>
        <section anchor="description-8">
          <name>DESCRIPTION</name>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE">
        <name>Operation 86: CHUNK_WRITE - Write Chunks to File</name>
        <section anchor="arguments-9">
          <name>ARGUMENTS</name>
          <figure anchor="fig-write_chunk_guard4">
            <name>XDR for write_chunk_guard4</name>
            <sourcecode type="xdr"><![CDATA[
   /// union write_chunk_guard4 switch (bool cwg_check) {
   ///     case TRUE:
   ///         chunk_guard4   cwg_guard;
   ///     case FALSE:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4args">
            <name>XDR for CHUNK_WRITE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwa_stateid;
   ///     offset4            cwa_offset;
   ///     stable_how4        cwa_stable;
   ///     chunk_owner4       cwa_owner;
   ///     uint32_t           cwa_payload_id;
   ///     write_chunk_guard4 cwa_guard;
   ///     uint32_t           cwa_chunk_size;
   ///     uint32_t           cwa_crc32s<>;
   ///     opaque             cwa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-9">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE4resok">
            <name>XDR for CHUNK_WRITE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE4resok {
   ///     count4          cwr_count;
   ///     stable_how4     cwr_committed;
   ///     verifier4       cwr_writeverf;
   ///     nfsstat4        cwr_status<>;
   ///     chunk_owner4    cwr_owners<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4res">
            <name>XDR for CHUNK_WRITE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE4res switch (nfsstat4 cwr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE4resok    cwr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-9">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE is WRITE (see Section 18.32 of <xref target="RFC8881"/>) with
additional semantics over the chunk_owner and the activation of
blocks.  As such, all of the normal semantics of WRITE directly
apply.</t>
          <t>The main difference between the two operations is that CHUNK_WRITE
works on blocks and not on a raw data stream.  As such, cwa_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cwr_count is a count of
written blocks, not written bytes.</t>
          <t>If cwa_stable is FILE_SYNC4, the data server <bcp14>MUST</bcp14> commit the written
header and block data plus all file system metadata to stable storage
before returning results.  This corresponds to the NFSv2 protocol
semantics.  Any other behavior constitutes a protocol violation.
If cwa_stable is DATA_SYNC4, then the data server <bcp14>MUST</bcp14> commit all
of the header and block data to stable storage and enough of the
metadata to retrieve the data before returning.  The data server
implementer is free to implement DATA_SYNC4 in the same fashion as
FILE_SYNC4, but with a possible performance drop.  If cwa_stable
is UNSTABLE4, the data server is free to commit any part of the
header and block data and the metadata to stable storage, including
all or none, before returning a reply to the client.  There is no
guarantee whether or when any uncommitted data will subsequently
be committed to stable storage.  The only guarantees made by the
data server are that it will not destroy any data without changing
the value of writeverf and that it will not commit the data and
metadata at a level less than that requested by the client.</t>
          <t>The activation of header and block data interacts with the co_activated
for each of the written blocks.  If the data is not committed to
stable storage, then the co_activated field <bcp14>MUST NOT</bcp14> be set to true.
Once the data is committed to stable storage, the data server
can set the block's co_activated if one of these conditions applies:</t>
          <ul spacing="normal">
            <li>
              <t>it is the first write to that block and the
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY flag is set</t>
            </li>
            <li>
              <t>the CHUNK_COMMIT is issued later for that block.</t>
            </li>
          </ul>
          <t>There are subtle interactions with write holes caused by racing
clients.  One client could win the race in each case, but because
it used a cwa_stable of UNSTABLE4, the subsequent writes from the
second client with a cwa_stable of FILE_SYNC4 can be the ones awarded
co_activated set to true for each of the blocks in the payload.</t>
          <t>Finally, the interaction of cwa_stable can cause a client to
mistakenly believe that, because it received a response with
co_activated of false, the blocks are not activated.  A
subsequent CHUNK_READ or CHUNK_HEADER_READ might show that co_activated
is true without any interaction by the client via CHUNK_COMMIT.</t>
          <section anchor="guarding-the-write">
            <name>Guarding the Write</name>
            <t>A guarded CHUNK_WRITE is one in which the writing of a block <bcp14>MUST</bcp14>
fail if cwa_guard.cwg_check is set and the target chunk does not have
both the same cg_gen_id as cwa_guard.cwg_guard.cg_gen_id and the same
cg_client_id as cwa_guard.cwg_guard.cg_client_id.  This is
useful in read-update-write scenarios.  The client reads a block,
updates it, and is prepared to write it back.  It guards the write
such that if another writer has modified the block, the data server
will reject the modification.</t>
            <t>As the chunk_guard4 (see <xref target="fig-chunk_guard4"/>) does not have a
chunk_id and the CHUNK_WRITE applies to all blocks in the range from
cwa_offset to the length of cwa_chunks, each of the target blocks
<bcp14>MUST</bcp14> have the same cg_gen_id and cg_client_id.  The client <bcp14>SHOULD</bcp14>
present the smallest set of blocks possible to meet this
requirement.</t>
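            <t>The guard check above can be sketched as follows (non-normative;
field names follow the XDR in this section):</t>
            <figure anchor="fig-example-guard-check">
              <name>Sketch: Evaluating a Guarded CHUNK_WRITE</name>
              <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   struct chunk_guard4 { uint32_t cg_gen_id; uint32_t cg_client_id; };

   /* Data server side sketch: a guarded write proceeds only if the
    * stored guard of every target chunk matches the presented
    * cwa_guard.cwg_guard; an unguarded write always proceeds. */
   static bool
   guard_permits_write(bool cwg_check,
                       struct chunk_guard4 presented,
                       struct chunk_guard4 stored)
   {
       if (!cwg_check)
           return true;
       return presented.cg_gen_id == stored.cg_gen_id &&
              presented.cg_client_id == stored.cg_client_id;
   }
]]></sourcecode>
            </figure>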
            <t><cref source="Tom"> Is the DS supposed to vet all blocks first or
proceed to the first error?  Or do all blocks and return an array
of errors?  (This last one is a no-go.)  Also, if we do the vet
first, what happens if a CHUNK_WRITE comes in after the vetting?
Are we to lock the file during this process.  Even if we do that,
we still have the issue of multiple DSes.  </cref></t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE_REPAIR">
        <name>Operation 87: CHUNK_WRITE_REPAIR - Write Repaired Cached Chunk Data</name>
        <section anchor="arguments-10">
          <name>ARGUMENTS</name>
        </section>
        <section anchor="results-10">
          <name>RESULTS</name>
        </section>
        <section anchor="description-10">
          <name>DESCRIPTION</name>
          <t><cref source="Tom">Either a cut-and-paste of CHUNK_WRITE or overload
it?</cref></t>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The combination of components in a pNFS system is required to
preserve the security properties of NFSv4.1+ with respect to an
entity accessing data via a client.  The pNFS feature partitions
the NFSv4.1+ file system protocol into two parts: the control
protocol and the data protocol.  As the control protocol in this
document is NFS, the security properties are equivalent to the
version of NFS being used.  The flexible file layout further divides
the data protocol into metadata and data paths.  The security
properties of the metadata path are equivalent to those of NFSv4.1+
(see Sections 1.7.1 and 2.2.1 of <xref target="RFC8881"/>).  And the security
properties of the data path are equivalent to those of the version
of NFS used to access the storage device, with the provision that
the metadata server is responsible for authenticating client access
to the data file.  The metadata server provides appropriate credentials
to the client to access data files on the storage device.  It is
also responsible for revoking access for a client to the storage
device.</t>
      <t>The metadata server enforces the file access control policy at
LAYOUTGET time.  The client should use RPC authorization credentials
for getting the layout for the requested iomode (LAYOUTIOMODE4_READ
or LAYOUTIOMODE4_RW), and the server verifies the permissions and
ACL for these credentials, possibly returning NFS4ERR_ACCESS if the
client is not allowed the requested iomode.  If the LAYOUTGET
operation succeeds, the client receives, as part of the layout, a
set of credentials allowing it I/O access to the specified data
files corresponding to the requested iomode.  When the client acts
on I/O operations on behalf of its local users, it <bcp14>MUST</bcp14> authenticate
and authorize the user by issuing respective OPEN and ACCESS calls
to the metadata server, similar to having NFSv4 data delegations.</t>
      <t>The combination of filehandle, synthetic uid, and gid in the layout
is the way that the metadata server enforces access control to the
data server.  The client only has access to filehandles of file
objects and not directory objects.  Thus, given a filehandle in a
layout, it is not possible to guess the parent directory filehandle.
Further, as the data file permissions only allow the given synthetic
uid read/write permission and the given synthetic gid read permission,
knowing the synthetic ids of one file does not necessarily allow
access to any other data file on the storage device.</t>
      <t>The metadata server can also deny access at any time by fencing the
data file, which means changing the synthetic ids.  In turn, that
forces the client to return its current layout and get a new layout
if it wants to continue I/O to the data file.</t>
      <t>If access is allowed, the client uses the corresponding (read-only
or read/write) credentials to perform the I/O operations at the
data file's storage devices.  When the metadata server receives a
request to change a file's permissions or ACL, it <bcp14>SHOULD</bcp14> recall all
layouts for that file and then <bcp14>MUST</bcp14> fence off any clients still
holding outstanding layouts for the respective files by implicitly
invalidating the previously distributed credential on all data files
comprising the file in question.  It is <bcp14>REQUIRED</bcp14> that this be done
before committing to the new permissions and/or ACL.  By requesting
new layouts, the clients will reauthorize access against the modified
access control metadata.  Recalling the layouts in this case is
intended to prevent clients from getting an error on I/Os done after
the client was fenced off.</t>
      <section anchor="transport-layer-security">
        <name>Transport Layer Security</name>
      </section>
      <section anchor="rpcsecgss-and-security-services">
        <name>RPCSEC_GSS and Security Services</name>
        <t><em>This section explains why support for RPCSEC_GSS is limited for this layout type.</em></t>
        <t>Because of the special use of principals within the loosely coupled
model, the issues are different depending on the coupling model.</t>
        <section anchor="loosely-coupled">
          <name>Loosely Coupled</name>
          <t>RPCSEC_GSS version 3 (RPCSEC_GSSv3) <xref target="RFC7861"/> contains facilities
that would allow it to be used to authorize the client to the storage
device on behalf of the metadata server.  Doing so would require
that each of the metadata server, storage device, and client would
need to implement RPCSEC_GSSv3 using an RPC-application-defined
structured privilege assertion in a manner described in Section
4.9.1 of <xref target="RFC7862"/>.  The specifics necessary to do so are not
described in this document.  This is principally because any such
specification would require extensive implementation work on a wide
range of storage devices, which would be unlikely to result in a
widely usable specification for a considerable time.</t>
          <t>As a result, the layout type described in this document does not
provide support for the use of RPCSEC_GSS together with the loosely
coupled model.  However, future layout types could be specified that
allow such support, either through the use of RPCSEC_GSSv3 or in
other ways.</t>
        </section>
        <section anchor="tightly-coupled">
          <name>Tightly Coupled</name>
          <t>With tight coupling, the principal used to access the metadata file
is exactly the same as the one used to access the data file.  The storage
device can use the control protocol to validate any RPC credentials.
As a result, there are no security issues related to using RPCSEC_GSS
with a tightly coupled system.  For example, if Kerberos V5 Generic
Security Service Application Program Interface (GSS-API) <xref target="RFC4121"/>
is used as the security mechanism, then the storage device could
use a control protocol to validate the RPC credentials to the
metadata server.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t><xref target="RFC8881"/> introduced the "pNFS Layout Types Registry"; new layout
type numbers in this registry need to be assigned by IANA.  This
document defines the protocol associated with a new layout
type number: LAYOUT4_FLEX_FILES_V2 (see <xref target="tbl_layout_types"/>).</t>
      <table anchor="tbl_layout_types">
        <name>Layout Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Layout Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">LAYOUT4_FLEX_FILES_V2</td>
            <td align="left">0x6</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t><xref target="RFC8881"/> also introduced the "NFSv4 Recallable Object Types
Registry".  This document defines new recallable objects for
RCA4_TYPE_MASK_FF2_LAYOUT_MIN and RCA4_TYPE_MASK_FF2_LAYOUT_MAX
(see <xref target="tbl_recallables"/>).</t>
      <table anchor="tbl_recallables">
        <name>Recallable Object Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Recallable Object Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MIN</td>
            <td align="left">20</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MAX</td>
            <td align="left">21</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t>This document introduces the "Flexible File Version 2 Layout Type
Erasure Coding Type Registry" and defines the FFV2_CODING_MIRRORED
type for Client-Side Mirroring (see
<xref target="tbl-coding-types"/>).</t>
      <table anchor="tbl-coding-types">
        <name>Flexible File Version 2 Layout Type Erasure Coding Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Erasure Coding Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">FFV2_CODING_MIRRORED</td>
            <td align="left">1</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
        </tbody>
      </table>
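      <t>Taken together, the assignments in the tables above correspond to
the following constants (an illustrative C rendering; the names follow
the registries, but this listing itself is not normative):</t>
      <sourcecode type="c"><![CDATA[
```c
#include <assert.h>

/* "pNFS Layout Types" registry */
#define LAYOUT4_FLEX_FILES_V2          0x6

/* "NFSv4 Recallable Object Types" registry */
#define RCA4_TYPE_MASK_FF2_LAYOUT_MIN  20
#define RCA4_TYPE_MASK_FF2_LAYOUT_MAX  21

/* "Flexible File Version 2 Layout Type Erasure Coding Type" registry */
#define FFV2_CODING_MIRRORED           1
```
]]></sourcecode>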
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The following people from Hammerspace were instrumental in driving the Flexible
File Version 2 Layout Type: David Flynn, Trond Myklebust, Didier
Feron, Jean-Pierre Monchanin, Pierre Evenou, and Brian Pawlowski.</t>
      <t>Christoph Hellwig was instrumental in making sure the Flexible File
Version 2 Layout Type was applicable to more than the Mojette
Transform.</t>
      <t>Chris Inacio, Brian Pawlowski, and Gorry Fairhurst helped guide
this process.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC4121">
          <front>
            <title>The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2</title>
            <author fullname="L. Zhu" initials="L." surname="Zhu"/>
            <author fullname="K. Jaganathan" initials="K." surname="Jaganathan"/>
            <author fullname="S. Hartman" initials="S." surname="Hartman"/>
            <date month="July" year="2005"/>
            <abstract>
              <t>This document defines protocols, procedures, and conventions to be employed by peers implementing the Generic Security Service Application Program Interface (GSS-API) when using the Kerberos Version 5 mechanism.</t>
              <t>RFC 1964 is updated and incremental changes are proposed in response to recent developments such as the introduction of Kerberos cryptosystem framework. These changes support the inclusion of new cryptosystems, by defining new per-message tokens along with their encryption and checksum algorithms based on the cryptosystem profiles. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4121"/>
          <seriesInfo name="DOI" value="10.17487/RFC4121"/>
        </reference>
        <reference anchor="RFC4506">
          <front>
            <title>XDR: External Data Representation Standard</title>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <date month="May" year="2006"/>
            <abstract>
              <t>This document describes the External Data Representation Standard (XDR) protocol as it is currently deployed and accepted. This document obsoletes RFC 1832. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="67"/>
          <seriesInfo name="RFC" value="4506"/>
          <seriesInfo name="DOI" value="10.17487/RFC4506"/>
        </reference>
        <reference anchor="RFC5531">
          <front>
            <title>RPC: Remote Procedure Call Protocol Specification Version 2</title>
            <author fullname="R. Thurlow" initials="R." surname="Thurlow"/>
            <date month="May" year="2009"/>
            <abstract>
              <t>This document describes the Open Network Computing (ONC) Remote Procedure Call (RPC) version 2 protocol as it is currently deployed and accepted. This document obsoletes RFC 1831. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5531"/>
          <seriesInfo name="DOI" value="10.17487/RFC5531"/>
        </reference>
        <reference anchor="RFC5662">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 External Data Representation Standard (XDR) Description</title>
            <author fullname="S. Shepler" initials="S." role="editor" surname="Shepler"/>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="January" year="2010"/>
            <abstract>
              <t>This document provides the External Data Representation Standard (XDR) description for Network File System version 4 (NFSv4) minor version 1. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5662"/>
          <seriesInfo name="DOI" value="10.17487/RFC5662"/>
        </reference>
        <reference anchor="RFC7530">
          <front>
            <title>Network File System (NFS) Version 4 Protocol</title>
            <author fullname="T. Haynes" initials="T." role="editor" surname="Haynes"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.</t>
              <t>This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7530"/>
          <seriesInfo name="DOI" value="10.17487/RFC7530"/>
        </reference>
        <reference anchor="RFC7861">
          <front>
            <title>Remote Procedure Call (RPC) Security Version 3</title>
            <author fullname="A. Adamson" initials="A." surname="Adamson"/>
            <author fullname="N. Williams" initials="N." surname="Williams"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document specifies version 3 of the Remote Procedure Call (RPC) security protocol (RPCSEC_GSS). This protocol provides support for multi-principal authentication of client hosts and user principals to a server (constructed by generic composition), security label assertions for multi-level security and type enforcement, structured privilege assertions, and channel bindings. This document updates RFC 5403.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7861"/>
          <seriesInfo name="DOI" value="10.17487/RFC7861"/>
        </reference>
        <reference anchor="RFC7862">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 Protocol</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document describes NFS version 4 minor version 2; it describes the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include the following: Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7862"/>
          <seriesInfo name="DOI" value="10.17487/RFC7862"/>
        </reference>
        <reference anchor="RFC7863">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document provides the External Data Representation (XDR) description for NFS version 4 minor version 2.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7863"/>
          <seriesInfo name="DOI" value="10.17487/RFC7863"/>
        </reference>
        <reference anchor="RFC8178">
          <front>
            <title>Rules for NFSv4 Extensions and Minor Versions</title>
            <author fullname="D. Noveck" initials="D." surname="Noveck"/>
            <date month="July" year="2017"/>
            <abstract>
              <t>This document describes the rules relating to the extension of the NFSv4 family of protocols. It covers the creation of minor versions, the addition of optional features to existing minor versions, and the correction of flaws in features already published as Proposed Standards. The rules relating to the construction of minor versions and the interaction of minor version implementations that appear in this document supersede the minor versioning rules in RFC 5661 and other RFCs defining minor versions.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8178"/>
          <seriesInfo name="DOI" value="10.17487/RFC8178"/>
        </reference>
        <reference anchor="RFC8434">
          <front>
            <title>Requirements for Parallel NFS (pNFS) Layout Types</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>This document defines the requirements that individual Parallel NFS (pNFS) layout types need to meet in order to work within the pNFS framework as defined in RFC 5661. In so doing, this document aims to clearly distinguish between requirements for pNFS as a whole and those specifically directed to the pNFS file layout. The lack of a clear separation between the two sets of requirements has been troublesome for those specifying and evaluating new layout types. In this regard, this document updates RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8434"/>
          <seriesInfo name="DOI" value="10.17487/RFC8434"/>
        </reference>
        <reference anchor="RFC8435">
          <front>
            <title>Parallel NFS (pNFS) Flexible File Layout</title>
            <author fullname="B. Halevy" initials="B." surname="Halevy"/>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>Parallel NFS (pNFS) allows a separation between the metadata (onto a metadata server) and data (onto a storage device) for a file. The flexible file layout type is defined in this document as an extension to pNFS that allows the use of storage devices that require only a limited degree of interaction with the metadata server and use already-existing protocols. Client-side mirroring is also added to provide replication of files.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8435"/>
          <seriesInfo name="DOI" value="10.17487/RFC8435"/>
        </reference>
        <reference anchor="RFC8881">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 Protocol</title>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <author fullname="C. Lever" initials="C." surname="Lever"/>
            <date month="August" year="2020"/>
            <abstract>
              <t>This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 7530) and protocol extensions made subsequently. The later minor version has no dependencies on NFS version 4 minor version 0, and is considered a separate protocol.</t>
              <t>This document obsoletes RFC 5661. It substantially revises the treatment of features relating to multi-server namespace, superseding the description of those features appearing in RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8881"/>
          <seriesInfo name="DOI" value="10.17487/RFC8881"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Plank97">
          <front>
            <title>A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like System</title>
            <author initials="J." surname="Plank" fullname="J. Plank">
              <organization/>
            </author>
            <date year="1997" month="September"/>
          </front>
        </reference>
        <reference anchor="RFC1813">
          <front>
            <title>NFS Version 3 Protocol Specification</title>
            <author fullname="B. Callaghan" initials="B." surname="Callaghan"/>
            <author fullname="B. Pawlowski" initials="B." surname="Pawlowski"/>
            <author fullname="P. Staubach" initials="P." surname="Staubach"/>
            <date month="June" year="1995"/>
            <abstract>
              <t>This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations. This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1813"/>
          <seriesInfo name="DOI" value="10.17487/RFC1813"/>
        </reference>
        <reference anchor="RFC4519">
          <front>
            <title>Lightweight Directory Access Protocol (LDAP): Schema for User Applications</title>
            <author fullname="A. Sciberras" initials="A." role="editor" surname="Sciberras"/>
            <date month="June" year="2006"/>
            <abstract>
              <t>This document is an integral part of the Lightweight Directory Access Protocol (LDAP) technical specification. It provides a technical specification of attribute types and object classes intended for use by LDAP directory clients for many directory services, such as White Pages. These objects are widely used as a basis for the schema in many LDAP directories. This document does not cover attributes used for the administration of directory servers, nor does it include directory objects defined for specific uses in other documents. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4519"/>
          <seriesInfo name="DOI" value="10.17487/RFC4519"/>
        </reference>
      </references>
    </references>
  </back>
  <!-- ##markdown-source:
FrLZTM7jyz3czg16CAiMIJFOn+TGqwVIjN0WgyjKmRSTc78KM7pgBZt8E2QO
xFbWAx89IKkg8nD4IETlFUELvz8EBJG+faNlum8eHkAeDB3ELG4EkAdCh99W
8W7xWCqExmWP2cbl5QybMmGgneDbHUrl1Y/f+i+VvVE0lS2XKskyZuo8xMiM
NcjzvsxhTKlxCt5zg93iUTtlUuipmLxKhdLMpi3yUgrptq6leMh/gez8yQgr
JLjYjrZ4LiR0R7AJnAfafIfZ34vCsk9DY0jRmQ9SsKOKP2E1nKPPgh7NdI9n
etUU+Xs2qqO1JUypR6HAAU696oONuoMe8TdWH8AgkZIiGuQ+ct6DUPvORiJx
+T6I1Ot9IZUtSZymU6PCd0GkjzyjlzUGabe+ro8n2CzZ9oZEEWA1RRZCie+X
17yOF3NvWXE9FZvYlkppIMTaeoUr9UEr4uKRBuGUkrRKb3HrmQr2lloDPWvv
Ipd4dCUJFRNEE92xEAQpIQ0DlyPM4V4iVgkhkQbRfWLEiU9yyoIoO9jeL41m
vsZIu6ZMYDihsKUPEVZ7cYV3F+BqNkFq3bZLREbJ/4LSuHv3/qIj2G5z6bdK
Eh8j7g0IrAS5Kfo5sw0yttsiY2dGxs5ppSpi9/LRIDpj7Z1jzW0XUvRsTPXu
knQKUarYE81RTtmAz3tylDubo3zSHXdjJga5RNCHs5/BVNCNhbLzdmuLVBrV
Qbm5qRIWOjeRaOBCjQNKLU+gRjWINOc8VwZb1TvjerEsmf9a+ETl9BE6+07p
iJKBVXJBH6WMp8sWQfQzknFgB/gWYv561rJgYGgFDAz0Tkwcns4n7ZJ1jfrN
Pz0FefdpO3PXQHWyz/6f7LdPnz79LPsfv81+yRCvf9b+kZ9lf/zjZ+mz/wHP
Iu8o3R1xjJI/h+2MvaDE31/czfIrrBKlC29NTmRxdUCyBQ9sN5JUMqkO0/0E
FWXD1YfVo+B+ZM+jWct15MHztrPx8C95vJJZNuxXO8vCbLLfZzpO9gfa8BE5
Bu+NEFkMP/RtjfarW6SOtVLOLt0xwAzTqej8zISVBFCdLUBwRAmvsLRgRjUe
NYgPU74DsAyy5XzNWeJRgVqgdxBQVXYPzx7BST6CVSaA1BpdReczEur1Ukk0
i/pWGo2RTfPGkEqV3AQh9N3HSQE8vOBLzE7qfFyBVoKqpu2+bDCfOcdef+EL
JBBASBJfEujgC8f1tCTlhq9KsD8QhgpVYdO2WMGDeVFdr2b7mmOQqpghfUJf
nEbyypRjKdClX6/hOj/fG7GyBP/4Yn8UnJr84lnoRHfraqK6i1yS55nyC4zr
Ja0oF79rqxK+WqFCGt2tRA9RNpoZg3w+AcLFPg2dwoCMviW3wCGnqdFs68fi
QXDnHGHi2CjU8b9Rq6NhqqCZRou5SmqkSi6cfiO54EXv8fH9yeXxyY+nRyen
r1+chboc2WObNORZmirX2/av7uLZOGCJGkq8LjKJF7fEzWk6HfFERjjNffFu
Sp7KtUvbSnYQ0XdqvCB5hCbff3ziyHoREgbyFWI+iIkgo2OSOHd8mCBwvf+O
jr0UlTTJqbvRVT25U6VTNKEwFzYx+OJAm7ZV8NiHCQlGiNG5C7NU8Z5oYWu0
jf54AJef6XRyo+2/eVDjRVnVzSd9QekcH9b0trfpVQ38mvmhpuL7MhLfF//R
R/0t9vft2RxB3T2vEGnft8kCe+lc2VMPxHgqgLmv881HFbJ8k6btrK5vavSF
/vn7Pzx4cTypzsLo8SN/JcxsGEz9XVzVVnfdE7XpNN8L+mPFd4LQDVX01Mh+
MqoCc1sHEVzM6gmeKdqhmZuum9Im3tlisR0hImgTuARS69hNH/MNSTeDjCA2
/ElV7TRbEhwr0Yk4ME0wnL0cEsqWXIAg4yttc9ZRy8QPmrieHmxaahlKFyfs
jpORakYlSrlp5tZm7Mu4T+EGLpo2ZfaBBoaBspl78Bi7qwIk/CwsOLlpFEcA
fSYBEtTdyhgue0TeqIzkzfMgjJkCkhwyYdfgZA37n7gGU7+Li1LGkICZKK8o
3t8rqM1KuuflokVIZV3PWvkaitEYEe9065mIKATM14fNjO9rxRkwJBxAPQG1
NCFIKcDuC5mv490adPeC9aN9ZykhrDDmteRE0KPi0rzKtCXudapOKb2d3vr1
bnJ4owBo1n6qCoRYYO/sV6H0Bau6ouAbq/YIm+Zuc6yfuYabMGclGzMLliB3
5+31WBK1mautGTOwj+cYdXIXeah2VWjBn4BSl3GSAZn6Doe/c9EMjzmI7oVr
dMt/NgEnmNJRPNv8Q7lYL1z4kL8xbi690r3PKZhAKqeauYmLanHnWNklLq/V
KRm42r4OX26ti5RdvLCYiYvcbPp1Fb1Yh7uI/DZy0adzWBcsSWIWOhEqKGcl
rrIbDLgaHMsxOiaeM9Qh8pUINO5VGGECLgpckJhO501FccBPXPMAQYuFQ4O9
O4v38TMGB1p0hTteSvxZ25/DrPWxZlxziKMgML6kJ+mfpCro2iW99I+vpaye
8gQ8iQCnLkgYkgZPZ4QeLeoYxklvOynpvv3W+WR0eLZXd8aVTGZJFeg2dfBP
toMoiSFWPPnSyyNMdp74GIook9srZedQG8dXe2MCUbmmbeABxU3GR2+kxgnP
/fhxd/jVDqcSijoSHo2UWVdwG2/LCS54zDpSSuzSrCtfhgubzQCMEPOyWdEr
sKEbV6ZlNVOMN4VvgNjA1OweGG7Mni7s/XjGYdJamqRvpaqdu+MsOTllV9/w
icaNFR+ocpnwDRSPoEJcuSwcFhcIfpOA7WTPSTcx7q/d20kZQgWn5fjSedgz
GCTMs9eIIkkhZrWeRlxv0leX7dUEV/ixe8wpohGWEjHiiTEmKZdt8pNrmsjH
FALo02mL21oy6pNBQH7WskHnQZHJfWUJQsJlorM4AR+FjlRUOfig9542RaH6
bBvVmO4B2TtQ07sSloazh3UsDd51nzlEmql05kxn6FLNZTzbsiGkg9sY515g
+ypCXbslrjHkcDZVWxB1d5dQVi6CDJ5iMo/uZ2QzMvYZh5euVgYryvkmyV6Q
q6XE4aR5DeO3tpcV5tDgDB/Fcp6LK4BgltNj1HPLH3ovNQCV8jp3P8pOjzXN
hrecgWAyIfZPbBuF57r6uX60hptjCklFqvhSUVI12XoOfow1IYRlgMgsllzi
lSfLyEwysbsHLFWOqA89cHnd1vkC0XpwDEj+JvL9ijECsXeVX12/R2W8uqPv
RgBipy9+Gskij7PHMcs7I8X2RKsariQ3iD8b5A1MKCOVCZCyrSJvnx63Tzjd
jY/yfsgeqVQcTstirlMr0uM0uHqON7LK0Yhv8oZNYC8D2mbNqOixU+iZJbR1
6zjBX6y91JyLJiOj+dSvBSikYx4H4IkaRXCFjpGca3x+xzLHbd2HrDweTKB2
hWnpaem+5EegWB2G3WdS6BiFXVy6UNXlRLKEnicKDewGJ5tw/2HOHEgs5A1b
d0u/ym7CftWNQ/M64WHlJx6zbEWBwnHawiccCTdVK0AUB3cZT6zD9sjEZnlj
t0pKTnDayMYt8r/AYnFiUyv5s3e6JlKk/KAcsFkVuH15c+fZXZjCRs6Ls2zc
c14Ekc7vBttc1+z28gLDjNn4MwgExqdK9BlxBz7FmjSgHGpsK0I+RcqWtqzr
j22yP/p624l1FhmN8rpEMywDP/IO+6FSdjgStqtQpVYKexalFFEu1n9jTzua
ZzdWMuNPUa0X0SA/uyz88L3aHwG0/rg/2h0h430Bz7/Ndgd97c4ujvdGZ9/9
y8nRJTaDdnu97b57eXb0p9GPZy/fvjrhds972714efJnPyi127+n3ejHPWr3
uTb7+I3TX0XXy6sdSbXTZMV2K+zPvB6P8Ok3trGYBDo/2BgtA6Lh3TaNZHgx
c/X0OOJX0fhiC+trza++6S6Nk0Hsx635YU/rsEumtTyza7NqawNxqrLuAWni
h1l3HRsYWTPq7Scd31GjEKJj4twkPaAAt9hWZowrAekRhYEcJ8tAQyCx6o/4
dB+JOlTOWgs6oKrKFkERyVjNj5QZX32R8jsM7r9aizE8/SgWPW21xwShq1TR
irwbaZG9vYnUipjDB/ng27L1U6d9QlFH0m7vs2hMibi54NGINzS1kRCO6DSL
bSQvXvy4Nzo6Oz59/f3o1en5+dk5MLf882327MOuNXXENo60WwMy/NxAStEz
jcjWl7xEe2TB6keqAMLS4enh60NyqmxXQk4+ixBzazBzXN3ZIYbOzuXTz5BO
kfsqphPGLB8YhUbJDUibgWMZzE5pGkmSizsFuU3riMhH8Xv6GMPHPlD1OJxv
325LohjNp0r2b5azvcMUMNfz+s6xuHZrbcqdjK3O9Z4oCSmhGHcZaohLGNTK
lLsGSKSSSaTHRFN8nL6J84+GOWDWr+zoh7ev/zTi5P1GIWseExVje7HPIB+M
xSGpPAcW0RPMNNBGbfjJxxTSkXyveHcxXcEFcPCjSH+G0PziRf/Lb7b1cno2
uvzh/O3o1bEQRtuLebm1E0zYA40zf7FsJ/Jycwe0e6Oz1ydynFEH6cvN3Zy9
fvkTNaRPzuWCP6Of3WfekCmfIxwCbHfMwvhjTqIHKcihGXRgHj/KLD6Qpt5J
yB7xx4xTHF6VKxAZMhX+VNnVY3pkjbQVRjC7TQhyxYR9pbhioG4F9V5Sf4OT
upBdtGE1QZxveVN2ArltvWDFlcY0IILTZQSbTFKosyexASU02lbyI8RmbwTZ
T5qD5I1ixTYghdOnxs2h3TS6UUFb9byIysx4JyY11rzfkl+tSnSUeYPiwSI/
Gesv3DNPTmCHipYPTJJtFuAeMOnslVy+v2ufsA8XqjjGrA7ZAyWRV5LF6x35
qRIBIiWLz9vFntk+I9/GDmDcoOTZutIUS3zSeik9go+s4MK4lFw0qqrb2mTf
RxevWIW/Gf/8PVNADaazFCfMiz7Cwn8+pswmMmSq/74olqxHCd7xTc01efEr
H05kaxqS8zuxfqbe4JATuKF2FZPi3GsW4v5kfiTF2lWE8MnbfHS9zpsJbQzM
qG4mRVwpC6ZLpeVdPm5qWwBF8v0bHhHXPkLk2OUQvRdN1CzmEMXBuSOtTKfT
ciQvIyeaatqOpj3SDX0wnZFhfCtbaWbSoSL+VUJJwidlK9xYYK1h42bFfGmN
jBznRjmz4+hcNUp9HtfA+pw9IYRr9DbyXb0EG722Bwbd0dBUImWFqZ05EbPO
hZ0IrLAkDWTL2CjhJ0FMY5Sdh/WSiHFufZQMu8Jb6uo3Cv0hQyfOhNFwYgK4
4aiK5BraGhqiNZR9TL6BM5wq0e4OmBle5PhC0MHh0eXpjyeBJVJW5NluHwvj
P7t4c3juv7Kf7W39DL46vfyp57P9rZ+dn7w5PD3v+eyrbx7MKPlt6YF0/64D
6P5NDOfhg8cx1+RfoOhEv/lodDYBENKedvFEllFSvUR0yefXIPevZotW4Jfq
h5Db70UouEsJ2+pqxz6SKBvWSDMGDK9Z/46lPDgXiRc5KHqNtNNZvqjX1XXL
+D+kc+JKrcDgFZgcFRAoGmklaivRn3LQScg5jXSpD/jI670PvJINwt3va4ZF
hVxwvAgacBCz8pJK/uZ38zqfeHtoUeXXajpNfHgCdpkaR6n+SyPV48TUM3F0
IZnGk4YuYpeYBUNLViCOaki4Z/GhQlSx6ZyNkhV3M5zoHF2nASkFT5dkVx9+
eK53E7qHJ5c8GYfNIi4dhvNmrjS3KIIjp37iwxuEci0UQoU2+p7dEitrL9Yg
lkAYHOtZncZI9YoungMKhqbEgyayJrrEs2ua0z++FjK9LslCPi4aLnz6+3FT
oN123YwR89SLR394gQaAsvr9U3z1B+cuiKN7gxEQX38ZFxeH0bWSuImIVaaX
S2NPOGWwJxCY/ZfnuJ0VsQ1jZoTFgS43IqRSX0f8SB9mNt8UwJaMYc/Hd9FX
KVuRfOXfGG9gep2vVs3+iM0l3cEwX+jG9lKTwbanJ915BTLQmRc+3spdRZvb
ITvmZUJ57Gcp8THvkP6EksFc/9P5FKtU+td738IV0wJIAU2gBtYWeO9RddKA
XRBaV6WwVd3G4hOTOns/7uolp+PVRJ48iYGP5II+9dpB2i2WP3io5lTml55E
p0V8HN0OkjPpNIgOxhWNFhHzqlpbWamYU10KpkgcZh/qNptvHOK31G+SFISM
AdYVS8A5hdIMMIwK09rAfyl1MIIE41uJJELTJ0lH3bCKoM0mVyOcQVeTfXF5
fvoGj+Y1SJrIsA02N0EEfnHibWL9jY5PXlObva7Lfw/iYieovkCP7h3S5619
nAQXdODGD2DARZ5lN3uPPobrwlL5djwrbTbMtgti8HzRedwTQAEfvy/uujh6
gY/72/szjdvr43vDVfo+GqEz2ohCWOBbziRUVujSK3uGGAODUv9KfqBf7FMO
i3b4KWMxuR4lJCdeVmHxdJhhgYeNE3vJIQ+1qBLYc6309iGcKN2kbYChR5mi
EXkeIw9tnKAMeRxjcCvcOivcBltLF2/wSGZ3slUxn0duiux4XvoQa3Z87ytW
lqg1etiW1+v3yXh/9CyM3gcxOGy9D5FpV89yk7F2Op3LORKcdYm07nLyjajL
Ur7B6MPTL2Kavh0i56RPaEeoICrGq9GsrFZbMYquOQUceR4DjjZOAEcefyrg
DNPTwcluJeuyd7Qq9XLtI+HTxWzEeWd+JQKebUT5ZubxYH3Tms9GPniAHvaC
QbRK+sg+ecBp8mz6T5Te9Z0qf9R7svSqc7pi8eZpRg7dnw9394b7nQjTVKuL
kR8+E8Ym24YBBhPz1xEQ8GezcJCKBtvvUFcoyLJtetCgxPONN+tAg0LPeJls
lhqszJC2jOWFSFoQt5WwYxGp5w/62BEmTz0MSafDjhvMp+BJmUAPhuzDkdK6
Bzum+BF//gbc2LlJvVgRcJlUKLOuHje7yHVhjMpBomr1WHETO1WS4wrytZzV
1DGmC97goj1Cp7ScsklqghH18BONghRIRh0wJ+zYQHuzB9Jet4n2Msrwu7Mp
cNvQg+C9o/YsDnWgcqCNzawRKpBFmfxp2uumEY8cisnkMjne21CS9/vy4+jV
kVdSNU4rBDFIUbDeEqbjNJ5/0yzUKZS/k4APHxukpabJM9kXpp7XHKPP2pYq
KevT+sQFqLsrPviK8O0MNfYG8YYNFBW9Mi+F1EHwYfA+4dh1Ix9lPvDBO9fu
ozcWOWXYDADPh3sdDG0F93jyGABINbbZE8YAAanHtn7Ip8SxDfMiVNbWqmWo
lAL4rzi9uzfFS9a1tDrS0J2ELfElk14d/mSCf4M2wcsVIHLmC7RDBziO8FKI
vDZst5EPKk4EJLmjYog0a8WeOZCVA0JYNINeuXowgAlSO0ohwpCldzh8JIfc
slNzjI2Dmze3d/EgdVUYy2OoaJaulYyRgETy9ZwyPf21aGr1l0NTa4hw1EiE
djyDWVE+GdjJlrU5pRoZAHcR+HJZG95xMphFmUqkaiVlWVsvOK0OPSV3DJ8k
srP3Gg/c3VfrqayWUHRdF/phTlqN1NEp+6vikUNftRRns3VpA3VmVkDgGX2T
BTyIAAd0Zr6DYE6Kc/KACqCXkiKZmylgT0+jSkyoUUc1TD1dxZHXHIJQqvkB
Pcv5/mNauQsJseIKljgDCrRlhMaVLEXh4lKq+bud8PO7zttf7O9/w1sSC/7G
bze/5Tn/bsOco6/s8vwnO8mP9PFLXzfdfn9neupOyD76nR3il+wV44Nd36l/
tNd99FxGgwfq42fadB+dn5wcjy7OXp69Onv9t0/yk/dBv7RDbH7kfum2uPeR
+6Xb5P5H7ne//BKiS3n+9z9y8EAz/7S66nsfuftX3rcZliXtIhPLmp5jdnB0
0pqVS58vED0WKSPRRdEGL9w2wYQ2jrEKCJHtxTZM3DvBWkbKRxFupIdEq5Vt
wbznnrejWMhYd8Vp1BJbTkCIKx+DxDU4ux55murExayc71NdKcp4tcH3oe1y
o1QpU/sMkVcmpYlPQ8g0vi87E9AppLmOMiHmXMEd47qqu4Tcxx4frZBNLjjm
onQs6eZqUlVaVPDYkLxTwQ/SmWwVD01W0Zt4ROliKHkf5aOKssVgCzVdRoZY
CeujtAiBa7MHgp+GN0mqDfdJWWXQ6ux91NJcEeSgLfYpcYSRJsz8t1hI1Sbj
NHncY29F8gvFLYl8SAeJw08M1/owZKlxaSKuTi1qjnZK0hfkXJ+SnXr6KiDb
3K1aAY1jgrtlTCjhZdGQcZoSCW9It0leAlXdyerZH2+paw2FYzQDh3JauVR5
06o4oRZoqI2TZJwxDlPRXvK1TRc2lPiVOL2P5niMu9FS7JL2bsE1HBEfcjDg
B78f3mx54Fxc8o+cHe05S/3iqw377uL8GNs+TwriYbW7wCgL2BLBIMkqh4vS
cvFfC8u+3IdLFl+GcELyClMRGzvkBDod1OrsB1VwW4ukQK4IS4EPmHjeVB5P
YwOJoeUseercZnPVm1TXHle6gvJOlO2M8jTHhTIT/KOHrh59kqDIZxZJlBEm
1kKSHPqSh7rZUuV3xfVSNLRxQqJO/z5M16REMWolOV2/lez+7rM2YHJiHm2g
7sglXqQxBuHDAoH9zzl5n94onxwg5KsVpE0BlutqUjTzOy3OKwgs3bpEOvIY
jap3M5kIXgux2BSq1mpbVl6arFyiy8DKuqF4butSrbHnV/pLtcR40nsftIIt
07wznSXhV8kcUa3WHUjDqv0e5puSK3/DN0tytPYmlDG+093EhyR1FyupR262
gpBfeV3V6pu37TxE9tTM76Yy+L2fcmJoTamFAXesqaYSJkETPTBZMylo23rT
fp0om4YEYZx3TPNMkTf2IN1+xL4U/C5p9iVQ12Ed9QVcO9KfAmTJim6FFmn+
VtRt+MIUzKHlvBigLz6qT7LQ38nWctS6KF1k5BkRWkOlqHh4KElxSf6KVLSK
M0hT7VbJZcqLwhLuZk2Y/dQz4FFKFd6CNpTbTrIcCIV/jkWgnRSG12Qrs7S0
R7RBkVKmnHI2CPenormifWJGzZZ25kC0XHM+dyvPU8ErokRYIf2fJ/UCM6jz
CgPrHSwqiY0wuSyftVmBHUt9u1ZTH7HeFY9PEqEaYSRRNLG3CvAtilEx1Bs5
jSVlk+CIcz847gyeluRimKz5jIuGThLpRz4ZuChnT7+xilPbZtkPnBVAdgfw
GOavklQBodv1ivoSuOFU2dYyJrWX6Cnmr1d1ofGeamPPdVG/0Wc73Fpi7n4D
ogJ9fFQjv0mMgNcOO2ejbT1TCpMD+TK7BgAjXTozHLPY15XynDgqV1D47L8h
qQOdB+uq2dRBk8CzI4098k/7J+fnEjZ1ef7Ty8PLk/NBFj9/+/rwx8PTl4ff
vTwZaI1oen18Ag24KLgaCjDC+cPKu1IoqQZ+bGOfylv5/HkKNCHJhugfpfYf
prioUxgIXrzmdsWUfiWJ+GdYVl5HkbRfmOWFE29INCal0hqm09YtiudMqXc4
lMFU8yO7gMm/x5YjKRgvgStBiJGi7TPh2UzNHe6D/GGF1cBdLqt1wWYiCrnq
TXBmpk9H9V87a+TIdObZ/TOnW3MkaTjUNObTd2+IAnXuxMTApXJPJPBt6gGN
mYMOLqAUdlYS2xKNV1Afvk6uFLBAXbJ65lCskvgVt4mC2dd07M6+oBhiCk1E
fZRmSdR2OC0fjiZPsjOQzGs0jZXvkZOkipO8DMIqmFobdRkcGWyjHbdkd/OR
yGuq0bHWbM6tcJsOHbnFPiVhYXlzzXkKYPZ2FEkMBX1w3H7ruvkIBmJik3FY
b0aslyRzRA5DzN9dM9uejSNiM5usNoIqFcZUa4h4gP1lWtmJNL2dkcyIxfB8
ctOjDXRR6jri74yKBUFMImHRwsnm1GeDbHeAHMtrZ9tGjh67z4fPO6vLsgu8
2hw3mosKx4lrG8LZGsRRMrZY9ipJO2PCkdB0gUwrKWi8HOMzuYhB1wvvQZLs
zfWj9VM35g0chMSwwWAlmkK7E6ZaJdwlyq9v1HDRfpsMNW5Dkiu23xjmcuNC
NeLMb9OW1YoxMCBBTUnme/faXDWat2JyFmJudHKxXDK06eXGnLS0SiJ9tYYs
8W8myUGsMqDQIj+OxsvhQz/LniNODHAusQd76i2X7Qc8NDwF9T47YS045xZI
dIFSt/ReHN6XHps5EZfmAN8q4EX4HiGOzOS12c0kqSqnzVVGsJMrks+uuvOg
AjIKVX391ETFbpPu1+YDFo6ZkGS5KINexbEhEnedNxY3J04eF2MTqksQZ616
MkgKDBNzJOwUi6R34s/i3VBDNmPWMEb5b2lvKJWXibIKmuwkxbjrJrK2ewjb
8Jay6cGZljUsEg7aHzI/oMIFYvVnxIoDMpU5P7l8e/66uwkd1zlX289OKAtE
/NXnwy/8R19+Rcml8Ox4Muazjx9dyhw/6bAbZVvPlbzD1sBuc9BXrOYWhQIz
9VRNg+tnAbdPtzNaI0zmqX/GCwBiq+zhq9PXZ+ejH0/OL+DXi1eHl0c/sKbM
k/oFZzvTe0BXUxSZjoKhOC+ZHtLjYng9HNg1Ia/AIjMm9MEuSXv7hHKaRXnI
N3+Lid+ir4d7vHmON8HwVFR/hCnXliuYZo4P9y02wcTng8N3QNpwsS6yw5h1
YHJuljM7KRwRmJht7aTWwC+DYr7XFEN8giClaY7qAAylRwxWK1OfZGbo8fSk
bL/irYNhQRfsaPJKPFAYU3vvEvWl4mS1kepUU+uRXguIsniriOOKz605BT7x
GqMMJXsXk9/ZXRs9u2XpUoyBeOkGrLuWJHPO8EKdezvYzmpwTUmfpzhPpqjh
l/HTVvlRO/d0mi9JaYiuWYQ+k1VttOMcBoceo71zuLTO4iQuiPlGQu2YJwEG
ZTccovdJCkS6Ff5K24qZsRsNMVQfxCPl5cGGtSorALt0oB9SEmZyU3iXfftA
nyrnLg5MU4oB4XStktMYu7uA7t5lv009qUDC9WOLRxa2fg2tX2ZPs4vIiO99
0cR0f8HfKYC/Am6RIniy7LygIkJEWVCiUjkU7hDzLYx6bbYyVGFhyqw1ShVK
b8mxk6uZosWZ2AW5kC4VeW2G/5V6SGmhvJCsukfpwXifZhxS/eNceRY2vWZX
unKJTINccoTwSNISl6a4Y41i1rDXVUxyyMcplLgIJDp2W3Vdgi2UrVttMZ/f
/VV4P8b4nNtUuDTdMs7h42RL7myuLC1s1YQDXhQTQHaS3rtFRIovK54k6maB
zrU+hpdtGf3zk50SPRKMC8Ij8PSkA9bAXA14diW6oKIshnHanJdE7SaqEmEf
OxZeZMK+XrrTTCSMf1VrTsc/K1exNltZBMm/RJkBHOuQMSSjx6KLFoF5QRUf
ZRNFUTaT9aAfMK7RJpqVieekKSY9OXaAWl3cscM5xuBf2wL1aez5TLTC9VLz
XoeiqbmLleqFamNpyGXIqxZV7vWKQ1OHi2AG+qUzWle49+jMhyfZl+dIkis7
Zq6EfvukjFF5X7QMCJMpWYR8Qju9ZqTe5xXoZUJa2xTXlCPLFxGj5EK1JPN/
YBooP3PJJRzSQocaYjbPu4R/6pripYSLmxCRadm0/CkJxZu+18DiFFeJbo6P
b4kMVruSLK+MZXcukNF5413qs1eoNGcp0rZ45R3DmTPBnEf3lSSYiQ2dahOS
IwESGs6Yra7FweHcirs+S+2qydX1v8fV+pJydbAe23jZcvUMpMjWaUssCqZG
JBldKe+C62UR1IrqM+yiDKWJ+cnbRVIV4v1tJG0x+u8Uk7XJPBgSM1mqQ3AN
v+3ALzu3gEId5/xgsux3QrKYSpkLzf+sMxcFPDMnLBn4XQvlHslGAgvGpKFN
XCsVzoPFSeOjnTi7JaoGxRqif/H+5iiwVO7xdo/gTbSGOSumvaLtUDqj98Kz
LC6JbfAGC6odJk4NsbnsXVL7iJLhBSZfHY/R3ENOatKxz3Vu1BkhRMxL630S
hJip2dsLA2+IxUL/txvV+tj86gC/VBiZhIhyTO6IRn5OFB6P+cgoFEHqKc9i
nygUPr1PB1XzaBNXgSRYw8vcdhPbvOwpuIznR3pmubT+OJIuSWWLNwZxE691
pyVtnF55daViZWmUYiVoQFK4WOSYBttROD9nk9bM86RDyyekIUSmIFajJbNz
peS55SQ72EpxqJTlIF15inTwG2IRAMe35Zx5hJ5SlwM++vhCkQ+F+ICV7GXh
xwquXr18js/3Rk4ByzsuQZUoi4LNvJSaxHYpK2uGSVPkOXa6YIFBYsyik+A0
qEiCQkq8GLO2zlwTYrHELNuymjw2ZsF4klXOa9OicKe2Q9K83Xm1SVtJPqdq
AukAgMcGmOa1KKpgRF5M2h09TgrlGLjJulHEY94MLCQRbHKp4E4RN7jTyMFg
JegESDesSk1wF4SrmY8R33fRBESG7i0qWzIemsAkAk6eFok6GDoEvKYFHSrA
ishvigWliFVeUqSXwaLiulSLgtv5vI1NU2L9M7HGpBElutyOW4RbYOXfGDmH
NFxULSKfowevQj2H93CBlelqQ4mo1N1tQK5+tXdq8QbdprxarwqntXN1LCkE
R9VW2mVRbHC8HngVAy6jBo4smM+10os0FBYN3XLZkd1rubwqHz0uyJnCOc8d
3LKRyKfeJIxxU89v1HMsnTRWsQnuFibPKMcGbvDk7VGLJHV04EOUsNtyteaA
NY+CRb8hZFo48hxdeeGXJUFSMPltqnjGumnaMNW0mXofugzqCE4ycUmGl9+I
AJLgTmV/mUvDDSQYz6+ID+fsahxgtF7hZLrb44yjs871SjwvS+aCKq3/OS4S
l/j1FcwchTCTULqMYlJl78ZkXGFy3vF4QTtiI25dyg96xyuSuhlR4pEYXJrQ
44BQKetccKsqQC5WtPMufP6qtKaiGI9Ta8RS7BabVCt7S5i31R6cu4/oqGk+
kBz1/SMUc09CVh+MsYXMs9sd8Rcoeg/6IJDoa1ERQ+GL7vhOnREdqLwOjT9h
XROpHTp0kPQ63IyL6vUMSgUvvY8QdRZ4qk6P4gupoYxb8qd5SZlzrJLzjYwS
3WzqL+0sRaYdem4KTYa+PmtTvRnddXGOxDOThM6c5n2uNdud7ZmL97VSfL2w
aj7xROzfa7qm2IrCVTu+d1Qexng09zqNOdp+j9QQVqic71DBXfSRCt+HAB9/
J4zbvLluM6/rQSJEeQaxJwL4cMuNL5YG4cZc7aF6tU9Z882gSgzgWnZuxVV2
JYciypqYK9zXAGfOGP2BUcVCGVOXdy5AvakjjnpYFc+QJGDrEPIpkMksneVI
/HXfAp2MzL12o2OhR4gQGSpSlCZ2uWiQbqJ1vViCd2TPlG/o1qieJjy5P0KL
IKDzkis96b3Wolyreul8vDhqfhhBeFW3aCS9gjhfpSoqk2AzZbh754IaGZHU
5RzCdLDarRS7xeNOmTvRD2Zt7aEOk5zI5PAL0Vzb6RvOmvfUXzPvtPCO9LMn
XTrUdViQxfGQrc9hHakr+wSIviKt2zQQ5TRoA0g1uQDGxQnw5ZVKesEg4D3y
IuXTLCjDJfifax4GhkQuBPvWpstErWKrRcesorNfctSitaVXBSHH0YrqlFCP
apciZNJTWhvLEJKGzWYrl4+ddysAuMCwHqHBJO5y8A1XLvQ1GG8RDzCEMi8l
Uptgr1v0iFGzL5d0ZUW+p4f+3ogRgKoH8CC+p7obG++dXgkFSdlnmbsoS9TR
pqeusLcsdRP8G7kvk2PmjLXRUXtKnKgVy54CqJhAO1T+a1ZkSsD6qwpgCRcf
XQRRxPfeAtcRozvDRH74OVp3WMDF/fTlwIkG+YYRxg/Mj93zRYE+52W7cN6Z
yaYQ9vgMJbYgUHfvKVfhm5jMw15ZarLDi7TvEh9ab30WhwatyU3UwpIF8kg0
ZMMZZzVU6G0yDRiy5I05wJDgmEN3GA1RewN22WpV2LramUJncw8ohyhNtChs
g0wS0THNspwbW1/ikhEtPBwQ19i8C+aa9LLAsMBtdUazQ7nS8K9+sHQzPB3n
k4oQi9PCzUt82ZjA2oQ8A9a7vqaMN0ffjXRSR4cvX7oNnsfGRFkaXgJE4WYl
M1GzsIR/9RIgZltjCqR8qpAgKSwtlajJsw85nA2gIdKd8F9OyhNRvU2NU5Io
2DVjy41c5RPKyLPVas2xORQQI85KwbqH78UQJ6mUcR3AjojZEWd2Z1RzfGB4
mTTwwNoGyagZLAnrqn9XBqI3iqbl3SPYubi6Y5txfTUvr8XXJldLJhc4dyEy
KW/vqvGsqSsgOvO7ZH1CB6zJhZRXuQviCJ0AoZIBwb+WQcwW6xYhZepjUmQt
5C5OU6VoR14WoyrUduNW0qehSUTCeGO8L4LoqU3xxVNT/CLHrHyieNHqV+L6
Bwii8EmuUUXA9MVPf1VLS5m2xmjylPw+TQ6c20EUY6VIImHGlJ4eJ2nYcS0M
xFnGB/m4B9hjkvZE8vDUsuh8wgZ07IKZabQoYWdDmhZdMAND0/maIlzVYC8y
qvVOImZeNYDY73gOoEJ1HSiGtljkaDfDItS+ijud15Vc3CgInSZWR6qB9ZLq
1ZIbmjFk++hIr7rCb8smBAh618tQoEmvP+6E6mFe6SZe8OrOg+7Zp8hHgsEo
KdVb99vWyFVmLqL4GIP01C4tTOkGQRI1IXAf0a9ES/qxOu1KXB95YDI5orWw
6tQxEm4OhXCMY9asCCBikirdBsB3DY5hInVIMkFUyKa7kkh0E2MBTAbqHGFH
WjzqgWNW+jBOCWcsMflt3kS8qWp7mAPpSAq930b8ayR1OVUsWCuHN0NtMfmI
awtOAWGN7mD89eP2iRX3DSik0p+TYG0brsYcx4auVa2qDt4U9MaRiOoho+av
LQYZZKwmRSXV3dVkIP7am+PeYvlDSiSgZKoJHDQ0yLMv2z0tf5PWR3RJ/Y9V
/t5nb8+u5vX4PZ++FuHQVFO+xIWTimVe228SB9rsgCj44giIMvId6lmMYSsu
wklAk4dJzwAZSDiwKQEic6KATrs03CX5Am+ndpIksqEZdnpDIQSxL+rF1ek2
9NCy28Yd1z1QIyffITvHmbot2HlKO61bcjmLkjtG21xKwRbOWqhbMpEx2LYv
BaSmymiGz8UazcWjmCFdVyVWEWUPiWmJBjHfaSfFiOoGWaQRp0zWINCqxHZh
9KA0F65Wtc8neYnVNkgQlmhJ4WR0JhziWAEFytf/brPUURg4YKm64oOULcIU
lnHI8LJuS29DUXmVG+MhoroXeDdeQzN+vqemBvjtCqWbZhyZigTTySk+VsOM
42853ovv+LMnQbyUs/zuThyhVYeDFbaRqHlducxsYIqjoOKXiOgkBOArMQxp
Iv09YshgWiMxZ3OfM3z//XdiW1fW0WnN7RLAFd0hAhKem1TjWgsGJgN9aCE/
buE4tR8LDR5O+ooGeBNlIsO7czQWXtTzelFXdu4wlq2AFlYZtof9CciHqF6U
UjqFawZoeQAUyNj1UAYMkjd2+Kr+C5rp3KVeRWGd1bDkZ7Ors5GFq2+lXk9x
UkZaMllXGAqt4SZE9UL1n3hvXLw3RjSPi9PpMFS5i8DUlDlSJ/oIPxhcOikU
l17xzTv0/QGVlUwJdIHJNEO6Cq+nYqODYpUk+o6u89CdMvMbDuZWlEyctFO7
91ciyc3Ug7JwEljTNwt3czwrxu9bH0wSQVjYQMBA86mxXpKGxSrnEVrHKxcC
jTyAa0J6O1x6eoaION9ZBKbKL/Baq+jM9SDCfB2HW3g/XtHAsCMvow5mcE8q
/TQ7xjl8h4fcn4oxTVbYbfCLBZSexIi+h+25EbPeb6NXnRyJptvNPzRglCpQ
2Q6/DY/9jUVsAszk5Ek86oYxtkzIL7MvceHGx7/87V91psnp/TY/7jnYX7If
Tg6PT85Dz9lwOHz4495DCvN40GOeB+GCg86AD3ysJw48B9LxLDvInvvGD3ys
ffiM1gfZF6bxgx5zH4ajOMie+cbR41f9jz/XPhiN4NSihT/w8a93LlQ51MIj
D/jAx7/ePBDlHOAow2jABz7+teaRMfoULcFuWKl9/Lr/8RdRlE6PhKIBO73I
mmopUCZGg3ytwzIKhhQaXlrT1WfogDGeqZzNJbJUa0JzIAM9EKPXRr4XkNTk
b23hEwS7SOLK2zSreK/gxWoAU4PWeixGtdufkOjK8wpx2SQZjIie7fNkcye2
dCMuoG6JMnrnYxApUNvC7PxSue88mgLL2DJrZu5YyhCNc95ct5TYhonkfUTH
w8gvdhTsJOv5SZO13tMjVuqVjGqIUDb1yA05fO3AQmd/Q9YvHmQYIDW6+On1
0f6GhgFD9Y5uhsYsXQfdFlFD8zOuRwkm39CQGl+PIoS9rWGEleM5siRJGb8R
Xe4+2/+qfzGEU9uNq4mH/tdn/wYtn314vldMv/p6W8Ndbvj5F9N8e8M9bvjl
F18/z6c9Df1iWlrIcOjRXdTwgWAW4aYN18EjKJGF4E4ZWB9hEyn5UraZzdX/
XPAVi/hauoAzRVY+WnQKIEvtnAm7ZNxBDrHcB8kQwtkryKmjH7mseVc7h3YH
U14u4r+D6licR2MmmoUKbzEtPwPenAW4YAgO+x89C7dF3IzCLFWE4e5Y7g+f
CsSpHOG7p8wVgoOvOTCdAnwo8AyTzuYL9qZBJXMbaWXJF7JdL7zb6IqrRLOh
3yd33IwFYS8/fnSwVWsMbln54EOKxk10TGwtxZQIpBRCPRaWnlCX+wQtPhiz
dpAqjF6//3RAV1TajMgpRlFI783CNmI/6uDH0Ia2CGBpCmjxw3R37/l+fjVO
2tDJt4wetiAQwISMmBBl7W5s04ctu1i1iyj72qQ4sjPn3U+Z895/jznvfcqc
n29s85895/th9T58TDhwKzqGFoSNycZ+pIpIUR8enR8937M3sJff7Qh8W+fP
X/Rsne6K2a0voxcbyHUiR5kD9+LOswfOKhFMHvAFixCZJapbv+CfSDyIDhFV
wTti55eDo0PIvuNnR0FVTMdm8Hak+re6POVfTd+cMUHUUM77YFwVIa6DPHK9
uXvKxZgNDWJdtIZPRy7cXs9tOvORBPtSmdGKBa5ruKBRRK19KsmOg0vWKul9
VWPSz2cf9nYnxVfeC1/LtRt4J9tJLIzQtsCUdyjN4f/PzP+3Zea/3Nrwv4yZ
J5jra/ifx3rHIBsjikN6lOIJxO7HxVb17t+js/hV9Ca/gprxV5nF361k/BV0
jL+CivFX0DD+CgrGX+VE/m714q8yi79bufhgpbs8Tuah/MMGbX//07/xm83W
jW1Pf5fiZjWpeNRjTCrnKBS1xZNozIeO192VzWt/yJv7jFH3WbNSq4nF1sY0
qqi6FxUToia/Oo0rFOqBYbqqluRcKBrs06mi4mgwsp4uWAka5t2niY3MtqgH
kJzl6FcWp3oji7bYIsX+KPxgMMmKk4Ek0yD/F3XalbgYtDqTU9Y6JMAY18tC
s2Q7TQRIH6gPcqhbIUP6VqI/oiQ4mCOjxytAHYiP0Ni6UbK598cyengIPfqF
+Po8sMdx04yKenqAueraYiO4ckMGiD49gWkoP+MG+ZmDXg4lbbiRi/sv59/M
Ykix1sNJ/Sr6S8psi2JNxExJ4PC7sim64pa5kJp4Lt9oLtD+MX1BkJEiyQvh
MxG7xLs1dkkZuiPUWS5Xa45Sy/sEP6kxcbUu56tI1Eom16DXFA0coju3im+u
K771+J35A+sR31yP+JYl4tu65W90Ob6qlwkfR3eQdGf+z9FVqFoi5rOMsuK/
h65is6qiB4RSlQUJ3/zKSyEcvSF5rC59emX0VLvK23Lsg+UXXB8C3bgSRxwG
MYz0wkTXmh0qdW/lpF0vXvy4J2EaZ69f/kRRGiSnn4fsyD70VwwKpmB2XEza
FtJ+EmJTbVCpJoUNOQXI3EDqOB/txZmyKX0yR6X2thbXXau98E5qKFuyY6UE
Kzu58Fq9jI0o3udI72A7Lqq8KWvNbM3BX9RYww2JrcDYYe6CE2li7zVaTL37
qIT/w05LFEA9nzhP+ddRKx9HQnlqaN4YpE/f8VglNyJVFH1pyps09XxO7mf8
gY9MlEjtdNfcQ3YNTUnKXfRumevdsc6eUJYnUputxTVQ7dFO0xAmiXTIpy04
38YbxfRANshpWEud+QSHZJOSqItqotuW9WybS7eN4ld81bhWkwq3IYTUH2Yw
7WsuK6pt4TRLall5j7vxnYYf+ADMaS11FtFoxeZ7YWiZFqKtyHJ3nMOedJWh
hsxKKpSdUdRAGM27vAthQCLA0T7Emib5OjV2kINP1e8Ps7EgrmHsgnN7SmmA
7p76mOdEZZiLJy362PusfiXGD4U/Ccr4uxenrw9fnv6vE/fYei/oU8QduH7h
H85evvzu8OhPWdRWn1I2h9NVFkDBWyg5plLI7fZgaUx5mLNXcBzMAXfWEm4B
SxvVaLgcx16tklV7JfEY8o0P4+KAdJMeLdkSrPwTL9zmxkJupjM4H4MGE2OZ
cF9qnjl660I8ICYs+HG8PEv3Fp9gHnNMv8EhV2wKpRRa1p/aJ4iZrhtajpnO
Bdx1Svpp47AxUKmoMGaNdyHya15XOF1GEtzR29c4lRhI+Bnn4pMgEbsJJsNx
bt1eg5QXAj6cDaeheIg8OQzJLBz2G92c6zVHThja1ptzwS8s8b4VQqTIBKMY
qUhYnW7h/SuMMM0q8mnS5JnW1UhAswhVmcKBnxybSBiNui2wJG3u0ojWHmZC
AmxJtgYpV/t/c/jTy7PD4xFgz9HR2euL04vLk9eXQ2Kju8kqCi2khYPKHB7X
UdTIkyRwOna7hx37/bgppllbr5tx8e1nl/Xisz9cUF2LWwqS+0yd8tG32ReD
VMSn8W/EgpMPFV/TMaFHom9lVXF44LLEWBFMWTsBNM6zfRrFt3QyWIQOzF5r
bBKHXyAytubH85M3h6fn6OYciE9wllby4xMOKCgMAwugU3SaRa1zypyn6y8Y
SZhgnoCF5SrytYWOKQOQ+rwTptTUPYwbfv8Uz+EPolqQjEvvmMlD3lZenPtw
5rjJG6Gv0uzVejUnH7ZtPbxSR7e+PqAdq4yOeOM4BDEhu84mYeEQz9Q1JVAW
VDoF3C+hNCEzeX/oACEHZkaYLTKX1zPrPsIkifO6pDAFzUeDaTFxoGVIfqup
UlPQQx6gqCjGzAzYO0Ofm8JJJYGxTFZqtpKPDArNkyjSwfOcUUS0+9RB7Z3j
9GIulKnI4/QD02w7klFXy57I3jKtsGA0FZ4waXXahy2B06UpEtf8TxpwmrC4
gc9aaVUizcpJoUcrjjnCvEXoiWVq0lG0dHz/OfboTpGAs0ggUOhSZ4N8sgbe
YGBXZ2GOcYpN+I9sDG0hpdHQpHoSXgYgXTFEaFaCJECu7t/oUMfCc0kUKe0M
YZS4i3coT3DoMF/3BM0/AjT/6A/ZsfjKUTLI3MfwBE4XNaWwl9eYiYQIlgZ5
kBB8fCFy8OHR5emPJ3FUJK88ZPGNv2AsHX3BCCFCtMRnSDVFTMATIZe4w4s3
h+fJDASc7x04SUpvoUWzVvf2QckPfbrngn2ajXJ8YJNfhEwdkpkDmTa+p3Fh
QNgGTwp6D+0FxfG283yRTYg7LbEq+WQtlajGrMkjx2MM4/1wF59r5JioSWUo
zXRra2Dzko/Ojk9ffy9JKE6OGY8jaTWXtuVbU660PGzLyeha4gnYp7r1mHBF
fhRY0JtdLxMIiDtyCPct3UXeX+nHE5ctPZ1p5tJoD1BCA7EFEwv0h2Iza0uZ
Q5brFuubObQsaBJ+1RpuGTicHtLh8oOIE4YkAYF9FWp/ebOCL+Mrsgvz0i80
3TrdZq3vtee4fB6TuCiNU6Rs0tqRUuGNX3LA+/7AadUyRmCcvp1a8KyoauC+
dEtlb7mgYSizMNVsi2VU3Y1UbEFeQxSyKEM4Kue9ER2c43ujkEU5ZrxyTl/I
rhcftOIW9uJbpl3Y7k0v6pSaFL+bcNkGOM/O0jknS7gH5ycnx6OLs5dnr85e
O9GWsILlK0LuHkPJnlGdY59ELJfS0WE/ChdqEwSnY6Y/PqGvp4mUS8zkfE40
73CsHyib8UdNf8Dngms0ByAVK2sj6zDjpo7JLTRrsRI2YigiPGkFr709/Pxf
pVbOvz1xmLhA0vdNig/ZM5uEJTczQdUWFXajrcIVi9bQsTFQdV+G6HUiNvAp
B2zImMRLE4mg0Qvv4fsgC06fnvqX6A5tdkXq6Mr/jhHxB/XGkpJpo0EuGTHY
vdK6P9s+jz7kj2G95ut9GP9BH+rHWO8QNd5bPMviqfI1O8h6yUz84a+4qz2u
xfdP9b/DrvbyWw8ZcfcfPuLeP3zE5//wEfd/nRGBZz29/OlBI37+Dx/xi19n
RGbMHzTil/9JI27CN5acJyP+rTinx5wpdLnHax6YQ6CyljFklo9Zu+C8I9pk
bKUVjfJ+xGm4SQ4CQk4oygUTpa/iamTvkALnkghAOHvDlQ7cQ/PVeA6CxYwX
rOMecL5l9UAKIVgr8vSRhPVU+atbGe0K+XdJF6i5VqJc4IZ7piKUd0HuDowB
pasgy15tW0WikOE5OA9TMIw4FRTLSpNF7QpbJpqWZk4uNyQRUAbDta++hpCA
a5c4UiqbyPx2N3mjN5/iqNgMWSOcRe+0QtpXndWzeFY4I/ZjJ0+g+8Y1Tlbm
CFw3+yk3xxzXwMnpZ4eVz1Wylspnmu1H+L2uGKAcuzilBK+LJH8XBs3OivlS
NUIIeXNff0x78fYjBop2DKKiuvY7z3yzXS7Jsd/V+hAYkwmDMjd3EvNHTjV+
QrJ9/rjakKDfPbNpVlXuj4A1AUHzMcAbfewkOUjcQSfdmbGh63wZyDkEWUro
TSIrCT3kGdItEwjjCOyQXcUl0q6mI9LEx5EumNTd2FpqQiD4BSMZF0qBQyzy
Br4CWcHrcWkPjEUzbAlvgwA4LZ7kbJ/K05jTseqZpH3NzkKJt0NJvQnLIlcU
lKvbflXLabVB/N7NjPgtNXIXyxlZgDH/XV1JNd7jC3IveCfFowVibcE5quzB
ihSRgjTRmNecqz8XZzAc0O5wUCdbvah/sSg6O86r4wv89PuTy8PLy/NBdqG/
HEEXlyfcx/nJq7MfsZPHL9ZVdSf624GX2rg+cTWRPF1NMXxCA6pSXrLWlZRG
kkNvEJk0+XQlWYRBqLsN7WmCND09nLAdmHGK8glwiasaU7pyCkMeesBq5Qm6
FvIu0A7wQiTBI1Us0ZJTjkkDvj58+fLsiFZ99PLsNf5z9uanQXZ8El7ARr34
YZC9eUv/4ACjNy/fXgzc+cnFJRBYfHpx+CP/e3LyJx734uR/vj15fXQSKiLj
mFYT1AdbZ1Mu6Z3Pgbo2rMGW84FPj2nLUHWAGDxk8lEPCwYIX4CmWVdjTYQa
wOaPVGwHrhzL1bgfmBTeLBlGelM0s3zZ+uH4xoUmf+TMWe42v8M2qKJqb/Ml
HXaa6EouvyDVtKfR928Pz4+H7rxAjOHhIdCXcb5kbxMpd/DnFxc01j9FSrUs
uZNWEXbOeIFtVaZ0m5O86Yw3RjjBfS3HqnaJuGiAd8LgEO2b4k662MHT3fHV
2G3uvShTKVf4poD0qVbAiKqgD3eTksPZYzqnxJ/aTouN+MBjZh8mjfjTPf0t
cJDERa1kPX3VydNUytlvn8r3pPf2lWvhC/j/ERWv5fffZrvfbG96cXqsTfe2
N8WBtenzb5y0Lar1IovOh5VvPyc+8XZN+zzDb3snPtj+Ic6350N4vP1Dmn33
Q0xMzc0/8pLEjNADcdGKOMQuhMvNm6mE3X1jm3HitrgZP4uaSWTfvm0mz6J2
AC48Md5jD8eENTxQ8ASXOSY0jEa+qid3v/8D9SirXVcI0Xax0CtghPHscc+Z
Ap/Mf+OfT+INIZaxe8gHfUEXPZsrcxS1XbRouIL5er7q7QmzLtv1yOlF86AI
zJ+TbTx6e35+8vpy9OKHA+YI4927qut5z3DZvG5y2IPxPC8X8UGbY+l8wC9p
E3s+Kmt0wNtPP+LHPe31nHoH4Ze6JVagTepPkzgr2FcQ75+Pz7HGtxjT05lH
xWIBM3bLZfskoxGchLYWLoxfhkKmgiwl3XQG917dYSb7ePFGs5y+00yfKDew
56GhbqF8ga/VHVEB+FPWuW4xNzLeQs5NHLl/dOoImNlSqS4u9UoMoruNM2b6
Et+cKZtd8utKbX9tMAkS2fG1D8iGQvWjUR5D18impFpu5F9GqU87Z8ZpD9X+
m+7Uk0w2iHifSiss95XTADCdY8CsroIZ91D1HS3VYt1h+l3WXABG3LxN2QN+
H2qaRzTx6dOneonNF/pGfxLsOwVBxGJf2zTBwNTUYGDbNMHC1NRiYduWc2aP
aIn70pbzOwqCxcYf9bfoIoZ1yS30D/D2nWsiaZAsiL348qsv9oC9YPBqk4Ft
gEn05uPHg2hfZU/jr3+O0Sy+CsufFCN9FCGhatripuyH72iT1m3aalQvgTfY
963ozz7MFE9KNiV6+EirV/ut85fIM4LBc0Zv9cTX8USfVyuQ4eaGTMOSl1Nq
HVE7re4stsCWnWO2lhIx+hWu3VcDk7vw/HUunDXVuCPTrRQPMqAzNHs+sFDN
ooqBXedticy0xuVosHEoMG8zEavyg4V5wiicVX5MCZUn39jpBINlGwu+jOgB
v2AxIa19IiZTsqlHB1omZd08H/358AunTDRDOWCVkxsiCz5DeLK0YE8lPwzo
ERWqLK7DrJzWliJkzY9lMVF1jC11ix0XzVwuRWEk1Suw4PGyKVFOIzSZMdBn
7NhA+F/uwdC9LN8Xt6Wmv+BZGK2BcYjDcfgPrBEJlA/nQ/2buvaCa4VYv0W6
lF0EumTlpR2cQhvh3BGqtqrx3f5WPOtbdZHtGqD+i/3RKqDFOdzldiQZ4osu
cuz7AuHxU7/BUdAFeV582ih/yzdVvYK7Ny8pS37nO0BnGAK7b79b1ascvl63
dyN896Bv8uvrpsBqHjpHON/46y00I5yRJRz+KSLJ71AHF+4p5axqPT5oPV6j
0nSApd6js5EPfIrGiCKd7Bt1UdCyzJ6r4GiJfdfHm8k7UqB7/SDrLs3NQCTr
twh1aVjsblXMUaccRzoz7+Ql5pUShwDtaDmmah1ZRrvCgxlV05a+0r3Avkg3
CZ09OlcYfpRJoTzaVzShCAkwRSLz1apYLNUbclIzJn/kIbS/j9scI5+rIinK
ZQrJZmntCgp5EtbW9+7Ue16KxdwUXOKNa0LIMryLvXHM4rpkWKO4rNHZc9NF
sdSX508uQ3qExoOQ4S8MilFeqJycrueYTIhqDbI5gQjSGMsVCfbru23qXx6G
oqcgEVdSs8KUxXHnb45YAz285xpqt+16QUYOrH2DARA7q6ZcOmzR+nIAtMNZ
6JoCqpS4PR8+76iImBx4tGAkHHaNNe7ULXo0TvjmVnlV698t+fYvyklFdBBm
8teiqTMgl032L3m1zps7tzvIdr/+8ll2VNfNROSDtxWeV5vPs0tc5eO3l0dP
YpodplWl0F9KQS5OQoW7XIp7nvja+YyyWAMTRIJu3TtKQhOqSUUOk1hGieyW
tzBftiK2BaUvZBqpzpk5sWiBAUHoyaUQiDiKY/Ub57zkJXLnxeXh5QXd8fF6
QVlwbtCyRMZDLvyHlj/iyWh5eBQmrKtVkygKxrd11KnBIYqGKKAP+SzyMdNC
LSxGaNgA1ibVLaL2QPJtjUDEgxIogQtfhYY0QSl50Fcqk9k8QB0anBe2u8Gy
bCRct6F0APVTt+y1iWiFzrfA0pXOy3nMSkSIPEhxMX7fwmQk33fIJa5m0gRy
CdcU/u4jq6OpFejm8CeFDHaa9mBxpAgPaUfY/n6SPh8pSHbaJiokbDuvAVPc
T+njjQq0PnqO1F73KCRhLskyzJZirRQp2TISjhcgoKLoCQq2qXp4bQ0TFnMw
XHKOZc1Rmrge2J03DtI8NqU3IJJE4YVoVfPRGv4cRKiRzXasqBdgxaMoBdmS
YXEj0zDQ4tVkIR46eyiWPOkUaQACc6ZuEkRL08abbzQ+lPF0XM914oQBmnUx
CKfZY493qBThFWP5rLSUjSbX1jhZKjndUVS5h5RTfjSrV4+0AD35zrJ8tzL2
5gpVW2ixuSp83cKgBKuroZUVSHrYLihwk3tVMszwtg/UymjrhylmtPUm3Qzc
Zdy6tHnv1d/Qtv/6pyoSaRypSRK8Et9lWaV59hC+X7bcMv38CHFAn47IL8qq
h8x3mtQwUQ75zyK9UCIyZWXJqXC/uacRMXx96h4/jCxI/8bVvEPObQnUcJDo
YVin4gNzfXD5FSC5QkML4TtnqGgXpbXKUGPSy/4QAqxvjD48NED2vqpvgTe9
9j5LeLcN0efSS1SNkEdcUxwABSZx8T5EQlhv6ZbqjzJmIbQRdM58mWXiXOAV
+A9AUVbh6leCaEcKu1voeHX4k09eoPWi+1TeD1Ror5d15XojXJzjANOQcWFa
5KEOd6+uvaDKulz8LV+5tUT0+7rb1nEM1iH1wO4EwQVVFtMDUV5tWJawtXQX
SmUWn7JIAeAnlMH5AsccQpGHwkETEmwEtyoY4VH4Wq0iHRkV27YcVj47lVSh
Uq8oLX0pjv0cfsGSGcZpkHuA2L6UbSPeENjlJcc8uGZd7RApW+QVwMaC406A
157mY81PvmrqueixUJpeFCRslq1r19fXJIulFa5zG0TGOIXBUCaD696hdaN8
Wc/Xaj8H+NvNXn2HOIc2THdvNYN2sxrTBHGGBTUVsH8bfvOUJRyUaHafpbcM
S+nxe4A+rz8N2z+wtEZVpUpNskhV6khi4SAUTSBPh8rHKBPmv+T7smLlCQmi
TEaGigQHWfSM2g0MCTENUTsZv1CR2GpyTUQtHB2AK5ZoV97HMe8Du9OGqYaq
pT1id1ft44w22NcfD1spta5MuSwOPDJkzrNtxHpy4TzAxL6IX+u8J5gv+ZlT
EcmJePkxXwqC9YqAtFzZyniGrbRsUdcmFXO9SgmcLwAS8aDilOJ14MrxokyV
sL2sbe1YHBOBR42N9wo8/vteqYOMGfzXvOG/R3wsxo4UfSH43n9Bf3e/uUe6
0Fml0oU8f6TeyyEAPLmT07ycD/pmbXhu5yvFI/aJ9fA9FlQb6lYwbmzRQECW
TL9bACtU0UZaqAMQf2ksFlrv3BFfrgYOH/Hp1+XhVgzNvdYYCjGval1DUE7o
FRp0yETP1jieJGkUiDyi8iZx2gXyR45EqJbpPWFD+PJsXnI0WELZN2xmJpvp
DCA9dDtN72hNy21xZIM+sOhkoNaaqQEzGOVAl9Tu4SZl+5e61LSOsGYYqchJ
oUX0C51Oh/e4i9nweL6c5knwYwgiL2cAU79Foo6LBeAvkgq9G5wCpou0Kujq
XnBJ1Nuco9u5JG/sViGFZhnwerzLUqg3qnJvMP9I9eJxV5BRipYZx/J9Pvwi
i+1nT5JVMeybeQipJ/jUjaVbYJw7xHkUi3O2zqjRaArsTUPFIAvvw01YFGAu
3NEHHR2p0P5Lzwl1lQaw7z+kyMrGB+UQfMNBsV4wPagv/5EHRVOggxrO2zwS
N6neDawJczhgyjXdTDxBTfimaSs7sup9p3qEIIEL/gHuMPMRwYtz3zsOBZdK
Gtr6UNLAEnZhPt252d0gstr+vdTa44s1r2ex+1XqHidtYg85Sznj6SjdtOPf
7CLVzLb+9C+tu1dd94ZOOWTKPpZnM8KiKEGSOZ82GWBNwERVQUAo3hfeGZtv
hEHg4vN7yvl6dbPu9/iyXlyye856cWWxF5dY6VKgYLiKX3TYKvZwMhHw7YiW
zt6K2WN6E5LDjIRpWPBSnsTsFyk0X7zYG705P7s8Obo8PXs9uvzpzcno1dm/
nFxestei5by8n+H9PUg01/YuAncmD57+1rf/Lfrnj0MsB11yjGjhGt99CzXf
dmVwUpOxiGuS54RysOFbhBb0RB6GZ087bK2PrTa3Tte54RDmsxEGbhGjRM/a
hLntHit9ZJ/YrQueqwA0vdBAync4/fEI4LCcGF9VOrjL87exZyrqr57vsf6K
PpNev4k+e3H48iL+boP/aQLkfvB0uvSob6EJ8on727nZ6/DtgoL2HmklNXH/
Q5V6m7Hjkk+aW7Rlo9GEi3yJ5nyqfazaATK+qZIha63jDOkErlHaFJUEdkpC
FmngtAtOKEYujlesNIIbzq5tnHWQ6YZWE/eUcm/4+fDz1LNes0C2ohlpKVMB
po9CPUpAdgxtnA/GD2RVTH2UlMMLU5IvScK4I2tK9rlE1GmCkltScnaAkPbA
uR2M2yiCu9VnyO2D8IFpOJf1vBxjnXBij9UGeXj0ktLYLDGSosUQJQQB9P7F
kvDMRYPAMOztW9buu2xwW8iRrWfW2G9Z0ZUwvcGmtCu7tpzC+WyRbuUT1HWp
QE5lTHhHky5yCp68JPzxt/RtvBtooqjLaEQh1pbzG15VnniTMV+2mLQ7phVx
CyjFH3034pCJ0eHrn5jMpkcdcoIWcXMy65OO1Iu9KHuBPIOqyVTPSZ6Hbb2g
dCcScYeHJ+uGOyZA7vb2hs87IC6aexK91EpDwU8iTT5ipzO+QqgadGdXf0EN
NUcenxfXsJ3N3aNhh3ByMMf50eG+EKjDiz+NkHIxOQeCxeHT32Z7z7558EeH
f9aPdnsVDVVxu9OMc69gwK4A17SSlokRVZefi09gPxRP8ijUYmpCzE2uLG47
WtWj90Wx9Fgbrg/gt2Cuoca4ryOcSh+m7Rlfk1h334jDPBwB6vQxjjIGIBup
zVBC6EQTEQpdLrjelYCjTwEvUQYI/BR6FkrF2zXIEp2EgggIKRfov2XjJN49
ngNlVhVv/b4dFPYt9Io5xhaY39Qrwbkb14dTU1uzsTti5z4nEUyfcgD4ipkb
RR2auvGW8VYt7GSHOkklBARjCk4CGjm+GvGER7AE3reIb3kDtwvAe5QAPOld
8V4MHtL4ndyH7Rq4MGF/NRgqXuGsXtCLcEMuaVPwWawm5uCQ9DAMsjmt1Ke1
GNh85Bc/nL19eaz4So8r7Ue6Z6F0pQXhkMsHAgzEFE+RkPGKnUd6QSg6TKdT
4zVt23AyiBOWVXBO8awz85apMl46PXt1dnyyrykqLyhivcGLiYC++eDuH9JT
0I1DvhsKyeOc2h64lXwPEs9AGW7SHYNUdpyAi4di9un/q+5qm9rItfR3/Yqu
mQ+T7Nge3hIIU7emHHAS9hKSxSSTW1tbVGM34BvjprrtEO5l//vqvElHarVt
knzY4UMCtlotHUlH5xwdPc8BPfoKMiNmV5hthjYHIQ24y6WYyzjzsQU8GaCl
PqlwAAs85Cy+3qIViAdqttnyPAR2VZviuSXZ5LmZCm/spJTwJ6kwiud9KT87
MbrNH1lfCJTQUFRYqagImS4KoqRMw6c9c1ZmOVjhZM2Oyqpa3HoqnHizR1iK
S+SbLS/pSjy1dOz777Mg3cmKujsQ2R+G7A8elC6NEekiO2QnxZ2LY9E1mY9o
NceKynuBMQgqB5wpzw7H2blk8pDggn6C28dbL49eZ7w1b25svHjesXV+zefz
ynvk87K028aV9u4cQDJBkgC06PDD+/fvTs8Gh0r3UZ27Hby26kFaFzPn3qUq
bQEspcr2oDLBi2XDEO3CVOsUfDM//gIed9DTjKjd+ijeYeZn7dP6UWEhUPIN

-->

</rfc>
