<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="std" docName="draft-ietf-pim-sr-p2mp-policy-08"
     ipr="trust200902">
  <front>
    <title abbrev="SR P2MP Policy">Segment Routing Point-to-Multipoint
    Policy</title>

    <author fullname="Daniel Voyer" initials="D." surname="Voyer"
            role="editor">
      <organization>Bell Canada</organization>

      <address>
        <postal>
          <street/>

          <city>Montreal</city>

          <region/>

          <code/>

          <country>CA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>daniel.voyer@bell.ca</email>

        <uri/>
      </address>
    </author>

    <author fullname="Clarence Filsfils" initials="C." surname="Filsfils">
      <organization>Cisco Systems, Inc.</organization>

      <address>
        <postal>
          <street/>

          <city>Brussels</city>

          <region/>

          <code/>

          <country>BE</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>cfilsfil@cisco.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Rishabh Parekh" initials="R." surname="Parekh">
      <organization>Cisco Systems, Inc.</organization>

      <address>
        <postal>
          <street/>

          <city>San Jose</city>

          <region/>

          <code/>

          <country>US</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>riparekh@cisco.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Hooman Bidgoli" initials="H." surname="Bidgoli">
      <organization>Nokia</organization>

      <address>
        <postal>
          <street/>

          <city>Ottawa</city>

          <region/>

          <code/>

          <country>CA</country>
        </postal>

        <phone/>

        <facsimile/>

        <email>hooman.bidgoli@nokia.com</email>

        <uri/>
      </address>
    </author>

    <author fullname="Zhaohui Zhang" initials="Z." surname="Zhang">
      <organization>Juniper Networks</organization>

      <address>
        <email>zzhang@juniper.net</email>
      </address>
    </author>

    <date day="12" month="April" year="2024"/>

    <abstract>
      <t>This document describes an architecture for constructing
      Point-to-Multipoint (P2MP) trees to deliver Multi-point services in a
      Segment Routing domain. An SR P2MP tree is constructed by stitching a
      set of Replication segments together. An SR Point-to-Multipoint (SR
      P2MP) Policy defines a P2MP tree, and a Path Computation Element (PCE)
      computes and instantiates the tree.</t>
    </abstract>

    <note title="Requirements Language">
      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
      "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
      "OPTIONAL" in this document are to be interpreted as described in BCP 14
      <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when,
      they appear in all capitals, as shown here.</t>
    </note>
  </front>

  <middle>
    <section title="Introduction">
      <t>Multi-point service delivery can be realized with P2MP trees in a
      Segment Routing domain <xref target="RFC8402"/>. A P2MP tree spans from
      a Root node to a set of Leaf nodes via intermediate Replication nodes.
      It consists of a Replication segment <xref target="RFC9524"/> at the
      Root node, stitched to one or more Replication segments at intermediate
      Replication nodes and Leaf nodes. A Bud node <xref target="RFC9524"/>
      is a node that is both a Replication node and a Leaf node. Any mention
      of "Leaf node(s)" in this document should be read as referring to
      "Leaf or Bud node(s)".</t>

      <t>A Segment Routing P2MP policy, a variant of the SR Policy <xref
      target="RFC9256"/>, defines the Root and Leaf nodes of a P2MP tree. One
      or more Candidate Paths define optional constraints and/or optimization
      objectives for the tree. A PCE computes the tree from the Root node to
      the set of Leaf nodes via a set of Replication nodes based on an SR
      P2MP Policy. The PCE then instantiates the P2MP tree in the SR domain
      by signaling Replication segments to the Root, Replication, and Leaf
      nodes using protocols such as PCEP, BGP, or NETCONF. The Replication
      segments of a P2MP tree can be instantiated for both SR-MPLS and SRv6
      data planes.</t>
    </section>

    <section title="SR P2MP Policy">
      <t>An SR P2MP policy is a variant of an SR policy <xref
      target="RFC9256"/> and is used to instantiate SR P2MP trees.</t>

      <t>An SR P2MP Policy is identified by the tuple &lt;Root, Tree-ID&gt;,
      where:</t>

      <t><list style="symbols">
          <t>Root: The address of the Root node of the P2MP trees
          instantiated by the SR P2MP Policy.</t>

          <t>Tree-ID: An identifier that is unique in the context of the
          Root. This is an unsigned 32-bit number.</t>
        </list></t>

      <t>An SR P2MP Policy is defined by the following elements:</t>

      <t><list style="symbols">
          <t>Leaf nodes: A set of nodes that terminate the P2MP trees.</t>

          <t>Candidate Paths: See below.</t>
        </list></t>
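      <t>As a non-normative illustration, the identifiers and elements above
      can be modeled as a simple data structure. The sketch below is in
      Python; all names are illustrative and not part of the protocol:</t>

      <figure>
        <artwork><![CDATA[
```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class P2MPPolicyKey:
    """The <Root, Tree-ID> tuple identifying an SR P2MP Policy."""
    root: str      # address of the Root node
    tree_id: int   # unsigned 32-bit, unique in the context of the Root

    def __post_init__(self):
        if not 0 <= self.tree_id < 2**32:
            raise ValueError("Tree-ID must be an unsigned 32-bit number")

@dataclass
class SRP2MPPolicy:
    """An SR P2MP Policy: its key, Leaf node set, and Candidate Paths."""
    key: P2MPPolicyKey
    leaf_nodes: set = field(default_factory=set)    # nodes terminating trees
    candidate_paths: list = field(default_factory=list)

# Example policy rooted at 192.0.2.1 with Tree-ID 1000 and three leaves.
policy = SRP2MPPolicy(P2MPPolicyKey("192.0.2.1", 1000), {"R2", "R6", "R7"})
```
]]></artwork>
      </figure>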

      <t>An SR P2MP policy is provisioned on a PCE to compute and instantiate
      P2MP trees. A PCE computes the P2MP tree instances of a policy and
      instantiates Replication segments at the Root, Replication, and Leaf
      nodes of the trees. The Root and Tree-ID of the SR P2MP policy are
      mapped to the Replication-ID element of the Replication segment
      identifier <xref target="RFC9524"/>.</t>

      <t>An SR P2MP Policy has one or more Candidate paths. Each Candidate
      path has optional topological/resource constraints and/or optimization
      objectives that determine the P2MP trees computed for that Candidate
      path. The Root node selects the active Candidate path based on the
      tie-breaking rules amongst the Candidate paths specified in <xref
      target="RFC9256"/>.</t>

      <t>A Candidate path has zero or more P2MP tree instances. Instance-ID
      is the identifier of an instance of a Candidate path. This is an
      unsigned 16-bit number that is unique in the context of the SR P2MP
      policy of the Candidate path. The identifier of the Replication
      segments used to instantiate an instance is &lt;Root-ID, Tree-ID,
      Instance-ID, Node-ID&gt;. The PCE designates an Active instance of a
      Candidate Path at the Root node of the SR P2MP policy by signalling
      this state in the protocol used to instantiate the Replication segment
      of the instance.</t>
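      <t>The derivation of per-node Replication segment identifiers from the
      policy and instance identifiers can be sketched as follows
      (non-normative; names and values are illustrative):</t>

      <figure>
        <artwork><![CDATA[
```python
def replication_segment_ids(root, tree_id, instance_id, nodes):
    """Build the <Root-ID, Tree-ID, Instance-ID, Node-ID> identifier of
    the Replication segment instantiated on each node of one instance."""
    if not 0 <= instance_id < 2**16:  # Instance-ID is an unsigned 16-bit number
        raise ValueError("Instance-ID must be an unsigned 16-bit number")
    return [(root, tree_id, instance_id, node) for node in nodes]

# Identifiers for one tree instance spanning four nodes.
ids = replication_segment_ids("192.0.2.1", 1000, 1, ["R1", "R2", "R6", "R7"])
```
]]></artwork>
      </figure>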

      <t>The Tree-SID (<xref target="P2MP_Tree"/> below) is the identifier of
      a P2MP tree instance in the forwarding plane. It is instantiated in the
      forwarding plane at the Root node, intermediate Replication nodes, and
      Leaf nodes of the P2MP tree of an instance. The Tree-SID of the active
      instance of the active Candidate path SHOULD be used as the Binding SID
      of the SR P2MP policy.</t>

      <t>The Root node steers an incoming packet of a Multi-point service
      into an SR P2MP policy in one of two ways:</t>

      <t><list style="symbols">
          <t>Based on local policy-based routing at the Root node. Such a
          packet is carried by the active instance of the active Candidate
          path of the policy.</t>

          <t>Based on the Tree-SID (Binding SID) in the incoming packet.</t>
        </list></t>
    </section>

    <section anchor="P2MP_Tree" title="P2MP Tree">
      <t>A P2MP tree in an SR domain connects a Root node to a set of Leaf
      nodes via a set of intermediate Replication nodes. It consists of a
      Replication segment at the Root stitched to zero or more Replication
      segments at intermediate Replication nodes, and eventually to the
      Replication segments at Leaf nodes.</t>

      <t>The Replication SID of the Replication segment at the Root node is
      called the Tree-SID. The Tree-SID SHOULD also be the Replication-SID of
      the Replication segments at Replication and Leaf nodes. The Replication
      segments at Replication and Leaf nodes MAY have Replication-SIDs that
      are not the same as the Tree-SID.</t>

      <t>A Replication Segment MAY be shared by P2MP tree instances, e.g. for
      protection. A shared Replication Segment MAY be identified by a zero
      Root-ID address (0.0.0.0 for IPv4 and :: for IPv6) and a Replication-ID
      that is unique in the context of the node address where the Replication
      segment is instantiated. A shared Replication Segment MUST NOT be
      associated with an SR P2MP tree.</t>

      <t>For SR-MPLS, a PCE MAY decide not to instantiate Replication
      segments at the Leaf nodes of a P2MP tree if it is known a priori that
      the Multi-point services mapped to the P2MP tree can be identified
      using a context that is globally unique in the SR domain. In this case,
      Replication nodes upstream of the Leaf nodes effectively implement
      Penultimate-Hop Popping (PHP) behavior to pop the Tree-SID from a
      packet. A Multi-point service context assigned from the "Domain-wide
      Common Block" (DCB) <xref
      target="I-D.ietf-bess-mvpn-evpn-aggregation-label"/> is an example of a
      globally unique context.</t>

      <t>A packet steered into a P2MP tree instance is replicated by the
      Replication segment at the Root node to its downstream nodes. A
      replicated packet carries the Replication-SID of the Replication
      segment at the downstream node. A downstream node can be a Leaf node or
      an intermediate Replication node. In the latter case, the packet is
      replicated through further Replication segments until it reaches all
      the Leaf nodes.</t>
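      <t>The replication behaviour above can be modeled as a walk over
      per-node replication state. The following non-normative sketch (Python;
      names are illustrative) collects the Leaf nodes a packet reaches, using
      a Bud-node topology like the one in the appendix illustration:</t>

      <figure>
        <artwork><![CDATA[
```python
def deliver(tree, node, leaves=None):
    """Walk stitched Replication segments from `node`, collecting every
    Leaf the packet reaches. `tree` maps a node to its role and its
    downstream replication branches; a Bud node is both Leaf and transit."""
    if leaves is None:
        leaves = set()
    entry = tree.get(node, {"leaf": True, "downstream": []})
    if entry["leaf"]:
        leaves.add(node)                 # deliver the payload locally
    for branch in entry["downstream"]:
        deliver(tree, branch, leaves)    # replicate downstream
    return leaves

# Root R1 replicates to Bud node R2, which replicates to Leaves R6 and R7.
tree = {
    "R1": {"leaf": False, "downstream": ["R2"]},
    "R2": {"leaf": True,  "downstream": ["R6", "R7"]},  # Bud node
    "R6": {"leaf": True,  "downstream": []},
    "R7": {"leaf": True,  "downstream": []},
}
```
]]></artwork>
      </figure>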
    </section>

    <section title="Using Controller to build a P2MP Tree">
      <t>A P2MP tree can be instantiated by a Path Computation Element (PCE).
      This section outlines a high-level architecture for such an
      approach.</t>

      <figure anchor="Control-Plane" title="Centralized Control Plane Model">
        <artwork align="center"><![CDATA[

                   North Bound                South Bound
                   Programming          ..... Programming 
                   Interface                  Interface 
                        |
                        |
                        v
                     +-----+ ..........................
        .............| PCE | .............             .
        .            +-----+             .             .
        .               .                .             .
        .               .                .             .
        .               .                .             .
        .               .                V             .
        .               .              +----+          .
        .               .              | N3 |          . 
        .               .              +----+          .
        .               .                 | Leaf (L2)  .
        .               .                 |            .
        .               .                 |            .
        V               V                 |            V
      +----+          +----+ --------------          +----+
      | N1 |----------| N2 |-------------------------| N4 |
      +----+          +----+                         +----+
     Root (R)         Replication node (M)           Leaf (L1)        
                                          
]]></artwork>
      </figure>

      <section title="Provisioning an SR P2MP Policy">
        <t>An SR P2MP policy can be instantiated and maintained using a Path
        Computation Element (PCE).</t>

        <section title="API">
          <t>North-bound APIs on a PCE can be used to:</t>

          <t><list style="numbers">
              <t>Create SR P2MP policy: CreateSRP2MPPolicy&lt;Root,
              Tree-ID&gt;</t>

              <t>Delete SR P2MP policy: DeleteSRP2MPPolicy&lt;Root,
              Tree-ID&gt;</t>

              <t>Modify SR P2MP policy Leaf Set:
              SRP2MPPolicyLeafSetModify&lt;Root, Tree-ID, {Leaf Set}&gt;</t>

              <t>Create a Candidate Path for SR P2MP policy:
              CreateSRP2MPCandidatePath&lt;Root, Tree-ID,
              &lt;CP-ID&gt;&gt;</t>

              <t>Delete a Candidate Path for SR P2MP policy:
              DeleteSRP2MPCandidatePath&lt;Root, Tree-ID,
              &lt;CP-ID&gt;&gt;</t>

              <t>Update a Candidate Path for SR P2MP policy:
              UpdateSRP2MPCandidatePath&lt;Root, Tree-ID, &lt;CP-ID&gt;,
              Preference, Constraints, Optimization, ...&gt;</t>
            </list></t>

          <t>CP-ID is the identifier of a Candidate Path within an SR P2MP
          policy. One possible identifier is the tuple &lt;Protocol-Origin,
          originator, discriminator&gt; specified in <xref
          target="RFC9256"/>.</t>

          <t>Note that these are conceptual APIs. Actual implementations may
          offer different APIs as long as they provide the same
          functionality. For example, an API might allow a symbolic name to
          be assigned to a P2MP policy, or APIs might allow individual Leaf
          nodes to be added to or deleted from a policy instead of an update
          operation.</t>
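          <t>A minimal, non-normative sketch of such a north-bound API is
          shown below in Python. All names are illustrative; an actual PCE
          would expose equivalent operations over PCEP, REST, NETCONF, gRPC,
          or CLI:</t>

          <figure>
            <artwork><![CDATA[
```python
class SRP2MPPolicyAPI:
    """Conceptual north-bound API of a PCE, mirroring the operations
    listed above (an illustrative sketch, not a real PCE interface)."""

    def __init__(self):
        # (root, tree_id) -> {"leaves": set, "cps": {cp_id: attrs}}
        self.policies = {}

    def create_policy(self, root, tree_id):
        self.policies[(root, tree_id)] = {"leaves": set(), "cps": {}}

    def delete_policy(self, root, tree_id):
        self.policies.pop((root, tree_id), None)

    def modify_leaf_set(self, root, tree_id, leaf_set):
        self.policies[(root, tree_id)]["leaves"] = set(leaf_set)

    def create_candidate_path(self, root, tree_id, cp_id, **attrs):
        self.policies[(root, tree_id)]["cps"][cp_id] = attrs

    def delete_candidate_path(self, root, tree_id, cp_id):
        self.policies[(root, tree_id)]["cps"].pop(cp_id, None)

    def update_candidate_path(self, root, tree_id, cp_id, **attrs):
        self.policies[(root, tree_id)]["cps"][cp_id].update(attrs)
```
]]></artwork>
          </figure>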
        </section>

        <section title="Invoking API">
          <t>Interaction with a PCE can be via PCEP, REST, NETCONF, gRPC, or
          CLI. A YANG model is expected to be developed for this purpose as
          well.</t>
        </section>
      </section>

      <section title="P2MP Tree Computation">
        <t>An entity (an operator, a network node, or a machine) provisions
        an SR P2MP policy by specifying the addresses of the Root (R) and the
        set of Leaves {L}, as well as the Traffic Engineering (TE) attributes
        of Candidate paths, via a suitable North-Bound API. The PCE computes
        one or more P2MP tree instances of a Candidate path, and MAY compute
        P2MP trees for all Candidate paths. If tree computation is
        successful, the PCE instantiates the P2MP tree instance(s) using
        Replication segments on the Root, Replication, and Leaf nodes. A
        Candidate path may have no P2MP tree instance if the PCE cannot
        compute a tree.</t>

        <t>Candidate path constraints can include link color affinity,
        bandwidth, disjointness (link, node, SRLG), delay bound, link loss,
        Flexible Algorithm, etc. A Candidate path can be optimized based on
        the IGP metric, a TE metric, or link latency. Other constraints and
        optimization objectives MAY be used for P2MP tree computation.</t>

        <t>The Tree-SID of an instance of a Candidate path of an SR P2MP
        policy can be either dynamically allocated by the PCE or statically
        assigned by the entity provisioning the SR P2MP policy. Ideally, the
        same Tree-SID SHOULD be used for the Replication segments at the
        Root, Replication, and Leaf nodes. Different Tree-SIDs MAY be used at
        Replication node(s) if it is not feasible to use the same
        Tree-SID.</t>

        <t>A PCE can modify a P2MP tree of a Candidate path on detecting a
        change in the network topology, or when a better path can be found
        based on the new network state. In this case, the PCE MAY create a
        new instance of the P2MP tree and then remove the old instance from
        the network in order to minimize traffic loss.</t>

        <t>A PCE is expected to be capable of computing paths across multiple
        IGP areas or levels, as well as across Autonomous Systems
        (ASes).</t>

        <section title="Topology Discovery">
          <t>A PCE learns the network topology, the TE attributes of links
          and nodes, and SIDs via dynamic routing protocols (IGP and/or
          BGP-LS). Entities may also pass topology information to the PCE via
          a north-bound API.</t>
        </section>

        <section title="Capability and Attribute Discovery">
          <t>It is expected that a node can advertise SR P2MP tree capability
          via IGP, BGP-LS, and/or PCEP. Similarly, a PCE can advertise its
          P2MP tree computation capability via IGP, BGP-LS, and/or PCEP.
          Capability advertisement allows a network node to dynamically
          choose one or more PCEs from which to obtain services pertaining to
          SR P2MP policies, as well as a PCE to dynamically identify SR P2MP
          tree capable nodes.</t>
        </section>

        <section title="Loop Prevention">
          <t>A PCE MUST compute a P2MP tree such that there are no loops in
          the tree at steady state (Section 2 of <xref target="RFC9524"/>).
          An OPTIONAL algorithm to compute a loop-free tree is given
          below.</t>

          <t>Given an SR P2MP Policy with Root (R) and Leaf node set (LS), a
          Candidate path of the policy with constraints (C) and optimization
          objective (O), and a Constrained Shortest Path First (CSPF)
          algorithm to compute a path between a pair of nodes:</t>

          <figure>
            <artwork><![CDATA[S01.  Path Set<PS> = {}
S02.  For each Leaf(L) in LS {
S03.    Path P = Compute CSPF(R, L, C, O)
S04.    Add P to PS
S05.  }
S06.  Tree = Merge(PS)]]></artwork>
          </figure>

          <t>Notes:<vspace blankLines="0"/></t>

          <t><list style="symbols">
              <t>The specification of the CSPF algorithm is outside the scope
              of this document.</t>

              <t>The Merge function merges the individual paths of the Path
              Set, resulting in a tree of Root, intermediate Replication, and
              Leaf nodes. The specification of this function is outside the
              scope of this document.</t>
            </list></t>

          <t>A PCE MAY implement other tree computation algorithm(s), which
          MUST guarantee loop prevention, or loop detection and mitigation,
          at steady state.</t>
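          <t>As a non-normative illustration of steps S01-S06, the sketch
          below (Python) uses a plain shortest-path computation as a stand-in
          for CSPF (constraints C and objective O are omitted) and a simple
          Merge that unions the per-leaf paths into parent-to-children tree
          state:</t>

          <figure>
            <artwork><![CDATA[
```python
import heapq

def cspf(graph, src, dst):
    """Shortest path by additive metric (a stand-in for the CSPF of S03;
    assumes dst is reachable from src)."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, metric in graph.get(u, {}).items():
            nd = d + metric
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, n = [dst], dst                 # reconstruct path back to src
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

def merge(paths):
    """Merge per-leaf paths into replication tree state (S06):
    parent node -> set of downstream branches."""
    tree = {}
    for p in paths:
        for parent, child in zip(p, p[1:]):
            tree.setdefault(parent, set()).add(child)
    return tree
```
]]></artwork>
          </figure>

          <t>When every merged path starts at the same Root and is computed
          with the same metric and deterministic tie-breaking, the union of
          the paths is a tree and is therefore loop-free at steady state.</t>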
        </section>
      </section>

      <section title="Instantiating P2MP tree on nodes">
        <t>Once a PCE computes a P2MP tree for an instance of a Candidate
        path of an SR P2MP policy, it needs to instantiate the tree on the
        relevant network nodes via Replication segments. The PCE can use
        various protocols to program the Replication segments, as described
        below.</t>

        <section title="PCEP">
          <t>The Path Computation Element Communication Protocol (PCEP) has
          traditionally been used:</t>

          <t><list style="numbers">
              <t>For a head-end to obtain paths from a PCE.</t>

              <t>For a PCE to instantiate SR policies.</t>
            </list>PCEP can be stateful, in that a PCE can have stateful
          control of an SR policy on a head-end that has delegated control of
          that policy to the PCE. PCEP is expected to be extended to
          provision and maintain SR P2MP trees in a stateful fashion.</t>
        </section>

        <section title="BGP">
          <t>BGP has been extended to instantiate and report SR policies. It
          is expected to be extended to instantiate and maintain P2MP trees
          for SR P2MP policies.</t>
        </section>
      </section>

      <section title="Protection">
        <section title="Local Protection">
          <t>A link, node, or path on an instance of a P2MP tree can be
          protected using backup SR policies computed by the PCE. The backup
          SR policies are programmed in the forwarding plane in order to
          minimize traffic loss when the protected link or node fails. It is
          also possible to use node-local Loop-Free Alternate (LFA)
          protection mechanisms to protect the links and nodes of a P2MP
          tree.</t>
        </section>

        <section title="Path Protection">
          <t>It is possible for a PCE to create a disjoint backup tree to
          provide end-to-end path protection.</t>
        </section>
      </section>
    </section>

    <section anchor="IANA" title="IANA Considerations">
      <t>This document makes no request of IANA.</t>
    </section>

    <section anchor="Security" title="Security Considerations">
      <t>This document describes how a P2MP tree can be created in an SR
      domain by stitching Replication Segments together. Some security
      considerations for Replication Segments outlined in <xref
      target="RFC9524"/> are also applicable to this document. The following
      is a brief summary.</t>

      <t>An SR domain needs protection from outside attackers as described in
      <xref target="RFC8754"/>.</t>

      <t>Failure to protect the SR MPLS domain by correctly provisioning MPLS
      support per interface permits attackers from outside the domain to send
      packets to receivers of the Multipoint services that use the SR P2MP
      trees provisioned within the domain.</t>

      <t>Failure to protect the SRv6 domain with inbound Infrastructure
      Access Control Lists (IACLs) on external interfaces, combined with
      failure to implement BCP 38 <xref target="RFC2827"/> or to apply IACLs
      on nodes provisioning SIDs, permits attackers from outside the SR
      domain to send packets to the receivers of Multipoint services that use
      the SR P2MP trees provisioned within the domain.</t>

      <t>Incorrect provisioning of Replication segments by a PCE that
      computes an SR P2MP tree instance can result in a chain of Replication
      segments forming a loop. In this case, replicated packets can create a
      storm until the MPLS TTL (for SR-MPLS) or IPv6 Hop Limit (for SRv6)
      decrements to zero.</t>

      <t>The control plane protocols (such as PCEP, BGP, etc.) used to
      instantiate the Replication segments of an SR P2MP tree instance can
      leverage their own security mechanisms, such as encryption,
      authentication, filtering, etc.</t>

      <t>For SRv6, <xref target="RFC9524"/> describes an exception for
      Parameter Problem Message, code 2 ICMPv6 error messages. If an attacker
      is able to inject a packet into a Multipoint service with the source
      address of a node and with an extension header using an unknown option
      type marked as mandatory, then the resulting large number of ICMPv6
      Parameter Problem messages can cause a denial-of-service attack on the
      source node.</t>
    </section>

    <section anchor="Acknowledgements" title="Acknowledgements">
      <t>The authors would like to acknowledge Siva Sivabalan, Mike Koldychev
      and Vishnu Pavan Beeram for their valuable inputs.</t>
    </section>

    <section title="Contributors">

      <t>Clayton Hassen <vspace blankLines="0"/> Bell Canada <vspace
      blankLines="0"/> Vancouver <vspace blankLines="0"/> Canada</t>

      <t>Email: clayton.hassen@bell.ca</t>

      <t>Kurtis Gillis <vspace blankLines="0"/> Bell Canada <vspace
      blankLines="0"/> Halifax <vspace blankLines="0"/> Canada</t>

      <t>Email: kurtis.gillis@bell.ca</t>

      <t>Arvind Venkateswaran <vspace blankLines="0"/> Cisco Systems, Inc.
      <vspace blankLines="0"/> San Jose <vspace blankLines="0"/> US</t>

      <t>Email: arvvenka@cisco.com</t>

      <t>Zafar Ali <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> US</t>

      <t>Email: zali@cisco.com</t>

      <t>Swadesh Agrawal <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> San Jose <vspace blankLines="0"/> US</t>

      <t>Email: swaagraw@cisco.com</t>

      <t>Jayant Kotalwar <vspace blankLines="0"/> Nokia <vspace
      blankLines="0"/> Mountain View <vspace blankLines="0"/> US</t>

      <t>Email: jayant.kotalwar@nokia.com</t>

      <t>Tanmoy Kundu <vspace blankLines="0"/> Nokia <vspace blankLines="0"/>
      Mountain View <vspace blankLines="0"/> US</t>

      <t>Email: tanmoy.kundu@nokia.com</t>

      <t>Andrew Stone <vspace blankLines="0"/> Nokia <vspace blankLines="0"/>
      Ottawa <vspace blankLines="0"/> Canada</t>

      <t>Email: andrew.stone@nokia.com</t>

      <t>Tarek Saad <vspace blankLines="0"/> Juniper Networks <vspace
      blankLines="0"/> Canada</t>

      <t>Email: tsaad@juniper.net</t>

      <t>Kamran Raza <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> Canada</t>

      <t>Email: skraza@cisco.com</t>

      <t>Anuj Budhiraja <vspace blankLines="0"/> Cisco Systems, Inc. <vspace
      blankLines="0"/> US</t>

      <t>Email: abudhira@cisco.com</t>
    </section>
  </middle>

  <back>
    <references title="Normative References">
      <?rfc include="reference.RFC.2119"?>

      <?rfc include="reference.RFC.8174"?>

      <?rfc include="reference.RFC.8402"?>

      <?rfc include='reference.RFC.9524'?>

      <?rfc include='reference.RFC.9256'?>
    </references>

    <references title="Informative References">
      <?rfc include='reference.I-D.ietf-bess-mvpn-evpn-aggregation-label'?>

      <?rfc include='reference.RFC.8986'?>

      <?rfc include='reference.I-D.filsfils-spring-srv6-net-pgm-illustration'?>

      <?rfc include='reference.RFC.8754'?>

      <?rfc include='reference.RFC.2827'?>
    </references>

    <section title="Illustration of SR P2MP Policy and P2MP Tree">
      <t>Consider the following topology:</t>

      <figure title="Example Topology">
        <artwork><![CDATA[                               R3------R6
                         PCE--/         \
                      R1----R2----R5-----R7
                              \         / 
                               +--R4---+  ]]></artwork>
      </figure>

      <t>In these examples, the Node-SID of a node Rn is N-SIDn, and the
      Adjacency-SID from node Rm to node Rn is A-SIDmn. The interface between
      Rm and Rn is Lmn.</t>

      <t>For SRv6, the reader is expected to be familiar with SRv6 Network
      Programming <xref target="RFC8986"/> to follow the examples. This
      document re-uses the SID allocation scheme, reproduced below, from the
      illustrations in SRv6 Network Programming <xref
      target="I-D.filsfils-spring-srv6-net-pgm-illustration"/>.</t>

      <t><list style="symbols">
          <t>2001:db8::/32 is an IPv6 block allocated by an RIR to the
          operator</t>

          <t>2001:db8:0::/48 is dedicated to the internal address space</t>

          <t>2001:db8:cccc::/48 is dedicated to the internal SRv6 SID
          space</t>

          <t>We assume a location expressed in 64 bits and a function
          expressed in 16 bits</t>

          <t>node k has a classic IPv6 loopback address 2001:db8::k/128 which
          is advertised in the IGP</t>

          <t>node k has 2001:db8:cccc:k::/64 for its local SID space. Its SIDs
          will be explicitly assigned from that block</t>

          <t>node k advertises 2001:db8:cccc:k::/64 in its IGP</t>

          <t>Function :1:: (function 1, for short) represents the End function
          with PSP support</t>

          <t>Function :Cn:: (function Cn, for short) represents the End.X
          function to node n</t>

          <t>Function :C1n:: (function C1n, for short) represents the End.X
          function to node n with USD</t>
        </list></t>

      <t>Each node k has: <list style="symbols">
          <t>An explicit SID instantiation 2001:db8:cccc:k:1::/128 bound to an
          End function with additional support for PSP</t>

          <t>An explicit SID instantiation 2001:db8:cccc:k:Cj::/128 bound to
          an End.X function to neighbor J with additional support for PSP</t>

          <t>An explicit SID instantiation 2001:db8:cccc:k:C1j::/128 bound to
          an End.X function to neighbor J with additional support for USD</t>
        </list></t>
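      <t>As a worked example of this scheme, a SID can be composed from a
      node's /64 locator and a function value. The helper below is a
      non-normative illustration using the addresses of this appendix:</t>

      <figure>
        <artwork><![CDATA[
```python
def srv6_sid(node, function):
    """Compose an SRv6 SID from the scheme above: the /64 locator
    2001:db8:cccc:<node>:: plus a 16-bit function value (illustrative
    helper, not part of the document)."""
    return f"2001:db8:cccc:{node:x}:{function}::"

# End function with PSP at node 6, and End.X with USD from R4 towards R7.
sid_end  = srv6_sid(6, 1)      # 2001:db8:cccc:6:1::
sid_endx = srv6_sid(4, "C17")  # 2001:db8:cccc:4:C17::
```
]]></artwork>
      </figure>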

      <t>Assume a PCE is provisioned with following SR P2MP policy at Root R1
      with Tree-ID T-ID:</t>

      <figure>
        <artwork><![CDATA[SR P2MP Policy <R1,T-ID>:
 Leaf nodes: {R2, R6, R7}
 Candidate-path 1:
   Optimize: IGP metric
   Tree-SID: T-SID1
]]></artwork>
      </figure>

      <t>The PCE is responsible for computing a P2MP tree instance of the
      Candidate Path. In this example, we assume one active instance of the
      P2MP tree, with Instance-ID I-ID1. Assume the PCE instantiates P2MP
      trees by signalling Replication segments, i.e., the Replication-ID of
      these Replication segments is &lt;Root, Tree-ID, Instance-ID&gt;. All
      Replication segments use the Tree-SID T-SID1 as their Replication-SID.
      For SRv6, assume the Replication-SID at node k, bound to an
      End.Replicate function, is 2001:db8:cccc:k:FA::/128.</t>

      <section title="P2MP Tree with non-adjacent Replication Segments">
        <t>Assume the PCE computes a P2MP tree instance with Root node R1,
        Bud node R2 (both an intermediate Replication node and a Leaf node),
        and Leaf nodes R6 and R7. The PCE instantiates the instance by
        stitching Replication segments at R1, R2, R6 and R7. The Replication
        segment at R1 replicates to R2; the Replication segment at R2
        replicates to R6 and R7. Note that nodes R3, R4 and R5 do not have
        any Replication segment state for the tree.</t>

        <section title="SR-MPLS">
          <t>The Replication segment state at nodes R1, R2, R6 and R7 is shown
          below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: T-SID1
 Replication State:
   R2: <T-SID1->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers the packet directly to that node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: T-SID1
 Replication State:
   R2: <Leaf>
   R6: <N-SID6, T-SID1>
   R7: <N-SID7, T-SID1>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of a Leaf as well as that
          of a transit node replicating to R6 and R7. Replication to R6,
          using N-SID6, steers the packet via the IGP shortest path to that
          node. Replication to R7, using N-SID7, steers the packet via the
          IGP shortest path to R7 through either R5 or R4, based on ECMP
          hashing.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: T-SID1
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: T-SID1
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet is steered into the active instance of Candidate
          path 1 of the SR P2MP Policy at R1:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 performs a PUSH
              operation with just the &lt;T-SID1&gt; label for the replicated
              copy and sends it to R2 on interface L12.</t>

              <t>R2, as a Leaf, performs the NEXT operation, pops the T-SID1
              label, and delivers the payload. For replication to R6, R2
              performs a PUSH operation of N-SID6 to send the
              &lt;N-SID6,T-SID1&gt; label stack to R3. R3 is the penultimate
              hop for N-SID6; it performs penultimate hop popping, which
              corresponds to the NEXT operation, and the packet is then sent
              to R6 with &lt;T-SID1&gt; in the label stack. For replication
              to R7, R2 performs a PUSH operation of N-SID7 to send the
              &lt;N-SID7,T-SID1&gt; label stack to R4, one of the IGP ECMP
              next hops towards R7. R4 is the penultimate hop for N-SID7; it
              performs penultimate hop popping, which corresponds to the NEXT
              operation, and the packet is then sent to R7 with
              &lt;T-SID1&gt; in the label stack.</t>

              <t>R6, as a Leaf, performs the NEXT operation, pops the T-SID1
              label, and delivers the payload.</t>

              <t>R7, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>
            </list></t>
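          <t>The label operations above can be sketched, non-normatively, in
          a few lines of Python. The replication state mirrors the figures;
          the label value chosen for T-SID1 is an illustrative placeholder,
          and transport via N-SID6 and N-SID7 (including penultimate hop
          popping at R3 and R4) is abstracted away, since each copy arrives
          at R6 and R7 with &lt;T-SID1&gt; on top of the stack:</t>

          <figure>
            <artwork><![CDATA[
```python
# Non-normative sketch of the SR-MPLS replication walk-through.
T_SID1 = 16001  # illustrative label value for T-SID1

# node -> [(downstream node, is_local_leaf)]
REPL_STATE = {
    "R1": [("R2", False)],
    "R2": [("R2", True),    # Bud node: local Leaf delivery
           ("R6", False),   # via IGP shortest path using N-SID6
           ("R7", False)],  # via IGP shortest path using N-SID7 (ECMP)
    "R6": [("R6", True)],
    "R7": [("R7", True)],
}

def replicate(node, stack, delivered):
    # NEXT operation: pop the Replication SID T-SID1.
    assert stack[0] == T_SID1
    rest = stack[1:]
    for nxt, leaf in REPL_STATE[node]:
        if leaf:
            delivered.append(node)  # payload delivered at the Leaf
        else:
            # PUSH <T-SID1> for the replicated copy; transport SIDs
            # (N-SID6/N-SID7) and their PHP are abstracted away.
            replicate(nxt, [T_SID1] + rest, delivered)

delivered = []
replicate("R1", [T_SID1], delivered)  # packet steered into the policy
```
]]></artwork>
          </figure>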
        </section>

        <section title="SRv6">
          <t>For SRv6, the replicated packet from R2 to R7 has to traverse R4
          using a SR-TE policy, Policy27. The policy has one SID in segment
          list: End.X function with USD of R4 to R7 . The Replication segment
          state at nodes R1, R2, R6 and R7 is shown below.</t>

          <figure>
            <artwork><![CDATA[Policy27: <2001:db8:cccc:4:C17::>
]]></artwork>
          </figure>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: 2001:db8:cccc:1:FA::
 Replication State:
   R2: <2001:db8:cccc:2:FA::->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers the packet directly to that node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: 2001:db8:cccc:2:FA::
 Replication State:
   R2: <Leaf>
   R6: <2001:db8:cccc:6:FA::>
   R7: <2001:db8:cccc:7:FA:: -> Policy27>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of Leaf as well as that
          of a transit node replicating to R6 and R7. Replication to R6
          steers the packet via the IGP shortest path to that node.
          Replication to R7, via the SR-TE policy, first encapsulates the
          packet using H.Encaps and then steers the outer packet to R4. The
          End.X behavior with USD on R4 decapsulates the outer header and
          sends the original inner packet to R7.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: 2001:db8:cccc:6:FA::
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: 2001:db8:cccc:7:FA::
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet (A,B2) is steered into the active instance of
          Candidate path 1 of the SR P2MP Policy at R1 using
          H.Encaps.Replicate behavior:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 sends the
              replicated copy (2001:db8::1, 2001:db8:cccc:2:FA::) (A,B2) to
              R2 on interface L12.</t>

              <t>R2, as Leaf, removes the outer IPv6 header and delivers the
              payload. R2, as a Bud node, also replicates the packet.</t>

              <t><list style="symbols">
                  <t>For replication to R6, R2 sends (2001:db8::1,
                  2001:db8:cccc:6:FA::) (A,B2) to R3. R3 forwards the packet
                  to R6 using the route for 2001:db8:cccc:6::/64.</t>

                  <t>For replication to R7 using Policy27, R2 encapsulates
                  and sends (2001:db8::2, 2001:db8:cccc:4:C17::)
                  (2001:db8::1, 2001:db8:cccc:7:FA::) (A,B2) to R4. R4
                  performs the End.X behavior with USD, decapsulates the
                  outer IPv6 header and sends (2001:db8::1,
                  2001:db8:cccc:7:FA::) (A,B2) to R7.</t>
                </list></t>

              <t>R6, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>

              <t>R7, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>
            </list></t>
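          <t>The SRv6 walk-through above can be sketched, non-normatively,
          by tracking the stack of (source, destination) IPv6 headers each
          replicated copy carries. The addresses mirror the figures; the
          function name is illustrative only:</t>

          <figure>
            <artwork><![CDATA[
```python
# Non-normative sketch of the SRv6 replication walk-through.
ROOT = "2001:db8::1"
SID = {n: "2001:db8:cccc:%d:FA::" % n for n in (2, 6, 7)}
POLICY27 = "2001:db8:cccc:4:C17::"  # End.X with USD at R4, toward R7

def h_encaps_replicate():
    """Replication at R1 and at the Bud node R2, per the walk-through."""
    delivered = {}
    # R1 -> R2 on L12: (ROOT, R2's Replication SID)(A,B2).
    pkt = [(ROOT, SID[2]), "(A,B2)"]
    delivered["R2"] = pkt[-1]  # R2 is a Leaf: decapsulate and deliver
    # R2 -> R6 via the IGP shortest path through R3.
    delivered["R6"] = [(ROOT, SID[6]), "(A,B2)"][-1]
    # R2 -> R7 via Policy27: H.Encaps adds an outer header steered to
    # R4; End.X with USD at R4 strips it, leaving (ROOT, R7's SID).
    outer = [("2001:db8::2", POLICY27), (ROOT, SID[7]), "(A,B2)"]
    inner = outer[1:]  # decapsulation at R4
    delivered["R7"] = inner[-1]
    return delivered
```
]]></artwork>
          </figure>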
        </section>
      </section>

      <section title="P2MP Tree with adjacent Replication Segments">
        <t>Assume the PCE computes a P2MP tree with Root node R1,
        Intermediate and Leaf node R2, Intermediate nodes R3 and R5, and
        Leaf nodes R6 and R7. The PCE instantiates the P2MP tree instance by
        stitching Replication segments at R1, R2, R3, R5, R6 and R7. The
        Replication segment at R1 replicates to R2. The Replication segment
        at R2 replicates to R3 and R5. The Replication segment at R3
        replicates to R6. The Replication segment at R5 replicates to R7.
        Note that node R4 does not have any Replication segment state for
        the tree.</t>

        <section title="SR-MPLS">
          <t>The Replication segment state at nodes R1, R2, R3, R5, R6 and R7
          is shown below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: T-SID1
 Replication State:
   R2: <T-SID1->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers the packet directly to that node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: T-SID1
 Replication State:
   R2: <Leaf>
   R3: <T-SID1->L23>
   R5: <T-SID1->L25>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of Leaf as well as that
          of a transit node replicating to R3 and R5. Replication to R3
          steers the packet directly to that node on L23. Replication to R5
          steers the packet directly to that node on L25.</t>

          <t>Replication segment at R3:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R3>:
 Replication-SID: T-SID1
 Replication State:
   R6: <T-SID1->L36>
]]></artwork>
          </figure>

          <t>Replication to R6 steers the packet directly to that node on
          L36.</t>

          <t>Replication segment at R5:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R5>:
 Replication-SID: T-SID1
 Replication State:
   R7: <T-SID1->L57>
]]></artwork>
          </figure>

          <t>Replication to R7 steers the packet directly to that node on
          L57.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: T-SID1
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: T-SID1
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet is steered into the SR P2MP Policy at R1:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 performs PUSH
              operation with just &lt;T-SID1&gt; label for the replicated copy
              and sends it to R2 on interface L12.</t>

              <t>R2, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload. It also performs PUSH operation on T-SID1
              for replication to R3 and R5. For replication to R3, R2 sends
              the &lt;T-SID1&gt; label stack to R3 on interface L23. For
              replication to R5, R2 sends the &lt;T-SID1&gt; label stack to
              R5 on interface L25.</t>

              <t>R3 performs NEXT operation on T-SID1, performs a PUSH
              operation for replication to R6, and sends the &lt;T-SID1&gt;
              label stack to R6 on interface L36.</t>

              <t>R5 performs NEXT operation on T-SID1, performs a PUSH
              operation for replication to R7, and sends the &lt;T-SID1&gt;
              label stack to R7 on interface L57.</t>

              <t>R6, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>

              <t>R7, as Leaf, performs NEXT operation, pops T-SID1 label and
              delivers the payload.</t>
            </list></t>
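          <t>The stitched tree above can be sketched, non-normatively, as
          follows; the label value for T-SID1 is an illustrative placeholder.
          Because every downstream node is directly connected, each hop
          pushes only &lt;T-SID1&gt;, and R4 holds no state:</t>

          <figure>
            <artwork><![CDATA[
```python
# Non-normative sketch of the stitched SR-MPLS tree.
T_SID1 = 16001  # illustrative label value for T-SID1

# node -> [(downstream node, egress interface)]; None marks local Leaf.
STATE = {
    "R1": [("R2", "L12")],
    "R2": [("R2", None), ("R3", "L23"), ("R5", "L25")],  # Bud node
    "R3": [("R6", "L36")],
    "R5": [("R7", "L57")],
    "R6": [("R6", None)],
    "R7": [("R7", None)],
}

def replicate(node, sent, delivered):
    # NEXT on T-SID1, then PUSH <T-SID1> toward each downstream branch.
    for nxt, ifname in STATE[node]:
        if ifname is None:
            delivered.append(node)  # pop T-SID1, deliver the payload
        else:
            sent[ifname] = [T_SID1]  # label stack sent on this link
            replicate(nxt, sent, delivered)

sent, delivered = {}, []
replicate("R1", sent, delivered)
```
]]></artwork>
          </figure>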
        </section>

        <section title="SRv6">
          <t>The Replication segment state at nodes R1, R2, R3, R5, R6 and R7
          is shown below.</t>

          <t>Replication segment at R1:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R1>:
 Replication-SID: 2001:db8:cccc:1:FA::
 Replication State:
   R2: <2001:db8:cccc:2:FA::->L12>
]]></artwork>
          </figure>

          <t>Replication to R2 steers the packet directly to that node on
          interface L12.</t>

          <t>Replication segment at R2:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R2>:
 Replication-SID: 2001:db8:cccc:2:FA::
 Replication State:
   R2: <Leaf>
   R3: <2001:db8:cccc:3:FA::->L23>
   R5: <2001:db8:cccc:5:FA::->L25>]]></artwork>
          </figure>

          <t>R2 is a Bud node. It performs the role of Leaf as well as that
          of a transit node replicating to R3 and R5. Replication to R3
          steers the packet directly to that node on L23. Replication to R5
          steers the packet directly to that node on L25.</t>

          <t>Replication segment at R3:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R3>:
 Replication-SID: 2001:db8:cccc:3:FA::
 Replication State:
   R6: <2001:db8:cccc:6:FA::->L36>
]]></artwork>
          </figure>

          <t>Replication to R6 steers the packet directly to that node on
          L36.</t>

          <t>Replication segment at R5:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R5>:
 Replication-SID: 2001:db8:cccc:5:FA::
 Replication State:
   R7: <2001:db8:cccc:7:FA::->L57>
]]></artwork>
          </figure>

          <t>Replication to R7 steers the packet directly to that node on
          L57.</t>

          <t>Replication segment at R6:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R6>:
 Replication-SID: 2001:db8:cccc:6:FA::
 Replication State:
   R6: <Leaf>]]></artwork>
          </figure>

          <t>Replication segment at R7:</t>

          <figure>
            <artwork><![CDATA[Replication segment <R1,T-ID,I-ID1,R7>:
 Replication-SID: 2001:db8:cccc:7:FA::
 Replication State:
   R7: <Leaf>]]></artwork>
          </figure>

          <t>When a packet (A,B2) is steered into the active instance of
          Candidate path 1 of the SR P2MP Policy at R1 using
          H.Encaps.Replicate behavior:</t>

          <t><list style="symbols">
              <t>Since R1 is directly connected to R2, R1 sends the
              replicated copy (2001:db8::1, 2001:db8:cccc:2:FA::) (A,B2) to
              R2 on interface L12.</t>

              <t>R2, as Leaf, removes the outer IPv6 header and delivers the
              payload. R2, as a Bud node, also replicates the packet. For
              replication to R3, R2 sends (2001:db8::1, 2001:db8:cccc:3:FA::)
              (A,B2) to R3 on interface L23. For replication to R5, R2 sends
              (2001:db8::1, 2001:db8:cccc:5:FA::) (A,B2) to R5 on interface
              L25.</t>

              <t>R3 replicates and sends (2001:db8::1, 2001:db8:cccc:6:FA::)
              (A,B2) to R6 on interface L36.</t>

              <t>R5 replicates and sends (2001:db8::1, 2001:db8:cccc:7:FA::)
              (A,B2) to R7 on interface L57.</t>

              <t>R6, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>

              <t>R7, as Leaf, removes outer IPv6 header and delivers the
              payload.</t>
            </list></t>
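          <t>The SRv6 variant can be sketched, non-normatively, by recording
          the outer (source, destination) header sent on each interface. The
          addresses mirror the figures, and interface names follow the
          figures as well:</t>

          <figure>
            <artwork><![CDATA[
```python
# Non-normative sketch of the stitched SRv6 tree.
ROOT = "2001:db8::1"
SID = {n: "2001:db8:cccc:%d:FA::" % n for n in (2, 3, 5, 6, 7)}

# node -> [(downstream node, egress interface)]; None marks local Leaf.
STATE = {
    "R1": [("R2", "L12")],
    "R2": [("R2", None), ("R3", "L23"), ("R5", "L25")],  # Bud node
    "R3": [("R6", "L36")],
    "R5": [("R7", "L57")],
    "R6": [("R6", None)],
    "R7": [("R7", None)],
}

def replicate(node, sent, delivered):
    for nxt, ifname in STATE[node]:
        if ifname is None:
            delivered.append(node)  # remove outer header, deliver (A,B2)
        else:
            # Re-encapsulate with the downstream node's Replication SID.
            sent[ifname] = (ROOT, SID[int(nxt[1])])
            replicate(nxt, sent, delivered)

sent, delivered = {}, []
replicate("R1", sent, delivered)
```
]]></artwork>
          </figure>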
        </section>
      </section>
    </section>
  </back>
</rfc>
