<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
<!ENTITY nbsp    "&#160;">
<!ENTITY zwsp   "&#8203;">
<!ENTITY nbhy   "&#8209;">
<!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<?rfc toc="yes"?>
<?rfc sortrefs="yes"?>
<?rfc symrefs="yes"?>
<rfc
  xmlns:xi="http://www.w3.org/2001/XInclude" category="info" docName="draft-wei-nmrg-gnn-based-dtn-modeling-00" ipr="trust200902" submissionType="IETF" obsoletes="" updates="" xml:lang="en" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.17.0 -->
  <front>
    <title abbrev="Network Modeling for DTN">Graph Neural Network Based Modeling for 
       Digital Twin Network</title>
    <seriesInfo name="Internet-Draft" value="draft-wei-nmrg-gnn-based-dtn-modeling-00"/>
    <author fullname="Yong Cui" initials="Y." surname="Cui">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <street>30 Shuangqing Rd, Haidian District</street>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>cuiyong@tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Yunze Wei" initials="Y." surname="Wei">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <street>30 Shuangqing Rd, Haidian District</street>
          <city>Beijing</city>
          <code>100876</code>
          <country>China</country>
        </postal>
        <email>yunzewei@outlook.com</email>
      </address>
    </author>
    <author fullname="Zhiyong Xu" initials="Z." surname="Xu">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <street>30 Shuangqing Rd, Haidian District</street>
          <city>Beijing</city>
          <code>100876</code>
          <country>China</country>
        </postal>
        <email>xuzhiyong@tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Peng Liu" initials="P." surname="Liu">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <street>No.32 XuanWuMen West Street</street>
          <city>Beijing</city>
          <code>100053</code>
          <country>China</country>
        </postal>
        <email>liupengyjy@chinamobile.com</email>
      </address>
    </author>
    <author fullname="Zongpeng Du" initials="Z." surname="Du">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <street>No.32 XuanWuMen West Street</street>
          <city>Beijing</city>
          <code>100053</code>
          <country>China</country>
        </postal>
        <email>duzongpeng@foxmail.com</email>
      </address>
    </author>
    <date day="15" month="April" year="2023"/>
    <workgroup>Network Management Research Group</workgroup>
    <abstract>
      <t>This document introduces the scenarios and requirements for performance 
      modeling of digital twin networks, explores implementation methods for 
      network models, and proposes a network modeling method based on graph 
      neural networks (GNNs). This method combines GNNs with graph sampling 
      techniques to improve the expressiveness and granularity of the model. 
      The model is generated through data training and validated in typical 
      scenarios. It performs well in predicting QoS metrics such as 
      network latency, providing a reference option for network performance 
      modeling methods.</t>
    </abstract>
  </front>
  <middle>
    <section anchor="introduction" numbered="true" toc="default">
      <name>Introduction</name>
      <t>Digital twin networks are virtual images (or simulations) of physical 
        network infrastructures that can help network designers achieve simplified, 
        automated, elastic, and full-lifecycle operations. The task of network 
        modeling is to predict how network performance metrics, such as throughput 
        and latency, change in various "what-if" scenarios<xref target="I-D.irtf-nmrg-network-digital-twin-arch" format="default"/>, 
        such as changes in traffic conditions and reconfigurations of network devices. In this document, 
        we propose a network performance modeling framework based on graph neural networks, 
        which supports modeling various network configurations, including topology, 
        routing, and caching, and can make time-series predictions of flow-level 
        performance metrics.</t>
    </section>
    <section anchor="definition-of-terms" numbered="true" toc="default">
      <name>Definition of Terms</name>
      <t>This document makes use of the following terms:</t>
      <dl newline="false" spacing="normal" indent="2">
        <dt>DTN:</dt>
        <dd>Digital twin networks.</dd>
        <dt>GNN:</dt>
        <dd>Graph neural network.</dd>
        <dt>NGN:</dt>
        <dd>Networking Graph Networks.</dd>
      </dl>
    </section>
    <section anchor="Scenarios__Requirements_and_challenges" numbered="true" toc="default">
      <name>Scenarios, Requirements and Challenges of Network Modeling for DTN</name>
      <section numbered="true" toc="default">
        <name>Scenarios</name>
        <t>Digital twin networks are digital virtual mappings of physical 
        networks, and some of their main applications include network technology 
        experiments, network configuration validation, network performance 
        optimization, etc. All of these applications require accurate network 
        models in the twin network to enable precise simulation and prediction 
        of the functionality and performance characteristics of the physical 
        network.</t>
        <t>This document mainly focuses on network performance modeling, while the 
        modeling for network configuration validation is not within the scope.</t>
      </section>
      <section numbered="true" toc="default">
        <name>Requirements</name>
        <t>Physical networks are composed of various network elements and links 
        between them, and different network elements have different functionalities
        and performance characteristics. In the early planning stages of the network
	   lifecycle, the physical network does not fully exist, but the network owner
	   hopes to predict the network's capabilities and effects based on the network
	   model and its simulation, to determine whether the network can meet the 
	   future application requirements running on it, such as network throughput
	   capacity and network latency requirements, and to build the network at the
	   optimal cost. During the network operation stage, network performance 
	   modeling can work in conjunction with the online physical network to 
	   achieve network changes and optimization, and reduce network operation risks 
	   and costs. Therefore, network modeling requires the ability to 
	   represent the various performance-related factors in the physical 
	   network and must strive for the highest achievable accuracy. This 
	   places higher demands on network modeling, including the following 
	   aspects.</t>
        <t>(1) In order to produce accurate predictions, a network 
        model must have sufficient expressiveness to capture as many factors 
        influencing network performance indicators as possible. Otherwise, it 
        will inevitably fail to generalize to broader network environments. 
        Among these factors, network configuration spans various levels of 
        operation from end hosts to network devices: for example, congestion 
        control at the host level; scheduling strategies and active queue 
        management at the queue level; bandwidth and propagation delay at the 
        link level; shared buffer management strategies at the device level; 
        and topology and routing schemes at the network level. In addition, 
        there are complex interactions between these factors.</t>
        <t>(2) In different network scenarios, the granularity of concern 
        for operators may vary greatly. In wide area network scenarios, 
        operators primarily focus on the long-term average performance of 
        aggregated traffic, where path-level steady-state modeling is usually 
        sufficient to guide the planning process (e.g., traffic engineering). 
        In local area networks and cloud data center networks, operators are 
        more concerned with meeting performance metrics such as latency and
        throughput, as well as network infrastructure utilization. However, 
        fine-grained network performance observation is a goal that network 
        operators and cloud providers continuously strive for, in order to 
        provide precise information about when and which traffic is being 
        interfered with. This requires network models to support flow-level
        time series performance prediction.</t>
      </section>
      <section numbered="true" toc="default">
        <name>Main Challenges</name>
        <t>(1) Challenges related to the large state space. Corresponding to 
        the expressiveness requirement, the number of potential scenarios that 
        the network model faces is large. Network systems typically consist of 
        dozens to hundreds of network nodes, each of which may carry multiple 
        configurations, leading to a combinatorial explosion of potential 
        states. One simple way to build a network model is to construct a large 
        neural network that takes flat feature vectors containing all 
        configuration information as input. However, the input size of such a 
        neural network is fixed, so it cannot scale to handle information from 
        an arbitrary number of nodes and configurations. The complexity of the 
        neural network grows with the number of configurations, making it 
        difficult to train and generalize.</t>
        <t>(2) Challenges related to modeling granularity. Unlike aggregated 
        end-to-end path-level traffic, the transmission behavior of flows 
        undergoes cascading effects since it is typically controlled by some
        control loop (e.g., congestion control). Once the configurations related 
        to control (e.g., ECN threshold, queue buffer size) change during flow 
        transmission, the resulting flow traffic measurements (e.g., throughput 
        and packet loss) will experience significant changes, and the measured 
        traffic state at this time will not reflect the results of these changes. 
        Therefore, predicting flow-level performance from traffic demand may 
        be more difficult than inferring QoS from traffic measurements. Here, 
        we call using traffic measurements as input to predict the 
        corresponding QoS "inference", while using traffic demand as an 
        additional input to output flow-level performance (e.g., flow 
        completion time) for a hypothetical scenario is called 
        "prediction".</t>
      </section>
    </section>
    <section anchor="Modeling_Digital_Twin_Networks" numbered="true" toc="default">
      <name>Modeling Digital Twin Networks</name>
      <section numbered="true" toc="default">
        <name>Consideration/Analysis on Network Modeling Methods</name>
        <t>Traditional network modeling typically uses methods such as queuing 
        theory and network calculus, which mainly model from the perspective of 
        queues and their forwarding capabilities. In the construction of operator
        networks, network elements come from different device vendors with varying 
        processing capabilities, and these differences lack precise quantification. 
        Therefore, modeling networks built with these devices is a very complex task. 
        In addition to queue forwarding behavior, the network itself is also 
        influenced by various configuration policies and related network features
        (such as ECN, Policy Routing, etc.), and coupled with the flexibility 
        of network size, this method is difficult to adapt to the modeling 
        requirements of digital twin networks.</t>
        <t>In recent years, the academic community has proposed data-driven 
        graph neural network (GNN) methods, which extend existing neural 
        networks to systems represented in graph form. Networks themselves 
        have a graph structure, and GNNs can be used to learn complex network 
        behavior from data. The advantage of GNNs is their ability to model 
        non-linear relationships and adapt to different types of data. By 
        combining GNNs with graph sampling techniques, the method improves 
        the expressiveness and granularity of network models. It involves 
        sampling subgraphs from the original network based on specific 
        criteria, such as the degree of connectivity and centrality. These 
        subgraphs are then used to train a GNN model that captures the most 
        relevant network features. Experimental results show that this method 
        can improve the accuracy and granularity of network modeling compared 
        to traditional techniques.</t>
        <t>This document will introduce a method of network modeling using graph 
        neural networks (GNNs) as a technical option for providing network 
        modeling for DTN.</t>
      </section>
      <section numbered="true" toc="default">
        <name>Network Modeling Framework</name>
        <artwork name="Network modeling design process" type="" align="left" alt="">
          <![CDATA[ 
+--------------------+
| +----------------+ | +----------------------+   +-----------------+
| |    Intent      |-->|Network Graph Abstract|-->|NGN Configuration|
| +----------------+ | +----------^-----------+   +-------+---------+
|                    |            |                       |
| +----------------+ |            |              +--------V---------+
| |Domain Knowledge|--------------+              | State Transition |
| +----------------+ |                           |Model Construction|
|                    |                           +--------+---------+
|                    |                                    |
| +----------------+ |    +---------------+     +---------V---------+
| |     Data       |----->|Model Training |<----| Network Model Desc|
| +----------------+ |    +-------+-------+     +-------------------+
|                    |            |
|  Target Network    |    +-------V-------+
+--------------------+    | Network Model |
                          +---------------+ 
        Figure 1: Network modeling design process ]]>
        </artwork>
        <t>Network modeling design process:</t>
        <t>1. Before modeling, determine the network configurations and modeling
        granularity based on the modeling intent.</t>
        <t>2. Use domain knowledge from network experts to abstract the network 
        system into a network relationship graph to represent the complex 
        relationships between different network entities.</t>
        <t>3. Build the network model using configurable graph neural network 
        modules and determine the form of the aggregation function based on
        the properties of the relationships.</t>
        <t>4. Use a recurrent graph neural network to model the changes in 
        network state between adjacent time steps.</t>
        <t>5. Train the model parameters using the collected data.</t>
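        <t>For illustration only, the five-step design process above can be 
        expressed as a pipeline skeleton. Every function name and data 
        structure below is a hypothetical placeholder standing in for the 
        real step, not part of the proposed method.</t>
        <sourcecode type="python"><![CDATA[
```python
# Skeleton of the five-step design process as a pipeline; every function
# here is a hypothetical placeholder standing in for the real step.

def determine_scope(intent):                         # Step 1
    return intent["granularity"], intent["configs"]

def abstract_to_relation_graph(configs, knowledge):  # Step 2
    return {"configs": configs, "knowledge": knowledge}

def configure_ngn(graph, granularity):               # Step 3
    return {"graph": graph, "granularity": granularity, "recurrent": False}

def add_state_transition(model):                     # Step 4
    model["recurrent"] = True
    return model

def train(model, data):                              # Step 5
    model["trained_on"] = len(data)
    return model

def build_network_model(intent, knowledge, data):
    granularity, configs = determine_scope(intent)
    graph = abstract_to_relation_graph(configs, knowledge)
    model = configure_ngn(graph, granularity)
    model = add_state_transition(model)
    return train(model, data)

model = build_network_model(
    {"granularity": "flow", "configs": ["topology", "routing"]},
    knowledge="expert", data=[1, 2, 3])
```
]]></sourcecode>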
      </section>
      <section numbered="true" toc="default">
        <name>Building a Network Model</name>
        <t>This section describes the process and results of network modeling, 
         i.e., the four steps (Steps 2 to 5) of the network modeling design 
         process in Section 4.2.</t>
        <section numbered="true" toc="default">
          <name>Networking System as a Relation Graph</name>
          <t>The network system is represented as a heterogeneous relationship 
         graph (referred to as "graph" hereafter) to provide a unified interface 
         for simulating various network configurations and their complex 
         relationships. Network entities related to performance are mapped to 
         graph nodes with relevant characteristics. Heterogeneous nodes represent 
         different network entities based on their attributes or configurations. 
         Edges in the graph connect nodes that are considered directly related. 
         There are two types of nodes in the graph: physical nodes representing 
         specific network entities with local configurations (e.g., switches with 
         buffers of a certain size), and virtual nodes representing 
         performance-related entities (e.g., flows or paths), thus allowing final 
         performance metrics to be attached to the graph. Edges reflect the 
         relationships between entities and can be used to embed biases induced 
         by domain knowledge. Specifically, edges can be used to model local or 
         global configurations.</t>
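          <t>For illustration only, such a heterogeneous relation graph could be 
         represented with plain data structures; the node names, attributes, and 
         helper function below are hypothetical and not part of the proposed 
         method.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch: a heterogeneous relation graph as plain Python
# dictionaries.  Physical nodes (switches, links) carry local
# configuration; virtual nodes (flows, paths) let performance metrics
# attach to the graph.  All names and attributes are hypothetical.

def make_node(node_type, **attrs):
    return {"type": node_type, "attrs": attrs}

graph = {
    "nodes": {
        "sw1": make_node("switch", buffer_kb=512),
        "sw2": make_node("switch", buffer_kb=512),
        "l12": make_node("link", bandwidth_gbps=10, prop_delay_us=5),
        "f1": make_node("flow", size_mb=8.0),
        "p1": make_node("path", hops=["sw1", "l12", "sw2"]),
    },
    # Edges connect directly related entities (a flow traverses a path,
    # a path crosses switches and links).
    "edges": [("f1", "p1"), ("p1", "sw1"), ("p1", "l12"), ("p1", "sw2")],
}

def neighbors(graph, node):
    """Return the neighbors of `node`, grouped by their node type."""
    grouped = {}
    for a, b in graph["edges"]:
        for u, v in ((a, b), (b, a)):
            if u == node:
                grouped.setdefault(graph["nodes"][v]["type"], []).append(v)
    return grouped
```
]]></sourcecode>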
        </section>
        <section numbered="true" toc="default">
          <name>Message-passing on the Heterogeneous Graph</name>
          <t>Use Networking Graph Networks (NGN) <xref target="battaglia2018" format="default"/> 
          as the fundamental building block for network modeling. An NGN module is 
          defined as a "graph-to-graph" module with heterogeneous nodes that 
          takes an attribute graph as input and, after a series of message-passing 
          steps, outputs another graph with different attributes. 
          Attributes represent the features of nodes and are represented 
          as tensors of fixed dimensions. Each NGN block contains multiple
         configurable functions, such as aggregation, transformation, and update 
         functions, which can be implemented using standard neural networks and shared 
         among same-type nodes. The aggregation function can take the form of a 
         simple sum or an RNN, while the transformation function maps the 
         information of heterogeneous nodes into the same hidden space as the 
         target-type nodes, allowing for unified operations in the update 
         function without limiting the modeling capability of GNNs.</t>
          <t>One feed-forward NGN pass can be viewed as one step of message passing 
         on the graph. In each round of message passing, nodes aggregate same-type 
         messages using the corresponding aggregation function and transform the 
         aggregated messages using the type transformation function to handle
         heterogeneous nodes. The transformed messages are then fed into the update 
         function to update the node's state. After a specified number of rounds of 
         message passing, a readout function is used to predict the final performance
         metric.</t>
          <t>Typically, NGNs first perform a global update and then independent local 
         updates for nodes in each local domain. Circular dependencies between different
         update operations can be resolved through multiple rounds of message 
         passing.</t>
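          <t>A minimal sketch of one message-passing round follows, assuming 
         scalar node states and sum aggregation; in a real NGN block the 
         transformation and update functions are learned neural networks, so 
         the hand-written functions below are illustrative stand-ins only.</t>
          <sourcecode type="python"><![CDATA[
```python
# Minimal sketch of one round of heterogeneous message passing, assuming
# scalar node states and sum aggregation.  The transformation and update
# functions here are hand-written stand-ins for learned neural networks.

def message_passing_round(states, edges, node_types, transform, update):
    """states: {node: float}; edges: undirected (u, v) pairs;
    transform: {source_type: f(aggregated_msg)} mapping messages into the
    target's hidden space; update: f(old_state, transformed_msg_sum)."""
    new_states = {}
    for node in states:
        # Aggregate same-type messages with a sum, per source type.
        per_type = {}
        for a, b in edges:
            for u, v in ((a, b), (b, a)):
                if u == node:
                    t = node_types[v]
                    per_type[t] = per_type.get(t, 0.0) + states[v]
        # Transform each per-type aggregate, then update the node state.
        transformed = sum(transform[t](m) for t, m in per_type.items())
        new_states[node] = update(states[node], transformed)
    return new_states

node_types = {"s1": "switch", "s2": "switch", "f1": "flow"}
states = {"s1": 1.0, "s2": 2.0, "f1": 0.5}
edges = [("f1", "s1"), ("f1", "s2")]
transform = {"switch": lambda m: 0.1 * m, "flow": lambda m: 0.2 * m}
update = lambda old, msg: 0.5 * old + msg
out = message_passing_round(states, edges, node_types, transform, update)
```
]]></sourcecode>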
        </section>
        <section numbered="true" toc="default">
          <name>State Transition Learning</name>
          <t>The network model needs to support fine-grained prediction granularity 
         and transient prediction (such as the state of a flow) at short time scales. 
         To achieve this, this document uses the recurrent form of the NGN module to 
         learn to predict future states from the current state. The model advances 
         in discrete time steps and has an "encoder-processor-decoder" structure.</t>
          <artwork name="State transition learning" type="" align="left" alt="">
            <![CDATA[ 
                       +-------------------+
                       | +--------------+  |
                       | | +----------+ |  |
G_hidden(t-1)---^----->| +>| NGN_core |-+  |------+----->G_hidden(t)
                |      |   +----------+    |      |
         +------+----+ |Message passing x M| +----V------+
G_in(t)->|NGN_encoder| +-------------------+ |NGN_decoder|->G_out(t)
         +-----------+      Processor        +-----------+
         
         Figure 2: State transition learning]]>
          </artwork>
          <t>These three components are NGN modules with the same abstract 
         graph but different neural network parameters.</t>
          <t>Encoder: converts the input state into a fixed-dimensional vector, 
         independently encoding different nodes, ignoring relationships between 
         nodes, and not performing message passing.</t>
          <t>Processor: performs M rounds of message passing, with the input being 
         the output of the encoder and the previous output of the processor.</t>
          <t>Decoder: independently decodes different nodes as the readout function, 
         extracting dynamic information from the hidden graph, including the current 
         performance metrics and the state used for the next step's state update. 
         Note that the next input graph G_in(t+1) is updated according to G_out(t), 
         which is not shown in Figure 2.
          </t>
          <t>To support state transition modeling, the model distinguishes between 
         the static and dynamic features of the network system and represents them as 
         different graphs. The static graph contains the static configuration of the
         system, including physical node configurations (such as queue priorities 
         and switch buffer sizes) and virtual node configurations (such as flow sizes).
         The dynamic graph contains the temporary state of the system, mainly related 
         to virtual nodes (such as the remaining size of a flow or end-to-end delay of
         a path). In addition, when considering dynamic configurations (such as 
         time-varying ECN thresholds), the actions taken (i.e., new configurations)
         should be placed in the dynamic graph and input at each time step.</t>
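          <t>The rollout of this recurrent structure can be sketched as follows, 
         with scalars standing in for graphs and trivial functions standing in for 
         the NGN encoder, processor, and decoder; this is an illustrative skeleton 
         of the control flow, not the actual model.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative skeleton of the recurrent "encoder-processor-decoder"
# rollout: scalars stand in for graphs, and trivial functions stand in
# for the NGN modules.  This sketches the control flow only.

def rollout(g_in_seq, encoder, processor, decoder, rounds):
    """g_in_seq: per-step dynamic inputs G_in(t); returns G_out(t) per step.
    The hidden state G_hidden carries information across time steps."""
    hidden = 0.0
    outputs = []
    for g_in in g_in_seq:
        encoded = encoder(g_in)     # encode nodes independently, no message passing
        for _ in range(rounds):     # processor: M rounds of message passing
            hidden = processor(encoded, hidden)
        outputs.append(decoder(hidden))  # readout of metrics and next-step state
    return outputs

encoder = lambda x: 2.0 * x
processor = lambda enc, h: 0.5 * h + enc
decoder = lambda h: h + 1.0
outs = rollout([1.0, 0.0], encoder, processor, decoder, rounds=2)
```
]]></sourcecode>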
        </section>
        <section numbered="true" toc="default">
          <name>Model Training</name>
          <t>The L2 loss between the predicted values and the corresponding true 
        values is used to supervise the output features of each node generated 
        by the decoder during model training. To generate long-term prediction 
        trajectories, the model iteratively feeds the updated absolute 
        state predictions back to the model as input. As a data preprocessing 
        and postprocessing step, the input and output of the NGN model are 
        standardized.</t>
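          <t>For illustration, the L2 loss and the standardization step could 
        look as follows; the statistics and values below are hypothetical.</t>
          <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of the training objective: inputs and outputs are
# standardized, and decoder outputs are supervised with an L2 loss.
# The statistics and values below are hypothetical.

def standardize(xs, mean, std):
    """Pre/postprocessing step: zero-mean, unit-variance scaling."""
    return [(x - mean) / std for x in xs]

def l2_loss(pred, true):
    """Mean squared (L2) loss between predicted and true node outputs."""
    assert len(pred) == len(true)
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

pred = [1.0, 2.0, 3.0]   # decoder outputs for three nodes (hypothetical)
true = [1.0, 2.5, 2.0]   # corresponding ground-truth values
loss = l2_loss(pred, true)
```
]]></sourcecode>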
        </section>
      </section>
      <section numbered="true" toc="default">
        <name>Model Performance in Data Center Networks and Wide Area Networks</name>
        <section numbered="true" toc="default">
          <name>QoS Inference in Data Center Networks</name>
          <t>This use case aims to verify whether the model can accurately perform
        time-series inference and generalize to unseen configurations, demonstrating 
        the application of online performance monitoring. The network model needs to 
        infer the evolution of path-level latency in the time series given real-time 
        measurements of traffic on the given path. The dataset used in this 
        scenario is generated by ns-3 <xref target="NS-3" format="default"/>. 
        Under specific experimental settings, the MAPE of path-level latency 
        can be controlled below 7% <xref target="wang2022" format="default"/>.
          </t>
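          <t>For reference, the MAPE metric used above can be computed as 
        follows; the latency values in the example are illustrative, not 
        measured data.</t>
          <sourcecode type="python"><![CDATA[
```python
# Mean Absolute Percentage Error (MAPE), the metric used to evaluate
# path-level latency inference.  The values below are illustrative.

def mape(pred, true):
    """MAPE in percent; assumes no true value is zero."""
    assert len(pred) == len(true) and all(t != 0 for t in true)
    return 100.0 * sum(abs((p - t) / t) for p, t in zip(pred, true)) / len(true)

# hypothetical predicted vs. measured per-path latencies (ms)
error = mape([10.5, 20.0, 29.0], [10.0, 20.0, 30.0])
```
]]></sourcecode>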
        </section>
        <section numbered="true" toc="default">
          <name>Time-Series Prediction in Data Center Networks</name>
          <t>This use case verifies whether the model can provide flow-level time-series 
        modeling capability under different configurations. Unlike the previous case, 
        the behavior of the network model in this case is like a network simulator, 
        which needs to predict the Flow Completion Time (FCT) without traffic collection
        information, only using flow descriptions and static topology information as 
        input. The dataset used in this scenario is generated by ns-3 
        <xref target="NS-3" format="default"/>.
        Under specific experimental settings, the predicted FCT distribution 
        matches the true distribution well, with a Pearson correlation 
        coefficient of 0.9 <xref target="wang2022" format="default"/>. In 
        addition, the model can also predict throughput, latency, and other 
        path/flow-level metrics in time-series prediction. Theoretical 
        analysis combined with experimental verification shows that the model 
        does not accumulate errors in long-term time-series prediction.
          </t>
        </section>
        <section numbered="true" toc="default">
          <name>Steady-State QoS Inference in Wide Area Networks</name>
          <t>This use case aims to verify that the model can work in the Wide Area
        Network (WAN) scenario and demonstrate that the model can effectively model
        and generalize to global and local configurations, which reflects the application
        of offline network planning. It is worth noting that the WAN scenario has more 
        topology changes compared to the data center network scenario, which imposes higher 
        demand on the model's performance. A public network modeling dataset 
        <xref target="NM-D" format="default"/> is used in this scenario for 
        evaluation. Under specific experimental settings, the model is 
        verified on three different WAN topologies (NSFnet, GEANT2, and 
        RedIRIS) and achieves a 50th-percentile APE of 10% for path-level 
        latency, which is comparable to the performance of the domain-specific 
        model RouteNet <xref target="rusek2019" format="default"/>. This use 
        case verifies the model's generalization across topologies and 
        configurations and its versatility in the scenario.
          </t>
        </section>
      </section>
    </section>
    <section anchor="Conclusion" numbered="true" toc="default">
      <name>Conclusion</name>
      <t>This document presents a network performance modeling method based on 
      graph neural networks, addressing the challenges of network modeling in 
      terms of expressiveness and modeling granularity. The model's versatility 
      and generalization are verified in typical network scenarios, and good 
      performance prediction is achieved.</t>
    </section>
    <section anchor="security-considerations" numbered="true" toc="default">
      <name>Security Considerations</name>
      <t>TBD.</t>
    </section>
    <section anchor="iana-considerations" numbered="true" toc="default">
      <name>IANA Considerations</name>
      <t>TBD.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>Informative References</name>
      <reference anchor="NS-3" target="https://www.nsnam.org/">
        <front>
          <title>Network Simulator, NS-3</title>
          <author/>
        </front>
      </reference>
      <reference anchor="NM-D" target="https://github.com/BNN-UPC/NetworkModelingDatasets">
        <front>
          <title>Network Modeling Datasets</title>
          <author/>
        </front>
      </reference>
      <reference anchor="I-D.irtf-nmrg-network-digital-twin-arch" target="https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-network-digital-twin-arch-02" xml:base="https://bib.ietf.org/public/rfc/bibxml-ids/reference.I-D.irtf-nmrg-network-digital-twin-arch.xml">
        <front>
          <title>Digital Twin Network: Concepts and Reference Architecture</title>
          <author fullname="Cheng Zhou" initials="C." surname="Zhou">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Hongwei Yang" initials="H." surname="Yang">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Xiaodong Duan" initials="X." surname="Duan">
            <organization>China Mobile</organization>
          </author>
          <author fullname="Diego Lopez" initials="D." surname="Lopez">
            <organization>Telefonica I+D</organization>
          </author>
          <author fullname="Antonio Pastor" initials="A." surname="Pastor">
            <organization>Telefonica I+D</organization>
          </author>
          <author fullname="Qin Wu" initials="Q." surname="Wu">
            <organization>Huawei</organization>
          </author>
          <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
            <organization>Orange</organization>
          </author>
          <author fullname="Christian Jacquenet" initials="C." surname="Jacquenet">
            <organization>Orange</organization>
          </author>
          <date day="24" month="October" year="2022"/>
          <abstract>
            <t>Digital Twin technology has been seen as a rapid adoption technology in Industry 4.0. The application of Digital Twin technology in the networking field is meant to develop various rich network applications and realize efficient and cost effective data driven network management and accelerate network innovation. This document presents an overview of the concepts of Digital Twin Network, provides the basic definitions and a reference architecture, lists a set of application scenarios, and discusses the benefits and key challenges of such technology.</t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-irtf-nmrg-network-digital-twin-arch-02"/>
      </reference>
      <reference anchor="wang2022">
        <front>
          <title>xNet: Improving Expressiveness and Granularity for Network 
          Modeling with Graph Neural Networks</title>
          <author surname="Wang" fullname="Mowei Wang"/>
          <author surname="Hui" fullname="Linbo Hui"/>
          <author surname="Cui" fullname="Yong Cui"/>
          <author surname="Liang" fullname="Ru Liang"/>
          <author surname="Liu" fullname="Zhenhua Liu"/>
          <date year="2022"/>
        </front>
        <refcontent>IEEE INFOCOM</refcontent>
      </reference>
      <reference anchor="rusek2019">
        <front>
          <title>Unveiling the potential of Graph Neural Networks for network modeling and optimization in SDN</title>
          <author surname="Rusek" fullname="Krzysztof Rusek" />
          <author surname="Suarez-Varela" fullname="Jose Suarez-Varela" />
          <author surname="Mestres" fullname="Albert Mestres" />
          <author surname="Barlet-Ros" fullname="Pere Barlet-Ros" />
          <author surname="Cabellos-Aparicio" fullname="Albert Cabellos-Aparicio" />
          <date year="2019" />
        </front>
      </reference>
      <reference anchor="battaglia2018"><front><title>Relational inductive biases, deep learning, and graph networks</title><author surname="Battaglia" fullname="Peter W Battaglia" /><author surname="Hamrick" fullname="Jessica B Hamrick" /><author surname="Bapst" fullname="Victor Bapst" /><author surname="Sanchez-Gonzalez" fullname="Alvaro Sanchez-Gonzalez" /><author surname="Zambaldi" fullname="Vinicius Zambaldi" /><author surname="Malinowski" fullname="Mateusz Malinowski" /><author surname="Tacchetti" fullname="Andrea Tacchetti" /><author surname="Raposo" fullname="David Raposo" /><author surname="Santoro" fullname="Adam Santoro" /><author surname="Faulkner" fullname="Ryan Faulkner" /><author surname="others" fullname=" others" /><date year="2018" /></front></reference>
    </references>
    <section anchor="acknowledgements" numbered="false" toc="default">
      <name>Acknowledgements</name>
      <t></t>
    </section>
  </back>
</rfc>
