In general:
* Remember that the point of RINA is that it provides a building block (the DIF) that is adaptable to different requirements (through different policies). This is the fundamental tool, which can be used as many times as necessary, building up structures of DIFs. The building block serves to separate different scopes (for example, networks of different providers; different regions of the network within a provider (metro, regional, backbone); different user VPNs; etc.)
* Each DIF must provide service to the applications above it, relying on the characteristics of the DIFs below it (like a "casteller" human tower, each level standing on the one beneath)
* The numbers are addresses; A1, A2, B1, B2, etc. are application names
* PoA -> Point of Attachment (the point at which a process is connected to the network). Looking at the figure, the PoA of A1 is B1, the PoAs of A3 are C3 and D1, etc. Application name spaces are not tied to any layer or DIF, recognizing that they may all be members of other DIFs.
IPC Process Components

Data transfer service API

This is the only externally visible API for application processes using the IPC Process services. It allows applications to make themselves available through a DIF and to request and use IPC services to other applications. The abstract API has six operations (implementations may have more operations for convenience of use and to adapt to the specifics of each operating system, but still logically provide the same operations):

* portId allocateFlow(destAppName, List<qosParams>). Enables an application to allocate a flow to a destination application (identified by destAppName), specifying a list of desired QoS parameters. The operation returns a handle to the flow, the portId, used in the other operations to read/write SDUs (Service Data Units, the user data) on the flow.
* void write(portId, sdu). Sends an SDU through the flow identified by portId. SDUs are buffers of user data with a certain length. SDUs are delivered to the destination application exactly as they were written by the source application.
* sdu read(portId). Reads an SDU from the flow identified by portId.
* void registerApplication(appName, List<DIFName>). Registers the application identified by appName with the DIFs identified in the list of DIF names. This operation advertises the application within a DIF so that flows can be allocated to it (it is always up to the application to take the final decision, accepting or refusing them).
* void unregisterApplication(appName, List<DIFName>). Unregisters an application from a set of DIFs, or from all DIFs if the second argument is absent.
* void deallocateFlow(portId). Deallocates the flow identified by portId, releasing its resources.

More information about the data transfer service API is available in the “Data Transfer Service Definition” specification, pages 179-192 of the “RINA specification handbook”.
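To make the shape of the abstract API concrete, here is a minimal in-memory sketch in Python. It is purely illustrative: the class name, the loopback-style delivery, and the internal data structures are assumptions, not part of any RINA specification or implementation.

```python
# Hypothetical in-memory sketch of the abstract data transfer service API.
# Names follow the abstract operations above; everything else is invented
# for illustration (no real network, SDUs loop back inside one process).
import itertools

class DIF:
    """A toy DIF that tracks registered applications and allocated flows."""
    _port_ids = itertools.count(1)   # portId handles returned to applications

    def __init__(self, name):
        self.name = name
        self.registered = set()      # application names reachable via this DIF
        self.flows = {}              # portId -> flow state

    def register_application(self, app_name):
        self.registered.add(app_name)

    def unregister_application(self, app_name):
        self.registered.discard(app_name)

    def allocate_flow(self, dest_app, qos_params):
        if dest_app not in self.registered:
            raise LookupError(f"{dest_app} is not registered in DIF {self.name}")
        port_id = next(DIF._port_ids)
        self.flows[port_id] = {"dest": dest_app, "qos": qos_params, "buf": []}
        return port_id               # handle used by write()/read()

    def write(self, port_id, sdu):
        # SDUs are delivered as written: whole buffers, boundaries preserved
        self.flows[port_id]["buf"].append(bytes(sdu))

    def read(self, port_id):
        return self.flows[port_id]["buf"].pop(0)

    def deallocate_flow(self, port_id):
        del self.flows[port_id]
```

Note how, unlike the BSD sockets API, the application names the destination application directly and states its QoS requirements, rather than addressing an (IP, port) pair.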
SDU Delimiting

The first step in this processing path is to delimit the SDUs posted by the application, since the data transfer protocol may implement concatenation and/or fragmentation of the SDUs in order to achieve better data transport efficiency and/or to better adapt to the DIF characteristics. More information about the SDU Delimiting component is available in the “Specification template for a DIF delimiting module” specification, pages 193-194 of the “RINA specification handbook”.

Error and Flow Control Protocol (EFCP)

The Error and Flow Control Protocol (EFCP) is split into two parts: the Data Transfer Protocol (DTP) and the Data Transfer Control Protocol (DTCP), loosely coupled through the use of a state vector. DTP performs the mechanisms that are tightly coupled to the transported SDU, such as fragmentation, reassembly, sequencing, addressing, concatenation and separation. DTCP performs the mechanisms that are loosely coupled to the transported SDU, such as transmission control, retransmission control and flow control. When a flow is allocated, an instance of DTP and its associated state vector are created. Flows that require flow control, transmission control or retransmission control also have a companion DTCP instance allocated. The string of octets exchanged between two protocol machines is referred to as a Protocol Data Unit (PDU). PDUs consist of two parts: Protocol Control Information (PCI) and user data. The PCI is the part understood by the DIF, while the user data is opaque to the DIF and is passed up to its user. The PDUs generated by EFCP are passed to the relaying and multiplexing components. RINA’s EFCP is based on delta-t, a protocol designed by Richard Watson in 1981. Watson proved that the necessary and sufficient condition for reliable synchronization is to bound three timers: Maximum Packet Lifetime (MPL), maximum time to acknowledge, and maximum time to keep retransmitting.
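Watson's result can be illustrated with a small sketch. The exact retention formula below (2·MPL plus the acknowledgement and retransmission bounds) is a commonly quoted form of the delta-t bound, stated here as an assumption rather than taken from the specification; the point is only that once the three timers are bounded, a fixed idle period suffices to discard connection state safely.

```python
# Illustrative sketch of the delta-t idea (the formula is an assumed,
# commonly quoted form, not quoted from the RINA specification handbook):
# with the three timers bounded, all state for a connection can be safely
# discarded once nothing has been exchanged for 2*MPL + A + R seconds,
# so no explicit connection setup/teardown handshake is required.
def state_retention_bound(mpl, max_ack_delay, max_retx_time):
    """Seconds of inactivity after which connection state may be purged."""
    return 2 * mpl + max_ack_delay + max_retx_time

def may_discard_state(idle_time, mpl, max_ack_delay, max_retx_time):
    """True if the connection has been idle long enough to forget it."""
    return idle_time >= state_retention_bound(mpl, max_ack_delay, max_retx_time)
```

For example, with MPL = 30 s, a 5 s acknowledgement bound and a 10 s retransmission bound, state can be discarded after 75 s of silence.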
In other words: SYNs and FINs in TCP are unnecessary, allowing for a simpler and more secure data transfer protocol. More information about the EFCP component is available in the “Error and Flow Control Protocol Specification: Data Transfer + Data Transfer Control” specification, pages 195-232 of the “RINA specification handbook”.

Relaying and Multiplexing Task (RMT)

The role of the Relaying task is to forward the PDUs passing through the IPC Process to the destination EFCP Protocol Machine (PM) by checking the destination address in the PCI. The forwarding decision is based on the routing information and the agreed Quality of Service. The Multiplexing task multiplexes PDUs from different EFCP instances onto the points of attachment of lower-ranking (N-1) DIFs. Several policies decide when and where PDUs are forwarded (management of queues, scheduling, queue lengths); these policies affect the delivered Quality of Service. More information about the RMT component is available in the “Relaying and Multiplexing Task” specification, pages 233-240 of the “RINA specification handbook”.

SDU Protection

SDU Protection includes all the checks necessary to determine whether or not a PDU should be processed further (for incoming PDUs), and the mechanisms to protect the contents of the PDU while in transit to another IPC Process that is a member of the DIF (for outgoing PDUs). It may include, but is not limited to, checksums, CRCs, encryption, and Hop Count/Time To Live mechanisms. The SDU Protection mechanisms to be applied may change hop by hop (since they depend on the characteristics of the underlying DIFs). In RINA, Deep Packet Inspection is unnecessary and often impossible. More information about the SDU Protection component is available in the “Specification Template for a DIF SDU Protection module” specification, pages 241-244 of the “RINA specification handbook”.
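The Relaying task's forwarding decision described above can be sketched as a table lookup. The keying of the table on a (destination address, QoS class) pair and the field names are illustrative assumptions; the actual table structure is policy-dependent.

```python
# Hedged sketch of the Relaying task's forwarding decision: the forwarding
# table (produced by the routing component) maps a (destination address,
# QoS class) pair to the N-1 port the PDU should be multiplexed onto.
# The keying scheme and field names are assumptions for illustration.
def build_forwarding_table(entries):
    """entries: iterable of (dest_addr, qos_class, n1_port) tuples."""
    return {(dest, qos): port for dest, qos, port in entries}

def forward(table, pdu):
    """Pick the N-1 port for a PDU from fields carried in its PCI."""
    key = (pdu["dest_addr"], pdu["qos_id"])
    if key not in table:
        raise LookupError(f"no forwarding entry for {key}")
    return table[key]
```

Because QoS is part of the key, PDUs of the same destination but different QoS classes can legitimately take different paths or queues.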
The Resource Information Base (RIB) and the RIB Daemon

The Resource Information Base (RIB) is the logical representation of the objects that capture the information defining an application's state. For the IPC Process, this means objects that represent information about address mappings, resource allocation, connectivity, available applications, security credentials, established flows, forwarding and routing tables, and so on. The RIB Daemon is the task that controls access to the RIB and optimizes the operations performed on the RIB by other components of the IPC Process. More information about the RIB and RIB Daemon components is available in the “Specification of Managed Objects for the Demo DIF” specification, pages 281-289 of the “RINA specification handbook”.

The Common Distributed Application Protocol (CDAP) and the Common Application Connection Establishment Phase (CACEP)

The Common Distributed Application Protocol, CDAP, is the canonical application protocol: similar to an assembly language, it can be used to build all distributed applications. CDAP provides six primitives to operate on remote objects: create, delete, read, write, start and stop. IPC Processes use CDAP to modify the RIBs of other IPC Processes, which triggers changes in the behaviour of those IPC Processes. CDAP is modelled after OSI’s CMIP, the Common Management Information Protocol. Any existing application protocol can use the DIF (i.e. can be transported by a flow); however, only CDAP is used inside the DIF, to test the theory that there is only one application protocol. More information about CACEP and CDAP is available in the “Common Application Establishment Phase” and “CDAP - Common Distributed Application Protocol” specifications, pages 106-118 and pages 119-160 of the “RINA specification handbook”, respectively.

The Enrollment Task

All communication goes through three phases: Enrollment, Allocation (Establishment), and Data Transfer. RINA is no exception.
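The six CDAP primitives acting on a RIB can be illustrated with a toy model. This is not the CDAP wire format (which carries encoded messages over a flow); it only shows the semantics of the six operations on a store of named objects, with the object layout invented for the example.

```python
# Toy illustration of the six CDAP primitives (create, delete, read,
# write, start, stop) operating on a RIB modelled as a dict of named
# objects. The object layout ("value"/"running") is an assumption made
# for this sketch, not the real managed-object schema.
class RIB:
    def __init__(self):
        self.objects = {}

    def create(self, name, value):
        self.objects[name] = {"value": value, "running": False}

    def delete(self, name):
        del self.objects[name]

    def read(self, name):
        return self.objects[name]["value"]

    def write(self, name, value):
        self.objects[name]["value"] = value

    def start(self, name):   # begin the activity an object represents
        self.objects[name]["running"] = True

    def stop(self, name):    # halt that activity
        self.objects[name]["running"] = False
```

In a real DIF, a remote IPC Process would issue these primitives over a flow and the local RIB Daemon would apply them, which is how RIB changes trigger behaviour changes in the IPC Process.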
Enrollment is the procedure by which an IPC Process joins an existing DIF and obtains enough information to start operating as a member of that DIF. Enrollment starts when the joining IPC Process establishes an application connection with another IPC Process that is already a member of the DIF. During application connection establishment, the IPC Process that is a DIF member may want to authenticate the joining process, depending on the DIF's security requirements. The CACE component (Common Application Connection Establishment) is in charge of establishing and releasing application connections. Several authentication modules can be plugged into CACE to implement different authentication policies. Once the application connection has been established, the joining IPC Process needs to acquire the DIF's static information: which QoS classes are supported and what their characteristics are, which policies the DIF supports, and other parameters such as the DIF's MPL or maximum PDU size. More information about the Enrollment task component is available in the “Basic Enrollment” specification, pages 251-256 of the “RINA specification handbook”.

The Flow Allocator (FA)

The Flow Allocator is the component responsible for managing a flow's lifecycle: allocation, monitoring and deallocation. Unlike in TCP, in RINA port allocation and data transfer are separate functions, meaning that a single flow can be supported by one or more data transport connections (in TCP a port number is mapped to one and only one TCP connection; the port numbers identify the TCP connection). The Flow Allocator (FA) component handles flow allocation/deallocation requests.
Among its tasks it has to: i) find the IPC Process through which the destination application is accessible; ii) map the requested QoS to policies that will be associated with the flow; iii) negotiate the flow allocation with the destination IPC Process's FA (access control permissions, policies associated with the flow); iv) create one or more DTP instances and, optionally, DTCP instances to support the flow; v) monitor the DTP/DTCP instances to ensure the requested QoS is maintained during the flow's lifetime, taking specific actions to correct any misbehaviour; and vi) deallocate the resources associated with the flow once the flow is terminated. More information about the FA component is available in the “Flow Allocator” specification, pages 257-268 of the “RINA specification handbook”.

The Forwarding Table Generator (Routing)

The Forwarding Table Generator (or Routing) is the IPC Process component that exchanges connectivity information with other IPC Processes of the DIF and applies an algorithm to generate the forwarding table used by the Relaying and Multiplexing Task (connectivity as well as QoS and resource allocation information is used to generate the forwarding table). Multiple algorithms, with different input information, may be needed to generate the forwarding table, depending on the QoS classes supported by the DIF. More information about the routing component is available as one of the specifications proposed by the IRATI consortium; it can be found in section 6 of this document.

The Resource Allocator (RA)

The Resource Allocator is the component that decides how the resources in the IPC Process are allocated (dimensioning of the queues, creation/suspension/deletion of queues, creation/deletion of N-1 flows, and others). More information about the RA component is available in the “RINA Reference model part 3: Distributed InterProcess Communication” document, pages 79-80 of the “RINA specification handbook”.
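The first four Flow Allocator steps (i-iv) can be sketched as follows. The directory structure, the QoS-to-policy map, and all names are assumptions made for this illustration; the inter-FA negotiation of step iii is only marked as a placeholder.

```python
# Hedged sketch of the Flow Allocator's allocation sequence, steps i-iv.
# The directory, the QoS-to-policy mapping and the flow record are all
# illustrative assumptions, not taken from the Flow Allocator spec.
QOS_TO_POLICIES = {                          # step ii: example mapping
    "reliable":    {"dtcp": True,  "retransmission": True},
    "best-effort": {"dtcp": False, "retransmission": False},
}

def allocate_flow(directory, dest_app, qos_class):
    # i) find the IPC Process through which the destination is accessible
    if dest_app not in directory:
        raise LookupError(f"{dest_app} not reachable through this DIF")
    dest_ipcp_addr = directory[dest_app]
    # ii) map the requested QoS class onto policies for the flow
    policies = QOS_TO_POLICIES[qos_class]
    # iii) negotiation with the destination FA would happen here (omitted)
    # iv) create a DTP instance, plus a DTCP instance only if needed
    return {"dest": dest_ipcp_addr,
            "dtp": True,
            "dtcp": policies["dtcp"]}
```

Steps v and vi (monitoring and deallocation) would then act on the returned flow record over its lifetime.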
Shim IPC Process over TCP/UDP

This IPC Process wraps a TCP/UDP layer and presents it with the IPC API, allowing "normal" IPC Processes to be overlaid on IP layers. More information about the shim DIF over TCP/UDP component is available in the “Specification for shim IPC Processes over IP layers” document, pages 273-280 of the “RINA specification handbook”.
Shim IPC Process over 802.1q

This IPC Process wraps an Ethernet layer and presents it with the IPC API, allowing "normal" IPC Processes to be overlaid on 802.1q layers (VLANs). More information about the shim IPC Process over 802.1q component is available as one of the specifications proposed by the IRATI consortium; it can be found in section 6 of this document.
The management agent

The Management agent is used by the DIF Management System (DMS) to monitor the state of the DIF and to make configuration changes, including policy changes relating to QoS and security.
Layer violations -> layers that look at information from other layers to do their job (e.g. the TCP pseudo-header)
Overlays / "Virtual Networks" -> layers sitting above the transport layer (TCP/UDP), for example tunneling protocols such as VXLAN, STT, NVGRE, ...

Naming, addressing and routing
--------------------------------------------
IP only assigns names to interfaces, not to nodes (so a node with 2 or more interfaces looks to the network like 2 or more nodes) -> problems with multi-homing and mobility
Today, application names are mapped to an IP address plus a TCP or UDP port through DNS, which is a system external to the network (the network only understands IP addresses) -> this complicates mobility

Congestion control
------------------------------
2 problems:
* Implicit detection (congestion is inferred from packet loss, but one cannot be sure that there really is congestion)
* Control and detection are done at TCP, which is the point furthest away from the problem (instead of detecting and fixing congestion in the part of the network where it occurs)

RINA fixes both:
* Explicit detection (in each DIF)
* Each DIF controls the congestion within itself, not within other DIFs
With feedback between all the different activities
Shim DIF = a DIF that wraps a legacy layer (e.g. TCP/UDP or an Ethernet VLAN) and presents it through the IPC API.
A brief description of the island: 5 NEC switches connected to servers and to the other OFELIA islands (for IRATI the other relevant island is iMinds).
Orange: smaller spaces
DISTRIBUTED CLOUD

SlapOS is a decentralized cloud technology used to build a physically distributed cloud. Customers' applications run in traditional datacenters, but also on servers in offices and homes. SlapOS manages the overall cloud from a logically centralized location: the SlapOS master (a distributed approach is currently under development). The SlapOS master controls the different computers running SlapOS slaves. In terms of networking, the master and the nodes at different locations are interconnected through multiple IPv6 providers. In order to guarantee high reliability (99.999%), SlapOS uses an overlay called re6st, which creates a mesh network of OpenVPN tunnels on top of several IPv6 providers and uses the Babel protocol to choose the best routes between nodes. PRISTINE will provide an alternative to the re6st overlay by using RINA on top of IPv6. The advantages and added value of using RINA instead of re6st are detailed in the description of task T2.1.

DATACENTRE NETWORKING

The datacenter space is one of the areas that has seen the most virtual networking innovation during the last few years, fuelled by the flexibility requirements of cloud computing. A myriad of SDN-based virtual network solutions, usually providing L2 over L3 or L4 tunnels plus a control plane, are available in the market (VXLAN, NVGRE, STT, etc.). PRISTINE will investigate and trial the use of RINA-based solutions for intra- as well as inter-datacenter networking. Important issues to be addressed in a datacenter environment are the mobility of Virtual Machines to allow efficient utilization of datacenter resources as well as high reliability; multi-homing support; guaranteeing the level of service in inter-datacenter communications; and flexible allocation of flows supporting compute and storage resources. RINA provides an excellent framework to tackle these issues, and the PRISTINE project will exploit it as explained in task T2.1.
NETWORK SERVICE PROVIDER

The goals of this scenario are to investigate and trial the efficiencies and benefits of a Network Service Provider (NSP) using RINA technology, and to analyze RINA as a materialization of the Network Functions Virtualization concept within an operator network. The NSP will internally use several DIFs over Ethernet in order to transport the traffic of the services provided to its customers: IPv4 and IPv6 Internet access, VoIP, Ethernet, etc., but also native RINA traffic. With PRISTINE solutions the NSP will benefit from DIFs that provide different levels of service, being able to consolidate separate infrastructures. It will also have the tools to better manage congestion within its networks and provide stronger flow isolation, achieve higher reliability, and obtain the other benefits detailed in T2.1.
Figure 4 illustrates an example of how RINA could be applied to the NRENs and GEANT scenario. The picture doesn’t show all the DIFs that would be utilized in a real scenario; it has been simplified for the sake of clarity (for example, NRENs or GEANT would be composed by more than one DIF in reality). The underlying GEANT DIF is the backbone of the system and supports the interconnection of the different NREN DIFs. NRENs would be interconnected together by jointly operating one or more peering DIFs, that would directly interconnect their border routers through the establishment of one or more flows through the GEANT backbone DIF, as shown in Figure 5. Note that there can be multiple peering DIFs, representing different NREN federations supported by GEANT (for example 1 peering DIF supported by all the NRENs, but also different peering DIFs between subsets of NRENs). Each peering DIF can have different policies, in terms of addressing, routing, security, resource allocation, data transfer, etc. Peering DIFs support DIFs that are customized for different types of applications. For example, a Public Internet DIF that gives access to the Internet, and provides a best-effort, low-security type of service. But there can be many other “application-specific DIFs” such as DIFs tailored for radio-astronomers, high-energy physics, DIFs that provide access to scientific clouds, research-project specific VPNs, etc. Again, the characteristics of each of these DIFs can be optimized for the application (or applications) they are designed to support.