
Abstract

With the advancement of computing and network virtualization technology, the networking research community shows great interest in network emulation. Compared with network simulation, network emulation can provide more relevant and comprehensive details.

In this paper, EmuStack, a large-scale real-time emulation platform for Delay Tolerant Networking (DTN), is proposed. EmuStack aims at making network emulation as simple as network simulation. Based on OpenStack, distributed synchronous emulation modules are developed to enable EmuStack to perform synchronous, dynamic, precise, and real-time network emulation. Meanwhile, the lightweight approach of using Docker container technology and network namespaces allows EmuStack to support a large-scale topology (up to hundreds of nodes) with only several physical nodes. In addition, EmuStack integrates the Linux Traffic Control (TC) tools with OpenStack to manage and emulate the virtual link characteristics, which include variable bandwidth, delay, loss, jitter, reordering, and duplication. Finally, experiences with our initial implementation suggest the ability to run and debug experimental network protocols in real time.

The EmuStack environment would bring a qualitative change to network research work.

Introduction

The current Internet is based on a number of key assumptions about the communication system, including a long-term and stable end-to-end path, small packet loss probability, and short round-trip time. However, many challenging networks (such as sensor/actuator networks and ad hoc networks) cannot satisfy one or more of those assumptions. Encouragingly, there have been increasing efforts to support these challenging networks in scenarios characterized by long delays and frequent disruptions. In particular, in order to adapt the Internet to these challenging environments, Fall proposes Delay Tolerant Networking (DTN). The key idea of DTN is custody transfer, which adopts hop-by-hop reliable delivery to guarantee end-to-end reliability.

DTN was initially invented for deep space communication, but it has gradually been applied to wireless sensor networks, ad hoc networks, and even satellite networks. In the DTN area, research on topics such as routing and congestion control has produced many results, along with a number of DTN implementations such as DTN2, ION, and IBRDTN. However, many problems such as security and contact plan design have not been resolved yet. In order to further study the DTN architecture, many experimental platforms have been designed. Koutsogiannis implements a testbed to evaluate space-suitable DTN architectures and protocols in many deep space communication scenarios. That DTN testbed can support an experimental topology of about ten nodes. Based on a general-purpose wireless network testbed, Beuran designs a testbed named QOMB.

QOMB provides good support for emulating large-scale mobile networks, but it wastes hardware resources since no compute virtualization technology is employed. Moreover, QOMB lacks a monitoring system, so experimental fidelity cannot be guaranteed, especially at large scale. Komnios introduces the SPICE testbed for researching space and satellite communication.

SPICE is equipped with special hardware and can accurately emulate the link characteristics between space and ground stations. However, due to this specialized hardware, SPICE is hard for other researchers to replicate.

Meanwhile, without network virtualization technology, the emulation topology of SPICE is fixed and difficult to change. With the advancement of network and compute virtualization technology, it has become much easier to design and implement a scalable and flexible emulation platform. In this work, EmuStack, a network emulation platform for DTN, is introduced. Our design objective is to enable EmuStack to support large-scale, real-time, and distributed network emulation and to provide synchronous, dynamic, and precise management of topology and link characteristics. For example, Docker container technology is utilized as the compute virtualization technique to efficiently virtualize several physical emulation nodes into hundreds of virtual emulation nodes. By integrating the Linux Traffic Control (TC) utility with OpenStack, EmuStack can achieve fine-grained control of the virtual topology and link characteristics. Meanwhile, OpenStack is composed of various independent modules; thus it provides good support for developing the other EmuStack functionalities. To improve the performance of EmuStack, many OpenStack subprojects are adopted.

An example is Ceilometer, which is lightly extended and integrated into EmuStack to ensure experimental fidelity and to monitor, alarm on, and collect relevant data. Building on our initial work, in this paper we further present details of controlling link characteristics and analyze the reason for the link rate-limiting difference between the Ethernet device of a virtual emulation node and the TAP device of a physical emulation node. Moreover, we further describe EmuStack scalability and performance and discuss their main factors. Additionally, we provide one more DTN experiment to better evaluate and demonstrate the performance of EmuStack. The remainder of this paper is organized as follows. In Section , we introduce related work. In Sections  and , we present the architectural design and implementation of EmuStack and thoroughly discuss its performance.

Then we reproduce two published classic DTN experiments and compare and analyze the key experimental results in Section . Finally, in Section , we conclude this paper and outline future work.

Related Work

Recently, with the advancement of container virtualization technology, network researchers have shown interest in employing containers to construct experimental platforms that support large-scale topology experiments. Emulab is one of the well-known testbeds using container virtualization in Linux. Due to the efficiency of containers, Emulab scales well. Although the technologies introduced in Emulab are no longer the latest, its design philosophy is still helpful for researchers designing large-scale testbeds.

Additionally, Lantz et al. designed Mininet based on container virtualization techniques, including process and network namespaces. Mininet supports SDN and runs on a single computer. Handigol et al. further improved Mininet performance with enhancements to resource provisioning, isolation, and the monitoring system. Besides, Handigol replicated a number of previously published experimental results and showed that Linux Container (LXC) technology is not only lightweight but also offers good fidelity and performance. In order to perform an in-depth performance evaluation of LXC, Xavier et al.

conducted a number of experiments to evaluate various compute virtualization technologies and showed that LXC virtualization has near-native performance on CPU, memory, disk, and network. Therefore, in EmuStack, we employ Docker containers (based on LXC) as the compute virtualization technology. OpenStack is an open-source framework mainly for building private and public clouds, consisting of loosely coupled components that control hardware pools of compute, network, and storage resources. OpenStack is composed of many independent modules, and anyone can add additional components to meet their requirements. Therefore, OpenStack is a good choice for developing an emulation platform.

Architectural Design

This section describes the overall architecture of EmuStack from the perspective of hardware and software.

Hardware

Figure shows the EmuStack hardware structure (where gray rectangles stand for the primary services installed). EmuStack hardware can be composed of only several physical nodes (general-purpose computers). There are two types of physical nodes: the network emulator and physical emulation nodes. The network emulator is the core hardware in EmuStack: a physical node equipped with multiple NICs, and it plays multiple roles. It is not only an OpenStack controller node, which manages compute and network resources, and an OpenStack network node, which manages virtual emulation networks, but also an emulation orchestrator, which is responsible for creating emulation parameters and orchestrating the overall CPU, memory, and network resources.

In addition, a physical emulation node is an OpenStack compute node, which hosts all virtual emulation nodes and executes the emulation control commands from the network emulator.

EmuStack hardware structure.

In EmuStack, there are two types of physical networks, namely, the management network and the emulation network. The management network carries management traffic, which consists of lightweight control information and usually is not the performance bottleneck. The emulation network transfers emulation data, which consumes much bandwidth and varies greatly with different DTN protocol experiments. Therefore, the physical emulation network can become the main limitation of EmuStack. For an EmuStack system with only several physical nodes, adopting a star structure solves the emulation traffic bottleneck problem, as shown in the bottom right of Figure.

In this structure, all emulation NICs of the physical emulation nodes are directly connected to those of the network emulator. The NICs of the network emulator are attached to an Open vSwitch bridge, whose "internal" device, named after the bridge, is assigned an IP address belonging to the emulation network. In practice, this physical emulation network structure meets most requirements of our DTN research; however, if researchers want to construct an EmuStack system consisting of dozens of physical nodes, this structure becomes infeasible, since the network emulator would not have enough NICs to connect directly to all the emulation NICs of the physical emulation nodes. For a system with dozens of physical nodes, the physical emulation network can employ several physical switches to carry the emulation data, as the management network does.

In this scheme, as the first step, we determine which of the physical NICs on the network emulator (and on the physical emulation nodes) will carry management traffic. Then we connect all the remaining NICs of the network emulator (and of the physical emulation nodes) to those physical switches. The switch ports need to be configured to allow trunked or general traffic. Finally, for an EmuStack system with hundreds of physical nodes, as part of future work, we will extend the network emulator to support distributed processing and enable multiple network emulators to exist in EmuStack.

Software

Figure gives a synopsis of the EmuStack software, involving the network emulator, physical emulation node, and virtual emulation node. As the key component of EmuStack, the network emulator carries many open-source services and customized service extensions.

The Nova service and the core plugin of the Neutron service are responsible for initializing the virtual emulation nodes and the virtual emulation network, respectively. These services can also create, modify, and delete virtual emulation nodes and virtual emulation networks. The Neutron-Netem service is responsible for generating experimental parameters and data to dynamically control the experimental program, topologies, and link characteristics. Meanwhile, in order to provide sufficient fidelity and reduce experimental complexity at the same time, we adopt the Telemetry (Ceilometer) service to monitor hardware resources and collect experimental data. In addition, Keystone, Horizon, and Glance provide support for authentication, authorization, the service catalog, the web interface, and image services. Besides, as part of future work, we will develop the orchestrator on the basis of the OpenStack Heat service to orchestrate distributed hardware resources more efficiently and flexibly. Most of these services are open-source projects available in OpenStack; hence we only need to integrate them to meet most EmuStack design requirements.

In order to implement a synchronous, dynamic, precise, and real-time emulation control service, we design and implement the Neutron-Netem service and Neutron-Netem agent, which will be further discussed in Section .

Synopsis of the EmuStack software.

As shown in the bottom left of Figure, a physical emulation node is regarded as a compute node in OpenStack, on which virtual emulation nodes are hosted. The physical emulation node runs the Nova-Compute service, driven by the Docker hypervisor, to manage virtual emulation nodes, and the Open vSwitch agent to execute the network management commands (create, modify, and delete) from the network emulator. The Open vSwitch agent employs two Open vSwitch (OVS) bridges, "OVS for emulation" and "OVS for control", to manage the virtual emulation networks and the virtual management network, respectively; it manages these virtual networks by configuring flow rules on the two bridges.

Moreover, as the agent of the Ceilometer service on the network emulator, the Telemetry Agent is responsible for publishing collected data to the network emulator through the management network and for raising alarms once collected data violates the monitoring rules. Finally, the Neutron-Netem agent is designed to precisely and dynamically control emulation topologies and link characteristics, which will be further introduced in Section . As shown in the upper left of Figure, a virtual emulation node (VEN) is a Docker container hosted on a physical emulation node. It is spawned from an operating system image in which the Network Time Protocol (NTP) service, custom network protocol software, and the Puppet client service can be installed. In particular, the Puppet client service can be used by virtual emulation nodes to receive control information from the network emulator or physical emulation nodes. Note that time synchronization is essential for EmuStack. The DTN bundle protocol depends on absolute time to determine whether received packets have expired. Furthermore, EmuStack must ensure that the experimental programs on different virtual emulation nodes execute synchronously in the correct time sequence. Therefore, Chrony, an implementation of NTP, is installed on all nodes to provide proper time synchronization.

In detail, the network emulator is configured to reference accurate time servers, while the physical and virtual emulation nodes refer to the network emulator. In the local area network (LAN) of EmuStack, the time synchronization precision reaches 0.1 milliseconds, which meets the requirements of most emulation experiments.

Implementation

This section describes the details of the EmuStack core modules (the Neutron-Netem service and Neutron-Netem agent).


Firstly, in order to sketch the outline of the EmuStack implementation, the EmuStack emulation workflow is described in Section . Secondly, Sections ,  and  present the details of emulation synchronous control, topology control, and customization of link characteristics, respectively. Finally, the scalability and performance of EmuStack are discussed in Section .

Emulation Workflow

Before the emulation begins, we first create a virtual machine image, in which special software and shell scripts should be installed to fulfill the specific experimental requirements. For example, you must install an SSH server (or Puppet client) into the image and ensure that it starts on boot with the correct configuration, or you may install shell scripts to collect some experimental results. Next, we create virtual networks before launching virtual emulation nodes. The virtual networks are of two types, namely, the management network and the emulation network.

The management network is a Neutron flat network in OpenStack, on which all nodes (virtual emulation nodes, physical emulation nodes, and the network emulator) reside and no VLAN tags are used. The emulation network involves one or more private virtual networks. Moreover, one virtual emulation node can belong to one or more virtual emulation networks. After creating the virtual networks, we launch a sufficient number of virtual emulation nodes and initialize the virtual networks, right before running the emulation. Unlike a simulator running in virtual time based on discrete events, EmuStack runs in real time and cannot pause a node's clock to wait for events. For a distributed real-time emulation platform, it is difficult to ensure that every control command is executed synchronously on the different physical nodes, due to stochastic communication delay and background system load.
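
Before turning to how EmuStack handles this synchronization problem, a minimal sketch of the setup steps just described (image, the two network types, and instance launch) is shown below, driven through the standard OpenStack command-line client from Python. The network names, CIDRs, image, and flavor are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the EmuStack setup steps described above, driven through
# the standard OpenStack CLI. All names, CIDRs, and image/flavor values are
# illustrative assumptions, not the paper's actual configuration.
import subprocess

def openstack(*args):
    """Run an OpenStack CLI command and fail loudly if it errors."""
    subprocess.run(["openstack", *args], check=True)

# Flat management network shared by all nodes (no VLAN tags).
openstack("network", "create", "--provider-network-type", "flat",
          "--provider-physical-network", "physnet-mgmt", "mgmt-net")
openstack("subnet", "create", "--network", "mgmt-net",
          "--subnet-range", "192.168.0.0/24", "mgmt-subnet")

# One private emulation network (an experiment may create several).
openstack("network", "create", "emu-net-0")
openstack("subnet", "create", "--network", "emu-net-0",
          "--subnet-range", "10.0.0.0/24", "emu-subnet-0")

# Launch a virtual emulation node from the prepared image (SSH/Puppet and
# protocol software preinstalled), attached to both networks.
openstack("server", "create", "--image", "dtn-node-image",
          "--flavor", "m1.tiny",
          "--network", "mgmt-net", "--network", "emu-net-0",
          "ven-000")
```

In EmuStack these operations are carried out by the Nova service and the Neutron core plugin rather than by hand, but the resource model is the same.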

In order to avoid communication delay, especially the control information transmission delay, EmuStack stores the control information in local storage before the emulation starts to run. We can now introduce the process flow of the emulation network described in Figure. Note that the ML2-OVS plugin, L2-OVS agent, and L2-OVS driver are components of the core plugin of the Neutron service.

As with OpenStack, EmuStack first initializes the emulation topology by launching the instances. Secondly, after a successful initialization, the orchestrator requests the Neutron-Netem service to run a mobility module to create topology and link characteristics data. Meanwhile, to support users who evaluate the same experimental protocol with different protocol parameters but the same model data, the Neutron-Netem service stores the generated model data in persistent storage. Thirdly, the Neutron-Netem service dispatches the emulation data to each agent residing on every physical emulation node. The emulation data is split into per-agent parts, and each agent receives only its own part and stores it.

Correspondingly, every agent can transmit experimental configuration parameters to virtual emulation nodes by invoking the Puppet server API. In each virtual emulation node, the Puppet client works in kick mode and starts to receive a configuration (or command) once triggered by the Neutron-Netem agent. Finally, after dispatching the emulation data, the orchestrator sends a request to the Neutron-Netem service to start the emulation; the Neutron-Netem service then delivers an absolute timestamp to every agent.

Once the starting time arrives, the agents begin to run the experiment; therefore, the starting timestamp has to be somewhat (e.g., sixty seconds) later than the current timestamp, and that extra time is left for the Neutron-Netem agents to receive the starting timestamp.

Process flow of the emulation network.

In EmuStack, the Neutron-Netem service is organized into separate submodules such as the storage and mobility modules. In particular, the Neutron-Netem service provides a simple plugin mechanism to enable users to add different mobility modules.

Thus mobility modules can be built individually to suit researchers' own experimental purposes. The various mobility modules are intended to provide the realistic network emulation environments required for developing different experimental network protocols. Besides, the Neutron-Netem service provides an inheritance mechanism so that a mobility module can be developed on the basis of others.

The primary functionality of a mobility module is to create data for dynamically controlling the emulation topology and link characteristics. In Section , we will employ two mobility modules, for the DTN large file transmission experiment and for the DTN routing protocol comparison of Probabilistic Routing with Epidemic, respectively.

Synchronous Control

Algorithm describes the synchronous control of the Neutron-Netem agent.

As shown in lines (2) to (4), all Neutron-Netem agents sleep and then synchronously start the emulation once the starting time arrives. The time synchronization accuracy depends on the sleeping time SLEEPTIME and the NTP synchronization precision. Since the NTP synchronization precision is as high as 0.1 milliseconds in our platform environment, the synchronization accuracy is essentially bounded by SLEEPTIME.

In fact, SLEEPTIME is a trade-off between synchronization precision and system load. In practice, we set SLEEPTIME to 100 milliseconds to satisfy the requirements of most experiments with a lightweight CPU load.

(1) INIT protocol software
(2) WHILE currenttime ...
...
(17) ... > THRESHOLD
(18) collect error log
(19) END IF
(20) WHILE currenttime ...

Synchronous control on the Neutron-Netem agent.

When the starting time arrives, the algorithm enters the outer loop shown in lines (11) to (23). This outer loop uses absolute time to control its cycles. As shown in line (13), LOOPCYCLE (the loop cycle) is an important parameter of this loop.

The topology and link characteristics are updated every LOOPCYCLE. The control operation delay (lines (14) to (16)) plus the sleeping time (lines (20) to (22)) is approximately equal to LOOPCYCLE. However, due to system load and other unknown factors, the control operation delay may occasionally exceed LOOPCYCLE; this leads to a synchronous control failure. To help users evaluate the fidelity of the experiments, all such failures are logged (lines (17) to (19)).
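
As a rough illustration of this control loop, including the catch-up behaviour discussed just below, the following Python sketch gives our reading of the agent's timing logic; the function names, constants, and structure are placeholders rather than the actual EmuStack code.

```python
# Illustrative sketch of the Neutron-Netem agent's synchronous control loop
# (our reading of the algorithm); names and constants are placeholders.
import time

SLEEPTIME = 0.1        # coarse polling interval while waiting (100 ms)
LOOPCYCLE = 1.0        # topology/link characteristics updated every LOOPCYCLE seconds
THRESHOLD = LOOPCYCLE  # a control operation longer than this is logged as a failure

# Placeholder hooks; in EmuStack these would drive iptables/TC and the protocol software.
def init_protocol_software(): pass
def apply_control_ops(plan, now): pass
def kill_experiment_processes(): pass
def log_error(msg, delay): print(f"SYNC ERROR: {msg} ({delay:.3f}s)")

def run_agent(start_time, end_time, control_plan):
    init_protocol_software()                   # start the experimental protocol

    while time.time() < start_time:            # sleep until the common start time
        time.sleep(SLEEPTIME)

    deadline = start_time
    while time.time() < end_time:              # outer loop, one iteration per LOOPCYCLE
        deadline += LOOPCYCLE
        t0 = time.time()
        apply_control_ops(control_plan, t0)    # update topology and link characteristics
        delay = time.time() - t0
        if delay > THRESHOLD:                  # control took too long: log for fidelity checks
            log_error("control operation exceeded LOOPCYCLE", delay)
        while time.time() < deadline:          # sleep for the remainder of the cycle; if we
            time.sleep(SLEEPTIME)              # overran, the shorter (or skipped) sleep lets
                                               # the agent catch up and resynchronize
    kill_experiment_processes()                # prepare for the next experiment
```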

Besides, the exceeded time forces subsequent loop cycles to reduce their sleeping time, which enables the platform to resynchronize. After the outer loop ends, the Neutron-Netem agents kill all experiment processes to get ready for the next experiment.

Controlling Topology

Figure provides details on controlling the topology and link characteristics. As shown on the right of Figure, the Neutron-Netem service delivers the control information to the Neutron-Netem agents in advance. According to the received control information, the Neutron-Netem agents invoke their driver to dynamically control the emulation experiment once the starting time arrives. In particular, as part of this control information, the topology is described by a connection matrix in EmuStack, as shown in Figure. In fact, a network topology, no matter how complex, can be represented by the connection relationship between every pair of nodes.

An example for a three-node topology is shown in Figure, where "1" indicates a connection between two nodes and "0" means disconnection.

Simple topology and connection matrix.

In EmuStack, the connection matrix, along with its time sequence, is generated by the mobility module. According to the connection matrix, the Neutron-Netem agents periodically invoke their drivers to dynamically change the emulation topology during the emulation.
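
To make the representation concrete, a hedged sketch of what a mobility module might emit for the three-node example is shown below; the data layout is our assumption, since the paper does not give the exact format used by the Neutron-Netem service.

```python
# Illustrative connection-matrix time series for a three-node topology, in the
# spirit of the mobility-module output described above. The data layout is an
# assumption; only the 0/1 connection-matrix idea comes from the paper.

# One matrix per LOOPCYCLE: matrix[i][j] == 1 means node i and node j are connected.
topology_plan = {
    0: [[0, 1, 0],      # t = 0 s: nodes 0 and 1 connected, node 2 isolated
        [1, 0, 0],
        [0, 0, 0]],
    1: [[0, 1, 1],      # t = 1 s: node 2 comes into range of node 0
        [1, 0, 0],
        [1, 0, 0]],
    2: [[0, 0, 1],      # t = 2 s: link 0-1 breaks, link 0-2 persists
        [0, 0, 0],
        [1, 0, 0]],
}

def links_to_update(prev, curr):
    """Return (i, j, connected) tuples for every node pair whose state changed."""
    changes = []
    n = len(curr)
    for i in range(n):
        for j in range(i + 1, n):
            if prev[i][j] != curr[i][j]:
                changes.append((i, j, bool(curr[i][j])))
    return changes

# Example: diff between the first two cycles -> [(0, 2, True)]
print(links_to_update(topology_plan[0], topology_plan[1]))
```

An agent only needs to apply the per-cycle differences, which keeps each update small.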

There are two ways to dynamically control the emulation topology: one is based on Open vSwitch and the other depends on iptables. Neutron-Netem agents can control the virtual emulation topology by configuring flow tables on "OVS for emulation." Managing the virtual emulation topology in this way is similar to how the Neutron Open vSwitch agent manages the virtual topology in OpenStack, but Neutron-Netem agents can do so more efficiently and quickly. Meanwhile, Neutron-Netem agents can achieve higher synchronous precision since they have already stored the emulation control information in local storage, whereas the Neutron Open vSwitch agent needs to fetch this control information through Remote Procedure Call (RPC) services, which incurs a long delay. Additionally, Neutron-Netem agents can dynamically control the virtual emulation topology by configuring iptables entries in a dedicated named network namespace. This namespace corresponds to the virtual emulation node, as shown in the top right of Figure. In the initial implementation of EmuStack, the second method is implemented in the Neutron-Netem agent driver, whose performance will be discussed in Section . As to the first method, we will consider it in future work.
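
The iptables-based method can be sketched as follows; the namespace naming, chain choice, and MAC addresses are illustrative assumptions, not the agent's actual driver code.

```python
# Sketch of topology control via iptables inside a virtual emulation node's
# network namespace, as in the second (implemented) method described above.
# Namespace names, the chain layout, and MAC addresses are illustrative.
import subprocess

def netns_iptables(namespace, *rule):
    """Run an iptables command inside the given network namespace."""
    subprocess.run(["ip", "netns", "exec", namespace, "iptables", *rule],
                   check=True)

def set_link(namespace, peer_mac, connected):
    """(Dis)connect the virtual link towards the node identified by peer_mac."""
    # Disconnecting adds a DROP rule matching the peer's source MAC; connecting
    # removes it. A real driver would track which rules are currently installed.
    action = "-D" if connected else "-A"
    netns_iptables(namespace,
                   action, "INPUT", "-m", "mac",
                   "--mac-source", peer_mac, "-j", "DROP")

# Example: during one LOOPCYCLE, disconnect node 0 from node 2 and keep 0-1 up.
set_link("ven-000", "fa:16:3e:00:00:02", connected=False)
```

The same rule would be installed symmetrically in the peer's namespace so that the link is broken in both directions.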

Controlling Link Characteristics

Linux offers a very rich set of tools for traffic control. The Traffic Control (TC) utility is one of the best known. TC is good at shaping link characteristics, including link bandwidth, latency, jitter, packet loss, duplication, and reordering.

Besides, it allows users to set queuing disciplines (QDiscs) within a network namespace. There are two types of QDiscs in TC: classful queuing disciplines, which have filters attached to them and allow traffic to be directed to particular classed queues or subqueues, and classless queuing disciplines, which can be used as primary QDiscs or inside a leaf class of a classful QDisc. As shown in the bottom right of Figure, Hierarchical Token Bucket (HTB) is a classful QDisc, and Netem is classless. In EmuStack, Neutron-Netem agents use HTB to control the link rate, attaching filters to the HTB QDisc to distinguish different virtual emulation links.

Meanwhile, Netem is used inside HTB leaf classes to emulate variable delay, loss, reordering, and duplication. In telecommunications, a link is a communication channel that connects two communicating devices (such as network interfaces), and a media access control (MAC) address is a unique identifier assigned to a network interface for communications. Hence, in EmuStack, we can use source-destination MAC address pairs in the filter rules to distinguish different virtual emulation links. In particular, due to the high link asymmetry in most DTN experiments, EmuStack adopts ordered source-destination pairs to distinguish between uplink and downlink.
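
A minimal sketch of this HTB-plus-Netem arrangement is shown below. For brevity the filter matches on destination IP rather than on the source-destination MAC pair the paper describes, and the device names, class IDs, rates, and impairment values are all illustrative.

```python
# Sketch of per-link shaping with HTB (rate) and Netem (delay/loss/duplication/
# reordering) inside a virtual emulation node's namespace. Device names, class
# IDs, and impairment values are illustrative; the filter matches on destination
# IP for brevity, whereas EmuStack distinguishes links by MAC address pairs.
import subprocess

def tc(namespace, *args):
    subprocess.run(["ip", "netns", "exec", namespace, "tc", *args], check=True)

NS, DEV = "ven-000", "eth0"

# Root HTB qdisc (a default class for unmatched traffic is omitted here).
tc(NS, "qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")

# One HTB class per virtual link: here the link towards 10.0.0.2, limited to 8 Mbit/s.
tc(NS, "class", "add", "dev", DEV, "parent", "1:", "classid", "1:10",
   "htb", "rate", "8mbit", "ceil", "8mbit")

# Netem inside the leaf class: delay with jitter, loss, duplication, reordering.
tc(NS, "qdisc", "add", "dev", DEV, "parent", "1:10", "handle", "10:",
   "netem", "delay", "100ms", "20ms", "loss", "1%",
   "duplicate", "0.1%", "reorder", "0.5%")

# Steer traffic for this link into class 1:10 (u32 match on destination IP).
tc(NS, "filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip",
   "prio", "1", "u32", "match", "ip", "dst", "10.0.0.2/32", "flowid", "1:10")
```

The reverse (downlink) direction of the same virtual link is shaped by an equivalent class installed at the other end of the link, as explained next.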

Meanwhile, we carefully design the control policies, since TC QDiscs only shape outgoing traffic. For example, assuming a link between node A and node B, EmuStack handles A's uplink at one end of the link (on A) and A's downlink at the other end (on B); emulation data can thus be shaped bilaterally. In addition, EmuStack can also create one or more special intermediate virtual nodes for all virtual emulation nodes of the same physical emulation node to shape their downlink traffic. We can limit the link rate at either of the two locations shown in the middle of Figure. The two locations marked with red circles stand for two different network devices. The first location stands for the network interfaces in the virtual emulation nodes; these interfaces correspond to the named network namespaces. The second location represents the TAP devices, which are paired with the aforementioned network interfaces and attached to Open vSwitch ("OVS for emulation").

Limiting the link rate at either location is feasible, but there are some notable differences. Assuming the experimental network protocols (such as UDP) do not have any congestion control algorithm, rate-limiting at the second location will lead to a large amount of packet loss, but this does not happen at the first location.

In most DTN experiments, rate-limiting that causes heavy packet loss is probably not what we want; we mostly expect rate-limiting and packet loss not to interfere with each other. Figure illustrates the rate-limiting difference between the two locations with a simple topology and a sending program. In this simple topology, one device is at the first location and the other device is at the second location. Assume that the sending program calls the UDP socket API. When the sending program sends application data, the Linux kernel copies the application data from the user-space buffer to the socket buffer. If the socket buffer ever gets full, a blocking socket puts the program to sleep until the socket buffer has enough space, while a nonblocking socket immediately returns the error "Operation Would Block." Therefore, no matter which mode (blocking or nonblocking) the socket works in, the sending program always receives "feedback," and this prevents the socket buffer from overflowing, as shown in Figure.
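
This feedback effect can be seen with a few lines of Python: the sketch below sends UDP datagrams over a nonblocking socket and counts how often the kernel pushes back instead of silently dropping. The address, port, and buffer sizes are illustrative.

```python
# Illustration of the sender-side "feedback" described above: with a
# nonblocking UDP socket, a full socket send buffer surfaces as EWOULDBLOCK /
# EAGAIN (or ENOBUFS) rather than as silent packet loss. Values are illustrative.
import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)  # small send buffer

payload = b"x" * 1400
sent = pushed_back = 0
for _ in range(10000):
    try:
        sock.sendto(payload, ("10.0.0.2", 5000))
        sent += 1
    except OSError as e:
        if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN, errno.ENOBUFS):
            pushed_back += 1      # back-pressure: the application is told to slow down
        else:
            raise

print(f"sent={sent} pushed back={pushed_back}")
```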

Packets in the socket buffer are then delivered to the QDisc buffer to be shaped and are finally transmitted onto the link by the NIC driver. In brief, TC QDiscs consume packets from the socket buffer and drain it, after which the sending program can write application data into the socket buffer again. As a result, TC indirectly paces the sending program; a sufficient TC buffer and the feedback mechanism ensure that packet loss does not happen. As for rate-limiting at the second location, since there is no feedback between it and the sending program, the ingress buffer overflows and drops most application data, as shown in Figure.

Feature of rate-limiting at the second location.

Figure presents the relationship between packet loss rate and sending rate for rate-limiting at the second location. As expected, when the rate limit is fixed, the higher the sending rate, the more packets the link drops.

Meanwhile, with the sending rate held constant and the HTB buffer left at its default, the higher the rate limit, the smaller the packet loss rate. For most DTN scenarios this is not what we want, except when testing congestion control algorithms. For example, with a NIC bandwidth of 90 Mbps and a rate limit of 10 Mbps, the packet loss rate reaches about 80 percent. In the current EmuStack version, we implement all link characteristics control at the first location and only the rate-limiting function at the second location, by configuring ingress policing rules in Open vSwitch (OVS for emulation). Although rate-limiting at the second location has been implemented in the QoS (Quality of Service) plugin of the OpenStack Neutron service, that implementation is centralized and its synchronous precision is too low; hence we reimplement the function in a distributed model and achieve higher synchronous precision.

Scalability and Performance

We deploy EmuStack on our experimental platform consisting of nine physical nodes.

Each physical node is an identical Dell PowerEdge R720 2U rack server with one 2.4 GHz Intel Xeon E5-2609 processor (4 cores), 10 MB of L3 cache, 32 GB RAM, and a Broadcom 5720 quad-port 1 GbE BASE-T adapter. In particular, the network emulator is additionally equipped with four Intel EXPI9402PT dual-port NICs. All management network interfaces of the nine physical nodes are interconnected by a TP-LINK TL-SF1024D Ethernet switch. All emulation network interfaces of the eight physical emulation nodes are linked to those of the network emulator.

The Ubuntu 14.04 LTS Linux distribution is installed on all physical nodes, and the NetworkManager service is disabled at boot, since NetworkManager repeatedly invokes the useless dhclient program and consumes a surprising amount of CPU whenever EmuStack launches Docker containers. In addition, the operating system kernel version is 3.19.0-31, the iptables version is 1.4.21, the iproute2 version is ss131122, and the Docker version is 1.10.1. Based on this platform environment, we analyze the emulation scalability and performance as follows. Compute (CPU), memory (RAM), and network (NIC) are the three chief factors in EmuStack scalability.

To make efficient use of CPU and RAM, EmuStack adopts Docker containers as the virtualization technology instead of full (kernel-based) virtualization solutions. Docker containers share the same operating system kernel, so they consume fewer CPU and RAM resources. For example, our platform launches sixty containers on a single machine with about nine percent CPU usage and ten percent RAM usage; these containers serve as virtual emulation nodes, are installed with Ubuntu 14.04 LTS, and start with an OpenSSH server and a Puppet client. Additionally, in order to keep the emulation network from hitting a bottleneck, EmuStack dispatches the compute requests for the same experiment to as few physical emulation nodes as possible, so most virtual emulation nodes are interconnected by the internal bridge (OVS for emulation) and the traffic between them consumes the least bandwidth of the physical emulation network. Meanwhile, all emulation network interfaces on the network emulator are attached to a Linux bridge to improve the bandwidth of the physical emulation network. All of this enables EmuStack to support hundreds of nodes on nine physical nodes. The major factor in EmuStack performance is the updating delay, the time consumed by one change of the emulation topology or link characteristics.

In the algorithm, the updating delay determines the LOOPCYCLE parameter presented on line (13). The minimum LOOPCYCLE should be no less than the maximum updating delay; otherwise EmuStack will fail to synchronize. In addition, the current EmuStack version employs the iptables and TC utilities to dynamically control the virtual emulation topology and link characteristics, respectively.

Hence the performance of iptables and TC directly impacts EmuStack performance, and their processing delay directly determines the updating delay. The performance of iptables and TC is analyzed as follows. Figure shows the average performance of iptables and TC, where the average performance represents the updating delay trend.

The left of Figure describes the performance when operating on a single network interface. Interestingly, for iptables the average processing delay can be represented as a quadratic function of the number of inserted entries, while for TC-HTB the relationship between average processing delay and the number of inserted entries is well described by a linear function. Hence EmuStack can estimate the processing delay with these two fitted functions. The right shows the performance when concurrently operating on multiple virtual nodes in a single physical node. The processing delays of iptables and TC both grow linearly with the number of virtual nodes when the number of inserted iptables entries (or TC-HTB classes) is fixed; this is influenced by serialization, contention, and system load.

The average performance of iptables and TC.

Figure shows the real-time performance, where the processing delay is the time it takes to insert ten iptables entries (or TC-HTB classes) into a single network interface.

The processing delay starts to fluctuate violently as the number of virtual nodes in a single physical node increases (each virtual node has a network interface which is paired with a TAP device in the host namespace and linked to Open vSwitch). For example, when there are fewer than thirty virtual nodes on a single physical node, the processing delay remains stable throughout one thousand trials. However, when the number of virtual nodes increases to sixty, the fluctuation range widens, with the iptables maximum reaching about 350 milliseconds.

The TC maximum reaches 1800 milliseconds, about five times that of iptables. Hence the updating delay of link characteristics (the TC processing delay) is probably the most serious limitation in EmuStack.

The real-time performance of iptables and TC.

By analyzing the real-time performance characteristics, we can estimate the maximum updating delay and obtain the minimum LOOPCYCLE for an experiment of a specific scale in a simple experimental environment (single user).

However, it is hard to do so in a complex experimental environment (multiuser), which raises further problems such as virtual node orchestration. Owing to limited space, we do not go into the details of that topic here and leave it to future work.
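
In the single-user case, the fitted delay models described above can be turned into a LOOPCYCLE estimate directly; the sketch below does this with numpy.polyfit on hypothetical measurements. The sample data, the fitted coefficients, and the safety margin are placeholders, not the paper's measured values.

```python
# Sketch of estimating the minimum LOOPCYCLE in the single-user case from
# fitted processing-delay models, as described above. The sample measurements
# and the resulting coefficients are hypothetical placeholders.
import numpy as np

# (number of inserted entries, measured delay in ms) -- placeholder data
iptables_samples = [(10, 12.0), (50, 80.0), (100, 260.0), (200, 900.0)]
tc_htb_samples   = [(10, 15.0), (50, 70.0), (100, 140.0), (200, 280.0)]

# iptables delay grows roughly quadratically with the number of inserted
# entries, TC-HTB roughly linearly (see the discussion of Figure above).
ipt_fit = np.polyfit(*zip(*iptables_samples), deg=2)
htb_fit = np.polyfit(*zip(*tc_htb_samples), deg=1)

def estimated_updating_delay(n_entries):
    """Worst-of-both estimate of one topology/link update, in milliseconds."""
    return max(np.polyval(ipt_fit, n_entries), np.polyval(htb_fit, n_entries))

# The minimum LOOPCYCLE must not be smaller than the maximum updating delay;
# a safety margin absorbs contention and background system load.
n_entries = 150
min_loopcycle_ms = 1.5 * estimated_updating_delay(n_entries)
print(f"suggested minimum LOOPCYCLE for {n_entries} entries: {min_loopcycle_ms:.0f} ms")
```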

Experimental Evaluation

To evaluate and demonstrate EmuStack, this section reproduces key results from two published DTN experiments. One is a DTN large file transmission experiment over a Low Earth Orbit (LEO) satellite link, and the other is a DTN routing protocol comparison between Probabilistic Routing (PROPHET) and Epidemic routing. The goal of the first experiment is to show that results obtained on EmuStack match the results measured on hardware. The goal of the second experiment is to demonstrate that EmuStack can dynamically change a large-scale topology and precisely support a large-scale experiment.

Large File Transmission Using a LEO Satellite

One type of LEO satellite is the remote sensing satellite. Generally, a remote sensing satellite transfers a lot of sensing data to the ground station, and these data are usually large. For example, a single raw picture created by an Earth observation satellite usually occupies hundreds of megabytes (MB) or more. Unfortunately, only about 10 minutes of contact time is available when a LEO satellite passes over a ground station in one orbital cycle. Additionally, the LEO transmission rate is low; taking the UK-DMC satellite as an example, the downlink is 8.134 Mbps and the uplink is 9600 bps. Therefore, it is almost impossible for a LEO satellite to transfer a large file to the ground during a single pass over one ground station.

Actually, three passes are needed to transfer the complete file to the ground, as shown in Figure. During each pass, the LEO satellite transfers one segment of the total file to the Earth Control Center via one Earth Gateway (GW), and once the transfer of the complete image file is finished, the file has been reassembled at the Earth Control Center.

LEO block transmission scenario.

To test whether the results obtained with EmuStack match those of hardware, we created the experimental topology both in EmuStack and on real hardware, as shown in Figure. The real hardware environment is built from seven physical nodes, on which we use TC shell scripts to dynamically control the topology and link characteristics. All the parameters of the real hardware related to the large file transmission are the same as those of EmuStack, described in the following passage. To ensure the realism of the experimental process and data, we first use the Satellite Tool Kit (STK) to model the LEO link characteristics and topology, as described in the Table. Based on the parameters generated by STK, we write the experimental mobility module for the Neutron-Netem service. The mobility module creates the emulation topology and link control information according to the requirements of the large file transmission.
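
The STK-derived contact plan for this step can be pictured as a small table of passes; the sketch below shows how such contact windows might be turned into per-LOOPCYCLE link control data. The pass times below are illustrative; only the 8.134 Mbps downlink and 9600 bps uplink rates come from the UK-DMC figures quoted earlier.

```python
# Sketch of turning an STK-derived contact plan into per-LOOPCYCLE link control
# data for the LEO scenario. The pass start/end times are illustrative; only
# the downlink/uplink rates come from the UK-DMC figures quoted above.

# (start_s, end_s, ground_station) for each LEO pass
contacts = [
    (0,     600,   "GW1"),
    (4980,  5580,  "GW2"),   # after roughly a 73-minute gap
    (11940, 12540, "GW3"),   # after roughly a 106-minute gap
]

DOWNLINK_BPS = 8_134_000
UPLINK_BPS   = 9_600

def link_state(t):
    """Return (connected, gateway, downlink_bps, uplink_bps) for time t (seconds)."""
    for start, end, gw in contacts:
        if start <= t < end:
            return True, gw, DOWNLINK_BPS, UPLINK_BPS
    return False, None, 0, 0

# The mobility module would emit one such record per LOOPCYCLE for the
# Neutron-Netem agents to apply with TC and iptables.
for t in (0, 300, 2000, 5000):
    print(t, link_state(t))
```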

Secondly, the OpenStack virtual machine image equipped with the DTN protocol software ION-3.3.1 is built. ION-3.3.1 uses the CFDP program to fragment and reassemble the 258 MB image file sent from the LEO satellite to the Earth Control Center. CFDP is configured with 32-kilobyte (kB) bundles, 128 kB Licklider Transmission Protocol (LTP) blocks, and the Contact Graph Routing (CGR) protocol.

Additionally, it is worth noting that the TC Netem delay resolution is limited by the frequency (HZ) of the Linux system clock (the tick rate); the system clock should run at 1000 HZ to allow Netem delays in increments of 1 millisecond.

LEO block transmission scenario contact plan.

Figure shows the downlink rate and uplink rate at the Earth Control Center.

Due to the effectiveness of ION's scheduled contacts, LTP starts a transmission as soon as the link is available. During the whole transmission, the LEO satellite first transmits an approximately 79 MB block of the image file to the Earth Control Center via Earth GW1.

After about 73 minutes of disconnection, the LEO satellite establishes a connection with Earth GW2 and transmits another 79 MB block to the Earth Control Center. Finally, after a 106-minute break, the LEO satellite transmits the rest of the image file to the Earth Control Center via Earth GW3. The experimental results show that the downlink utilization is high (about 94%) and that the ratio between downlink rate and uplink rate is 1600:1. These results show that the DTN protocol family supports intermittent and asymmetric links well.

Thus EmuStack can be employed to obtain significant results in advance of (or possibly without) setting up a hardware testbed. Meanwhile, since the results of EmuStack closely match those of the hardware, EmuStack offers good experimental fidelity.

Downlink rate and uplink rate at the Earth Control Center.

Comparison of PROPHET Routing with Epidemic

Vahdat and Becker present a routing protocol for DTN called Epidemic routing.

This routing protocol allows nodes, once they encounter each other, to exchange summary vectors (an index of their own messages) and to request messages they do not yet hold. This means messages spread through the network like an epidemic, as long as buffers are large enough and contact opportunities exist. Lindgren et al. propose PROPHET, a Probabilistic Routing Protocol using History of Encounters and Transitivity. The operation of PROPHET is similar to that of Epidemic routing. When two hosts meet, they exchange summary vectors which also contain delivery predictabilities. Relying on this predictability data, each node calculates new delivery predictabilities, which are used to decide which messages to request from the other node. To evaluate EmuStack's ability to precisely control a large-scale experiment, we emulate the simulation experiment described in the PROPHET paper and compare PROPHET with Epidemic in the community scenario.

The community scenario consists of a  m area and fifty-six virtual emulation nodes, as shown in Figure. The area is split into twelve subareas: eleven communities (C1–C11) and one "gathering place" (G). Every community contains five nodes (circles of the same color in Figure): one fixed node acting as the gateway of the community and four mobile nodes; all five treat that community as their home community. The four mobile nodes of every community select a destination, move there at a speed between ten and thirty meters per second, pause there for a moment, and then select a new destination and speed. The probabilities of the different destinations depend on the current location of the mobile node. In the experiment, a warm-up period of 500 seconds is used to initialize the protocols, 3000 seconds is used to create and deliver messages, and another 8000 seconds is used to allow more messages to be delivered.

Community mobility model.

In order to emulate the above community mobility scenario in EmuStack, we develop the community mobility model into a mobility module.
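
A hedged sketch of this community movement model as a mobility module is shown below: nodes pick a destination, move towards it, pause, and repeat, and connectivity follows from distance. The area dimensions, radio range, and destination probability are placeholders, since only the overall structure of the scenario is reproduced here.

```python
# Sketch of the community mobility model as a mobility module. Area size,
# radio range, speeds, and destination probabilities are placeholders.
import math
import random

AREA_W, AREA_H = 3000.0, 1500.0   # placeholder area dimensions (metres)
TX_RANGE = 50.0                   # placeholder radio range (metres)
P_HOME = 0.8                      # placeholder probability of heading home

def pick_destination(home, gathering):
    """Mobile nodes prefer their home community, otherwise the gathering place."""
    return home if random.random() < P_HOME else gathering

def step(pos, dest, speed, dt):
    """Move from pos towards dest at the given speed for dt seconds."""
    dx, dy = dest[0] - pos[0], dest[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:
        return dest
    return (pos[0] + dx / dist * speed * dt, pos[1] + dy / dist * speed * dt)

def connection_matrix(positions):
    """1 if two nodes are within radio range of each other, else 0."""
    n = len(positions)
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= TX_RANGE:
                m[i][j] = m[j][i] = 1
    return m

# One update cycle for two illustrative nodes:
positions = [(100.0, 100.0), (120.0, 110.0)]
dests = [pick_destination((500.0, 400.0), (1500.0, 750.0)) for _ in positions]
speeds = [random.uniform(10.0, 30.0) for _ in positions]
positions = [step(p, d, v, dt=1.0) for p, d, v in zip(positions, dests, speeds)]
print(connection_matrix(positions))
```

In the real module the per-cycle connection matrices would be stored ahead of time and replayed by the Neutron-Netem agents, as described in the workflow above.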

Before the experiment begins, we first create the experimental virtual image, which is equipped with IBRDTN. IBRDTN supports Epidemic routing and PROPHET routing; the "linkrequestinterval" parameter is set to 1000 milliseconds since the community model is updated every second. The other model parameters are configured the same as in the PROPHET paper. Note that LOOPCYCLE is set to one second in this experiment. We dispatch the fifty-six virtual nodes to different numbers of physical nodes; EmuStack then performs the above experiment several times with different configurations of the IBRDTN "limitstorage" parameter (namely, the queue size). At the end of the experiments, we check the Neutron-Netem agent logs for synchronization errors. We find that even when all fifty-six virtual nodes are orchestrated onto a single physical node, no synchronization errors are thrown in EmuStack.

This indicates EmuStack's ability to precisely control a large-scale experiment. We further discuss the details of the experimental results in the following passage. Figure shows the average delivery rates in both EmuStack and the simulator described in the PROPHET paper (hop count = 11). The Epidemic and PROPHET routing protocols show similar behavior in both EmuStack and the simulator. For example, as the queue size increases, the number of messages that eventually reach their destinations goes up.

This is expected: a larger queue allows more messages to be cached and fewer to be dropped, so messages are buffered for longer and get more opportunities to be delivered successfully. Meanwhile, as shown in Figure, the PROPHET routing protocol performs much better than the Epidemic routing protocol in terms of delivery rate, and the results of EmuStack match those of the simulator. All these results demonstrate that both PROPHET and Epidemic routing protocols run normally in EmuStack and that EmuStack can emulate large-scale experiments.

The average delivery rates in the community scenario.

Figure presents the consumption of network resources in the community scenario. In the simulator, Lindgren uses the number of messages forwarded when nodes encounter each other to evaluate the consumption indirectly; in EmuStack, we use the total egress traffic to measure the consumption directly. The egress traffic comprises the forwarded messages and the routing overhead; hence it gives a more comprehensive evaluation of network resource consumption. As described in Figure, in EmuStack PROPHET has a much higher network overhead than Epidemic, in contrast to the simulator. This is because the Epidemic routing protocol has been optimized in IBRDTN.

IBRDTN has replaced the summary vectors of basic Epidemic with an efficient Bloom-filter mechanism and maintains a purge vector as an extension of the Epidemic routing protocol, which ensures that bundles delivered successfully are deleted throughout the network. Therefore Epidemic can consume less network traffic than the original PROPHET described in .

The consumption of network resources in the community scenario. In the simulator, Lindgren uses the number of forwarded messages to evaluate the consumption indirectly; in EmuStack, we use the total egress traffic to measure the consumption directly.

Finally, Figure describes the average delivery delay in both EmuStack and the simulator.

There are two ways of calculating the average delay. One is to divide the sum of the delays of the successfully delivered messages by their number (delay 1). The other is to divide the sum of the delays of all messages, successfully and unsuccessfully delivered, by their number (delay 2). The delay of an unsuccessfully delivered message is defined as the experiment ending time minus the message's sending time.

The average delivery delay in the community scenario.

Delay 1 is the metric used to evaluate the average delay of messages in .

As shown in the left of Figure, the value of delay 1 fluctuates back and forth as the queue size increases. A larger queue shortens the delivery delay of messages that would have been delivered successfully even with a relatively small queue, and it also enables messages that would otherwise have been dropped to reach their destination nodes, although those messages contribute larger delay values than the zero they would otherwise contribute. These opposing effects cause the value of delay 1 to fluctuate within a small range. Because of this phenomenon, we argue that the first way of calculating delivery delay may be unreasonable. Hence we also evaluate the average delivery delay in the second way, taking the delay of unsuccessfully delivered messages into account. As shown in the right of Figure, as the queue size increases there is an obvious decrease in the average delivery delay (delay 2) for both routing protocols. It is intuitive that delay 2 decreases, since a larger queue leads to more messages being delivered successfully and quickly.
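
The two delay metrics can be stated compactly in code; the sketch below computes both from a message log, where the record format is an assumption made for illustration.

```python
# The two ways of computing average delivery delay described above.
# Each record: (send_time, delivery_time or None if never delivered).
# The record layout is an assumption for illustration.

def average_delays(messages, experiment_end):
    delivered = [d - s for s, d in messages if d is not None]
    undelivered = [experiment_end - s for s, d in messages if d is None]

    # Delay 1: average over successfully delivered messages only.
    delay1 = sum(delivered) / len(delivered) if delivered else 0.0
    # Delay 2: undelivered messages count with delay (experiment_end - send_time).
    all_delays = delivered + undelivered
    delay2 = sum(all_delays) / len(all_delays) if all_delays else 0.0
    return delay1, delay2

# Example: three messages, one of which never reaches its destination.
log = [(500.0, 900.0), (600.0, 2600.0), (700.0, None)]
print(average_delays(log, experiment_end=11500.0))
```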


In short, no matter which method is used to calculate the average delivery delay, PROPHET always has a shorter delivery delay than Epidemic in both EmuStack and the simulator. As expected, all the above results demonstrate that EmuStack can reproduce the key results of the large-scale DTN experiment described in  and capture more details of the experimental network protocols than the simulator, which helps us further improve the design of the experimental network protocols.

Conclusion

In this work, we present a real-time, distributed, and scalable emulation platform for DTN based on OpenStack.

Firstly, we discuss the hardware and software deployment, the design architecture, and the implementation. In particular, we present the details of controlling link characteristics and topology. Secondly, we analyze the platform's scalability and performance.

Finally, we evaluate and demonstrate the emulation platform with two classical DTN experiments. For a more thorough evaluation, as part of future work, we will create more realistic mobility and link characteristic models to emulate more complex DTN experiments. Meanwhile, we will also evaluate these effects with different compute virtualization and network virtualization technologies and in complex experimental environments (multiuser orchestration).

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by NSFC of China under Grant no. 61271202, NSAF of China under Grant no.

U1530118, National High Technology of China (“863 program”) under Grant no. 2015AA015702, and National Basic Research Program of China (“973 program”) under Grant no.
