Commit: Update ONVM performance stats and architecture
Showing 5 changed files with 7 additions and 12 deletions.
```diff
@@ -8,19 +8,17 @@ permalink: /onvm/
 
 openNetVM is a high performance NFV platform based on [DPDK](http://dpdk.org) and [Docker](http://www.docker.com) containers. openNetVM can be SDN-enabled, allowing the network controller to provide rules that dictate what network functions need to process each packet flow.
 
-openNetVM is an open source version of the NetVM platform described in our [NSDI 2014 paper](http://faculty.cs.gwu.edu/~timwood/papers/14-NSDI-netvm.pdf), released under the [BSD license](https://github.com/sdnfv/openNetVM/blob/master/LICENSE).
+openNetVM is an open source version of the NetVM platform described in our [NSDI 2014 paper](http://faculty.cs.gwu.edu/~timwood/papers/14-NSDI-netvm.pdf), released under the [BSD license](https://github.com/sdnfv/openNetVM/blob/master/LICENSE). Our [HotMiddlebox workshop paper](http://faculty.cs.gwu.edu/~timwood/papers/16-HotMiddlebox-onvm.pdf) is a good way to learn about openNetVM's overall architecture.
 
 ## Current Status
-Our [HotMiddlebox workshop paper](http://faculty.cs.gwu.edu/~timwood/papers/16-HotMiddlebox-onvm.pdf) is a good way to learn about openNetVM's overall architecture.
 
 **OpenNetVM's source code and documentation are [available on github](https://github.com/sdnfv/openNetVM).**
 
-We will be releasing [NSF CloudLab](http://cloudlab.us) template images soon. Please [contact us](mailto:[email protected]) if you are interested in testing these.
+The fastest way to get started with OpenNetVM is using NSF CloudLab. You can find a premade [profile here](https://www.cloudlab.us/p/GWCloudLab/onvm-18.03).
 
 ## Features
 
-<img src="/res/netvm-arch.png" style="float:right; padding-left:15px; padding-bottom:10px">
+<img src="/res/netvm-arch.png" width="700px" style="float:right; padding-left:15px; padding-bottom:10px">
 
 **Container-based NFs:** Writing and managing network functions for openNetVM is easy since they run as standard user space processes inside Docker containers.
```
```diff
@@ -31,20 +29,17 @@ We will be releasing [NSF CloudLab](http://cloudlab.us) template images soon. P
 
 **Zero-Copy IO:** Packets are DMA'd directly into a shared memory region that allows the NF Manager to grant NFs direct access to packets with no additional copies.
```
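The zero-copy hand-off described above can be modeled simply: packet payloads are written into shared memory once (by NIC DMA) and never copied again; only small descriptors move between the manager and NFs. The sketch below is an illustrative Python model, not openNetVM's actual C API — the names (`shared_pool`, `manager_deliver`, etc.) are invented for this example.

```python
from collections import deque

# Model of the shared memory pool: packets are written once (by "DMA")
# and never copied afterwards; only integer descriptors move between rings.
shared_pool = {}          # buffer index -> packet bytes (stand-in for hugepage memory)
nf_rx_ring = deque()      # per-NF receive ring holding descriptors, not payloads

def dma_packet(buf_idx, payload):
    """NIC 'DMAs' a packet into the shared pool exactly once."""
    shared_pool[buf_idx] = bytearray(payload)

def manager_deliver(buf_idx):
    """The manager hands the NF a descriptor; the payload is not copied."""
    nf_rx_ring.append(buf_idx)

def nf_process():
    """The NF dereferences the descriptor and mutates the packet in place."""
    idx = nf_rx_ring.popleft()
    pkt = shared_pool[idx]          # same buffer the NIC wrote into
    pkt[0] ^= 0xFF                  # e.g. rewrite a header byte in place
    return idx
```

Because the NF mutates the very buffer the NIC wrote into, a later NF in the chain (or the TX path) sees the modification without any copy.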
```diff
 
 **NUMA-Aware:** openNetVM maximizes performance by ensuring that packets in memory DIMMs local to a particular CPU socket are only processed by threads running on that socket.
 
 **No Interrupts:** We use DPDK's poll mode driver in place of traditional interrupt-driven networking, allowing the system to process packets at line rates of 10 Gbps and beyond.
```
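The poll mode driver mentioned above replaces interrupts with a core that spins, repeatedly asking the NIC for a burst of packets (in DPDK this is `rte_eth_rx_burst()`, which never blocks). The following Python stand-in, with a fake NIC queue, shows the shape of that loop; it is a sketch, not DPDK code.

```python
from collections import deque

nic_queue = deque()   # stand-in for a NIC hardware RX queue

def rx_burst(max_pkts):
    """Poll the queue and return up to max_pkts packets immediately.
    Mirrors the shape of DPDK's rte_eth_rx_burst(): it never blocks,
    and returning zero packets is the common (idle) case."""
    burst = []
    while nic_queue and len(burst) < max_pkts:
        burst.append(nic_queue.popleft())
    return burst

def poll_loop(iterations, handle):
    """Busy-poll: no interrupts, no sleeping; the core spins on rx_burst."""
    processed = 0
    for _ in range(iterations):
        for pkt in rx_burst(32):      # 32 is a typical DPDK burst size
            handle(pkt)
            processed += 1
    return processed
```

The trade-off is a core pinned at 100% utilization even when idle, in exchange for avoiding interrupt and context-switch overhead at high packet rates.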
```diff
 
 **Scalable:** NFs can be easily replicated for scalability, and the NF Manager will automatically load balance packets across threads to maximize performance.
 
 ## Performance
 
-Our original NetVM platform was able to achieve performance significantly higher than the state of the art when processing packets through a chain of virtual machines. We expect similar or better performance on our updated openNetVM platform.
+<img src="/res/onvm-chain-perf.png" width="300px" height="196px" style="float:left; padding-right:10px">
 
-<img src="/res/netvm-perf.png" width="300px" height="221px" style="float:left; padding-right:10px">
+The OpenNetVM Speed Tester NF can be used to measure the throughput of the system by generating either fake packets or replaying PCAP files to simulate real traffic. To stress test packet movement through ONVM, a service chain of Speed Tester NFs can be run on a single machine, avoiding NIC overheads. Because there is no data copying and each NF handles its own sending and receiving of packets, we get high throughput even for long NF chains. Our measurements at left show that a chain of length two using our NF Direct communication mechanism has a maximum throughput of 32 million packets per second, while extending the chain to seven NFs only incurs a 10% throughput drop. Using indirect NF communication via the management layer sees decreasing performance as the manager's TX thread becomes a bottleneck.
```
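The chain numbers in the added paragraph are easy to sanity-check: a 10% drop relative to the 32 Mpps two-NF figure puts the seven-NF chain at roughly 28.8 Mpps.

```python
def chain_throughput_mpps(base_mpps, drop_fraction):
    """Throughput after a relative drop (e.g. from extending the NF chain)."""
    return base_mpps * (1.0 - drop_fraction)

# Figures from the text: 32 Mpps for a 2-NF chain with NF Direct,
# and a 10% drop when the chain grows to 7 NFs.
two_nf = 32.0
seven_nf = chain_throughput_mpps(two_nf, 0.10)   # ~28.8 Mpps
```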
```diff
 
-We have evaluated the performance of NetVM compared to SR-IOV and raw DPDK on a machine with four 10 Gbps NIC ports. DPDK provides the highest performance for software-based switching, but it does not directly support running NFs in virtual machines or containers, limiting its use for NFV deployments. SR-IOV allows a physical NIC to be virtualized and given to a virtual machine for direct access; we measure the performance of running DPDK inside a VM with SR-IOV. The forwarding rate achieved by NetVM for 64 byte packets significantly surpasses that of SR-IOV, nearly reaching the same level of performance as DPDK, even though NetVM must transfer packets from the host to the virtual machine that processes the packets.
+<img src="/res/onvm-web-traffic.png" width="288px" height="300px" style="float:right; padding-left:10px">
 
-The performance gap between SR-IOV and NetVM becomes even larger when there are multiple NFs that must process a packet. SR-IOV drops to nearly one third of the line rate when sending packets through a chain of two VMs, while NetVM can maintain the full line rate as long as there are sufficient CPU cores to dedicate to each VM.
+We have evaluated the performance of NetVM compared to SR-IOV and raw DPDK on a machine with eight 10 Gbps NIC ports. If web traffic is directed to a single NF, we observe a maximum throughput of 48 Gbps, at which point the NF itself (running a simple forwarding example) becomes the bottleneck. Starting a second replica of the NF allows OpenNetVM to automatically load balance traffic across the two NFs, while preserving flow affinity. This improves performance up to 68 Gbps, which we believe is the hardware limit on our server. Even if the traffic is sent through a chain of 5 NFs, we can still process 40 Gbps.
 
 For additional performance results, please see our [TNSM Journal article](http://faculty.cs.gwu.edu/~timwood/papers/15-TNSM-netvm.pdf).
```
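Load balancing "while preserving flow affinity", as described in the updated performance paragraph, is commonly done by hashing a flow's 5-tuple so every packet of a flow reaches the same replica. The sketch below is an illustrative model of that idea (the function names and dict-based packet are invented for this example, not openNetVM's implementation):

```python
import zlib

def flow_key(pkt):
    """5-tuple identifying a flow: (src ip, dst ip, src port, dst port, proto)."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def pick_replica(pkt, num_replicas):
    """Hash the 5-tuple to a replica index: every packet of one flow lands
    on the same NF replica, so per-flow state never has to be shared."""
    h = zlib.crc32(repr(flow_key(pkt)).encode())
    return h % num_replicas
```

Because the mapping depends only on the 5-tuple, adding a second replica spreads distinct flows across both NFs without ever splitting a single flow between them.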