Cloud benchmarks

This page contains the software and data presented in the following paper:

  • "Benchmarking Personal Cloud Storage" by Idilio Drago, Enrico Bocchi, Marco Mellia, Herman Slatman and Aiko Pras. In Proceedings of the 13th ACM Internet Measurement Conference. IMC 2013.

This paper is a continuation of our work on personal cloud storage. Previous results can be found on this page and on this page.

The slides of the presentation can be downloaded from here.

== Benchmark Scripts ==

The scripts are written in Python. All scripts require the '''netifaces''' and '''pcapy''' packages.
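
Both packages are available from PyPI (e.g., pip install netifaces pcapy). As a quick sanity check of the environment, a minimal snippet exercising both libraries might look as follows; the interface name wlan0 is only an example, and live capture usually requires root privileges:

 import netifaces
 import pcapy
 
 # List every local interface and its IPv4 addresses with netifaces.
 for iface in netifaces.interfaces():
     for addr in netifaces.ifaddresses(iface).get(netifaces.AF_INET, []):
         print(iface, addr['addr'])
 
 # Open a live capture handle with pcapy: 65536-byte snaplen,
 # promiscuous mode on, 100 ms read timeout.
 reader = pcapy.open_live('wlan0', 65536, 1, 100)
 print('datalink type:', reader.datalink())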

=== How to execute the benchmarks ===

 ./delta_encoding.py -i wlan0 --seed 123134 --bytes 10000 --test 3 -o /tmp/output/ --ftp 1.1.1.1 --port 2121 --user "user_name" --passwd "password" --folder="."

Important remarks:

1 - The folder ftp://user:pass@server/folder/ must be in a synchronized folder of the storage tool.

2 - The file delta_encoding.py must not be in a synchronized folder, otherwise the .pyc files created at run-time will disturb the experiment.

3 - The folder /tmp/output/ must not be in a synchronized folder, for the same reasons as above.

4 - Disable as many processes as possible on the benchmarking machine. This minimizes external interference with the test.

5 - If the storage system is running in a virtual machine, make sure the host machine is powerful enough to support the load. Check also whether the virtual machine limits or shapes the network traffic.
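
For reference, the sketch below shows how a capture of one benchmark step into its own pcap file can be implemented with pcapy. It is only an illustration of the mechanism: the function name, file naming scheme and fixed packet budget are assumptions, not the exact logic of delta_encoding.py, which runs the capture alongside the file operations of each step.

 import os
 import pcapy
 
 def capture_step(iface, step_name, out_dir, max_packets=10000):
     """Capture one benchmark step into its own pcap trace (illustrative)."""
     reader = pcapy.open_live(iface, 65536, 1, 100)
     dumper = reader.dump_open(os.path.join(out_dir, step_name + '.pcap'))
 
     def handler(header, data):
         dumper.dump(header, data)    # append the raw packet to the trace
 
     # Stop after max_packets packets; the real benchmark instead stops
     # the capture when the synchronization step is over.
     reader.loop(max_packets, handler)
 
 capture_step('wlan0', 'upload_10000B', '/tmp/output')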

=== Post-processing the data ===

The previous steps generate a pcap file per benchmark step. To produce the figures presented in the paper (Section 5), the pcap files need to be post-processed. The following scripts are examples that generate Figure 7 of the paper. They were developed by manually evaluating the traffic of each cloud storage tool: the typical flows of each tool are isolated by means of lists of server IP addresses, and statistics are calculated according to heuristics that determine the start and end of synchronization steps during the benchmarks.
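
As an illustration of this kind of post-processing, the sketch below reads a trace with pcapy, keeps only the packets exchanged with a given list of server IP addresses, and estimates the start and end of the synchronization step with a simple idle-gap heuristic. The placeholder address, the 5-second gap and the plain IPv4-over-Ethernet parsing are assumptions made for the example; the actual scripts use their own per-tool IP lists and heuristics.

 import socket
 import struct
 import pcapy
 
 SERVER_IPS = {'192.0.2.1'}   # placeholder: the real scripts ship per-tool lists
 IDLE_GAP = 5.0               # heuristic: sync is over after 5 s of silence
 
 def sync_interval(pcap_file):
     """Estimate (start, end) timestamps of a sync step (illustrative)."""
     times = []
 
     def handler(header, data):
         # Keep only IPv4-over-Ethernet packets from/to the server list.
         if len(data) < 34 or struct.unpack('!H', data[12:14])[0] != 0x0800:
             return
         src = socket.inet_ntoa(data[26:30])
         dst = socket.inet_ntoa(data[30:34])
         if src in SERVER_IPS or dst in SERVER_IPS:
             sec, usec = header.getts()
             times.append(sec + usec / 1e6)
 
     pcapy.open_offline(pcap_file).loop(-1, handler)   # read the whole trace
 
     if not times:
         return None
     # Start at the first server packet; end at the last packet seen
     # before the first silence longer than IDLE_GAP.
     end = times[0]
     for ts in times[1:]:
         if ts - end > IDLE_GAP:
             break
         end = ts
     return times[0], end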


== Traces ==

The traffic traces that generated the results in the paper can be downloaded from these links:

{| class="wikitable" style="text-align: center; width: 400px; height: 100px;"
|-
! scope="col" | Provider
! scope="col" | File Size
|-
! scope="row" | [http://traces.simpleweb.org/cloud_benchmarks/cloud_drive.tar.gz Amazon Cloud Drive]
| 197M
|-
! scope="row" | [http://traces.simpleweb.org/cloud_benchmarks/dropbox.tar.gz Dropbox]
| 88M
|-
! scope="row" | [http://traces.simpleweb.org/cloud_benchmarks/gdrive.tar.gz Google Drive]
| 70M
|-
! scope="row" | [http://traces.simpleweb.org/cloud_benchmarks/skydrive.tar.gz Microsoft SkyDrive]
| 69M
|-
! scope="row" | [http://traces.simpleweb.org/cloud_benchmarks/wuala.tar.gz LaCie Wuala]
| 63M
|}

These traces, together with the previous scripts, produce the results in Figure 7 of the paper. More details about this dataset can also be obtained in Chapter 7 of:

* Drago, I. (2013) [http://eprints.eemcs.utwente.nl/24136/ "Understanding and Monitoring Cloud Services"]. PhD thesis, University of Twente. CTIT Ph.D. thesis Series No. 13-279. ISBN 978-90-365-3577-9.

== Acceptable Use Policy ==

* When writing a paper using software or data from this page, please cite:
 @inproceedings{drago2013_imc,
   author        = {Idilio Drago and Enrico Bocchi and Marco Mellia and Herman Slatman and Aiko Pras},
   title         = {Benchmarking Personal Cloud Storage},
   booktitle     = {Proceedings of the 13th ACM Internet Measurement Conference},
   series        = {IMC'13},
   year          = {2013}
 }

== Paper abstract ==

Personal cloud storage services are data-intensive applications already producing a significant share of Internet traffic. Several solutions offered by different companies attract more and more people. However, little is known about each service's capabilities, architecture and - most of all - the performance implications of design choices. This paper presents a methodology to study cloud storage services. We apply our methodology to compare 5 popular offers, revealing different system architectures and capabilities. The performance implications of the different designs are assessed by executing a series of benchmarks. Our results show no clear winner, with all services suffering from some limitations or having potential for improvement. In some scenarios, the upload of the same file set can take seven times longer, wasting twice as much capacity. Our methodology and results are thus useful as both a benchmark and a guideline for system design.
