Homework 2: Learning and Benchmarking RDMA

This assignment is due at 10:59 PM on Monday, October 21st (10/21/19).

The purpose of this assignment is to give you hands on experience with CloudLab and RDMA. This assignment has two sections:

  1. Getting Started with CloudLab.
  2. Benchmarking RDMA Performance.

The assignment concludes with details about submission and grading.

Getting Started with CloudLab

In this section of the homework, you will learn how to create and configure a CloudLab experiment. You will likely do this many times as you work on this homework.

CloudLab Setup and Background

  1. First, you must go to CloudLab and create a CloudLab account. After that, request to join the group for this class. (If you need to list a project name, you should use uic-cs-edu.)

  2. Next, you should read Sections 1-6 of the CloudLab documentation.

Instantiating a CloudLab profile

In this section, you will learn how to instantiate a CloudLab profile. For this homework, you will use the following CloudLab profile.

Note: Do not leave your CloudLab experiment instantiated unless you are using it! It is important to be a good citizen of CloudLab.

Cluster Setup

To use RDMA, Mellanox OFED must first be installed. For this homework, you will be using a CloudLab image that already has the necessary packages installed. As such, you should not need to install any packages on your cluster of servers to run your RDMA applications. However, there are other environment configurations that you will need to perform.

The following steps explain how to configure your CloudLab cluster:

  1. Clone your git repository into a folder that is mounted via NFS. I recommend /proj/uic-cs-edu-PG0/exp/<exp>/datastore/<uname>/git/<repo>/

  2. Configure Passwordless SSH across the cluster. For this step you can either choose the official or unofficial SSH configuration.
    • Official: Create a new CloudLab-specific public/private key pair on your local workstation. Deploy your public key through the CloudLab website. Then, via SSH, copy your private key to every server in your cluster.
    • Unofficial: Generate a new public/private key pair on one of the servers in your cluster. Then, on each machine in the cluster, copy this key pair to the .ssh directory.
      • Note: CloudLab may periodically remove these keys and cause you to need to re-deploy them.
  3. Set environment variables
    • Because the image you are using already has OFED installed, you should only need to run the env/bootstrap_env.sh command. This command will set necessary environment variables and will also perform the unofficial SSH configuration.
  4. Understand the steps outlined in env/README.md
    • Note: For cluster sizes larger than 2, Ansible and parallel-ssh are very useful tools for running the same commands on multiple machines.

Measuring RDMA Performance

Vanilla RDMA Performance Experiment (Optional)

As the first part of this assignment, you will re-run an experiment from the paper RoGUE: RDMA over Generic Unconverged Ethernet, using the code in the exps/rdma_seg directory. You will create your own version of Figure 3(a) from this paper. NOTE: your Figure 3(a) will not match the one in the paper because you are using a different CPU!

To run this part of the homework, do the following steps:

  1. Follow the steps outlined in env/README.md to generate configuration files, run multiple experiments, and plot the results.

Writeup

Once you have successfully run an existing RDMA experiment, you should prepare the following for your assignment submission:

  1. Two pdf files for the results of your experiments (plots.rdma_seg_cpu.segsize.pdf and rdma_seg_cpu.B_per_sig.lines.pdf).
  2. A section in the WRITEUP.md (writeup/WRITEUP.md) file in the root directory titled “Vanilla RDMA Performance Experiment”. This section should contain the following subsections:
    1. Description: A description of the experiment being run.
    2. Figures: An explanation of the figures (e.g., what the axes are and what the lines represent).
    3. Discussion: An explanation of why your figure does not match the one in RoGUE.

Custom RDMA Program Performance Experiment

Custom RDMA Program

As the first part of this experiment, you will write your own RDMA program. You are encouraged to use any online guides and repositories to develop this application.

The repository also contains example code intended to be used as starter code. In particular, the examples/the-geek-in-the-corner/02_read-write/ directory contains a simple RDMA program that sets up memory regions and queue pairs and sends a single verb. This is the recommended starting point for this homework, but you are not required to use it; you may use any language you would like, although C or Java is recommended. However, you are required to use your program to complete the custom RDMA benchmark below, and you are required to submit the source code for this program under the code directory.

Although you are allowed to decide how to implement your own program, you are required to support at least the following flags in your program:

  • -t <time>: The duration of the experiment in seconds.
  • -s <size>: The size of each segment/verb in bytes.
  • -w <window_size>: The number of outstanding verbs in a window to send at a time.

You should also do error checking as needed to reject invalid invocations.

Notably, to help demonstrate the benefits of windowing (and batching), the one significant feature that your program must implement is the -w <window_size> flag. In particular, to support this flag, your program must be capable of ensuring that there are always window_size outstanding verbs at any point in time.


Custom RDMA Benchmark

In the final part of this homework, you will use your custom RDMA program to benchmark the performance of a single CPU thread sending different size segments/verbs. To do so, you should save timestamps for when each verb was started and completed to an in-memory array that is written out after the experiment completes, before the program terminates. This will allow you to compute throughput and latency. Also, in your experiments, you are encouraged to consider extremely small values (e.g., 1) and extremely large values (e.g., 100K-1M).

Using your custom RDMA application and given an RDMA verb of your choice, you will then benchmark both the latency and throughput of sending different size segments for different window sizes. You should look at very small segments (e.g., 8B) and very large segments (e.g., 128MB). You should also look at window sizes of 1, 2, 4, and 8.

Given this, you will generate three figures, with a line for each window size:

  • Throughput versus segment size.
  • Tail latency (95th Percentile) versus segment size.
  • CPU utilization versus segment size.
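Once per-verb latencies are collected, the 95th percentile can be computed with a sort and a nearest-rank index. This helper is an illustrative sketch, not required code; note that it sorts the input array in place.

```c
#include <stdlib.h>

/* qsort comparator for doubles. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Return the p-th percentile (0 < p < 1) of n latencies,
 * using a simple nearest-rank index. Sorts lat in place. */
double percentile(double *lat, int n, double p)
{
    qsort(lat, n, sizeof(double), cmp_double);
    int idx = (int)(p * n);
    if (idx >= n)
        idx = n - 1;
    return lat[idx];
}
```

For example, for 100 latencies of 1..100 us, percentile(lat, 100, 0.95) returns 96.0.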

Ideally, you should be able to use the experimental framework from the Vanilla section to run these experiments and generate all of these figures. However, this is not required.

Writeup

Once you have successfully run your own RDMA experiment, you should prepare the following for your assignment submission:

  1. Three pdf files for the results of your experiments (one each for latency, throughput, and CPU utilization).
  2. A section in the WRITEUP.md (writeup/WRITEUP.md) file in the root directory titled “Custom RDMA Performance Experiment”. This section should contain the following subsections:
    1. Design: A brief description of the design of your program. An example of something to include in this subsection would be how your program detects the completion of an RDMA verb and how windowing and batching are implemented.
    2. Description: A description of the experiment being run.
    3. Figures: An explanation of the figures (e.g., what the axes are and what the lines represent).
    4. Discussion: An explanation of what implications the results of your experiment have on the design of RDMA programs.

Submission and Grading

Submission format

Your submission should consist of four folders in the same format as the starter code (code, env, exps, and writeup).

There should only be one writeup for this assignment. It should follow the formats specified in the individual sections.

Submission points

There are a total of 100 possible points for this assignment: 50 for running the existing benchmark tool and 50 for creating your own. If you choose not to run the existing benchmark tool, then that part is worth 0/0 points (i.e., it is excluded from your total). Otherwise, the points break down as follows:

  • 20 points: The WRITEUP.md section for the “Vanilla RDMA Performance Experiment”.
  • 30 points (2 * 15): The two figures from the “Vanilla RDMA Performance Experiment”.
  • 15 points: The WRITEUP.md section for the “Custom RDMA Performance Experiment”.
  • 30 points (3 * 10): The three figures from the “Custom RDMA Performance Experiment”.
  • 5 points: Code style for the custom RDMA program in code.

Submission Website

This homework assignment will be submitted via GitHub Classroom.

If you have any problems, please make a post on the course discussion website.
