Btrfs Performance Analysis
Machine Details
We are running tests on two different systems. The first uses a large,
fiber-attached array of disks, and the second uses only a single internal disk.
RAID System
- Four dual-core Pentium 4 (Paxville) sockets with HyperThreading
enabled.
- 32 GB of physical RAM, configured to run with only 4 GB to prevent
excessive caching.
- Four Emulex 2 Gb/s fiber adapters.
- IBM DS4500 storage controllers with four 2 Gb/s fiber connections.
- Ten Exp300 disk trays connected to the DS4500.
- Eight 17-disk hardware RAID-0 arrays.
- Total of 136 disks, 17 GB each.
- 280 GB per RAID-0 array.
- 2.2 TB LVM striped volume (64 kB chunk size) combining the eight RAID-0
arrays (the capacity arithmetic is checked in the sketch after this list).
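As referenced above, the capacity figures are mutually consistent. Here is a quick check using the nominal sizes from the list; the small gap between 289 GB raw and 280 GB per array is presumably formatting overhead, which is an inference rather than a stated fact.

    # Quick consistency check of the RAID-system capacity figures above.
    disks_per_array = 17
    disk_size_gb = 17                        # nominal per-disk capacity
    arrays = 8

    print(arrays * disks_per_array)          # 136 disks in total
    print(disks_per_array * disk_size_gb)    # 289 GB raw per array (~280 GB quoted)
    print(arrays * 280)                      # 2240 GB, i.e. the ~2.2 TB LVM volume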
Single-Disk System
- Four dual-core Pentium 4 (Paxville) sockets with HyperThreading
enabled.
- 32 GB of physical RAM, configured to run with only 2 GB to prevent
excessive caching.
- Single internal 70 GB SAS drive.
Benchmark Details
Currently, all the tests are done using the Flexible Filesystem Benchmark
(FFSB). Each run of the benchmark requires a profile, which describes the
initial state of the filesystem (the number and size of files and
directories) and the desired mix of operations to perform during the test.
It also specifies the duration of the test, the number of threads to use
when running the test, and other parameters.
We have currently defined five profiles with the following characteristics. On
the RAID system, each profile is run with 1, 16, and 128 threads. On the
single-disk system, each profile is run with 1, 8, and 32 threads. All
tests run for 300 seconds.
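As a concrete illustration, here is a sketch that writes out a minimal profile for the first test in the list below (large-file sequential reads on the RAID system). The directive names are assumptions modeled on the sample profiles shipped with FFSB and may differ between versions; the actual profile used for each run is preserved in the benchmark/ output directory described under Test Output.

    # Hypothetical FFSB profile for the large-file sequential-read test on the
    # RAID system. Directive names are assumptions based on FFSB's sample
    # profiles; the real profile is saved with each run's benchmark/ output.
    lines = [
        "time=300",                  # every test runs for 300 seconds
        "",
        "[filesystem0]",
        "location=/mnt/benchmark",   # hypothetical mount point
        "num_files=1024",            # start with 1024 files
        "min_filesize=104857600",    # 100 MB files on the RAID system
        "max_filesize=104857600",
        "[end0]",
        "",
        "[threadgroup0]",
        "num_threads=16",            # run with 1, 16, or 128 threads on the RAID system
        "read_weight=1",             # sequential whole-file reads only
        "read_size=104857600",       # each thread reads an entire file
        "read_blocksize=4096",       # using 4 kB reads
        "[end0]",
    ]

    with open("large_file_reads.ffsb", "w") as fh:
        fh.write("\n".join(lines) + "\n")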
- Large-File Sequential Reads (raid, single-disk)
- Start with 1024 files.
- 100 MB files on the raid system.
- 35 MB files on the single-disk system.
- Each thread reads an entire file using 4 kB reads.
- Large-File Creates (raid, single-disk)
- Start with an empty filesystem.
- Each thread creates a file using 4 kB writes.
- 1 GB files on the raid system.
- 100 MB files on the single-disk system.
- Random Reads (raid, single-disk)
- Start with 1024 files.
- 100 MB files on the raid system.
- 35 MB files on the single-disk system.
- Each thread reads a fixed amount of data from a random location in
one file using 4 kB reads.
- 5 MB reads on the raid system.
- 1 MB reads on the single-disk system.
- Random Writes (raid, single-disk)
- Start with 1024 files.
- 100 MB files on the raid system.
- 35 MB files on the single-disk system.
- Each thread writes a fixed amount of data at a random location in
one file using 4 kB writes.
- 5 MB writes on the raid system.
- 1 MB writes on the single-disk system.
- Mail Server (raid, single-disk)
- Start with one million files spread across one thousand directories.
- File sizes range from 1 kB to 1 MB.
- Each thread creates a new file, reads an entire existing file, or
deletes a file (the weighting is illustrated in the sketch after this list).
- 57% (4/7) reads
- 29% (2/7) creates
- 14% (1/7) deletes
- All reads and writes are done in 4 kB blocks.
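As noted in the mail-server item above, the operation percentages follow from integer weights (read:create:delete = 4:2:1); expressing the mix as weights is how FFSB-style profiles typically do it, though that phrasing is an assumption here. The sketch below is an illustrative model of weighted selection, not FFSB's actual dispatch code.

    import random

    # The 4:2:1 weights behind the mail-server mix reproduce the quoted
    # percentages; this selection loop is an illustrative model only.
    weights = {"read": 4, "create": 2, "delete": 1}
    total = sum(weights.values())
    for op, w in weights.items():
        print(f"{op}: {w}/{total} = {w / total:.0%}")   # read 57%, create 29%, delete 14%

    # A thread picking its next operation by weight might look like this:
    ops, wts = zip(*weights.items())
    print("next operation:", random.choices(ops, weights=wts, k=1)[0])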
Test Output
Each individual FFSB test (and the automation framework in which it runs)
produces a directory tree with a variety of output files. A typical tree
contains the following entries.
- analysis/
This directory contains the output from any profilers that ran during the
test. For most of the FFSB tests, we run iostat, mpstat, sar, and oprofile.
- benchmark/
This directory contains benchmark-specific output. For FFSB, it includes
the output of the FFSB command and the profile used for that run.
- config/ and proc/
These directories contain system configuration data from before and after
the test.
- summary.html
This page contains links to charts for the profiling data found in the
analysis directory, as well as any benchmark-specific charts.
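Because the directory and file names above are fixed, a batch of result trees can be inventoried mechanically. The sketch below is a hypothetical helper: it assumes one directory per run, laid out exactly as described, and the results root path is a placeholder.

    from pathlib import Path

    # Inventory FFSB result trees laid out as described above; assumes one
    # directory per run. Only the entries named in this section are checked.
    EXPECTED = ("analysis", "benchmark", "config", "proc")

    def inventory(results_root):
        for run in sorted(Path(results_root).iterdir()):
            if not run.is_dir():
                continue
            present = [d for d in EXPECTED if (run / d).is_dir()]
            analysis = run / "analysis"
            profilers = sorted(p.name for p in analysis.iterdir()) if analysis.is_dir() else []
            print(run.name, present, "summary:", (run / "summary.html").is_file(), profilers)

    inventory("/results/ffsb")   # hypothetical location of the result trees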
Test Results
- Initial Comparisons
This set of tests provides a baseline comparison among btrfs, ext3, ext4,
xfs, and jfs. In addition to testing btrfs with no extra mount options,
three runs were performed with the nodatacow option, with the nodatasum
option, and with both nodatacow and nodatasum (the kernel code was modified
to allow mounting with nodatacow without forcing nodatasum); a sketch of
this mount-option matrix follows this list.
- History Graphs
- This shows the results of btrfs over many different builds and kernel
levels. It is useful for seeing whether btrfs has improved or degraded
during a release cycle. Comparisons are also made to the other filesystems
so we can track how btrfs is doing relative to them.
- RAID System History Graphs
- Single Disk History Graphs
- Results List
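Returning to the initial comparisons above: the btrfs mount-option matrix can be scripted along the following lines. The device, mount point, and benchmark invocation are hypothetical placeholders; nodatacow and nodatasum are the mount options named above, and running nodatacow without nodatasum relies on the kernel modification mentioned there.

    import subprocess

    # The four btrfs mount-option combinations from the initial comparisons.
    # Device, mount point, and benchmark invocation are placeholders.
    DEVICE, MNT = "/dev/sdb1", "/mnt/benchmark"
    COMBINATIONS = [
        [],                            # btrfs with no extra mount options
        ["nodatacow"],                 # needs the patch that stops nodatacow forcing nodatasum
        ["nodatasum"],
        ["nodatacow", "nodatasum"],
    ]

    for opts in COMBINATIONS:
        cmd = ["mount", "-t", "btrfs"]
        if opts:
            cmd += ["-o", ",".join(opts)]
        subprocess.run(cmd + [DEVICE, MNT], check=True)
        subprocess.run(["ffsb", "large_file_reads.ffsb"], check=True)  # hypothetical invocation
        subprocess.run(["umount", MNT], check=True)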