The Zoned Storage benchmark suite for ZNS SSDs and SMR HDDs.

License: GNU General Public License v2.0

ZBDBench: Benchmark Suite for Zoned Block Devices

ZBDBench is a collection of benchmarks for zoned storage devices (Zoned Namespace (ZNS) SSDs and Shingled Magnetic Recording (SMR) HDDs). It tests both the raw performance of a device and runs standard benchmarks for applications such as RocksDB (db_bench) and MySQL (sysbench).

Community

For help or questions about zbdbench usage (e.g. "how do I do X?"), see ZonedStorage.io, our Matrix chat, or Slack.

To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.

For release announcements and other discussions, please subscribe to this repository or join us on Matrix.

Dependencies

The benchmark tool requires Python 3.4 or newer. In addition to a working Python environment, the script requires the following to be installed:

  • Linux kernel 5.9 or newer

    • Check your loaded kernel version using: uname -a
  • nvme-cli

    • Ubuntu: sudo apt-get install nvme-cli
    • Fedora: sudo dnf -y install nvme-cli
  • blkzone and blkdiscard (available through util-linux)

    • Ubuntu: sudo apt-get install util-linux
    • Fedora: sudo dnf -y install util-linux-ng
    • CentOS: sudo yum -y install util-linux-ng
  • a valid container (podman) environment

    • If you do not have a container environment installed, please see this link
  • the following containers installed:

    • zfio - contains latest fio compiled with zone capacity support
    • zrocksdb - contains rocksdb with zenfs built-in
    • zzenfs - contains the zenfs tool to inspect the zenfs file-system

    The containers can be installed with: cd recipes/docker; sudo ./build.sh

    The container installation can be verified by listing the image: sudo podman images

  • matplotlib, pandas and openpyxl for graph plotting

    sudo pip install matplotlib
    sudo pip install pandas
    sudo pip install openpyxl
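The kernel requirement listed above can also be checked programmatically. The following is a minimal sketch (not part of zbdbench), assuming the usual "major.minor[.patch][-suffix]" release string:

```python
# Sketch: check that the running kernel meets the 5.9+ requirement for
# zoned block device support. Assumes a "major.minor[.patch][-suffix]" string.
import platform

def kernel_at_least(release: str, major: int, minor: int) -> bool:
    head = release.split("-")[0].split(".")
    return (int(head[0]), int(head[1])) >= (major, minor)

ok = kernel_at_least(platform.release(), 5, 9)
```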
    

Getting Started

The run.py script runs a predefined benchmark on a block device.

The block device does not have to be zoned: the workloads run on both zoned and conventional block devices.

The script performs a set of checks before running the benchmark, such as validating that the target is a block device, that it is not mounted, and that it is ready for use.

After the benchmark has run, the output is available in:

zbdbench_results/YYYYMMDDHHMMSS (where YYYYMMDDHHMMSS is the start time of the run)

Each benchmark has a report function, which creates a CSV file with benchmark-specific output. See the sections below for the CSV format of each benchmark.
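The per-benchmark CSV reports are plain files, so they are easy to post-process. A minimal sketch (not part of zbdbench; the result-directory name is a placeholder):

```python
# Sketch: load a zbdbench CSV report as a list of dicts keyed by the header.
# Column names follow the per-benchmark sections of this README.
import csv

def load_report(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# rows = load_report("zbdbench_results/YYYYMMDDHHMMSS/fio_zone_write.csv")
```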

To execute the 'fio_zone_mixed' benchmark, run:

sudo ./run.py -d /dev/nvmeXnY -b fio_zone_mixed

If you have the latest fio installed, you may skip the container installation and run the benchmarks using the system commands:

sudo ./run.py -d /dev/nvmeXnY -b fio_zone_mixed -c no

To list available benchmarks, run:

./run.py -l

WARNING

You need read/write permissions on the device or file you are targeting. Block devices are usually owned by the root user or the disk group. You can either change the ownership of the block device you are testing:

sudo chown myusername /dev/nvmeXnY

or make it world writable:

sudo chmod o+rw /dev/nvmeXnY

Or elevate the privileges when running zbdbench:

sudo ./run.py <args>

Please make sure you are familiar with the security implications of the option you choose. If you start a test on a different block device than the one you intended, you may lose data and your system may fail to boot.
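A pre-flight check in the spirit of zbdbench's own validation can avoid such mistakes. A minimal sketch (not zbdbench's actual code): refuse anything that is not a readable, writable block device.

```python
# Sketch: accept only targets that exist, are block devices, and are
# readable and writable by the current user.
import os
import stat

def can_benchmark(dev: str) -> bool:
    try:
        st = os.stat(dev)
    except FileNotFoundError:
        return False
    return stat.S_ISBLK(st.st_mode) and os.access(dev, os.R_OK | os.W_OK)
```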

Command Options

List available benchmarks:

./run.py -l

Run specific benchmark:

./run.py -b benchmark -d /dev/nvmeXnY

Run a fio_zone_xxx benchmark with the SPDK fio plugin (io_uring zoned bdev) in a container environment:

./run.py -b fio_zone_xxx --mq-deadline-scheduler -d /dev/nvmeXnY -s yes -c yes

Run a fio_zone_xxx benchmark with the SPDK fio plugin (io_uring zoned bdev) directly on the host system. zbdbench will check out and build SPDK (including fio) in the directory provided with the --spdk-path option:

./run.py -b fio_zone_xxx --mq-deadline-scheduler -d /dev/nvmeXnY -s yes -c no --spdk-path /dir/path

Regenerate a report (and its plots):

./run.py -b fio_zone_mixed -r zbdbench_results/YYYYMMDDHHMMSS

Regenerate plots from an existing CSV report:

./run.py -b fio_zone_throughput_avg_lat -p zbdbench_results/YYYYMMDDHHMMSS/fio_zone_throughput_avg_lat.csv

Override the device scheduler with none for a benchmark run:

./run.py -b benchmark -d /dev/nvmeXnY --none-scheduler

Override the device scheduler with mq-deadline for a benchmark run:

./run.py -b benchmark -d /dev/nvmeXnY --mq-deadline-scheduler

Benchmarks

  • All fio benchmarks set the none scheduler by default when the iodepth is 1.
  • For random fio workloads, the norandommap fio option is set.
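As an illustration of these defaults (this is not zbdbench's exact invocation, and the device name is a placeholder):

```shell
# Illustration only: an iodepth=1 random workload as the defaults above
# describe it -- none scheduler plus --norandommap.
echo none | sudo tee /sys/block/nvmeXnY/queue/scheduler
sudo fio --name=randwrite --filename=/dev/nvmeXnY --direct=1 --rw=randwrite \
    --norandommap --iodepth=1 --bs=4k --runtime=30 --time_based
```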

SPDK FIO plugin support:

  • The following benchmarks have SPDK FIO plugin support:
    • fio_zone_write
    • fio_zone_mixed
    • fio_zone_throughput_avg_lat
  • To add SPDK FIO plugin support to a new benchmark, see benchs/template.py for guidance.

fio_steady_state_performance

  • Puts a (conventional) drive into its steady state by completely filling it and then overwriting it. This puts conventional block devices into the state where on-device garbage collection is working to free up space.

  • (Random) read and (random) write performance of the drive is subsequently measured.

fio_zone_write

  • executes a fio workload that writes sequentially to 14 zones in parallel, writing 6 times the capacity of the device in total.

  • generated csv output (fio_zone_write.csv)

    1. written_gb: gigabytes written (GB)
    2. write_avg_mbs: average throughput (MB/s)

fio_zone_mixed

  • executes a fio workload that first preconditions the block device to steady state. Then rate-limited writes are issued while 4KB random reads are issued in parallel. The average latency of the 4KB random reads is reported.

  • generated csv output (fio_zone_mixed.csv)

    1. write_avg_mbs_target: target write throughput (MB/s)
    2. read_lat_avg_us: avg 4KB random read latency (us)
    3. write_avg_mbs: write throughput (MB/s)
    4. read_lat_us_avg_measured: avg 4KB random read latency (us)
    5. clat_*_us: Latency percentiles

    Note that (2) is only reported when write_avg_mbs_target and write_avg_mbs are equal. When they differ, the reported average latency is misleading, as the requested write throughput could not be achieved.
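Filtering the report down to rows where the target throughput was actually achieved can be done in one line. A minimal sketch, assuming rows parsed as dicts with the column names listed above:

```python
# Sketch: keep only fio_zone_mixed rows where the requested write throughput
# was achieved; read latency in the other rows is misleading.
def achieved_target(rows):
    return [r for r in rows
            if float(r["write_avg_mbs_target"]) == float(r["write_avg_mbs"])]
```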

fio_zone_throughput_avg_lat

  • Executes all combinations of the following workloads and reports the throughput and latency in the CSV report (note: 14 is a possible value for max_open_zones):

    • Sequential read, random read, sequential write
    • BS: 4K, 8K, 16K, 32K, 64K, 128K
    • Sequential write and sequential read specific:
      • Number of parallel jobs: 1, 2, 4, 8, 14, 16, 32, 64, 128 (skipping entries > max_open_zones)
      • QD: 1
      • ioengine: psync
    • Random read specific:
      • QD: 1, 2, 4, 8, 14, 16, 32, 64, 128
      • ioengine: io_uring

    For reads the drive is prepared with a write. The ZBD is reset before each run.

  • Generated csv output file is fio_zone_throughput_avg_lat.csv

    1. avg_lat_us: Average latency in µs for the specific run.
    2. throughput_MiBs: Throughput in MiB/s for the specific run.
    3. clat_p1_us - clat_p100us: completion latency percentiles in µs.
  • Generates multiple graphs that plot the behavior of throughput and latency.
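The workload matrix described above can be enumerated explicitly. A sketch with values taken from this README; the tuple layout (rw, bs, jobs, qd, ioengine) is an assumption for illustration:

```python
# Sketch: enumerate the fio_zone_throughput_avg_lat workload matrix,
# skipping parallel-job counts above max_open_zones (14 here).
from itertools import product

BS = ["4K", "8K", "16K", "32K", "64K", "128K"]
SCALING = [1, 2, 4, 8, 14, 16, 32, 64, 128]

def workload_matrix(max_open_zones=14):
    runs = []
    for rw, bs, jobs in product(("seq_write", "seq_read"), BS, SCALING):
        if jobs <= max_open_zones:  # skip entries > max_open_zones
            runs.append((rw, bs, jobs, 1, "psync"))
    for bs, qd in product(BS, SCALING):
        runs.append(("rand_read", bs, 1, qd, "io_uring"))
    return runs
```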

usenix_atc_2021_zns_eval

Executes RocksDB's db_bench according to the RocksDB evaluation section (5.2 RocksDB) of the paper 'ZNS: Avoiding the Block Interface Tax for Flash-based SSDs'.

Depending on whether the drive being benchmarked is a ZNS or a conventional device, different benchmarks are run.

  • For conventional devices the db_bench workload is run on the following filesystems: xfs and f2fs.
  • For ZNS devices the db_bench workload is run on the f2fs filesystem and with the ZenFS RocksDB plugin without an additional filesystem.

Note: the tests are designed to run on 2TB devices.

sysbench

Executes a sysbench workload within a percona-server MyRocks installation. For conventional devices the default filesystem is xfs, whereas for ZBD devices the benchmark is by default issued through ZenFS, the RocksDB plugin that enables direct access to zoned storage. If -x btrfs is supplied, the benchmark will run on zoned or conventional devices with btrfs as the filesystem.

The benchmark first bulk-loads the drive with a database of about 800GB: 10 million db-entries correspond to roughly 2GB of capacity, so a table size of 200,000,000 rows across 20 tables (4,000M db-entries) results in an 800GB database. After that, the following oltp workloads are run, each for 30 minutes, in the given order:

  • oltp_update_index.lua
  • oltp_update_non_index.lua
  • oltp_delete.lua
  • oltp_write_only.lua
  • oltp_insert.lua
  • oltp_read_write.lua
  • oltp_read_only.lua
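The sizing arithmetic above works out as follows (a sanity check, using only the numbers stated in this section):

```python
# Sketch: bulk-load sizing from the sysbench section.
# 10 million db-entries ~ 2GB; 20 tables of 200,000,000 rows each.
TABLES = 20
TABLE_SIZE = 200_000_000
GB_PER_10M_ENTRIES = 2

entries = TABLES * TABLE_SIZE                         # 4,000M db-entries
size_gb = entries // 10_000_000 * GB_PER_10M_ENTRIES  # 800GB
```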

Advanced Data Analysis Using SQLite

Benchmarks can optionally collect their CSV reports into a SQLite database. See data_collector/sqlite_data_collector.py.

The database file data-collection.sqlite3 is created or modified in the given output directory (by default zbdbench_results).

The database schema is kept simple. Each ZBDBench benchmarking run creates an entry in the zbdbench_run table, which collects general system information. Each run can generate multiple results, which are collected in a benchmark-specific table (e.g. fio_zone_throughput_avg_lat).
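Results can be joined with their run metadata directly from Python. A sketch following the schema described above (table and join column names are from this README; the remaining columns are benchmark-specific, so everything is selected):

```python
# Sketch: join benchmark results with run metadata from the zbdbench
# SQLite database. The default path is the default output directory.
import sqlite3

def results_with_run_info(db="zbdbench_results/data-collection.sqlite3"):
    con = sqlite3.connect(db)
    try:
        return con.execute(
            "SELECT * FROM fio_zone_throughput_avg_lat "
            "INNER JOIN zbdbench_run "
            "ON fio_zone_throughput_avg_lat.zbdbench_run_id = zbdbench_run.id"
        ).fetchall()
    finally:
        con.close()
```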

TODO: Add graph for the database layout

If you want to connect the SQLite database to Excel, you need to install the MySQL ODBC connector: https://dev.mysql.com/downloads/connector/odbc/ .

On macOS, also install iODBC (http://www.iodbc.org/dataspace/doc/iodbc/wiki/iodbcWiki/Downloads). Copy /usr/local/mysql-connector-odbc-8.0.12-macos10.13-x86-64bit to /Library/ODBC and adjust /Library/ODBC/odbcinst.ini (see https://stackoverflow.com/questions/52896893/macos-connector-mysql-odbc-driver-could-not-be-loaded-in-excel-for-mac-2016).

In the 'ODBC Data Source Administrator' a 'User DSN' needs to be created with the following keywords and values:

SERVER <IP>
NO_SCHEMA 1

Within Excel, on the 'Data' tab, you can use 'Get Data' > 'From Database (Microsoft Query)' with the specified 'User DSN' and the following query:

SELECT * FROM fio_zone_throughput_avg_lat INNER JOIN zbdbench_run ON fio_zone_throughput_avg_lat.zbdbench_run_id = zbdbench_run.id;
