
fio's Introduction

Overview and history

Fio was originally written to save me the hassle of writing special test case programs when I wanted to test a specific workload, either for performance reasons or to find/reproduce a bug. The process of writing such a test app can be tiresome, especially if you have to do it often. Hence I needed a tool that would be able to simulate a given I/O workload without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number of processes or threads involved, and they can each be using their own way of generating I/O. You could have someone dirtying large amounts of memory in a memory-mapped file, or maybe several threads issuing reads using asynchronous I/O. fio needed to be flexible enough to simulate both of these cases, and many more.

Fio spawns a number of threads or processes doing a particular type of I/O action as specified by the user. fio takes a number of global parameters, each inherited by the threads unless parameters given to them override that setting. The typical use of fio is to write a job file matching the I/O load one wants to simulate.
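For illustration, a minimal job file might look like this (job names and sizes are arbitrary, chosen only for the example):

; two jobs inheriting shared defaults from the [global] section
[global]
rw=randread
size=128m

[job1]

[job2]

Both job1 and job2 inherit rw and size from [global], and fio runs them as two independent workers.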

Source

Fio resides in a git repo; the canonical place is:

https://git.kernel.dk/cgit/fio/

Snapshots are frequently generated, and the :file:`fio-git-*.tar.gz` snapshots include the git metadata as well. Other tarballs are archives of official fio releases. Snapshots can be downloaded from:

https://brick.kernel.dk/snaps/

There are also two official mirrors. Both of these are automatically synced with the main repository when changes are pushed. If the main repo is down for some reason, either one of these is safe to use as a backup:

https://git.kernel.org/pub/scm/linux/kernel/git/axboe/fio.git

https://github.com/axboe/fio.git

Mailing list

The fio project mailing list is meant for anything related to fio including general discussion, bug reporting, questions, and development. For bug reporting, see REPORTING-BUGS.

An automated mail detailing recent commits is sent to the list at most daily. The list address is [email protected]; subscribe by sending an email to [email protected] with

subscribe fio

in the body of the email. Archives can be found here:

https://www.spinics.net/lists/fio/

or here:

https://lore.kernel.org/fio/

and archives for the old list can be found here:

http://maillist.kernel.dk/fio-devel/

Author

Fio was written by Jens Axboe <[email protected]> to enable flexible testing of the Linux I/O subsystem and schedulers. He got tired of writing specific test applications to simulate a given workload, and found that the existing I/O benchmark/test tools out there weren't flexible enough to do what he wanted.

Jens Axboe <[email protected]> 20060905

Maintainers

Fio is maintained by Jens Axboe <[email protected]> and Vincent Fu <[email protected]> - however, for reporting bugs please use the fio reflector or the GitHub page rather than emailing either of them directly. By using the public resources, others will be able to learn from the responses too, and chances are good that other members will be able to help with your inquiry as well.

Binary packages

Debian:
Starting with Debian "Squeeze", fio packages are part of the official Debian repository. https://packages.debian.org/search?keywords=fio .
Ubuntu:
Starting with Ubuntu 10.04 LTS (aka "Lucid Lynx"), fio packages are part of the Ubuntu "universe" repository. https://packages.ubuntu.com/search?keywords=fio .
Red Hat, Fedora, CentOS & Co:
Starting with Fedora 9/Extra Packages for Enterprise Linux 4, fio packages are part of the Fedora/EPEL repositories. https://packages.fedoraproject.org/pkgs/fio/ .
Mandriva:
Mandriva has integrated fio into their package repository, so installing on that distro should be as easy as typing urpmi fio.
Arch Linux:
An Arch Linux package is provided under the Community sub-repository: https://www.archlinux.org/packages/?sort=&q=fio
Solaris:
Packages for Solaris are available from OpenCSW. Install their pkgutil tool (http://www.opencsw.org/get-it/pkgutil/) and then install fio via pkgutil -i fio.
Windows:
Beginning with fio 3.31 Windows installers for tagged releases are available on GitHub at https://github.com/axboe/fio/releases. The latest installers for Windows can also be obtained as GitHub Actions artifacts by selecting a build from https://github.com/axboe/fio/actions. These require logging in to a GitHub account.
BSDs:
Packages for BSDs may be available from their binary package repositories. Look for a package "fio" using their binary package managers.

Building

Just type:

$ ./configure
$ make
$ make install

Note that GNU make is required. On BSDs it's available as devel/gmake in the ports tree; on Solaris it's in the SUNWgmake package. On platforms where GNU make isn't the default, type gmake instead of make.

Configure will print the enabled options. Note that on Linux based platforms, the libaio development packages must be installed to use the libaio engine. Depending on the distro, it is usually called libaio-devel or libaio-dev.
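For example (the package names are the ones mentioned above; the exact command depends on the distro):

$ sudo apt-get install libaio-dev     # Debian/Ubuntu
$ sudo yum install libaio-devel       # Red Hat/Fedora/CentOS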

For gfio, gtk 2.18 (or newer), associated glib threads, and cairo must be installed. gfio isn't built automatically and can be enabled with an --enable-gfio option to configure.
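As a sketch, building gfio would then look like:

$ ./configure --enable-gfio
$ make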

To build fio with a cross-compiler:

$ make clean
$ make CROSS_COMPILE=/path/to/toolchain/prefix

Configure will attempt to determine the target platform automatically.

It's possible to build fio for ESX as well; pass the --esx switch to configure.

Windows

The minimum versions of Windows for building/running fio are Windows 7/Windows Server 2008 R2. On Windows, Cygwin (https://www.cygwin.com/) is required in order to build fio. To create an MSI installer package install WiX from https://wixtoolset.org and run :file:`dobuild.cmd` from the :file:`os/windows` directory.

How to compile fio on 64-bit Windows:

  1. Install Cygwin (https://www.cygwin.com/). Install make and all packages starting with mingw64-x86_64. Ensure mingw64-x86_64-zlib is installed if you wish to enable fio's log compression functionality.
  2. Open the Cygwin Terminal.
  3. Go to the fio directory (source files).
  4. Run make clean && make -j.

To build fio for 32-bit Windows, ensure the -i686 versions of the previously mentioned -x86_64 packages are installed and run ./configure --build-32bit-win before make.
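Putting those 32-bit steps together:

$ ./configure --build-32bit-win
$ make clean && make -j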

It's recommended that once built or installed, fio be run in a Command Prompt or other 'native' console such as console2, since there are known to be display and signal issues when running it under a Cygwin shell (see mintty/mintty#56 and https://github.com/mintty/mintty/wiki/Tips#inputoutput-interaction-with-alien-programs for details).

Documentation

Fio uses Sphinx to generate documentation from the reStructuredText files. To build HTML-formatted documentation, run make -C doc html and point your browser at :file:`./doc/output/html/index.html`. To build the manual page, run make -C doc man and then man doc/output/man/fio.1. To see what other output formats are supported, run make -C doc help.

Platforms

Fio works on (at least) Linux, Solaris, AIX, HP-UX, OSX, NetBSD, OpenBSD, Windows, FreeBSD, and DragonFly. Some features and/or options may only be available on some of the platforms, typically because those features only apply to that platform (like the solarisaio engine, or the splice engine on Linux).

Some features are not available on FreeBSD/Solaris even if they could be implemented; I'd be happy to take patches for that. Examples are disk utility statistics and (I think) huge page support; support for those does exist in FreeBSD/Solaris.

Fio uses pthread mutexes for signaling and locking and some platforms do not support process shared pthread mutexes. As a result, on such platforms only threads are supported. This could be fixed with sysv ipc locking or other locking alternatives.

Other *BSD platforms are untested, but fio should work there almost out of the box. Since I don't do test runs or even compiles on those platforms, your mileage may vary. Sending me patches for other platforms is greatly appreciated. There's a lot of value in having the same test/benchmark tool available on all platforms.

Note that POSIX aio is not enabled by default on AIX. Messages like these:

Symbol resolution failed for /usr/lib/libc.a(posix_aio.o) because:
    Symbol _posix_kaio_rdwr (number 2) is not exported from dependent module /unix.

indicate one needs to enable POSIX aio. Run the following commands as root:

# lsdev -C -l posix_aio0
    posix_aio0 Defined  Posix Asynchronous I/O
# cfgmgr -l posix_aio0
# lsdev -C -l posix_aio0
    posix_aio0 Available  Posix Asynchronous I/O

POSIX aio should work now. To make the change permanent:

# chdev -l posix_aio0 -P -a autoconfig='available'
    posix_aio0 changed

Running fio

Running fio is normally the easiest part - you just give it the job file (or job files) as parameters:

$ fio [options] [jobfile] ...

and it will start doing what the jobfile tells it to do. You can give more than one job file on the command line; fio will serialize the running of those files. Internally that is the same as using the :option:`stonewall` parameter described in the parameter section.

If the job file contains only one job, you may as well just give the parameters on the command line. The command line parameters are identical to the job parameters, with a few extra that control global parameters. For example, for the job file parameter :option:`iodepth=2 <iodepth>`, the mirror command line option would be :option:`--iodepth 2 <iodepth>` or :option:`--iodepth=2 <iodepth>`. You can also use the command line for giving more than one job entry. For each :option:`--name <name>` option that fio sees, it will start a new job with that name. Command line entries following a :option:`--name <name>` entry will apply to that job, until there are no more entries or a new :option:`--name <name>` entry is seen. This is similar to the job file options, where each option applies to the current job until a new [] job entry is seen.
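For example, the following job file and command line (file name illustrative) describe the same single job:

[randjob]
rw=randread
iodepth=2
size=64m
filename=/tmp/fio.test

$ fio --name=randjob --rw=randread --iodepth=2 --size=64m --filename=/tmp/fio.test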

fio does not need to run as root, except if the files or devices specified in the job section require that. Some other options may also be restricted, such as memory locking, I/O scheduler switching, and decreasing the nice value.

If jobfile is specified as -, the job file will be read from standard input.
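For example, using a shell here-document (job values illustrative):

$ fio - <<EOF
[stdin-job]
filename=/tmp/fio-stdin.test
size=4m
rw=read
EOF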


fio's Issues

Failure to compile git revision 37fbd7e9

engines/rdma.c: In function ‘fio_rdmaio_setup’:
engines/rdma.c:1205: error: too few arguments to function ‘init_rand_seed’
make: *** [engines/rdma.o] Error 1

Cannot verify

I'm getting totally weird results when trying to verify the contents of a file. My job file, test.fio, is based on examples/surface-scan.fio:

[global]
thread
bs=64k
direct=1
ioengine=sync
verify=meta
verify_pattern=0xaaff
verify_interval=512
size=1m

[write]
filename=test.tmp
rw=write
do_verify=0

[verify]
stonewall
create_serialize=0
filename=test.tmp
rw=read
do_verify=1
continue_on_error=verify

I do this:
(1) Run "fio test.fio"; everything works as expected.
(2) Run "fio --section=write test.fio"; also good.
(3) Run "fio --section=verify test.fio"; also good.
(4) Overwrite a block in the middle of the file:
"dd if=/dev/zero of=test.tmp bs=512 seek=123 count=1"
(5) Run "fio --section=verify test.fio". Fio then says the following:

verify: Laying out IO file(s) (1 file(s) / 1MB)
verify: bad magic header 5cda, wanted acca at file test.tmp offset 0, length 47328155
verify: bad magic header c468, wanted acca at file test.tmp offset 65536, length 428267661
verify: bad magic header c381, wanted acca at file test.tmp offset 131072, length 353269872
verify: bad magic header 9ece, wanted acca at file test.tmp offset 196608, length 474993625
verify: bad magic header bae6, wanted acca at file test.tmp offset 262144, length 400783196
verify: bad magic header a6a8, wanted acca at file test.tmp offset 327680, length 312186069
verify: bad magic header aa25, wanted acca at file test.tmp offset 393216, length 314692932
verify: bad magic header c9c5, wanted acca at file test.tmp offset 458752, length 193190200
verify: bad magic header a39d, wanted acca at file test.tmp offset 524288, length 476001395
verify: bad magic header 79c2, wanted acca at file test.tmp offset 589824, length 431042360
verify: bad magic header e4c7, wanted acca at file test.tmp offset 655360, length 458333336
verify: bad magic header 3613, wanted acca at file test.tmp offset 720896, length 426215106
verify: bad magic header a51d, wanted acca at file test.tmp offset 786432, length 336098467
verify: bad magic header d2cb, wanted acca at file test.tmp offset 851968, length 57784921
verify: bad magic header 1468, wanted acca at file test.tmp offset 917504, length 496321165
verify: bad magic header 97c2, wanted acca at file test.tmp offset 983040, length 131846904

  • Why does fio decide to regenerate "test.tmp" in this case? Can I prevent it from doing that?
  • Why does the file end up containing seemingly random garbage when the verify pattern should be a different one?
  • The reported verify block lengths are random garbage. (The offsets are only meaningful "by accident".) It seems that inside verify_io_u(), hdr_inc should be used instead of hdr->len in most places. The pattern seems to repeat in other places too, though.

Thanks!

Issue with --refill-buffers option

Hello all,
Lately I have been running into an issue mentioned below when I use refill-buffers. Here are the details.

Environment:
root@localhost:~# fio --version
fio-2.1.7

root@localhost:~# cat /proc/version
Linux version 3.2.0-23-generic (buildd@crested) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu4) ) #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012

Repro Steps:
I am using below command to generate compressible workload:

root@localhost:~# /usr/local/bin/fio --ioengine=libaio --name=job1 --direct=1 --verify=md5 --rw=write --size=350m --do_verify=0 --buffer_compress_percentage=60 --refill_buffers=1 --filename=/tmp/testing-11 --bs=64k --iodepth=8
job1: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=8
fio-2.1.7
Starting 1 process
Jobs: 1 (f=1): [W] [-.-% done] [0KB/9342KB/0KB /s] [0/145/0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=8788: Mon Dec 22 20:28:18 2014
write: io=358400KB, bw=10971KB/s, iops=171, runt= 32667msec
slat (usec): min=13, max=110863, avg=203.78, stdev=2461.41
clat (msec): min=2, max=479, avg=45.88, stdev=32.48
lat (msec): min=2, max=479, avg=46.09, stdev=32.63
clat percentiles (msec):
| 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 24],
| 30.00th=[ 28], 40.00th=[ 33], 50.00th=[ 38], 60.00th=[ 44],
| 70.00th=[ 51], 80.00th=[ 64], 90.00th=[ 84], 95.00th=[ 103],
| 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 449], 99.95th=[ 449],
| 99.99th=[ 482]
bw (KB /s): min= 2024, max=19200, per=100.00%, avg=10972.35, stdev=3787.40
lat (msec) : 4=0.12%, 10=1.16%, 20=11.86%, 50=55.55%, 100=25.84%
lat (msec) : 250=5.34%, 500=0.12%
cpu : usr=12.34%, sys=3.36%, ctx=1097, majf=0, minf=35
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=5600/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
WRITE: io=358400KB, aggrb=10971KB/s, minb=10971KB/s, maxb=10971KB/s, mint=32667msec, maxt=32667msec

Disk stats (read/write):
dm-0: ios=0/5562, merge=0/0, ticks=0/240504, in_queue=240504, util=98.98%, aggrios=0/5568, aggrmerge=0/50, aggrticks=0/239644, aggrin_queue=239604, aggrutil=98.87%
sda: ios=0/5568, merge=0/50, ticks=0/239644, in_queue=239604, util=98.87%

Here I am running read workload to verify the data written and command fails

root@localhost:~# /usr/local/bin/fio --ioengine=libaio --name=job1 --direct=1 --verify=md5 --rw=read --size=350m --do_verify=1 --filename=/tmp/testing-11 --bs=64k --iodepth=8
job1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=8
fio-2.1.7
Starting 1 process
verify: bad magic header f2d6, wanted acca at file /tmp/testing-11 offset 524288, length 65536
verify: bad magic header 8c9e, wanted acca at file /tmp/testing-11 offset 589824, length 65536
fio: pid=8791, err=84/file:io_u.c:1798, func=io_u_queued_complete, error=Invalid or incomplete multibyte or wide character

job1: (groupid=0, jobs=1): err=84 (file:io_u.c:1798, func=io_u_queued_complete, error=Invalid or incomplete multibyte or wide character): pid=8791: Mon Dec 22 20:28:34 2014
read : io=655360B, bw=6956.6KB/s, iops=173, runt= 92msec
slat (usec): min=35, max=655, avg=128.38, stdev=162.26
clat (msec): min=14, max=75, avg=43.51, stdev=16.00
lat (msec): min=15, max=75, avg=43.63, stdev=15.93
clat percentiles (usec):
| 1.00th=[14784], 5.00th=[14784], 10.00th=[14784], 20.00th=[23680],
| 30.00th=[45312], 40.00th=[45312], 50.00th=[45824], 60.00th=[45824],
| 70.00th=[45824], 80.00th=[45824], 90.00th=[46848], 95.00th=[76288],
| 99.00th=[76288], 99.50th=[76288], 99.90th=[76288], 99.95th=[76288],
| 99.99th=[76288]
lat (msec) : 20=6.25%, 50=50.00%, 100=6.25%
cpu : usr=30.77%, sys=0.00%, ctx=7, majf=0, minf=200
IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=56.2%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16/w=0/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
READ: io=640KB, aggrb=6956KB/s, minb=6956KB/s, maxb=6956KB/s, mint=92msec, maxt=92msec

Disk stats (read/write):
dm-0: ios=15/0, merge=0/0, ticks=284/0, in_queue=340, util=21.52%, aggrios=16/0, aggrmerge=0/0, aggrticks=692/0, aggrin_queue=692, aggrutil=33.21%
sda: ios=16/0, merge=0/0, ticks=692/0, in_queue=692, util=33.21%

The same set of commands works fine if I run the write workload without --refill-buffers:

root@localhost:~# /usr/local/bin/fio --ioengine=libaio --name=job1 --direct=1 --verify=md5 --rw=write --size=350m --do_verify=0 --buffer_compress_percentage=60 --filename=/tmp/testing-11 --bs=64k --iodepth=8
job1: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=8
fio-2.1.7
Starting 1 process
Jobs: 1 (f=1): [W] [-.-% done] [0KB/7460KB/0KB /s] [0/116/0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=8810: Mon Dec 22 20:31:47 2014
write: io=358400KB, bw=9670.3KB/s, iops=151, runt= 37062msec
slat (usec): min=13, max=93531, avg=236.52, stdev=2123.37
clat (msec): min=2, max=339, avg=52.07, stdev=29.35
lat (msec): min=3, max=340, avg=52.31, stdev=29.49
clat percentiles (msec):
| 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 24], 20.00th=[ 29],
| 30.00th=[ 35], 40.00th=[ 39], 50.00th=[ 45], 60.00th=[ 53],
| 70.00th=[ 63], 80.00th=[ 74], 90.00th=[ 89], 95.00th=[ 105],
| 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 223], 99.95th=[ 225],
| 99.99th=[ 338]
bw (KB /s): min= 4749, max=17664, per=100.00%, avg=9675.30, stdev=2994.73
lat (msec) : 4=0.18%, 10=0.75%, 20=5.45%, 50=50.84%, 100=36.89%
lat (msec) : 250=5.88%, 500=0.02%
cpu : usr=12.52%, sys=3.85%, ctx=1087, majf=0, minf=35
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=5600/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
WRITE: io=358400KB, aggrb=9670KB/s, minb=9670KB/s, maxb=9670KB/s, mint=37062msec, maxt=37062msec

Disk stats (read/write):
dm-0: ios=0/5607, merge=0/0, ticks=0/268028, in_queue=268312, util=97.04%, aggrios=0/5521, aggrmerge=0/100, aggrticks=0/263024, aggrin_queue=262984, aggrutil=96.98%
sda: ios=0/5521, merge=0/100, ticks=0/263024, in_queue=262984, util=96.98%
root@localhost:~# /usr/local/bin/fio --ioengine=libaio --name=job1 --direct=1 --verify=md5 --rw=read --size=350m --do_verify=1 --filename=/tmp/testing-11 --bs=64k --iodepth=8
job1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=8
fio-2.1.7
Starting 1 process
Jobs: 1 (f=1): [V] [100.0% done] [6618KB/0KB/0KB /s] [103/0/0 iops] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=8813: Mon Dec 22 20:32:47 2014
read : io=358400KB, bw=350000MB/s, iops=5600.0K, runt= 1msec
slat (usec): min=0, max=40297, avg=246.94, stdev=1761.44
clat (usec): min=0, max=491492, avg=62248.51, stdev=41876.67
lat (usec): min=0, max=491738, avg=62456.24, stdev=41946.60
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[21120], 20.00th=[34048],
| 30.00th=[41216], 40.00th=[46848], 50.00th=[54528], 60.00th=[62720],
| 70.00th=[74240], 80.00th=[87552], 90.00th=[112128], 95.00th=[134144],
| 99.00th=[187392], 99.50th=[230400], 99.90th=[387072], 99.95th=[477184],
| 99.99th=[489472]
bw (KB /s): min= 5120, max= 5120, per=0.00%, avg=5120.00, stdev= 0.00
lat (usec) : 2=7.00%
lat (msec) : 2=0.02%, 4=0.02%, 10=0.05%, 20=2.34%, 50=35.09%
lat (msec) : 100=40.29%, 250=14.82%, 500=0.38%
cpu : usr=0.00%, sys=0.00%, ctx=1855, majf=0, minf=168
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=5600/w=0/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
READ: io=358400KB, aggrb=350000MB/s, minb=350000MB/s, maxb=350000MB/s, mint=1msec, maxt=1msec

Disk stats (read/write):
dm-0: ios=5583/4, merge=0/0, ticks=365556/644, in_queue=366676, util=97.06%, aggrios=5560/3, aggrmerge=40/1, aggrticks=364104/448, aggrin_queue=364508, aggrutil=97.01%
sda: ios=5560/3, merge=40/1, ticks=364104/448, in_queue=364508, util=97.01%

Please help me with this bug.

Files read/write to a different device via network.

Hi,

I'm trying to create a CIFS-like test in order to test a couple of things.

I have 2 Linux devices connected to a L2 network. I'm controlling both via console.
I want to create files on device 1 and transfer them to device 2 simultaneously using all the threads available.
I can create the files but is there a way to transfer them to the other device using the fio tool?

Can anyone help me with this?

Misleading error message for 'cpus_allowed' option

The error message displayed by the option parser for the 'cpus_allowed' parameter is misleading.

My system has 4 processors:

$ grep -c processor /proc/cpuinfo
4

If I provide a high number for cpus_allowed, I get the error message:

$ fio --filename=/tmp/foo.fio --cpus_allowed=5 --name=job1
fio: CPU 5 too large (max=4)
fio: failed parsing cpus_allowed=5

If it says "(max=4)", I would expect it to accept the value "4" if I want to use the last CPU of the system (even though we know that the CPUs are generally numbered starting with 0), but that's not what happens:

$ fio --name=global --filename=/tmp/foo.fio --cpus_allowed=4 --name=job1
job1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.3
Starting 1 process
fio: pid=24942, err=22/file:backend.c:1192, func=cpu_set_affinity, error=Invalid argument

So when setting the affinity the CPU numbering really does start from 0; 3 would be the right value in this case, and the following command works as expected:

$ fio --name=global --filename=/tmp/foo.fio --cpus_allowed=3 --name=job1

More details:
fio version: fio-2.2.7-11-g1d6d
OS version: Linux kleberpc 3.13.0-49-generic #81-Ubuntu SMP Tue Mar 24 19:29:48 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Terse format should generate header line and reduce metadata in output

At the moment, as wonderful as it is to get "raw-ish" output, I think there are a few issues with it. First, it would be ideal for pairs like 1.00%=6112 to instead be just 6112, but in a column identified by a descriptive column heading. Really, the 1.00%=<value> is a metadata+data pairing, i.e. the field is described in-line, so to speak. From the standpoint of rapid data processing this is less than ideal.

It may not be necessary to force the column header to print, but having it as an option would be great. If the format were fixed, and each field were just a value with a well-defined meaning, these mappings would be unnecessary, and at the same time each field in each row would have a firm description.

Ideally, I would love to be able to toggle writing of the header row, even if the output is just one long line. This would make life much easier for me and others who use statistical tools, like R for example. Even with Python, AWK, and other tools, processing would be simplified.

Fio replay blktrace file failed when blktrace file is larger than 3GB.

Fio replay blktrace file failed when blktrace file is larger than 3GB.
cmd:
fio --name=replay --filename=/dev/md1 --read_iolog=md1.bin --ioengine=libaio --iodepth=1

When md1.bin is larger than 3GB, replaying the I/O makes the system crash, and the reason is out of memory. But it works fine when the read_iolog is 100MB.

Can anyone help me? Thank you!

How to tell 'iops' exponent from json output?

I'm currently running fio with the --status-interval=x and --output-format=json options. I get a nice large json dump which is very easy to navigate, but I can't seem to find any indication of the current exponent of the 'iops' value returned.

Example:

json <-- one status interval json dump

json['jobs'][0]['write']['iops'] == 28

Another is:

json['jobs'][0]['read']['iops'] == 11.11

The second one seems like it's probably measured in K iops, but for the first one I have no real indication if it's actual iops or K iops.

Although I'm fairly new to fio (and the product I'm measuring), I'm not 100% sure whether either is a reasonable unit for this value, which is why having a sure-fire way of figuring out those units would be very useful :)

aggrb is showing wrong value

I have built fio from master and ran a 4k sequential write, and its aggrb output is not the sum of all threads' bw.

aggrb should be => 8904.1 + 13961 + 13953 + 13548 = 50366.1 KB/s v/s aggrb=35619KB/s

[root@fractal-c92e fio-zfs]# cat FS_4k_streaming_writes.org

seqwrite4: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.1.14-17-g8671
Starting 4 processes
seqwrite4: Laying out IO file(s) (1 file(s) / 1024MB)
FS_4k_streaming_writes: Laying out IO file(s) (1 file(s) / 1024MB)
FS_4k_streaming_writes: Laying out IO file(s) (1 file(s) / 1024MB)
FS_4k_streaming_writes: Laying out IO file(s) (1 file(s) / 1024MB)

seqwrite4: (groupid=0, jobs=1): err= 0: pid=28438: Wed Dec 3 17:32:29 2014
write: io=1024.0MB, bw=8904.1KB/s, iops=2226, runt=117752msec
clat (usec): min=93, max=2967.4K, avg=445.96, stdev=11094.00
lat (usec): min=94, max=2967.4K, avg=446.34, stdev=11094.01
clat percentiles (usec):
| 1.00th=[ 99], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 107],
| 30.00th=[ 125], 40.00th=[ 133], 50.00th=[ 155], 60.00th=[ 213],
| 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 290], 95.00th=[ 418],
| 99.00th=[ 3568], 99.50th=[ 4320], 99.90th=[25216], 99.95th=[59648],
| 99.99th=[284672]
bw (KB /s): min= 2, max=19456, per=27.90%, avg=9939.32, stdev=5149.18
lat (usec) : 100=2.73%, 250=77.83%, 500=14.70%, 750=1.27%, 1000=0.15%
lat (msec) : 2=0.26%, 4=2.45%, 10=0.36%, 20=0.13%, 50=0.06%
lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01%
lat (msec) : >=2000=0.01%
cpu : usr=0.97%, sys=16.13%, ctx=334283, majf=0, minf=31
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
FS_4k_streaming_writes: (groupid=0, jobs=1): err= 0: pid=28439: Wed Dec 3 17:32:29 2014
write: io=1024.0MB, bw=13961KB/s, iops=3490, runt= 75106msec
clat (usec): min=106, max=544667, avg=283.11, stdev=3149.32
lat (usec): min=106, max=544668, avg=283.50, stdev=3149.33
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 183], 10.00th=[ 193], 20.00th=[ 201],
| 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233],
| 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 302], 95.00th=[ 370],
| 99.00th=[ 692], 99.50th=[ 860], 99.90th=[ 1384], 99.95th=[ 2096],
| 99.99th=[179200]
bw (KB /s): min= 2440, max=19016, per=39.63%, avg=14114.48, stdev=3714.64
lat (usec) : 250=73.06%, 500=23.38%, 750=2.89%, 1000=0.35%
lat (msec) : 2=0.27%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 250=0.01%, 500=0.01%, 750=0.01%
cpu : usr=1.67%, sys=26.39%, ctx=423258, majf=0, minf=33
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
FS_4k_streaming_writes: (groupid=0, jobs=1): err= 0: pid=28440: Wed Dec 3 17:32:29 2014
write: io=1024.0MB, bw=13953KB/s, iops=3488, runt= 75151msec
clat (usec): min=107, max=546482, avg=283.31, stdev=3146.09
lat (usec): min=107, max=546482, avg=283.68, stdev=3146.10
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 183], 10.00th=[ 193], 20.00th=[ 199],
| 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233],
| 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 306], 95.00th=[ 370],
| 99.00th=[ 692], 99.50th=[ 876], 99.90th=[ 1400], 99.95th=[ 2192],
| 99.99th=[179200]
bw (KB /s): min= 2416, max=19224, per=39.59%, avg=14102.28, stdev=3707.82
lat (usec) : 250=72.91%, 500=23.55%, 750=2.87%, 1000=0.36%
lat (msec) : 2=0.26%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 250=0.01%, 500=0.01%, 750=0.01%
cpu : usr=1.55%, sys=26.53%, ctx=423709, majf=0, minf=31
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
FS_4k_streaming_writes: (groupid=0, jobs=1): err= 0: pid=28441: Wed Dec 3 17:32:29 2014
write: io=1024.0MB, bw=13548KB/s, iops=3387, runt= 77396msec
clat (usec): min=95, max=3491.5K, avg=291.91, stdev=7521.86
lat (usec): min=95, max=3491.5K, avg=292.29, stdev=7521.86
clat percentiles (usec):
| 1.00th=[ 105], 5.00th=[ 157], 10.00th=[ 189], 20.00th=[ 199],
| 30.00th=[ 205], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233],
| 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 298], 95.00th=[ 354],
| 99.00th=[ 684], 99.50th=[ 820], 99.90th=[ 1400], 99.95th=[ 2160],
| 99.99th=[191488]
bw (KB /s): min= 47, max=28864, per=40.03%, avg=14258.43, stdev=4304.14
lat (usec) : 100=0.07%, 250=73.82%, 500=23.14%, 750=2.34%, 1000=0.32%
lat (msec) : 2=0.25%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 250=0.01%, 500=0.01%, 750=0.01%, >=2000=0.01%
cpu : usr=1.55%, sys=26.21%, ctx=425174, majf=0, minf=31
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=35619KB/s, minb=8904KB/s, maxb=13961KB/s, mint=75106msec, maxt=117752msec

[root@fractal-c92e fio-zfs]# cat fsmb.fio

[global]
directory=/rzpool/ds1

[FS_4k_streaming_writes]
name=seqwrite4
numjobs=4
rw=write
bs=4k
size=1g
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

fio could not run due to libgfapi.so libglusterfs.so not found

Hi,

I am trying to run fio (compiled from the master branch) using --ioengine=gfapi on gluster v3.6.1, CentOS v6.6, and it is saying "libgfapi.so libglusterfs.so" not found.

I created symbolic links as below and it no longer gives the error. I am not sure that this is the right fix for the issue.

[root@fractal-c92e fio-gluster]# ls -l /usr/lib64/libgfapi.so*
lrwxrwxrwx 1 root root 26 Nov 29 16:57 /usr/lib64/libgfapi.so -> /usr/local/lib/libgfapi.so
lrwxrwxrwx 1 root root 28 Nov 29 16:58 /usr/lib64/libgfapi.so.0 -> /usr/local/lib/libgfapi.so.0
[root@fractal-c92e fio-gluster]# ls -l /usr/lib64/libglusterfs.so*
lrwxrwxrwx 1 root root 30 Nov 29 17:00 /usr/lib64/libglusterfs.so -> /usr/local/lib/libglusterfs.so
lrwxrwxrwx 1 root root 32 Nov 29 16:59 /usr/lib64/libglusterfs.so.0 -> /usr/local/lib/libglusterfs.so.0

Thanks.

libaio library not found on ESXi

Ryan H. added support for ESXi, but libaio gets linked when found on a system. ESXi doesn't have libaio available.

$ ./configure --esx
$ make
$ scp fio root@esxhost:/scratch/

esxhost # ./fiobin
./fiobin: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

Is it possible to write to the second half of the device using --size flag?

I am trying to write one pattern to one half of the device and a different pattern to the other half. This works fine with the --offset option. However, I need to calculate the offset explicitly depending on my disk size.
To avoid this, I am wondering if I can use the --size flag in fio.
I noticed that --size=50% writes to the first 50% of the device. Since fio already seems to be calculating the disk size and taking 50% of it, is it possible to tell the tool to write to the second half of the disk, or to start writing the size backwards? Say something like --size=-50% or something?
Please suggest.
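Until such a syntax exists, one possible workaround (a sketch assuming a Linux system; the device name is illustrative) is to compute the halfway offset outside of fio:

$ DEV=/dev/sdX                                  # illustrative device name
$ HALF=$(( $(blockdev --getsize64 $DEV) / 2 ))  # halfway point in bytes
$ fio --name=secondhalf --filename=$DEV --offset=$HALF --size=$HALF --rw=write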

Passing more than one profile definition file on command line results in incomplete reporting

I did not see other issues that seemed similar to this one, so I am opening it, in case this is in fact not just me doing something stupid but is a bug. I have not dug into the source enough to confirm either way.

What I observed is that when I run fio with more than one job definition file, and those files specify numjobs=X where X > 1, all reporting appears correct for jobs in the first file, but only the first job in the second file appears to be reported on. What is strange is that if I have, say, two files - let's assume it is actually one file and I repeat its name twice on the command line, like this: ENV_NUM_JOBS=4 ./fio --output-format=json ./reportingtest.fio ./reportingtest.fio > /tmp/log - the resulting JSON structure shows 8 jobs, 4 from each instance of the file passed to fio. This file is fairly basic:

[global]
include global-include.fio

[warmup_noop]
stonewall
bs=128k
rw=write
loops=1
time_based=1
runtime=5
numjobs=${ENV_NUM_JOBS}

This is a stupid test, that's all. What I get is all jobs in the first file are reported correctly, and ONLY the first job in the second file is reported correctly, but the other three are all zeroes across the board. Maybe I am indeed doing something stupid without realizing it, or perhaps it is a bug.

Unable to get verify_only option working.

As per the documentation, the verify_only option is supposed to read back and verify the data, so I expect to see only reads when this option is used. However, I see that it is actually writing the data (as shown below; a sketch of the documented two-step flow follows the output).

verify_only Do not perform specified workload---only verify data still matches previous invocation of this workload. This option allows one to check data multiple times at a later date without overwriting it. This option makes sense only for workloads that write data, and does not support workloads with the time_based option set.

# fio --name=job1 --filename=/tmp/aa --size=1M --rw=write --ioengine=libaio --direct=1 --verify_dump=1 --verify_only
job1: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.8
Starting 1 process

job1: (groupid=0, jobs=1): err= 0: pid=4316: Mon Jun 8 15:38:40 2015
write: io=1024.0KB, bw=1000.0MB/s, iops=256000, runt= 1msec
clat (usec): min=133118, max=133189, avg=133156.36, stdev=19.06
lat (usec): min=0, max=5, avg= 0.07, stdev= 0.38
clat percentiles (msec):
| 1.00th=[ 135], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 135],
| 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 135],
| 70.00th=[ 135], 80.00th=[ 135], 90.00th=[ 135], 95.00th=[ 135],
| 99.00th=[ 135], 99.50th=[ 135], 99.90th=[ 135], 99.95th=[ 135],
| 99.99th=[ 135]
lat (msec) : 250=100.00%
cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=28
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=256/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0

Run status group 0 (all jobs):
WRITE: io=1024KB, aggrb=1000.0MB/s, minb=1000.0MB/s, maxb=1000.0MB/s, mint=1msec, maxt=1msec

Disk stats (read/write):
dm-1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
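For reference, a sketch of the two-step flow the documentation describes, assuming (not confirmed by this report) that both invocations keep the same workload options and an explicit verify method:

# step 1: write the data once, skipping the immediate verify pass
$ fio --name=job1 --filename=/tmp/aa --size=1M --rw=write --ioengine=libaio --direct=1 --verify=md5 --do_verify=0
# step 2: later, re-check the data without rewriting it
$ fio --name=job1 --filename=/tmp/aa --size=1M --rw=write --ioengine=libaio --direct=1 --verify=md5 --verify_only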

glfs_init failed. Is glusterd running on brick?

I am trying to run the latest master branch fio on gluster v3.6.1, CentOS 6.6, and I am getting the errors below.

[root@fractal-c92e fio-gluster]# fio $args --output=gluster_zfs.log --group_reporting --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio
glfs_init failed. Is glusterd running on brick?
glfs_init failed. Is glusterd running on brick?
glfs_init failed. Is glusterd running on brick?
glfs_init failed. Is glusterd running on brick?
glfs_init failed. Is glusterd running on brick?
glfs_init failed. Is glusterd running on brick?

I stopped the gluster daemon and ran fio: same result.

I started the gluster daemon and ran fio: same result.

What could be the reason for this error?

Thanks.

Strong evidence suggesting there is a memory leak while using the --verify parameter.

While comparing the memory consumption of the following command on a stock RHEL 6.5 machine over time to the equivalent without the --verify parameter, there is a considerable difference in memory consumption over time. Attempting different verification methods did not seem to change the rate of memory leakage: crc32c, crc64, md5, sha1, etc. all had the same effect. Lowering the iodepth slowed the effect, but it also lowered the maximum number of IOPS being processed.


./fio --ioengine=libaio --direct=1 --rw=randrw --iodepth=256 --time_based --bs=4k --runtime 24h --verify md5 --verify_fatal=1 --name=output0 --filename=/dev/mapper/mpathdof --name=output1 --filename=/dev/mapper/mpathdog --name=output2 --filename=/dev/mapper/mpathdoh --name=output3 --filename=/dev/mapper/mpathdod --name=output4 --filename=/dev/mapper/mpathdoe --name=output5 --filename=/dev/mapper/mpathdoa --name=output6 --filename=/dev/mapper/mpathdoc --name=output7 --filename=/dev/mapper/mpathdnw --name=output8 --filename=/dev/mapper/mpathdob --name=output9 --filename=/dev/mapper/mpathdnz --name=output10 --filename=/dev/mapper/mpathdny --name=output11 --filename=/dev/mapper/mpathdnx --name=output12 --filename=/dev/mapper/mpathdnv --name=output13 --filename=/dev/mapper/mpathdnt --name=output14 --filename=/dev/mapper/mpathdnu --name=output15 --filename=/dev/mapper/mpathdnq --name=output16 --filename=/dev/mapper/mpathdns --name=output17 --filename=/dev/mapper/mpathdnr --name=output18 --filename=/dev/mapper/mpathdnp --name=output19 --filename=/dev/mapper/mpathdno --name=output20 --filename=/dev/mapper/mpathdnk --name=output21 --filename=/dev/mapper/mpathdnl --name=output22 --filename=/dev/mapper/mpathdnn --name=output23 --filename=/dev/mapper/mpathdnm --name=output24 --filename=/dev/mapper/mpathdnj --name=output25 --filename=/dev/mapper/mpathdnf --name=output26 --filename=/dev/mapper/mpathdne --name=output27 --filename=/dev/mapper/mpathdni --name=output28 --filename=/dev/mapper/mpathdnh --name=output29 --filename=/dev/mapper/mpathdnd --name=output30 --filename=/dev/mapper/mpathdnc --name=output31 --filename=/dev/mapper/mpathdng

After only 5 minutes of running with the --verify parameter, a considerable amount of memory has been used.

top - 18:49:11 up 45 min,  2 users,  load average: 36.92, 33.17, 20.32
Tasks: 454 total,  19 running, 435 sleeping,   0 stopped,   0 zombie
Cpu(s): 22.9%us, 32.8%sy,  0.0%ni,  0.2%id, 28.0%wa,  0.0%hi, 16.1%si,  0.0%st
Mem:  24599096k total, 12130100k used, 12468996k free,   162512k buffers
Swap: 12369912k total,        0k used, 12369912k free,   922372k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25540 root      20   0  753m 334m  552 D 21.6  1.4   2:13.45 fio
25555 root      20   0  751m 333m  552 R 21.2  1.4   2:13.75 fio
25535 root      20   0  751m 333m  552 D 21.2  1.4   2:12.22 fio
25537 root      20   0  751m 333m  548 R 20.9  1.4   2:13.57 fio
25559 root      20   0  751m 333m  552 D 20.9  1.4   2:13.15 fio
25557 root      20   0  751m 333m  548 D 21.6  1.4   2:13.48 fio
25552 root      20   0  751m 332m  552 D 21.6  1.4   2:13.43 fio
25538 root      20   0  751m 332m  544 D 21.2  1.4   2:12.90 fio
25545 root      20   0  750m 332m  552 R 21.9  1.4   2:12.86 fio
25520 root      20   0  750m 332m  552 R 20.6  1.4   2:11.30 fio
25539 root      20   0  749m 331m  552 R 21.2  1.4   2:12.60 fio
25546 root      20   0  748m 330m  544 D 21.9  1.4   2:12.76 fio
25550 root      20   0  748m 330m  548 D 21.6  1.4   2:11.86 fio
25548 root      20   0  742m 324m  552 D 21.6  1.4   2:09.51 fio
25541 root      20   0  740m 322m  548 D 22.2  1.3   2:09.55 fio
25554 root      20   0  740m 322m  544 D 21.6  1.3   2:08.30 fio
25549 root      20   0  740m 321m  544 D 21.2  1.3   2:07.98 fio
25556 root      20   0  739m 321m  548 R 21.2  1.3   2:07.79 fio
25532 root      20   0  739m 321m  552 D 20.2  1.3   2:07.09 fio
25543 root      20   0  739m 321m  556 R 20.9  1.3   2:08.99 fio
25544 root      20   0  739m 320m  552 D 20.6  1.3   2:08.26 fio
25558 root      20   0  738m 320m  544 D 21.6  1.3   2:08.07 fio
25534 root      20   0  737m 319m  548 R 20.9  1.3   2:06.16 fio
25562 root      20   0  737m 319m  548 D 21.9  1.3   2:07.70 fio
25547 root      20   0  722m 303m  560 D 19.6  1.3   2:00.81 fio
25560 root      20   0  721m 303m  548 D 19.9  1.3   1:59.64 fio
25553 root      20   0  718m 300m  548 D 19.2  1.3   1:58.87 fio
25533 root      20   0  711m 293m  544 D 19.6  1.2   1:54.97 fio
25561 root      20   0  711m 293m  548 D 19.2  1.2   1:56.98 fio
25536 root      20   0  711m 292m  552 D 19.6  1.2   1:55.96 fio
25551 root      20   0  710m 292m  556 D 18.9  1.2   1:56.46 fio
25542 root      20   0  708m 290m  548 R 18.9  1.2   1:55.94 fio
24307 root      20   0  377m 172m 171m S  0.7  0.7   0:04.95 fio
If left unchecked, the memory leak will consume all of the system's RAM until the kernel starts killing the fio job because there is no memory left on the system.

divide-by-zero error in eta.c by calc_rate() and calc_iops()

When both --eta=always and --output-format=json are enabled, it seems
get_jobs_eta() in show_thread_status_json() may be invoked shortly
after get_jobs_eta() from print_thread_status(), such that disp_time
in calc_thread_status() may be less than 1ms, and thus rounded to 0.

Since get_jobs_eta() is forced for json output, calc_rate() and
calc_iops() may crash due to divide-by-zero error.

fio: job <(null)> has write bit set, but fio is in read-only mode

When passing the --readonly option, with the following write job (and probably with any write job), I get an error message with a NULL job name. Job description used:

[write]
thread
bs=64k
direct=1
ioengine=sync
size=128m
filename=test.tmp
rw=write
do_verify=0

About IOPS calculation

Hi,
I ran a test on my virtual machine with the latest release. The result looks normal for throughput (58MB/s), but IOPS is very high. The bandwidth is normal, because the max bandwidth of my hard drive is around 115MB/s (from a benchmark website). As far as I know, my hard drive is 7200rpm, so it should have a maximum of around 190 IOPS, but fio reported 20k. Is this a bug in fio?

Here is the config file:

ubuntu@vm1:~$ cat sequential-read.fio
; sequence read of 4gb of data

[sequence-read]
rw=read
size=4gb
directory=/home/ubuntu/fiodata
ioengine=libaio
direct=1
iodepth=16 

And here is the result:

ubuntu@vm1:~$ fio sequential-read.fio

sequence-read: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.3
Starting 1 process
sequence-read: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [R(1)] [100.0% done] [80506KB/0KB/0KB /s] [20.2K/0/0 iops] [eta 00m:00s]
    sequence-read: (groupid=0, jobs=1): err= 0: pid=20158: Thu Jan  1 10:16:33 2015
    read : io=4096.0MB, bw=57875KB/s, iops=14468, runt= 72472msec
    slat (usec): min=6, max=8049, avg=13.45, stdev=51.91
    clat (usec): min=91, max=925638, avg=1087.67, stdev=3674.23
    lat (usec): min=121, max=925669, avg=1102.28, stdev=3674.53
    clat percentiles (usec):
     |  1.00th=[  322],  5.00th=[  852], 10.00th=[  956], 20.00th=[  996],
     | 30.00th=[ 1012], 40.00th=[ 1032], 50.00th=[ 1048], 60.00th=[ 1064],
     | 70.00th=[ 1080], 80.00th=[ 1144], 90.00th=[ 1272], 95.00th=[ 1320],
     | 99.00th=[ 1528], 99.50th=[ 2736], 99.90th=[ 7840], 99.95th=[ 9280],
     | 99.99th=[17536]
    bw (KB  /s): min=    4, max=81120, per=100.00%, avg=58117.17, stdev=6324.94
    lat (usec) : 100=0.01%, 250=0.11%, 500=4.45%, 750=0.19%, 1000=18.37%
    lat (msec) : 2=76.17%, 4=0.38%, 10=0.29%, 20=0.03%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%, 1000=0.01%
  cpu          : usr=8.42%, sys=20.69%, ctx=65164, majf=0, minf=43
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: io=4096.0MB, aggrb=57874KB/s, minb=57874KB/s, maxb=57874KB/s, mint=72472msec, maxt=72472msec

Disk stats (read/write):
   vda: ios=1048572/4, merge=0/1, ticks=962356/0, in_queue=962044, util=100.00%

I also have here several articles on calculating max IOPS for hard drive, if you want to take a look.

Thank you very much!
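For reference, the reported throughput and IOPS in the output above are at least mutually consistent given the 4KB block size fio used:

57875 KB/s / 4 KB per I/O = ~14468 IOPS, matching the reported iops=14468.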

fio: setsockopt: Protocol not available

I executed the command "fio --server" in the shell, but it produced the error "fio: setsockopt: Protocol not available".
I failed to find a similar problem or solution on Google. :-(

deadlock between thread_main & helper_thread_main

I am running into a deadlock with fio for the following lines:

thread_main

fio_mutex_down(stat_mutex) at backend.c:1529

helper_thread_main

fio_mutex_down(td->rusage_sem) at stat.c:1467

thread_main is waiting for stat_mutex, which is already locked by helper_thread_main in function __show_running_run_stats() at stat.c:1441. However, the helper_thread_main is waiting for td->rusage_sem, which is supposed to be unlocked by check_update_rusage() in do_io() at backend.c:1525 in thread_main.

The issue is not reproducible every time, and I was using a customized ioengine derived from rbd.c.

Is there any chance that this issue is caused by the customized io engine? Or is there a way to get around this?

Thanks.

fio version
commit f6facd2

fio2gnuplot won't generate graphs - 'IOError: [Errno 21] Is a directory: './''

  • fio-2.1.11
  • gnuplot 4.6 patchlevel 6
  • Debian 8

Running # fio2gnuplot -p '*.log' -g gives:

12 files Selected with pattern '*.log'
 |-> s1-san5-128k-md200-randread-para.results_bw.4.log
 |-> s1-san5-128k-md200-randread-para.results_iops.4.log
 |-> s1-san5-128k-md200-randwrite-para.results_bw.3.log
 |-> s1-san5-128k-md200-randwrite-para.results_iops.3.log
 |-> s1-san5-1m-md200-randread-para.results_bw.6.log
 |-> s1-san5-1m-md200-randread-para.results_iops.6.log
 |-> s1-san5-1m-md200-randwrite-para.results_bw.5.log
 |-> s1-san5-1m-md200-randwrite-para.results_iops.5.log
 |-> s1-san5-4k-md200-randread-para.results_bw.2.log
 |-> s1-san5-4k-md200-randread-para.results_iops.2.log
 |-> s1-san5-4k-md200-randwrite-para.results_bw.1.log
 |-> s1-san5-4k-md200-randwrite-para.results_iops.1.log

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pudb/__init__.py", line 77, in runscript
    dbg._runscript(mainpyfile)
  File "/usr/lib/python2.7/dist-packages/pudb/debugger.py", line 371, in _runscript
    self.run(statement, globals=globals_, locals=locals_)
  File "/usr/lib/python2.7/bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "/usr/bin/fio2gnuplot", line 517, in <module>
    sys.exit(main(sys.argv))
  File "/usr/bin/fio2gnuplot", line 497, in main
    compute_aggregated_file(fio_data_file, gnuplot_output_filename, gnuplot_output_dir)
  File "/usr/bin/fio2gnuplot", line 137, in compute_aggregated_file
    f = open(gnuplot_output_dir+gnuplot_output_filename, "w")
IOError: [Errno 21] Is a directory: './'

Running a trace, it looks like the variable gnuplot_output_filename is empty:

file: 's1-san5-4k-md200-randwrite-para.results_iops.1.log'
fio_data_file: list
gnuplot_output_dir: './'
gnuplot_output_filename: ''
temp_files: list

Changing the output destination with -d or -o does not resolve the issue.

If I override gnuplot_output_dir to gnuplot_output_dir='/tmp/output' here:
https://github.com/axboe/fio/blob/master/tools/plot/fio2gnuplot#L387

It works - so it looks like somewhere the concatenation of gnuplot_output_dir and gnuplot_output_filename is broken?

workqueue fails to compile for mips32 (undefined reference to `__sync_fetch_and_add_8')

mipsel-cros-linux-gnu-gcc -Wl,-O1 -Wl,-O2 -Wl,--as-needed -Wl,-O1 -Wl,-O2 -Wl,--as-needed -rdynamic -std=gnu99 -Wwrite-strings -Wall -Wdeclaration-after-statement -D_GNU_SOURCE -include config-host.h -O2 -pipe -march=mips32 -g -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -DBITS_PER_LONG=32 -DFIO_VERSION='"fio-2.2.9"' -o fio gettime.o ioengines.o init.o stat.o log.o time.o filesetup.o eta.o verify.o memory.o io_u.o parse.o mutex.o options.o lib/rbtree.o smalloc.o filehash.o profile.o debug.o lib/rand.o lib/num2str.o lib/ieee754.o crc/crc16.o crc/sha512.o crc/crc7.o crc/sha1.o crc/crc32c.o crc/test.o crc/crc32.o crc/murmur3.o crc/crc32c-intel.o crc/xxhash.o crc/sha256.o crc/fnv.o crc/md5.o crc/crc64.o engines/cpu.o engines/mmap.o engines/sync.o engines/null.o engines/net.o memalign.o server.o client.o iolog.o backend.o libfio.o flow.o cconv.o lib/prio_tree.o json.o lib/zipf.o lib/axmap.o lib/lfsr.o gettime-thread.o helpers.o lib/flist_sort.o lib/hweight.o lib/getrusage.o idletime.o td_error.o profiles/tiobench.o profiles/act.o io_u_queue.o filelock.o lib/tp.o lib/bloom.o lib/gauss.o lib/mountcheck.o workqueue.o engines/libaio.o engines/posixaio.o engines/falloc.o engines/e4defrag.o engines/splice.o engines/mtd.o lib/libmtd.o lib/libmtd_legacy.o diskutil.o fifo.o blktrace.o cgroup.o trim.o engines/sg.o engines/binject.o lib/linux-dev-lookup.o fio.o -lrt -laio -lz -lm -lpthread -ldl

workqueue.o: In function `sum_val':
/build/mipsel-o32-generic/tmp/portage/sys-block/fio-9999/work/fio-9999/workqueue.c:202: undefined reference to `__sync_fetch_and_add_8'
/build/mipsel-o32-generic/tmp/portage/sys-block/fio-9999/work/fio-9999/workqueue.c:202: undefined reference to `__sync_fetch_and_add_8'
/build/mipsel-o32-generic/tmp/portage/sys-block/fio-9999/work/fio-9999/workqueue.c:202: undefined reference to `__sync_fetch_and_add_8'
/build/mipsel-o32-generic/tmp/portage/sys-block/fio-9999/work/fio-9999/workqueue.c:202: undefined reference to `__sync_fetch_and_add_8'
/build/mipsel-o32-generic/tmp/portage/sys-block/fio-9999/work/fio-9999/workqueue.c:203: undefined reference to `__sync_fetch_and_add_8'

git bisect result:

first bad commit: [a9da8ab] First cut at supporting IO offload

Disk utilization doesn't show in Terse output (release 2.1.13)

Hi,
In the json and normal output, the disk utilization shows, but it doesn't in the terse output. Is this a bug or intentional? Is it fixed in version 2.1.14?
One more question on CPU utilization: would you mind me asking how the program calculates it? In a multi-threaded test, I see each thread has different CPU usage.
Thanks a lot for any pointers.
Sincerely yours

Empty log files

I've compiled fio from git (a1f871c), but when running with the write_*_log options, all the log files are empty.

Example job:

[global]
ioengine=libaio
randrepeat=1
direct=1
gtod_reduce=1
bs=4k
iodepth=64
size=4G

[random-read]
stonewall
rw=randread
filename=/fio/output/random-read
write_bw_log=/fio/logs/random-read_bw.log
write_iops_log=/fio/logs/random-read_iops.log

[random-write]
stonewall
rw=randwrite
filename=/fio/output/random-write
write_bw_log=/fio/logs/random-write_bw.log
write_iops_log=/fio/logs/random-write_iops.log
{snip}

All the log files are blank:

# ls -la /fio/logs/
total 8
drwxr-xr-x 2 root root 4096 Oct 29 19:17 .
drwxr-xr-x 6 root root 4096 Oct 29 18:35 ..
-rw-r--r-- 1 root root    0 Oct 29 19:41 random-read_bw.log_bw.1.log
-rw-r--r-- 1 root root    0 Oct 29 19:41 random-read_iops.log_iops.1.log
-rw-r--r-- 1 root root    0 Oct 29 19:15 random-readwrite_bw.log_bw.3.log
-rw-r--r-- 1 root root    0 Oct 29 18:41 random-readwrite_iops.1.log
-rw-r--r-- 1 root root    0 Oct 29 19:15 random-readwrite_iops.log_iops.3.log
-rw-r--r-- 1 root root    0 Oct 29 19:17 random-write_bw.log_bw.2.log
-rw-r--r-- 1 root root    0 Oct 29 19:17 random-write_iops.log_iops.2.log
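Incidentally, the doubled suffixes in the listing (e.g. random-read_bw.log_bw.1.log) show that fio appends the _bw.N.log / _iops.N.log suffixes itself, so the option value is best given as a bare prefix. A sketch of the same job with bare prefixes (paths illustrative; this addresses only the naming, not the empty-file problem):

[random-read]
stonewall
rw=randread
filename=/fio/output/random-read
write_bw_log=/fio/logs/random-read
write_iops_log=/fio/logs/random-read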

clock setaffinity failed on solaris

When running fio-2.2.5-3-g209e on Solaris 11.2 I receive the message "clock setaffinity failed". I disabled HTT to ensure it was not interfering, but the results persist. I get the same error whether testing within a virtual machine or on bare metal. I'm running on a pair of Intel E5-2640s. When I build the same version on Debian the issue is not present, so I wonder whether it's related to Solaris 11.2. In any case, any suggestions on whether there's anything I can try, or whether this may be due to an issue with fio and Solaris 11, are greatly appreciated.

Please let me know if I can provide any additional information.

Crash with posixaio on Mac OS X 10.10.2

fio segfaults on Mac OS X when trying to use the posixaio engine. I tried with current master:

uname -a
Darwin couchbook.lan 14.1.0 Darwin Kernel Version 14.1.0: Mon Dec 22 23:10:38 PST 2014; root:xnu-2782.10.72~2/RELEASE_X86_64 x86_64

gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.1.0
Thread model: posix

cat rw.fio
[foo]
thread
ioengine=posixaio
rw=randwrite
iodepth=32
size=1024m
directory=/Users/felix/tmp/fio-1

~/bin/fio rw.fio
foo: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=32
fio-2.2.5-11-gd47d
Starting 1 thread
foo: Laying out IO file(s) (1 file(s) / 1024MB)
fio: pid=2819, err=-1/file:engines/posixaio.c:213, func=xfer, error=Unknown error: -1
fio(45155,0x11084a000) malloc: *** error for object 0x7fbad8504500: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
[1]    45155 abort      ~/bin/fio rw.fio

lldb ~/bin/fio
(lldb) target create "/Users/felix/bin/fio"
Current executable set to '/Users/felix/bin/fio' (x86_64).
(lldb) r /Users/felix/tmp/rw.fio
Process 45124 launched: '/Users/felix/bin/fio' (x86_64)
foo: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=32
fio-2.2.5-11-gd47d
Starting 1 thread
foo: Laying out IO file(s) (1 file(s) / 1024MB)
fio: pid=2819, err=-1/file:engines/posixaio.c:213, func=xfer, error=Unknown error: -1
fio(45124,0x10071c000) malloc: *** error for object 0x102104780: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Process 45124 stopped
* thread #2: tid = 0x45972, 0x00007fff8dd98286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
    frame #0: 0x00007fff8dd98286 libsystem_kernel.dylib`__pthread_kill + 10
libsystem_kernel.dylib`__pthread_kill + 10:
-> 0x7fff8dd98286:  jae    0x7fff8dd98290            ; __pthread_kill + 20
   0x7fff8dd98288:  movq   %rax, %rdi
   0x7fff8dd9828b:  jmp    0x7fff8dd93c53            ; cerror_nocancel
   0x7fff8dd98290:  retq
(lldb) bt all
  thread #1: tid = 0x4594c, 0x00007fff8dd9848a libsystem_kernel.dylib`__semwait_signal + 10, queue = 'com.apple.main-thread'
    frame #0: 0x00007fff8dd9848a libsystem_kernel.dylib`__semwait_signal + 10
    frame #1: 0x00007fff8ffe9f5d libsystem_c.dylib`nanosleep + 199
    frame #2: 0x00007fff8ffe9e50 libsystem_c.dylib`usleep + 54
    frame #3: 0x0000000100033f42 fio`fio_backend [inlined] do_usleep(usecs=<unavailable>) + 2546 at backend.c:1831
    frame #4: 0x0000000100033f2e fio`fio_backend [inlined] run_threads + 2077 at backend.c:2073
    frame #5: 0x0000000100033711 fio`fio_backend + 449 at backend.c:2190
    frame #6: 0x00007fff9352c5c9 libdyld.dylib`start + 1

  thread #3: tid = 0x45971, 0x00007fff8dd98136 libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #0: 0x00007fff8dd98136 libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007fff8ba78e0c libsystem_pthread.dylib`_pthread_cond_wait + 693
    frame #2: 0x0000000100037781 fio`helper_thread_main(data=<unavailable>) + 145 at backend.c:2118
    frame #3: 0x00007fff8ba78268 libsystem_pthread.dylib`_pthread_body + 131
    frame #4: 0x00007fff8ba781e5 libsystem_pthread.dylib`_pthread_start + 176
    frame #5: 0x00007fff8ba7641d libsystem_pthread.dylib`thread_start + 13

* thread #2: tid = 0x45972, 0x00007fff8dd98286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
  * frame #0: 0x00007fff8dd98286 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff8ba7a42f libsystem_pthread.dylib`pthread_kill + 90
    frame #2: 0x00007fff8ffc8b53 libsystem_c.dylib`abort + 129
    frame #3: 0x00007fff960d5937 libsystem_malloc.dylib`free + 428
    frame #4: 0x00000001000348dc fio`thread_main [inlined] cleanup_io_u + 77 at backend.c:971
    frame #5: 0x000000010003488f fio`thread_main(data=0x0000000100400000) + 1951 at backend.c:1577
    frame #6: 0x00007fff8ba78268 libsystem_pthread.dylib`_pthread_body + 131
    frame #7: 0x00007fff8ba781e5 libsystem_pthread.dylib`_pthread_start + 176
    frame #8: 0x00007fff8ba7641d libsystem_pthread.dylib`thread_start + 13
(lldb)

git.kernel.dk Git Smart HTTP Support

While updating the Homebrew formula for Mac OS X, I noticed that checkout over http from the canonical repo is very slow, so I suspect Git Smart HTTP is not configured on the server. Smart HTTP gives a big speed boost and enables progress reporting in the git client.

git clone git://git.kernel.dk/fio.git  1,06s user 0,74s system 14% cpu 12,245 total
git clone http://git.kernel.dk/fio.git  2,00s user 3,32s system 5% cpu 1:40,52 total

As you are running GitWeb on Apache and probably only want read-only access, something like this should be appropriate:

SetEnv GIT_PROJECT_ROOT /var/www/git

AliasMatch ^/git/(.*/objects/[0-9a-f]{2}/[0-9a-f]{38})$          /var/www/git/$1
AliasMatch ^/git/(.*/objects/pack/pack-[0-9a-f]{40}.(pack|idx))$ /var/www/git/$1
ScriptAliasMatch \
    "(?x)^/git/(.*/(HEAD | \
            info/refs | \
            objects/info/[^/]+ | \
            git-upload-pack))$" \
    /usr/libexec/git-core/git-http-backend/$1
ScriptAlias /git/ /var/www/cgi-bin/gitweb.cgi/

The important change to the "Accelerated static Apache 2.x" example for read-only access is the removal of git-receive-pack in the ScriptAliasMatch directive.
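
A quick way to tell whether smart HTTP is active (an illustrative probe): the smart protocol answers the info/refs query with a pack-advertisement content type, while dumb HTTP serves the plain file:

    curl -sI 'http://git.kernel.dk/fio.git/info/refs?service=git-upload-pack' | grep -i '^content-type'
    # smart HTTP: Content-Type: application/x-git-upload-pack-advertisement
    # dumb HTTP:  typically text/plain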

Feel free to ignore or close as completely off-topic 😄

generic_open_file() error lost

With the following job description, when I run fio --section=verify test.fio as a regular user, I get a permission denied error but fio still exits with status 0:

; fio job description for writing a defined pattern to disk and later
; verifying the pattern.
;
; Usage:
;  fio --section=write write-verify.fio
;  fio --section=verify write-verify.fio

[global]
thread=1
bs=4k
buffered=1
ioengine=sync
verify=meta
verify_interval=4k
filename=/dev/vda

[write]
rw=write
fill_device=1
do_verify=0

[verify]
stonewall
create_serialize=0
rw=read
do_verify=1

$ gdb --args fio --section=verify test.fio 
[...]
(gdb) b filesetup.c:611
Breakpoint 1 at 0x420608: file filesetup.c, line 611.
(gdb) r
Starting program: /usr/local/bin/fio --section=verify test.fio
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
verify: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.10-54-gb736
[New Thread 0x7fffeedfe700 (LWP 6844)]
Starting 1 thread
[New Thread 0x7fffee5fd700 (LWP 6845)]
[Switching to Thread 0x7fffee5fd700 (LWP 6845)]

Breakpoint 1, generic_open_file (td=0x7fffeedff000, f=0x7ffff66dc210) at filesetup.c:611
611         td_verror(td, __e, buf);
(gdb) n
614s: 1 (f=0): [if (!from_hash && f->fd != -1) {s] [0/0/0 iops] [eta 00m:00s]
(gdb) p td->error
$1 = 13
(gdb) bt
#0  generic_open_file (td=0x7fffeedff000, f=0x7ffff66dc210) at filesetup.c:614
#1  0x000000000041fbea in bdev_size (td=0x7fffeedff000, f=0x7ffff66dc210) at filesetup.c:297
#2  0x000000000041fdb6 in get_file_size (td=0x7fffeedff000, f=0x7ffff66dc210) at filesetup.c:367
#3  0x0000000000420721 in generic_get_file_size (td=0x7fffeedff000, f=0x7ffff66dc210) at filesetup.c:644
#4  0x00000000004207b3 in get_file_sizes (td=0x7fffeedff000) at filesetup.c:660
#5  0x0000000000420c72 in setup_files (td=0x7fffeedff000) at filesetup.c:791
#6  0x000000000044a51e in thread_main (data=0x7fffeedff000) at backend.c:1430
#7  0x00007ffff74a8d15 in start_thread () from /lib64/libpthread.so.0
#8  0x00007ffff6fd748d in clone () from /lib64/libc.so.6
(gdb) c
Continuing.
file:filesetup.c:611, func=open(/dev/vda), error=Permission deniedta 00m:00s]
[Thread 0x7fffee5fd700 (LWP 6845) exited]


Run status group 0 (all jobs):

Disk stats (read/write):
  vda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
[Thread 0x7fffeedfe700 (LWP 6844) exited]
[Inferior 1 (process 6840) exited normally]

Issue using zero_buffers and verify options simultaneously

I am trying to write all zeros to a volume and read them back.
I am able to write all zeros to the volume using the --zero_buffers option.
However, if I use verify=crc32|md5|meta it no longer generates zero buffers.
Is there a workaround for this?
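
One possible workaround, assuming pattern verification fits the use case: verify=pattern checks the raw buffer contents against verify_pattern instead of embedding a per-block verify header, so the written blocks can stay all-zero. A sketch (hypothetical filename):

    [zero-verify]
    filename=/dev/yourvol        ; hypothetical target
    rw=write
    bs=4k
    verify=pattern
    verify_pattern=0x00000000    ; buffers are all zeros and verified as such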

parse.c: two missing string terminators?

[parse.c:465]: (error) Dangerous usage of 'tmp' (strncpy doesn't always null-terminate it).
strncpy(tmp, ptr, sizeof(tmp) - 1);
p = strchr(tmp, ',');

A safer version:

    strncpy(tmp, ptr, sizeof(tmp) - 1);
    tmp[sizeof(tmp) - 1] = '\0';
    p = strchr(tmp, ',');
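
For what it's worth, an equivalent fix that cannot forget the terminator, since snprintf always NUL-terminates within the given size:

    snprintf(tmp, sizeof(tmp), "%s", ptr);
    p = strchr(tmp, ',');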

[parse.c:666]: (error) Dangerous usage of 'tmp' (strncpy doesn't always null-terminate it).

Duplicate.

fio acts as if my drive is in read-only mode when I issue write commands, so I am unable to perform write measurements

Hi
The issue occurs during random 4K or sequential 128K writes: fio complains that the file system I am testing is read-only.
The drives are in a new server with a fresh Windows 2008 R2 install, and the LSI HBA does in fact recognize the drives.
The drive with errors shows up in Device Manager, so the Windows driver is recognizing it.
But any attempt to issue write commands fails, as shown in the attached screenshot ("writeissue").

Anyone got any ideas?

Feature request: an option to fix the output units on the fio command line

Right now fio auto-scales the units of its results: some tests report results in KB/s and others in B/s, and the same happens with latency (msec vs. usec).

This is fine for a single test, but inconvenient across several tests or across different hardware: with varying units it is hard to compare results by eye, because msec must be converted to usec manually.

It would be good to have an option that fixes the output units, for example showing all results in KB/s and msec regardless of the magnitude of the numbers. Can you add this option?

fio: end_fsync failed for file FS_4k_streaming_writes.1.0 unsupported operation

Hi,

I am getting the error "fio: end_fsync failed for file FS_4k_streaming_writes.1.0 unsupported operation" while running with libgfapi on gluster volume.

fio $args --output=4k_stream_writes.log --section=FS_4k_streaming_writes --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

fsmb.fio file content

[FS_4k_streaming_writes]
name=seqwrite4
numjobs=4
rw=write
bs=4k
size=1g
iodepth=16
group_reporting
end_fsync=1

Thanks.

OSX fcntl(fd, F_NOCACHE, 1) not equivalent to O_DIRECT on Linux

This is probably already well known, but it bit me while planning to do some comparative benchmarks between Linux and OSX. I started with OSX and direct=1, rw=randread, but noticed that OSX was reading at 1200MB/s from a 1GB file. This was unexpected as I only have one spinning rust HDD in the MBP.

So I looked up the ways to do direct I/O on OSX. On Stack Overflow and the Apple mailing lists, fcntl(fd, F_NOCACHE, 1) looked to be the canonical solution. This was also implemented in fio(1) in 2011 in commit 7e8ad19. It seems that F_NOCACHE disables the page cache from that point on, but if the file in question is already in the page cache, it will not be purged and the cached pages will still be used.

I also commented on stack overflow: clear buffer cache on OSX with my observations. I'll copy it here as well:

It's my impression that even when turning off the cache like this (with F_NOCACHE or F_GLOBAL_NOCACHE), if there are pages of the file already in the page cache, those will still be used. I tried to test this by using fio(1) with direct=1. It seems to confirm my suspicions (I get ~1200MB/s throughput on a random read, on a spindle HDD in my MBP, not an SSD). I've confirmed with dtruss that fio(1) actually calls fcntl correctly.

After running "sudo purge" and trying the same fio(1) invocation, it's much slower. So yes, it appears that F_NOCACHE is no direct equivalent of O_DIRECT on Linux.

I'm not advocating running sudo purge as part of fio, but perhaps the documentation for direct could note that the behaviour is quite different from O_DIRECT, even though both are fio's way of doing direct IO. Running sudo purge is slow and has an adverse effect on the rest of the system while (and after) it runs, for obvious reasons.

Another idea I had was to forcibly re-write the file each time before reading it, while holding it open with F_NOCACHE, which keeps the written pages from entering the unified page cache (UPC). That might also be slow, but hopefully it wouldn't evict other, unrelated files.

An interesting discussion involving an Apple dev on the Apple mailing list: http://lists.apple.com/archives/filesystem-dev/2007/Sep/msg00010.html. See specifically the part mentioning that files opened with F_NOCACHE will still use already-loaded pages if they're present.

EDIT: Some train-of-thought rambling: perhaps mmap + MAP_NOCACHE on OSX is a possible avenue (plus telling users to use that for direct IO testing). The man page doesn't give me a lot of hope, though, as it appears there are no guarantees and the OS might keep things in memory if it feels like it anyway. A small reference mentioning it: http://www.qtcentre.org/threads/24733-High-performance-large-file-reading-on-OSX
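
For reference, the fcntl call under discussion (a minimal sketch, error handling mostly elided):

    #include <fcntl.h>
    #include <stdio.h>

    int open_nocache(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        /* Disables caching for subsequent I/O on this fd, but pages of
         * the file that are already resident in the page cache are still
         * used -- hence the inflated "direct" read numbers above. */
        if (fcntl(fd, F_NOCACHE, 1) < 0)
            perror("fcntl(F_NOCACHE)");
        return fd;
    }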

Data size is always 64 percent when using fio to produce data on NFS storage

Hi,
There is a problem I need your help with: I wrote a workload with fio, but at a certain point something weird happens. The fio version is 2.1.13.
My command is below:
fio --directory=./ --direct=1 --rw=randwrite --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=1 --numjobs=1 --group_reporting --name=4ktestwrite --size=100M
If the directory is on a local disk, the actual data size (per du -h) is 100M, but if the directory is on NFS shared storage the actual size is 64M. Could you please help solve this?

This is the NFS storage:
Perofrmance-Node ~ # mount
10.121.190.9:/root/lvm on /root/centOS type nfs (rw,vers=4,addr=10.121.190.9,clientaddr=10.121.147.10)
Perofrmance-Node ~ # cd -
/root/centOS
Perofrmance-Node centOS # pwd
/root/centOS
Perofrmance-Node centOS # du -h
16K ./lost+found
64M .
Perofrmance-Node centOS # ls -lh
total 64M
-rw-r--r-- 1 nobody nogroup 100M Oct 20 12:02 4ktestwrite.0.0
drwx------ 2 nobody nogroup 16K Aug 13 16:25 lost+found

This is the local disk:
Perofrmance-Node test # pwd
/mnt/disk/test
Perofrmance-Node test # du -h
101M .
Perofrmance-Node test # ls -lh
total 101M
-rw-r--r-- 1 root root 100M Oct 17 03:38 4ktestwrite.0.0

Any assistance will be greatly appreciated!

Thanks
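
One plausible explanation, given the options used: with --norandommap the 4k offsets are drawn with replacement, so after size/bs writes only about 1 - 1/e ≈ 63% of the distinct blocks have been touched; if the file stays sparse on the NFS mount, du reports roughly 64M of the 100M apparent size. A quick check (assuming GNU coreutils on the client):

    # a large gap between apparent and allocated size means the file is sparse
    du -h --apparent-size 4ktestwrite.0.0
    du -h 4ktestwrite.0.0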

eta is whack in fio version 2.1.+

Jobs: 30 (f=30): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [0.0% done] [494.9MB/0KB/0KB /s] [63.4K/0/0 iops] [eta 1158047090d:11h:36m:31s]]

config for fio

[global]
direct=1
zonesize=1g
zoneskip=200g

runtime=120

write_bw_log

write_lat_log=lat.out

write_iops_log

ioengine=libaio
rw=read
iodepth=1
bs=8192,8192

[/dev/sda]
[/dev/sdaa]
[/dev/sdab]
[/dev/sdac]
[/dev/sdad]
[/dev/sdb]
[/dev/sdc]
[/dev/sdd]
[/dev/sde]
[/dev/sdf]
[/dev/sdg]
[/dev/sdh]
[/dev/sdi]
[/dev/sdj]
[/dev/sdk]
[/dev/sdl]
[/dev/sdm]
[/dev/sdn]
[/dev/sdo]
[/dev/sdp]
[/dev/sdq]
[/dev/sdr]
[/dev/sds]
[/dev/sdt]
[/dev/sdu]
[/dev/sdv]
[/dev/sdw]
[/dev/sdx]
[/dev/sdy]
[/dev/sdz]

fio_generate_plots does not generate graphs for *bw.x.log pattern (x is index of job)

Hi,

I am running fio with 4 jobs per job section, using write_lat_log, write_iops_log and write_bw_log.

They produce files named log_bw.x.log, log_clat.x.log, etc., where x is the job index 1,2,3,4.

The size of each generated svg file is zero.

ls -l *.svg

-rw-r--r-- 1 root root 0 Jan 7 03:23 4k_gfapi_gluster-bw.svg
-rw-r--r-- 1 root root 0 Jan 7 03:23 4k_gfapi_gluster-clat.svg
-rw-r--r-- 1 root root 0 Jan 7 03:23 4k_gfapi_gluster-iops.svg
-rw-r--r-- 1 root root 0 Jan 7 03:23 4k_gfapi_gluster-lat.svg
-rw-r--r-- 1 root root 0 Jan 7 03:23 4k_gfapi_gluster-slat.svg

I ran the command, fio_generate_plots 4k_gfapi_gluster and it output is below,

Title: set title "4k_gfapi_gluster\n\n{/0.6 I/O Latency}" font "Helvetica,28"
File type: lat
yaxis: set ylabel "Time (msec)" font "Helvetica,16"
gnuplot> set title "4k_gfapi_gluster\n\n{/0.6 I/O Latency}" font "Helvetica,28" ; set ylabel "Time (msec)" font "Helvetica,16" ;
gnuplot> set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb"#ffffff" behind
gnuplot> set style line 1 lc rgb "#E41A1C" lw 2 lt 1;
gnuplot> set style line 2 lc rgb "#377EB8" lw 2 lt 1;
gnuplot> set style line 3 lc rgb "#4DAF4A" lw 2 lt 1;
gnuplot> set style line 4 lc rgb "#984EA3" lw 2 lt 1;
gnuplot> set style line 5 lc rgb "#FF7F00" lw 2 lt 1;
gnuplot> set style line 6 lc rgb "#DADA33" lw 2 lt 1;
gnuplot> set style line 7 lc rgb "#A65628" lw 2 lt 1;
gnuplot> set style line 20 lc rgb "#000000" lt 3 lw 2;
gnuplot> ; ; set grid ls 20 ; ; set xlabel "Time (sec)" font "Helvetica,16" ; set xrange [0:
] ; set yrange [0:
] ; set xtics font "Helvetica,14" ; set ytics font "Helvetica,14" ; set mxtics 0 ; set mytics 2 ; set key outside bottom center ; set key box enhanced spacing 2.0 samplen 3 horizontal width 4 height 1.2 ; set terminal svg enhanced dashed size 1280,768 dynamic ; set label 30 "Data source: http://example.com" font "Helvetica,14" tc rgb "#00000f" at screen 0.976,0.175 right ; show style lines ; set output "4k_gfapi_gluster-lat.svg" ; plot '*_lat.log' using ($1/1000):($2/1000) title "Queue depth log_bw.1.log log_bw.2.log log_bw.3.log log_bw.4.log log_clat.1.log log_clat.2.log log_clat.3.log log_clat.4.log log_iops.1.log log_iops.2.log log_iops.3.log log_iops.4.log log_lat.1.log log_lat.2.log log_lat.3.log log_lat.4.log log_slat.1.log log_slat.2.log log_slat.3.log log_slat.4.log" with lines ls 1
Terminal type set to 'svg'
Options are 'size 1280,768 dynamic enhanced fname 'Arial' fsize 12 butt dashed '
linestyle 1, linetype 1 linecolor rgb "#e41a1c" linewidth 2.000 pointtype 1 pointsize default
linestyle 2, linetype 1 linecolor rgb "#377eb8" linewidth 2.000 pointtype 2 pointsize default
linestyle 3, linetype 1 linecolor rgb "#4daf4a" linewidth 2.000 pointtype 3 pointsize default
linestyle 4, linetype 1 linecolor rgb "#984ea3" linewidth 2.000 pointtype 4 pointsize default
linestyle 5, linetype 1 linecolor rgb "#ff7f00" linewidth 2.000 pointtype 5 pointsize default
linestyle 6, linetype 1 linecolor rgb "#dada33" linewidth 2.000 pointtype 6 pointsize default
linestyle 7, linetype 1 linecolor rgb "#a65628" linewidth 2.000 pointtype 7 pointsize default
linestyle 20, linetype 3 linecolor rgb "black" linewidth 2.000 pointtype 20 pointsize default

     warning: Skipping unreadable file "*_lat.log"
     No data in plot

gnuplot>
Title: set title "4k_gfapi_gluster\n\n{/0.6 I/O Operations Per Second}" font "Helvetica,28"
File type: iops
yaxis: set ylabel "IOPS" font "Helvetica,16"
gnuplot> set title "4k_gfapi_gluster\n\n{/0.6 I/O Operations Per Second}" font "Helvetica,28" ; set ylabel "IOPS" font "Helvetica,16" ;
gnuplot> set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb"#ffffff" behind
gnuplot> set style line 1 lc rgb "#E41A1C" lw 2 lt 1;
gnuplot> set style line 2 lc rgb "#377EB8" lw 2 lt 1;
gnuplot> set style line 3 lc rgb "#4DAF4A" lw 2 lt 1;
gnuplot> set style line 4 lc rgb "#984EA3" lw 2 lt 1;
gnuplot> set style line 5 lc rgb "#FF7F00" lw 2 lt 1;
gnuplot> set style line 6 lc rgb "#DADA33" lw 2 lt 1;
gnuplot> set style line 7 lc rgb "#A65628" lw 2 lt 1;
gnuplot> set style line 20 lc rgb "#000000" lt 3 lw 2;
gnuplot> ; ; set grid ls 20 ; ; set xlabel "Time (sec)" font "Helvetica,16" ; set xrange [0:
] ; set yrange [0:
] ; set xtics font "Helvetica,14" ; set ytics font "Helvetica,14" ; set mxtics 0 ; set mytics 2 ; set key outside bottom center ; set key box enhanced spacing 2.0 samplen 3 horizontal width 4 height 1.2 ; set terminal svg enhanced dashed size 1280,768 dynamic ; set label 30 "Data source: http://example.com" font "Helvetica,14" tc rgb "#00000f" at screen 0.976,0.175 right ; show style lines ; set output "4k_gfapi_gluster-iops.svg" ; plot '*_iops.log' using ($1/1000):($2/1) title "Queue depth " with lines ls 1
Terminal type set to 'svg'
Options are 'size 1280,768 dynamic enhanced fname 'Arial' fsize 12 butt dashed '
linestyle 1, linetype 1 linecolor rgb "#e41a1c" linewidth 2.000 pointtype 1 pointsize default
linestyle 2, linetype 1 linecolor rgb "#377eb8" linewidth 2.000 pointtype 2 pointsize default
linestyle 3, linetype 1 linecolor rgb "#4daf4a" linewidth 2.000 pointtype 3 pointsize default
linestyle 4, linetype 1 linecolor rgb "#984ea3" linewidth 2.000 pointtype 4 pointsize default
linestyle 5, linetype 1 linecolor rgb "#ff7f00" linewidth 2.000 pointtype 5 pointsize default
linestyle 6, linetype 1 linecolor rgb "#dada33" linewidth 2.000 pointtype 6 pointsize default
linestyle 7, linetype 1 linecolor rgb "#a65628" linewidth 2.000 pointtype 7 pointsize default
linestyle 20, linetype 3 linecolor rgb "black" linewidth 2.000 pointtype 20 pointsize default

     warning: Skipping unreadable file "*_iops.log"
     No data in plot

gnuplot>
Title: set title "4k_gfapi_gluster\n\n{/0.6 I/O Submission Latency}" font "Helvetica,28"
File type: slat
yaxis: set ylabel "Time (μsec)" font "Helvetica,16"
gnuplot> set title "4k_gfapi_gluster\n\n{/0.6 I/O Submission Latency}" font "Helvetica,28" ; set ylabel "Time (μsec)" font "Helvetica,16" ;
gnuplot> set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb"#ffffff" behind
gnuplot> set style line 1 lc rgb "#E41A1C" lw 2 lt 1;
gnuplot> set style line 2 lc rgb "#377EB8" lw 2 lt 1;
gnuplot> set style line 3 lc rgb "#4DAF4A" lw 2 lt 1;
gnuplot> set style line 4 lc rgb "#984EA3" lw 2 lt 1;
gnuplot> set style line 5 lc rgb "#FF7F00" lw 2 lt 1;
gnuplot> set style line 6 lc rgb "#DADA33" lw 2 lt 1;
gnuplot> set style line 7 lc rgb "#A65628" lw 2 lt 1;
gnuplot> set style line 20 lc rgb "#000000" lt 3 lw 2;
gnuplot> ; ; set grid ls 20 ; ; set xlabel "Time (sec)" font "Helvetica,16" ; set xrange [0:
] ; set yrange [0:
] ; set xtics font "Helvetica,14" ; set ytics font "Helvetica,14" ; set mxtics 0 ; set mytics 2 ; set key outside bottom center ; set key box enhanced spacing 2.0 samplen 3 horizontal width 4 height 1.2 ; set terminal svg enhanced dashed size 1280,768 dynamic ; set label 30 "Data source: http://example.com" font "Helvetica,14" tc rgb "#00000f" at screen 0.976,0.175 right ; show style lines ; set output "4k_gfapi_gluster-slat.svg" ; plot '*_slat.log' using ($1/1000):($2/1) title "Queue depth " with lines ls 1
Terminal type set to 'svg'
Options are 'size 1280,768 dynamic enhanced fname 'Arial' fsize 12 butt dashed '
linestyle 1, linetype 1 linecolor rgb "#e41a1c" linewidth 2.000 pointtype 1 pointsize default
linestyle 2, linetype 1 linecolor rgb "#377eb8" linewidth 2.000 pointtype 2 pointsize default
linestyle 3, linetype 1 linecolor rgb "#4daf4a" linewidth 2.000 pointtype 3 pointsize default
linestyle 4, linetype 1 linecolor rgb "#984ea3" linewidth 2.000 pointtype 4 pointsize default
linestyle 5, linetype 1 linecolor rgb "#ff7f00" linewidth 2.000 pointtype 5 pointsize default
linestyle 6, linetype 1 linecolor rgb "#dada33" linewidth 2.000 pointtype 6 pointsize default
linestyle 7, linetype 1 linecolor rgb "#a65628" linewidth 2.000 pointtype 7 pointsize default
linestyle 20, linetype 3 linecolor rgb "black" linewidth 2.000 pointtype 20 pointsize default

     warning: Skipping unreadable file "*_slat.log"
     No data in plot

gnuplot>
Title: set title "4k_gfapi_gluster\n\n{/0.6 I/O Completion Latency}" font "Helvetica,28"
File type: clat
yaxis: set ylabel "Time (msec)" font "Helvetica,16"
gnuplot> set title "4k_gfapi_gluster\n\n{/0.6 I/O Completion Latency}" font "Helvetica,28" ; set ylabel "Time (msec)" font "Helvetica,16" ;
gnuplot> set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb"#ffffff" behind
gnuplot> set style line 1 lc rgb "#E41A1C" lw 2 lt 1;
gnuplot> set style line 2 lc rgb "#377EB8" lw 2 lt 1;
gnuplot> set style line 3 lc rgb "#4DAF4A" lw 2 lt 1;
gnuplot> set style line 4 lc rgb "#984EA3" lw 2 lt 1;
gnuplot> set style line 5 lc rgb "#FF7F00" lw 2 lt 1;
gnuplot> set style line 6 lc rgb "#DADA33" lw 2 lt 1;
gnuplot> set style line 7 lc rgb "#A65628" lw 2 lt 1;
gnuplot> set style line 20 lc rgb "#000000" lt 3 lw 2;
gnuplot> ; ; set grid ls 20 ; ; set xlabel "Time (sec)" font "Helvetica,16" ; set xrange [0:
] ; set yrange [0:
] ; set xtics font "Helvetica,14" ; set ytics font "Helvetica,14" ; set mxtics 0 ; set mytics 2 ; set key outside bottom center ; set key box enhanced spacing 2.0 samplen 3 horizontal width 4 height 1.2 ; set terminal svg enhanced dashed size 1280,768 dynamic ; set label 30 "Data source: http://example.com" font "Helvetica,14" tc rgb "#00000f" at screen 0.976,0.175 right ; show style lines ; set output "4k_gfapi_gluster-clat.svg" ; plot '*_clat.log' using ($1/1000):($2/1000) title "Queue depth slat.svg log_bw.1.log log_bw.2.log log_bw.3.log log_bw.4.log log_clat.1.log log_clat.2.log log_clat.3.log log_clat.4.log log_iops.1.log log_iops.2.log log_iops.3.log log_iops.4.log log_lat.1.log log_lat.2.log log_lat.3.log log_lat.4.log log_slat.1.log log_slat.2.log log_slat.3.log log_slat.4.log" with lines ls 1
Terminal type set to 'svg'
Options are 'size 1280,768 dynamic enhanced fname 'Arial' fsize 12 butt dashed '
linestyle 1, linetype 1 linecolor rgb "#e41a1c" linewidth 2.000 pointtype 1 pointsize default
linestyle 2, linetype 1 linecolor rgb "#377eb8" linewidth 2.000 pointtype 2 pointsize default
linestyle 3, linetype 1 linecolor rgb "#4daf4a" linewidth 2.000 pointtype 3 pointsize default
linestyle 4, linetype 1 linecolor rgb "#984ea3" linewidth 2.000 pointtype 4 pointsize default
linestyle 5, linetype 1 linecolor rgb "#ff7f00" linewidth 2.000 pointtype 5 pointsize default
linestyle 6, linetype 1 linecolor rgb "#dada33" linewidth 2.000 pointtype 6 pointsize default
linestyle 7, linetype 1 linecolor rgb "#a65628" linewidth 2.000 pointtype 7 pointsize default
linestyle 20, linetype 3 linecolor rgb "black" linewidth 2.000 pointtype 20 pointsize default

     warning: Skipping unreadable file "*_clat.log"
     No data in plot

gnuplot>
Title: set title "4k_gfapi_gluster\n\n{/0.6 I/O Bandwidth}" font "Helvetica,28"
File type: bw
yaxis: set ylabel "Throughput (KB/s)" font "Helvetica,16"
gnuplot> set title "4k_gfapi_gluster\n\n{/0.6 I/O Bandwidth}" font "Helvetica,28" ; set ylabel "Throughput (KB/s)" font "Helvetica,16" ;
gnuplot> set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb"#ffffff" behind
gnuplot> set style line 1 lc rgb "#E41A1C" lw 2 lt 1;
gnuplot> set style line 2 lc rgb "#377EB8" lw 2 lt 1;
gnuplot> set style line 3 lc rgb "#4DAF4A" lw 2 lt 1;
gnuplot> set style line 4 lc rgb "#984EA3" lw 2 lt 1;
gnuplot> set style line 5 lc rgb "#FF7F00" lw 2 lt 1;
gnuplot> set style line 6 lc rgb "#DADA33" lw 2 lt 1;
gnuplot> set style line 7 lc rgb "#A65628" lw 2 lt 1;
gnuplot> set style line 20 lc rgb "#000000" lt 3 lw 2;
gnuplot> ; ; set grid ls 20 ; ; set xlabel "Time (sec)" font "Helvetica,16" ; set xrange [0:
] ; set yrange [0:
] ; set xtics font "Helvetica,14" ; set ytics font "Helvetica,14" ; set mxtics 0 ; set mytics 2 ; set key outside bottom center ; set key box enhanced spacing 2.0 samplen 3 horizontal width 4 height 1.2 ; set terminal svg enhanced dashed size 1280,768 dynamic ; set label 30 "Data source: http://example.com" font "Helvetica,14" tc rgb "#00000f" at screen 0.976,0.175 right ; show style lines ; set output "4k_gfapi_gluster-bw.svg" ; plot '*_bw.log' using ($1/1000):($2/1) title "Queue depth lat.svg 4k_gfapi_gluster" with lines ls 1
Terminal type set to 'svg'
Options are 'size 1280,768 dynamic enhanced fname 'Arial' fsize 12 butt dashed '
linestyle 1, linetype 1 linecolor rgb "#e41a1c" linewidth 2.000 pointtype 1 pointsize default
linestyle 2, linetype 1 linecolor rgb "#377eb8" linewidth 2.000 pointtype 2 pointsize default
linestyle 3, linetype 1 linecolor rgb "#4daf4a" linewidth 2.000 pointtype 3 pointsize default
linestyle 4, linetype 1 linecolor rgb "#984ea3" linewidth 2.000 pointtype 4 pointsize default
linestyle 5, linetype 1 linecolor rgb "#ff7f00" linewidth 2.000 pointtype 5 pointsize default
linestyle 6, linetype 1 linecolor rgb "#dada33" linewidth 2.000 pointtype 6 pointsize default
linestyle 7, linetype 1 linecolor rgb "#a65628" linewidth 2.000 pointtype 7 pointsize default
linestyle 20, linetype 3 linecolor rgb "black" linewidth 2.000 pointtype 20 pointsize default

     warning: Skipping unreadable file "*_bw.log"
     No data in plot

gnuplot>

Thanks.
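
The gnuplot warnings ("Skipping unreadable file ...") point at the cause: the script globs for files ending in _bw.log, _lat.log, etc., but the per-job logs carry an extra index (log_bw.1.log) and never match. One hedged workaround is to rename the logs into the shape the script expects, e.g.:

    # e.g. log_bw.1.log -> job1_bw.log, matching the script's "*_bw.log" glob
    for f in log_*.*.log; do
        type=${f#log_}; type=${type%%.*}   # bw, clat, iops, lat, slat
        idx=${f%.log};  idx=${idx##*.}     # job index 1..4
        mv "$f" "job${idx}_${type}.log"
    done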

Is it possible when logging iops for mixed read write workloads to separate read iops from write iops?

Hi
Example: 75% reads, 25% writes:
in xxx_IOPS_iops.log one cannot differentiate between read IOPS and write IOPS.
A sample iops log showing this is below.

time (us), IOPS (write and read rows labelled manually in the example below)

65051, 3790, 1, 0 <- write iops
66051, 11190, 0, 0 <- read iops @ 66051us
66051, 3643, 1, 0 <- write iops @ 66051us
67052, 11276, 0, 0 <-read iops
67052, 3868, 1, 0 <-write iops

It would be really nice to be able to specify two filenames for logging IOPS in mixed read/write tests:
one filename for logging read IOPS, and one for logging write IOPS.
Then the two could be plotted separately (in the meantime, see the splitting sketch below).

Thanks
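
Until such an option exists, the third column of the log (0 = read, 1 = write, per the labels above) makes the split straightforward, e.g. against the sample log name used above:

    awk -F', *' '$3 == 0' xxx_IOPS_iops.log > read_iops.log
    awk -F', *' '$3 == 1' xxx_IOPS_iops.log > write_iops.log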

fio-2.2.0 write only operation fluctuates traffic for 8k block size

Hi,

I am doing a write-only, 100 percent 8k block size I/O run on a SCSI disk. I see a lot of fluctuation in the write traffic pattern, whereas read I/O proceeds in a steady manner. Am I missing something in my fio command? My command structure is as follows.

The SCSI disk is a software disk created with the tgtd software on a RHEL 6.5 kernel.

[root@localhost ~]# fio --filename=/dev/sdc:/dev/sdd --direct=1 --rw=write --ioengine=libaio --bs=8k --rwmixwrite=100 --iodepth=16 --numjobs=16 --time_based --runtime=9000 --group_reporting --name=8k7030test
8k7030test: (g=0): rw=write, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
fio-2.2.0
Starting 16 processes
Jobs: 16 (f=32): [W(16)] [12.1% done] [0KB/149.3MB/0KB /s] [0/19.2K/0 iops] [eta 02h:11m:47s]

So the above command shows the write operation running at 149.3MB/s. In practice the value starts around 25 KB/s and only gradually climbs into the MB/s range.

[Attached image: a Finisar trace showing the fluctuation in the write operation]

FS_cached_4k_random_reads fails on gluster v3.6.1

Hi,

I am running the latest fio from the master branch with the configuration below on gluster v3.6.1 / CentOS 6.6, and it is failing.

fio $args --output=4k_caranred_gz.log --section=FS_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:16m:01s]

fio $args --output=4k_caranredmt_gz.log --section=FS_multi-threaded_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:37s]
fio: failed to lseek pre-read fileone] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:33s]
fio: failed to lseek pre-read fileone] [4KB/0KB/0KB /s] [1/0/0 iops] [eta 1158050440d:23h:12m:32s]
fio: failed to lseek pre-read file

fsmb.fio configuration file is below:

[global]

[FS_128k_streaming_writes]
name=seqwrite
rw=write
bs=128k
size=5g

end_fsync=1

loops=1

[FS_cached_4k_random_reads]
name=randread
rw=randread
pre_read=1
norandommap
bs=4k
size=256m
runtime=30
loops=1

[FS_multi-threaded_cached_4k_random_reads]
name=randread
numjobs=4
rw=randread
pre_read=1
norandommap
bs=4k
size=256m/4
runtime=30
loops=1

Thanks.

nsecs

Could you please change the code to allow for nanosecond slat/clat/lat resolution? This would also future-proof the program.

Fio 2.2.3 parallel jobs of random read/write never finish if numjobs > 2

Hi,
I have a config file like this:

; random read of 170mb of data
[randomread]
rw=randread
size=170m
directory=/home/ubuntu/fiodata
ioengine=libaio
direct=1
iodepth=16 
numjobs=3

When I ran the jobs, one of them finished very quickly with high bandwidth, but the others got very low bandwidth even after the fast one had finished. When the job group reached nearly 100% done, the last one or two jobs ran forever at a bandwidth of 0KB. This only happens for random read and random write, and only when the number of parallel jobs is larger than 2; I have tested numjobs = 3, 4 and 5 but not larger values.
With older versions of fio I also saw a similar phenomenon:

  • one job got high throughput while the others got low throughput, even after the fast one had finished.
  • the last job took a very long time (but still finished, rather than running forever as in this version).

Thanks
