
fleetbench's Introduction

Fleetbench

Fleetbench is a benchmarking suite for Google workloads. It's a portmanteau of "fleet" and "benchmark". It is meant for use by chip vendors, compiler researchers, and others interested in making performance optimizations beneficial to workloads similar to Google's. This repository contains the Fleetbench C++ code.

Overview

Fleetbench is a benchmarking suite that consists of a curated set of microbenchmarks for hot functions across Google's fleet. The data set distributions it uses for executing the benchmarks are derived from data collected in production.

IMPORTANT: The benchmarking suite is not complete at this time. As described in this paper, a significant portion of compute is spent in code common to many applications - the so-called ‘Data Center Tax’. This benchmark at v0.4 represents a subset of the core libraries used across the fleet. Future releases will continue to increase this coverage. The goal is to expand coverage iteratively and keep distributions up to date, so always use the version at HEAD.

For more information, see:

Benchmark fidelity

Benchmark fidelity is an important consideration in building this suite. There are 3 levels of fidelity that we consider:

  1. The suite exercises the same functionality as production.
  2. The suite's performance counters match production.
  3. An optimization's impact on the suite matches its impact on production.

Versioning

Fleetbench uses semantic versioning for its releases: PATCH versions are used for bug fixes, MINOR for updates to distributions and category coverage, and MAJOR for substantial changes to the benchmarking suite. All releases are tagged, and the suite can be built and run at any tagged version.
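
For example, to build and run the suite at a tagged release (the tag name here is illustrative):

git checkout v2.1
bazel run --config=opt fleetbench/swissmap:swissmap_benchmark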

If you're starting out, the authors recommend always using the latest version at HEAD.

Workloads coverage

As of Q3'23, Fleetbench provides coverage for several major hot functions.

Benchmark    Description
Proto        Instruction-focused.
Swissmap     Data-focused.
Libc         Data-focused.
TCMalloc     Data-focused.
Compression  Data-focused. Covers Snappy, ZSTD, Brotli, and Zlib.
Hashing      Data-focused. Supports algorithms CRC32 and absl::Hash.
STL-Cord     Instruction-focused.

Running Benchmarks

Setup

Bazel is the official build system for Fleetbench.

Bazel 6 is our supported version.

As an example, to run the Swissmap benchmarks:

bazel run --config=opt fleetbench/swissmap:swissmap_benchmark

Important: Always run benchmarks with --config=opt to apply essential compiler optimizations.

Run commands

Replace WORK_LOAD and BUILD_TARGET with one of the entries in the table below to build and run a benchmark. The reasons for each build flag are explained in the next few sections.

GLIBC_TUNABLES=glibc.pthread.rseq=0 bazel build --config=clang --config=opt --config=haswell fleetbench/WORK_LOAD:BUILD_TARGET
bazel-bin/fleetbench/WORK_LOAD/BUILD_TARGET

Or combining build and run together:

GLIBC_TUNABLES=glibc.pthread.rseq=0 bazel run --config=clang --config=opt --config=haswell fleetbench/WORK_LOAD:BUILD_TARGET

Benchmark    WORK_LOAD    BUILD_TARGET           Binary run flags
Proto        proto        proto_benchmark        --benchmark_min_time=3s
Swissmap     swissmap     swissmap_benchmark
Libc memory  libc         mem_benchmark          --benchmark_counters_tabular=true
TCMalloc     tcmalloc     empirical_driver       --benchmark_min_time=10s. Check --benchmark_filter below.
Compression  compression  compression_benchmark  --benchmark_counters_tabular=true
Hashing      hashing      hashing_benchmark      --benchmark_counters_tabular=true
STL-Cord     stl          cord_benchmark

NOTE: By default, each benchmark only runs a minimal set of tests that we have selected as the most representative. To see the default lists, you can use the --benchmark_list_tests flag when running the target. You can add --benchmark_filter=all to see the exhaustive list.
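
For example, for the Proto target (any target from the table above works the same way):

# Show the default, curated benchmark list.
bazel run --config=opt fleetbench/proto:proto_benchmark -- --benchmark_list_tests
# Show the exhaustive benchmark list.
bazel run --config=opt fleetbench/proto:proto_benchmark -- --benchmark_list_tests --benchmark_filter=all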

You can also pass a regex to the --benchmark_filter flag to select a subset of benchmarks to run (more info). The TCMalloc empirical driver benchmark can take ~1 hour to run all benchmarks, so running a subset may be advisable.

Example to run for only sets of 16 and 64 elements of swissmap:

bazel run --config=opt fleetbench/swissmap:swissmap_benchmark -- \
--benchmark_filter=".*set_size:(16|64).*"

To extend the runtime of a benchmark, e.g. to collect more profile samples, use --benchmark_min_time.

bazel run --config=opt fleetbench/proto:proto_benchmark -- --benchmark_min_time=30s

Some benchmarks also provide counter reports after completion. Adding --benchmark_counters_tabular=true (doc) can help print counters as table columns for improved layout.
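
For example, using the Libc target from the table above:

bazel run --config=opt fleetbench/libc:mem_benchmark -- --benchmark_counters_tabular=true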

Ensuring TCMalloc per-CPU Mode

TCMalloc is the underlying memory allocator in this benchmark suite. By default it operates in per-CPU mode.

Note: the Restartable Sequences (RSEQ) kernel feature is required for per-CPU mode. RSEQ has the limitation that a given thread can only register a single rseq structure with the kernel. Recent versions of glibc do this on initialization, preventing TCMalloc from using it.

Set the environment variable: GLIBC_TUNABLES=glibc.pthread.rseq=0 to prevent glibc from doing this registration. This will allow TCMalloc to operate in per-CPU mode.
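
For example, to run the Swissmap benchmark in per-CPU mode (any target from the table above works the same way):

GLIBC_TUNABLES=glibc.pthread.rseq=0 bazel run --config=opt fleetbench/swissmap:swissmap_benchmark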

Clang Toolchain

For more consistency with Google's build configuration, we suggest using the Clang / LLVM tools. These instructions have been tested with LLVM 14.

These can be installed with the system's package manager, e.g. on Debian:

sudo apt-get install clang llvm lld

Otherwise, see https://releases.llvm.org to obtain these if not present on your system or to find the newest version.

Once installed, specify --config=clang on the bazel command line to use the clang compiler. We assume clang and lld are in the PATH.

Note: to make this setting the default, add build --config=clang to your .bazelrc.
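
For example, a minimal .bazelrc at the workspace root could contain:

build --config=clang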

Architecture-Specific Flags

If running on an x86 Haswell or above machine, we suggest adding --config=haswell for consistency with our compiler flags.

Use --config=westmere for Westmere-era processors.
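
For example, on a Haswell-or-newer x86 machine (the Proto target is just an example):

bazel run --config=clang --config=opt --config=haswell fleetbench/proto:proto_benchmark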

Reducing run-to-run variance

Some variance in the reported CPU times across benchmark executions is expected. The benchmark itself runs the same code, so the causes of the variance are mainly environmental. The following is a non-exhaustive list of techniques that help reduce run-to-run latency variance (a combined example follows the list):

  • Ensure no other workloads are running on the machine at the same time. Note that this makes the environment less representative of production, where multi-tenant workloads are common.
  • Run the benchmark for longer, controlled with --benchmark_min_time.
  • Run multiple repetitions of the benchmarks in one go, controlled with --benchmark_repetitions.
  • Recommended by the benchmarking framework here:
    • Disable frequency scaling
    • Bind the process to a core by setting its affinity
    • Disable processor boosting
    • Disable Hyperthreading/SMT (should not affect single-threaded benchmarks)
    • NOTE: We do not recommend reducing the working set of the benchmark to fit into L1 cache, contrary to the recommendations in the link, as it would significantly reduce this benchmarking suite's representativeness.
  • Disable memory randomization (ASLR)
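
For example, a sketch combining several of these techniques on Linux (taskset and setarch are standard Linux utilities; the core number and flag values are arbitrary, and the binary path assumes a prior bazel build):

# Pin to one core, disable ASLR, and run 5 repetitions of 10 seconds each.
taskset -c 2 setarch "$(uname -m)" --addr-no-randomize \
  bazel-bin/fleetbench/swissmap/swissmap_benchmark \
  --benchmark_min_time=10s --benchmark_repetitions=5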

Future Work

Potential areas of future work include:

  • Increasing the set of benchmarks included in the suite to capture more of the range of code executed by Google workloads.
  • Generating a benchmarking score.
  • Updating data distributions based on new fleet measurements.
  • Rewriting individual components with macrobenchmarks.
  • Extending the benchmarking suite to allow for drop-in replacement of equivalent implementations for each category of workloads.

FAQs

  1. Q: How do I compare the results of two different runs of a benchmark, e.g. contender vs. baseline?

    A: Fleetbench uses the Google Benchmark framework. Please refer to its documentation on comparing results across benchmark runs: link. One possible workflow is sketched below.
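
    A sketch of that workflow, assuming a local checkout of the google/benchmark repository for its tools/compare.py script (file names are illustrative):

# Save baseline and contender results as JSON, then compare them.
bazel-bin/fleetbench/proto/proto_benchmark --benchmark_out=baseline.json --benchmark_out_format=json
# ...apply your change and rebuild...
bazel-bin/fleetbench/proto/proto_benchmark --benchmark_out=contender.json --benchmark_out_format=json
python3 benchmark/tools/compare.py benchmarks baseline.json contender.json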

  2. Q: How do I build the benchmark with FDO?

    A: Note that Clang and the LLVM tools are required for FDO builds.

    Take fleetbench/swissmap/swissmap_benchmark as an example.

# Instrument.
bazel build --config=clang --config=opt --fdo_instrument=.fdo fleetbench/swissmap:swissmap_benchmark
# Run to generate instrumentation.
bazel-bin/fleetbench/swissmap/swissmap_benchmark --benchmark_filter=all
# There should be a file with a .profraw extension in $PWD/.fdo/.
# Build an optimized binary.
bazel build --config=clang --config=opt --fdo_optimize=.fdo/<filename>.profraw fleetbench/swissmap:swissmap_benchmark
# Run the FDO-optimized binary.
bazel-bin/fleetbench/swissmap/swissmap_benchmark --benchmark_filter=all

  3. Q: How do I build the benchmark with ThinLTO?

    A: Note that Clang and the LLVM tools are required for ThinLTO builds. In particular, the lld linker must be in the PATH. Specify --features=thin_lto on the bazel command line. E.g.

bazel run --config=clang --config=opt --features=thin_lto fleetbench/proto:proto_benchmark

  4. Q: Does Fleetbench run on _ OS?

    A: The supported platforms are the same as TCMalloc's; see link for more details.

  5. Q: Can I run Fleetbench without TCMalloc?

    A: Yes. Specify --custom_malloc="@bazel_tools//tools/cpp:malloc" on the bazel command line to override with the system allocator.
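
    For example:

    bazel run --config=opt --custom_malloc="@bazel_tools//tools/cpp:malloc" fleetbench/proto:proto_benchmark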

  6. Q: Can I run with Address Sanitizer?

    A: Yes. Note that you need to override TCMalloc as well for ASAN to work.

    Example:

    bazel build --custom_malloc="@bazel_tools//tools/cpp:malloc" -c opt fleetbench/proto:proto_benchmark --copt=-fsanitize=address --linkopt=-fsanitize=address

  7. Q: Are the benchmarks fixed in nature?

    A: No. We expect that the code under benchmark, the hardware, the compiler, and the compiler flags may all change in concert so as to identify optimization opportunities.

  8. Q: My question isn't addressed here. How do I contact the development team?

    A: Please check previous GitHub issues and file a new one if your question isn't addressed there.

License

Fleetbench is licensed under the terms of the Apache license. See LICENSE for more information.

Disclaimer: This is not an officially supported Google product.

fleetbench's People

Contributors

andreas-abel, aysylu, ckennelly, connull, katre, liyuying0000, rickeylev, rjogrady


fleetbench's Issues

FDO build failing due to missing bazel_tools

I am trying to build Fleetbench with FDO, following these steps from the FAQ:

# Instrument.
bazel build --config=clang --config=opt --fdo_instrument=.fdo fleetbench/swissmap:hot_swissmap_benchmark
# Run to generate instrumentation.
bazel-bin/fleetbench/swissmap/hot_swissmap_benchmark --benchmark_filter=all
# There should be a file with a .profraw extension in $PWD/.fdo/.
# Build an optimized binary.
bazel build --config=clang --config=opt --fdo_optimize=.fdo/<filename>.profraw fleetbench/swissmap:hot_swissmap_benchmark
# Run the FDO-optimized binary.
bazel-bin/fleetbench/swissmap/hot_swissmap_benchmark --benchmark_filter=all

Using Clang 15 and Bazel 5.

The build fails with this error:
$ bazel build --config=clang --config=opt --fdo_optimize=.fdo/default_18006715353208796581_0.profraw fleetbench/swissmap:hot_swissmap_benchmark

DEBUG: /root/.cache/bazel/_bazel_root/9906b1d63bb73ec34cafd40427c4e498/external/rules_python/python/repositories.bzl:32:10: py_repositories is a no-op and is deprecated. You can remove this from your WORKSPACE file
INFO: Build options --copt, --fdo_instrument, and --fdo_optimize have changed, discarding analysis cache.
ERROR: /root/.cache/bazel/_bazel_root/9906b1d63bb73ec34cafd40427c4e498/external/local_config_cc/BUILD:57:13: every rule of type cc_toolchain implicitly depends upon the target '@bazel_tools//tools/zip:unzip_fdo', but this target could not be found because of: no such target '@bazel_tools//tools/zip:unzip_fdo': target 'unzip_fdo' not declared in package 'tools/zip' defined by /root/.cache/bazel/_bazel_root/9906b1d63bb73ec34cafd40427c4e498/external/bazel_tools/tools/zip/BUILD
ERROR: Analysis of target '//fleetbench/swissmap:hot_swissmap_benchmark' failed; build aborted:
INFO: Elapsed time: 0.341s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded, 820 targets configured)

C++14 requirement error using Bazel 6.3.2 and earlier

Since 43d9a76, C++14 is needed to build absl.

When using Bazel 6.3.2, I get this error when building fleetbench/proto:proto_benchmark

external/com_google_absl/absl/base/policy_checks.h:79:2: error: "C++ versions less than C++14 are not supported."

This is because Bazel 6.3.2 explicitly sets -std=c++0x.

I think this means the minimum requirement is now Bazel 6.4.0, because it sets -std=c++14 due to bazelbuild/bazel#19794.

Would you please update the Fleetbench documentation to reflect this requirement, or provide a workaround for Bazel 6.3.2 and earlier?

Thank you.

dataset and micro-benchmark support

I have been using hyperprotobench and its dataset (as described in the MICRO'21 paper) for my work. Looking at Fleetbench from the protobuf perspective, it's very nice to see the benchmark methodology where it shuffles messages in the working set to defeat the CPU prefetcher, which is the main delta. One big difference between hyperprotobench and Fleetbench is the dataset: the max string/byte size is 1KB. Is the dataset in hyperprotobench still representative of data center messages, or is Fleetbench still being developed?

Benchmarking is hard, and producing repeatable results is even harder. In my tests I have seen 25+% run-to-run variation (p-state off, c-state off, etc.).

In addition to Lifecycle.Run(), is there a plan to have a proper way to micro-benchmark individual functions: Create, Serialize, Deserialize, Reflect, etc.?

libc output - time_unit is ns instead of seconds

In the first line, the cpu_time is 0.0982076 ns, meaning less than 1 CPU cycle, which of course doesn't make sense.

I would expect cpu_time ~ bytes / bytes_per_second, and that is the case; it's just that cpu_time should then be in seconds, not in nanoseconds as the time_unit suggests.

I would suggest checking the other benchmarks as well.

Thanks.

[screenshot of benchmark output omitted]

Bazel threw errors when building

Environment

os: ubuntu
kernel: 5.15.0-56-generic
cpu: x86_64

Output from bazel version

Build label: 6.0.0
Build target: bazel-out/k8-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Mon Dec 19 15:52:35 2022 (1671465155)
Build timestamp: 1671465155
Build timestamp as int: 1671465155

Command I ran

bazel run -c opt fleetbench/swissmap:hot_swissmap_benchmark

Result

ERROR: /home/user1/.cache/bazel/_bazel_user1/d0ccc3366f5edc4e660a0237ca9928bc/external/bazel_tools/platforms/BUILD:89:6: in alias rule @bazel_tools//platforms:windows: Constraints from @bazel_tools//platforms have been removed. Please use constraints from @platforms repository embedded in Bazel, or preferably declare dependency on https://github.com/bazelbuild/platforms. See https://github.com/bazelbuild/bazel/issues/8622 for details.
ERROR: /home/user1/.cache/bazel/_bazel_user1/d0ccc3366f5edc4e660a0237ca9928bc/external/bazel_tools/platforms/BUILD:89:6: Analysis of target '@bazel_tools//platforms:windows' failed
ERROR: /home/user1/fleetbench/fleetbench/swissmap/BUILD:44:28: While resolving toolchains for target //fleetbench/swissmap:hot_swissmap_benchmark: invalid registered toolchain '@bazel_skylib//toolchains/unittest:cmd_toolchain': 
ERROR: Analysis of target '//fleetbench/swissmap:hot_swissmap_benchmark' failed; build aborted: 
INFO: Elapsed time: 0.250s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 1 target configured)
ERROR: Build failed. Not running target

I believe this is a compatibility problem. I'm new to Bazel and didn't find any clues for solving it.

Issue with libpfm during bazel run --config=opt fleetbench/tcmalloc:empirical_driver

using bazel 6.3.2
using gcc 12.3.0

DEBUG: /private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/rules_python/python/pip.bzl:47:10: pip_install is deprecated. Please switch to pip_parse. pip_install will be removed in a future release.
INFO: Repository libpfm instantiated at:
Desktop/bechmark_fleetbech/fleetbench/WORKSPACE:110:13: in
Repository rule http_archive defined at:
/private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in
WARNING: Download from https://sourceforge.net/projects/perfmon2/files/libpfm4/libpfm-4.11.0.tar.gz/download failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException Checksum was 251a85b3bac687974f360d3796048c20ded3bf0bd69e0d1cfd1db23d013f89ed but wanted 5da5f8872bde14b3634c9688d980f68bda28b510268723cc12973eedbab9fecc
ERROR: An error occurred during the fetch of repository 'libpfm':
Traceback (most recent call last):
File "/private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://sourceforge.net/projects/perfmon2/files/libpfm4/libpfm-4.11.0.tar.gz/download] to /private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/libpfm/temp3458073497591957110/download.tar.gz: Checksum was 251a85b3bac687974f360d3796048c20ded3bf0bd69e0d1cfd1db23d013f89ed but wanted 5da5f8872bde14b3634c9688d980f68bda28b510268723cc12973eedbab9fecc
ERROR: /Desktop/bechmark_fleetbech/fleetbench/WORKSPACE:110:13: fetching http_archive rule //external:libpfm: Traceback (most recent call last):
File "/private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://sourceforge.net/projects/perfmon2/files/libpfm4/libpfm-4.11.0.tar.gz/download] to /private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/libpfm/temp3458073497591957110/download.tar.gz: Checksum was 251a85b3bac687974f360d3796048c20ded3bf0bd69e0d1cfd1db23d013f89ed but wanted 5da5f8872bde14b3634c9688d980f68bda28b510268723cc12973eedbab9fecc
ERROR: /private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/com_google_benchmark/BUILD.bazel:35:11: @com_google_benchmark//:benchmark depends on @libpfm//:libpfm in repository @libpfm which failed to fetch. no such package '@libpfm//': java.io.IOException: Error downloading [https://sourceforge.net/projects/perfmon2/files/libpfm4/libpfm-4.11.0.tar.gz/download] to /private/var/tmp/_bazel_athbha01/d016f079ed8063c5398a942b18f70a7b/external/libpfm/temp3458073497591957110/download.tar.gz: Checksum was 251a85b3bac687974f360d3796048c20ded3bf0bd69e0d1cfd1db23d013f89ed but wanted 5da5f8872bde14b3634c9688d980f68bda28b510268723cc12973eedbab9fecc
ERROR: Analysis of target '//fleetbench/tcmalloc:empirical_driver' failed; build aborted:
INFO: Elapsed time: 335.156s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (75 packages loaded, 703 targets configured)
ERROR: Build failed. Not running target

Use own version of protobuf locally

Hello,

I am trying to use Fleetbench to do some benchmarking; however, I want to point to my own version of protobuf that I have locally on my machine.
I tried changing the URL archive for protobuf to a local repository in the WORKSPACE file (see below).
[screenshot of the WORKSPACE change omitted]

However, I then get this error
[screenshot of the error omitted]

I am wondering: is it possible to use my own version of protobuf locally, and if so, what would be the easiest way to implement this?

Thanks in advance,
David

compile fleetbench/tcmalloc:empirical_driver failed

When I run bazel run --config=opt fleetbench/tcmalloc:empirical_driver --config=clang, it fails with an error like:

ld.lld: error: undefined symbol: std::filesystem::__cxx11::directory_iterator::directory_iterator(std::filesystem::__cxx11::path const&, std::filesystem::directory_options, std::error_code*)
>>> referenced by common.cc
>>> bazel-out/k8-opt-clang/bin/fleetbench/common/_objs/common/common.o:(fleetbench::GetMatchingFiles[abi:cxx11](std::basic_string_view<char, std::char_traits<char>>, std::basic_string_view<char, std::char_traits<char>>))

It seems -lstdc++fs needs to be added to the ldflags.
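
A sketch of one possible way to pass that flag on the command line (untested; --linkopt is a standard Bazel flag, and whether this resolves the error depends on the toolchain):

bazel run --config=clang --config=opt --linkopt=-lstdc++fs fleetbench/tcmalloc:empirical_driver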

Error while running the tcmalloc empirical driver: bazel run --config=opt fleetbench/tcmalloc:empirical_driver

using bazel 6.3.2
using gcc 12.3.0

The above issue is now fixed, but now I've ended up with this one. Please guide me.

bazel run --config=opt fleetbench/tcmalloc:empirical_driver
WARNING: Output base ' /.cache/bazel/_bazel_athbha01/8f4a24188ec25142b28e12f4ec4ee68a' is on NFS. This may lead to surprising failures and undetermined behavior.
DEBUG: /.cache/bazel/_bazel_athbha01/8f4a24188ec25142b28e12f4ec4ee68a/external/rules_python/python/pip.bzl:47:10: pip_install is deprecated. Please switch to pip_parse. pip_install will be removed in a future release.
INFO: Analyzed target //fleetbench/tcmalloc:empirical_driver (79 packages loaded, 2119 targets configured).
INFO: Found 1 target...
INFO: From Compiling absl/strings/internal/str_format/float_conversion.cc:
In file included from /usr/include/string.h:638,
from /gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../include/c++/12.2.0/cstring:42,
from external/com_google_absl/absl/strings/internal/str_format/extension.h:23,
from external/com_google_absl/absl/strings/internal/str_format/float_conversion.h:18,
from external/com_google_absl/absl/strings/internal/str_format/float_conversion.cc:15:
In function 'void* memset(void*, int, size_t)',
inlined from 'absl::str_format_internal::FormatSinkImpl::Append(size_t, char)::<lambda(size_t)>' at external/com_google_absl/absl/strings/internal/str_format/extension.h:82:13,
inlined from 'void absl::str_format_internal::FormatSinkImpl::Append(size_t, char)' at external/com_google_absl/absl/strings/internal/str_format/extension.h:88:19,
inlined from 'void absl::str_format_internal::{anonymous}::FormatFNegativeExpSlow(absl::uint128, int, const FormatState&)' at external/com_google_absl/absl/strings/internal/str_format/float_conversion.cc:616:49:
/usr/include/bits/string3.h:81:30: warning: call to '__warn_memset_zero_len' declared with attribute warning: memset used with constant zero length parameter; this could be due to transposed parameters [-Wattribute-warning]
81 | __warn_memset_zero_len ();
| ~~~~~~~~~~~~~~~~~~~~~~~^~
INFO: From Compiling fleetbench/benchmark_main.cc:
fleetbench/benchmark_main.cc: In function 'int main(int, char**)':
fleetbench/benchmark_main.cc:26:16: warning: unused variable 'background' [-Wunused-variable]
26 | static auto* background =
| ^~~~~~~~~~
INFO: From Compiling fleetbench/tcmalloc/empirical_driver.cc:
In file included from external/com_google_absl/absl/base/macros.h:36,
from external/com_google_absl/absl/base/dynamic_annotations.h:54,
from external/com_google_absl/absl/base/internal/spinlock.h:37,
from fleetbench/tcmalloc/empirical_driver.cc:32:
external/com_google_absl/absl/log/internal/check_op.h: In instantiation of 'constexpr std::string* absl::log_internal::Check_GTImpl(const T1&, const T2&, const char*) [with T1 = long unsigned int; T2 = int; std::string = std::__cxx11::basic_string]':
fleetbench/tcmalloc/empirical_driver.cc:127:5: required from here
external/com_google_absl/absl/log/internal/check_op.h:341:43: warning: comparison of integer expressions of different signedness: 'const long unsigned int' and 'const int' [-Wsign-compare]
341 | ABSL_LOG_INTERNAL_CHECK_OP_IMPL(Check_GT, >)
external/com_google_absl/absl/base/optimization.h:179:58: note: in definition of macro 'ABSL_PREDICT_TRUE'
179 | #define ABSL_PREDICT_TRUE(x) (__builtin_expect(false || (x), true))
| ^
external/com_google_absl/absl/log/internal/check_op.h:341:1: note: in expansion of macro 'ABSL_LOG_INTERNAL_CHECK_OP_IMPL'
341 | ABSL_LOG_INTERNAL_CHECK_OP_IMPL(Check_GT, >)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ERROR: /test_bench/google-fleetbench-b1a91dd/fleetbench/tcmalloc/BUILD:48:8: Linking fleetbench/tcmalloc/empirical_driver failed: (Exit 1): gcc failed: error executing command (from target //fleetbench/tcmalloc:empirical_driver)/gnu/gcc/12.2.0/rhe7-x86_64/bin/gcc @bazel-out/k8-opt/bin/fleetbench/tcmalloc/empirical_driver-2.params

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
/bin/ld:/gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/libgcc.a(_muldi3.o): unable to initialize decompress status for section .debug_info
/bin/ld: /gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/libgcc.a(_muldi3.o): unable to initialize decompress status for section .debug_info
/bin/ld: /gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/libgcc.a(_popcountsi2.o): unable to initialize decompress status for section .debug_info
/bin/ld: /gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/libgcc.a(_popcountsi2.o): unable to initialize decompress status for section .debug_info
/gnu/gcc/12.2.0/rhe7-x86_64/lib/gcc/x86_64-pc-linux-gnu/12.2.0/libgcc.a: error adding symbols: File format not recognized
collect2: error: ld returned 1 exit status
Target //fleetbench/tcmalloc:empirical_driver failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 102.390s, Critical Path: 96.84s
INFO: 387 processes: 97 internal, 290 processwrapper-sandbox.
FAILED: Build did NOT complete successfully
ERROR: Build failed. Not running target

Proto benchmark finer details

I noticed that the proto benchmark has 11 representative protos. Right now, the benchmark outputs only a single runtime value. I was wondering if it would be possible to get runtimes for each representative protobuf independently.

Also, I was wondering whether metrics like processed messages/second and processed bytes/second could be added to this benchmark.

Bazel invokes `external/local_config_cc/None` as the linker

... if binutils is not installed.

Note: This is not necessarily a fleetbench issue, but I want to at least document this here as I know others have hit similar issues, and it was a bit nasty to undo because it required knowing that you had to delete ~/.cache/bazel in order to get it working again once in the bad state. Ideally though, this scenario would work.

Context: I'm putting together a docker container with only a clang toolchain in it for the purposes of testing that toolchain, and I want to make sure there is no chance another toolchain is being picked up. If I build with --config=opt --config=clang and Clang/LLD in the $PATH, I get the following confusing error:

ERROR: /root/.cache/bazel/_bazel_root/9c087a2e32e91c2e0a08651c5d06d7a5/external/com_google_absl/absl/random/internal/BUILD.bazel:267:11: Linking external/com_google_absl/absl/random/internal/libplatform.a failed: (Exit 1): None failed: error executing command (from target @com_google_absl//absl/random/internal:platform)
  (cd /root/.cache/bazel/_bazel_root/9c087a2e32e91c2e0a08651c5d06d7a5/sandbox/linux-sandbox/328/execroot/com_google_fleetbench && \
  exec env - \
    PATH=/home/fleetbench/../compiler/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    PWD=/proc/self/cwd \
  external/local_config_cc/None @bazel-out/aarch64-fastbuild-clang/bin/external/com_google_absl/absl/random/internal/libplatform.a-2.params)

That is, Bazel appears to be invoking external/local_config_cc/None, which is not a valid path.

Furthermore, the error persists even after installing binutils, and even after a bazel clean. Removing the bazel cache directory or changing the bazel version does however allow the build to succeed once binutils is installed.

I also found that what seems to be going wrong is that Bazel checks for the presence of /usr/bin/ld.gold and hardcodes -fuse-ld=/usr/bin/ld.gold, even though that linker is ultimately unused; I did also need to pass --linkopt=-fuse-ld=lld.

Supplying --features=thin_lto also did not work but supplying --{c,cxx,link}opt=-flto=thin along with -fuse-ld=lld did work.
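
Spelled out, the working invocation described above looks roughly like this (the target is illustrative):

bazel build --config=clang --config=opt --copt=-flto=thin --cxxopt=-flto=thin --linkopt=-flto=thin --linkopt=-fuse-ld=lld fleetbench/proto:proto_benchmark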

Upgrade tcmalloc

In my local builds with libc++ and an up-to-date clang I was seeing an unused variable error from tcmalloc. I worked around it with the patch below.

--- tcmalloc/system-alloc.cc.orig       2024-04-11 19:05:57.249351705 -0700
+++ tcmalloc/system-alloc.cc    2024-04-11 19:05:58.672340539 -0700
@@ -652,10 +652,9 @@
           strerror(errno));
       return nullptr;
     }
-    if (int err = munmap(result, size)) {
+    if (munmap(result, size)) {
       Log(kLogWithStack, __FILE__, __LINE__, "munmap() failed (error)",
           strerror(errno));
-      ASSERT(err == 0);
     }
     next_addr = RandomMmapHint(size, alignment, tag);
   }

This seems to have been fixed in upstream tcmalloc by changing the macro used for assertions to always evaluate its argument, so it looks like upgrading tcmalloc will make this problem go away.

Question about glibc version

It seems that after e32b70c a newer glibc version is needed. Do you have plans to support older glibc versions?
I use an old glibc version, and after e32b70c Fleetbench no longer works.

Issue while building kMaxStackDepth

Kindly assist me: where is the actual issue?
Using bazel 6.3.2

Compiling tcmalloc/transfer_cache.cc failed: (Exit 1): gcc failed: error executing command (from target @com_google_tcmalloc//tcmalloc:common) /bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' ... (remaining 40 arguments skipped)

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from external/com_google_tcmalloc/tcmalloc/internal/lifetime_tracker.h:21,
                from external/com_google_tcmalloc/tcmalloc/huge_page_filler.h:33,
                from external/com_google_tcmalloc/tcmalloc/huge_region.h:24,
                from external/com_google_tcmalloc/tcmalloc/huge_page_aware_allocator.h:26,
                from external/com_google_tcmalloc/tcmalloc/page_allocator.h:25,
                from external/com_google_tcmalloc/tcmalloc/static_vars.h:39,
                from external/com_google_tcmalloc/tcmalloc/transfer_cache.cc:35:
external/com_google_tcmalloc/tcmalloc/internal/lifetime_predictions.h:179:20: error: declaration of 'const int tcmalloc::tcmalloc_internal::LifetimeDatabase::kMaxStackDepth' changes meaning of 'kMaxStackDepth' [-Wchanges-meaning]
179 |   static const int kMaxStackDepth = 64;
     |                    ^~~~~~~~~~~~~~
external/com_google_tcmalloc/tcmalloc/internal/lifetime_predictions.h:107:18: note: used here to mean 'constexpr const int tcmalloc::tcmalloc_internal::kMaxStackDepth'
107 |     void* stack_[kMaxStackDepth];
     |                  ^~~~~~~~~~~~~~
In file included from external/com_google_tcmalloc/tcmalloc/common.h:37,
                from external/com_google_tcmalloc/tcmalloc/central_freelist.h:28,
                from external/com_google_tcmalloc/tcmalloc/transfer_cache.h:35,
                from external/com_google_tcmalloc/tcmalloc/transfer_cache.cc:15:
external/com_google_tcmalloc/tcmalloc/internal/logging.h:49:22: note: declared here
  49 | static constexpr int kMaxStackDepth = 64;
     |                      ^~~~~~~~~~~~~~
Target //fleetbench/tcmalloc:empirical_driver failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 17.559s, Critical Path: 4.14s
INFO: 116 processes: 22 internal, 94 processwrapper-sandbox.
FAILED: Build did NOT complete successfully
ERROR: Build failed. Not running target

How to use local Protobuf

Hello,

I am trying to use Fleetbench with my own local Protobuf repo (i.e., built from source). My understanding is that Protobuf is currently fetched over the network using Bazel's http_archive. Is it possible to change the Bazel build files to use a local Protobuf version? Is this supported?

Build failure with gcc on arm

bazel run --config=opt -k fleetbench/swissmap:swissmap_benchmark fails like so:

WARNING: Build options --copt and --platform_suffix have changed, discarding analysis cache (this can be expensive, see https://bazel.build/advanced/performance/iteration-speed).
INFO: Analyzed target //fleetbench/swissmap:swissmap_benchmark (82 packages loaded, 2088 targets configured).
ERROR: /mnt/disk2/pcc/fleetbench/cache/bazel/_bazel_pcc/6ebf5179f821c2a2e8b4f49bd226762e/external/com_google_tcmalloc/tcmalloc/BUILD:94:11: Compiling tcmalloc/tcmalloc.cc failed: (Exit 1): gcc failed: error executing CppCompile command (from target @@com_google_tcmalloc//tcmalloc:tcmalloc) /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 37 arguments skipped)

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from external/com_google_tcmalloc/tcmalloc/cpu_cache.h:50,
                 from external/com_google_tcmalloc/tcmalloc/allocation_sampling.h:33,
                 from external/com_google_tcmalloc/tcmalloc/tcmalloc.cc:81:
external/com_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:813:36: error: 'always_inline' function might not be inlinable [-Werror=attributes]
  813 | ABSL_ATTRIBUTE_ALWAYS_INLINE void* TcmallocSlab<NumClasses>::Pop(
      |                                    ^~~~~~~~~~~~~~~~~~~~~~~~
during RTL pass: expand
In function 'bool tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab_Internal_Push(size_t, void*)',
    inlined from 'bool tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab<NumClasses>::Push(size_t, void*) [with long unsigned int NumClasses = 172]' at external/com_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:706:36,
    inlined from 'bool tcmalloc::tcmalloc_internal::cpu_cache_internal::CpuCache<Forwarder>::DeallocateFast(void*, size_t) [with Forwarder = tcmalloc::tcmalloc_internal::cpu_cache_internal::StaticForwarder]' at external/com_google_tcmalloc/tcmalloc/cpu_cache.h:735:24:
external/com_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:630:3: internal compiler error: 'asm' clobber conflict with output operand
  630 |   asm volatile(
      |   ^~~
0x194b0fb internal_error(char const*, ...)
        ???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://github.com/archlinuxarm/PKGBUILDs/issues> for instructions.
INFO: Found 1 target...
Target //fleetbench/swissmap:swissmap_benchmark failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 35.533s, Critical Path: 27.80s
INFO: 52 processes: 3 internal, 49 linux-sandbox.
ERROR: Build did NOT complete successfully
ERROR: Build failed. Not running target

Fleetbench build fails with a permission error using Bazel 5.4.0

Hi,

I want to build an aarch64 version of Fleetbench. However, it fails with a permission error.

Here is the build log. I had set the fleetbench folder permissions to 777.
bazel run -c opt fleetbench/swissmap:hot_swissmap_benchmark --verbose_failures
2023/01/09 03:40:53 Downloading https://releases.bazel.build/5.4.0/release/bazel-5.4.0-linux-arm64...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //fleetbench/swissmap:hot_swissmap_benchmark (65 packages loaded, 836 targets configured).
INFO: Found 1 target...
ERROR: /home/nvidia/walter/fleetbench/fleetbench/BUILD:15:11: Compiling fleetbench/benchmark_main.cc failed: (Exit 1): gcc failed: error executing command
(cd /root/.cache/bazel/bazel_root/0bce1989468318c371f4348e6ac4d902/sandbox/linux-sandbox/15/execroot/com_google_fleetbench &&
exec env -
PATH=/root/.cache/bazelisk/downloads/bazelbuild/bazel-5.4.0-linux-arm64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
PWD=/proc/self/cwd
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/aarch64-opt/bin/fleetbench/_objs/benchmark_main/benchmark_main.d '-frandom-seed=bazel-out/aarch64-opt/bin/fleetbench/_objs/benchmark_main/benchmark_main.o' -DBENCHMARK_STATIC_DEFINE -iquote . -iquote bazel-out/aarch64-opt/bin -iquote external/com_google_benchmark -iquote bazel-out/aarch64-opt/bin/external/com_google_benchmark -Ibazel-out/aarch64-opt/bin/external/com_google_benchmark/_virtual_includes/benchmark '-std=c++17' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c fleetbench/benchmark_main.cc -o bazel-out/aarch64-opt/bin/fleetbench/_objs/benchmark_main/benchmark_main.o)

Configuration: a0b0f0a2e12d5d8ebd5c1e57a8b5134db01aaef167d6db5c638a140b29cfa08a

Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
gcc: error: fleetbench/benchmark_main.cc: Permission denied
gcc: fatal error: no input files
compilation terminated.
Target //fleetbench/swissmap:hot_swissmap_benchmark failed to build
INFO: Elapsed time: 17.432s, Critical Path: 1.09s
INFO: 170 processes: 166 internal, 4 linux-sandbox.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
root@nvidia:/home/nvidia/walter/fleetbench# bazel version
Bazelisk version: v1.13.2
Build label: 5.4.0
Build target: bazel-out/aarch64-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Dec 15 16:14:11 2022 (1671120851)
Build timestamp: 1671120851
Build timestamp as int: 1671120851

I did some research and found that it was caused by a symlink loop: the link didn't point to the correct source file but to itself. Did I miss some build options or configurations?
[screenshot of the symlink omitted]

Fleetbench-proto build fails with Clang+ThinLTO

ERROR: /root/fleetbench/fleetbench/proto/BUILD:164:28: Linking fleetbench/proto/proto_benchmark failed: (Exit 1): clang failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/9906b1d63bb73ec34cafd40427c4e498/sandbox/linux-sandbox/476/execroot/com_google_fleetbench &&
  exec env -
    PATH=/root/.cargo/bin:/root/dcsomc:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin:/root/bin
    PWD=/proc/self/cwd
  /usr/bin/clang @bazel-out/k8-opt/bin/fleetbench/proto/proto_benchmark-2.params)
Execution platform: @local_config_platform//:host Use --sandbox_debug to see verbose messages from the sandbox
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/proto_benchmark/benchmark.o:1:3: invalid character
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/proto_benchmark/benchmark.o:1:3: syntax error, unexpected $end
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/proto_benchmark/benchmark.o: not an object or archive
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/benchmark_lib/lifecycle.o:1:3: invalid character
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/benchmark_lib/lifecycle.o:1:3: syntax error, unexpected $end
/usr/bin/ld.gold: error: bazel-out/k8-opt/bin/fleetbench/proto/_objs/benchmark_lib/lifecycle.o: not an object or archive
/usr/bin/ld.gold: internal error in remove_blocker, at token.h:161
clang-12: error: linker command failed with exit code 1 (use -v to see invocation)
Target //fleetbench/proto:proto_benchmark failed to build

Fidelity about fleetbench

Hi:

We are using Fleetbench for a research topic. May I get some insight into how the Fleetbench protocol buffer benchmark is designed? For example:

  1. Why is kIteration set to 10?
  2. In ProtoLifecycle::Run(), is there any reason why the logic looks like this?
    For example: after a message m493_messages_[0] is deserialized (like those in line 196), its deserialized data is not used immediately. Rather, the logic keeps deserializing the other 9 messages, m493_messages_[1-9]. Even after all 10 messages are deserialized, the logic is followed by a copy of m1_messages_. When the CPU tries to use data from m493_strings_[0], that data might not be in the cache.
    In a real protocol buffer application, I would imagine that a message is deserialized and then its deserialized data is used by the CPU immediately for further work (like being modified and then serialized into another message). In that case, most of the data is still in the cache.
    Because of that, I am concerned that the cache behavior in the Fleetbench protocol buffer benchmark might be quite different from a real protocol-buffer-based application, so I am worried about the fidelity. Please correct me if you think I am wrong.

We are interested in the fidelity of the Fleetbench protocol buffer benchmark compared to a real protocol-buffer-based application like RPC.

Thank you~
Jerry

compression_benchmark corpus generation not consistent

GLIBC_TUNABLES=glibc.pthread.rseq=0 ./bazelisk-v1.17 run --config=clang --config=opt --copt=-mcpu=neoverse-v1 --copt=-mtune=neoverse-v1 fleetbench/compression:compression_benchmark -- --benchmark_min_time=60s --benchmark_filter="BM_Snappy-COMPRESS-L|BM_Snappy-DECOMPRESS-L"

I'm running Fleetbench v2.1 on two similar systems. The generated corpora are different, and this affects the execution time, which makes it difficult to compare the performance of the two systems. Is there some parameter to ensure the corpus is the same?

ls -alh machine1
$ ls -alh /home/ubuntu/.cache/bazel/_bazel_ubuntu/5f241f2ce88852e31cc5854c84122669/execroot/com_google_fleetbench/bazel-out/aarch64-opt-clang/bin/fleetbench/compression/corpora/Snappy-DECOMPRESS-L/
total 12M
drwxrwxr-x  2 ubuntu ubuntu 4.0K Jul 24 08:19 .
drwxrwxr-x 41 ubuntu ubuntu 4.0K Jul 24 08:15 ..
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_0
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_1
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_10
-r-xr-xr-x  1 ubuntu ubuntu 2.0M Jul 24 08:17 corpus_11
-r-xr-xr-x  1 ubuntu ubuntu  128 Jul 24 08:17 corpus_12
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_13
-r-xr-xr-x  1 ubuntu ubuntu 2.0K Jul 24 08:17 corpus_14
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_15
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_16
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_17
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_18
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_19
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_2
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_20
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_21
-r-xr-xr-x  1 ubuntu ubuntu 2.0K Jul 24 08:17 corpus_22
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_23
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_24
-r-xr-xr-x  1 ubuntu ubuntu  512 Jul 24 08:17 corpus_25
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_26
-r-xr-xr-x  1 ubuntu ubuntu 1.0M Jul 24 08:17 corpus_27
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_28
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_29
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_3
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_30
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_31
-r-xr-xr-x  1 ubuntu ubuntu  16K Jul 24 08:17 corpus_32
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_33
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_34
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_35
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_36
-r-xr-xr-x  1 ubuntu ubuntu  256 Jul 24 08:17 corpus_37
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_38
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_39
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_4
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_40
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_41
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_42
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_43
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_44
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_45
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_46
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_47
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_48
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_49
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_5
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_50
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_51
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_52
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_53
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_54
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_55
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_56
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_57
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_58
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_59
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_6
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_60
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_61
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_62
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_63
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_64
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_65
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_66
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_67
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_68
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_69
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_7
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_70
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_71
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_72
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_73
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_74
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_75
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_76
-r-xr-xr-x  1 ubuntu ubuntu  16K Jul 24 08:17 corpus_77
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_78
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_79
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_8
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_80
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_81
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_82
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_83
-r-xr-xr-x  1 ubuntu ubuntu 1.0M Jul 24 08:17 corpus_84
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_85
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_86
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_87
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_88
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_89
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_9
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_90
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_91
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_92
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_93
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_94
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_95
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_96
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_97
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_98
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_99
ls -alh machine2
ls -alh /home/ubuntu/.cache/bazel/_bazel_ubuntu/5f241f2ce88852e31cc5854c84122669/execroot/com_google_fleetbench/bazel-out/aarch64-opt-clang/bin/fleetbench/compression/corpora/Snappy-DECOMPRESS-L/
total 15M
drwxrwxr-x  2 ubuntu ubuntu 4.0K Jul 24 08:19 .
drwxrwxr-x 41 ubuntu ubuntu 4.0K Jul 24 08:16 ..
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_0
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_1
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_10
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_11
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_12
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_13
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_14
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_15
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_16
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_17
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_18
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_19
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_2
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_20
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_21
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_22
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_23
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_24
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_25
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_26
-r-xr-xr-x  1 ubuntu ubuntu  16K Jul 24 08:17 corpus_27
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_28
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_29
-r-xr-xr-x  1 ubuntu ubuntu 1.0M Jul 24 08:17 corpus_3
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_30
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_31
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_32
-r-xr-xr-x  1 ubuntu ubuntu 1.0K Jul 24 08:17 corpus_33
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_34
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_35
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_36
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_37
-r-xr-xr-x  1 ubuntu ubuntu 2.0M Jul 24 08:17 corpus_38
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_39
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_4
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_40
-r-xr-xr-x  1 ubuntu ubuntu 2.0M Jul 24 08:17 corpus_41
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_42
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_43
-r-xr-xr-x  1 ubuntu ubuntu  256 Jul 24 08:17 corpus_44
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_45
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_46
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_47
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_48
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_49
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_5
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_50
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_51
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_52
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_53
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_54
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_55
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_56
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_57
-r-xr-xr-x  1 ubuntu ubuntu   64 Jul 24 08:17 corpus_58
-r-xr-xr-x  1 ubuntu ubuntu  256 Jul 24 08:17 corpus_59
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_6
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_60
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_61
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_62
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_63
-r-xr-xr-x  1 ubuntu ubuntu 2.0K Jul 24 08:17 corpus_64
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_65
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_66
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_67
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_68
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_69
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_7
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_70
-r-xr-xr-x  1 ubuntu ubuntu 4.0K Jul 24 08:17 corpus_71
-r-xr-xr-x  1 ubuntu ubuntu  256 Jul 24 08:17 corpus_72
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_73
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_74
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_75
-r-xr-xr-x  1 ubuntu ubuntu 2.0K Jul 24 08:17 corpus_76
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_77
-r-xr-xr-x  1 ubuntu ubuntu 1.0M Jul 24 08:17 corpus_78
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_79
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_8
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_80
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_81
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_82
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_83
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_84
-r-xr-xr-x  1 ubuntu ubuntu  128 Jul 24 08:17 corpus_85
-r-xr-xr-x  1 ubuntu ubuntu 256K Jul 24 08:17 corpus_86
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_87
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_88
-r-xr-xr-x  1 ubuntu ubuntu    8 Jul 24 08:17 corpus_89
-r-xr-xr-x  1 ubuntu ubuntu  64K Jul 24 08:17 corpus_9
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_90
-r-xr-xr-x  1 ubuntu ubuntu 128K Jul 24 08:17 corpus_91
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_92
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_93
-r-xr-xr-x  1 ubuntu ubuntu 8.0K Jul 24 08:17 corpus_94
-r-xr-xr-x  1 ubuntu ubuntu  32K Jul 24 08:17 corpus_95
-r-xr-xr-x  1 ubuntu ubuntu   16 Jul 24 08:17 corpus_96
-r-xr-xr-x  1 ubuntu ubuntu 512K Jul 24 08:17 corpus_97
-r-xr-xr-x  1 ubuntu ubuntu 1.0M Jul 24 08:17 corpus_98
-r-xr-xr-x  1 ubuntu ubuntu   32 Jul 24 08:17 corpus_99
