highwayhash's Introduction

Strong (well-distributed and unpredictable) hashes: a fast, portable SipHash implementation and HighwayHash, an even faster SIMD hash with security claims.

Quick Start

To build on a Linux or Mac platform, simply run make. For Windows, we provide a Visual Studio 2015 project in the msvc subdirectory.

Run benchmark for speed measurements. sip_hash_test and highwayhash_test ensure the implementations return known-good values for a given set of inputs.

64-bit SipHash for any CPU:

    #include "highwayhash/sip_hash.h"
    using namespace highwayhash;
    HH_ALIGNAS(16) const HH_U64 key2[2] = {1234, 5678};
    char in[8] = {1};
    return SipHash(key2, in, 8);

64, 128 or 256 bit HighwayHash for the CPU determined by compiler flags:

    #include "highwayhash/highwayhash.h"
    using namespace highwayhash;
    HH_ALIGNAS(32) const HHKey key = {1, 2, 3, 4};
    char in[8] = {1};
    HHResult64 result;  // or HHResult128 or HHResult256
    HHStateT<HH_TARGET> state(key);
    HighwayHashT(&state, in, 8, &result);

64, 128 or 256 bit HighwayHash for the CPU on which we're currently running:

    #include "highwayhash/highwayhash_target.h"
    #include "highwayhash/instruction_sets.h"
    using namespace highwayhash;
    HH_ALIGNAS(32) const HHKey key = {1, 2, 3, 4};
    char in[8] = {1};
    HHResult64 result;  // or HHResult128 or HHResult256
    InstructionSets::Run<HighwayHash>(key, in, 8, &result);

C-callable 64-bit HighwayHash for the CPU on which we're currently running:

#include "highwayhash/c_bindings.h"
const uint64_t key[4] = {1, 2, 3, 4};
char in[8] = {1};
return HighwayHash64(key, in, 8);

Printing a 256-bit result in a hexadecimal format similar to sha1sum:

    // Requires <cstdio> (printf) and <cinttypes> (PRIx64).
    HHResult256 result;
    printf("%016" PRIx64 "%016" PRIx64 "%016" PRIx64 "%016" PRIx64 "\n",
           result[3], result[2], result[1], result[0]);

Introduction

Hash functions are widely used, so it is desirable to increase their speed and security. This package provides two 'strong' (well-distributed and unpredictable) hash functions: a faster version of SipHash, and an even faster algorithm we call HighwayHash.

SipHash is a fast but 'cryptographically strong' pseudo-random function by Aumasson and Bernstein [https://www.131002.net/siphash/siphash.pdf].

HighwayHash is a new way of mixing inputs which may inspire new cryptographically strong hashes. Large inputs are processed at a rate of 0.24 cycles per byte, and latency remains low even for small inputs. HighwayHash is faster than SipHash for all input sizes, with 5 times higher throughput at 1 KiB. We discuss design choices and provide statistical analysis and preliminary cryptanalysis in https://arxiv.org/abs/1612.06257.

Applications

Unlike prior strong hashes, these functions are fast enough to be recommended as safer replacements for weak hashes in many applications. The additional CPU cost appears affordable, based on profiling data indicating C++ hash functions account for less than 0.25% of CPU usage.

Hash-based selection of random subsets is useful for A/B experiments and similar applications. Such random generators are idempotent (repeatable and deterministic), which is helpful for parallel algorithms and testing. To avoid bias, it is important that the hash function be unpredictable and indistinguishable from a uniform random generator. We have verified the bit distribution and avalanche properties of SipHash and HighwayHash.
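
As an illustration of such hash-based selection, here is a minimal sketch using the SipHash interface from the Quick Start above; the key values and the 10% threshold are arbitrary placeholders, not a recommended configuration.

    #include <string>
    #include "highwayhash/sip_hash.h"

    using namespace highwayhash;

    // Deterministically assign ~10% of users to an experiment arm.
    // The same user_id always yields the same decision (idempotent selection).
    bool InExperiment(const std::string& user_id) {
      // Placeholder key; a real deployment would use a secret, per-experiment key.
      HH_ALIGNAS(16) const HH_U64 key[2] = {0x0706050403020100ull,
                                            0x0F0E0D0C0B0A0908ull};
      const HH_U64 hash = SipHash(key, user_id.data(), user_id.size());
      return (hash % 100) < 10;  // ~10% of the hash space
    }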

64-bit hashes are also useful for authenticating short-lived messages such as network/RPC packets. This requires that the hash function withstand differential, length extension and other attacks. We have published a formal security analysis for HighwayHash. New cryptanalysis tools may still need to be developed for further analysis.

Strong hashes are also important parts of methods for protecting hash tables against unacceptable worst-case behavior and denial of service attacks (see "hash flooding" below).

128 and 256-bit hashes can be useful for verifying data integrity (checksums).

SipHash

Our SipHash implementation is a fast and portable drop-in replacement for the reference C code. Outputs are identical for the given test cases (messages between 0 and 63 bytes).

Interestingly, it is about twice as fast as a SIMD implementation using SSE4.1 (https://goo.gl/80GBSD). This is presumably due to the lack of SIMD bit rotate instructions prior to AVX-512.

SipHash13 is a faster but weaker variant with one mixing round per update and three during finalization.

We also provide a data-parallel 'tree hash' variant that enables efficient SIMD while retaining safety guarantees. This is about twice as fast as SipHash, but does not return the same results.

HighwayHash

We have devised a new way of mixing inputs with SIMD multiply and permute instructions. The multiplications are 32x32 -> 64 bits and therefore infeasible to reverse. Permuting equalizes the distribution of the resulting bytes.

The internal state is quite large (1024 bits) but fits within SIMD registers. Due to limitations of the AVX2 instruction set, the registers are partitioned into two 512-bit halves that remain independent until the reduce phase. The algorithm outputs 64 bit digests or up to 256 bits at no extra cost.

In addition to high throughput, the algorithm is designed for low finalization cost. The result is more than twice as fast as SipTreeHash.

We also provide an SSE4.1 version (80% as fast for large inputs and 95% as fast for short inputs), an implementation for VSX on POWER and a portable version (10% as fast). A third-party ARM implementation is referenced below.

Statistical analyses and preliminary cryptanalysis are given in https://arxiv.org/abs/1612.06257.

Versioning and stability

Now that 21 months have elapsed since their initial release, we have declared all (64/128/256 bit) variants of HighwayHash frozen, i.e. unchanging forever.

SipHash and HighwayHash are 'fingerprint functions' whose input -> hash mapping will not change. This is important for applications that write hashes to persistent storage.

Speed measurements

To measure the CPU cost of a hash function, we can either create an artificial 'microbenchmark' (easier to control, but probably not representative of the actual runtime), or insert instrumentation directly into an application (risks influencing the results through observer overhead). We provide novel variants of both approaches that mitigate their respective disadvantages.

profiler.h uses software write-combining to stream program traces to memory with minimal overhead. These can be analyzed offline, or when memory is full, to learn how much time was spent in each (possibly nested) zone.

nanobenchmark.h enables cycle-accurate measurements of very short functions. It uses CPU fences and robust statistics to minimize variability, and also avoids unrealistic branch prediction effects.

We compile the 64-bit C++ implementations with a patched GCC 4.9 and run on a single idle core of a Xeon E5-2690 v3 clocked at 2.6 GHz. CPU cost is measured as cycles per byte for various input sizes:

Algorithm            8     31     32     63     64   1024
HighwayHashAVX2   7.34   1.81   1.71   1.04   0.95   0.24
HighwayHashSSE41  8.00   2.11   1.75   1.13   0.96   0.30
SipTreeHash      16.51   4.57   4.09   2.22   2.29   0.57
SipTreeHash13    12.33   3.47   3.06   1.68   1.63   0.33
SipHash           8.13   2.58   2.73   1.87   1.93   1.26
SipHash13         6.96   2.09   2.12   1.32   1.33   0.68

SipTreeHash is slower than SipHash for small inputs because it processes blocks of 32 bytes. AVX2 and SSE4.1 HighwayHash are faster than SipHash for all input sizes due to their highly optimized handling of partial vectors.

Note that previous measurements included the initialization of their input, which dramatically increased timings especially for small inputs.

CPU requirements

SipTreeHash(13) requires an AVX2-capable CPU (e.g. Haswell). HighwayHash includes a dispatcher that chooses the implementation (AVX2, SSE4.1, VSX or portable) at runtime, as well as a directly callable function template that can only run on the CPU for which it was built. SipHash(13) and ScalarSipTreeHash(13) have no particular CPU requirements.

AVX2 vs SSE4

When both AVX2 and SSE4 are available, the decision whether to use AVX2 is non-obvious. AVX2 vectors are twice as wide, but require a higher power license (integer multiplications count as 'heavy' instructions) and can thus reduce the clock frequency of the core or entire socket(!) on Haswell systems. This partially explains the observed 1.25x (not 2x) speedup over SSE4. Moreover, it is inadvisable to only sporadically use AVX2 instructions because there is also a ~56K cycle warmup period during which AVX2 operations are slower, and Haswell can even stall during this period. Thus, we recommend avoiding AVX2 for infrequent hashing if the rest of the application is also not using AVX2. For any input larger than 1 MiB, it is probably worthwhile to enable AVX2.

SIMD implementations

Our x86 implementations use custom vector classes with overloaded operators (e.g. const V4x64U a = b + c) for type-safety and improved readability vs. compiler intrinsics (e.g. const __m256i a = _mm256_add_epi64(b, c)). The VSX implementation uses built-in vector types alongside Altivec intrinsics. A high-performance third-party ARM implementation is mentioned below.

Dispatch

Our instruction_sets dispatcher avoids running newer instructions on older CPUs that do not support them. However, intrinsics, and therefore also any vector classes that use them, require (on GCC < 4.9 or Clang < 3.9) a compiler flag that also allows the compiler to generate code for that CPU. This means the intrinsics must be placed in separate translation units that are compiled with the required flags. It is important that these source files and their headers not define any inline functions, because that might break the one definition rule and cause crashes.

To minimize dispatch overhead when hashes are computed often (e.g. in a loop), we can inline the hash function into its caller using templates. The dispatch overhead will only be paid once (e.g. before the loop). The template mechanism also avoids duplicating code in each CPU-specific implementation.
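
A minimal sketch of hashing many messages without per-call dispatch, using only the compile-time-target API shown in the Quick Start (HHStateT<HH_TARGET>, HighwayHashT); the runtime-dispatch variant would instead wrap such a loop in a functor handed once to InstructionSets::Run, as described above.

    #include <string>
    #include <vector>
    #include "highwayhash/highwayhash.h"

    using namespace highwayhash;

    std::vector<HHResult64> HashAll(const std::vector<std::string>& messages) {
      HH_ALIGNAS(32) const HHKey key = {1, 2, 3, 4};
      std::vector<HHResult64> results(messages.size());
      for (size_t i = 0; i < messages.size(); ++i) {
        // Re-initializing the state per message is cheap; the target is fixed
        // at compile time via HH_TARGET, so there is no per-call dispatch.
        HHStateT<HH_TARGET> state(key);
        HighwayHashT(&state, messages[i].data(), messages[i].size(), &results[i]);
      }
      return results;
    }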

Defending against hash flooding

To mitigate hash flooding attacks, we need to take both the hash function and the data structure into account.

We wish to defend (web) services that utilize hash sets/maps against denial-of-service attacks. Such data structures assign attacker-controlled input messages m to a hash table bin b by computing the hash H(s, m) using a hash function H seeded by s, and mapping it to a bin with some narrowing function b = R(h), discussed below.

Attackers may attempt to trigger 'flooding' (excessive work in insertions or lookups) by finding multiple m that map to the same bin. If the attacker has local access, they can do far worse, so we assume the attacker can only issue remote requests. If the attacker is able to send large numbers of requests, they can already deny service, so we need only ensure the attacker's cost is sufficiently large compared to the service's provisioning.

If the hash function is 'weak', attackers can easily generate 'hash collisions' (inputs mapping to the same hash values) that are independent of the seed. In other words, certain input messages will cause collisions regardless of the seed value. The author of SipHash has published C++ programs to generate such 'universal (key-independent) multicollisions' for CityHash and Murmur. Similar 'differential' attacks are likely possible for any hash function consisting only of reversible operations (e.g. addition/multiplication/rotation) with a constant operand. n requests with such inputs cause n^2 work for an unprotected hash table, which is unacceptable.

By contrast, 'strong' hashes such as SipHash or HighwayHash require infeasible attacker effort to find a hash collision (an expected 2^32 guesses of m per the birthday paradox) or recover the seed (2^63 requests). These security claims assume the seed is secret. It is reasonable to suppose s is initially unknown to attackers, e.g. generated on startup or even per-connection. A timing attack by Wool/Bar-Yosef recovers 13-bit seeds by testing all 8K possibilities using millions of requests, which takes several days (even assuming unrealistic 150 us round-trip times). It appears infeasible to recover 64-bit seeds in this way.
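
A short worked version of those numbers, assuming a uniformly distributed 64-bit hash and a uniformly random 64-bit secret seed (a sketch, not part of the formal analysis):

    \sqrt{2^{64}} = 2^{32} \quad \text{(expected messages until a full-hash collision)}, \qquad
    2^{64}/2 = 2^{63} \quad \text{(expected guesses to recover the seed)}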

However, attackers are only looking for multiple m mapping to the same bin rather than identical hash values. We assume they know or are able to discover the hash table size p. It is common to choose p = 2^i to enable an efficient R(h) := h & (p - 1), which simply retains the lower hash bits. It may be easier for attackers to compute partial collisions where only the lower i bits match. This can be prevented by choosing a prime p so that R(h) := h % p incorporates all hash bits. The costly modulo operation can be avoided by multiplying with the inverse (https://goo.gl/l7ASm8). An interesting alternative suggested by Kyoung Jae Seo chooses a random subset of the h bits. Such an R function can be computed in just 3 cycles using PEXT from the BMI2 instruction set. This is expected to defend against SAT-solver attacks on the hash bits at a slightly lower cost than the multiplicative inverse method, and still allows power-of-two table sizes.
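
A minimal sketch of that PEXT-based narrowing function R(h), assuming a table of 2^i bins and a secret random mask with exactly i bits set; BMI2 is required, and the helper name is ours, not part of this library.

    #include <cstdint>
    #include <immintrin.h>  // _pext_u64 (requires BMI2)

    // R(h): gather a secret random subset of the hash bits into the low bits.
    // 'mask' is generated once at startup with exactly i bits set, where the
    // table has 2^i bins; the result is therefore in [0, 2^i).
    inline uint64_t BinFromHash(uint64_t hash, uint64_t mask) {
      return _pext_u64(hash, mask);
    }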

Summary thus far: given a strong hash function and secret seed, it appears infeasible for attackers to generate hash collisions because s and/or R are unknown. However, they can still observe the timings of data structure operations for various m. With typical table sizes of 2^10 to 2^17 entries, attackers can detect some 'bin collisions' (inputs mapping to the same bin). Although this will be costly for the attacker, they can then send many instances of such inputs, so we need to limit the resulting work for our data structure.

Hash tables with separate chaining typically store bin entries in a linked list, so worst-case inputs lead to unacceptable linear-time lookup cost. We instead seek optimal asymptotic worst-case complexity for each operation (insertion, deletion and lookups), which is a constant factor times the logarithm of the data structure size. This naturally leads to a tree-like data structure for each bin. The Java8 HashMap only replaces its linked list with trees when needed. This leads to additional cost and complexity for deciding whether a bin is a list or tree.

Our first proposal (suggested by Github user funny-falcon) avoids this overhead by always storing one tree per bin. It may also be worthwhile to store the first entry directly in the bin, which avoids allocating any tree nodes in the common case where bins are sparsely populated. What kind of tree should be used?

Given that SipHash and HighwayHash provide high-quality randomness, a simple non-balancing binary search tree could perform reasonably well, depending on the expected attack surface. Wikipedia says:

After a long intermixed sequence of random insertion and deletion, the expected height of the tree approaches square root of the number of keys, √n, which grows much faster than log n.

While O(√n) is much larger than O(log n), it is still much smaller than O(n). It also complicates the timing attack, since the operation time on a colliding bin grows more slowly.

If stronger safety guarantees are needed, then a balanced tree should be used. Scapegoat and splay trees only offer amortized complexity guarantees, whereas treaps require an entropy source and have higher constant factors in practice. Self-balancing structures such as 2-3 or red-black trees require additional bookkeeping information. We can hope to reduce rebalancing cost by realizing that the output bits of strong H functions are uniformly distributed. When using them as keys instead of the original message m, recent relaxed balancing schemes such as left-leaning red-black or weak AVL trees may require fewer tree rotations to maintain their invariants. Note that H already determines the bin, so we should only use the remaining bits. 64-bit hashes are likely sufficient for this purpose, and HighwayHash generates up to 256 bits. It seems unlikely that attackers can craft inputs resulting in worst cases for both the bin index and tree key without being able to generate hash collisions, which would contradict the security claims of strong hashes. Even if they succeed, the relaxed tree balancing still guarantees an upper bound on height and therefore the worst-case operation cost. For the AVL variant, the constant factors are slightly lower than for red-black trees.
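
A small sketch of "H already determines the bin, so only use the remaining bits" for a table with 2^i bins; the struct and function names are ours, for illustration only.

    #include <cstdint>

    struct BinAndTreeKey {
      uint64_t bin;       // low i bits select the bin
      uint64_t tree_key;  // remaining bits order entries within the bin's tree
    };

    inline BinAndTreeKey SplitHash(uint64_t hash, unsigned i) {
      return {hash & ((uint64_t{1} << i) - 1), hash >> i};
    }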

The second proposed approach uses augmented/de-amortized cuckoo hash tables (https://goo.gl/PFwwkx). These guarantee worst-case log n bounds for all operations, but only if the hash function is 'indistinguishable from random' (uniformly distributed regardless of the input distribution), which is claimed for SipHash and HighwayHash but certainly not for weak hashes.

Both alternatives retain good average case performance and defend against flooding by limiting the amount of extra work an attacker can cause. The first approach guarantees an upper bound of log n additional work even if the hash function is compromised.

In summary, a strong hash function is not, by itself, sufficient to protect a chained hash table from flooding attacks. However, strong hash functions are important parts of two schemes for preventing denial of service. Using weak hash functions can slightly accelerate the best-case and average-case performance of a service, but at the risk of greatly reduced attack costs and worst-case performance.

Third-party implementations / bindings

Thanks to Damian Gryski and Frank Wessels for making us aware of these third-party implementations or bindings. Please feel free to get in touch or raise an issue and we'll add yours as well.

By                                    Language                      URL
Damian Gryski                         Go and x64 assembly           https://github.com/dgryski/go-highway/
Simon Abdullah                        NPM package                   https://www.npmjs.com/package/highwayhash-nodejs
Lovell Fuller                         node.js bindings              https://github.com/lovell/highwayhash
Andreas Sonnleitner                   WebAssembly and NPM package   https://www.npmjs.com/package/highwayhash-wasm
Nick Babcock                          Rust port                     https://github.com/nickbabcock/highway-rs
Caleb Zulawski                        Rust portable SIMD            https://github.com/calebzulawski/autobahn-hash
Vinzent Steinberg                     Rust bindings                 https://github.com/vks/highwayhash-rs
Frank Wessels & Andreas Auernhammer   Go and ARM assembly           https://github.com/minio/highwayhash
Phil Demetriou                        Python 3 bindings             https://github.com/kpdemetriou/highwayhash-cffi
Jonathan Beard                        C++20 constexpr               https://gist.github.com/jonathan-beard/632017faa1d9d1936eb5948ac9186657
James Cook                            Ruby bindings                 https://github.com/jamescook/highwayhash
John Platts                           C++17 Google Highway port     https://github.com/johnplatts/simdhwyhash

Modules

Hashes

  • c_bindings.h declares C-callable versions of SipHash/HighwayHash.
  • sip_hash.cc is the compatible implementation of SipHash, and also provides the final reduction for sip_tree_hash.
  • sip_tree_hash.cc is the faster but incompatible SIMD j-lanes tree hash.
  • scalar_sip_tree_hash.cc is a non-SIMD version.
  • state_helpers.h simplifies the implementation of the SipHash variants.
  • highwayhash.h is our new, fast hash function.
  • hh_{avx2,sse41,vsx,portable}.h are its various implementations.
  • highwayhash_target.h chooses the best available implementation at runtime.

Infrastructure

  • arch_specific.h offers byte swapping and CPUID detection.
  • compiler_specific.h defines some compiler-dependent language extensions.
  • data_parallel.h provides a C++11 ThreadPool and PerThread (similar to OpenMP).
  • instruction_sets.h and targets.h enable efficient CPU-specific dispatching.
  • nanobenchmark.h measures elapsed times with < 1 cycle variability.
  • os_specific.h sets thread affinity and priority for benchmarking.
  • profiler.h is a low-overhead, deterministic hierarchical profiler.
  • tsc_timer.h obtains high-resolution timestamps without CPU reordering.
  • vector256.h and vector128.h contain wrapper classes for AVX2 and SSE4.1.

By Jan Wassenberg [email protected] and Jyrki Alakuijala [email protected], updated 2023-03-29

This is not an official Google product.

highwayhash's People

Contributors

0xxon, awelzel, calebzulawski, cdluminate, dgryski, dryman, easyaspi314, funny-falcon, ismail, jan-wassenberg, jasperla, johnplatts, katrinleinweber, kgotlinux, kluever, leres, lorenzhs, lvandeve, maskray, nkurz, noraj, rurban, vks, wanghan02, zopolis4

highwayhash's Issues

Build fails

My build fails when compiling through Bazel (while building TensorFlow).

ERROR: /hmt/sirius1/skv0/u/4/r/user/.cache/bazel/_bazel_user/d217f35631206796f447d50c6f1d6243/external/highwayhash/BUILD:125:1: C++ compilation of rule '@highwayhash//:sip_hash' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command 
  (cd /hmt/sirius1/skv0/u/4/r/user/.cache/bazel/_bazel_user/d217f35631206796f447d50c6f1d6243/execroot/tensorflow && \
  exec env - \
    LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:/usr/local/gurobi/lib:/usr/local/cuda-7.5/extras/CUPTI/lib64 \
    PATH=/vega/astro/users/user/applications/pythonenv/bin:/opt/rh/devtoolset-1.1/root/usr/bin:/usr/local/cuda-7.5/bin:/vega/astro/users/user/applications/bazel/output:/vega/astro/users/user/applications/swig/bin:/vega/astro/users/user/applications/jdk1.8.0_102/bin:/usr/local/bin:/usr/lib64/qt-3.3/bin:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/gurobi/bin:/opt/openlava-3.3/bin \
    TMPDIR=/tmp \
  external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 '-std=c++11' -MD -MF bazel-out/host/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.d '-frandom-seed=bazel-out/host/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.o' -iquote external/highwayhash -iquote bazel-out/host/genfiles/external/highwayhash -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/highwayhash -isystem bazel-out/host/genfiles/external/highwayhash -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/highwayhash/highwayhash/sip_hash.cc -o bazel-out/host/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
In file included from external/highwayhash/highwayhash/sip_hash.h:24:0,
                 from external/highwayhash/highwayhash/sip_hash.cc:15:
external/highwayhash/highwayhash/state_helpers.h: In function 'void highwayhash::PaddedUpdate(highwayhash::uint64, const char*, highwayhash::uint64, State*)':
external/highwayhash/highwayhash/state_helpers.h:31:13: error: there are no arguments to 'alignas' that depend on a template parameter, so a declaration of 'alignas' must be available [-fpermissive]
external/highwayhash/highwayhash/state_helpers.h:31:13: note: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
external/highwayhash/highwayhash/state_helpers.h:31:15: error: expected ';' before 'char'
external/highwayhash/highwayhash/state_helpers.h:42:10: error: 'final_packet' was not declared in this scope
Target //tensorflow/cc:tutorials_example_trainer failed to build

Adding the -fpermissive does allow it to continue with warnings, at which point I get this error:

ERROR: /hmt/sirius1/skv0/u/4/r/user/.cache/bazel/_bazel_user/d217f35631206796f447d50c6f1d6243/external/highwayhash/BUILD:125:1: C++ compilation of rule '@highwayhash//:sip_hash' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command 
  (cd /hmt/sirius1/skv0/u/4/r/user/.cache/bazel/_bazel_user/d217f35631206796f447d50c6f1d6243/execroot/tensorflow && \
  exec env - \
    LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:/usr/local/gurobi/lib:/usr/local/cuda-7.5/extras/CUPTI/lib64 \
    PATH=/vega/astro/users/user/applications/pythonenv/bin:/opt/rh/devtoolset-1.1/root/usr/bin:/usr/local/cuda-7.5/bin:/vega/astro/users/user/applications/bazel/output:/vega/astro/users/user/applications/swig/bin:/vega/astro/users/user/applications/jdk1.8.0_102/bin:/usr/local/bin:/usr/lib64/qt-3.3/bin:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/gurobi/bin:/opt/openlava-3.3/bin \
    TMPDIR=/tmp \
  external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections '-std=c++11' -MD -MF bazel-out/local_linux-opt/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.pic.d '-frandom-seed=bazel-out/local_linux-opt/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.pic.o' -fPIC -iquote external/highwayhash -iquote bazel-out/local_linux-opt/genfiles/external/highwayhash -iquote external/bazel_tools -iquote bazel-out/local_linux-opt/genfiles/external/bazel_tools -isystem external/highwayhash -isystem bazel-out/local_linux-opt/genfiles/external/highwayhash -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/highwayhash/highwayhash/sip_hash.cc -o bazel-out/local_linux-opt/bin/external/highwayhash/_objs/sip_hash/external/highwayhash/highwayhash/sip_hash.pic.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
In file included from external/highwayhash/highwayhash/sip_hash.h:24:0,
                 from external/highwayhash/highwayhash/sip_hash.cc:15:
external/highwayhash/highwayhash/state_helpers.h: In function 'void highwayhash::PaddedUpdate(highwayhash::uint64, const char*, highwayhash::uint64, State*)':
external/highwayhash/highwayhash/state_helpers.h:31:15: error: expected ';' before 'char'
external/highwayhash/highwayhash/state_helpers.h:42:10: error: 'final_packet' was not declared in this scope
Target //tensorflow/cc:tutorials_example_trainer failed to build

Makefile ignores CXXFLAGS

Running commands like CXXFLAGS=-fPIC make libhighwayhash.a does not seem to work; it looks like the environment variable is ignored. (I think it did work before the recent changes.)

(I relied on this functionality for the Rust bindings (because Rust requires -fPIC even for static linking). A possible workaround is to patch the makefile, but it would be cleaner to just pass the additional flag via environment variables.)

Different hashes for identical input....

I'm using HighwayHash to hash tons of strings. While reviewing some results I came across something odd: HH appeared to give a different hash for the same input. Same key, same state, other inputs work normally.

The string that triggered this: "GCC: (GNU) 4.8.3" (quotes not included). My test case is in C, with the previous string zero-padded to 32 bytes. If I hash it 16 times I get 16 different hashes. I've tried different keys and different architectures; this is also the only string that exhibits this behavior so far. Strangely enough, it doesn't seem to matter whether the above string is hashed directly or zero-padded to 32 bytes.

Can anyone repeat this? Am I or my machine crazy?

Extend C-Bindings?

Hi,

Are there any plans to extend the C-binding support, e.g. HighwayHash256 with cat/append capabilities?

Thanks!

Modular reduction for finalization

HighwayHash has 1024 state bits which are reduced to 256 bits via 3 additions. Most callers simply retain the lower 64 bits and discard the others. For additional security, it would be useful to include the additional state bits in the output.

It has been suggested to use AES to scramble the state during finalization. However, the nature of AES is comparable to what HighwayHash is already doing, so another commonly used (by cryptographic hashes) approach might be better: modular reduction. To avoid costly divisions, we could use Mersenne primes (notably 2^127 - 1) for faster reduction. The 2nd level of VHASH does something similar. This might lead to more security and perhaps enable a cryptographic hash function with say 256 bits (using reductions instead of adding).
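
To make the modular-reduction idea concrete, here is a hedged sketch using the smaller Mersenne prime 2^61 - 1 so it fits in a uint64_t; the issue proposes 2^127 - 1, but the folding principle is the same. This is an illustration, not the repository's finalization code.

    #include <cstdint>

    constexpr uint64_t kMersenne61 = (uint64_t{1} << 61) - 1;

    // x mod (2^61 - 1) without division: fold the bits above bit 61 back in,
    // then subtract the modulus once if needed.
    inline uint64_t ReduceMod61(uint64_t x) {
      x = (x & kMersenne61) + (x >> 61);  // now x < 2^61 + 8
      if (x >= kMersenne61) x -= kMersenne61;
      return x;
    }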

Can we let runtime dispatcher support HighwayHashCatT's Append and Finalize operation?

The runtime dispatcher InstructionSets::Run can only do the hash calculation in one shot. If we want to append another block of data later and call Finalize at the end (which is what HighwayHashCatT is for), it doesn't work, because the hash state must be stored between the Append and Finalize operations, and we cannot store that state directly in the application (outside the target TU). This means the portable version is the only choice for developers who want to use Append and Finalize (normally we cannot guarantee the user's CPU supports SSE4.1 or AVX2). The portable version is slow and makes highwayhash less attractive.

Could we provide a solution for this? For example, expose the hash state as an opaque block of memory, and do all the calculations in the target TU with a reinterpret_cast to HHStatePortable/SSE41/AVX2.
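
One possible shape for such an interface, sketched with hypothetical names (this is not the repository's existing C API; the state size and alignment are assumptions that the target TU would have to validate):

    /* Hypothetical C-callable interface sketch; names and sizes are placeholders. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
      /* Opaque storage, assumed large and aligned enough for any HHStateT target. */
      uint64_t opaque[64];
    } HighwayHashCatState;

    void HighwayHashCatStart(const uint64_t key[4], HighwayHashCatState* state);
    void HighwayHashCatAppend(const char* bytes, size_t num,
                              HighwayHashCatState* state);
    uint64_t HighwayHashCatFinish64(const HighwayHashCatState* state);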

Files are installed into the root directory

@lvandeve

This commit on Sep 12, 2017 broke the Makefile:
be5491d

Now it doesn't install libraries into $(PREFIX). It installs them into root.

Now installed files look like this:

/highwayhash/arch_specific.h
/highwayhash/c_bindings.h
/highwayhash/compiler_specific.h
/highwayhash/data_parallel.h
/highwayhash/endianess.h
/highwayhash/hh_avx2.h
/highwayhash/hh_buffer.h
/highwayhash/hh_portable.h
/highwayhash/hh_sse41.h
/highwayhash/hh_types.h
/highwayhash/hh_vsx.h
/highwayhash/highwayhash.h
/highwayhash/highwayhash_target.h
/highwayhash/highwayhash_test_target.h
/highwayhash/iaca.h
/highwayhash/instruction_sets.h
/highwayhash/load3.h
/highwayhash/nanobenchmark.h
/highwayhash/os_specific.h
/highwayhash/profiler.h
/highwayhash/robust_statistics.h
/highwayhash/scalar.h
/highwayhash/scalar_sip_tree_hash.h
/highwayhash/sip_hash.h
/highwayhash/sip_tree_hash.h
/highwayhash/state_helpers.h
/highwayhash/tsc_timer.h
/highwayhash/vector128.h
/highwayhash/vector256.h
/highwayhash/vector_test_target.h
/libhighwayhash.a
/libhighwayhash.so
/libhighwayhash.so.0

install flags are wrong

  1. headers should be installed read-only: 0444
  2. libhighwayhash.a should also be read-only: 0444
  3. libhighwayhash.so should be 0555

HighwayHash for the CPU on which we're currently running

Hi,

I wanted to try the InstructionSets::Run<HighwayHash>(key, in, 8, &result); call that you have in your README. However, I cannot make it work, as it requires the definition of operator():
undefined reference to highwayhash::HighwayHash<1u>::operator()(unsigned long const (&) [4], char const*, unsigned long, unsigned long (*) [2]) const, which is in the highwayhash_target.cc file. I do not see any highwayhash_target.o file produced in obj. The highwayhash/highwayhash_test.cc does not use InstructionSets::Run but InstructionSets::RunAll (PRINT_RESULTS is 0). If I set PRINT_RESULTS to 1 it fails to build with the same error. Is there a way to use the InstructionSets::Run interface with your current repository?
Many thanks in advance!

Provide Python bindings

It would be awesome if there was a Python binding for this. You do not happen to have one stashed somewhere by chance?
Thanks!

Publish the test suite/security metrics

We'd like to publish the test suite, forked from smhasher and much faster thanks to data-parallel optimizations. This requires a portable thread pool and removal of other dependencies.

A simple question: Why not in C also?

Hi,
I wish to run your scalar_highway_tree_hash using 64bit Intel Optimizer, two things:

  • What are the drawbacks of having the C counterpart of your CC master?
  • If you are not willing to write it in C, could you make a benchmark tool featuring a simple main loop and hashing of fixed-length data - I want to test it by feeding it 1 trillion Knight-Tours and comparing the dispersion quality of:
  • CRC32C1_8slice: 0x82F63B78 polynomial used
  • CRC32C2_8slice: 0x8F6E37A0 polynomial used
  • CRC32K1_8slice: 0xBA0DC66B polynomial used
  • CRC32K2_8slice: 0x90022004 polynomial used
  • FNV1A_YoshimitsuTRIAD

The console benchmarker, in C, is there:
http://www.linuxquestions.org/questions/programming-9/c-code-to-test-cpu-performances-benchmarking-4175581626/#post5570720

Since I am very fond of English-language x-grams (2..114 byte long phrases), 2-3 billion in use currently, I need a hasher whose collisions at each slot do not exceed some number, e.g. 10, as in the dump below:
FNV1A_YoshimitsuTRIAD: KT_derivatives = 00,000,067,108,865; 000,000,005 x MAXcollisionsAtSomeSlots = 000,011; HASHfreeSLOTS = 0,025,228,150; HashUtilization = 062%

CRC32K1_8slice : KT_derivatives = 00,000,067,108,865; 000,000,001 x MAXcollisionsAtSomeSlots = 000,011; HASHfreeSLOTS = 0,024,687,907; HashUtilization = 063%

CRC32K2_8slice : KT_derivatives = 00,000,067,108,865; 000,000,005 x MAXcollisionsAtSomeSlots = 000,010; HASHfreeSLOTS = 0,024,680,327; HashUtilization = 063%

CRC32C1_8slice : KT_derivatives = 00,000,067,108,865; 000,000,002 x MAXcollisionsAtSomeSlots = 000,011; HASHfreeSLOTS = 0,024,689,921; HashUtilization = 063%

CRC32C2_8slice : KT_derivatives = 00,000,067,108,865; 000,000,006 x MAXcollisionsAtSomeSlots = 000,010; HASHfreeSLOTS = 0,024,688,174; HashUtilization = 063%

Warnings and errors from clang: warning: 'constexpr' non-static member, etc

In file included from highwayhash/vector_test_portable.cc:19:
./highwayhash/vector_test_target.cc:73:21: warning: 'constexpr' non-static member function will not be implicitly 'const' in C++14; add 'const' to avoid a change in
      behavior [-Wconstexpr-not-const]
  constexpr uint8_t operator()() { return 0xFFu; }
                    ^
                                 const                                                                                                                                      
./highwayhash/vector_test_target.cc:77:22: warning: 'constexpr' non-static member function will not be implicitly 'const' in C++14; add 'const' to avoid a change in
      behavior [-Wconstexpr-not-const]
  constexpr uint16_t operator()() { return 0xFFFFu; }
                     ^
                                  const                                                                                                                                     
./highwayhash/vector_test_target.cc:81:22: warning: 'constexpr' non-static member function will not be implicitly 'const' in C++14; add 'const' to avoid a change in
      behavior [-Wconstexpr-not-const]
  constexpr uint32_t operator()() { return 0xFFFFFFFFu; }
                     ^
                                  const                                                                                                                                     
./highwayhash/vector_test_target.cc:85:22: warning: 'constexpr' non-static member function will not be implicitly 'const' in C++14; add 'const' to avoid a change in
      behavior [-Wconstexpr-not-const]
  constexpr uint64_t operator()() { return 0xFFFFFFFFFFFFFFFFull; }
                     ^
                                  const                                                                                                                                    
In file included from highwayhash/benchmark.cc:31:
./highwayhash/robust_statistics.h:127:30: error: call to 'abs' is ambiguous
    abs_deviations.push_back(std::abs(sample - median));
                             ^~~~~~~~
highwayhash/benchmark.cc:178:31: note: in instantiation of function template specialization 'highwayhash::MedianAbsoluteDeviation<float>' requested here
    const float variability = MedianAbsoluteDeviation(durations, median);
                              ^
/usr/include/stdlib.h:83:6: note: candidate function
int      abs(int) __pure2;
         ^
/usr/include/c++/v1/stdlib.h:115:44: note: candidate function
inline _LIBCPP_INLINE_VISIBILITY long      abs(     long __x) _NOEXCEPT {return  labs(__x);}
                                           ^
/usr/include/c++/v1/stdlib.h:117:44: note: candidate function
inline _LIBCPP_INLINE_VISIBILITY long long abs(long long __x) _NOEXCEPT {return llabs(__x);}

In file included from highwayhash/benchmark.cc:31:
./highwayhash/robust_statistics.h:127:30: error: call to 'abs' is ambiguous
    abs_deviations.push_back(std::abs(sample - median));
                             ^~~~~~~~
highwayhash/benchmark.cc:178:31: note: in instantiation of function template specialization 'highwayhash::MedianAbsoluteDeviation<float>' requested here
    const float variability = MedianAbsoluteDeviation(durations, median);
                              ^
/usr/include/stdlib.h:83:6: note: candidate function
int      abs(int) __pure2;
         ^
/usr/include/c++/v1/stdlib.h:115:44: note: candidate function
inline _LIBCPP_INLINE_VISIBILITY long      abs(     long __x) _NOEXCEPT {return  labs(__x);}
                                           ^
/usr/include/c++/v1/stdlib.h:117:44: note: candidate function
inline _LIBCPP_INLINE_VISIBILITY long long abs(long long __x) _NOEXCEPT {return llabs(__x);}
In file included from highwayhash/highwayhash_test.cc:30:
./highwayhash/data_parallel.h:246:7: warning: private field 'padding' is not used [-Wunused-private-field]
  int padding[15];
highwayhash/benchmark.cc:186:6: warning: unused function 'MeasureAndAdd' [-Wunused-function]
void MeasureAndAdd(DurationsForInputs* input_map, const char* caption,
./test_exports.sh lib/libhighwayhash.a
_ZN11highwayhash13SipHashStateTILi2ELi4EE8CompressILm4EEEvv
The above-mentioned symbols are duplicates

nanobenchmark: RaiseThreadPriority seems to have adverse effects?

I get better results in the memcpy example when removing the call to RaiseThreadPriority. I'm on a Core i7 4790T running a fully up to date copy of Debian unstable. Results of the example as-is:

Running on CPU #1, APIC ID  2
TimerResolution32 126
NumReplicas 3720
 3: median= 15.8 cycles; median abs. deviation= 0.0 cycles
 4: median=  8.5 cycles; median abs. deviation= 0.0 cycles
 7: median=  8.4 cycles; median abs. deviation= 0.1 cycles
 8: median= 15.0 cycles; median abs. deviation= 0.0 cycles

With the call to RaiseThreadPriority removed:

Running on CPU #1, APIC ID  2
TimerResolution32 156
NumReplicas 4541
 3: median= 15.8 cycles; median abs. deviation= 0.0 cycles
 4: median=  7.1 cycles; median abs. deviation= 0.0 cycles
 7: median=  7.1 cycles; median abs. deviation= 0.1 cycles
 8: median= 15.0 cycles; median abs. deviation= 0.0 cycles

Note that the time for sizes 4 and 7 decreased by ~1.3 cycles, the other two are constant.

The results for each version vary a bit but the difference is always noticeable (with: 7.9-8.5 cycles, without: 6.6-7.3 cycles). Any idea what's going on there?

Highwayhash on PPC

Hi,

I am looking at the PPC implementation https://github.com/google/highwayhash/blob/master/highwayhash/hh_vsx.h

My question is whether this implementation can be optimized further by implementing it entirely in assembly.

There is a minio implementation of HighwayHash for Intel all in assembly https://github.com/minio/highwayhash/blob/master/highwayhashAVX2_amd64.s; what I wanted an opinion on is whether converting this file to PPC (a pure assembly implementation) would be faster than using the .c/.h VSX files in this repo? @jan-wassenberg

Cannot build with bazel

I'm getting this error when trying to compile highwayhash with bazel:

ERROR: highwayhash/BUILD:39:1: no such package 'base': BUILD file not found on package path and referenced by '//:vector_test'.

Maybe base is something internal at Google?

[linux/Makefile] "pthread_create" not found

g++  obj/highwayhash_test.o obj/arch_specific.o obj/instruction_sets.o obj/nanobenchmark.o obj/os_specific.o obj/highwayhash_test_portable.o obj/highwayhash_test_avx2.o obj/highwayhash_test_sse41.o -o bin/highwayhash_test #-Wl,--as-needed -lpthread
obj/highwayhash_test.o: In function `void std::vector<std::thread, std::allocator<std::thread> >::_M_emplace_back_aux<void (&)(highwayhash::ThreadPool*), highwayhash::ThreadPool*>(void (&)(highwayhash::ThreadPool*), highwayhash::ThreadPool*&&)':
highwayhash_test.cc:(.text._ZNSt6vectorISt6threadSaIS0_EE19_M_emplace_back_auxIJRFvPN11highwayhash10ThreadPoolEES6_EEEvDpOT_[_ZNSt6vectorISt6threadSaIS0_EE19_M_emplace_back_auxIJRFvPN11highwayhash10ThreadPoolEES6_EEEvDpOT_]+0x9a): undefined reference to `pthread_create'
obj/highwayhash_test.o: In function `highwayhash::ThreadPool::ThreadPool(int)':
highwayhash_test.cc:(.text._ZN11highwayhash10ThreadPoolC2Ei[_ZN11highwayhash10ThreadPoolC5Ei]+0xfa): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
Makefile:45: recipe for target 'bin/highwayhash_test' failed
make: *** [bin/highwayhash_test] Error 1

Patch:

diff --git a/Makefile b/Makefile
index 8f8fd82..f29caa1 100644
--- a/Makefile
+++ b/Makefile
@@ -43,7 +43,7 @@ obj/%.o: highwayhash/%.cc

 bin/%: obj/%.o
        @mkdir -p -- $(dir $@)
-       $(CXX) $(LDFLAGS) $^ -o $@
+       $(CXX) $(LDFLAGS) $^ -o $@ -Wl,--as-needed -lpthread

 .DELETE_ON_ERROR:
 deps.mk: $(wildcard highwayhash/*.cc) $(wildcard highwayhash/*.h) Makefile

OS: Debian Sid

building the static library with -fPIC fails

> CXXFLAGS=-fPIC make lib/libhighwayhash.a
[...]
./test_exports.sh lib/libhighwayhash.a
DW.ref.__gxx_personality_v0
The above-mentioned symbols are duplicates
FAIL

The newly added check for duplicated symbols fails.

Issues with Core 2 Duo

First of all, Penryn lacks the rdtscp instruction. It can use rdtsc instead. Otherwise, it hits an illegal-instruction error in the benchmark. Despite this, it seems the benchmark is nonfunctional anyway. :(

In addition, HighwayHash64 seems excessively slow on my (admittedly old) chip compared to other hashes.

xxhsum benchmark (100 KB)
gcc 8.2.0 gcc-8 -O2 -march=native
MacBook (13-inch, Mid 2009)/Macbook5,2
2.13 GHz Intel Core 2 Duo (Penryn, SSE4.1, P7450)
macOS 10.13.6 with High Sierra Patcher
4 GB RAM

Hash                                   Aligned        Unaligned
XXH32                                  3912.6 MB/s    2985.9 MB/s
XXH64                                  4004.1 MB/s    2891.6 MB/s
XXH32a (two vector_size(16) lanes)     4970.8 MB/s    3144.7 MB/s
XXH64a (two vector_size(16) lanes)     4935.6 MB/s    3152.1 MB/s
FarmHash32                             5654.1 MB/s    3619.6 MB/s
FarmHash64                             6092.9 MB/s    4197.5 MB/s
HighwayHash64 (SSE4.1)                 2462.1 MB/s    1998.7 MB/s
HighwayHash64 (Portable)                290.4 MB/s     289.2 MB/s
HighwayHash64 (C)                       451.4 MB/s     435.6 MB/s
SpookyHash v2                          6349.3 MB/s    3720.1 MB/s

Note that the Core 2 Duo has a slow multiplier, which takes twice as many cycles as it does for newer Intels. It is the main slowdown for the xxHash family, as replacing multiplies with xors gets it to the upper 5700s (it is ineffective as a hash, though). It also doesn't seem to have fast 64x2 vectors. GCC appears to do operations with 2 32-bit lanes, which is another slowdown.

I mostly want to bring this to attention, because I definitely was disappointed after the effort to make it compile.

build fails on OS X

highwayhash/sip_hash_main.cc:169:23: error: implicit instantiation of undefined template 'std::__1::basic_string<char,
      std::__1::char_traits<char>, std::__1::allocator<char> >'
    const std::string caption;

When built with

Apple LLVM version 8.0.0 (clang-800.0.24.1)
Target: x86_64-apple-darwin16.0.0
Thread model: posix
InstalledDir: /Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Adding a #include <string> in sip_hash_main.cc seems to fix it.

Publish SMHasher binding and results

AFAIR, you have mentioned on the encode.ru forum that HighwayHash passed the SMHasher tests with better results than any other hashes. Can you please publish your results and SMHasher integration code?

I'm excited since it seems that even SHA1/MD5 don't provide results different from MurMur3/Spooky2/xxHash, i.e. all these hashes are indistinguishable from an ideal hash as long as you measure only with SMHasher.

Escape internal state=zero

In HighwayTreeHash, keys are not attacker-controlled, but a potential worst-case involves key = init0/1. As a result, either v0 or v1 can be zero. Suppose it is v1 and an attacker chooses all-zero input packets. Then any packets with (length mod 256 = 0) collide.

If instead v0 = 0 after init and the key is known, attackers can choose the first input such that v1 becomes and remains 0. Obviously this attack requires the secret key to be known, but we would like the hash to be viable in that scenario as well (to serve as a fingerprint).

We are considering several possible workarounds:

  • preventing v1=0 after init by checking the result of key ^ init1
  • mixing non-zero bits into v1 during every round
  • some other way of escaping v0 = v1 = 0

Confusing wording about stability

The README says:

SipHash and HighwayHash 1.0 are 'fingerprint functions' whose input -> hash mapping will not change. This is important for applications that write hashes to persistent storage.

HighwayHash has not yet reached 1.0 and may still change in the near future. We will announce when it is frozen.

It seems to imply that HighwayHash 1.0 exists and then says that it doesn't. Was this a typo, and it was supposed to say that a different hash function had reached 1.0?

On that note, are you waiting for further review by third parties to reach HWH 1.0? or something more specific?

[linux] duplicated symbol DW.ref.__gxx_personality_v0

./test_exports.sh lib/libhighwayhash.a
DW.ref.__gxx_personality_v0
The above-mentioned symbols are duplicates
FAIL
Makefile:78: recipe for target 'lib/libhighwayhash.a' failed
make: *** [lib/libhighwayhash.a] Error 1

I patched highwayhash with this change:

diff --git a/Makefile b/Makefile
index 8f8fd82..f29caa1 100644
--- a/Makefile
+++ b/Makefile
@@ -43,7 +43,7 @@ obj/%.o: highwayhash/%.cc

 bin/%: obj/%.o
        @mkdir -p -- $(dir $@)
-       $(CXX) $(LDFLAGS) $^ -o $@
+       $(CXX) $(LDFLAGS) $^ -o $@ -Wl,--as-needed -lpthread

 .DELETE_ON_ERROR:
 deps.mk: $(wildcard highwayhash/*.cc) $(wildcard highwayhash/*.h) Makefile

OS: Debian Sid
Compiler: GCC 6.3

Hash values mismatch on big endian Vs little endian

While working on a big-endian machine, we observed that the hash results differ from what we got on a little-endian machine. Though this is expected behavior, is there by any chance a possibility to make the hash values consistent across big- and little-endian architectures?

A 64-bit byte swap in the "Update" method in "highwayhash/sip_hash.h" made the hash results consistent for us.
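
For reference, a portable sketch of the kind of 64-bit byte swap the reporter describes (our illustration, not the repository's code):

    #include <cstdint>

    // Reverse the byte order of a 64-bit word (big endian <-> little endian).
    inline uint64_t ByteSwap64(uint64_t x) {
      return ((x & 0x00000000000000FFull) << 56) |
             ((x & 0x000000000000FF00ull) << 40) |
             ((x & 0x0000000000FF0000ull) << 24) |
             ((x & 0x00000000FF000000ull) << 8)  |
             ((x & 0x000000FF00000000ull) >> 8)  |
             ((x & 0x0000FF0000000000ull) >> 24) |
             ((x & 0x00FF000000000000ull) >> 40) |
             ((x & 0xFF00000000000000ull) >> 56);
    }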

macOS build failure

$ make
c++ -c -I. -std=c++11 -Wall -O3 -fPIC -pthread highwayhash/os_specific.cc -o obj/os_specific.o
highwayhash/os_specific.cc:106:2: error: "port"
#error "port"
 ^
highwayhash/os_specific.cc:169:2: error: "port"
#error "port"
 ^
highwayhash/os_specific.cc:196:2: error: "port"
#error "port"
 ^
highwayhash/os_specific.cc:212:2: error: "port"
#error "port"
 ^
4 errors generated.
make: *** [obj/os_specific.o] Error 1

macOS: 10.13.5

[Ubuntu/Makefile] undefined reference to `pthread_create'

Hi. On Ubuntu 16.04 with gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5) I got the error.

g++ -lpthread obj/highwayhash_test.o obj/arch_specific.o obj/instruction_sets.o obj/nanobenchmark.o obj/os_specific.o obj/highwayhash_test_portable.o obj/highwayhash_test_avx2.o obj/highwayhash_test_sse41.o -o bin/highwayhash_test
obj/highwayhash_test.o: In function `void std::vector<std::thread, std::allocator<std::thread> >::_M_emplace_back_aux<void (&)(highwayhash::ThreadPool*), highwayhash::ThreadPool*>(void (&)(highwayhash::ThreadPool*), highwayhash::ThreadPool*&&)':
highwayhash_test.cc:(.text._ZNSt6vectorISt6threadSaIS0_EE19_M_emplace_back_auxIJRFvPN11highwayhash10ThreadPoolEES6_EEEvDpOT_[_ZNSt6vectorISt6threadSaIS0_EE19_M_emplace_back_auxIJRFvPN11highwayhash10ThreadPoolEES6_EEEvDpOT_]+0x102): undefined reference to `pthread_create'
obj/highwayhash_test.o: In function `highwayhash::ThreadPool::ThreadPool(int)':
highwayhash_test.cc:(.text._ZN11highwayhash10ThreadPoolC2Ei[_ZN11highwayhash10ThreadPoolC5Ei]+0x18f): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
Makefile:46: recipe for target 'bin/highwayhash_test' failed
make: *** [bin/highwayhash_test] Error 1

It can be fixed by changing override LDFLAGS += -lpthread to override LDFLAGS += -pthread

Alignment warning when compiling with GCC7 on aarch64

g++ -c -I. -std=c++11 -Wall -O3 -fPIC -pthread highwayhash/vector_test_portable.cc -o obj/vector_test_portable.o
In file included from ./highwayhash/arch_specific.h:39:0,
from ./highwayhash/vector_test_target.h:23,
from ./highwayhash/vector_test_target.cc:18,
from highwayhash/vector_test_portable.cc:19:
./highwayhash/vector_test_target.cc: In function ‘void highwayhash::Portable::{anonymous}::NotifyIfUnequal(highwayhash::Portable::{anonymous}::V&, T, size_t, highwayhash::HHNotify)’:
./highwayhash/compiler_specific.h:52:46: warning: requested alignment 32 is larger than 16 [-Wattributes]
#define HH_ALIGNAS(multiple) alignas(multiple) // C++11
^
./highwayhash/vector_test_target.cc:53:20: note: in expansion of macro ‘HH_ALIGNAS’
T lanes[V::N] HH_ALIGNAS(32);
^~~~~~~~~~
./highwayhash/vector_test_target.cc: In function ‘void highwayhash::Portable::{anonymous}::TestLoadStore(highwayhash::HHNotify)’:
./highwayhash/compiler_specific.h:52:46: warning: requested alignment 32 is larger than 16 [-Wattributes]

latest build broken

dear Jan,

trying to build the latest version:

$ make all
g++ -std=c++11 -O3 -mavx2 -Wall -I. -c -o highwayhash/os_specific.o highwayhash/os_specific.cc
highwayhash/os_specific.cc:1:61: fatal error: third_party/highwayhash/highwayhash/os_specific.h: No such file or directory
compilation terminated.
: recipe for target 'highwayhash/os_specific.o' failed
make: *** [highwayhash/os_specific.o] Error 1

Btw, defining uint64 in a less portable way "for interoperability with TensorFlow"
seems like a missed opportunity to fix what looks like a bug in TensorFlow :-(
Now the need for using the less portable long long type extends to all users of highwayhash...

regards,
-John

It seems weak/zilch

I am not yet ready to clearly define or describe a proof, but it seems poor, close to nothing (zilch).

False security claims

Please refrain from using the false SipHash security claims and adding your own nonsense.

"cryptographically strong pseudo-random function"

"Expected applications include DoS-proof hash tables and random generators."

"SipHash is immune to hash flooding because multi-collisions are infeasible to compute. This makes it suitable for hash tables storing user-controlled data."

SipHash is not so easily reversible as simple, fast xor-add or mult hash functions, for which simple collision attacks can be performed even when mixed with random seeds. But this doesn't mean it is cryptographically strong. A cryptographically strong hash function starts with 256 bits, and in the context of a hash table this can never be available, since most hash tables use only the last ~10 bits, for which efficient brute force attacks can easily be precalculated.

14-bit collisions (16383 keys) need <10 s to calculate by brute force against SipHash as used in a hash table with a power-of-two size scheme, i.e. using a fast ctz or bit test rather than a slow mod as with a prime-sized scheme. 16-bit collisions (65535 keys) need 1m30s; 16-28 bit need 4m. That is already for attacking big tables; a usual attack targets small tables.
That's not strong or secure; that's weak and false security.
The only security SipHash offers is a maximum brute-force calculation time of 4 minutes vs. 2m30s with faster hash functions for a practical attack against big tables. For normal tables (<16383 keys), 10 s is enough.

No single hash function can guarantee immunity against hash table flooding; you need to do a bit more than use a slow hash function such as yours, e.g. hide your seed, use a proper collision resolution scheme, or other ideas. Look at how djb himself solved it in his name server: not with SipHash.

These false claims of djb's and yours are spreading, and are actually doing damage in the dynamic-language community, which is using simple insecure hash tables with SipHash.
Python 3.4, Ruby, Rust, systemd, OpenDNS, Haskell and OpenBSD are all following this security-theatre nonsense, and they think that by using SipHash they are now immune.
At least Rust found out last month that this is nonsense, but they are still claiming that SipHash is secure.

Fails to build on several architectures

The newest version (Jan 3 2018) of highwayhash fails to build on several architectures, as shown in the following link: https://buildd.debian.org/status/package.php?p=highwayhash&suite=experimental

Succeed: amd64, arm64, x32

Failed: armel, armhf, i386, mips, mips64el, ppc64el, s390x, alpha, hppa, hurd-i386, ia64, powerpc, ppc64, sh4, sparc64

Not sure: kfreebsd-amd64, kfreebsd-i386

I'd like to confirm whether your supported architectures are amd64 and arm64. If so, I'll set the package to no longer build on other architectures. Thanks

compilation with makefile does not work

This patch fixes some include problems:

diff --git a/Makefile b/Makefile
index de631a3..8f8fd82 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 # We assume X64 unless HH_POWER or HH_AARCH64 are defined.
 
-override CPPFLAGS += -I../..
+override CPPFLAGS += -I.
 override CXXFLAGS +=-std=c++11 -Wall -O3
 
 SIP_OBJS := $(addprefix obj/, \

However, there is still another problem:

make: ./test_exports.sh: Command not found

Maybe this file was accidentally not committed?

Secure hash lookups

It seems there is a disconnect between hash security models and actual practice.
I propose to add the following to the README:

Defending against hash flooding

We wish to defend (web) services that utilize hash sets/maps against denial-of-service attacks. Such data structures assign attacker-controlled input messages m to bin H(s, m) % p using a seed s, hash function H, and preferably prime table size p. Attackers can trigger 'flooding' (excessive work in insertions/lookups) by finding 'collisions', i.e. many m assigned to the same bin.

If the attacker has local access, they can do far worse, so we assume the attacker can only issue remote requests. If the attacker is able to send large numbers of requests, they can already deny service, so we need only ensure the attacker's cost is sufficiently large compared to the service's provisioning.

If the hash function is 'weak' (e.g. CityHash/Murmur), attackers can easily generate collisions regardless of the seed. This causes n^2 work for n requests to an unprotected hash table, which is unacceptable. If the seed is known, the attacker can find collisions for any H by computing H(s, m) % p for various m. This raises the attacker's cost by a factor of p (typically 10^3..10^5), but we need a further increase in the cost/work ratio to be safe.

It is reasonable to assume s is a secret property of the service generated on startup or even per-connection, and therefore initially unknown to remote attackers. A timing attack by Wool/Bar-Yosef recovers 13-bit seeds by testing all 8K possibilities using millions of requests, which takes several days (even assuming unrealistic 150 us round-trip times). It appears wildly infeasible to recover 64-bit seeds in this way.

If the seed remains secret, the security claims of 'strong' hashes such as SipHash or HighwayHash imply attackers need 2^32 guesses of m before expecting a collision (birthday paradox), and 2^63 requests to guess the seed. These costs are large enough to consider the service safe, even when using a conventional hash table.

Even if the seed is somehow revealed and/or attackers manage to find collisions, there are two ways to prevent denial of service by limiting the work per request.

  1. Instead of conventional chained or closed hash tables, the service can use augmented/de-amortized cuckoo hash tables (e.g. https://arxiv.org/pdf/0903.0391.pdf). These guarantee worst-case log n bounds, but only if the hash function is 'indistinguishable from random', which is claimed for SipHash and HighwayHash but certainly not for weak hashes.

  2. When flooding is detected, the service can switch from hashing to a tree. @funny-falcon proposes to avoid the space and time overhead of self-balancing algorithms (AVL/splay/red-black/a,b trees) by indexing the tree with H(s, m) rather than m. This relies on the equidistribution property of strong hashes.

In both cases, attackers pay a high cost (likely at least proportional to p) to trigger only modest additional work (a factor of log n).

In summary, a strong hash function is not, by itself, sufficient to protect a chained hash table from flooding attacks. However, strong hash functions are important parts of two schemes for preventing denial of service. Using weak hash functions can slightly accelerate the best-case and average-case performance of a service, but at the risk of greatly reduced attack costs and higher worst-case work.

Couldn't figure out how to install on a Raspberry Pi

I am pretty sure this is a dumb question, hope you'll bear with me.
I have a clean install of Raspbian on a Raspberry Pi 3, just with git and the few tools needed to compile. I naively followed the very brief Build instructions on the README and attempted a make.

g++: error: unrecognized command line option ‘-mavx2’
<builtin>: recipe for target 'highwayhash/os_specific.o' failed
make: *** [highwayhash/os_specific.o] Error 1

Am I to assume that the Raspberry Pi is not a supported platform for Highway? Otherwise, could someone point me to an easily followable set of instructions to get Highway on my Raspberry and compute a 256 bit hash of something? I easily get lost.

Thank you so much,
sorry for the not-really-inspired question.

build fails

The gunit dependency breaks the build for me:

    deps = [
        ":sip_hash",
        "//testing/base/public:gunit_main_no_google3",
    ],

I'm not sure how to get it.

Bazel build broken

When exporting to GitHub, the third_party/highwayhash/ prefix should be stripped from include lines so they become relative to the root of the repository. Otherwise I get errors like this:

external/highwayhash/highwayhash/sip_hash.cc:15:10: fatal error: 'third_party/highwayhash/highwayhash/sip_hash.h' file not found
#include "third_party/highwayhash/highwayhash/sip_hash.h"
         ^
