
KaHyPar (Karlsruhe Hypergraph Partitioning) is a multilevel hypergraph partitioning framework providing direct k-way and recursive bisection based partitioning algorithms that compute solutions of very high quality.

Home Page: http://www.kahypar.org

License: GNU General Public License v3.0



KaHyPar - Karlsruhe Hypergraph Partitioning



What is a Hypergraph? What is Hypergraph Partitioning?

Hypergraphs are a generalization of graphs, where each (hyper)edge (also called net) can connect more than two vertices. The k-way hypergraph partitioning problem is the generalization of the well-known graph partitioning problem: partition the vertex set into k disjoint blocks of bounded size (at most 1 + ε times the average block size), while minimizing an objective function defined on the nets.
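The balance constraint above can be made concrete with a small calculation. The sketch below assumes unit vertex weights and rounds the average block size up to an integer (a common convention); it is an illustration, not necessarily KaHyPar's exact internal formula:

```python
import math

def max_block_weight(total_weight: int, k: int, eps: float) -> int:
    # Each of the k blocks may weigh at most (1 + eps) times the
    # (rounded-up) average block size.
    return math.floor((1.0 + eps) * math.ceil(total_weight / k))

# 7 unit-weight vertices, k = 2, eps = 0.03: average block size is 4 (rounded up),
# so each block may contain at most floor(1.03 * 4) = 4 vertices.
print(max_block_weight(7, 2, 0.03))  # -> 4
```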

The two most prominent objective functions are the cut-net and the connectivity (or λ − 1) metrics. Cut-net is a straightforward generalization of the edge-cut objective in graph partitioning (i.e., minimizing the sum of the weights of those nets that connect more than one block). The connectivity metric additionally takes into account the actual number λ of blocks connected by a net. By summing the (λ − 1)-values of all nets, one accurately models the total communication volume of parallel sparse matrix-vector multiplication and once more gets a metric that reverts to edge-cut for plain graphs.
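Both objectives can be computed directly from a partition. The following sketch evaluates them on a small example (the hypergraph from the hMetis manual, which also appears in the library examples further below):

```python
def cut_and_km1(nets, weights, part):
    """Return (cut-net, connectivity - 1) for a partition,
    where part[v] is the block of vertex v."""
    cut = km1 = 0
    for net, w in zip(nets, weights):
        blocks = {part[v] for v in net}  # lambda = number of blocks the net connects
        if len(blocks) > 1:
            cut += w
        km1 += w * (len(blocks) - 1)
    return cut, km1

nets = [[0, 2], [0, 1, 3, 4], [3, 4, 6], [2, 5, 6]]
weights = [1, 1, 1, 1]
part = [0, 0, 0, 1, 1, 1, 1]             # vertices 0-2 in block 0, 3-6 in block 1
print(cut_and_km1(nets, weights, part))  # -> (2, 2)
```

For a bipartition the two metrics coincide, since a cut net can connect at most λ = 2 blocks; with three or more blocks they differ as soon as some net spans more than two blocks.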

alt textalt text

What is KaHyPar?

KaHyPar is a multilevel hypergraph partitioning framework for optimizing the cut- and the (λ − 1)-metric. It supports both recursive bisection and direct k-way partitioning. As a multilevel algorithm, it consists of three phases: In the coarsening phase, the hypergraph is coarsened to obtain a hierarchy of smaller hypergraphs. After applying an initial partitioning algorithm to the smallest hypergraph in the second phase, coarsening is undone and, at each level, a local search method is used to improve the partition induced by the coarser level. KaHyPar instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine-grained n-level approach combined with strong local search heuristics, it computes solutions of very high quality. Its algorithms and detailed experimental results are presented in several research publications.
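To illustrate the n-level scheme, here is a deliberately simplified sketch: it contracts one randomly chosen vertex pair per level (KaHyPar uses rating functions and community information instead of random choices), partitions the coarsest hypergraph round-robin, and during uncoarsening greedily re-assigns the just-uncontracted vertex subject to the balance constraint. It is a toy model of the three phases, not KaHyPar's implementation:

```python
import math
import random

def km1(nets, block_of):
    # (lambda - 1) objective: each net pays (#blocks it connects) - 1
    return sum(len({block_of(v) for v in net}) - 1 for net in nets)

def n_level_partition(num_vertices, nets, k=2, eps=0.03, coarsest=2, seed=0):
    rng = random.Random(seed)
    max_weight = math.floor((1 + eps) * math.ceil(num_vertices / k))
    leader = list(range(num_vertices))  # contraction forest: vertex -> representative
    def find(x):
        while leader[x] != x:
            x = leader[x]
        return x
    block = {}
    block_of = lambda x: block[find(x)]
    # coarsening phase: remove a single (here: randomly chosen) vertex per level
    alive, mementos = list(range(num_vertices)), []
    while len(alive) > coarsest:
        u, v = rng.sample(alive, 2)
        leader[v] = u
        alive.remove(v)
        mementos.append((u, v))
    # initial partitioning of the coarsest hypergraph (round-robin)
    for i, u in enumerate(alive):
        block[u] = i % k
    # uncoarsening phase: undo one contraction per level, then locally improve v
    for u, v in reversed(mementos):
        leader[v] = v                   # v becomes its own representative again
        block[v] = block[u]
        sizes = [0] * k
        for x in range(num_vertices):
            sizes[block_of(x)] += 1
        w_v = sum(1 for x in range(num_vertices) if find(x) == v)
        feasible = [b for b in range(k)
                    if b == block[v] or sizes[b] + w_v <= max_weight]
        block[v] = min(feasible,
                       key=lambda b: km1(nets, lambda x: b if find(x) == v else block_of(x)))
    return [block_of(x) for x in range(num_vertices)]

nets = [[0, 2], [0, 1, 3, 4], [3, 4, 6], [2, 5, 6]]
print(n_level_partition(7, nets))
```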

Additional Features

  • Hypergraph partitioning with variable block weights

    KaHyPar has support for variable block weights. If command line option --use-individual-part-weights=true is used, the partitioner tries to partition the hypergraph such that each block Vx has a weight of at most Bx, where Bx can be specified for each block individually using the command line parameter --part-weights= B1 B2 B3 ... Bk-1. Since the framework does not yet support perfectly balanced partitioning, upper bounds need to be slightly larger than the total weight of all vertices of the hypergraph. Note that this feature is still experimental.

  • Hypergraph partitioning with fixed vertices

    Hypergraph partitioning with fixed vertices is a variation of standard hypergraph partitioning. In this problem, there is an additional constraint on the block assignment of some vertices, i.e., some vertices are preassigned to specific blocks prior to partitioning with the condition that, after partitioning the remaining “free” vertices, the fixed vertices are still in the block that they were assigned to. The command line parameter --fixed / -f can be used to specify a fix file in hMetis fix file format. For a hypergraph with V vertices, the fix file consists of V lines - one for each vertex. The ith line either contains -1 to indicate that the vertex is free to move or <part id> to indicate that this vertex should be preassigned to block <part id>. Note that part ids start from 0.

    KaHyPar currently supports three different contraction policies for partitioning with fixed vertices:

    1. free_vertex_only allows all contractions in which the contraction partner is a free vertex, i.e., it allows contractions of vertex pairs where either both vertices are free, or one vertex is fixed and the other vertex is free.
    2. fixed_vertex_allowed additionally allows contractions of two fixed vertices provided that both are preassigned to the same block. Based on preliminary experiments, this is currently the default policy.
    3. equivalent_vertices only allows contractions of vertex pairs that consist of either two free vertices or two fixed vertices preassigned to the same block.
  • Evolutionary framework (KaHyPar-E)

    KaHyPar-E enhances KaHyPar with an evolutionary framework as described in our GECCO'18 publication. Given a fairly large amount of running time, this memetic multilevel algorithm performs better than repeated executions of nonevolutionary KaHyPar configurations, hMetis, and PaToH. The command line parameter --time-limit=xxx can be used to set the maximum running time (in seconds). Parameter --partition-evolutionary=true enables evolutionary partitioning.

  • Improving existing partitions

    KaHyPar uses direct k-way V-cycles to try to improve an existing partition specified via parameter --part-file=</path/to/file>. The maximum number of V-cycles can be controlled via parameter --vcycles=.

  • Partitioning directed acyclic hypergraphs

    While the code has not been merged into the main repository yet, there exists a fork that supports acyclic hypergraph partitioning. More details can be found in the corresponding conference publication.
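The hMetis fix-file format described above is simple enough to generate programmatically. The sketch below writes and reads such a file; the filename example.fix is made up for illustration:

```python
def write_fix_file(path, assignments):
    # assignments[i] is -1 (vertex i is free) or the block id it is fixed to
    with open(path, "w") as f:
        for a in assignments:
            f.write(f"{a}\n")

def read_fix_file(path):
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

# 7 vertices: vertex 1 fixed to block 0, vertex 4 fixed to block 1, the rest free
write_fix_file("example.fix", [-1, 0, -1, -1, 1, -1, -1])
print(read_fix_file("example.fix"))  # -> [-1, 0, -1, -1, 1, -1, -1]
```

Such a file would then be passed to the partitioner via --fixed example.fix.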

Experimental Results

We use performance profiles to compare KaHyPar to other partitioning algorithms in terms of solution quality. For a set of algorithms P and a benchmark set I, the performance ratio relates the cut computed by partitioner p for instance i to the smallest minimum cut of all algorithms, i.e.,

r_{p,i} = cut(p,i) / min_{p' ∈ P} cut(p',i).

The performance profile of algorithm p is then given by the function

ρ_p(τ) = |{ i ∈ I : r_{p,i} ≤ τ }| / |I|.

For connectivity optimization, the performance ratios are computed using the connectivity values instead of the cut values. The value of ρ_p(1) corresponds to the fraction of instances for which partitioner p computed the best solution, while ρ_p(τ) is the probability that a performance ratio r_{p,i} is within a factor of τ of the best possible ratio. Note that since performance profiles only allow assessing the performance of each algorithm relative to the best algorithm, the ρ_p(τ) values cannot be used to rank algorithms (i.e., to determine which algorithm is the second best etc.).

In our experimental analysis, the performance profile plots are based on the best solutions (i.e., minimum connectivity/cut) each algorithm found for each instance. Furthermore, for all p and i we set the performance ratio r_{p,i} to a dedicated value r_infeasible if and only if algorithm p computed an infeasible solution for instance i, and to an even larger value r_timeout if and only if the algorithm could not compute a solution for instance i within the given time limit. In our performance profile plots, performance ratios corresponding to infeasible solutions are shown on a dedicated x-tick on the x-axis, while instances that could not be partitioned within the time limit are shown implicitly by a line that exits the plot below ρ_p(τ) = 1. Since the performance ratios are heavily right-skewed, the performance profile plots are divided into three segments with different ranges for τ to reflect various areas of interest. The first segment highlights ratios close to the optimum, the second segment contains results for all instances that are at most a moderate factor worse than the best possible ratio, and the last segment contains all remaining ratios, i.e., instances for which some algorithms performed considerably worse than the best algorithm, instances for which algorithms produced infeasible solutions, and instances which could not be partitioned within the given time limit.
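The ratio and profile computation described above can be sketched in a few lines. The data here is made up for illustration, not taken from the actual benchmarks:

```python
def performance_profiles(cuts):
    """cuts[p][i] = objective value algorithm p achieved on instance i.
    Returns, per algorithm, the sorted performance ratios
    r_{p,i} = cut(p,i) / min_{p'} cut(p',i)."""
    n = len(next(iter(cuts.values())))
    best = [min(c[i] for c in cuts.values()) for i in range(n)]
    return {p: sorted(c[i] / best[i] for i in range(n)) for p, c in cuts.items()}

def rho(ratios, tau):
    # fraction of instances whose performance ratio is at most tau
    return sum(r <= tau for r in ratios) / len(ratios)

cuts = {"A": [10, 12, 8], "B": [11, 12, 10]}  # toy data
profiles = performance_profiles(cuts)
print(rho(profiles["A"], 1.0))  # fraction of instances where A found the best cut
```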

In the figures, we compare KaHyPar with PaToH in quality (PaToH-Q) and default mode (PaToH-D), the k-way (hMETIS-K) and the recursive bisection variant (hMETIS-R) of hMETIS 2.0 (p1), Zoltan using algebraic distance-based coarsening (Zoltan-AlgD), Mondriaan v4.2.1, and the recently published HYPE algorithm.

Solution Quality

Running Time

Additional Resources

We provide additional resources for all KaHyPar-related publications:

  • kKaHyPar-SEA20 / rKaHyPar-SEA20: SEA'20 Paper, TR, Slides, Experimental Results
  • kKaHyPar / rKaHyPar: Dissertation, Slides, Experimental Results
  • KaHyPar-MF / KaHyPar-R-MF: SEA'18 Paper / JEA'19 Paper, TR, Slides, Experimental Results (SEA / JEA)
  • KaHyPar-E (EvoHGP): GECCO'18 Paper, TR, Slides, Experimental Results
  • KaHyPar-CA: SEA'17 Paper, Slides, Experimental Results
  • KaHyPar-K: ALENEX'17 Paper, Slides, Experimental Results
  • KaHyPar-R: ALENEX'16 Paper, TR, Slides, Experimental Results

Projects using KaHyPar

Requirements

The Karlsruhe Hypergraph Partitioning Framework requires:

  • A 64-bit operating system. Linux, Mac OS X and Windows (through the Windows Subsystem for Linux) are currently supported.
  • A modern, C++14-ready compiler such as g++ (version 9 or higher) or clang (version 11.0.3 or higher).
  • The cmake build system.
  • The Boost.Program_options library and the Boost header files. If you don't want to install Boost yourself, you can add the -DKAHYPAR_USE_MINIMAL_BOOST=ON flag to the cmake command to download, extract, and build the necessary dependencies automatically.

Building KaHyPar

  1. Clone the repository including submodules:

    git clone --depth=1 --recursive git@github.com:SebastianSchlag/kahypar.git

  2. Create a build directory: mkdir build && cd build

  3. Run cmake: cmake .. -DCMAKE_BUILD_TYPE=RELEASE

  4. Run make: make

Testing and Profiling

Tests are automatically executed while the project is built. Additionally, a test target is provided. End-to-end integration tests can be started with make integration_tests. Profiling can be enabled via the cmake flag -DENABLE_PROFILE=ON.

Running KaHyPar

The standalone program can be built via make KaHyPar. The binary will be located at: build/kahypar/application/.

KaHyPar has several configuration parameters. For a list of all possible parameters please run: ./KaHyPar --help. We use the hMetis format for the input hypergraph file as well as the partition output file.

The command line parameter --quiet=1 can be used to suppress all logging output. If you are using the library interfaces, adding quiet=1 to the corresponding .ini configuration file has the same effect.

Default / Most Recent Presets

We provide two default framework configurations - one for recursive bipartitioning (rKaHyPar) and one for direct k-way partitioning (kKaHyPar).

In general, we recommend using kKaHyPar as it performs better than rKaHyPar in terms of both running time and solution quality.

To start kKaHyPar optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyPar_sea20.ini

To start kKaHyPar optimizing the cut net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/cut_kKaHyPar_sea20.ini

To start rKaHyPar optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m recursive -p ../../../config/km1_rKaHyPar_sea20.ini

To start rKaHyPar optimizing the cut net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/cut_rKaHyPar_sea20.ini

To start the memetic algorithm kKaHyPar-E optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyPar-E_sea20.ini

Old Presets

Additionally, we provide different presets that correspond to the configurations used in the publications at ALENEX'16, ALENEX'17, SEA'17, SEA'18, GECCO'18, as well as in our JEA journal paper and in the dissertation of Sebastian Schlag. These configurations are located in the config/old_reference_configs folder. In order to use these configurations, you have to check out KaHyPar release 1.1.0, since some old code has been removed in the most current release.

To start KaHyPar-MF (using flow-based refinement) optimizing the (connectivity - 1) objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_kahypar_mf_jea19.ini

To start KaHyPar-MF (using flow-based refinement) optimizing the cut-net objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/old_reference_configs/cut_kahypar_mf_jea19.ini

To start EvoHGP/KaHyPar-E optimizing the (connectivity - 1) objective using direct k-way mode run

 ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_gecco18.ini

Note that the configuration km1_direct_kway_gecco18.ini is based on KaHyPar-CA. However, KaHyPar-E also works with flow-based local improvements. In our JEA publication the km1_kahypar_e_mf_jea19.ini configuration was used.

To start KaHyPar-CA (using community-aware coarsening) optimizing the (connectivity - 1) objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_sea17.ini

To start KaHyPar in direct k-way mode (KaHyPar-K) optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_alenex17.ini

To start KaHyPar in recursive bisection mode (KaHyPar-R) optimizing the cut-net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/old_reference_configs/cut_rb_alenex16.ini

All preset parameters can be overwritten by using the corresponding command line options.

Input Validation

When creating a hypergraph, KaHyPar validates that the input is actually a correct hypergraph; otherwise it prints an error and aborts. This applies to hgr input files, the C interface, and the Python interface. The runtime cost of the validation should be negligible in almost all cases. However, the input validation can also be disabled using the cmake flag -DKAHYPAR_INPUT_VALIDATION=OFF.

Additionally, warnings are printed for non-fatal issues (e.g. hyperedges with duplicate pins). To treat non-fatal issues as hard errors instead, use the cmake flag -DKAHYPAR_INPUT_VALIDATION_PROMOTE_WARNINGS_TO_ERRORS=ON.
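The distinction between hard errors and non-fatal warnings can be mimicked with a few structural checks. This is a minimal sketch of the kinds of checks described above (using the CSR-style input of the library interfaces), not KaHyPar's actual validation code:

```python
def validate_hypergraph(num_vertices, indices, pins):
    """Return (errors, warnings) for a hypergraph given as CSR arrays:
    the pins of net j are pins[indices[j]:indices[j + 1]]."""
    errors, warnings = [], []
    if indices[0] != 0 or indices[-1] != len(pins):
        errors.append("hyperedge_indices must start at 0 and end at len(pins)")
    if any(a > b for a, b in zip(indices, indices[1:])):
        errors.append("hyperedge_indices must be non-decreasing")
    for j, (a, b) in enumerate(zip(indices, indices[1:])):
        net = pins[a:b]
        if any(not (0 <= v < num_vertices) for v in net):
            errors.append(f"net {j} references an invalid vertex")
        if len(set(net)) != len(net):
            warnings.append(f"net {j} has duplicate pins")  # non-fatal
    return errors, warnings

print(validate_hypergraph(7, [0, 2, 6, 9, 12], [0, 2, 0, 1, 3, 4, 3, 4, 6, 2, 5, 6]))
```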

Using the Library Interfaces

The C-Style Interface

We provide a simple C-style interface to use KaHyPar as a library. Note that this interface is not thread-safe yet. However, there are some existing workarounds. The library can be built and installed via

make install.library

and can be used like this:

#include <memory>
#include <vector>
#include <iostream>

#include <libkahypar.h>

int main(int argc, char* argv[]) {

  kahypar_context_t* context = kahypar_context_new();
  kahypar_configure_context_from_file(context, "/path/to/config.ini");
  
  kahypar_set_seed(context, 42);

  const kahypar_hypernode_id_t num_vertices = 7;
  const kahypar_hyperedge_id_t num_hyperedges = 4;

  std::unique_ptr<kahypar_hyperedge_weight_t[]> hyperedge_weights = std::make_unique<kahypar_hyperedge_weight_t[]>(4);

  // force the cut to contain hyperedges 0 and 2
  hyperedge_weights[0] = 1;  hyperedge_weights[1] = 1000;
  hyperedge_weights[2] = 1;  hyperedge_weights[3] = 1000;

  std::unique_ptr<size_t[]> hyperedge_indices = std::make_unique<size_t[]>(5);

  hyperedge_indices[0] = 0; hyperedge_indices[1] = 2;
  hyperedge_indices[2] = 6; hyperedge_indices[3] = 9;
  hyperedge_indices[4] = 12;

  std::unique_ptr<kahypar_hyperedge_id_t[]> hyperedges = std::make_unique<kahypar_hyperedge_id_t[]>(12);

  // hypergraph from hMetis manual page 14
  hyperedges[0] = 0;  hyperedges[1] = 2;
  hyperedges[2] = 0;  hyperedges[3] = 1;
  hyperedges[4] = 3;  hyperedges[5] = 4;
  hyperedges[6] = 3;  hyperedges[7] = 4;
  hyperedges[8] = 6;  hyperedges[9] = 2;
  hyperedges[10] = 5; hyperedges[11] = 6;

  const double imbalance = 0.03;
  const kahypar_partition_id_t k = 2;

  kahypar_hyperedge_weight_t objective = 0;

  std::vector<kahypar_partition_id_t> partition(num_vertices, -1);

  kahypar_partition(num_vertices, num_hyperedges,
                    imbalance, k,
                    /*vertex_weights */ nullptr, hyperedge_weights.get(),
                    hyperedge_indices.get(), hyperedges.get(),
                    &objective, context, partition.data());

  for (kahypar_hypernode_id_t i = 0; i != num_vertices; ++i) {
    std::cout << i << ":" << partition[i] << std::endl;
  }

  kahypar_context_free(context);
}

To compile the program using g++ run:

g++ -std=c++14 -DNDEBUG -O3 -I/usr/local/include -L/usr/local/lib program.cc -o program -lkahypar

Note: If boost is not found during linking, you might need to add -L/path/to/boost/lib -I/path/to/boost/include -lboost_program_options to the command.

To remove the library from your system use the provided uninstall target:

make uninstall-kahypar

The Python Interface

To compile the Python interface, do the following:

  1. Create a build directory: mkdir build && cd build
  2. If you have boost installed on your system, run cmake: cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_PYTHON_INTERFACE=1. If you don't have boost installed, run: cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_PYTHON_INTERFACE=1 -DKAHYPAR_USE_MINIMAL_BOOST=ON instead. This will download, extract, and build the necessary dependencies automatically.
  3. Go to the library folder: cd python
  4. Compile the library: make
  5. Copy the library to your site-packages directory: cp kahypar.so <path-to-site-packages>

After that you can use the KaHyPar library like this:

import os
import kahypar as kahypar

num_nodes = 7
num_nets = 4

hyperedge_indices = [0,2,6,9,12]
hyperedges = [0,2,0,1,3,4,3,4,6,2,5,6]

node_weights = [1,2,3,4,5,6,7]
edge_weights = [11,22,33,44]

k=2

hypergraph = kahypar.Hypergraph(num_nodes, num_nets, hyperedge_indices, hyperedges, k, edge_weights, node_weights)

context = kahypar.Context()
context.loadINIconfiguration("<path/to/config>/km1_kKaHyPar_sea20.ini")

context.setK(k)
context.setEpsilon(0.03)

kahypar.partition(hypergraph, context)

For more information about the Python library functionality, please see module.cpp.
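The hyperedge_indices / hyperedges pair in the example above is a CSR-style encoding: the pins of net j are the slice hyperedges[hyperedge_indices[j]:hyperedge_indices[j + 1]]. Decoding it makes the structure explicit:

```python
hyperedge_indices = [0, 2, 6, 9, 12]
hyperedges = [0, 2, 0, 1, 3, 4, 3, 4, 6, 2, 5, 6]

# net j spans the half-open index range [indices[j], indices[j + 1])
nets = [hyperedges[hyperedge_indices[j]:hyperedge_indices[j + 1]]
        for j in range(len(hyperedge_indices) - 1)]
print(nets)  # -> [[0, 2], [0, 1, 3, 4], [3, 4, 6], [2, 5, 6]]
```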

We also provide a precompiled version on PyPI, which can be installed via:

python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar

The Julia Interface

Thanks to Jordan Jalving (@jalving) KaHyPar now also offers a Julia interface, which can currently be found here: kahypar/KaHyPar.jl.

The corresponding dependency can be installed via:

using Pkg
Pkg.add(PackageSpec(url="https://github.com/jalving/KaHyPar.jl.git"))
Pkg.test("KaHyPar")

After that, you can use KaHyPar to partition your hypergraphs like this:

using KaHyPar
using SparseArrays

I = [1,3,1,2,4,5,4,5,7,3,6,7]
J = [1,1,2,2,2,2,3,3,3,4,4,4]
V = Int.(ones(length(I)))

A = sparse(I,J,V)

h = KaHyPar.hypergraph(A)

KaHyPar.partition(h,2,configuration = :edge_cut)

KaHyPar.partition(h,2,configuration = :connectivity)

KaHyPar.partition(h,2,configuration = joinpath(@__DIR__,"../src/config/km1_kKaHyPar_sea20.ini"))

The Java Interface

Romain Wallon has created a Java interface for KaHyPar. Please refer to the readme for a detailed description on how to build and use the interface.

Bug Reports

We encourage you to report any problems with KaHyPar via the github issue tracking system of the project.

Licensing

KaHyPar is free software provided under the GNU General Public License (GPLv3). For more information see the COPYING file. We distribute this framework freely to foster the use and development of hypergraph partitioning tools. If you use KaHyPar in an academic setting please cite the appropriate papers. If you are interested in a commercial license, please contact me.

// Overall KaHyPar framework
@phdthesis{DBLP:phd/dnb/Schlag20,
  author    = {Sebastian Schlag},
  title     = {High-Quality Hypergraph Partitioning},
  school    = {Karlsruhe Institute of Technology, Germany},
  year      = {2020}
}

@article{10.1145/3529090, 
  author = {Schlag, Sebastian and 
            Heuer, Tobias and 
            Gottesb\"{u}ren, Lars and 
            Akhremtsev, Yaroslav and 
            Schulz, Christian and 
            Sanders, Peter}, 
  title = {High-Quality Hypergraph Partitioning}, 
  url = {https://doi.org/10.1145/3529090}, 
  doi = {10.1145/3529090}, 
  journal = {ACM J. Exp. Algorithmics},
  year = {2022}, 
  month = {mar}
}

// KaHyPar-R
@inproceedings{shhmss2016alenex,
 author    = {Sebastian Schlag and
              Vitali Henne and
              Tobias Heuer and
              Henning Meyerhenke and
              Peter Sanders and
              Christian Schulz},
 title     = {k-way Hypergraph Partitioning via \emph{n}-Level Recursive
              Bisection},
 booktitle = {18th Workshop on Algorithm Engineering and Experiments, (ALENEX 2016)},
 pages     = {53--67},
 year      = {2016},
}

// KaHyPar-K
@inproceedings{ahss2017alenex,
 author    = {Yaroslav Akhremtsev and
              Tobias Heuer and
              Peter Sanders and
              Sebastian Schlag},
 title     = {Engineering a direct \emph{k}-way Hypergraph Partitioning Algorithm},
 booktitle = {19th Workshop on Algorithm Engineering and Experiments, (ALENEX 2017)},
 pages     = {28--42},
 year      = {2017},
}

// KaHyPar-CA
@inproceedings{hs2017sea,
 author    = {Tobias Heuer and
              Sebastian Schlag},
 title     = {Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure},
 booktitle = {16th International Symposium on Experimental Algorithms, (SEA 2017)},
 pages     = {21:1--21:19},
 year      = {2017},
}

// KaHyPar-MF
@inproceedings{heuer_et_al:LIPIcs:2018:8936,
 author ={Tobias Heuer and Peter Sanders and Sebastian Schlag},
 title ={{Network Flow-Based Refinement for Multilevel Hypergraph Partitioning}},
 booktitle ={17th International Symposium on Experimental Algorithms  (SEA 2018)},
 pages ={1:1--1:19},
 year ={2018}
}


@article{KaHyPar-MF-JEA,
  author = {Heuer, T. and Sanders, P. and Schlag, S.},
  title = {Network Flow-Based Refinement for Multilevel Hypergraph Partitioning},
  journal = {ACM Journal of Experimental Algorithmics (JEA)},
  volume = {24},
  number = {1},
  month = {09},
  year = {2019},
  pages = {2.3:1--2.3:36},
  publisher = {ACM}
}

// KaHyPar-E (EvoHGP)
@inproceedings{Andre:2018:MMH:3205455.3205475,
 author = {Robin Andre and Sebastian Schlag and Christian Schulz},
 title = {Memetic Multilevel Hypergraph Partitioning},
 booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
 series = {GECCO '18},
 year = {2018},
 pages = {347--354},
 numpages = {8}
}

// KaHyPar-SEA20 (KaHyPar-HFC)
@InProceedings{gottesbren_et_al:LIPIcs:2020:12085,
author = {Lars Gottesb{\"u}ren and Michael Hamann and Sebastian Schlag and Dorothea Wagner},
title =	{{Advanced Flow-Based Multilevel Hypergraph Partitioning}},
booktitle = {18th International Symposium on Experimental Algorithms (SEA)},
pages =	{11:1--11:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
year =	{2020}
}

Contributing

If you are interested in contributing to the KaHyPar framework feel free to contact me or create an issue on the issue tracking system.


kahypar's Issues

macOS:Reason: image not found

ImportError: dlopen(/opt/anaconda3/lib/python3.8/site-packages/kahypar.cpython-38-darwin.so, 2): Library not loaded: /usr/local/opt/boost/lib/libboost_program_options-mt.dylib
Referenced from: /opt/anaconda3/lib/python3.8/site-packages/kahypar.cpython-38-darwin.so
Reason: image not found
how to solve it?

Scalability?

How how large of a graph can KaHyPar scale to? It would be nice to have some simple numbers on the running time for large graphs on the website. I can't tell much from the current plots.

manual for kahypar and benchmarks

Just as metis and hmetis have their user manuals which describe the different options that are available, can you also provide a manual for kahypar? This will enable new users to quickly start using your excellent system. Also, is there a set of large-ish benchmarks (and their kahypar outputs) available for comparison?

unused best_imbalance in initial partitioning

In partitioner.h:290 best_imbalance is set to max.

Whenever the initial partitioner returns an imbalanced partition, you use init_alpha to succesively reduce the allowed imbalance. However this best_imbalance variable is never updated.

Is this some kind of leftover or do we have a bug here?

Make IP output less verbose

Even if --verbose=false, initial partitioning prints too much output. This is especially annoying for large values of k.

imbalance and part-weights are not being honored

I have a graph with 26 vertices with weight 1 and 71 vertices with weight 0 and need to partition into 2 blocks with weights 29 25 (--part-weights 29 25).

I get a final solution like this:
Partition sizes and weights:
|part 0 | = 1 w( 0 ) = 0
|part 1 | = 96 w( 1 ) = 26

where part 1 is overutilized.

Also, regardless of imbalance/epsilon value, result is the same and the initial report says epsilon = 0.

Attached are the hypergraph and fix files, as well as the ini file

Command used:
KaHyPar -p cut.ini --mode direct --blocks 2 --objective cut --write-partition true -h tmp1.hgr -f tmp1.fix --use-individual-part-weights=true --part-weights 29 25 -e 0.5

cut.ini.txt
tmp1.hgr.txt
tmp1.fix.txt

Directed hypergraph support

does kahypar support directed hypergraphs? I was going through config files and documentation, was not able to find anything regarding direction support

python library interface

i have just installed kahypar .
and used the same example for python interface.
but i am just asking how to get nodes of each block .
i found that it should be in blockID():

after i used partition :
kahypar.partition(self.__get_graph(),self.__get_context())

i just tried to get block id for each node.

        for p in self.__get_graph().nodes():

            print(self.__get_graph().blockID(p))

please assist and sorry for inconvenient

empty blocks when imbalance is large

KaHyPar produces partitions with empty blocks when epsilon is large.
Local search algorithms like 2FM and k-way FM don't do moves if the source block would become
empty. Therefore I assume the problem is somewhere in initial partitioning.
@kittobi1992 if you find time, it would be nice to take a look at this.

To reproduce this issue, you can use the following KaHyPar call:

./KaHyPar -h /global_data/schlag/hypergraph_collection/esa15/ISPD98_ibm18.hgr -k 16 -e 0.8 -o km1 -m direct -p ../../../config/km1_direct_kway_alenex17.ini --seed=2

Output
_16-way Partition Result_*
Hyperedge Cut (minimize) = 7506
SOED (minimize) = 15394
(k-1) (minimize) = 7888
Absorption (maximize) = 199796
Imbalance = 0.749924
Partition time = 20.1645 s
| initial parallel HE removal = 0 s [currently not implemented]
| initial large HE removal = 0 s
| coarsening = 0.821498 s
| initial partitioning = 7.67142 s
| uncoarsening/refinement = 11.6443 s
| initial large HE restore = 0 s
| initial parallel HE restore = 0 s [currently not implemented]
Partition sizes and weights:
|part0| = 0 w(0) = 0
|part1| = 14318 w(1) = 14318
|part2| = 6424 w(2) = 6424
|part3| = 22714 w(3) = 22714
|part4| = 23036 w(4) = 23036
|part5| = 15498 w(5) = 15498
|part6| = 23036 w(6) = 23036
|part7| = 2523 w(7) = 2523
|part8| = 10518 w(8) = 10518
|part9| = 18692 w(9) = 18692
|part10| = 0 w(10) = 0
|part11| = 10327 w(11) = 10327
|part12| = 16827 w(12) = 16827
|part13| = 11831 w(13) = 11831
|part14| = 22890 w(14) = 22890
|part15| = 11979 w(15) = 11979

Errors when building on windows

I was following the instructions to build the python interface on windows and after running cmake --build . --config Release in the python folder under build, I get the following errors. It looks like some syntax errors

(base) PS D:\Downloads\kahypar\build> cmake --build . --config Release
Microsoft (R) Build Engine version 16.9.0+5e4b48a27 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

  Checking Build System
  Building Custom Rule D:/Downloads/kahypar/python/CMakeLists.txt
cl : command line warning D9025: overriding '/W4' with '/w' [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
  module.cpp
D:\Downloads\kahypar\kahypar/partition/context.h(458,14): error C2039: 'accumulate': is not a member of 'std' [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar/definitions.h(62): message : see declaration of 'std' [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar/partition/context.h(458,1): error C3861: 'accumulate': identifier not found [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar/partition/context.h(680,12): error C2039: 'accumulate': is not a member of 'std' [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar/definitions.h(62): message : see declaration of 'std' [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar/partition/context.h(680,22): error C3861: 'accumulate': identifier not found [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\kahypar\datastructure/hypergraph.h(438,1): error C2672: 'kahypar::ds::end': no matching overloaded function found [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include\utility(331): message : [lengthy template instantiation backtrace elided]
D:\Downloads\kahypar\kahypar/io/hypergraph_io.h(194): message : see reference to class template instantiation [lengthy template instantiation backtrace elided]
D:\Downloads\kahypar\kahypar\datastructure/hypergraph.h(441,1): error C2672: 'kahypar::ds::begin': no matching overloaded function found [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]
D:\Downloads\kahypar\external_tools\WHFC\util\meta.h(12,10): fatal error C1083: Cannot open include file: 'cxxabi.h': No such file or directory [D:\Downloads\kahypar\build\python\kahypar_python.vcxproj]

Return error codes reference

Hello, thanks for this great project! I'm sure it's documented somewhere in the source, but I can't quite find it: what do the different return error codes mean? Currently I'm getting KaHyPar failed with -4 semi-regularly on some cluster nodes, and it's a bit hard to debug since I can't reproduce it locally.
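A hint for later readers (my note, not an official answer): on POSIX systems a negative subprocess return code -N means the process was killed by signal N, so -4 corresponds to SIGILL (illegal instruction), which is commonly seen when a binary built with -march=native runs on a node with an older CPU. Such codes can be decoded with Python's standard library:

```python
import signal

def describe_return_code(code: int) -> str:
    """Decode a POSIX subprocess return code.

    Negative values mean the process was terminated by a signal.
    """
    if code < 0:
        return f"terminated by {signal.Signals(-code).name}"
    return f"exited normally with status {code}"

print(describe_return_code(-4))  # -4 -> terminated by SIGILL
print(describe_return_code(0))   #  0 -> exited normally with status 0
```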

Initial Partitioning: Perform n-level RB-based partitioning multiple times

Currently, our RB-based initial partitioner is only called once, and only calls the actual flat initial 2-way partitioning algorithms multiple times (i.e., in its own initial partitioning phase). However, we might be able to improve initial k-way partitions by calling the entire RB-based initial partitioner multiple times.
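The proposal above amounts to a multi-start loop around the whole RB-based initial partitioner. A sketch of the idea (the `run_rb_partitioner` stub below is hypothetical and only stands in for one full RB run; nothing here is KaHyPar API):

```python
import random

def run_rb_partitioner(seed):
    """Hypothetical stand-in for one run of the entire RB-based initial partitioner.

    Returns (objective, partition); here the objective is just a seeded dummy value.
    """
    rng = random.Random(seed)
    objective = rng.randint(100, 200)
    partition = [rng.randint(0, 1) for _ in range(8)]
    return objective, partition

def best_of(num_runs):
    """Call the whole RB-based initial partitioner several times, keep the best result."""
    best_obj, best_part = None, None
    for seed in range(num_runs):
        obj, part = run_rb_partitioner(seed)
        if best_obj is None or obj < best_obj:
            best_obj, best_part = obj, part
    return best_obj, best_part

obj, part = best_of(5)
assert obj == min(run_rb_partitioner(s)[0] for s in range(5))
```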

Problem with block weights

Hello,
I'm using an M1 Mac.
I followed your instructions, and the KaHyPar binary in /build/kahypar/application works very well.
Then I tried to use KaHyPar as a C++ library.

I want to use custom block weights, so I wrote the code below.

#include <iostream>
#include <memory>
#include <vector>
#include <libkahypar.h>

int main(){

kahypar_context_t* context = kahypar_context_new();
kahypar_configure_context_from_file(context, "../kahypar/config/cut_rKaHyPar_sea20.ini");

const kahypar_hypernode_id_t num_vertices = 7;
const kahypar_hyperedge_id_t num_hyperedges = 4;

std::unique_ptr<kahypar_hypernode_weight_t[]> hypernode_weights = std::make_unique<kahypar_hypernode_weight_t[]>(num_vertices);
hypernode_weights[0] = 1;
hypernode_weights[1] = 2;
hypernode_weights[2] = 3;
hypernode_weights[3] = 4;
hypernode_weights[4] = 3;
hypernode_weights[5] = 2;
hypernode_weights[6] = 1;

std::unique_ptr<size_t[]> hyperedge_indices = std::make_unique<size_t[]>(num_hyperedges + 1);
hyperedge_indices[0] = 0;
hyperedge_indices[1] = 2;
hyperedge_indices[2] = 6;
hyperedge_indices[3] = 9;
hyperedge_indices[4] = 12;

std::unique_ptr<kahypar_hyperedge_id_t[]> hyperedges = std::make_unique<kahypar_hyperedge_id_t[]>(num_hyperedges);

hyperedges[0] = 0;  hyperedges[1] = 2;
hyperedges[2] = 0;  hyperedges[3] = 1;
hyperedges[4] = 3;  hyperedges[5] = 4;
hyperedges[6] = 3;  hyperedges[7] = 4;
hyperedges[8] = 6;  hyperedges[9] = 2;
hyperedges[10] = 5; hyperedges[11] = 6;

const double imbalance = 0.2;
const kahypar_partition_id_t k = 2;

std::unique_ptr<kahypar_hypernode_weight_t[]> block_weights = std::make_unique<kahypar_hypernode_weight_t[]>(k);
block_weights[0] = 6;
block_weights[1] = 10;

kahypar_set_custom_target_block_weights(k, block_weights, context);

kahypar_hyperedge_weight_t objective = 0;

std::vector<kahypar_partition_id_t> partition(num_vertices, -1);

kahypar_partition(num_vertices, num_hyperedges, imbalance, k, hypernode_weights.get(), nullptr, hyperedge_indices.get(), hyperedges.get(), &objective, context, partition.data());

for(int i = 0; i != num_vertices; ++i){
    std::cout<< i << " : " << partition[i] << std::endl;
}

kahypar_context_free(context);

return 0;

}

But when I compile main.cpp with g++, there is an error:

main.cpp:46:5: error: no matching function for call to 'kahypar_set_custom_target_block_weights'
kahypar_set_custom_target_block_weights(k, block_weights, context);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/libkahypar.h:65:18: note: candidate function not viable: no known conversion from 'std::unique_ptr<kahypar_hypernode_weight_t []>' to 'const kahypar_hypernode_weight_t *' (aka 'const int *') for 2nd argument
KAHYPAR_API void kahypar_set_custom_target_block_weights(const kahypar_partition_id_t num_blocks,
^
1 error generated.

I tried to repair the code, but I couldn't
(both reinterpret_cast and new/delete failed).

So I wonder how I can set the block weights in main.cpp, and what the problem with my code is.

+) There is one more error, shown below, when I run the compiled program without block weights. How can I fix it?
(The error does not always occur; sometimes the program runs fine, but other times I get error 1 or error 2.)

error 1)
program(27277,0x1040d8580) malloc: Heap corruption detected, free list is damaged at 0x600001c04310
*** Incorrect guard value: 8589934598
program(27277,0x1040d8580) malloc: *** set a breakpoint in malloc_error_break to debug

error 2)
zsh: segmentation fault ./program

Thank you for your efforts.

Mac Installation

I am trying to install KaHyPar on a Mac with pip install kahypar, but I get the following error; I'd appreciate any help:

pip install kahypar
Collecting kahypar
Using cached kahypar-1.0.tar.gz (8.0 MB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: kahypar
Building wheel for kahypar (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [46 lines of output]
running bdist_wheel
running build
running build_ext
Traceback (most recent call last):
File "/private/var/folders/zv/sxj_wv191mgg0_1j8tskyx6r0000gn/T/pip-install-1buzos80/kahypar_41bddcb4ee774feaa882df54efee8756/setup.py", line 20, in run
out = subprocess.check_output(['cmake', '--version'])
File "/Users/rezah/opt/anaconda3/lib/python3.9/subprocess.py", line 424, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/Users/rezah/opt/anaconda3/lib/python3.9/subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "/Users/rezah/opt/anaconda3/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/rezah/opt/anaconda3/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'

  During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<string>", line 34, in <module>
File "/private/var/folders/zv/sxj_wv191mgg0_1j8tskyx6r0000gn/T/pip-install-1buzos80/kahypar_41bddcb4ee774feaa882df54efee8756/setup.py", line 74, in <module>
setup(
File "/Users/rezah/opt/anaconda3/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/rezah/opt/anaconda3/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/rezah/opt/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/zv/sxj_wv191mgg0_1j8tskyx6r0000gn/T/pip-install-1buzos80/kahypar_41bddcb4ee774feaa882df54efee8756/setup.py", line 22, in run
raise RuntimeError("CMake must be installed to build the following extensions: " +
RuntimeError: CMake must be installed to build the following extensions: kahypar
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for kahypar
Running setup.py clean for kahypar
Failed to build kahypar
Installing collected packages: kahypar
Running setup.py install for kahypar ... error
error: subprocess-exited-with-error
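Editor's note, not an official answer: the traceback bottoms out in `FileNotFoundError: ... 'cmake'`, i.e. setup.py shells out to `cmake --version` and no CMake binary is on PATH. Assuming Homebrew is available (or conda, since an Anaconda Python is in use), installing CMake first should get past this:

```shell
brew install cmake     # or: conda install cmake
cmake --version        # verify the binary is now on PATH
pip install kahypar    # retry the build
```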

Have a CMake flag to disable tests

CTest defines an option, BUILD_TESTING, that lets the user decide whether or not to build tests. It would be good to honor it, to avoid building tests when they are not needed, for example when cross-compiling without an emulator to actually run them. This is in the same vein as #85.

For what it's worth, I can successfully cross-compile the library for Windows using BinaryBuilder after applying #86, manually disabling all tests (it took a while...) and working around the errors reported in #83 (comment).
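For reference, the conventional CTest pattern being requested looks like this (a sketch, not KaHyPar's actual CMakeLists.txt; the tests directory name is hypothetical):

```cmake
# include(CTest) defines the cache option BUILD_TESTING (default ON).
include(CTest)

if(BUILD_TESTING)
  # Test dependencies (e.g. googletest) and test targets are only
  # configured when the user actually wants tests.
  add_subdirectory(tests)
endif()
```

A cross-compiling user could then configure with cmake -DBUILD_TESTING=OFF .. and skip the test build entirely.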

Block weights do not match: bug or incorrect usage?

I partitioned my hypergraph and got a result.

I tried to check the correctness of the partitioning result and found something wrong.

First, I took the block weight information directly from the partitioning result that KaHyPar printed on the console.

Then, for each block (according to the partition file), I manually summed the weights of all vertices in that block.

Unfortunately, the two weights did not match.

The table below shows the difference:

| block | weight (console) | weight (manual sum) |
|------:|-----------------:|--------------------:|
|     0 |          8056461 |             4370837 |
|     1 |          8083478 |            14302264 |
|     2 |          6445754 |             2117552 |
|     3 |          6659848 |             2686193 |
|     4 |          8081908 |             9015610 |
|     5 |          3284632 |              223157 |
|     6 |          7752835 |             1500391 |
|     7 |          8081145 |            17349691 |
|     8 |          8072179 |             6118026 |
|     9 |          7892213 |             4591978 |
|    10 |          8077959 |            23184113 |
|    11 |          7424514 |             3071367 |
|    12 |          8053980 |            17117247 |
|    13 |          7102451 |             5421560 |
|    14 |          7487524 |             2910723 |
|    15 |          7033688 |             3595229 |

Other information:
command line:
./KaHyPar -h ~/graph -k 16 -e 0.10 -o km1 -m direct -p ../../../config/km1_direct_kway_sea18.ini -v on
Graph meta information:

******************************************************************************** 
*                                    Input                                     * 
******************************************************************************** 
Hypergraph Information 
Name : countries.2019-05-16.carrier 
Type: edgeWeights=true nodeWeights=true 
# HNs : 47986 # HEs : 29502 # pins: 1971962 
HE size             HE weight           HN degree           HN weight 
| min= 0            | min= 0            | min= 0            | min= 0           
| Q1 = 4            | Q1 = 118          | Q1 = 14           | Q1 = 0           
| med= 16           | med= 375          | med= 30           | med= 102         
| Q3 = 50           | Q3 = 1336         | Q3 = 57           | Q3 = 578         
| max= 2086         | max= 2469924      | max= 277          | max= 2469924     
| avg= 66.8416      | avg= 3971.84      | avg= 41.0945      | avg= 2450.52     
| sd = 158.885      | sd = 28399.8      | sd = 37.0743      | sd = 28464.9     

******************************************************************************** 
*                          Top Level Preprocessing..                           * 
******************************************************************************** 
 Removed 2701 hyperedges with |e|=1  
 ===> 33 unconnected HNs could have been removed  
Performing community detection: 
  hypergraph is a graph = false 
  # communities         = 166 
  modularity            = 0.818004 

The graph file can be found at: https://github.com/a8156268/graph/blob/master/graph_info.
It is in hMetis format.
Did I do something wrong, or is there a bug in KaHyPar?
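The manual check described above can be scripted; this is my own sketch with toy data (not the issue's actual files), assuming a partition file with one block id per line and hMetis vertex weights in vertex order:

```python
from collections import defaultdict

def block_weights(partition, vertex_weights):
    """Sum vertex weights per block; partition[i] is the block id of vertex i."""
    totals = defaultdict(int)
    for vertex, block in enumerate(partition):
        totals[block] += vertex_weights[vertex]
    return dict(totals)

# Toy example: 5 vertices, 2 blocks.
partition = [0, 1, 0, 1, 1]
weights = [10, 20, 30, 40, 50]
assert block_weights(partition, weights) == {0: 40, 1: 110}
```
These per-block sums can then be compared against the w(i) values KaHyPar prints on the console.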

Verbosity Level of KaHyPar

Hi,

I am currently working on a Java interface for KaHyPar, and I was wondering whether it is possible to reduce KaHyPar's verbosity.

Indeed, in its most recent version, KaHyPar outputs many statistics, and I would like to turn them off.
However, I did not find a way to do so (without editing the source).

My main purpose for this Java interface is to use KaHyPar as a library in another piece of software, which requires making many calls to the hypergraph partitioner.
Outputting so many statistics would make the invocations too slow in this context.

Is there any option for programmatically disabling this output?

Best regards,
Romain Wallon
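A note for later readers: the Python bindings expose a suppressOutput method on the context object (it appears in the dir() listing quoted in a later issue on this page), so for the Python interface the statistics can likely be silenced as sketched below. Whether the C interface used from Java wraps the same switch, the maintainers would need to confirm. Untested sketch; the config path is an assumption:

```python
import kahypar  # pip-installed Python bindings

context = kahypar.Context()
context.loadINIconfiguration("kahypar/config/cut_rKaHyPar_sea20.ini")  # path assumed
context.setK(2)
context.setEpsilon(0.03)
context.suppressOutput(True)  # turn off the statistics/log output
```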

Build Problem

Hi, I am having trouble building KaHyPar on a Linux system.

Currently I am using:
CentOS 6.5 x86_64
g++ 5.3.0
CMake 3.14.5
Boost 1.70.0

Thanks for any help!

The error I am getting while cloning the repo:

[screenshot of the error]

M1 Mac support

Are there any plans to support M1 Macs?
When I tried to install kahypar on an M1 Mac, it failed with an error saying that cpu=vortex is not supported.

Symbol not found: __PyThreadState_Current

Hello, I'm attempting to use the kahypar backend in the cotengra library. While running their Sycamore depth-12 example, I ran into the following exception:

Traceback (most recent call last):
  File "test_cotengra.py", line 40, in <module>
    info = tn.contract(all, optimize=opt, get='path-info', output_inds=[])
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 3141, in contract
    return tensor_contract(*self, **opts)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 315, in tensor_contract
    path_info = get_contraction(eq, *ops, path=True, **contract_opts)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 110, in get_contraction
    return fn(eq, *shapes, **kwargs)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 78, in _get_contract_path
    return oe.contract_path(eq, *shapes, shapes=True, **kwargs)[1]
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/opt_einsum/contract.py", line 259, in contract_path
    path = path_type(input_sets, output_set, dimension_dict, memory_arg)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 347, in __call__
    for trial in trials:
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 312, in _gen_results_parallel
    yield self.get_and_report_next_future()
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 296, in get_and_report_next_future
    trial = future.result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
ImportError: dlopen(/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so, 2): Symbol not found: __PyThreadState_Current
  Referenced from: /Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so
  Expected in: flat namespace
 in /Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so

Helpful info:
OS: macOS Catalina
$PYTHONPATH: /Users/david/Library/Python/3.7/lib/python

I've tried changing my environment variables and editing kahypar's CMake configuration so that CC=clang, CXX=clang++ and, as a sanity check, CC=gcc, CXX=g++, taking care to uninstall and reinstall the kahypar python bindings with pip each time.

Any insight would be much appreciated.

Performance of coarsening on a very large instance

Hi,

I tested one large hypergraph from a VLSI circuit with the KaHyPar-K config. It is very slow.
The size is 11965399 hypernodes, 4586372 hyperedges, and 22904645 pins.
In the 1st pass of the ml_coarsener, it reduces the number of nodes to 4919487 and takes 1001.71 s; in
the 2nd pass, it reduces the number of nodes to 1802051 and takes 673.67 s.
Meanwhile, PaToH can finish partitioning in 250 s.
Do you have any suggestions?
Maybe I could try a special coarsening pass first, e.g., for nodes connected to only one edge? Such a node, if its weight is not too large, should be safe to contract. That would avoid the heavy rating operations.
What do you think?

Best wishes,
Frandy
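The proposed pre-pass (contracting hypernodes incident to only one net) can be sketched on a toy pin-list representation; this is my illustration of the idea, not KaHyPar code:

```python
def degree_one_nodes(hyperedges, num_nodes):
    """Return nodes incident to exactly one hyperedge.

    These are the candidates the issue suggests contracting early (into another
    pin of their single net), before running the regular rating-based coarsener.
    """
    degree = [0] * num_nodes
    for net in hyperedges:
        for pin in net:
            degree[pin] += 1
    return [v for v in range(num_nodes) if degree[v] == 1]

# Toy hypergraph: node 3 appears only in net [2, 3], node 4 only in net [4].
nets = [[0, 1, 2], [2, 3], [0, 1], [4]]
assert degree_one_nodes(nets, 5) == [3, 4]
```
Computing these candidates is a single linear scan over the pins, so such a pre-pass would be cheap compared to the full rating computation.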

Questions about Using the C-Style Interface

After successfully installing KaHyPar and building it as a library, I tried to compile the example source file you provided. However, when I use g++ like this:
g++ -std=c++14 -DNDEBUG -O3 -I/usr/local/include -L/usr/local/lib -L/usr/lib -I/usr/include -lboost_program_options program.cc -o program
it reports:
program.cc:(.text.startup+0x21): undefined reference to `kahypar_context_new'
program.cc:(.text.startup+0x37): undefined reference to `kahypar_configure_context_from_file'
program.cc:(.text.startup+0x107): undefined reference to `kahypar_partition'
program.cc:(.text.startup+0x1d5): undefined reference to `kahypar_context_free'
collect2: error: ld returned 1 exit status
I wonder if I have misunderstood your example. I would appreciate it if you could help me solve this problem.
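Editor's note: the undefined references are all libkahypar symbols, so the library itself is missing from the link line; with GNU ld, libraries must also come after the object files that use them. Assuming the library was installed as libkahypar under /usr/local/lib, a command along these lines should work (a sketch, not verified against this particular setup):

```shell
# -lkahypar added, and all -l flags moved after the source file,
# since GNU ld resolves libraries left to right.
g++ -std=c++14 -DNDEBUG -O3 -I/usr/local/include -I/usr/include \
    program.cc -o program \
    -L/usr/local/lib -L/usr/lib -lkahypar -lboost_program_options
```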

Refinements after undoing Sparsification

Currently, we just undo sparsification after uncoarsening. However, since this actually changes the structure of the hypergraph, we could run a refinement algorithm afterwards in order to further improve the 'unsparsified' version.

Python windows install unicode error

When installing the kahypar python interface on Windows, a Unicode error is raised when long_description is set from the README.md.

This should be easily fixed (in setup.py) with :

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

Note, though, that the encoding keyword argument is Python 3 only.

Custom target block weights and python interface issue for kahypar 1.1.2

Hi,

I am trying to use the python interface, installing it via pip. The version installed via pip is 1.1.1, and it does not seem to contain the custom block weights in the context object:

[('__init__',
  <bound method PyCapsule.__init__ of <kahypar.Context object at 0x2aaaabf1a030>>),
 ('loadINIconfiguration',
  <bound method PyCapsule.loadINIconfiguration of <kahypar.Context object at 0x2aaaabf1a030>>),
 ('setEpsilon',
  <bound method PyCapsule.setEpsilon of <kahypar.Context object at 0x2aaaabf1a030>>),
 ('setK',
  <bound method PyCapsule.setK of <kahypar.Context object at 0x2aaaabf1a030>>),
 ('setSeed',
  <bound method PyCapsule.setSeed of <kahypar.Context object at 0x2aaaabf1a030>>),
 ('suppressOutput',
  <bound method PyCapsule.suppressOutput of <kahypar.Context object at 0x2aaaabf1a030>>)]

I believe this option was added in 1.1.2. If so, do you have any idea why pip cannot install kahypar 1.1.2?
If I run pip install --user kahypar or python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar, it installs version 1.1.1.
And if I try to force 1.1.2 by specifying kahypar==1.1.2, it outputs:

Looking in indexes: https://pypi.org/simple/
Collecting kahypar==1.1.2
  ERROR: Could not find a version that satisfies the requirement kahypar==1.1.2 (from versions: 1.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.1)
ERROR: No matching distribution found for kahypar==1.1.2

It is odd, since when I run pip search kahypar, it is aware that 1.1.2 is the latest version and that it is there:

kahypar (1.1.2)  - Python Inferface for the Karlsruhe Hypergraph Partitioning Framework (KaHyPar)
  INSTALLED: 1.1.1
  LATEST:    1.1.2

Any idea what may be wrong and what is going on?
I am using python version 3.7.4.

Thanks!

Linking CXX executable KaHyPar: Undefined symbols for architecture x86_64

System specifications

Mac OS High Sierra
Version 10.13.1

> g++ --version
g++ (GCC) 6.3.1 20170510 (for GNAT GPL 2017 20170515)
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
See your AdaCore support agreement for details of warranty and support.
If you do not have a current support agreement, then there is absolutely
no warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE.
> clang++ --version
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.2.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

I've just installed doxygen and boost using respectively the following commands

brew install doxygen

and

brew install boost

Now, I've tried to follow the instructions to install kahypar. First, I cloned this repository into a local directory as follows:

git clone --depth=1 --recursive git@github.com:SebastianSchlag/kahypar.git

then, from inside the downloaded kahypar directory, I created a build subdirectory and changed into it as follows:

mkdir build && cd build

then I did

cmake .. -DCMAKE_BUILD_TYPE=RELEASE

and this is the output

[build] > cmake .. -DCMAKE_BUILD_TYPE=RELEASE
-- The CXX compiler identification is GNU 6.3.1
-- The C compiler identification is GNU 6.3.1
-- Checking whether CXX compiler has -isysroot
-- Checking whether CXX compiler has -isysroot - yes
-- Checking whether CXX compiler supports OSX deployment target flag
-- Checking whether CXX compiler supports OSX deployment target flag - yes
-- Check for working CXX compiler: /usr/local/gnat/bin/c++
-- Check for working CXX compiler: /usr/local/gnat/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Checking whether C compiler has -isysroot
-- Checking whether C compiler has -isysroot - yes
-- Checking whether C compiler supports OSX deployment target flag
-- Checking whether C compiler supports OSX deployment target flag - yes
-- Check for working C compiler: /usr/local/gnat/bin/gcc
-- Check for working C compiler: /usr/local/gnat/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE  
-- Found Threads: 
-- Found PythonInterp: /Library/Frameworks/Python.framework/Versions/2.7/bin/python (found version "2.7.10") 
-- Boost version: 1.65.1
-- Found the following Boost libraries:
--   program_options
-- Boost Include: /usr/local/include
-- Boost Library Dirs: /usr/local/lib
-- Boost Libraries: /usr/local/lib/libboost_program_options-mt.dylib
-- Found Doxygen: /usr/local/bin/doxygen (found version "1.8.13") found components:  doxygen dot 
-- Found Git: /usr/bin/git (found version "2.14.3 (Apple Git-98)") 
-- Detected git refspec refs/heads/master sha 42cc3fe7dc7eb88c73bb32ac468e0aef10adf3b7
-- Performing Test KAHYPAR_HAS_CRC32
-- Performing Test KAHYPAR_HAS_CRC32 - Success
-- CMAKE_CXX_FLAGS:  -W -Wall -Wextra  -Wunused  -Wmaybe-uninitialized -Wfatal-errors -Wcast-qual -Woverloaded-virtual -Wredundant-decls -Winit-self -pedantic -Wnoexcept -DPARANOID  -Weffc++ -Wno-unused-function -msse4.2 -mcrc32
-- CMAKE_CXX_FLAGS_RELEASE: -O3 -DNDEBUG -O3 -mtune=native -march=native
-- CMAKE_CXX_FLAGS_DEBUG: -g -pthread -g3
-- Configuring done
-- Generating done
-- Build files have been written to: /some_path/kahypar/build

then I tried to make from within the build folder as follows

 make

and this is the output

[build] > make
Scanning dependencies of target gmock
[  1%] Building CXX object external_tools/googletest/googlemock/CMakeFiles/gmock.dir/__/googletest/src/gtest-all.cc.o
[  2%] Building CXX object external_tools/googletest/googlemock/CMakeFiles/gmock.dir/src/gmock-all.cc.o
[  3%] Linking CXX static library libgmock.a
[  3%] Built target gmock
Scanning dependencies of target gmock_main
[  3%] Building CXX object external_tools/googletest/googlemock/CMakeFiles/gmock_main.dir/__/googletest/src/gtest-all.cc.o
[  4%] Building CXX object external_tools/googletest/googlemock/CMakeFiles/gmock_main.dir/src/gmock-all.cc.o
[  5%] Building CXX object external_tools/googletest/googlemock/CMakeFiles/gmock_main.dir/src/gmock_main.cc.o
[  6%] Linking CXX static library libgmock_main.a
[  6%] Built target gmock_main
Scanning dependencies of target gtest
[  6%] Building CXX object external_tools/googletest/googlemock/gtest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
[  7%] Linking CXX static library libgtest.a
[  7%] Built target gtest
Scanning dependencies of target gtest_main
[  8%] Building CXX object external_tools/googletest/googlemock/gtest/CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
[  9%] Linking CXX static library libgtest_main.a
[  9%] Built target gtest_main
Scanning dependencies of target KaHyPar
[ 10%] Building CXX object kahypar/application/CMakeFiles/KaHyPar.dir/kahypar.cc.o
In file included from /some_path/kahypar/kahypar/partition/coarsening/full_vertex_pair_coarsener.h:34:0,
                 from /some_path/kahypar/kahypar/kahypar.h:26,
                 from /some_path/kahypar/kahypar/application/command_line_options.h:36,
                 from /some_path/kahypar/kahypar/application/kahypar.cc:26:
/some_path/kahypar/kahypar/partition/coarsening/policies/rating_score_policy.h: In static member function 'static kahypar::RatingType kahypar::EdgeFrequencyScore::score(const Hypergraph&, kahypar::HyperedgeID)':
/some_path/kahypar/kahypar/partition/coarsening/policies/rating_score_policy.h:38:84: warning: unused parameter 'hypergraph' [-Wunused-parameter]
   KAHYPAR_ATTRIBUTE_ALWAYS_INLINE static inline RatingType score(const Hypergraph& hypergraph, const HyperedgeID he) {
                                                                                    ^~~~~~~~~~
/some_path/kahypar/kahypar/partition/coarsening/policies/rating_score_policy.h:38:114: warning: unused parameter 'he' [-Wunused-parameter]
   KAHYPAR_ATTRIBUTE_ALWAYS_INLINE static inline RatingType score(const Hypergraph& hypergraph, const HyperedgeID he) {
                                                                                                                  ^~
In file included from /some_path/kahypar/kahypar/application/kahypar.cc:26:0:
/some_path/kahypar/kahypar/application/command_line_options.h: In lambda function:
/some_path/kahypar/kahypar/application/command_line_options.h:75:54: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       if (context.partition.hyperedge_size_threshold == -1) {
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
[ 11%] Linking CXX executable KaHyPar
Undefined symbols for architecture x86_64:
  "boost::program_options::to_internal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)", referenced from:
      boost::program_options::basic_command_line_parser<char>::basic_command_line_parser(int, char const* const*) in kahypar.cc.o
  "boost::program_options::validation_error::get_template[abi:cxx11](boost::program_options::validation_error::kind_t)", referenced from:
      boost::program_options::validation_error::validation_error(boost::program_options::validation_error::kind_t, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int) in kahypar.cc.o
  "boost::program_options::basic_parsed_options<char> boost::program_options::parse_config_file<char>(std::basic_istream<char, std::char_traits<char> >&, boost::program_options::options_description const&, bool)", referenced from:
      kahypar::processCommandLineInput(kahypar::Context&, int, char**) in kahypar.cc.o
      kahypar::parseIniToContext(kahypar::Context&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) in kahypar.cc.o
  "boost::program_options::options_description::options_description(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, unsigned int)", referenced from:
      kahypar::createGeneralOptionsDescription(kahypar::Context&, int) in kahypar.cc.o
      kahypar::createPreprocessingOptionsDescription(kahypar::Context&, int) in kahypar.cc.o
      kahypar::createCoarseningOptionsDescription(kahypar::Context&, int) in kahypar.cc.o
      kahypar::createInitialPartitioningOptionsDescription(kahypar::Context&, int) in kahypar.cc.o
      kahypar::createRefinementOptionsDescription(kahypar::Context&, int) in kahypar.cc.o
      kahypar::processCommandLineInput(kahypar::Context&, int, char**) in kahypar.cc.o
  "boost::program_options::invalid_option_value::invalid_option_value(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)", referenced from:
      void boost::program_options::validate<double, char>(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, double*, long) in kahypar.cc.o
      void boost::program_options::validate<long double, char>(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, long double*, long) in kahypar.cc.o
      void boost::program_options::validate<unsigned int, char>(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, unsigned int*, long) in kahypar.cc.o
      void boost::program_options::validate<int, char>(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, int*, long) in kahypar.cc.o
  "boost::program_options::error_with_option_name::error_with_option_name(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)", referenced from:
      boost::program_options::validation_error::validation_error(boost::program_options::validation_error::kind_t, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int) in kahypar.cc.o
  "boost::program_options::arg[abi:cxx11]", referenced from:
      boost::program_options::typed_value<unsigned int, char>::name[abi:cxx11]() const in kahypar.cc.o
      boost::program_options::typed_value<int, char>::name[abi:cxx11]() const in kahypar.cc.o
      boost::program_options::typed_value<bool, char>::name[abi:cxx11]() const in kahypar.cc.o
      boost::program_options::typed_value<double, char>::name[abi:cxx11]() const in kahypar.cc.o
      boost::program_options::typed_value<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char>::name[abi:cxx11]() const in kahypar.cc.o
      boost::program_options::typed_value<long double, char>::name[abi:cxx11]() const in kahypar.cc.o
  "boost::program_options::detail::cmdline::set_additional_parser(boost::function1<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>)", referenced from:
      boost::program_options::basic_command_line_parser<char>::extra_parser(boost::function1<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>) in kahypar.cc.o
  "boost::program_options::detail::cmdline::cmdline(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)", referenced from:
      boost::program_options::basic_command_line_parser<char>::basic_command_line_parser(int, char const* const*) in kahypar.cc.o
  "boost::program_options::validate(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, int)", referenced from:
      boost::program_options::typed_value<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char>::xparse(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) const in kahypar.cc.o
  "boost::program_options::validate(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, bool*, int)", referenced from:
      boost::program_options::typed_value<bool, char>::xparse(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) const in kahypar.cc.o
  "boost::program_options::operator<<(std::basic_ostream<char, std::char_traits<char> >&, boost::program_options::options_description const&)", referenced from:
      kahypar::processCommandLineInput(kahypar::Context&, int, char**) in kahypar.cc.o
  "boost::program_options::error_with_option_name::substitute_placeholders(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const", referenced from:
      vtable for boost::exception_detail::error_info_injector<boost::program_options::invalid_option_value> in kahypar.cc.o
      vtable for boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::program_options::invalid_option_value> > in kahypar.cc.o
      vtable for boost::exception_detail::error_info_injector<boost::program_options::validation_error> in kahypar.cc.o
      vtable for boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::program_options::validation_error> > in kahypar.cc.o
      vtable for boost::program_options::validation_error in kahypar.cc.o
      vtable for boost::program_options::invalid_option_value in kahypar.cc.o
  "boost::program_options::value_semantic_codecvt_helper<char>::parse(boost::any&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, bool) const", referenced from:
      vtable for boost::program_options::typed_value<int, char> in kahypar.cc.o
      vtable for boost::program_options::typed_value<unsigned int, char> in kahypar.cc.o
      vtable for boost::program_options::typed_value<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char> in kahypar.cc.o
      vtable for boost::program_options::typed_value<double, char> in kahypar.cc.o
      vtable for boost::program_options::typed_value<bool, char> in kahypar.cc.o
      vtable for boost::program_options::typed_value<long double, char> in kahypar.cc.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[2]: *** [kahypar/application/KaHyPar] Error 1
make[1]: *** [kahypar/application/CMakeFiles/KaHyPar.dir/all] Error 2
make: *** [all] Error 2

where /some_path is the directory on my file system into which I downloaded kahypar.

How can I solve this problem? I've seen questions on the web about similar linking issues, but since I've basically never used Boost and rarely build C++ applications, I'm not sure what the problem is.

Insufficient number of fruitless moves during IP local search

If k is large, top-level coarsening is stopped early so that the recursive-bisection-based IP can partition the hypergraph. The current number of fruitless moves (50) might not be enough in this case. An adaptive variant would be better here.
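One way to make the limit adaptive (a sketch based on an assumed scaling rule, not KaHyPar code) would be to scale it with the number of vertices remaining per block on the coarsest level:

```python
# Hypothetical sketch of an adaptive fruitless-move limit for IP local
# search. The formula and constants are illustrative assumptions, not
# KaHyPar's actual configuration.

def adaptive_fruitless_move_limit(num_coarse_vertices: int, k: int,
                                  base_limit: int = 50) -> int:
    """Scale the fruitless-move limit with the coarsest hypergraph size.

    If top-level coarsening stopped early (many vertices remain per
    block), allow proportionally more fruitless moves before aborting;
    never go below the current fixed limit of 50.
    """
    vertices_per_block = max(1, num_coarse_vertices // k)
    return max(base_limit, vertices_per_block // 2)

# A large coarse hypergraph gets a larger budget than the fixed 50:
print(adaptive_fruitless_move_limit(10_000, 8))  # → 625
print(adaptive_fruitless_move_limit(400, 8))     # → 50
```

With unit-size coarse hypergraphs the sketch degenerates to the current behavior, so it would only change the large-k case described above.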

Confusion about the function LabelPropagationInitialPartitoner::computeMaxGainMoveForUnssignedSourcePart(const HypernodeID hn)

Hi,
I'm studying hypergraph partitioning for VLSI circuits and found this library. It is great and I am learning a lot from it.
However, I got confused when reading the function LabelPropagationInitialPartitoner::computeMaxGainMoveForUnssignedSourcePart(const HypernodeID hn).
When the connectivity of a net is > 1, the tmp_scores of all connected parts are 0. How can it then determine the part with the maximum score?
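For illustration, the scoring scheme in question can be sketched as follows (standalone Python with hypothetical names; not the actual KaHyPar code). It shows why all-zero scores force a tie-break:

```python
import random

# Simplified, standalone sketch of label-propagation-style max-gain
# scoring. Identifiers and data layout are illustrative assumptions,
# not KaHyPar's actual data structures.

def max_score_part(incident_nets, net_weight, connectivity, single_part,
                   num_parts, rng):
    """Pick the block with the highest tmp_score for one vertex.

    Only a net whose pins all lie in one block (connectivity == 1)
    adds its weight to that block's score; a net with connectivity > 1
    adds nothing. Hence, if every incident net is cut, all scores are
    0 and the "maximum" is a tie that a secondary rule must break
    (here: randomly).
    """
    tmp_scores = [0] * num_parts
    for net in incident_nets:
        if connectivity[net] == 1:
            tmp_scores[single_part[net]] += net_weight[net]
    best = max(tmp_scores)
    candidates = [p for p, s in enumerate(tmp_scores) if s == best]
    return rng.choice(candidates)

# One uncut net (net 0, all pins in block 2) pulls the vertex there:
print(max_score_part([0, 1], {0: 5, 1: 3}, {0: 1, 1: 2}, {0: 2},
                     3, random.Random(0)))  # → 2
```

So when every incident net has connectivity > 1, the argmax over tmp_scores is effectively arbitrary, and the implementation has to fall back on a secondary criterion such as randomness or balance.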

I have read the contributing guidelines but have not signed them yet. I will sign them if I contribute code.

Best wishes,
Frandy

MaxPin IP slow on hypergraphs with large hyperedges

Using MaxPin-based IP algorithms is very slow when the coarsened hypergraph still has large hyperedges. Can we speed up the gain-calculation/gain-update code so that we can re-enable both IP algorithms?
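A standard way to avoid rescanning the pins of large nets is to maintain per-(net, block) pin counts and update them incrementally on each move, so that connectivity and gain values can be derived from small counters instead of iterating over all pins of each (possibly huge) hyperedge. A minimal standalone sketch (illustrative names, not KaHyPar's actual gain cache):

```python
from collections import defaultdict

# Minimal sketch: keep count[net][block] = number of pins of `net` in
# `block`, so a vertex move updates a few counters instead of rescanning
# every pin of each incident hyperedge. Names are illustrative.

class PinCounts:
    def __init__(self, pins_of_net, block_of_vertex):
        self.count = defaultdict(lambda: defaultdict(int))
        self.block_of_vertex = dict(block_of_vertex)
        for net, pins in pins_of_net.items():
            for v in pins:
                self.count[net][self.block_of_vertex[v]] += 1

    def move(self, v, to_block, incident_nets):
        """Cost proportional to the number of incident nets, not net size."""
        from_block = self.block_of_vertex[v]
        for net in incident_nets:
            self.count[net][from_block] -= 1
            self.count[net][to_block] += 1
        self.block_of_vertex[v] = to_block

    def connectivity(self, net):
        """Number of blocks the net spans, read off the counters."""
        return sum(1 for c in self.count[net].values() if c > 0)

pc = PinCounts({0: [0, 1, 2]}, {0: 0, 1: 0, 2: 1})
print(pc.connectivity(0))  # → 2
pc.move(2, 0, [0])
print(pc.connectivity(0))  # → 1
```

Whether this matches the structure of the existing gain-update code is an open question; the sketch only illustrates the counter-based technique.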

Linking CXX executable - Mac

Hi all,

I am a beginner in the C++/Mac world and am having an issue very similar to [https://github.com//issues/19#issue-281404025] with linking the CXX executable against Boost. That issue explains some of the details better than I could, so I have referenced it. I have the same system specifications and have installed Boost using both brew and MacPorts. However, the system I am trying to run must be compiled (and has been compiled by others) using C++98, not C++11. I have tried

set CMAKE_CXX_FLAGS -std=c++98

and then compiled using

make -j 4

Some of the errors I get include:

ld: warning: -L path '/usr/local/lib/libgsl.dylib' is not a directory
Undefined symbols for architecture x86_64:
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [samos] Error 1
make[1]: *** [src/CMakeFiles/samos.dir/all] Error 2
make: *** [all] Error 2

Does anyone have any additional information/advice for how to approach this problem? Thanks.

Build fails with `-DKAHYPAR_USE_MINIMAL_BOOST=ON`

Hello!

I'm trying to build the KaHyPar Python module without installing the Boost library on my system. I'm using the following commands:

mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_PYTHON_INTERFACE=ON -DKAHYPAR_USE_MINIMAL_BOOST=ON
cmake --build . --target kahypar_python -- -j2

Everything seems to work fine until linking, where I get an error related to the Boost library:

[...]
[100%] Linking CXX shared module kahypar.cpython-38-x86_64-linux-gnu.so
/usr/bin/ld: ../libmini_boost.a(config_file.cpp.o): warning: relocation against `_ZTVN5boost15program_options25error_with_no_option_nameE' in read-only section `.text'
/usr/bin/ld: ../libmini_boost.a(cmdline.cpp.o): relocation R_X86_64_PC32 against symbol `_ZTVN5boost15program_options26invalid_command_line_styleE' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
gmake[3]: *** [python/CMakeFiles/kahypar_python.dir/build.make:98: python/kahypar.cpython-38-x86_64-linux-gnu.so] Error 1
gmake[2]: *** [CMakeFiles/Makefile2:3592: python/CMakeFiles/kahypar_python.dir/all] Error 2
gmake[1]: *** [CMakeFiles/Makefile2:3599: python/CMakeFiles/kahypar_python.dir/rule] Error 2
gmake: *** [Makefile:1635: kahypar_python] Error 2

My system is a Linux workstation:

uname -a
> Linux 5.15.0-2-amd64 #1 SMP Debian 5.15.5-2 (2021-12-18) x86_64 GNU/Linux

Thanks for your help!

Build fails on Windows 10

While building on Windows, I get this error:

-- Found PythonInterp: C:/Program Files/Python39/python.exe (found version "3.9.1")
CMake Error at C:/Program Files/CMake/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find Boost (missing: Boost_INCLUDE_DIR program_options) (Required
  is at least version "1.69")
Call Stack (most recent call first):
  C:/Program Files/CMake/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
  C:/Program Files/CMake/share/cmake-3.20/Modules/FindBoost.cmake:2345 (find_package_handle_standard_args)
  CMakeLists.txt:174 (find_package)

make error

Hi there,

When trying to install the system by running "make" after "cmake .. -DCMAKE_BUILD_TYPE=RELEASE" as stated in the README file, I got the following error message:

"""
In file included from /mnt/homes/khngnathan/kahypar/external_tools/WHFC/algorithm/cutter_state.h:5:0,
from /mnt/homes/khngnathan/kahypar/external_tools/WHFC/algorithm/dinic.h:2,
from /mnt/homes/khngnathan/kahypar/kahypar/partition/refinement/flow/2way_hyperflowcutter_refiner.h:28,
from /mnt/homes/khngnathan/kahypar/kahypar/partition/factories.h:36,
from /mnt/homes/khngnathan/kahypar/kahypar/partition/registries/register_coarsening_algorithms.h:34,
from /mnt/homes/khngnathan/kahypar/kahypar/kahypar.h:23,
from /mnt/homes/khngnathan/kahypar/kahypar/application/command_line_options.h:37,
from /mnt/homes/khngnathan/kahypar/kahypar/application/kahypar.cc:22:
/mnt/homes/khngnathan/kahypar/external_tools/WHFC/algorithm/../datastructure/border.h: In member function ‘sub_range<std::vector > whfc::PersistentSet<T, trackElements>::persistent_entries() const’:
/mnt/homes/khngnathan/kahypar/external_tools/WHFC/algorithm/../datastructure/border.h:23:20: error: missing template arguments before ‘(’ token
return sub_range(elements, persistent_begin, persistent_end);
^
compilation terminated due to -Wfatal-errors.
kahypar/application/CMakeFiles/KaHyPar.dir/build.make:81: recipe for target 'kahypar/application/CMakeFiles/KaHyPar.dir/kahypar.cc.o' failed
make[2]: *** [kahypar/application/CMakeFiles/KaHyPar.dir/kahypar.cc.o] Error 1
CMakeFiles/Makefile2:851: recipe for target 'kahypar/application/CMakeFiles/KaHyPar.dir/all' failed
make[1]: *** [kahypar/application/CMakeFiles/KaHyPar.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
make: *** [all] Error 2
"""

I am using Ubuntu 18.04.4 LTS with CMake version 3.18.3 and Boost version 1.74.0. Do you have any hint as to why this problem occurs? Thanks in advance!

Discussing Refactorings and Cleanup

Hi Tobias,
at some point we have to discuss all the refactorings that still have to be done after merging the SEA pull request. This issue should serve as a reminder to actually do this.

Add CMake flags to disable building the Python binding and the application

I think this will make building the shared library for other languages much easier when Python/pybind11 is not needed.

I currently have to use a patch here: https://github.com/JuliaPackaging/Yggdrasil/pull/3111/files#diff-04b01b818aba662637ce0b4bb535b08e4d9944ac0bad644cfcb1220b6991974dR37
because otherwise CMake complains that the default Python is not 32-bit on some platforms (not sure whether you want to support 32-bit, though).

Configuration of Min-Hash Sparsifier for hypergraphs with vertex weights

The current sparsifier configuration is problematic for vertex-weighted hypergraphs because of the '--p-sparsifier-max-cluster-size' parameter, which is currently set to 10 (allowing only small clusters). For hypergraphs with vertex weights > 10, the sparsifier therefore does nothing except waste running time. How can we fix this?
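One possible fix, sketched here under the assumption that the parameter bounds a cluster's total vertex weight (hypothetical formula, not a tested patch), is to scale the bound with the average vertex weight instead of using a fixed count:

```python
# Hypothetical weight-aware replacement for a fixed max-cluster-size of
# 10. The scaling rule is an illustrative assumption, not KaHyPar code.

def weight_aware_max_cluster_size(vertex_weights, base_max_cluster_size=10):
    """Scale the sparsifier's cluster-size bound with the average vertex weight.

    With unit weights this reduces to the current fixed value; with
    heavier vertices the bound grows so clustering remains possible.
    """
    avg_weight = sum(vertex_weights) / max(1, len(vertex_weights))
    return max(base_max_cluster_size,
               round(base_max_cluster_size * avg_weight))

print(weight_aware_max_cluster_size([1, 1, 1, 1]))      # → 10
print(weight_aware_max_cluster_size([20, 30, 40, 50]))  # → 350
```

Whether the average, the maximum, or some quantile of the weights is the right scaling factor would have to be decided experimentally.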

Precompiled python interface issues on Mac

System specifications:
MacOS 10.15.5

I installed the precompiled Python interface of KaHyPar using python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar.

I created a file containing the Python example code. When I run the file from the Terminal, I receive the error: Could not load context file at: <path/to/config>/km1_kKaHyPar_dissertation.ini

I was having the same issues as in Issue #58 when I built KaHyPar on my machine, so I decided to try the precompiled version instead, but it seems there is some other problem with it.

Hide Kahypar Result Message

When I run experiments, KaHyPar's result message is usually not needed.
Is there a way to hide the result message below using the Python interface?
(screenshot of the result message)
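Independently of whether the Python API exposes a quiet flag, output that the C++ core writes to stdout can be silenced with a generic file-descriptor redirection; this is a standalone sketch, and the kahypar.partition call in the comment is only illustrative:

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_native_stdout():
    """Temporarily redirect OS-level stdout (fd 1) to /dev/null.

    This also silences output written directly by C/C++ extension
    modules, which bypasses sys.stdout. Generic technique, not
    KaHyPar-specific.
    """
    sys.stdout.flush()
    saved_fd = os.dup(1)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, 1)
        yield
    finally:
        os.dup2(saved_fd, 1)
        os.close(devnull)
        os.close(saved_fd)

# Illustrative usage around the partitioning call:
# with suppress_native_stdout():
#     kahypar.partition(hypergraph, context)
```

The redirection is restored even if the wrapped call raises, so error messages after the block still appear normally.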

Build on Windows platforms needs different shared library targets

When trying to build this on a Windows platform, I came across the following error, which you can also see in the failing Windows build on AppVeyor linked on the project's main GitHub page:

CMake Error at lib/CMakeLists.txt:13 (install):
  install Library TARGETS given no DESTINATION

This appears to be because of the following lines:

install(TARGETS kahypar
		LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
		PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})

The catch seems to be that shared library targets have to be handled differently depending on the platform. The CMake documentation reads:

For non-DLL platforms shared libraries are treated as LIBRARY targets, except that those marked with the FRAMEWORK property are treated as FRAMEWORK targets on OS X. For DLL platforms the DLL part of a shared library is treated as a RUNTIME target and the corresponding import library is treated as an ARCHIVE target. All Windows-based systems including Cygwin are DLL platforms. The ARCHIVE, LIBRARY, RUNTIME, and FRAMEWORK arguments change the type of target to which the subsequent properties apply.

I was able to get it to build on my machine based on this StackExchange post by changing the code as follows:

if(WIN32)
	install(TARGETS kahypar
		RUNTIME DESTINATION ${CMAKE_INSTALL_LIBDIR}
		PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
else()
	install(TARGETS kahypar
		LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
		PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
endif()

I don't have much experience with CMake, so I can't guarantee this fix will work for everyone. I also ran into an issue with the Windows build where I had to adjust the linking behavior for Boost (with add_definitions(-DBOOST_ALL_NO_LIB)), but I won't create a new issue for it until I'm sure it wasn't caused by this fix of mine.

ImportError

macOS 10.15.6

I installed the precompiled version using:
python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar

and got the message:
Looking in indexes: https://pypi.org/simple/
Requirement already satisfied: kahypar in /opt/anaconda3/lib/python3.8/site-packages (1.1.5)

But when importing kahypar in a Jupyter notebook:
import kahypar as kahypar

I got the following ImportError:

ImportError Traceback (most recent call last)
in
1 import os
----> 2 import kahypar as kahypar

ImportError: dlopen(/opt/anaconda3/lib/python3.8/site-packages/kahypar.cpython-38-darwin.so, 2): Library not loaded: /usr/local/opt/boost/lib/libboost_program_options-mt.dylib
Referenced from: /opt/anaconda3/lib/python3.8/site-packages/kahypar.cpython-38-darwin.so
Reason: image not found

Any help is appreciated!

Context initialization

Dear KaHyPar developers,
I am trying to integrate KaHyPar into d4 (http://www.cril.univ-artois.fr/kc/d4.html) to replace a component that is not open source.
However, KaHyPar requires an external file to initialize a context, which is not really optimal for me.
Can a context be initialized without a configuration file?

BTW, there are a lot of configuration files and I do not know which one is most relevant to my application.
The hypergraphs I consider are not that big, and I may call KaHyPar several times on such small hypergraphs.
Actually, I would like to know whether different levels of optimization are possible (e.g., level 1: speed is mandatory; level 2: quality is mandatory; ...).
Best regards,
JM
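Until a fully programmatic API exists, one workaround is to embed the chosen preset as a string in the host application and write it to a temporary file at runtime, so no separate configuration file has to ship with d4. A sketch follows; the config body is a placeholder (in practice you would paste the contents of a shipped preset such as km1_kKaHyPar_dissertation.ini), and the loadINIconfiguration call shown in the comment follows the Python interface's usual usage:

```python
import os
import tempfile

# Placeholder: in a real integration, paste the full contents of one of
# the .ini presets shipped with KaHyPar here
# (e.g. config/km1_kKaHyPar_dissertation.ini).
EMBEDDED_CONFIG = """\
# ... contents of a shipped KaHyPar preset ...
"""

def write_embedded_config() -> str:
    """Write the embedded configuration to a temp file and return its path."""
    fd, path = tempfile.mkstemp(suffix=".ini")
    with os.fdopen(fd, "w") as f:
        f.write(EMBEDDED_CONFIG)
    return path

# Illustrative usage with the Python interface:
# context = kahypar.Context()
# context.loadINIconfiguration(write_embedded_config())
```

This does not answer which preset fits a given application; a speed/quality trade-off would still have to be chosen by picking (and embedding) the corresponding preset.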

Fix performance regression for larger imbalances

Partitioning ISPD98_ibm18.hgr with k=2, -e 0.1, and -o km1 using the latest kKaHyPar configuration seems to be incredibly slow. Since it runs quite fast with smaller imbalances, I assume we have a performance regression somewhere in the SEA'20 flow code.

Code version 1.1.0 (https://github.com/kahypar/kahypar/releases/tag/1.1.0) does not seem to exhibit this problem.
On my MacBook, it could partition ibm18.hgr within 37 seconds.

So I assume something is wrong/different with the scaling of the flow networks. This may explain the poor running time results reported in https://arxiv.org/pdf/2012.13618.pdf for 10% imbalance.
