
torchcraft / torchcraftai

A platform that lets you build agents to learn to play StarCraft: Brood War.

Home Page: https://torchcraft.github.io/TorchCraftAI

License: MIT License

CMake 8.52% C 0.08% C++ 81.02% Python 0.40% JavaScript 4.45% CSS 1.71% Shell 0.10% HTML 3.72%

torchcraftai's Introduction

TorchCraftAI

TorchCraftAI is a platform that lets you build agents to play (and learn to play) StarCraft®: Brood War®†. TorchCraftAI includes:

  • A modular framework for building StarCraft agents
  • CherryPi, a bot which plays complete games of StarCraft (1st place SSCAIT 2017-18)
  • A reinforcement learning environment with minigames, models, and training loops
  • TorchCraft support for TCP communication with StarCraft and BWAPI
  • Support for Linux, Windows, and OSX

Get started

See the guides for:

Documentation

Tutorials

Licensing

We encourage you to experiment with TorchCraftAI! See LICENSE, plus more on contributing and our code of conduct.

†: StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this repository should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.

torchcraftai's People

Contributors

abdullahselek, danthe3rd, ebetica, jgehring


torchcraftai's Issues

How can I get the external play script (/workspace/scripts/ladder/play)?

I'm trying to run the bo-switch training script and found that it uses the GameVsBotInWine class to create scenarios.
As I understand it, GameVsBotInWine runs a StarCraft game against another AI bot specified via "vars", and uses PlayScript to launch the game.
But PlayScript expects a script at /workspace/scripts/ladder/play, which I can't find anywhere in the repository. Where can I get it?

Build Error when building CherryPi

My build environment is Windows 10 with CUDA Toolkit 10.2 and Python 3.7. When I configure CherryPi with the command below, the following errors occur:

cmake .. -DMSVC=true -DZMQ_LIBRARY="../3rdparty/zmq.lib" -DZMQ_INCLUDE_DIR="../3rdparty/libzmq/include" -DGFLAGS_LIBRARY="../3rdparty/gflags_static.lib" -DGFLAGS_INCLUDE_DIR="../3rdparty/gflags/build/include" -DGLOG_ROOT_DIR="../3rdparty/glog" -DCMAKE_CXX_FLAGS_RELEASE="/MP /EHsc" -G "Visual Studio 15 2017 Win64"

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
LIB_glog
linked by target "common" in directory D:/Coding/TorchCraftAI/common
linked by target "cherrypi" in directory D:/Coding/TorchCraftAI/src
So I traced back to find out what happened and found this warning:
CMake Warning at CMakeLists.txt:81 (FIND_PACKAGE):
Found package configuration file:

D:/Coding/TorchCraftAI/3rdparty/glog/build/glog-config.cmake
but it set glog_FOUND to FALSE so package "glog" is considered to be NOT
FOUND. Reason given by package:

glog could not be found because dependency gflags could not be found.

The documentation says this build command may complain about a missing GFLAGS_LIBRARY, so I continued with the next command, which then failed with:

MSBUILD : error MSB1009: project file does not exist

It appears that CherryPi.sln is missing, presumably because the previous configure step failed. Could you please help me solve this problem? Thanks.
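The root cause in the log above is that glog's package configuration cannot locate gflags. A possible workaround (a sketch only, assuming gflags was built with CMake in ../3rdparty/gflags/build so that a gflags-config.cmake exists there) is to pass gflags_DIR so that find_package(gflags) can succeed:

```shell
# Sketch: same as the failing command, plus -Dgflags_DIR pointing at the
# directory containing gflags' exported package configuration.
# The ../3rdparty/gflags/build location is an assumption based on this issue.
cmake .. -DMSVC=true -DZMQ_LIBRARY="../3rdparty/zmq.lib" -DZMQ_INCLUDE_DIR="../3rdparty/libzmq/include" -DGFLAGS_LIBRARY="../3rdparty/gflags_static.lib" -DGFLAGS_INCLUDE_DIR="../3rdparty/gflags/build/include" -Dgflags_DIR="../3rdparty/gflags/build" -DGLOG_ROOT_DIR="../3rdparty/glog" -DCMAKE_CXX_FLAGS_RELEASE="/MP /EHsc" -G "Visual Studio 15 2017 Win64"
```

Once the configure step completes without errors, the solution file should be generated and the subsequent MSBuild step will have something to build.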

TorchCraftAI failed to build

I followed the official website tutorial up to this step: python setup.py build
I don't understand what this setup.py build step does. I hope the tutorial can be more detailed, because I am a beginner. Thank you.

Unable to load replay with CherryVis

I have managed to build and run CherryVis, both on my local machine and in a Docker container, but in both cases I am not able to load a replay. When I click on the replay file, it writes "fetching : OK" in the status bar, but nothing else happens.

To aid in reproducibility, I've uploaded a repo with my Docker setup here: https://github.com/bmnielsen/cherryvis-docker

So the full reproduction steps are:

  1. git clone https://github.com/bmnielsen/cherryvis-docker
  2. cd cherryvis-docker
  3. docker build -t cvis .
  4. docker run -p 8770:8770 -it cvis
  5. Open localhost:8770 in a browser (I'm using Chrome on Windows 10)
  6. Select the MPQ files
  7. Click on one of the sample replays

The browser console logs this error when loading initially:

GET http://localhost:8770/static/replays/viewer/test.js net::ERR_ABORTED 404 (Not Found)

Nothing is logged to the browser console when clicking the replay file.

How to save replay?

Hi,
I have tried setting save_replay in the bwapi.ini file, and also tried setting the BWAPI_CONFIG_AUTO_MENU__SAVE_REPLAY environment variable.
But I still can't find any saved replay files.
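For reference, a hedged sketch of the bwapi.ini fragment this setting belongs in, assuming a BWAPI 4.x-style configuration (verify the key against the bwapi.ini template shipped with your BWAPI version):

```ini
; Assumed BWAPI 4.x-style fragment -- not taken from this repository.
[auto_menu]
; The path is relative to the StarCraft directory, and the target
; folder must already exist or no replay will be written.
save_replay = maps\replays\LastReplay.rep
```

If the environment-variable route is used instead, BWAPI_CONFIG_AUTO_MENU__SAVE_REPLAY must be visible to the StarCraft process itself, not just the shell that launches the bot; also check that the bwapi.ini actually being read is the one next to the injected BWAPI.dll.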

I had trouble running CherryPi

I'm a computer novice, and it's too difficult for me to run CherryPi properly.
Is there a detailed tutorial for beginners?

When I run cherrypi, this happens:

C:\windows\system32>F:\StarCraft\bwapi-data\cherrypi.exe -rlbp_model bwapi-data\AI\rlbp_model.bin -modules CreateGatherAttack,Strategy,GenericAutoBuild,RLBuildingPlacer,Builder,Tactics,SquadCombat,Scouting,Gatherer,Harass,StaticDefenceFocusFireModule,RecurrentBos -hostname 127.0.0.1 -bos_model bwapi-data\AI\bos_model.bin -bos_min_advantage 0.08 -vmodule modules=1 -bos_start 6 -bos_start_vs_rush -bos_model_type lstm -bos_num_layers 1 -bos_hid_dim 2048 -bos_bo_input
I05696/XXXXX [main.cpp:38] Connected to TorchCraft server at 127.0.0.1:11111
I05696/XXXXX [state.cpp:435] Enemy: Shelak Tribe playing Protoss
I05696/XXXXX [state.cpp:439] Map: iCCup Fighting spirit1.3.scx
I05696/XXXXX [state.cpp:440] Game is being played at LF6
I05696/XXXXX [banditconfigurations.cpp:160] Using default AIIDE bandit configuration
F1007 18:20:56.238013 5696 main.cpp:92] Exception: Read folder does not exist: ./bwapi-data/read
F05696/XXXXX [main.cpp:92] Exception: Read folder does not exist: ./bwapi-data/read
*** Check failure stack trace: ***
@ 00007FF7152F7125 private: static struct cereal::detail::bind_to_archives<class torch::optim::SGD,struct cereal::detail::`anonymous namespace'::polymorphic_binding_tag> & __ptr64 __cdecl cereal::detail::StaticObject<struct cereal::detail::bind_to_archives<class torch::opt
@ 00007FF7152F663C private: static struct cereal::detail::bind_to_archives<class torch::optim::SGD,struct cereal::detail::`anonymous namespace'::polymorphic_binding_tag> & __ptr64 __cdecl cereal::detail::StaticObject<struct cereal::detail::bind_to_archives<class torch::opt
@ 00007FF7153238FD private: static struct cereal::detail::bind_to_archives<class torch::optim::SGD,struct cereal::detail::`anonymous namespace'::polymorphic_binding_tag> & __ptr64 __cdecl cereal::detail::StaticObject<struct cereal::detail::bind_to_archives<class torch::opt
@ 00007FFC9B9F1030 (unknown)
@ 00007FFC9B9F3298 _is_exception_typeof
@ 00007FFCB1C241C3 RtlCaptureContext
@ 00007FF7151936DE private: static struct cereal::detail::bind_to_archives<class torch::optim::SGD,struct cereal::detail::`anonymous namespace'::polymorphic_binding_tag> & __ptr64 __cdecl cereal::detail::StaticObject<struct cereal::detail::bind_to_archives<class torch::opt
@ 00007FF715317C85 private: static struct cereal::detail::bind_to_archives<class torch::optim::SGD,struct cereal::detail::`anonymous namespace'::polymorphic_binding_tag> & __ptr64 __cdecl cereal::detail::StaticObject<struct cereal::detail::bind_to_archives<class torch::opt
@ 00007FFCB01F1FE4 BaseThreadInitThunk
@ 00007FFCB1BEEF91 RtlUserThreadStart

How to use the trained model of micro?

I trained a model with the micro tutorial and would like to know how to use it to play a game.
When I run cherrypi[.exe] -help, I only see model flags such as -bos_model, -rlbp_model, and -worker_rush_attack_model. So how can we load and use the micro model we trained?

error when running pretrained model

Hi,
I've just installed and compiled TorchCraftAI on an Ubuntu 16.04 server. cherrypi runs without any error, but when I download the model files provided at https://torchcraft.github.io/TorchCraftAI/docs/play-games.html and try to run:

cherrypi[.exe] -hostname 127.0.0.1 -port 11111 -bos_model bwapi-data/AI/bos_model.bin -bp_model bwapi-data/AI/bp_model.bin

I get the following error. It looks like a mismatch between the tensor sizes in the model file and the model code in TorchCraftAI. How can I fix this?

I43403/XXXXX 10/11 12:10:29 [main.cpp:38] Using TorchCraft server at 127.0.0.1:11111
I43403/XXXXX 10/11 12:10:29 [state.cpp:459] Playing Random vs Protoss () on 256x256 map micro-big.scm (Untitled Scenario) at LF2
I43403/XXXXX 10/11 12:10:29 [banditconfigurations.cpp:168] Using default tournament bandit configuration
I43403/XXXXX 10/11 12:10:29 [bandit.cpp:92] No history for opponent , initializing with default values
I43403/XXXXX 10/11 12:10:29 [bandit.cpp:274] Selecting build order with scoring algorithm expmoorolling
I43403/XXXXX 10/11 12:10:29 [strategy.cpp:484] This build order disables BOS
I43403/XXXXX 10/11 12:10:29 [bandit.cpp:92] No history for opponent , initializing with default values
I43403/XXXXX 10/11 12:10:29 [bandit.cpp:119] Got || saving history to ./bwapi-data/write/.json
I43403/XXXXX 10/11 12:10:29 [buildingplacer.cpp:226] Loading building placer model from bwapi-data/AI/bp_model.bin
I43403/00002 10/11 12:10:34 [genericautobuild.cpp:44] Running build 5pool
E1011 12:10:34.313184 43403 main.cpp:92] Exception: The expanded size of the tensor (64) must match the existing size (256) at non-singleton dimension 1.  Target sizes: [64, 64].  Tensor sizes: [256, 256] (inferExpandGeometry at /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/aten/src/ATen/ExpandUtils.cpp:75)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x59 (0x7f91818345f9 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libc10.so)
frame #1: at::inferExpandGeometry(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x6aa (0x7f9181e9e1aa in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #2: at::native::expand(at::Tensor const&, c10::ArrayRef<long>, bool) + 0x8c (0x7f91820deffc in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #3: at::TypeDefault::expand(at::Tensor const&, c10::ArrayRef<long>, bool) const + 0x2a (0x7f91822970ca in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #4: torch::autograd::VariableType::expand(at::Tensor const&, c10::ArrayRef<long>, bool) const + 0x1eb (0x7f9184abc0fb in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #5: <unknown function> + 0x8c7282 (0x7f918230c282 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #6: at::TypeDefault::copy_(at::Tensor&, at::Tensor const&, bool) const + 0x92 (0x7f91822ced72 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #7: cherrypi::BuildingPlacerSample::BuildingPlacerSample(cherrypi::State*, std::shared_ptr<cherrypi::UPCTuple>, cherrypi::BuildingPlacerSample::StaticData*) + 0xbfb (0x7f9185bff2cb in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #8: cherrypi::BuildingPlacerModule::upcWithPositionForBuilding(cherrypi::State*, cherrypi::UPCTuple const&, cherrypi::BuildType const*) + 0x1d5 (0x7f9185c512a5 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #9: cherrypi::BuildingPlacerModule::step(cherrypi::State*) + 0xdf1 (0x7f9185c531a1 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #10: cherrypi::BasePlayer::stepModule(std::shared_ptr<cherrypi::Module>) + 0xae (0x7f9185ac250e in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #11: cherrypi::BasePlayer::stepModules() + 0x45 (0x7f9185ac0425 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #12: cherrypi::BasePlayer::doStep() + 0x1d0 (0x7f9185ac23e0 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #13: cherrypi::BasePlayer::step() + 0x138 (0x7f9185ac2648 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #14: cherrypi::Player::run() + 0x18 (0x7f9185cee2f8 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #15: main + 0x3a3 (0x439663 in ./cherrypi)
frame #16: __libc_start_main + 0xf0 (0x7f9180ede830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #17: _start + 0x29 (0x43b369 in ./cherrypi)
E43403/00002 10/11 12:10:34 [main.cpp:92] Exception: The expanded size of the tensor (64) must match the existing size (256) at non-singleton dimension 1.  Target sizes: [64, 64].  Tensor sizes: [256, 256] (inferExpandGeometry at /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/aten/src/ATen/ExpandUtils.cpp:75)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x59 (0x7f91818345f9 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libc10.so)
frame #1: at::inferExpandGeometry(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x6aa (0x7f9181e9e1aa in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #2: at::native::expand(at::Tensor const&, c10::ArrayRef<long>, bool) + 0x8c (0x7f91820deffc in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #3: at::TypeDefault::expand(at::Tensor const&, c10::ArrayRef<long>, bool) const + 0x2a (0x7f91822970ca in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #4: torch::autograd::VariableType::expand(at::Tensor const&, c10::ArrayRef<long>, bool) const + 0x1eb (0x7f9184abc0fb in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #5: <unknown function> + 0x8c7282 (0x7f918230c282 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #6: at::TypeDefault::copy_(at::Tensor&, at::Tensor const&, bool) const + 0x92 (0x7f91822ced72 in /home/guanlinwu/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #7: cherrypi::BuildingPlacerSample::BuildingPlacerSample(cherrypi::State*, std::shared_ptr<cherrypi::UPCTuple>, cherrypi::BuildingPlacerSample::StaticData*) + 0xbfb (0x7f9185bff2cb in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #8: cherrypi::BuildingPlacerModule::upcWithPositionForBuilding(cherrypi::State*, cherrypi::UPCTuple const&, cherrypi::BuildType const*) + 0x1d5 (0x7f9185c512a5 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #9: cherrypi::BuildingPlacerModule::step(cherrypi::State*) + 0xdf1 (0x7f9185c531a1 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #10: cherrypi::BasePlayer::stepModule(std::shared_ptr<cherrypi::Module>) + 0xae (0x7f9185ac250e in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #11: cherrypi::BasePlayer::stepModules() + 0x45 (0x7f9185ac0425 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #12: cherrypi::BasePlayer::doStep() + 0x1d0 (0x7f9185ac23e0 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #13: cherrypi::BasePlayer::step() + 0x138 (0x7f9185ac2648 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #14: cherrypi::Player::run() + 0x18 (0x7f9185cee2f8 in /home/guanlinwu/TorchCraftAI/build/src/libcherpi.so)
frame #15: main + 0x3a3 (0x439663 in ./cherrypi)
frame #16: __libc_start_main + 0xf0 (0x7f9180ede830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #17: _start + 0x29 (0x43b369 in ./cherrypi)

build cherrypi on windows: cmake error

I built pytorch, gflags, glog and zmq successfully at commit 65f345b. In the "build cherrypi" step, the following command failed:

cmake .. -DMSVC=true -DZMQ_LIBRARY="../3rdparty/zmq.lib" -DZMQ_INCLUDE_DIR="../3rdparty/libzmq/include" -DGFLAGS_LIBRARY="../3rdparty/gflags_static.lib" -DGFLAGS_INCLUDE_DIR="../3rdparty/gflags/build/include" -DGLOG_ROOT_DIR="../3rdparty/glog" -DCMAKE_CXX_FLAGS_RELEASE="/MP /EHsc" -G "Visual Studio 15 2017 Win64"

-- ZSTD_LEGACY_SUPPORT not defined!
ZSTD VERSION 1.3.2
-- Could NOT find GFLAGS (missing: GFLAGS_LIBRARY)
CMake Warning at CMakeLists.txt:81 (FIND_PACKAGE):
  Found package configuration file:

    D:/Work/TorchCraftAI/3rdparty/glog/build/glog-config.cmake

  but it set glog_FOUND to FALSE so package "glog" is considered to be NOT
  FOUND.  Reason given by package:

  glog could not be found because dependency gflags could not be found.



-- CMake version: 3.14.0
-- Version: 5.0.0
-- Build type: Release
-- CPP14_FLAG:
-- Could NOT find GFLAGS (missing: GFLAGS_LIBRARY)
-- Could NOT find libdw (missing: LIBDW_LIBRARY LIBDW_INCLUDE_DIR)
-- Could NOT find libbfd (missing: LIBBFD_LIBRARY LIBBFD_INCLUDE_DIR LIBDL_LIBRARY LIBDL_INCLUDE_DIR)
-- BACKWARD_HAS_UNWIND=1
-- BACKWARD_HAS_BACKTRACE=0
-- BACKWARD_HAS_BACKTRACE_SYMBOL=1
-- BACKWARD_HAS_DW=0
-- BACKWARD_HAS_BFD=0
-- Could NOT find Backward (missing: BACKWARD_LIBRARIES)
-- Could NOT find GFLAGS (missing: GFLAGS_LIBRARY)
-- cotire 1.7.10 loaded.
-- Could NOT find libdw (missing: LIBDW_LIBRARY LIBDW_INCLUDE_DIR)
-- Could NOT find libbfd (missing: LIBBFD_LIBRARY LIBBFD_INCLUDE_DIR LIBDL_LIBRARY LIBDL_INCLUDE_DIR)
-- BACKWARD_HAS_UNWIND=1
-- BACKWARD_HAS_BACKTRACE=0
-- BACKWARD_HAS_BACKTRACE_SYMBOL=1
-- BACKWARD_HAS_DW=0
-- BACKWARD_HAS_BFD=0
-- Could NOT find Backward (missing: BACKWARD_LIBRARIES)
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
LIB_glog
    linked by target "common" in directory D:/Work/TorchCraftAI2/common
    linked by target "cherrypi" in directory D:/Work/TorchCraftAI2/src

-- Configuring incomplete, errors occurred!
See also "D:/Work/TorchCraftAI2/build/CMakeFiles/CMakeOutput.log".
See also "D:/Work/TorchCraftAI2/build/CMakeFiles/CMakeError.log".

CMakeOutput.log: https://paste.ubuntu.com/p/y7vVS8zpPd/
CMakeError.log: https://paste.ubuntu.com/p/9VTDykgKyp/

Problem when installing Anaconda

Small hiccough when installing conda on Ubuntu during this process.

After:

# Download Anaconda from https://www.anaconda.com/download/#linux
bash Anaconda-latest-Linux-x86_64.sh

An error occurs here:

export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" # [anaconda root directory] 

The error is something like "dirname: missing operand". As it turns out, conda needs to be on the PATH first: since $(which conda) resolves to nothing, dirname is then invoked without an argument and fails.

Following the steps here resolved this issue:
https://support.anaconda.com/hc/en-us/articles/360023863234-Conda-command-not-found-error

Open a text editor, go to your home directory and open the file named .bashrc or .bash_profile.

In this file, add the line:

export PATH="<path-to-anaconda>/bin:$PATH"

NOTE: Replace <path-to-anaconda> with the actual path of your Anaconda installation.

Save the file. If you have any Terminal windows open, close them all and then open a new one for the changes to take effect.

NOTE: If after closing and opening your Terminal window conda still does not work, you may need to restart your computer.
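The failure mode described above can also be guarded against directly in the shell. A minimal sketch (conda_root is a hypothetical helper name, not part of the tutorial):

```shell
#!/bin/sh
# If conda is not on PATH, $(which conda) expands to nothing and dirname is
# invoked with no operand -- the "dirname: missing operand" error above.
# conda_root fails with a clear message instead.
conda_root() {
    conda_bin=$(command -v conda) || {
        echo "conda not found on PATH; add your Anaconda bin directory to PATH" >&2
        return 1
    }
    # .../anaconda3/bin/conda -> .../anaconda3
    dirname "$(dirname "$conda_bin")"
}

# Guarded version of the tutorial's export:
if root=$(conda_root); then
    export CMAKE_PREFIX_PATH="$root"
fi
```

With this guard, a missing conda produces an actionable message instead of dirname's confusing error.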

Anyway, this is great documentation and it's much appreciated.

[Linux] PyTorch build ERROR on release-1.1

On Linux, I got the following error when building PyTorch according to the installation instructions:

error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^

at the step

pushd 3rdparty/pytorch/tools/
REL_WITH_DEB_INFO=1 python build_libtorch.py
popd
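Two possible workarounds, sketched under the assumption of a GCC-compatible toolchain (neither is an official fix): stop -Werror from promoting this deprecation warning to an error, or build without MKL-DNN:

```shell
pushd 3rdparty/pytorch/tools/
# Option 1: keep MKL-DNN, but do not treat the deprecation warning as an error.
CFLAGS="-Wno-error=deprecated-declarations" \
CXXFLAGS="-Wno-error=deprecated-declarations" \
REL_WITH_DEB_INFO=1 python build_libtorch.py
popd

# Option 2: disable MKL-DNN entirely via PyTorch's USE_MKLDNN build switch.
# pushd 3rdparty/pytorch/tools/
# USE_MKLDNN=0 REL_WITH_DEB_INFO=1 python build_libtorch.py
# popd
```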

Full error log:

[ 22%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/type_resolver_util.cc.o
[ 22%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wire_format.cc.o
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In member function ‘void mkldnn::impl::cpu::_ref_rnn_common_t::pack_weights(int, int, int, int, int, int, int, float**, int, int*, const float*, float*, bool)’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:36: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:36: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In member function ‘void mkldnn::impl::cpu::_ref_rnn_common_t::free_packed_weights(int, int, int, float**)’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:17: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:17: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::pack_weights(int, int, int, int, int, int, int, float**, int, int*, const float*, float*, bool) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)64u]’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1183:17: required from here
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t::free_packed_weights(int, int, int, float**) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)64u]’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1183:17: required from here
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
[ 22%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wrappers.pb.cc.o
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::pack_weights(int, int, int, int, int, int, int, float**, int, int*, const float*, float*, bool) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)128u]’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1184:17: required from here
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:791:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::free_packed_weights(int, int, int, float**) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)128u]’:
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1184:17: required from here
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
/home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
^
In file included from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/chenyuyang01/git/TorchCraftAI/3rdparty/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/chenyuyang01/anaconda3/envs/sc2/include/mkl_cblas.h:804:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^
Scanning dependencies of target c10_StreamGuard_test
[ 22%] Building CXX object c10/test/CMakeFiles/c10_StreamGuard_test.dir/core/StreamGuard_test.cpp.o
Scanning dependencies of target c10_DeviceGuard_test
[ 22%] Building CXX object c10/test/CMakeFiles/c10_DeviceGuard_test.dir/core/DeviceGuard_test.cpp.o
Scanning dependencies of target c10_TensorTypeId_test
[ 23%] Building CXX object c10/test/CMakeFiles/c10_TensorTypeId_test.dir/core/TensorTypeId_test.cpp.o
Scanning dependencies of target c10_registry_test
[ 23%] Linking CXX executable ../../bin/c10_InlineDeviceGuard_test
[ 23%] Building CXX object c10/test/CMakeFiles/c10_registry_test.dir/util/registry_test.cpp.o
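The build fails because MKL's deprecation warnings are promoted to hard errors by -Werror=deprecated-declarations. One possible workaround (a sketch, not an official fix; it assumes the PyTorch/mkl-dnn configure step honors CFLAGS/CXXFLAGS from the environment and that the warning itself is harmless) is to demote just this diagnostic and reconfigure from a clean build directory:

```shell
# Assumption: CMake re-reads these environment flags when the build is reconfigured.
export CFLAGS="$CFLAGS -Wno-error=deprecated-declarations"
export CXXFLAGS="$CXXFLAGS -Wno-error=deprecated-declarations"
REL_WITH_DEB_INFO=1 python build_libtorch.py
```

Alternatively, building against an MKL release whose headers do not mark cblas_sgemm_alloc/cblas_sgemm_free as deprecated sidesteps the warning entirely.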

Using Linux or Windows for this project?

Which platform is best for this project? I'm using Linux, and I found I need to install Wine to run StarCraft, which is quite complex for me. Could you please tell me whether Windows is more suitable for this project than Linux?

3rdparty/Ale is not an existing directory

Following the guide at https://torchcraft.github.io/TorchCraftAI/docs/install-macos.html, doing the cmake of TorchCraftAI and CherryPi:

cmake .. -DCMAKE_BUILD_TYPE=relwithdebinfo

You get the following error:

CMake Error at CMakeLists.txt:72 (ADD_SUBDIRECTORY):
ADD_SUBDIRECTORY given source
"[path-to-TorchCraft]/TorchCraftAI/3rdparty/ale" which is not an existing
directory.

CMake Error at CMakeLists.txt:73 (SET_TARGET_PROPERTIES):
SET_TARGET_PROPERTIES Can not find target to add properties to: ale-lib

CMake Error at CMakeLists.txt:74 (TARGET_INCLUDE_DIRECTORIES):
Cannot specify include directories for target "ale-lib" which is not built
by this project.

Is it a bug that the ale directory is not present? Or is it a bug that lines 72, 73 and 74 of TorchCraftAI/CMakeLists.txt are there?

Commenting out lines 72, 73 and 74 in TorchCraftAI/CMakeLists.txt resolves the issue.
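Before commenting lines out, it may be worth checking that all third-party sources were actually fetched; a missing 3rdparty directory is often just an uninitialized git submodule (an assumption about how your checkout was made):

```shell
# Fetch/refresh all nested dependencies; harmless if they are already present.
cd TorchCraftAI
git submodule sync
git submodule update --init --recursive
```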

Link error while building CherryPi on both Windows and Linux

Hi there,

I tried to build CherryPi on Windows and Linux and encountered the same "multiple definition" link error for several functions. It appears to be caused by duplicated cpp/h files in the src/ directory and the src/gameutils/ directory, such as botscenario.cpp.

The build succeeds once I remove the redundant files from src/.

Could you please take a look and confirm whether this duplication is intended or a careless mistake?

Thank you in advance.
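To see exactly which files collide, a quick sketch (run from the repository root on Linux; it assumes the duplicates are same-named .cpp files in the two directories mentioned above):

```shell
# Print basenames that exist in both src/ and src/gameutils/, e.g. botscenario.cpp.
comm -12 \
  <(find src -maxdepth 1 -name '*.cpp' -printf '%f\n' | sort) \
  <(find src/gameutils -maxdepth 1 -name '*.cpp' -printf '%f\n' | sort)
```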

[Linux]Cherrypi build error on bwapilib

I got the following error when running 'make':

mkdir -p build
cd build
cmake .. -DCMAKE_BUILD_TYPE=relwithdebinfo -DWITH_CPIDLIB=OFF
make -j$(nproc)

whole log as follows:

[ 26%] Building CXX object 3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnvClient.dir/src/main.cc.o
[ 26%] Building CXX object 3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnv.dir/src/dll.cc.o
[ 26%] Building CXX object 3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnv.dir/src/module.cc.o
[ 26%] Linking CXX executable BWEnvClient
[ 26%] Linking CXX shared library libbwapilib.so
[ 26%] Built target bwapilib
Scanning dependencies of target bwem
/usr/include/c++/7/bits/basic_string.h:133: error: undefined reference to 'BWAPI::Type<BWAPI::Error, 27>::typeNames[abi:cxx11]'
/home/chenyuyang01/git/TorchCraftAI/3rdparty/torchcraft/BWEnv/src/controller.cc:195: error: undefined reference to 'BWAPI::Type<BWAPI::Race, 8>::typeNames[abi:cxx11]'
/home/chenyuyang01/git/TorchCraftAI/3rdparty/torchcraft/BWEnv/src/controller.cc:205: error: undefined reference to 'BWAPI::Type<BWAPI::Race, 8>::typeNames[abi:cxx11]'
/home/chenyuyang01/git/TorchCraftAI/3rdparty/torchcraft/BWEnv/src/controller.cc:422: error: undefined reference to 'BWAPI::Type<BWAPI::Error, 27>::typeNames[abi:cxx11]'
/home/chenyuyang01/local/usr/local/include/BWAPI/Type.h:97: error: undefined reference to 'BWAPI::Type<BWAPI::Error, 27>::typeNames[abi:cxx11]'
collect2: error: ld returned 1 exit status
3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnvClient.dir/build.make:104: recipe for target '3rdparty/torchcraft/BWEnv/BWEnvClient' failed
make[2]: *** [3rdparty/torchcraft/BWEnv/BWEnvClient] Error 1
CMakeFiles/Makefile2:694: recipe for target '3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnvClient.dir/all' failed
make[1]: *** [3rdparty/torchcraft/BWEnv/CMakeFiles/BWEnvClient.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 26%] Building CXX object 3rdparty/bwem/CMakeFiles/bwem.dir/area.cpp.o
[ 26%] Building CXX object 3rdparty/bwem/CMakeFiles/bwem.dir/cp.cpp.o
[ 26%] Building CXX object 3rdparty/bwem/CMakeFiles/bwem.dir/base.cpp.o
[ 26%] Building CXX object 3rdparty/bwem/CMakeFiles/bwem.dir/map.cpp.o
[ 28%] Building CXX object 3rdparty/bwem/CMakeFiles/bwem.dir/gridMap.cpp.o
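Note that the undefined references above resolve BWAPI headers from /home/chenyuyang01/local/usr/local/include rather than from the bundled 3rdparty tree, so a previously installed BWAPI may be mismatched against the freshly built libbwapilib.so. A hedged way to check (paths are guesses based on the log):

```shell
# Is a stale, separately installed copy of the BWAPI headers being picked up?
ls ~/local/usr/local/include/BWAPI | head
# Does the just-built library actually export the missing typeNames symbols?
nm -DC build/3rdparty/torchcraft/BWEnv/libbwapilib.so | grep typeNames | head
```

If an older install is found, removing or renaming it and rebuilding from a clean build directory may resolve the mismatch.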

PyTorch Failed to Build

Following the guide at https://torchcraft.github.io/TorchCraftAI/docs/install-windows.html, I am getting an error when running the command "python setup.py build". I am running the command from the pytorch folder. Both CUDA 10 and CUDA 9 with patches are installed (I've been told that CUDA 10 is not compatible, but originally the link to CUDA brought me to version 10).

This is the last output of the command:
-- Configuring incomplete, errors occurred!
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeError.log".

(base) E:\Program Files (x86)\BWAPI\CherryPi\TorchCraftAI\3rdparty\pytorch\build>IF ERRORLEVEL 1 exit 1
Failed to run 'tools\build_pytorch_libs.bat --use-cuda --use-fbgemm --use-nnpack --use-mkldnn --use-qnnpack caffe2'

Attached are the two log files mentioned in the error output.

CMakeError.log
CMakeOutput.log

Error when building CherryPi

I got this error when I ran this command:
msbuild CherryPi.sln /property:Configuration=Release /m

“E:\git\github\TorchCraftAI\build\CherryPi.sln” (default target) (1) ->
“E:\git\github\TorchCraftAI\build\common\common.vcxproj.metaproj” (default target) (6) ->
“E:\git\github\TorchCraftAI\build\common\common.vcxproj” (default target) (20) ->
(ClCompile target) ->
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\autograd\operations.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\autograd\distributions.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\autograd\utils.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\autograd\debug.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\autograd\models.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]
E:\git\github\TorchCraftAI\3rdparty\include\autogradpp/autograd.h(106): error C3668: 'ag::Sequential::clone': a method with the override specifier 'override' did not override any base class methods (compiling source file E:\git\github\TorchCraftAI\common\rand.cpp) [E:\git\github\TorchCraftAI\build\common\common.vcxproj]

Build CherryPi on Windows: cmake command fails

I ran the full command from the installation guide:

cmake .. -DMSVC=true -DZMQ_LIBRARY="../3rdparty/zmq.lib" -DZMQ_INCLUDE_DIR="../3rdparty/libzmq/include" -DGFLAGS_LIBRARY="../3rdparty/gflags_static.lib" -DGFLAGS_INCLUDE_DIR="../3rdparty/gflags/build/include" -DGLOG_ROOT_DIR="../3rdparty/glog" -DCMAKE_CXX_FLAGS_RELEASE="/MP /EHsc" -G "Visual Studio 15 2017 Win64"

and the error happened:

CMake Error in common/CMakeLists.txt:
  Imported target "Torch" includes non-existent path
    "D:/StarCraftAI/TorchCraft/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install/include/THC"
  in its INTERFACE_INCLUDE_DIRECTORIES.


So the following command "msbuild CherryPi.sln /property:Configuration=Release /m" also failed.

How can I run CherryPi.exe correctly?

I have followed the Installation Guide and compiled on Win10 successfully. After copying the necessary .dll files and CherryPi.exe to a new folder $WORKDIR, I ran CherryPi.exe as administrator; a blank CMD window pops up and disappears after a few seconds. It feels like nothing happened. If I continue with the rest of the steps, StarCraft freezes when the game starts.
Obviously the CherryPi.exe I built is not working, but I'm not sure what the expected behavior after running CherryPi.exe is.
Is there any way to check whether the compiled files are correct?

Environment:
Windows10
GFX1060
Visual Studio 2017
CUDA10.1
StarCraft 1.16.1
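As a first sanity check (a sketch; it assumes the Windows binary accepts the same gflags-style flags as the Linux build), run the executable from an already-open command prompt inside the work directory, so its output does not disappear with the window:

```shell
# A working binary should print its help text / flag list instead of exiting silently.
./CherryPi.exe -help
```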

Build CherryPi on Windows: python setup.py build command fails

-- Caffe2: Header version is: 9.0
-- Found CUDNN: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/include

-- Found cuDNN: v7.4.2 (include: C:/Program Files/NVIDIA GPU Computing Toolkit/
CUDA/v9.0/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v
9.0/lib/x64/cudnn.lib)
CMake Error at cmake/public/cuda.cmake:278 (message):
CUDA support not available with 32-bit windows. Did you forget to set
Win64 in the generator target?
Call Stack (most recent call first):
cmake/Dependencies.cmake:626 (include)
CMakeLists.txt:200 (include)

-- Configuring incomplete, errors occurred!
See also "E:/Game/BWAPI/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeOutp
ut.log".
See also "E:/Game/BWAPI/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeErro
r.log".

E:\Game\BWAPI\TorchCraftAI\3rdparty\pytorch\build>IF ERRORLEVEL 1 exit 1
Failed to run 'tools\build_pytorch_libs.bat --use-cuda --use-fbgemm --use-nnpack
--use-mkldnn --use-qnnpack caffe2'
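The CMake message spells out the cause: CUDA requires a 64-bit generator. When the build is driven by python setup.py build, one way to force it (hedged: whether this environment variable is honored can vary between PyTorch revisions) is:

```shell
# Force the 64-bit Visual Studio toolchain before invoking the PyTorch build,
# then reconfigure from a clean build directory so the old generator isn't cached.
set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
python setup.py build
```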

Error when running micro_tutorial

I have built TorchCraftAI and CherryPi on Ubuntu 18.04.

But I got this error when I ran this command:
./build/tutorials/micro/micro_tutorial -vmodule state=-1,micro_tutorial=1 -scenario vu_zl

I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:246] Scenario: vu_zl
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:247] Model: PF
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:248] Resume: false
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:249] Evaluate: false
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:256] resultsJSON: ./metrics-rank-0.json
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:257] resultsCheckpoint: ./trainer_latest.bin
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:284] Model has 221568 total parameters
I 6788/XXXXX 04/24 13:02:39 [micro_tutorial.cpp:290] Begin training!

F0424 13:02:39.818872 6796 forkserver.cpp:575] You must call ForkServer::startForkServer! Call it as early as possible (in main, after parsing command line flags, but before initializing gloo/mpi/anything else)!

*** Check failure stack trace: ***
F0424 13:02:39.818872 6797 forkserver.cpp:575] You must call ForkServer::startForkServer! Call it as early as possible (in main, after parsing command line flags, but before initializing gloo/mpi/anything else)!
F0424 13:02:39.819079 6798 forkserver.cpp:575] You must call ForkServer::startForkServer! Call it as early as possible (in main, after parsing command line flags, but before initializing gloo/mpi/anything else)!
F0424 13:02:39.819214 6799 forkserver.cpp:575] You must call ForkServer::startForkServer! Call it as early as possible (in main, after parsing command line flags, but before initializing gloo/mpi/anything else)!
F0424 13:02:39.819252 6800 forkserver.cpp:575] You must call ForkServer::startForkServer! Call it as early as possible (in main, after parsing command line flags, but before initializing gloo/mpi/anything else)!
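The fatal message itself names the fix: ForkServer::startForkServer() has to run early in main(), before the trainer spawns any games. If you modified the tutorial, a quick check that the call is still present in the code you built (paths are illustrative):

```shell
# Verify the startForkServer call still exists in the tutorial / support code.
grep -rn "startForkServer" tutorials/ src/ common/
```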

Error while building Pytorch

Hi, I'm getting the following error while building PyTorch:

[ 93%] Linking CXX executable ../bin/cuda_half_test
[ 93%] Built target cuda_half_test
[ 93%] Building CXX object caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/TCPStoreTest.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_1.cpp.o
[ 93%] Linking CXX executable ../bin/cuda_optional_test
[ 93%] Built target cuda_optional_test
[ 93%] Building CXX object caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/FileStoreTest.cpp.o
[ 93%] Linking CXX executable ../../../../../bin/FileStoreTest
/usr/bin/ld: can't find -l__caffe2_nccl
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/build.make:106: bin/FileStoreTest] Error 1
make[1]: *** [CMakeFiles/Makefile2:9552: caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_2.cpp.o
[ 93%] Linking CXX executable ../bin/cuda_packedtensoraccessor_test
[ 93%] Linking CXX executable ../../../../../bin/TCPStoreTest
/usr/bin/ld: can't find -l__caffe2_nccl
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/build.make:106: bin/TCPStoreTest] Error 1
make[1]: *** [CMakeFiles/Makefile2:9416: caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/all] Error 2
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_3.cpp.o
[ 93%] Built target cuda_packedtensoraccessor_test
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_4.cpp.o
[ 93%] Linking CXX shared library ../../../../../lib/libc10d_cuda_test.so
/usr/bin/ld: can't find  -l__caffe2_nccl
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/c10d_cuda_test.dir/build.make:429: lib/libc10d_cuda_test.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:9461: caffe2/torch/lib/c10d/test/CMakeFiles/c10d_cuda_test.dir/all] Error 2
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/grad_mode.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/input_buffer.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/profiler.cpp.o

Extract from the startup log:

(base) jabbo@jabbo-pc:~/TorchCraftAI/3rdparty/pytorch/tools$ REL_WITH_DEB_INFO=1 python build_libtorch.py
-- std::exception_ptr is supported.
-- NUMA is available
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/jabbo/TorchCraftAI/3rdparty/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- The BLAS backend of choice:MKL
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
--   Library mkl_intel_lp64: /home/jabbo/anaconda3/lib/libmkl_intel_lp64.so
--   Library mkl_gnu_thread: /home/jabbo/anaconda3/lib/libmkl_gnu_thread.so
--   Library mkl_core: /home/jabbo/anaconda3/lib/libmkl_core.so
-- Found OpenMP_C: -fopenmp  
-- Found OpenMP_CXX: -fopenmp  
--   Library gomp: -fopenmp
--   Library pthread: /usr/lib/x86_64-linux-gnu/libpthread.so
--   Library m: /usr/lib/x86_64-linux-gnu/libm.so
--   Library dl: /usr/lib/x86_64-linux-gnu/libdl.so
-- Brace yourself, we are building NNPACK
-- Failed to find LLVM FileCheck
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Found OpenMP_C: -fopenmp  
-- Found OpenMP_CXX: -fopenmp  
CMake Warning (dev) at tools/third_party/fbgemm/third_party/asmjit/CMakeLists.txt:34 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at tools/third_party/fbgemm/third_party/asmjit/CMakeLists.txt:35 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at tools/third_party/fbgemm/third_party/asmjit/CMakeLists.txt:36 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at tools/third_party/fbgemm/third_party/asmjit/CMakeLists.txt:37 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at tools/third_party/fbgemm/third_party/asmjit/CMakeLists.txt:38 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- [asmjit]
   BuildMode=Static
   BuildTest=Off
   ASMJIT_DIR=/home/jabbo/TorchCraftAI/3rdparty/pytorch/tools/third_party/fbgemm/third_party/asmjit
   ASMJIT_DEPS=pthread;rt
   ASMJIT_LIBS=asmjit;pthread;rt
   ASMJIT_CFLAGS=-DASMJIT_STATIC
   ASMJIT_SOURCE_DIR=/home/jabbo/TorchCraftAI/3rdparty/pytorch/tools/third_party/fbgemm/third_party/asmjit/src
   ASMJIT_INCLUDE_DIR=/home/jabbo/TorchCraftAI/3rdparty/pytorch/tools/third_party/fbgemm/third_party/asmjit/src
   ASMJIT_PRIVATE_CFLAGS=
     -DASMJIT_STATIC
     -std=c++17
     -fno-tree-vectorize
     -fvisibility=hidden
     -O2 [RELEASE]
     -fno-keep-static-consts [RELEASE]
     -fmerge-all-constants [RELEASE]
-- Found Numa  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so)
-- Using third party subdirectory Eigen.
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
-- Using third_party/pybind11.
-- Caffe2: CUDA detected: 9.2
-- Caffe2: CUDA nvcc is: /usr/lib/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/lib/cuda
-- Caffe2: Header version is: 9.2
-- Found cuDNN: v7.4.1  (include: /usr/lib/cuda/include, library: /usr/lib/cuda/lib64/libcudnn.so.7)
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;7.0;7.0+PTX
-- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
-- Found CUDA: /usr/lib/cuda (found suitable version "9.2", minimum required is "7.0") 
-- CUDA detected: 9.2
-- 
-- ******** Summary ********
--   CMake version         : 3.14.0
--   CMake command         : /home/jabbo/anaconda3/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/c++
--   C++ compiler version  : 7.4.0
--   CXX flags             :  -fvisibility-inlines-hidden -Wnon-virtual-dtor
--   Build type            : RelWithDebInfo
--   Compile definitions   : TH_BLAS_MKL
--   CMAKE_PREFIX_PATH     : /home/jabbo/anaconda3/bin/../
--   CMAKE_INSTALL_PREFIX  : /home/jabbo/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install
--   CMAKE_MODULE_PATH     : /home/jabbo/TorchCraftAI/3rdparty/pytorch/cmake/Modules;/home/jabbo/TorchCraftAI/3rdparty/pytorch/cmake/public/../Modules_CUDA_fix
-- 
--   ONNX version          : 1.3.0
--   ONNX NAMESPACE        : onnx_torch
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Removing -DNDEBUG from compile flags
-- Compiling with OpenMP support
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: /home/jabbo/anaconda3/include
-- MAGMA LIBRARIES: /home/jabbo/anaconda3/lib/libmagma.a
-- MAGMA V2 check: 1
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- AVX compiler support found
-- AVX2 compiler support found
-- Atomics: using C11 intrinsics
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API. (mkl)
-- Found CUDA: /usr/lib/cuda (found suitable version "9.2", minimum required is "5.5") 
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- OpenMP lib: provided by compiler
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE) 
-- VTune profiling environment is unset
-- Found MKL-DNN: TRUE
-- GCC 7.4.0: Adding gcc and gcc_s libs to link line
-- NUMA paths:
-- /usr/include
-- /usr/lib/x86_64-linux-gnu/libnuma.so
-- Using python found in /home/jabbo/anaconda3/bin/python
-- Configuring build for SLEEF-v3.2
   Target system: Linux-5.0.0-21-generic
   Target processor: x86_64
   Host system: Linux-5.0.0-21-generic
   Host processor: x86_64
   Detected C compiler: GNU @ /usr/bin/cc
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : /home/jabbo/anaconda3/lib/libmpfr.so
-- MPFR header file in /home/jabbo/anaconda3/include
-- GMP : /home/jabbo/anaconda3/lib/libgmp.so
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP : 1
-- Using python found in /home/jabbo/anaconda3/bin/python
-- /usr/bin/c++ /home/jabbo/TorchCraftAI/3rdparty/pytorch/torch/abi-check.cpp -o /home/jabbo/TorchCraftAI/3rdparty/pytorch/tools/abi-check
-- Determined _GLIBCXX_USE_CXX11_ABI=1
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) 
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) 
-- Found CUDA: /usr/lib/cuda (found suitable version "9.2", minimum required is "7.5") 
-- Building the gloo backend with TCP support only
-- Found CUDA: /usr/lib/cuda (found version "9.2") 
-- Building C10D with CUDA support
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) 
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) 
-- Not able to find MPI, will compile c10d without MPI support
-- NCCL operators skipped due to no CUDA support
-- Including IDEEP operators
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
CMake Warning at CMakeLists.txt:391 (message):
  Generated cmake files are only fully tested if one builds with system glog,
  gflags, and protobuf.  Other settings may generate files that are not well
  tested.


-- 
-- ******** Summary ********
-- General:
--   CMake version         : 3.14.0
--   CMake command         : /home/jabbo/anaconda3/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/c++
--   C++ compiler version  : 7.4.0
--   BLAS                  : MKL
--   CXX flags             :  -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-stringop-overflow
--   Build type            : RelWithDebInfo
--   Compile definitions   : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;MAGMA_V2;USE_C11_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
--   CMAKE_PREFIX_PATH     : /home/jabbo/anaconda3/bin/../
--   CMAKE_INSTALL_PREFIX  : /home/jabbo/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install
-- 
--   TORCH_VERSION         : 1.0.0
--   CAFFE2_VERSION        : 1.0.0
--   BUILD_ATEN_MOBILE     : OFF
--   BUILD_ATEN_ONLY       : OFF
--   BUILD_BINARY          : 
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_DOCS            : OFF
--   BUILD_PYTHON          : 
--   BUILD_CAFFE2_OPS      : ON
--   BUILD_SHARED_LIBS     : ON
--   BUILD_TEST            : ON
--   USE_ASAN              : OFF
--   USE_CUDA              : 1
--     CUDA static link    : 0
--     USE_CUDNN           : 0
--     CUDA version        : 9.2
--     CUDA root directory : /usr/lib/cuda
--     CUDA library        : /usr/lib/x86_64-linux-gnu/libcuda.so
--     cudart library      : /usr/lib/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
--     cublas library      : /usr/lib/cuda/lib64/libcublas.so
--     cufft library       : /usr/lib/cuda/lib64/libcufft.so
--     curand library      : /usr/lib/cuda/lib64/libcurand.so
--     nvrtc               : /usr/lib/cuda/lib64/libnvrtc.so
--     CUDA include path   : /usr/lib/cuda/include
--     NVCC executable     : /usr/lib/cuda/bin/nvcc
--     CUDA host compiler  : /usr/bin/cc
--     USE_TENSORRT        : OFF
--   USE_ROCM              : 0
--   USE_EIGEN_FOR_BLAS    : 
--   USE_FBGEMM            : ON
--   USE_FFMPEG            : OFF
--   USE_GFLAGS            : OFF
--   USE_GLOG              : OFF
--   USE_LEVELDB           : OFF
--   USE_LITE_PROTO        : OFF
--   USE_LMDB              : OFF
--   USE_METAL             : OFF
--   USE_MKL               : ON
--   USE_MKLDNN            : ON
--   USE_NCCL              : OFF
--   USE_NNPACK            : 1
--   USE_NUMPY             : 
--   USE_OBSERVERS         : OFF
--   USE_OPENCL            : OFF
--   USE_OPENCV            : OFF
--   USE_OPENMP            : OFF
--   USE_PROF              : OFF
--   USE_QNNPACK           : 1
--   USE_REDIS             : OFF
--   USE_ROCKSDB           : OFF
--   USE_ZMQ               : OFF
--   USE_DISTRIBUTED       : 1
--     USE_MPI             : OFF
--     USE_GLOO            : ON
--     USE_GLOO_IBVERBS    : OFF
--   Public Dependencies  : Threads::Threads;caffe2::mkl;caffe2::mkldnn
--   Private Dependencies : qnnpack;nnpack;cpuinfo;fbgemm;/usr/lib/x86_64-linux-gnu/libnuma.so;fp16;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
CMake Warning (dev) at cmake/Dependencies.cmake:822 (add_dependencies):
  Policy CMP0046 is not set: Error on non-existent dependency in
  add_dependencies.  Run "cmake --help-policy CMP0046" for policy details.
  Use the cmake_policy command to set the policy and suppress this warning.

  The dependency target "nccl_external" of target "gloo_cuda" does not exist.
Call Stack (most recent call first):
  CMakeLists.txt:199 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done
-- Build files have been written to: /home/jabbo/TorchCraftAI/3rdparty/pytorch/tools

I have also installed the NCCL packages for CUDA 9.2 (libnccl-dev and libnccl2), as well as the cuDNN one.
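One thing that stands out in the configure summary above is USE_NCCL : OFF while the failing test binaries still try to link -l__caffe2_nccl, which suggests stale CMake state from an earlier configure. A hedged first step (it assumes the build directories can be discarded, and that this PyTorch revision honors the USE_NCCL environment variable):

```shell
cd ~/TorchCraftAI/3rdparty/pytorch
# Drop cached CMake/build state so the NCCL settings are re-detected from scratch.
rm -rf build torch/lib/tmp_install
USE_NCCL=0 REL_WITH_DEB_INFO=1 python tools/build_libtorch.py
```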

How to run Tutorial cases

Hi there,

I have successfully built and run CherryPi on Windows.

Now I am trying to follow the tutorials on building placement and micro management.
After going through the tutorial blog, I found there are no executable files in the ./build folder matching
https://torchcraft.github.io/TorchCraftAI/docs/bptut-rl.html
e.g. ./build/tutorials/building-placer/bp-train-rl

Does the build procedure from the 'Installation (Windows)' part of TorchCraftAI (https://torchcraft.github.io/TorchCraftAI/docs/install-windows.html) need any modification to produce them?

Thanks in advance : )
