artelnics / opennn
OpenNN - Open Neural Networks Library
Home Page: http://www.opennn.net
License: GNU Lesser General Public License v3.0
Test: gradient_descent
Segmentation fault (core dumped)
Hi,
Used CMake 3.6, GNU Make 3.81. Compiles successfully.
Trying to run the tests from ./test/data, as the suite otherwise complains about missing data files.
Output:
...
suite
Test: suite
100
7.00916 0
0 7.00916
6.9987 0
0 6.9987
[1] 8222 segmentation fault ../opennntests
Any experience with OpenNN on Darwin?
Thanks for OpenNN!
Hello,
I am trying to build an example with VS 2013. I get the following link errors for the logical operators application:
Error 1 error LNK2019: unresolved external symbol "public: __thiscall OpenNN::StochasticGradientDescent::StochasticGradientDescent(class OpenNN::LossIndex *)" (??0StochasticGradientDescent@OpenNN@@QAE@PAVLossIndex@1@@Z) referenced in function "public: void __thiscall OpenNN::TrainingStrategy::set_training_method(enum OpenNN::TrainingStrategy::TrainingMethod const &)" (?set_training_method@TrainingStrategy@OpenNN@@QAEXABW4TrainingMethod@12@@Z) C:\Users\alex\Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\opennn.lib(training_strategy.obj) ConsoleApplication1
Error 2 error LNK2019: unresolved external symbol "public: void __thiscall OpenNN::StochasticGradientDescent::perform_training_void(void)" (?perform_training_void@StochasticGradientDescent@OpenNN@@QAEXXZ) referenced in function "public: void __thiscall OpenNN::TrainingStrategy::perform_training_void(void)const " (?perform_training_void@TrainingStrategy@OpenNN@@QBEXXZ) C:\Users\alex\Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\opennn.lib(training_strategy.obj) ConsoleApplication1
Hi,
I compiled successfully the library and the example apps under Linux Debian (g++ 6.3).
But the logical_operations application displays the wrong results:
Epoch 78: Minimum loss decrease (0) reached.
Loss decrease: 0
Parameters norm: 8.73467
Training loss: 2.43129e-31
Gradient norm: 3.83038e-16
Training rate: 9.0008e-18
Elapsed time: 00:00
X Y AND OR NAND NOR XOR XNOR
1 1 0 0 1 1 0 1
1 0 0 0 1 1 0 1
0 1 0 0 1 1 0 1
0 0 0 0 1 1 0 1
I have tried to build OpenNN using the instructions for Visual Studio at http://www.opennn.net/documentation/building_opennn.html#VisualStudio.
I did all the necessary steps; however, when I try to build any of the examples I get the following error: LNK1104 cannot open file '....\opennn\libopennn.a' simple_function_regression C:\OpenNN\OpenNN\build\examples\simple_function_regression\LINK 1
I've had a look for that file, libopennn.a, and it is nowhere to be seen in my OpenNN repository at all.
I am using Visual Studio 2019; any help on this matter would be great, thanks.
Error C2065 'MaximumIterations': undeclared identifier.
Error C2065 'maximum_time': undeclared identifier.
Error C2065 'performance_functional_pointer': undeclared identifier.
Please look into these errors.
I think in the fixed learning rate algorithm, it would be better to have a momentum option to accelerate training.
I want to build a robot with the mentioned setup, and my question is: will the library work on it, or does it have x86- or Windows-specific parts?
Running the simple_pattern_recognition example I get
ProbabilisticLayer* get_probabilistic_layer_pointer(void) const method.
Probabilistic layer pointer is NULL.
I'm assuming it's a bug in the example, as I obviously didn't write it, but I'm a complete noob on the subject, so I also don't know how to go about fixing it.
If my data keeps growing, I want to continue retraining from the previously trained result, using the saved XML.
msvobod :: ~/tmp/OpenNN ‹master*› » gcc --version 2 ↵
gcc (GCC) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
msvobod :: ~/tmp/OpenNN ‹master_› » qmake 2 ↵
msvobod :: ~/tmp/OpenNN ‹master_› » make
cd tinyxml2/ && ( test -e Makefile || /usr/lib/qt/bin/qmake /home/milan/tmp/OpenNN/tinyxml2/tinyxml2.pro -o Makefile ) && make -f Makefile
make[1]: Entering directory '/home/milan/tmp/OpenNN/tinyxml2'
make[1]: Nothing to be done for 'first'.
make[1]: Leaving directory '/home/milan/tmp/OpenNN/tinyxml2'
cd opennn/ && ( test -e Makefile || /usr/lib/qt/bin/qmake /home/milan/tmp/OpenNN/opennn/opennn.pro -o Makefile ) && make -f Makefile
make[1]: Entering directory '/home/milan/tmp/OpenNN/opennn'
g++ -c -pipe -fopenmp -O2 -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -fPIC -Wall -W -D_REENTRANT -DQT_NO_DEBUG -I. -I../tinyxml2 -Ieigen -I/usr/lib/qt/mkspecs/linux-g++ -o variables.o variables.cpp
In file included from variables.h:32:0,
from variables.cpp:16:
vector.h: In member function ‘OpenNN::Vector OpenNN::Vector::sort_less_indices() const’:
vector.h:3611:15: error: ‘begin’ is not a member of ‘std’
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] < (_this)[i2];});
^
vector.h:3611:36: error: ‘end’ is not a member of ‘std’
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] < (_this)[i2];});
^
vector.h:3611:118: warning: lambda expressions only available with -std=c++11 or -std=gnu++11
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] < (_this)[i2];});
^
vector.h: In member function ‘OpenNN::Vector OpenNN::Vector::sort_greater_indices() const’:
vector.h:3633:15: error: ‘begin’ is not a member of ‘std’
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] > (_this)[i2];});
^
vector.h:3633:36: error: ‘end’ is not a member of ‘std’
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] > (_this)[i2];});
^
vector.h:3633:118: warning: lambda expressions only available with -std=c++11 or -std=gnu++11
std::sort(std::begin(indices), std::end(indices), [this](size_t i1, size_t i2) {return (_this)[i1] > (_this)[i2];});
^
Makefile:999: recipe for target 'variables.o' failed
make[1]: *** [variables.o] Error 1
make[1]: Leaving directory '/home/milan/tmp/OpenNN/opennn'
Makefile:89: recipe for target 'sub-opennn-make_first-ordered' failed
make: *** [sub-opennn-make_first-ordered] Error 2
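The 'std::begin' / 'std::end' errors and the lambda warning all point to the compiler being invoked without C++11 enabled (no -std flag appears on the g++ line above). A possible fix, assuming the qmake build is used and the .pro file is the right place for it, is to add the flag in opennn/opennn.pro and re-run qmake:

```
QMAKE_CXXFLAGS += -std=c++11
```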
Two issues in the section if(__OPENNN_OMP__).
First, a typo: set(CMAEK_EXE_LINKER_FLAGS
should be set(CMAKE_EXE_LINKER_FLAGS.
Second, MSVC compilers don't recognize the -fopenmp flag; it needs to be /openmp.
I generated using this command: cmake -D__OPENNN_OMP__=1 -G"Visual Studio 15 2017 Win64" ..
I fixed it by changing the section to:
if(__OPENNN_OMP__)
message("Using OpenMP")
if (MSVC)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /openmp")#${OpenMP_C_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /openmp")#${OpenMP_CXX_FLAGS}")
else()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fopenmp")#${OpenMP_C_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fopenmp")#${OpenMP_CXX_FLAGS}")
endif()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${OpenMP_EXE_LINKER_FLAGS}")
endif()
This cleaned up the compiler warnings.
How to fix compilation on macOS Sierra with Clang and OpenMP?
First install LLVM 4.0.1; on the Mac command line do:
brew install llvm
You will need to change the CMakeLists.txt. Edit it (sublime, vim, ...) like this to add support for OpenMP and Clang 4.0.1:
if(__OPENNN_OMP__)
message("Using OpenMP")
set(CMAKE_CXX_COMPILER /usr/local/opt/llvm/bin/clang++)
set(CMAKE_C_COMPILER /usr/local/opt/llvm/bin/clang)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fopenmp -I/usr/local/Cellar/llvm/4.0.1/lib/ -I/usr/local/opt/llvm/include")#${OpenMP_C_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -fopenmp -I/usr/local/Cellar/llvm/4.0.1/lib/ -I/usr/local/opt/llvm/lib")#${OpenMP_CXX_FLAGS}")
set(CMAKE_CXX_LINK_FLAGS "${CMAKE_CXX_LINK_FLAGS} -liomp5 -L/usr/local/Cellar/llvm/4.0.1/lib/")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -liomp5 -L/usr/local/Cellar/llvm/4.0.1/lib/")
endif()
Configuration step, using cmake, in order to generate the Makefile:
CC=/usr/local/opt/llvm/bin/clang CXX=/usr/local/opt/llvm/bin/clang++ LDFLAGS=-L/usr/local/opt/llvm/lib/ cmake -DCMAKE_BUILD_TYPE=Release
Then the compilation step should work (caution: the -j8 flag depends on your Mac's number of CPU cores):
make -j8
I cloned the master branch.
Created a Windows C++ console project in Visual studio 2017.
Added all the sub directory tree to the project.
It simply gives lots of compile errors about undefined declarations; if I search all files, I can't even find a .h or .cpp file containing the missing objects.
A commit last September simply removed a needed pair of .h/.cpp files from the project but kept references to the missing objects.
I tried to check out a snapshot from just before the Sep 6 commit that removed the files.
It also breaks.
Am I doing something so wrong, or is the thing really broken?
It would be good if at least a stable branch or tag existed.
The file performance_term.h was deleted in commit cf1736c; however, the files final_solutions_error.h, independent_parameters_error.h, inverse_sum_squared_error.h, and solutions_error.h rely on it.
The example does not work in the newest version: the split in vector.h crashes, because this->size() is 903 but n is 1000, so end_itr = std::next(this->cbegin(), k*n + n) runs past the end of the vector.
It's very difficult to build OpenNN with make.
After placing an install command in CMakeLists.txt, includes like this one should be fixed:
#include "../tinyxml2/tinyxml2.h"
I think the best way is to remove the forked libraries from the project and add them as dependencies.
In vector.h at about line 1633 the following line fails to compile:
if(fabs(maximum - minimum) <= tolerance)
The fabs() function has an ambiguous argument.
The compiler is MSVC 2013, 64-bit.
Suggested fix:
double diff = maximum - minimum;
return fabs(diff) <= tolerance;
to replace this:
// if(fabs(maximum - minimum) <= tolerance) {
// return(true);
// } else {
// return(false);
// }
The following method from the Matrix template does not delete the second column after the merging:
void Matrix::merge_columns(const string& column_1_name, const string& column_2_name, const string& merged_column_name, const char& separator)
PerformanceFunctional type found in file informedness_optimization_threshold.cpp is not declared anywhere.
First: Thanks for creating this project!
For some projects we use Embarcadero's C++ Builder.
We use an older compiler version (6.44), though.
When I tried to include OpenNN, it asked about __attribute__((aligned(n))), which is probably __attribute__((alloc_align(n))) for the bcc32 compiler, so I added the define for this.
However, now it complains about a line in /eigen/src/Core/util/Meta.h:
template <class T, unsigned int Size> struct remove_const<const T[Size]> { typedef T type[Size]; };
The number of parameter does not match redeclaration.
Can this be fixed quickly, or is the Eigen library simply incompatible with C++ Builder?
LSTM is the best method for forecasting!
Currently I'm using this library for my machine learning AI school project, but I have to export the expression file, paste it into Python, and get the result back to the game. It works, but it is absolutely not efficient.
Is there any method that lets me simply use the network, like the predict() function in scikit-learn?
/home/zyw/Downloads/OpenNN/opennn/vector.h:783:18: error: there are no arguments to ‘data’ that depend on a template parameter, so a declaration of ‘data’ must be available [-fpermissive] MPI_Recv(data(), vector_size, mpi_datatype, rank - 1, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
/home/zyw/Downloads/OpenNN/opennn/vector.h:790:18: error: there are no arguments to ‘data’ that depend on a template parameter, so a declaration of ‘data’ must be available [-fpermissive] MPI_Send(data(), vector_size, mpi_datatype, rank + 1, 2, MPI_COMM_WORLD);
Configure with: cmake .. -D__OPENNN_MPI__=1
After changing data() to this->data(), this problem seems to have disappeared. Could you please tell me whether this modification is reasonable?
Some examples are commented out.
The function fopen (used in neural_network.cpp) is unsafe. Please consider using fopen_s instead; this function is capable of checking bounds (https://en.cppreference.com/w/c/io/fopen).
Hello,
I tried to test the leukemia example, and I got these errors:
Program received signal SIGSEGV, Segmentation fault.
__memmove_avx_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:268
268 vmovdqu 0x60(%rsi), %ymm3
I ran bt in gdb, and got this stack:
#0 __memmove_avx_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:268
#1 0x000000000061103f in double* std::__copy_move<true, true, std::random_access_iterator_tag>::__copy_m<double>(double const*, double const*, double*) ()
#2 0x000000000060f9ba in double* std::__copy_move_a<true, double*, double*>(double*, double*, double*) ()
#3 0x00000000006794ab in __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > > std::__copy_move_a2<true, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > > >(__gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >) ()
#4 0x00000000006792b9 in __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > > std::move<__gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > > >(__gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >, __gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >) ()
#5 0x00000000006791b2 in std::vector<double, std::allocator<double> >::_M_erase(__gnu_cxx::__normal_iterator<double*, std::vector<double, std::allocator<double> > >) ()
#6 0x0000000000678f10 in std::vector<double, std::allocator<double> >::erase(__gnu_cxx::__normal_iterator<double const*, std::vector<double, std::allocator<double> > >) ()
#7 0x0000000000777012 in OpenNN::Perceptron::prune_input(unsigned long const&) ()
#8 0x0000000000771115 in OpenNN::PerceptronLayer::prune_input(unsigned long const&) ()
#9 0x0000000000659772 in OpenNN::MultilayerPerceptron::prune_input(unsigned long const&) ()
#10 0x0000000000642ec7 in OpenNN::NeuralNetwork::prune_input(unsigned long const&) ()
#11 0x0000000000725ce2 in OpenNN::InputsSelectionAlgorithm::set_neural_inputs(OpenNN::Vector const&) ()
#12 0x0000000000727d98 in OpenNN::InputsSelectionAlgorithm::perform_model_evaluation(OpenNN::Vector const&) ()
#13 0x000000000072bf2d in OpenNN::GrowingInputs::perform_inputs_selection() ()
#14 0x000000000070d3e0 in OpenNN::ModelSelection::perform_inputs_selection() const ()
#15 0x00000000005ccdc5 in main ()
I built the program using make.
I'm on Fedora 24 with GCC 6.2.1, but I see the same problem on Ubuntu 16.10 with GCC 5.4.
C02QH2D7G8WM:workspace userone$ logical_operations /usr/data/
OpenNN. Logical Operations Application.
OpenNN Exception: DataSet class.
void load_data(void) method.
Cannot open data file: ../data/logical_operations.dat
When I use OpenNN to predict dates, the result is not correct. If it could support LSTM, that would be the best method for this kind of prediction.
Dear Artelnics,
I just tried to build an example from your repository using Qt 5.5.1 on Windows 7 and got these errors:
g++ -static-libgcc -static-libstdc++ -static -Wl,-subsystem,console -mthreads -o ......\OpenNN\examples\simple_function_regression\bin\simple_function_regression.exe debug/main.o -LD:/opennn/build-opennn-Desktop_Qt_5_5_1_MinGW_32bit-Debug/examples/simple_function_regression/../../opennn/debug/ -lopennn -LD:/opennn/build-opennn-Desktop_Qt_5_5_1_MinGW_32bit-Debug/examples/simple_function_regression/../../tinyxml2/debug/ -ltinyxml2 -LD:/Qt/5.5/mingw492_32/lib -lQt5Guid -lQt5Cored
D:/opennn/build-opennn-Desktop_Qt_5_5_1_MinGW_32bit-Debug/examples/simple_function_regression/../../opennn/debug/\libopennn.a(data_set.o): In function 'ZN6OpenNN7DataSet24unuse_repeated_instancesEv':
D:\opennn\build-opennn-Desktop_Qt_5_5_1_MinGW_32bit-Debug\opennn/../../OpenNN/opennn/data_set.cpp:1881: undefined reference to 'GOMP_parallel'
etc.
I was searching for a neural network library in C++ into which I can load my weights and get the results for some inputs.
So, does this library allow me to load my weights?
If yes, what is the format of the weights file?
And how can I do so?
Thanks
class InformednessOptimizationThreshold
Error C2065 'MaximumIterations': undeclared identifier.
Error C2065 'maximum_time': undeclared identifier.
Error C2065 'performance_functional_pointer': undeclared identifier.
Please look into these errors.
In trying to integrate OpenNN into another framework, I failed to compile with an undeclared identifier in scaled_conjugate_gradient.cpp:2014:
results_pointer->stopping_condition = MaximumSelectionLossIncreases;
Looking at the StoppingCondition enum in optimization_algorithm.h:73, I see that indeed there is no such definition:
enum StoppingCondition{MinimumParametersIncrementNorm, MinimumLossDecrease, LossGoal, GradientNormGoal,
MaximumSelectionErrorIncreases, MaximumIterationsNumber, MaximumTime};
I've added it to allow compilation to succeed, but was concerned by its absence; note that the enum does contain the similarly named MaximumSelectionErrorIncreases, so this use site may simply have missed a rename.
I am trying to build OpenNN on Fedora. But I just get this message
[ 84%] Linking CXX executable tests
[ 84%] Built target tests
make: *** [Makefile:106: all] Error 2
I am not skilled enough with CMake to determine what the problem is. I installed tinyxml2; are there any other libraries it needs?
I just did this
mkdir build
cd build
cmake ..
The strange thing is that if I then go to the examples and compile them, they work fine. But the tests all segfault, and the test suite cannot find the vector data.
Dear Artelnics,
I tried to build an example from your repository using Qt 5.8.0 on Windows 10 and got these kinds of errors related to GOMP_parallel and OpenMP functions:
C:\Users\Giacomo\Documents\build-opennn-Desktop_Qt_5_8_0_MinGW_32bit-Debug\opennn\debug\libopennn.a(testing_analysis.o): In function `ZNK6OpenNN15TestingAnalysis19calculate_roc_curveERKNS_6MatrixIdEES4_':
C:\Users\Giacomo\Documents\build-opennn-Desktop_Qt_5_8_0_MinGW_32bit-Debug\opennn/../../OpenNN-master/opennn/testing_analysis.cpp:1425: undefined reference to `GOMP_parallel'
C:\Users\Giacomo\Documents\build-opennn-Desktop_Qt_5_8_0_MinGW_32bit-Debug\opennn\debug\libopennn.a(testing_analysis.o): In function `ZNK6OpenNN15TestingAnalysis20calculate_lift_chartERKNS_6MatrixIdEE._omp_fn.2':
C:\Users\Giacomo\Documents\build-opennn-Desktop_Qt_5_8_0_MinGW_32bit-Debug\opennn/../../OpenNN-master/opennn/testing_analysis.cpp:1958: undefined reference to `omp_get_num_threads'
I have seen the issue reported on Windows 7 and I have added the code:
QMAKE_CXXFLAGS += -std=c++11 -fopenmp -pthread -lgomp
QMAKE_LFLAGS += -fopenmp -pthread -lgomp
LIBS += -fopenmp -pthread -lgomp
Unfortunately it does not work in my case and I get the same errors.
There is a bug in neural_network.cpp file when sanitizing output names in the write_expression method. Starting from line 5254:
for(size_t i = 0; i < outputs_number; i++)
{
pos = 0;
search = " (";
replace = "_";
while((pos = inputs_name[i].find(search, pos)) != std::string::npos)
{
inputs_name[i].replace(pos, search.length(), replace);
pos += replace.length();
}
pos = 0;
search = " ";
replace = "_";
while((pos = inputs_name[i].find(search, pos)) != std::string::npos)
{
inputs_name[i].replace(pos, search.length(), replace);
pos += replace.length();
}
pos = 0;
search = "-";
replace = "_";
while((pos = inputs_name[i].find(search, pos)) != std::string::npos)
{
inputs_name[i].replace(pos, search.length(), replace);
pos += replace.length();
}
pos = 0;
search = "(";
replace = "_";
while((pos = inputs_name[i].find(search, pos)) != std::string::npos)
{
inputs_name[i].replace(pos, search.length(), replace);
pos += replace.length();
}
}
This little for loop iterates according to the output count but modifies the inputs_name vector. You won't see it when your output count is less than or equal to your input count, but my little experiment had the opposite: more outputs than inputs, so this loop caused a segfault.
The other bug here is that the output names are never sanitized. Just change all "inputs_name" to "outputs_name" and we're done here. :)
Hello.
In my project I am doing a data set on my own:
columnsDS = 14; rowsDS = 6131; dataSet = new DataSet( columnsDS, rowsDS );
...
if( newRow.size() == columnsDS ) {dataSet->set_instance( idxRow, newRow ); }
But the program will throw an exception:
"OpenNN Exception: DataSet class.\n"
"void set_instance(const size_t&, const Vector&) method.\n"
"Size must be equal to number of variables.\n"
I wrote the code according to the documentation:
"It comprises a data matrix in which columns represent variables and rows represent instances."
In the Matrix template, implement conversion of a date-time with the format 2017-01-01 02:00:00 to a timestamp.
The file tinyxml2.h cannot be found. This include file is referenced in trending_layer.h as a relative include path (../tinyxml2/tinyxml2.h). Please consider including tinyxml2 in your project or mentioning this dependency in the README.
Why delete the CUDA references?
In cross_entropy_error.cpp, the outputs are calculated on the multilayer_perceptron, not on the neural_network, so the loss function does not incorporate the probabilistic layer and will introduce a log(negative) situation.
This problem occurs in ALL error classes: the evaluated loss functions do not include the probabilistic layer, conditional layer, etc.
Your example airfoil_self_noise does not work, because the split in vector.h crashes: n is 1000 while this->size() is 903, so the iterator runs out of the vector's bounds.
time_t and struct tm are confusing. Is it possible to use some std functions?
Some methods seem to add 1 day to the timestamp:
time[i] = mktime(&time_info) + 3600*24;
What is wrong here?
C02QH2D7G8WM:build userone$ cmake ..
-- The C compiler identification is AppleClang 7.3.0.7030029
-- The CXX compiler identification is AppleClang 7.3.0.7030029
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:1 (include):
include could not find load file:
RegexUtils
CMake Error at CMakeLists.txt:2 (test_escape_string_as_regex):
Unknown CMake command "test_escape_string_as_regex".
CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as
cmake_minimum_required(VERSION 3.5)
should be added at the top of the file. The version specified may be lower
if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP0000".
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring incomplete, errors occurred!
See also "/Users/userone/Documents/workspace/OpenNN/eigen/build/CMakeFiles/CMakeOutput.log".
Hi
I am trying to train a multilayer perceptron network with one hidden layer. The number of neurons is 256 in the input layer, 25 in the hidden layer, and 2 in the output layer.
The perform_training function crashes in the dot function (levenberg_marquardt_algorithm.cpp) :
JacobianT_dot_Jacobian = terms_Jacobian.calculate_transpose().dot(terms_Jacobian);
because it tries to allocate a vector of 6477*6477 values (6477 is the parameters_number, roughly the total number of connections in the network).
My question is: is it possible to train a network with 256 inputs using OpenNN? If yes, how should the parameters be set to avoid this crash?
Thank you