josephjaspers / blackcat_tensors

Matrix-vector library designed for neural network construction. CUDA (GPU) support, OpenMP (multithreaded CPU) support, partial BLAS support, an expression-template-based implementation whose generated PTX code is identical to hand-written kernels, and support for auto-differentiation.

C 0.54% C++ 93.28% Cuda 5.68% Makefile 0.44% Dockerfile 0.05%
machine-learning neuralnetworks linear-algebra blas gpu-support cuda openmp neuralnetwork-construction gpu blackcat-tensors

blackcat_tensors's People

Contributors: josephjaspers

blackcat_tensors's Issues

Binary_Reduction Expressions --

Binary_Reduction expressions currently take on the dimensionality/shape of the right-value expression, as opposed to the left-value.

E.g.

vec += mat // will have dims=2 and the shape of the matrix.

This is evaluated internally and still returns a vector, so it currently does not affect any user code; this note just records the oddity.

--> A fix would involve creating a specialization of Binary_Expression for broadcasted reductions.

How to compile the BlackCat_Tensors project? Many errors occur

I used VS2015 to compile BlackCat_Tensors and got many errors; the error output is below:
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2065: 'always_inline': undeclared identifier
1>...typetraits.h(96): error C2433: 'attribute': 'inline' is not permitted on a data declaration
1>...typetraits.h(96): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>...typetraits.h(96): error C2146: syntax error: missing ';' before identifier 'attribute'
1>...typetraits.h(96): error C2065: 'hot': undeclared identifier
1>...typetraits.h(96): error C2374: 'BC::traits::attribute': redefinition; multiple initialization
1>...typetraits.h(96): error C2448: 'BC::traits::attribute': function-style initializer appears to be a function definition

(the same group of 'always_inline'/'attribute'/'hot' errors repeats for typetraits.h lines 97, 99, 167, 177, 191 and 195)

1>...typetraits.h(104): error C2059: syntax error: '...'
1>...typetraits.h(105): error C2143: syntax error: missing ';' before '{'
1>...typetraits.h(105): error C2447: '{': missing function header (old-style formal list?)
(the same missing-';' / missing-function-header pair repeats at lines 173, 183 and 194)
1>...typetraits.h(194): error C2065: 'T': undeclared identifier
1>...typetraits.h(194): error C2923: 'BC::traits::apply_const_t': 'T' is not a valid template type argument for parameter 'T'
1>...typetraits.h(199): error C2653: 'T': is not a class or namespace name
1>...typetraits.h(215): error C2065: 'query_value_type': undeclared identifier
1>...typetraits.h(220): note: see reference to class template instantiation 'BC::traits::common_traits' being compiled
1>...typetraits.h(215): error C3200: 'unknown-type': invalid template argument for template parameter 'func', expected a class template
1>...allocators\host.h(14): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1>...allocators\host.h(57): warning C4814: 'BC::allocators::Allocator<BC::host_tag,T>::operator ==': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'
1>...allocators\host.h(58): warning C4814: 'BC::allocators::Allocator<BC::host_tag,T>::operator !=': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'
1>...allocators\allocator_traits.h(15): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1>...allocators\allocator_traits.h(16): warning C4099: 'BC::device_tag': type name first seen using 'struct' now seen using 'class'
1>...streams\streams.h(11): warning C4099: repeated for 'BC::host_tag' and 'BC::device_tag'
1>...streams\host.h(94): error C2065: 'PRETTY_FUNCTION': undeclared identifier
1>...blas\blas.h(11): warning C4099: repeated for 'BC::host_tag' and 'BC::device_tag'
1>...blas\host.h(11): fatal error C1083: Cannot open include file: 'cblas.h': No such file or directory

Convolution error

Convolution does not produce the correct results when channels != 1.

The fix includes a rewrite of the current image-to-col operation.

My dear friend, merry Christmas Eve!

After more than half a year of communication, we have become good friends. On Christmas Eve and Christmas Day, I wish my dear friend a merry Christmas Eve and a merry Christmas.

How do I put my data into the model?

I read my txt file's data into a vector, but when I feed the vector data to the model and compile, it errors:

[screenshot of the compile error]
What should I do?

I added the function below to Tensor_Utility.h, and it compiles normally. Is this OK?

void data_format_convers(std::vector<double>& maxmindata) {
	std::vector<value_type> file_data;

	for (int i = 0; i < maxmindata.size(); i++)
	{
		value_type val = (value_type)maxmindata[i];
		file_data.push_back(val);
	}
	int copy_size = (unsigned)as_derived().size() > file_data.size() ? file_data.size() : as_derived().size();
	utility_l::HostToDevice(as_derived().internal().memptr(), file_data.data(), copy_size);
}

About LSTM: why slice into four parts?

I am puzzled:

1. In the mnist_test_recurrent example, you slice the input image into four parts and feed them to forward_propagation one quarter at a time, four times. Then you call network.back_propagation(outputs[j]) twice; why back-propagate twice?

int img_partitions = 4;
for (int i = 0; i < epochs; ++i) {
	std::cout << " current epoch: " << i << std::endl;
	for (int j = 0; j < samples/batch_size; j++) {
		for (int p = 0; p < img_partitions; ++p) {
			auto batch = inputs[j];
			auto index = BC::index(0, 784 * (p/(float)img_partitions));
			auto shape = BC::shape(784/4, batch_size);

			network.forward_propagation(batch[{index, shape}]);
		}
		//Apply backprop on the last two images (images 3/4 and 4/4)
		network.back_propagation(outputs[j]);
		network.back_propagation(outputs[j]);

		network.update_weights();
	}
}

2. When testing, to get the mat hyps you first feed in the first three quarters, then feed the last quarter to get hyps; this is different from the procedure in step 1.

auto batch = inputs[0];
auto shape = BC::shape(784/4, batch_size);
for (int p = 0; p < img_partitions-1; ++p) {
	auto index = BC::index(0, 784 * (p/(float)img_partitions));
	network.forward_propagation(batch[{index, shape}]);
}
auto last_index = BC::index(0, 784 * ((img_partitions-1)/(float)img_partitions));
mat hyps = network.forward_propagation(batch[{last_index, shape}]);

3. I am puzzled: when I use the LSTM to forecast, should I slice the image into parts like this? Could it be as simple as the mnist_test example?

Network structure puzzle: should there be a softmax layer or not?

1. With no softmax layer, I trained and printed the output data and the forecast data, but the forecast values are very large, not between 0 and 1 (I applied max-min normalization to the input data):
My net is:

auto make_lstm_network() {
	return BC::nn::neuralnetwork(
		BC::nn::lstm(BC::host_tag(), 96 * 10, 256),
		BC::nn::recurrent(BC::host_tag(), 256, 192),
		BC::nn::output_layer(BC::host_tag(), 192)
	);
}
using network_type = decltype(make_lstm_network());

typedef struct _LstmPredictTask {
	int fd;
	int m_inputs_number;
	int m_outputs_number;
	int m_sequence_length;
	int m_train_number;
	int m_batch_size;
	double m_learning_rate;
	network_type m_pnetwork = make_lstm_network();
	void reset_neural_network() {
		m_pnetwork = std::move(make_lstm_network());
	}
} LstmPredictTask;

I create the net and train:

//start train
LstmPredictTask* lstmpredicttask = new LstmPredictTask();
if (lstmpredicttask == NULL) {
	return -2;
}

//LstmPredictTask lstmpredicttask;
std::cout << "Neural Network architecture: \n" << lstmpredicttask->m_pnetwork.get_string_architecture() << std::endl;
lstmpredicttask->m_pnetwork.set_learning_rate(lstmpredicttask->m_learning_rate);
lstmpredicttask->m_pnetwork.set_batch_size(lstmpredicttask->m_batch_size);

int training_sets;
std::pair<cube, cube> data = load_train_data(system_tag, datafilepath, lstmpredicttask, &training_sets);
cube& inputs = data.first;
cube& outputs = data.second;

std::cout << " training..." << std::endl;
auto start = std::chrono::system_clock::now();

std::cout << "imagesoutput:------------------------------------" << std::endl;
auto imagesoutput = reshape(outputs[0], BC::shape(96, 2, lstmpredicttask->m_batch_size));
imagesoutput[0].t().print_sparse(5);

for (int i = 0; i < epochs; ++i) {
	std::cout << " current epoch: " << i << std::endl;
	for (int j = 0; j < training_sets; ++j) {
		lstmpredicttask->m_pnetwork.forward_propagation(inputs[j]);
		lstmpredicttask->m_pnetwork.back_propagation(outputs[j]);
		lstmpredicttask->m_pnetwork.update_weights();
	}
}

if (strlen(_trainparamsavefile) != 0)
{
	lstmpredicttask->m_pnetwork.save(_trainparamsavefile); //Uncomment to add saving/loading
}

auto end = std::chrono::system_clock::now();
clock total = clock(end - start);
std::cout << " training time: " << total.count() << std::endl;

auto batch = inputs[0];
mat hyps = lstmpredicttask->m_pnetwork.forward_propagation(batch);
std::cout << " MAPE loss: " << BC::Scalar<double>(BC::nn::MAPE(hyps, outputs[0]) / lstmpredicttask->m_batch_size).data()[0] << std::endl;

std::cout << "process_train_data------------------------------------" << std::endl;
for (int i = 0; i < 1; ++i) {
	hyps[i].print();
	std::cout << "------------------------------------" << std::endl;
}

I printed the output data and the forecast data, but the forecast data is not right: the values are not between 0 and 1, and they are very large.
Neural Network architecture:
LSTM:
inputs: 960
outputs: 256
Recurrent:
inputs: 256
outputs: 192
Output_Layer:
inputs: 192
outputs: 192

training...
imagesoutput:------------------------------------
[[0.380, 0.750, 0.570, 0.603, 0.504, 0.529, 0.585, 0.654, 0.497, 0.674, 0.708, 0.564, 0.841, 0.592, 0.590, 0.724, 0.585, 0.884, 0.273, 1.000, 0.681, 0.496, 0.857, 0.774, 0.749, 0.721, 0.659, 0.582, 0.838, 0.701, 0.617, 0.832, 0.718, 0.699, 0.591, 0.624, 0.737, 0.571, 0.632, 0.707, 0.773, 0.512, 0.823, 0.595, 0.534, 0.784, 0.581, 0.701, 0.698, 0.517, 0.690, 0.826, 0.540, 0.788, 0.709, 0.793, 0.700, 0.722, 0.690, 0.810, 0.693, 0.801, 0.693, 0.801, 0.644, 0.572, 0.552, 0.642, 0.696, 0.593, 0.599, 0.642, 0.584, 0.765, 0.619, 0.514, 0.490, 0.647, 0.385, 0.496, 0.643, 0.552, 0.503, 0.610, 0.567, 0.651, 0.636, 0.514, 0.687, 0.675, 0.704, 0.795, 0.722, 0.753, 0.531, 0.805]
[0.333, 0.846, 0.499, 0.595, 0.510, 0.496, 0.244, 1.000, 0.491, 0.682, 0.708, 0.564, 0.823, 0.592, 0.590, 0.724, 0.570, 0.874, 0.601, 0.679, 0.689, 0.515, 0.761, 0.877, 0.673, 0.712, 0.251, 1.000, 0.857, 0.675, 0.632, 0.814, 0.737, 0.699, 0.606, 0.638, 0.746, 0.659, 0.568, 0.806, 0.686, 0.606, 0.730, 0.696, 0.547, 0.775, 0.604, 0.717, 0.698, 0.523, 0.718, 0.748, 0.635, 0.798, 0.718, 0.812, 0.709, 0.663, 0.821, 0.838, 0.720, 0.735, 0.822, 0.735, 0.830, 0.629, 0.712, 0.642, 0.883, 0.578, 0.773, 0.611, 0.754, 0.728, 0.790, 0.501, 0.623, 0.631, 0.484, 0.484, 0.788, 0.483, 0.567, 0.541, 0.647, 0.525, 0.661, 0.413, 0.723, 0.583, 0.786, 0.640, 0.722, 0.606, 0.537, 0.640]]
current epoch: 0
...
current epoch: 127
training time: 4.44725
MAPE loss: 0.0310454
process_train_data------------------------------------
[-25.8460, -8.38295, 14.65372, -5.71706, -27.4717, 8.602991, -23.6860, -35.2958, 17.08113, -15.3109, -4.69952, 16.89871, 35.82342, 21.46421, 7.413482, 35.08593, 6.854354, 12.87520, -32.6889, 13.24751, 21.69270, 9.105426, -7.89662, -14.2385, -28.2898, 6.437375, -5.50345, -9.93736, -23.0111, 2.646907, 17.38767, -22.7771, 1.653123, -4.01012, 19.70499, -15.3654, -23.4237, 12.21545, 26.81120, 4.972610, -2.22349, 40.82117, -39.3307, -16.4180, -3.45213, 3.258177, 14.86659, 4.521998, 17.44581, -9.25711, 8.646381, -3.66365, -21.4957, 2.988301, 11.47178, -3.76695, 2.691704, -10.3771, 20.55152, -20.8875, -14.0821, 10.21881, -11.5416, -2.01150, -14.9797, -22.1648, 12.36946, 16.69329, -11.0484, 9.839924, -25.3255, -16.6491, 21.82862, 0.515754, 26.25349, -31.2395, -11.0210, 36.62269, 1.771704, -25.2875, -15.7974, 16.03234, 4.626742, -34.5083, -2.80271, 18.01022, 0.685722, 12.62443, -1.93305, 54.40408, -26.3419, 10.30347, 14.39106, -22.9777, 4.736426, -5.70500, -1.08147, -45.7611, 22.09196, 11.78476, 5.660966, 12.14032, -38.3728, 28.37263, 0.546137, 9.886680, -1.93493, -32.3411, 17.84676, -21.0064, -2.20607, -14.6393, -19.7328, -2.09870, -7.53886, 8.389552, -10.0535, -18.1724, -2.34214, -4.51320, -9.92966, -4.03211, 15.46169, -6.36365, 13.37198, -12.9277, 0.622534, 32.86275, -7.55993, 4.308530, 23.71868, -14.3317, -9.16887, 5.663595, -0.60777, -11.6949, -31.5343, 18.74082, -17.3257, 2.044618, 7.356360, -16.0196, 0.659284, 16.72092, 15.97791, -4.75050, -4.47696, -9.21328, 11.09348, 4.904877, -32.0915, -6.60098, -29.8067, 5.970200, 5.943465, 49.47677, 2.389998, -1.62050, 26.01327, 18.25720, -18.1648, 21.47414, -16.8297, 18.06407, 0.427003, -18.3307, 2.537354, -36.9672, 14.45144, 12.91946, -18.1769, 2.703407, -4.11329, -2.75824, -14.2142, 43.97624, 22.92934, 1.787207, -4.71201, 3.859387, -18.1937, -2.71971, 14.80760, 52.58834, -4.54549, 29.55029, -7.89654, 13.35169, -10.4480, -23.7541, 14.79737, -6.61407]

2. With a softmax layer, I trained and printed the output data and the forecast data, but the forecast data seems wrong: it has many zeros and very small values:
My net is:

auto make_lstm_network() {
	return BC::nn::neuralnetwork(
		BC::nn::lstm(BC::host_tag(), 96 * 10, 256),
		BC::nn::recurrent(BC::host_tag(), 256, 192),
		BC::nn::softmax(BC::host_tag(), 192),
		BC::nn::output_layer(BC::host_tag(), 192)
	);
}
using network_type = decltype(make_lstm_network());

typedef struct _LstmPredictTask {
	int fd;
	int m_inputs_number;
	int m_outputs_number;
	int m_sequence_length;
	int m_train_number;
	int m_batch_size;
	double m_learning_rate;
	network_type m_pnetwork = make_lstm_network();
	void reset_neural_network() {
		m_pnetwork = std::move(make_lstm_network());
	}
} LstmPredictTask;

I create the net and train:

//start train
LstmPredictTask* lstmpredicttask = new LstmPredictTask();
if (lstmpredicttask == NULL) {
	return -2;
}

//LstmPredictTask lstmpredicttask;
std::cout << "Neural Network architecture: \n" << lstmpredicttask->m_pnetwork.get_string_architecture() << std::endl;
lstmpredicttask->m_pnetwork.set_learning_rate(lstmpredicttask->m_learning_rate);
lstmpredicttask->m_pnetwork.set_batch_size(lstmpredicttask->m_batch_size);

int training_sets;
std::pair<cube, cube> data = load_train_data(system_tag, datafilepath, lstmpredicttask, &training_sets);
cube& inputs = data.first;
cube& outputs = data.second;

std::cout << " training..." << std::endl;
auto start = std::chrono::system_clock::now();

std::cout << "imagesoutput:------------------------------------" << std::endl;
auto imagesoutput = reshape(outputs[0], BC::shape(96, 2, lstmpredicttask->m_batch_size));
imagesoutput[0].t().print_sparse(5);

for (int i = 0; i < epochs; ++i) {
	std::cout << " current epoch: " << i << std::endl;
	for (int j = 0; j < training_sets; ++j) {
		lstmpredicttask->m_pnetwork.forward_propagation(inputs[j]);
		lstmpredicttask->m_pnetwork.back_propagation(outputs[j]);
		lstmpredicttask->m_pnetwork.update_weights();
	}
}

if (strlen(_trainparamsavefile) != 0)
{
	lstmpredicttask->m_pnetwork.save(_trainparamsavefile); //Uncomment to add saving/loading
}

auto end = std::chrono::system_clock::now();
clock total = clock(end - start);
std::cout << " training time: " << total.count() << std::endl;

auto batch = inputs[0];
mat hyps = lstmpredicttask->m_pnetwork.forward_propagation(batch);
std::cout << " MAPE loss: " << BC::Scalar<double>(BC::nn::MAPE(hyps, outputs[0]) / lstmpredicttask->m_batch_size).data()[0] << std::endl;

std::cout << "process_train_data------------------------------------" << std::endl;
for (int i = 0; i < 1; ++i) {
	hyps[i].print();
	std::cout << "------------------------------------" << std::endl;
}

I printed the output data and the forecast data, but the forecast data seems wrong: it has many zeros and very small values.
Neural Network architecture:
LSTM:
inputs: 960
outputs: 256
Recurrent:
inputs: 256
outputs: 192
SoftMax:
inputs: 192
outputs: 192
Output_Layer:
inputs: 192
outputs: 192

training...
imagesoutput:------------------------------------
[[0.380, 0.750, 0.570, 0.603, 0.504, 0.529, 0.585, 0.654, 0.497, 0.674, 0.708, 0.564, 0.841, 0.592, 0.590, 0.724, 0.585, 0.884, 0.273, 1.000, 0.681, 0.496, 0.857, 0.774, 0.749, 0.721, 0.659, 0.582, 0.838, 0.701, 0.617, 0.832, 0.718, 0.699, 0.591, 0.624, 0.737, 0.571, 0.632, 0.707, 0.773, 0.512, 0.823, 0.595, 0.534, 0.784, 0.581, 0.701, 0.698, 0.517, 0.690, 0.826, 0.540, 0.788, 0.709, 0.793, 0.700, 0.722, 0.690, 0.810, 0.693, 0.801, 0.693, 0.801, 0.644, 0.572, 0.552, 0.642, 0.696, 0.593, 0.599, 0.642, 0.584, 0.765, 0.619, 0.514, 0.490, 0.647, 0.385, 0.496, 0.643, 0.552, 0.503, 0.610, 0.567, 0.651, 0.636, 0.514, 0.687, 0.675, 0.704, 0.795, 0.722, 0.753, 0.531, 0.805]
[0.333, 0.846, 0.499, 0.595, 0.510, 0.496, 0.244, 1.000, 0.491, 0.682, 0.708, 0.564, 0.823, 0.592, 0.590, 0.724, 0.570, 0.874, 0.601, 0.679, 0.689, 0.515, 0.761, 0.877, 0.673, 0.712, 0.251, 1.000, 0.857, 0.675, 0.632, 0.814, 0.737, 0.699, 0.606, 0.638, 0.746, 0.659, 0.568, 0.806, 0.686, 0.606, 0.730, 0.696, 0.547, 0.775, 0.604, 0.717, 0.698, 0.523, 0.718, 0.748, 0.635, 0.798, 0.718, 0.812, 0.709, 0.663, 0.821, 0.838, 0.720, 0.735, 0.822, 0.735, 0.830, 0.629, 0.712, 0.642, 0.883, 0.578, 0.773, 0.611, 0.754, 0.728, 0.790, 0.501, 0.623, 0.631, 0.484, 0.484, 0.788, 0.483, 0.567, 0.541, 0.647, 0.525, 0.661, 0.413, 0.723, 0.583, 0.786, 0.640, 0.722, 0.606, 0.537, 0.640]]
current epoch: 0
...
current epoch: 127
training time: 3.44396
MAPE loss: 17.5378
process_train_data------------------------------------
[0.000000, 0.015604, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.042159, 0.000000, 0.000000, 0.000000, 0.000000, 0.077015, 0.000000, 0.000000, 0.002480, 0.000000, 0.105974, 0.000000, 0.059280, 0.000000, 0.000000, 0.063471, 0.075822, 0.000055, 0.000000, 0.000000, 0.000025, 0.063648, 0.000000, 0.000000, 0.072730, 0.000195, 0.000000, 0.000000, 0.000000, 0.010841, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.024128, 0.000000, 0.000000, 0.000000, 0.000000, 0.000073, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000020, 0.000000, 0.000000, 0.020468, 0.079445, 0.000000, 0.014061, 0.000001, 0.000004, 0.000165, 0.000000, 0.000000, 0.000000, 0.013662, 0.000000, 0.000000, 0.000000, 0.000000, 0.000001, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000005, 0.000001, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000231, 0.000000, 0.000000, 0.000000, 0.000000, 0.005832, 0.000000, 0.000000, 0.000000, 0.000000, 0.000227, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.088577, 0.000000, 0.000000, 0.000566, 0.000000, 0.000000, 0.000000, 0.000000, 0.036642, 0.000000, 0.000000, 0.000104, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.003014, 0.000000, 0.000000, 0.000000, 0.000000, 0.012931, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.005828, 0.000001, 0.042263, 0.000000, 0.000000, 0.000000, 0.062403, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000014, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000034]

Is something wrong with my network structure? Could you give me some suggestions? What should I do?

Network allocated with malloc breaks somewhere when run!!!

I declare this:

auto make_lstm_network() {
	return BC::nn::neuralnetwork(
		BC::nn::lstm(BC::host_tag(), 96 * 10, 128),
		BC::nn::recurrent(BC::host_tag(), 128, 96),
		BC::nn::softmax(BC::host_tag(), 96),
		BC::nn::output_layer(BC::host_tag(), 96)
	);
}
using network_type = decltype(make_lstm_network());

typedef struct _LstmPredictTask {
	int input;
	int output;
	network_type m_pnetwork = make_lstm_network();
	void reset_neural_network() {
		m_pnetwork = std::move(make_lstm_network());
	}
} LstmPredictTask;

and when I run this code, it breaks at the functions get_string_architecture() and set_batch_size(32):

LstmPredictTask* lstmpredicttask = (LstmPredictTask*)malloc(sizeof(LstmPredictTask));
if (lstmpredicttask == NULL) {
	return -2;
}

//LstmPredictTask lstmpredicttask;
auto network = lstmpredicttask->m_pnetwork;
std::cout << "Neural Network architecture: \n" << network.get_string_architecture() << std::endl;

network.set_learning_rate(0.001);
network.set_batch_size(batch_size);

I only malloc the network, and then it breaks!
When I do not call set_batch_size, the function forward_propagation breaks during training.

Data conversion precision problem!

I use this way to input data to the system:
[image]

My original double data:
[image]
When I convert it internally:
[image]
My converted double data precision is %1f. What should I do?

Debug mode is very slow.

In a VS2019 debug build, creating the network and loading a trained model file is very slow; loading the trained model file takes more than 5 seconds.

What's wrong with the VS compile? I have included OpenBLAS!

1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2146: syntax error: missing ';' before identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2065: 'hot': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): error C2448: 'BC::traits::__attribute__': function-style initializer appears to be a function definition
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2146: syntax error: missing ';' before identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2065: 'hot': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(97): error C2448: 'BC::traits::__attribute__': function-style initializer appears to be a function definition
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(99): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(99): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(99): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(99): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(99): error C2061: syntax error: identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(104): error C2059: syntax error: '...'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(105): error C2143: syntax error: missing ';' before '{'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(105): error C2447: '{': missing function header (old-style formal list?)
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(167): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(167): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(167): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(167): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(167): error C2061: syntax error: identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(173): error C2143: syntax error: missing ';' before '{'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(173): error C2447: '{': missing function header (old-style formal list?)
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(177): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(177): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(177): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(177): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(177): error C2061: syntax error: identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(183): error C2143: syntax error: missing ';' before '{'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(183): error C2447: '{': missing function header (old-style formal list?)
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(191): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(191): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(191): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(191): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(191): error C2061: syntax error: identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(194): error C2065: 'T': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(194): error C2923: 'BC::traits::apply_const_t': 'T' is not a valid template type argument for parameter 'T'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(194): error C2143: syntax error: missing ';' before '{'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(194): error C2447: '{': missing function header (old-style formal list?)
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(195): error C2065: 'always_inline': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(195): error C2433: '__attribute__': 'inline' not permitted on data declarations
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(195): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(195): error C2374: 'BC::traits::__attribute__': redefinition; multiple initialization
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(96): note: see declaration of 'BC::traits::__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(195): error C2061: syntax error: identifier '__attribute__'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(199): error C2653: 'T': is not a class or namespace name
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(215): error C2065: 'query_value_type': undeclared identifier
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(220): note: see reference to class template instantiation 'BC::traits::common_traits' being compiled
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\type_traits\typetraits.h(215): error C3200: 'unknown-type': invalid template argument for template parameter 'func', expected a class template
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\host.h(14): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blackcat_common.h(22): note: see declaration of 'BC::host_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\host.h(56): warning C4814: 'BC::allocators::Allocator<BC::host_tag,T>::operator ==': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\host.h(58): note: see reference to class template instantiation 'BC::allocators::Allocator<BC::host_tag,T>' being compiled
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\host.h(57): warning C4814: 'BC::allocators::Allocator<BC::host_tag,T>::operator !=': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\allocator_traits.h(15): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\host.h(14): note: see declaration of 'BC::host_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\allocator_traits.h(16): warning C4099: 'BC::device_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blackcat_common.h(27): note: see declaration of 'BC::device_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\streams\streams.h(11): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\allocator_traits.h(15): note: see declaration of 'BC::host_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\streams\streams.h(11): warning C4099: 'BC::device_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\allocators\allocator_traits.h(16): note: see declaration of 'BC::device_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\streams\host.h(94): error C2065: '__PRETTY_FUNCTION__': undeclared identifier
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blas\blas.h(11): warning C4099: 'BC::host_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\streams\streams.h(11): note: see declaration of 'BC::host_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blas\blas.h(11): warning C4099: 'BC::device_tag': type name first seen using 'struct' now seen using 'class'
1> i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\streams\streams.h(11): note: see declaration of 'BC::device_tag'
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blas\host.h(11): warning C4067: unexpected tokens following preprocessor directive - expected a newline
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blas\host.h(13): warning C4067: unexpected tokens following preprocessor directive - expected a newline
1>i:\load predict\lstm\lstm-neuralnetwork-cpp_重点\src_info\blackcat_tensors-master\source\include\blas\host.h(16): fatal error C1189: #error:  "BLACKCAT_TENSORS REQUIRES A VALID <cblas.h> OR <mkl.h> IN ITS PATH"
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

How to set batch_size value?

How should I choose the batch_size value? The default is 32, but when I set it to 16 the output seems better than with 32; however, with the same epochs == 5000, training takes more time.

new code compile error!

[image]
Location: tree_evaluator.h, line 122.
Compile error:
tree_evaluator.h(123,1): error C2975: '_Test': invalid template argument for 'std::conditional_t', expected compile-time constant expression

linux makefile error!

When I compile the project under Linux, I get the errors below (what should I do, and which dependencies are missing?):
MNIST_Test.h:113:2: note: suggested alternatives:
In file included from /usr/include/wchar.h:51:0,
from /usr/include/c++/5/cwchar:44,
from /usr/include/c++/5/bits/postypes.h:40,
from /usr/include/c++/5/iosfwd:40,
from /usr/include/c++/5/ios:38,
from /usr/include/c++/5/ostream:38,
from /usr/include/c++/5/iostream:39,
from ../include/BlackCat_Common.h:95,
from ../include/BlackCat_Tensors.h:9,
from MNIST_Test.cpp:11:
/usr/lib/gcc/x86_64-linux-gnu/5/include/stddef.h:216:23: note: ‘size_t’
typedef __SIZE_TYPE__ size_t;
^
In file included from /usr/include/c++/5/iostream:38:0,
from ../include/BlackCat_Common.h:95,
from ../include/BlackCat_Tensors.h:9,
from MNIST_Test.cpp:11:
/usr/include/x86_64-linux-gnu/c++/5/bits/c++config.h:196:26: note: ‘std::size_t’
typedef __SIZE_TYPE__ size_t;
^
/usr/include/x86_64-linux-gnu/c++/5/bits/c++config.h:196:26: note: ‘std::size_t’
In file included from MNIST_Test.cpp:12:0:
MNIST_Test.h:114:7: error: expected ‘;’ before ‘img’
cube img = cube(reshape(inputs[0])(28,28, BATCH_SIZE));
^
MNIST_Test.h:115:2: error: ‘mat’ was not declared in this scope
mat hyps = mat(network.forward_propagation(inputs[0]));
^
MNIST_Test.h:117:22: error: ‘test_images’ was not declared in this scope
for (int i = 0; i < test_images; ++i) {
^
MNIST_Test.h:118:3: error: ‘img’ was not declared in this scope
img[i].printSparse(3);
^
MNIST_Test.h:119:3: error: ‘hyps’ was not declared in this scope
hyps[i].print();
^
In file included from ../include/BlackCat_Stream.h:10:0,
from ../include/BlackCat_Tensors.h:12,
from MNIST_Test.cpp:11:
../include/BlackCat_Memory.h: In instantiation of ‘int BC::memory::atomic_shared_ptr<ValueType>::operator->() [with ValueType = BC::streams::Stream<BC::host_tag>::Contents]’:
../include/streams/Host.h:44:23: required from here
../include/BlackCat_Memory.h:89:40: error: cannot convert ‘BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>::wrapper’ to ‘int’ in return
auto operator ->() { return this->get(); }
^
../include/BlackCat_Memory.h: In instantiation of ‘bool BC::memory::atomic_shared_ptr<ValueType>::operator==(const BC::memory::atomic_shared_ptr<ValueType>&) [with ValueType = BC::streams::Stream<BC::host_tag>::Contents]’:
../include/streams/Host.h:61:45: required from here
../include/BlackCat_Memory.h:93:20: error: ‘const class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘m_ptr’
return ptr.m_ptr == this->m_ptr;
^
../include/BlackCat_Memory.h:93:20: error: ‘class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘m_ptr’
../include/BlackCat_Memory.h: In instantiation of ‘BC::memory::atomic_shared_ptr<ValueType>& BC::memory::atomic_shared_ptr<ValueType>::operator=(const BC::memory::atomic_shared_ptr<ValueType>&) [with ValueType = BC::streams::Stream<BC::host_tag>::Contents]’:
../include/streams/Host.h:65:14: required from here
../include/BlackCat_Memory.h:100:35: error: ‘class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘locker’
std::lock_guard<std::mutex> lck1(this->locker);
^
../include/BlackCat_Memory.h:100:35: error: ‘lck1’ was not declared in this scope
../include/BlackCat_Memory.h:101:35: error: ‘const class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘locker’
std::lock_guard<std::mutex> lck2(ptr.locker);
^
../include/BlackCat_Memory.h:101:35: error: ‘lck2’ was not declared in this scope
../include/BlackCat_Memory.h:102:15: error: ‘class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘m_ptr’
this->m_ptr = ptr.m_ptr;
^
../include/BlackCat_Memory.h:102:15: error: ‘const class BC::memory::atomic_shared_ptr<BC::streams::Stream<BC::host_tag>::Contents>’ has no member named ‘m_ptr’
Makefile:14: recipe for target 'all' failed
make: *** [all] Error 1

Implement optimized 2d-multi channel convolution filter.

The current implementation is naive (slow).
This is required only for the CPU implementation;
the CUDA/GPU implementation will wrap the cuDNN convolution.

Requires forward and backwards implementation (though pull requests for just forward or backwards will be accepted).

No preference on img2col vs other versions.

The function definition should attempt to mimic the BLAS interface:

template<class OutputPtr, class ImgPtr, class KrnlPtr>
void conv2d(OutputPtr output, size_t output_ld,
			ImgPtr img, size_t rows, size_t cols, size_t img_ld,
			KrnlPtr krnl, size_t k_rows, size_t k_cols, size_t krnl_ld, size_t stride=1, size_t padding=0) {

If img2col (or any version that requires a workspace) it must accept a Stream argument and allocate from the stream. IE

template<class Stream, class OutputPtr, class ImgPtr, class KrnlPtr>
void conv2d(Stream stream, OutputPtr output, size_t output_ld,
			ImgPtr img, size_t rows, size_t cols, size_t img_ld,
			KrnlPtr krnl, size_t k_rows, size_t k_cols, size_t krnl_ld, size_t stride=1, size_t padding=0) {
       auto buffer = stream.get_allocator().allocate();
}

Feature Request: Add Mish activation

Mish is a novel activation function proposed in this paper.
It has shown promising results so far and has been adopted in several packages including:

All benchmarks, analysis and links to official package implementations can be found in this repository

It would be nice to have Mish as an option within the activation function group.

This is a comparison of Mish with other conventional activation functions on a SEResNet-50 for CIFAR-10 (better accuracy and faster than GELU):
[image: se50_1]

device_tag performance is not as good as expected

Running the mnist_test_recurrent example for 500 epochs in the two different modes:

BC::host_tag:
training time: 2628.69
Average loss: 0.0552672

BC::device_tag:
training time: 2704.01
Average loss: 0.0552672

BC::host_tag is 76 ms faster than BC::device_tag; my GPU is an NVIDIA 1060.

batch_size and samples size puzzle?

[image]
In the percept_MNIST function of the mnist_test_recurrent example, batch_size == 32 and samples = 32*1024. But in my real data the number of samples is not a multiple of batch_size; for example, my data size is 32*1024 + 31, so are the last 31 records never trained?

Overhaul neural-networks.

Planning a vast re-write of the neural network implementation.
I noticed a significant performance regression compared with some of the older implementations.

The desired feature set was not clear when the current implementation was created; to accommodate more rapid development in the future, I am planning to re-write the neural-network-related architecture.

When using untrained data to predict, the output is very bad.

I trained on my data for epoch == 3000. When I predict on the training data, the result is very good; but when I predict on nearby unseen data, the output is very bad. I don't know whether this is right or wrong. The normal way to evaluate is to divide the data into training data and test data, train on the training data, and evaluate on the test data.

Add support for predicting non-batched data

In the MNIST_Test_Recurrent example, during training the input data size is img_sz*batch_size per step. But when forecasting I only have a single img_size input, so I need to feed inputs[0][i] as in the MNIST_Test example; however, when I use the code below, it fails to compile:
[image]

Add learning_rate and batch_size to saving and loading.

When I save the network and then load it and run it, it errors; I have to reset learning_rate and batch_size first. I suggest saving learning_rate and batch_size together with the network and restoring them on load, so they do not need to be reset.

Urgent!!! lstm back_propagation errors.

My net is:
auto make_lstm_network() {
    return BC::nn::neuralnetwork(
        BC::nn::lstm(BC::host_tag(), 96 * 10, 1024),
        BC::nn::lstm(BC::host_tag(), 1024, 512),
        BC::nn::lstm(BC::host_tag(), 512, 256),
        BC::nn::recurrent(BC::host_tag(), 256, 192),
        BC::nn::output_layer(BC::host_tag(), 192)
    );
}
using network_type = decltype(make_lstm_network());

typedef struct _LstmPredictTask {
    int fd;
    int m_inputs_number;
    int m_outputs_number;
    int m_sequence_length;
    int m_train_number;
    int m_batch_size;
    double m_learning_rate;
    network_type m_pnetwork = make_lstm_network();
    void reset_neural_network() {
        m_pnetwork = std::move(make_lstm_network());
    }
} LstmPredictTask;

I new a LstmPredictTask and then train, but when it reaches the lstm back_propagation it breaks, even though my input and output data sizes are correct:
[image]
It breaks at this point:
[image]
The console output is:
[image]
I counted my input data size, and it is correct (960); my output data size is also correct (192).
