mindspore-ai / mindspore

MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.

Home Page: https://gitee.com/mindspore/mindspore

License: Apache License 2.0

CMake 0.61% Shell 0.58% Python 27.91% C++ 60.94% Cuda 3.20% C 5.60% HTML 0.01% Dockerfile 0.12% Batchfile 0.01% Assembly 0.87% Java 0.11% Makefile 0.01% Terra 0.01% Objective-C 0.01% Objective-C++ 0.01% Julia 0.01% Ruby 0.01% Smarty 0.02%

mindspore's People

Contributors

albert-liyan, anancds, buxue0727, es-chow, ginfungyjf, greatpanc, h-farahat, hangangqiang, hewei73, it-is-a-robot, jiaoy1224, jinxiulang, lanzhineng, luoyuan7, mindspore-bot, mindwilliam, nicholasyanghaoran, nomindcarry, tanghuikang, tinaz26, tommylike, vectorsl, vincent34, wangch1009, wilfchen, wuyanernuo, wyc-ruiker, xiaodathereal, zhouyaqiang37, zongs23


mindspore's Issues

RuntimeError: Thread ID 139752821651200 Unexpected error. This is not the mnist image file: /mnist/train/train-images-idx3-ubyte

My error is as follows:
Traceback (most recent call last):
File "train.py", line 51, in
time_cb = TimeMonitor(data_size=ds_train.get_dataset_size())
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 1395, in get_dataset_size
child_size = self.input[0].get_dataset_size()
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 1039, in get_dataset_size
return self.input[0].get_dataset_size()
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 1720, in get_dataset_size
return self.input[0].get_dataset_size()
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 1720, in get_dataset_size
return self.input[0].get_dataset_size()
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 1720, in get_dataset_size
return self.input[0].get_dataset_size()
[Previous line repeated 2 more times]
File "/home/dalab/jwx699828/anaconda3/lib/python3.7/site-packages/mindspore/dataset/engine/datasets.py", line 2446, in get_dataset_size
num_rows = MnistOp.get_num_rows(self.dataset_dir, num_samples)
RuntimeError: Thread ID 139752821651200 Unexpected error. This is not the mnist image file: ../data/mnist/train/train-images-idx3-ubyte

The dataset exists in my folder, but I cannot run the lenet_mnist example. What should I do?
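A common cause of this error is that the IDX files are still gzipped, or the path points at the wrong file. A minimal sketch (my own check, not an official fix) that verifies the file's magic number and then builds the dataset; paths are examples:

import os
import struct

import mindspore.dataset as ds

mnist_dir = "../data/mnist/train"  # must contain train-images-idx3-ubyte etc.

# The raw IDX image file starts with the big-endian magic number 2051;
# a gzipped file starts with bytes 0x1f 0x8b and must be unpacked first.
with open(os.path.join(mnist_dir, "train-images-idx3-ubyte"), "rb") as f:
    magic, = struct.unpack(">i", f.read(4))
print("magic:", magic, "(expected 2051)")

dataset = ds.MnistDataset(dataset_dir=mnist_dir)
print("rows:", dataset.get_dataset_size())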

implementation about gpu pool kernel

I am studying the \mindspore\ccsrc\kernel\gpu\nn\pooling_gpu_kernel.h code and found that when pad_mode_ is kSamePadModeUpperCase, a pad operation needs to be performed before cudnnPoolingForward. Why not call cudnnPoolingForward directly by setting pooling_descriptor_ with the pad parameters?

[screenshot: pooling_gpu_kernel.h]

Is there a reason for this?

implement union-by-rank of disjoint-set

Background

  • Describe the status of the problem you wish to solve
  • Attach the relevant issue if any

Introduction

  • Describe the general solution, design and/or pseudo-code (a sketch follows below)

Trial

No. Task Description Related Issue(URL)
1
2
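A minimal Python sketch of the proposed structure, union by rank with path compression (a generic textbook version, not tied to any particular MindSpore module):

class DisjointSet:
    """Disjoint-set (union-find) with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: make nodes on the search path point closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True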

Support yolov3 on gpu

Background

Do you have a schedule to support yolov3 on GPU? I would like to train a custom yolov3 object detector (v4 would be better) on an NVIDIA GPU and run inference on Ascend 310 (could we export a yolov3 model to GEIR format?). Thanks.

resnet50 training error on CPU using the official example

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

  • MindSpore version (source or binary): 0.1
  • Python version (e.g., Python 3.7.5): Python 3.7.5
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • GCC/Compiler version (if compiled from source): gcc 7.5.0

Describe the current behavior

I tried to execute example/resnet50_cifar10/training.py, and the error log shows "BiasAddGrad input y backprop, dim should >= 2, while 1". It seems the error occurs while constructing the graph.

Describe the expected behavior

Training runs without errors.

Steps to reproduce the issue

  1. download cifar10-binary
  2. export DEVICE_NUM=1 in bash
  3. export RANK_ID=1 in bash
  4. change the default dataset path to my path
  5. set device_target='CPU'
  6. set enable_loop_sink=False
  7. pass dataset_sink_mode=False to model.train (steps 5-7 are combined in the sketch below)
  8. python ./train.py
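For reference, a minimal sketch of the configuration that steps 5-7 describe (0.1-era API as I recall it; enable_loop_sink moved or disappeared in later releases):

import mindspore.context as context

context.set_context(mode=context.GRAPH_MODE,
                    device_target="CPU",     # step 5
                    enable_loop_sink=False)  # step 6

# Step 7: pass dataset_sink_mode=False to Model.train, e.g.
# model.train(epoch_size, dataset, callbacks=cb, dataset_sink_mode=False)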

Related log / screenshot

[ERROR] ME(32429,python):2020-04-03-10:53:16.561.678 [mindspore/ccsrc/operator/prim_nn.cc:258] InferImplBiasAddGrad] BiasAddGrad input y backprop, dim should >= 2, while 1.
Traceback (most recent call last):
File "train.py", line 99, in
model.train(epoch_size, dataset, callbacks=cb,dataset_sink_mode=False)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/train/model.py", line 387, in train
dataset_sink_mode=dataset_sink_mode)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/train/model.py", line 230, in _train
self._train_process(epoch, train_dataset, list_callback, cb_params)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/train/model.py", line 324, in _train_process
outputs = self._train_network(*next_element)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/nn/cell.py", line 141, in call
out = self.compile_and_run(*inputs)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/nn/cell.py", line 292, in compile_and_run
_, compile_flag = _executor.compile(self, *inputs, phase=self.phase)
File "/home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/common/api.py", line 363, in compile
result = self._executor.compile(obj, args_list, phase, use_vm)
RuntimeError: mindspore/ccsrc/operator/prim_nn.cc:258 InferImplBiasAddGrad] BiasAddGrad input y backprop, dim should >= 2, while 1.
The function call stack:
0 In file /home/jianliu/anaconda3/envs/mindspore_env/lib/python3.7/site-packages/mindspore/nn/wrap/cell_wrapper.py(64)
def construct(self, data, label):

Special notes for this issue

SummaryRecord data not found (MindSpore 0.3-alpha)

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

  • MindSpore version (source or binary): 0.3-alpha, source
  • Python version (e.g., Python 3.7.5): Python 3.7.7
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux 18.07
  • GCC/Compiler version (if compiled from source): 7.5
  • MindInsight version (source or binary): 0.2-alpha

Describe the current behaviour

Trying to reproduce the MindInsight tutorial, using SummaryRecord with ScalarSummary etc., but I am getting a "data not found" error on summary_record.

Describe the expected behaviour

Steps to reproduce the issue

  1. https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/visualization_tutorials.html

Related log / screenshot

[screenshots of the MindInsight error attached]

Special notes for this issue

The following is my Jupyter Notebook used for trying MindInsight. https://github.com/MicaTeo/mindSpore/blob/master/MindInsight.ipynb
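For comparison, a minimal SummaryRecord sketch along the lines of the tutorial (hedged: this is the 0.3-era API from memory, and method names may differ slightly between versions):

from mindspore.train.summary import SummaryRecord

summary_record = SummaryRecord(log_dir="./summary_dir")
try:
    for step in range(1, 11):
        # ... run one training step; ScalarSummary/ImageSummary ops placed in
        # the network's construct() produce the data that record() flushes ...
        summary_record.record(step)
finally:
    summary_record.close()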

BatchNorm2D throw exception

Environment

os : win10 64bits
mindspore version : mindspore-0.5.0-cp37-cp37m-win_amd64.whl
python : 3.7.5(anaconda)
CPU : True

Describe the current behavior

LeNet does not work after applying BatchNorm2d.

Describe the expected behavior

LeNet works after applying BatchNorm2d.

Steps to reproduce the issue

  1. Copy the code from the pastebin link into main.py (link elided; a hypothetical minimal block follows this list)
  2. Create a folder named data
  3. Create two folders (any names) and put some images (no fewer than 32) inside each of them
  4. python main.py
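Since the pastebin code is not reproduced here, a hypothetical minimal block of the kind involved (my own sketch, not the reporter's exact code):

import mindspore.nn as nn

class ConvBlock(nn.Cell):
    """Conv2d followed by BatchNorm2d: the combination that reportedly fails."""
    def __init__(self):
        super(ConvBlock, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5, pad_mode='valid')
        self.bn = nn.BatchNorm2d(6)
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.bn(self.conv1(x)))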

Related log / screenshot

Traceback (most recent call last):
  File "main.py", line 113, in <module>
    model.train(args.epoch, dataset_train, callbacks=[ckpoint_cb, LossMonitor()], dataset_sink_mode=False)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\train\model.py", line 535, in train
    dataset_sink_mode=dataset_sink_mode)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\train\model.py", line 357, in _train
    self._train_process(epoch, train_dataset, list_callback, cb_params)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\train\model.py", line 468, in _train_process
    outputs = self._train_network(*next_element)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\nn\cell.py", line 212, in __call__
    out = self.compile_and_run(*inputs)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\nn\cell.py", line 411, in compile_and_run
    _executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)
  File "C:\Users\yyyy\Anaconda3\envs\mindspore\lib\site-packages\mindspore\common\api.py", line 417, in compile
    result = self._executor.compile(obj, args_list, phase, use_vm)
RuntimeError: mindspore\ccsrc\session\anf_runtime_algorithm.cc:555 GetOutputDeviceDataType] Node [kernel_graph_0:equivdout{[0]: ValueNode<Primitive> FusedBatchNorm, [1]: equivdout, [2]: conv1BatchNorm.gamma, [3]: conv1BatchNorm.beta, [4]: conv1BatchNorm.moving_mean, [5]: conv1BatchNorm.moving_variance}] has a invalid dtype

Special notes for this issue

It works well without the BatchNorm layer. Not sure if this issue is Windows-only; I will test it on Ubuntu 16.04.

how to quantize a fp32 model

Are there any tools for quantization in MindSpore? Could you point me to docs on MindSpore quantization if they exist?

How should I prepare the cifar10 dataset to run the resnet50_cifar10 example?

I launched a container using the image mindspore/mindspore-gpu:0.3.0-alpha and attempted to run the resnet50_cifar10 example.

According to the tutorial, the dataset should be organized as follows.

.  
├── cifar-10-batches-bin  # train dataset
└── cifar-10-verify-bin   # infer dataset

Please specify what data the folders cifar-10-batches-bin and cifar-10-verify-bin should contain.

I tried putting the MindRecord files and the data_batch_* files from cifar-10-batches-py into the cifar-10-batches-bin folder, respectively. In either case, the training process failed with the error message RuntimeError: Thread ID 140285735020288 Unexpected error. There is no valid data matching the dataset API Cifar10Dataset.Please check file path or dataset API validation first.
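For reference, my understanding is that both folders should contain the binary version of CIFAR-10 (from cifar-10-binary.tar.gz), not the python pickles and not MindRecord files; a sketch:

import mindspore.dataset as ds

# cifar-10-batches-bin should hold the binary training files:
#   data_batch_1.bin ... data_batch_5.bin (plus batches.meta.txt)
# cifar-10-verify-bin should hold the binary evaluation file:
#   test_batch.bin
train_ds = ds.Cifar10Dataset(dataset_dir="./cifar-10-batches-bin")
eval_ds = ds.Cifar10Dataset(dataset_dir="./cifar-10-verify-bin")
print(train_ds.get_dataset_size(), eval_ds.get_dataset_size())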

[Question] Does MindSpore support multi-GPU with auto-parallel strategy?

I built the source in a docker environment based on the dockerfile (docker/mindspore-gpu/devel/Dockerfile) and tried some tests under mindspore/tests/ut/python/parallel. I modified the tests by adding the two lines below:

context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
init('nccl')

I used the command below to build the source:

bash build.sh -e gpu -M on -z
pip install build/package/mindspore_gpu-0.1.0-cp37-cp37m-linux_x86_64.whl

I checked the folder where the package was installed; libgpu_collective.so, which failed to load during NCCL initialization, is there.

The tests failed with the error messages below. Is there any guide to running MindSpore with multi-GPU and different parallel modes?

Thanks

################## Error Message ####################
elif backend_name == "nccl":

      init_gpu_collective()

E RuntimeError: mindspore/ccsrc/device/gpu/distribution/collective_init.cc:35 InitCollective] Loading libgpu_collective.so failed. Many reasons could cause this:
E 1.libgpu_collective.so is not installed.
E 2.nccl is not installed or found.
E 3.mpi is not installed or found

../../../../mindspore/communication/management.py:69: RuntimeError
-------------------------------------------------------------------------------------- Captured stderr call --------------------------------------------------------------------------------------
[ERROR] ME(102,python):2020-04-19-14:19:19.404.520 [mindspore/ccsrc/device/gpu/distribution/collective_init.cc:35] InitCollective] Loading libgpu_collective.so failed. Many reasons could cause this:
1.libgpu_collective.so is not installed.
2.nccl is not installed or found.
3.mpi is not installed or found
==================================================================================== short test summary info =====================================================================================
FAILED test_matmul_tensor.py::test_two_matmul - RuntimeError: mindspore/ccsrc/device/gpu/distribution/collective_init.cc:35 InitCollective] Loading libgpu_collective.so failed. Many reasons c...
======================================================================================= 1 failed in 0.96s ========================================================================================
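For what it is worth, a sketch of the usual multi-GPU launch (assuming OpenMPI is installed and the package was built with -M on; the parallel-mode strings follow the docs of that era):

# train_parallel.py -- launch with: mpirun -n 2 python train_parallel.py
from mindspore import context
from mindspore.communication.management import init

context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
init('nccl')  # needs libgpu_collective.so plus NCCL and MPI on the library path
context.set_auto_parallel_context(parallel_mode="auto_parallel")
# ... build the network and call Model.train as usual ...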

x86 predict build error

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

  • MindSpore version (source or binary): 0.1
  • Python version (e.g., Python 3.7.5): Python 3.7.5
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • GCC/Compiler version (if compiled from source): gcc 7.5

Describe the current behavior

The build fails with the following errors:
CMake Error at src/CMakeLists.txt:56 (add_dependencies):
The dependency target "securec" of target "mspredict" does not exist.
CMake Error at benchmark/CMakeLists.txt:34 (add_dependencies):
The dependency target "securec" of target "benchmark" does not exist.
CMake Error at test/CMakeLists.txt:41 (add_dependencies):
The dependency target "securec" of target "ms-test" does not exist.
CMake Error at module/tvm_kernel/lite/CMakeLists.txt:140 (add_dependencies):
The dependency target "securec" of target "tvm_kernel" does not exist.

Describe the expected behavior

Steps to reproduce the issue

  1. cd mindspore
  2. ./build.sh -I x86_64 -j4

Related log / screenshot

Special notes for this issue

Please provide an orthogonal initializer

Background

Please provide an orthogonal initializer, designed to mitigate the vanishing and exploding gradient problems in deep networks.
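Until such an initializer lands, a workaround sketch that builds a (semi-)orthogonal weight matrix with NumPy's QR decomposition and feeds it in as a Tensor (my own sketch, not a MindSpore API):

import numpy as np

from mindspore import Tensor
import mindspore.nn as nn

def orthogonal(shape, gain=1.0):
    """Return a (semi-)orthogonal float32 matrix of the given 2-D shape."""
    a = np.random.normal(0.0, 1.0, shape).astype(np.float32)
    q, r = np.linalg.qr(a if shape[0] >= shape[1] else a.T)
    q = q * np.sign(np.diag(r))  # fix the sign ambiguity of the decomposition
    if shape[0] < shape[1]:
        q = q.T
    return gain * q[:shape[0], :shape[1]]

# weight_init accepts a Tensor of shape (out_channels, in_channels).
dense = nn.Dense(128, 64, weight_init=Tensor(orthogonal((64, 128))))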

Where is 3D convolution?

I took a look at the documentation and didn't find any related classes for 3D operations. Are they not available yet?

MindSpore build with docker image failed

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

  • MindSpore version (source or binary): source
  • Python version (e.g., Python 3.7.5): Python
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): docker image Ubuntu 18.04.4, host os Ubuntu 16.04.5
  • GCC/Compiler version (if compiled from source): 7.5.0

Describe the current behavior

Building MindSpore with the docker image mindspore/mindspore-cpu:0.1.0-alpha failed.

root@mindspore:/mindspore# ./build.sh -e cpu -j 4
...
...
-- Configuring done
-- Generating done
-- Build files have been written to: /mindspore/build/mindspore/_deps/sqlite-subbuild
[100%] Built target sqlite-populate
sqlite_SOURCE_DIR : /mindspore/build/mindspore/_deps/sqlite-src
patching /mindspore/build/mindspore/_deps/sqlite-src -p1 < /mindspore/third_party/patch/sqlite/sqlite.patch001
patching file manifest
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
5 out of 5 hunks ignored -- saving rejects to file manifest.rej
patching file manifest.uuid
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file manifest.uuid.rej
patching file src/expr.c
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file src/expr.c.rej
patching file src/sqliteInt.h
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file src/sqliteInt.h.rej
patching file src/whereexpr.c
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
3 out of 3 hunks ignored -- saving rejects to file src/whereexpr.c.rej
CMake Error at cmake/utils.cmake:221 (message):
  Failed patch: /mindspore/third_party/patch/sqlite/sqlite.patch001
Call Stack (most recent call first):
  cmake/external_libs/sqlite.cmake:7 (mindspore_add_pkg)
  cmake/mind_expression.cmake:54 (include)
  CMakeLists.txt:17 (include)


-- Configuring incomplete, errors occurred!
See also "/mindspore/build/mindspore/CMakeFiles/CMakeOutput.log".
See also "/mindspore/build/mindspore/CMakeFiles/CMakeError.log".
root@mindspore:/mindspore# 

Describe the expected behavior

Being able to build mindspore with the docker image

Steps to reproduce the issue

  1. Run command docker run -it -v /root/mindspore:/mindspore --network=host mindspore/mindspore-cpu:0.1.0-alpha /bin/bash to start the build container
  2. Run command cd mindspore; ./build.sh -e cpu -j4 to start the build process

Related log / screenshot

Special notes for this issue

Seeking support for Concat() operation in mindspore CPU

Device: CPU
OS: Ubuntu 18.04.4 LTS
Mindspore Version: 0.1.0-alpha
Installation: Installed with pip from the wheel - https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl

Hello. I've just started using MindSpore, and I really like it so far. However, I ran into a problem: when I attempt to use the Concat() operation from mindspore.ops.operations.Concat(), it raises a NotImplementedError. This leads me to think that Concat() is not yet included in the CPU version of MindSpore? I'd like to know if Concat() will be available for CPU soon. Or, if it is included in the GPU release, I may switch to the CUDA installation.

Thanks in advance!

gpu softmax kernel problem

I am studying the softmax GPU kernel and have a question about this code:

[screenshot: softmax_gpu_kernel.h]

The mode_ variable is set to CUDNN_SOFTMAX_MODE_INSTANCE once and is never changed. When axis_ is 1 or 0, shouldn't mode_ equal CUDNN_SOFTMAX_MODE_CHANNEL?

[screenshot: softmax_gpu_kernel.h, axis handling]

Load onnx model by mindspore

Is it possible to load an ONNX model with MindSpore? PyTorch has many interesting projects and models (e.g., yolov4 and yolov5); it would be great if we could reuse them rather than rewriting the code and training the models from scratch (only MindSpore is able to leverage the power of Ascend 310).

A similar issue is #23.

[dataset] discussion: how to improve the performance of GeneratorDataset

Background

  • Like DataLoader in other frameworks, MindSpore provides GeneratorDataset to support a flexible way of loading data, but the implementation approach and the performance seem different

Trial

  • What are your opinions on this?
  • How does DataLoader achieve good performance?
  • How can GeneratorDataset be improved? (A baseline sketch follows this list.)
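As a baseline for the discussion, a minimal GeneratorDataset sketch (generic usage; the per-version knobs such as num_parallel_workers on map are where most of the tuning happens):

import numpy as np

import mindspore.dataset as ds

def generator():
    for i in range(1000):
        # One sample per yield: (data, label)
        yield np.random.randn(3, 32, 32).astype(np.float32), np.array(i % 10)

dataset = ds.GeneratorDataset(generator, column_names=["data", "label"])
dataset = dataset.shuffle(buffer_size=64).batch(32)
for batch in dataset.create_dict_iterator():
    pass  # training step here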

An example of transfer learning

Trying to perform transfer learning with MobileNet, but I cannot find an example showing how to freeze layers or set a per-layer learning rate. Is there any example of this? Thanks. (A sketch of the usual approaches follows.)
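Lacking an official example, a sketch of the two usual techniques (a hypothetical two-part model stands in for MobileNet; parameter groups require a version whose optimizers support them):

import mindspore.nn as nn

class Net(nn.Cell):
    """Hypothetical backbone + head standing in for MobileNet."""
    def __init__(self):
        super(Net, self).__init__()
        self.backbone = nn.Dense(32, 16)
        self.head = nn.Dense(16, 10)

    def construct(self, x):
        return self.head(self.backbone(x))

net = Net()

# Option A: freeze the backbone; frozen parameters drop out of trainable_params().
for param in net.backbone.get_parameters():
    param.requires_grad = False
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# Option B (instead of freezing): per-group learning rates, if supported.
# group_params = [{'params': net.backbone.trainable_params(), 'lr': 0.001},
#                 {'params': net.head.trainable_params(), 'lr': 0.01}]
# optimizer = nn.Momentum(group_params, learning_rate=0.01, momentum=0.9)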

About Mindspore IR

I have read the MindSpore source code, and I want to know how the MindSpore IR handles control flow (such as conditional and loop statements) in the graph rewriter. Do you have more docs about this?

Compile error not friendly

Environment

Hardware Environment:

/device gpu

Software Environment:

  • MindSpore 0.5.0 (from binary)
  • Python 3.7.5
  • Linux Ubuntu 18.04.4 LTS

Bug report

I define several ms_function-decorated functions and want to get the gradient with respect to the input, as below:

import numpy as np

from mindspore import ms_function
from mindspore import context
from mindspore import Tensor
import mindspore.ops.composite as C
import mindspore.ops.functional as F
import mindspore.ops.operations as P
import mindspore

# Runtime configs
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')

# Functions
reduce_sum = P.ReduceSum()


@ms_function
def f_1_can_not_run(x):
    y = reduce_sum(x * x, -1)
    y = F.sqrt(y)
    return y


@ms_function
def f_2_can_run(x):
    y = reduce_sum(x * x, -1)
    return y


@ms_function
def f_3_can_run(x):
    y = F.sqrt(x)
    return y


x = Tensor(np.random.normal(size=(32, 2)), dtype=mindspore.float32)
print(C.grad(f_1_can_not_run)(x))  # compile error

x = Tensor(np.random.normal(size=(32, 2)), dtype=mindspore.float32)
print(C.grad(f_2_can_run)(x))

x = Tensor(np.random.normal(size=(32,)), dtype=mindspore.float32)
print(C.grad(f_3_can_run)(x))

But the function f_1_can_not_run fails with the compile error below:

RuntimeError: mindspore/ccsrc/kernel/akg/gpu/akg_gpu_kernel_build.cc:34 AkgGpuKernelBuild] : The pointer[kernel_pack] is null.

After deleting the print(C.grad(f_1_can_not_run)(x)) line, f_2_can_run and f_3_can_run behave as expected.

Is something wrong with my usage, or is it a bug?

module 'mindspore.dataset.transforms.c_transforms' has no attribute 'RandomCrop'

Running the code from the tutorial at https://www.mindspore.cn/tutorial/en/master/index.html throws the error below.

Environment

Linux vultr.guest 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
python==3.7.5, mindspore==0.2.0-alpha

Hardware Environment(Ascend/GPU/CPU):

/device cpu

-cpu
          description: CPU
          product: Intel Core Processor (Broadwell, no TSX, IBRS)
          vendor: Intel Corp.
          physical id: 400
          bus info: cpu@0
          version: pc-i440fx-4.1
          slot: CPU 0
          size: 2GHz
          capacity: 2GHz
          width: 64 bits

Related log / screenshot

import mindspore.dataset.transforms.c_transforms as C

Traceback (most recent call last):
  File "cifar-10-example.py", line 16, in <module>
    random_crop_op = C.RandomCrop((32, 32), (4, 4, 4, 4)) # padding_mode default CONSTANT
AttributeError: module 'mindspore.dataset.transforms.c_transforms' has no attribute 'RandomCrop'

Special notes for this issue

Maybe it is still in the alpha phase?
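If I recall the 0.2-alpha package layout correctly, RandomCrop and the other vision transforms live under a vision submodule rather than directly under transforms, so the import would be:

import mindspore.dataset.transforms.vision.c_transforms as C_vision

random_crop_op = C_vision.RandomCrop((32, 32), (4, 4, 4, 4))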

Not able to set up GRAPH mode for CPU target; basic operations fail as a result

Environment

device cpu

Software Environment:

  • MindSpore version 0.5, installed through pip
  • Python 3.7.7
  • Windows 10

Steps to reproduce

I am trying to verify the installation of MindSpore by running the basic script provided in the installation instructions:

import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))

print(F.tensor_add(x, y))

and I get the following error, although I explicitly set the mode to GRAPH_MODE:

RuntimeError: mindspore\ccsrc\pynative\pynative_execute.cc:451 RunOpInMs] ArgumentError Device target [CPU] is not supported in Pynative mode

[dataset] how to implement a simple kind of tokenizer: simple_space_split

Background

  • In MindSpore, the dataset module has a text component for doing data augmentation in some NLP tasks
  • In this process, we need a tokenizer to deal with the original text data; tokenization is the first step

Introduction

Do you have an implemented approach or example code? (A sketch follows.)
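A minimal sketch of simple_space_split implemented as a Python function applied through dataset.map (my own sketch, assuming a version whose datasets can carry string/bytes arrays):

import numpy as np

def simple_space_split(text):
    """Tokenize one text sample by whitespace; samples arrive as NumPy arrays."""
    s = text.item() if hasattr(text, "item") else text
    if isinstance(s, bytes):
        s = s.decode("utf-8")
    return np.array(s.split())

# Applied to an existing text dataset, e.g.:
# dataset = dataset.map(input_columns=["text"], operations=simple_space_split)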

Invite you to share the hyper-parameter setting and network fine tuning experience with us

Task Description

For specific open source datasets and networks, may we invite you to share some hyper-parameter setting experience and network fine-tuning tips?

Task Goal

Hi guys, we all know that for some classic networks, you may not be able to access the details of the hyper-parameters from the original paper and open source code. If you have better experience in hyper-parameter config and get better results than the original paper, you are most welcome to share some experience with us, which could further help other developers as well.

Example

Here is an example.
Network: MASS from Microsoft

  1. Pre-training dataset: News Crawls 2007-2017
  2. Fine-tuning dataset for Text Summarization: Gigaword corpus
  3. Fine-tuning dataset for Conversational Response Generation: Cornell movie dialog corpus

new synergy Jina <> Mindspore

Task Description

I'd like to build a demo using Mindspore as the DL infra in Jina. http://github.com/jina-ai/jina

I have already created a mirror issue at jina-ai/jina#431.

Task Goal

Implement two encoders, one in NLP, one in CV with Mindspore and add them to the executor family of Jina.

It would be great if anyone could provide some pretrained models; this would greatly speed up the dev process and the onboarding of new developers.

Sub Task

  1. CV model
  2. NLP model
  3. Add a tutorial to examples

GLOG_v level does not work correctly

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

  • MindSpore version (source or binary): 0.1
  • Python version (e.g., Python 3.7.5): 3.7.4
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04.5 LTS
  • GCC/Compiler version (if compiled from source): gcc 5.4

Describe the current behavior

When I do

export GLOG_v=2

MS_LOG(INFO) level information does not show. I have to set GLOG_v to exactly 1 to see INFO-level messages.

Describe the expected behavior

INFO-level messages should show when GLOG_v is 1 or higher.

Steps to reproduce the issue

  1. bash build.sh -e cpu -z -j32
  2. chmod +x build/package/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl
  3. pip install build/package/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl
  4. python pynative1_test.py

Related log / screenshot

[screenshot of the log output]

Special notes for this issue

import mindspore errors with ModuleNotFoundError

Environment

Hardware Environment(Ascend/GPU/CPU):

/device cpu

Software Environment:

Describe the current behavior

root@mindspore:~/mindspore# python3.7 --version
Python 3.7.5
root@mindspore:~/mindspore# python3.7 -c 'import mindspore'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/root/mindspore/mindspore/__init__.py", line 17, in <module>
    from . import common, train
  File "/root/mindspore/mindspore/common/__init__.py", line 16, in <module>
    from . import dtype
  File "/root/mindspore/mindspore/common/dtype.py", line 20, in <module>
    from .._c_expression import typing, EnvInstance_
ModuleNotFoundError: No module named 'mindspore._c_expression'
root@mindspore:~/mindspore#

Describe the expected behavior

Being able to import mindspore correctly.

Steps to reproduce the issue

  1. Build Python 3.7.5 manually
  2. Run command pip3.7 install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl to install mindspore
  3. python3.7 -c 'import mindspore'

Related log / screenshot

Special notes for this issue

The procedure of building Python 3.7.5:

wget https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz
tar zxvf Python-3.7.5.tgz
cd Python-3.7.5
./configure --enable-optimizations --prefix=/usr/local/python3.7 --enable-shared
make -j4
make install

cp /usr/local/python3.7/lib/libpython3.7m.so.1.0 /usr/lib/x86_64-linux-gnu/
cd /usr/bin
ln -s /usr/local/python3.7/bin/python3.7 python3.7
ln -s /usr/local/python3.7/bin/pip3.7 pip3.7

(MindSpore GPU CUDA 10.1) Concat throwing error 'Input size is mismatching'

Hello. I uninstalled mindspore-cpu today and installed the GPU version with CUDA 10.1, since it seemed that some operations weren't supported on CPU yet (I opened and closed this issue a few days back: #24). After installing, I verified my installation with the small code sample described at https://www.mindspore.cn/install/en#installation-verification.

[screenshot: installation verification output]

The installation seems to be alright.

I then attempted the old Concat operation to see if it works now. I used the exact example from the documentation for Concat, but I ran into an error, 'Input size is mismatching', even though the inputs are identical in shape.

[screenshot: Concat error]

Here's the entire log in text:

[ERROR] ME(4847,python):2020-04-12-00:29:55.991.900 [mindspore/ccsrc/kernel/gpu/gpu_kernel_factory.cc:48] CheckIOParam] op[Concat] Input size is mismatching!
Traceback (most recent call last):
File "testconc.py", line 12, in
output = op((data1, data2))
File "/home/mashrur/anaconda3/envs/mindenv/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 141, in call
output = _run_op(self, self.name, args)
File "/home/mashrur/anaconda3/envs/mindenv/lib/python3.7/site-packages/mindspore/common/api.py", line 68, in wrapper
results = fn(*arg, **kwargs)
File "/home/mashrur/anaconda3/envs/mindenv/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 332, in _run_op
output = real_run_op(obj, op_name, tuple(op_inputs), tuple(op_mask))
RuntimeError: mindspore/ccsrc/kernel/gpu/gpu_kernel_factory.cc:48 CheckIOParam] op[Concat] Input size is mismatching!

Any help would be appreciated. Thanks in advance.
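For reference, the documentation-style Concat example being referred to, reconstructed from memory (treat as approximate):

import numpy as np

from mindspore import Tensor
import mindspore.ops.operations as P

data1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
data2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
op = P.Concat()  # default axis=0
output = op((data1, data2))
print(output)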

RuntimeError: Unsupported op [Div]

While trying to define a custom loss, I got an unsupported-operator error. I wrote a test snippet as follows:

import mindspore as ms
import mindspore.ops.operations as P
from mindspore import context

context.set_context(mode=context.PYNATIVE_MODE, device_target='GPU', enable_mem_reuse=False)
a = ms.Tensor(4)
b = ms.Tensor(2)
c = P.Div()(a, b)

It raises the following error:

[ERROR] ME(3296,python3):2020-04-07-16:47:45.756.448 [mindspore/ccsrc/device/gpu/kernel_info_setter.cc:102] SelectAkgKernel] Not find op[Div] in akg
[ERROR] ME(3296,python3):2020-04-07-16:47:45.756.524 [mindspore/ccsrc/device/gpu/kernel_info_setter.cc:80] SupportedTypeList] Unsupported op [Div]
Traceback (most recent call last):
  File "<input>", line 3, in <module>
  File "/home/liuky/HDD_1/soft/anaconda3/envs/py375/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 141, in __call__
    output = _run_op(self, self.name, args)
  File "/home/liuky/HDD_1/soft/anaconda3/envs/py375/lib/python3.7/site-packages/mindspore/common/api.py", line 68, in wrapper
    results = fn(*arg, **kwargs)
  File "/home/liuky/HDD_1/soft/anaconda3/envs/py375/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 332, in _run_op
    output = real_run_op(obj, op_name, tuple(op_inputs), tuple(op_mask))
RuntimeError: mindspore/ccsrc/device/gpu/kernel_info_setter.cc:80 SupportedTypeList] Unsupported op [Div]

What is the problem here?
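A hedged workaround sketch: use float32 tensors and the floating-point kernel RealDiv, assuming (as I believe was the case in this era) that RealDiv has a GPU kernel registered while Div does not:

import mindspore as ms
import mindspore.ops.operations as P
from mindspore import context

context.set_context(mode=context.PYNATIVE_MODE, device_target='GPU')
a = ms.Tensor(4.0, ms.float32)  # float32 instead of an int scalar
b = ms.Tensor(2.0, ms.float32)
c = P.RealDiv()(a, b)           # RealDiv instead of Div
print(c)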
