nervanasystems / ngraph
nGraph has moved to OpenVINO
Home Page: https://www.ngraph.ai/
License: Apache License 2.0
Building LLVM and Clang, with the right options, can be somewhat involved.
Create a convenience script for this, even if that script may eventually be supplanted by something in CMake.
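As a sketch of what such a convenience script might look like (the paths and CMake options below are assumptions, not the project's actual settings; LLVM_ENABLE_PROJECTS and the -S/-B flags assume a reasonably recent CMake):

```python
# Hypothetical convenience script for configuring and building LLVM + Clang.
# All options here are illustrative assumptions, not the project's real ones.
import subprocess

def llvm_cmake_command(src_dir, build_dir, install_dir):
    """Build the cmake configure invocation for an LLVM+Clang release build."""
    return [
        "cmake", "-S", src_dir, "-B", build_dir,
        "-DCMAKE_BUILD_TYPE=Release",
        "-DCMAKE_INSTALL_PREFIX=" + install_dir,
        "-DLLVM_ENABLE_PROJECTS=clang",  # build Clang alongside LLVM
    ]

def build_llvm(src_dir, build_dir, install_dir):
    """Configure, build, and install LLVM/Clang in one shot."""
    subprocess.check_call(llvm_cmake_command(src_dir, build_dir, install_dir))
    subprocess.check_call(["cmake", "--build", build_dir, "--target", "install"])
```

Keeping the command construction in a separate function makes the script easy to test and to supplant later with a CMake-native equivalent.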
Choose a specific ResNet-50 model implementation for our Q3 work.
This choice is subject to change in the future, for example based on future discussions with the Benchmark team. But it needs to be a reasonable starting point for our development work.
The tangible work-product of this Issue should be a file (perhaps some scripts, perhaps a README, or both) giving the details of this choice.
I am temporarily commenting out the recursion in my next PR.
To provide a cleaner interface for users of the C++ API.
Until the development and test servers are upgraded, our build system needs to work on 10.19.176.53, which runs Ubuntu 14.04. This is not the case right now.
This is a blocker.
List of license-related issues (this list may be extended as needed):
After building the graph, we check the nodes to make sure that their arguments have consistent types, and then propagate the type information to the value so we can check all the nodes.
If the arguments are not consistent, how do we want to get this information to the user? Under framework control, this would almost always be a bug on our part.
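A minimal sketch of the check-then-propagate idea (the class and field names are hypothetical, not the real ngraph types):

```python
# Illustrative sketch: verify argument-type consistency at each node and
# propagate the inferred element type up through the graph. Names assumed.
class Node:
    def __init__(self, op, args, element_type=None):
        self.op = op                      # operation name, e.g. "Add"
        self.args = args                  # argument nodes
        self.element_type = element_type  # known for leaves, inferred otherwise

def propagate_types(node):
    """Recursively infer node.element_type; raise if arguments disagree."""
    if node.element_type is not None:
        return node.element_type
    arg_types = [propagate_types(a) for a in node.args]
    if len(set(arg_types)) != 1:
        raise TypeError(
            "inconsistent argument types for %s: %s" % (node.op, arg_types))
    node.element_type = arg_types[0]
    return node.element_type
```

Under framework control, the TypeError path should essentially never fire, which is why surfacing it well (assertion vs. user-facing diagnostic) is the open question above.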
See https://sarcasm.github.io/notes/dev/clang-tidy.html
Note that the readability-identifier-naming check may require a version of Clang newer than 4.0.1.
For this Issue, determine the practicality of using this check now, or in the future, in our development environments and during CI.
Create a script, make target, or some other mechanism that lets us (and a CI system) easily verify that all of the code matches our style requirements.
Until the Python NGraph code is mostly or entirely ported to C++, we're keeping the C++ identifiers as close as we can to their Python counterparts, to clarify the mapping.
Once the port is far enough along, rename things to be more idiomatic C++:
Choose some C++ naming convention for functions, variable names, etc. and rename everything accordingly.
Consider introducing one or more C++ namespaces.
This will create the first draft of the plugin code that can then be filled in with more actual code.
Python pattern matching says these things are different:
ng.multiply(a, ng.divide(b, c))
ng.divide(ng.multiply(a, b), c)
It would be awesome if the final ngraph-cpp solution treated those (and similar mathematical identities) as equal. It's hard to ensure that the mxnet graph matches the operation order of the neon graph.
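One standard way to get this (purely illustrative, not ngraph code) is to rewrite multiply/divide trees into a normal form, e.g. a sorted numerator/denominator pair, so that both expressions above reduce to the same value before matching:

```python
# Illustrative: both a*(b/c) and (a*b)/c normalize to the same
# (numerator, denominator) pair, so a matcher comparing normal forms
# treats them as equal. Expressions are tuples: ("leaf", name) or
# (op, lhs, rhs) with op in {"mul", "div"}.
def normalize(expr):
    """Flatten nested multiply/divide into sorted numerator/denominator lists."""
    op = expr[0]
    if op == "leaf":
        return ([expr[1]], [])
    lhs_num, lhs_den = normalize(expr[1])
    rhs_num, rhs_den = normalize(expr[2])
    if op == "mul":
        num, den = lhs_num + rhs_num, lhs_den + rhs_den
    else:  # "div": the divisor's numerator joins our denominator, and vice versa
        num, den = lhs_num + rhs_den, lhs_den + rhs_num
    return (sorted(num), sorted(den))

a, b, c = ("leaf", "a"), ("leaf", "b"), ("leaf", "c")
e1 = ("mul", a, ("div", b, c))   # a * (b / c)
e2 = ("div", ("mul", a, b), c)   # (a * b) / c
```

Note that this kind of canonicalization is only safe for identities that hold in floating point for the cases we care about; that caveat applies to any "mathematical identities are equal" matcher.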
According to this Slack conversation, our DS2 benchmarks can use this script for our baseline, and as the basis for our XLA-NGraph++ version.
@yxlao and I decided to let that DS2 script stabilize a bit before we try to work with it.
To help prepare for work with the NGraph-C++ API's design regarding pattern-matching, study how pattern-matching is used and implemented in Python NGraph.
Try to understand (at least) the following details regarding the Python code:
What's the relationship between the types and data structures used to represent the DFG to be compiled vs. those used to represent a pattern-matcher?
What patterns are matched by current transformers.
What patterns do we expect to need to match in future transformers.
Here are resources to learn those things:
From @diyessi : https://en.wikipedia.org/wiki/Unification_(computer_science)
From this Slack conversation:
A good person to send questions to is @nhasabni .
So far we have been doing m-input, 1-output DAGs. We will also need to match m-input, n-output DAGs in the future.
https://github.com/NervanaSystems/private-ngraph/blob/master/ngraph/transformers/passes/cpufusion.py
There is a big comment in front of GraphRewritePass
in passes.py
https://wiki.ith.intel.com/display/intelnervana/Graph+Rewrite+Pass
Document and/or script a mechanism to allow repeatable benchmarking of the TF DO software on MNIST, to provide baseline data for the 2017 Q3 NGraph++ effort.
Make sure the op:: factory names and the Op class names match those in ngraph, except for the ngraph op class names that are incorrect (the ones that do not end in Op).
Make Parameter independent of Function.
To make a Function, make the Parameters, set up the graph, and then create the function from the parameters and result. Tests that don't need Function will not need to create one.
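A Python sketch of the proposed construction order (the real API is C++, and these class shapes are assumptions): Parameters are created first and stand alone, the graph is built from them, and only then is a Function created from the parameters and the result.

```python
# Illustrative only: Parameter does not reference any Function, so tests
# that only need nodes never have to construct one.
class Parameter:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, x, y):
        self.args = (x, y)

class Function:
    def __init__(self, result, parameters):
        self.result = result
        self.parameters = list(parameters)

a = Parameter("a")
b = Parameter("b")
graph = Add(a, b)            # Parameters exist independently of any Function
f = Function(graph, [a, b])  # Function comes last, from parameters + result
```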
CMake reserves the target name "test", so change the name of the unit-test app to "check".
There are stubs for Node.description() for use when dumping the graph for debugging purposes. They need to be expanded into something useful.
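For example (a Python sketch of the idea, not the C++ code), a useful description() recursively includes the op name and its arguments' descriptions:

```python
# Hypothetical expansion of a description() stub for graph-dump debugging:
# leaves print their name, interior nodes print op(args...).
class Node:
    def __init__(self, op, args=()):
        self.op = op
        self.args = list(args)

    def description(self):
        if not self.args:
            return self.op
        return "%s(%s)" % (
            self.op, ", ".join(a.description() for a in self.args))
```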
Obtain a dump of XLA activity while processing MNIST.
The goal is to provide enough detail to guide the early development of the XLA <--> NGraph++ API bindings, and of the NGraph++ API itself.
Choose a specific Deep Speech 2 model implementation for our Q3 work.
This choice is subject to change in the future, for example based on future discussions with the Benchmark team. But it needs to be a reasonable starting point for our development work.
The tangible work-product of this Issue should be a file (perhaps some scripts, perhaps a README, or both) giving the details of this choice.
See the README.md file.
Implement XLA plugin C++ stub code that:
#include's a header from libngraph's public includes directory.
Depends on libngraph; this dependency is expressed in TF's build system and is satisfied without error when TF runs.
The goal isn't to fix broken unit tests, but rather to get make check passing so we can get Continuous Integration working.
Note: We may need additional guidance regarding the particular CentOS versions to support.
This task will do the following:
Add an install target to install artifacts (i.e., headers and libngraph.so) to /tmp/ngraph so that graph-tensorflow can link against it for plugin support.
Update the code w.r.t. the include path for the headers in the src/ngraph subdirectory.
Once this task is complete, folks can start to play with the XLA --> graph_plugin --> NGraph++ implementation.
This Issue is a baby-step towards gathering the speed- and accuracy-data characterizing the TF DO code on MNIST.
For this Issue:
Develop a Bash or Python script that simply runs MNIST on the TF-DO code. The script must run successfully on our test machine, assuming that a human has already performed appropriate setup steps.
Have @yxlao verify that the MNIST script is sufficiently similar to the version we'll use with our XLA-based code.
Create documentation describing all human steps needed to run the TF DO script in a clean test environment. (Validation of this step is on a best-effort basis.)
[ ] - Obtain a current copy of the Argon API, including actual files and documentation.
[ ] - Establish a way for Core Ngraph developers to become aware of future changes to the API.
Add debug messages to print the computational graph as it's handed over from the TF framework. Right now the debug messages are just printfs with a specific tag, "[NGAPH-DBG]", so that we can easily replace them with better logging, which is work in progress. This is a temporary hack to help understand the call chain as well as the graph details.
Have a build system that...
The current implementation uses a Call-like object to hold the argument nodes and an Op object that handles the behavior of the call. Builtin operations that have additional non-node arguments, such as broadcast, use a subclass of Call to hold the additional (compile-time) arguments. Argument checking and type propagation for a Call can be delegated to the Op. In addition, the Op is a callable singleton and is used as a factory for its Calls. For pattern matching, Op equality can be used to decide whether two Calls are the same operation.
An alternative is to omit the Op and have every operation correspond to an overloading of Call; in this case, factory functions would be defined to make the calls. Pattern matching would need a way to tell if two instances of Call were instances of the same class. One approach would be to use a pattern compiler to compile the patterns into C++, like Bison.
By using methods on Call to drive argument checking and type propagation, which delegate to Op methods, we can switch from the Ops to subclassing Call with minimal disruption.
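A Python sketch of the current design as described above (illustrative names, not the real code): each Op is a callable singleton that acts as a factory for its Calls, and a matcher can compare the op field by identity.

```python
# Sketch of the Op-as-callable-singleton-factory design. Argument checking
# and type propagation are delegated from Call back to the Op.
class Op:
    def __call__(self, *args):
        return Call(self, args)          # Op acts as a factory for its Calls

    def check_types(self, call):
        pass  # subclasses put argument checking / type propagation here

class Call:
    def __init__(self, op, args):
        self.op = op
        self.args = args
        op.check_types(self)             # delegation point described above

class AddOp(Op):
    pass

add = AddOp()     # the singleton; every Add call site uses this one instance
c1 = add(1, 2)
c2 = add(3, 4)
```

Because every Add-call shares the one AddOp instance, "same operation?" reduces to an identity check on the op field, which is what makes the later switch to subclassing Call low-disruption.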
private-ngraph-cpp
Implement the body of the test that verifies that if node A is an argument of node B then node B is a user of node A.
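The invariant the test should verify, sketched in Python with assumed field names (args and users):

```python
# Illustrative invariant check: if node A is an argument of node B,
# then B must appear in A's users set.
class Node:
    def __init__(self, args=()):
        self.args = list(args)
        self.users = set()
        for a in self.args:
            a.users.add(self)   # registering the user is what the test audits

def check_users_invariant(nodes):
    """Assert the args/users bookkeeping is consistent across all nodes."""
    for b in nodes:
        for a in b.args:
            assert b in a.users, "argument's users set is missing the caller"
```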