Comments (4)
Hmmm, that's always a tricky question to answer.
You already have your own repos, so that seems like a good sign :-)
Having said that, just to get this out of the way: there is no easy answer, there is no 'port me' button :-) And if there were an easy way to port pytorch in 40-80 hours, I'd probably have already done it :-) For indicative purposes, here are the estimated hours I spent on various OpenCL-related things in the past:
- write DeepCL from scratch: ~5 months * 4.5 weeks per month * 40 hours per week (since I have a full-time job, after that plus sleeping, that only leaves me 40 hours a week for my open-source stuff)
- = 900 hours
- port enough of luatorch to opencl that people started using it: 6-8 weeks * 40 hours per week
- ~= 300 hours
- get enough of cuda-on-cl working so tensorflow can run a basic mlp-type network, with matrix multiplications, training, derivatives, per-element operations, reductions:
- ~8 weeks * 40 hours per week
- ~= 300 hours
- (note that almost all of this effort was in cuda-on-cl itself, not in tensorflow, so it should be mostly re-usable, but still... just to give an idea)
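The arithmetic behind those estimates, as a quick sketch (python just for the sums):

```python
# back-of-the-envelope totals for the estimates above
deepcl     = 5 * 4.5 * 40      # ~5 months * 4.5 weeks/month * 40 h/week
cltorch    = (6 + 8) / 2 * 40  # midpoint of 6-8 weeks * 40 h/week
cuda_on_cl = 8 * 40            # ~8 weeks * 40 h/week
print(deepcl, cltorch, cuda_on_cl)  # → 900.0 280.0 320
```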
If you're still here :-P :
The first thing you'd want to do, if your primary goal is to port pytorch, I reckon, is:
- familiarize yourself with pytorch. Where is the CUDA code? How easy is it to factor the CUDA code out and put some OpenCL code in its place? How would you go about creating an OpenCL context, and so on?
- for luatorch, the answer to factorization was: very easy :-) since cuda was in a separate repository, and optional
- so I just created a new github repo for cltorch, ie http://github.com/hughperkins/cltorch
- ... and just started hacking:
- create an OpenCL context
- port enough stuff across that you can e.g. create a simple 1-d tensor, of length 1, with no way to read/write it for now :-P
- add code to somehow write a value to it
- add code to somehow read a value from it
- ^^^ you probably want to do at least this much for pytorch, simply by hand, to get a feel for how it works
- you might want to look at how cltorch works, probably in parallel with the above. Maybe not in detail, but at least enough to feel you understand how it inserts itself into torch, how it integrates with torch, etc.
- once you've got a feel for how to start adding opencl to pytorch, I think you have a few choices; I'd start by evaluating the options available to you:
- try writing some simple computecpp program, presumably in sycl. compile it, run it. how well does it work? good points? bad points?
- ditto for trisycl
- ditto for cuda-on-cl
- (and you can also consider porting by hand too, to be honest; it's a pretty standard approach)
- by the time you've got this far, you'll probably already have a good idea of what you want to attempt next, and how you will do it :-)
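The bootstrap steps above (create a context, make a length-1 tensor, write a value, read it back) can be sketched in a few lines. This is a minimal sketch using the third-party pyopencl and numpy packages (both assumptions on my part, not part of pytorch), guarded so it degrades gracefully on a machine with no OpenCL stack:

```python
try:
    import numpy as np
    import pyopencl as cl  # assumed third-party package, not part of pytorch
except ImportError:
    np = cl = None

def roundtrip(value=3.0):
    """Create an OpenCL context, a length-1 'tensor' (device buffer),
    write one value into it, and read it back."""
    if cl is None:
        return None                      # pyopencl/numpy not installed
    try:
        if not cl.get_platforms():
            return None                  # no OpenCL platform on this machine
    except Exception:
        return None
    ctx = cl.create_some_context(interactive=False)   # step 1: context
    queue = cl.CommandQueue(ctx)
    host = np.array([value], dtype=np.float32)        # step 2: length-1 tensor
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=host.nbytes)
    cl.enqueue_copy(queue, buf, host)                 # step 3: write a value
    out = np.empty_like(host)
    cl.enqueue_copy(queue, out, buf)                  # step 4: read it back
    queue.finish()
    return float(out[0])

print(roundtrip())  # 3.0 with a working OpenCL stack, None otherwise
```

In cltorch/pytorch proper this would of course be C/C++ against the raw OpenCL API, but the sequence of calls is the same.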
from coriander.
Thanks for your quick response. I think I saw a glimpse of what you mean by trying to port https://github.com/Cysu/cuda-kernel-benchmark ;)
from coriander.
See my comments at pytorch/pytorch#488 (comment). This gives a plausible way to start.
from coriander.
Technically, this issue is done, since I have documented, at a high level, the steps required :-P
from coriander.