Comments (2)
llvmpy changed recently. You have to disable the pass --- we are working to get an actual release of Numba out soon. Apologies for the problems.
-Travis
On Aug 10, 2012, at 12:59 PM, Alex Wiltschko wrote:
In [1]: from numba.decorators import vectorize
In [2]: @vectorize
...: def my_add(x):
...: return x+x
...:
str_to_llvmtype(): str = 'f64'
str_to_llvmtype(): str = 'f64'
{'blocks': {0: },
'blocks_dom': {0: set([0])},
'blocks_in': {0: set()},
'blocks_out': {0: set()},
'blocks_reaching': {0: set([0])},
'blocks_reads': {0: set([0])},
'blocks_writer': {0: {}},
'blocks_writes': {0: set([0])},
'translator': }
op_BINARY_ADD(): , _llvm=, typ='f64')> + , _llvm=, typ='f64')>
resolve_type(): arg1 = , _llvm=, typ='f64')>, arg2 = , _llvm=, typ='f64')>
resolve_type() ==> 'f64'
Warning: Could not create fast version... 'module' object has no attribute 'PASS_DEAD_CODE_ELIMINATION'
Traceback (most recent call last):
File "/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/numba/decorators.py", line 55, in vectorize
t.translate()
File "/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/numba/translate.py", line 935, in translate
fpm.add(lp.PASS_DEAD_CODE_ELIMINATION)
AttributeError: 'module' object has no attribute 'PASS_DEAD_CODE_ELIMINATION'

Note that I had to install minivect by manually cloning from numba/minivect. I don't know if that's causing the problem.
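Until a fixed release lands, the workaround Travis describes ("disable the Pass") amounts to guarding the failing call in numba/translate.py. The helper below is a hypothetical sketch of that guard, not the actual patch that shipped; the names `fpm` and `lp` mirror the traceback above.

```python
# Sketch of the workaround: only add llvmpy's dead-code-elimination
# pass when the installed llvmpy still exposes the constant. The
# helper name and the hasattr guard are illustrative assumptions.

def add_dce_pass(fpm, lp):
    """Add the DCE pass to the function pass manager if available."""
    if hasattr(lp, "PASS_DEAD_CODE_ELIMINATION"):
        fpm.add(lp.PASS_DEAD_CODE_ELIMINATION)
        return True
    # Newer llvmpy removed this constant; skip the pass instead of
    # raising AttributeError as translate.py currently does.
    return False
```

Skipping the pass only loses an optimization; the translated function still works, so this trades a little speed for not crashing at decoration time.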
Sounds good. I will wait for an official release.
On Fri, Aug 10, 2012 at 2:03 PM, Travis E. Oliphant <[email protected]> wrote: