- Test name: test_allocation_id_uniqueness (__main__.TestTorchTidyProfiler)
- Platforms for which to skip the test: dynamo
- Disabled by pytorch-bot[bot]
Within ~15 minutes, test_allocation_id_uniqueness (__main__.TestTorchTidyProfiler) will be disabled in PyTorch CI for these platforms: dynamo. Please verify that your test name looks correct, e.g., test_cuda_assert_async (__main__.TestCuda).
To modify the platforms list, include a line in the issue body like the one below. If no platforms list is specified, the default action disables the test for all platforms.
Platforms: case-insensitive, list, of, platforms
We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.
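For example, to keep the disablement scoped to the single platform failing here, the issue body would contain a line like this (illustrative; "dynamo" is the platform named above):

```
Platforms: dynamo
```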
The "recent examples" link didn't show me the error that occurs when it fails, so here it is:
log
2024-01-12T14:29:08.6612009Z ==================================== RERUNS ====================================
2024-01-12T14:29:08.6612340Z _____________ TestTorchTidyProfiler.test_allocation_id_uniqueness ______________
2024-01-12T14:29:08.6612469Z Traceback (most recent call last):
2024-01-12T14:29:08.6612804Z File "profiler/test_profiler.py", line 2302, in test_allocation_id_uniqueness
2024-01-12T14:29:08.6612905Z gc.collect()
2024-01-12T14:29:08.6613537Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 580, in catch_errors
2024-01-12T14:29:08.6613766Z return callback(frame, cache_entry, hooks, frame_state)
2024-01-12T14:29:08.6614428Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 741, in _convert_frame
2024-01-12T14:29:08.6614673Z result = inner_convert(frame, cache_entry, hooks, frame_state)
2024-01-12T14:29:08.6615390Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 384, in _convert_frame_assert
2024-01-12T14:29:08.6615538Z return _compile(
2024-01-12T14:29:08.6616156Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 670, in _compile
2024-01-12T14:29:08.6616389Z raise InternalTorchDynamoError(str(e)).with_traceback(
2024-01-12T14:29:08.6617002Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 643, in _compile
2024-01-12T14:29:08.6617272Z guarded_code = compile_inner(code, one_graph, hooks, transform)
2024-01-12T14:29:08.6617864Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 247, in time_wrapper
2024-01-12T14:29:08.6617979Z r = func(*args, **kwargs)
2024-01-12T14:29:08.6618636Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 524, in compile_inner
2024-01-12T14:29:08.6618819Z out_code = transform_code_object(code, transform)
2024-01-12T14:29:08.6619595Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 1033, in transform_code_object
2024-01-12T14:29:08.6619806Z transformations(instructions, code_options)
2024-01-12T14:29:08.6620396Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 151, in _fn
2024-01-12T14:29:08.6620524Z return fn(*args, **kwargs)
2024-01-12T14:29:08.6621175Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 489, in transform
2024-01-12T14:29:08.6621307Z tracer.run()
2024-01-12T14:29:08.6621928Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in run
2024-01-12T14:29:08.6622028Z super().run()
2024-01-12T14:29:08.6622634Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 780, in run
2024-01-12T14:29:08.6622752Z and self.step()
2024-01-12T14:29:08.6623366Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 743, in step
2024-01-12T14:29:08.6623505Z getattr(self, inst.opname)(inst)
2024-01-12T14:29:08.6624154Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1282, in LOAD_ATTR
2024-01-12T14:29:08.6624340Z result = BuiltinVariable(getattr).call_function(
2024-01-12T14:29:08.6625028Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/variables/builtin.py", line 650, in call_function
2024-01-12T14:29:08.6625167Z result = handler(tx, *args, **kwargs)
2024-01-12T14:29:08.6625840Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/variables/builtin.py", line 1238, in call_getattr
2024-01-12T14:29:08.6625978Z return obj.var_getattr(tx, name)
2024-01-12T14:29:08.6626616Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/variables/base.py", line 257, in var_getattr
2024-01-12T14:29:08.6626755Z value = self.const_getattr(tx, name)
2024-01-12T14:29:08.6627592Z File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/variables/constant.py", line 128, in const_getattr
2024-01-12T14:29:08.6627732Z member = getattr(self.value, name)
2024-01-12T14:29:08.6628235Z torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object has no attribute 'profiler'
2024-01-12T14:29:08.6628242Z
2024-01-12T14:29:08.6628345Z from user code:
2024-01-12T14:29:08.6628759Z File "profiler/test_profiler.py", line 2304, in resume_in_test_allocation_id_uniqueness_at_2302
2024-01-12T14:29:08.6629017Z roots = p.profiler.kineto_results.experimental_event_tree()
2024-01-12T14:29:08.6629022Z
2024-01-12T14:29:08.6629311Z Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
2024-01-12T14:29:08.6629317Z
2024-01-12T14:29:08.6629322Z
2024-01-12T14:29:08.6629610Z You can suppress this exception and fall back to eager by setting:
2024-01-12T14:29:08.6629764Z import torch._dynamo
2024-01-12T14:29:08.6629936Z torch._dynamo.config.suppress_errors = True
2024-01-12T14:29:08.6629942Z
2024-01-12T14:29:08.6629947Z
2024-01-12T14:29:08.6630210Z To execute this test, run the following from the base repo dir:
2024-01-12T14:29:08.6630637Z PYTORCH_TEST_WITH_DYNAMO=1 python test_profiler.py -k test_allocation_id_uniqueness
2024-01-12T14:29:08.6630643Z
2024-01-12T14:29:08.6630961Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
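The traceback bottoms out in torch/_dynamo/variables/constant.py, where `const_getattr` runs `getattr(self.value, name)` with `self.value` being `None`: at trace time `p` was modeled as a constant `None`, so the `LOAD_ATTR` for `p.profiler` becomes `getattr(None, "profiler")`. A torch-free analogue of that failing call (the class here is an illustrative stand-in, not Dynamo's real `ConstantVariable`):

```python
class ConstantVariable:
    """Illustrative stand-in for Dynamo's constant wrapper (not the real class)."""

    def __init__(self, value):
        self.value = value

    def const_getattr(self, name):
        # Same operation as the failing line in the log:
        #   member = getattr(self.value, name)
        return getattr(self.value, name)


# `p` was evaluated to a constant None when the frame was traced,
# so looking up the "profiler" attribute raises AttributeError.
v = ConstantVariable(None)
try:
    v.const_getattr("profiler")
    msg = ""
except AttributeError as exc:
    msg = str(exc)

print(msg)  # 'NoneType' object has no attribute 'profiler'
```

Dynamo re-raises this as `InternalTorchDynamoError` with the same message, which is exactly what appears at the bottom of the log above.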