Comments (3)
- Test name: test_comprehensive_ones_like_cuda_int64 (__main__.TestInductorOpInfoCUDA)
- Platforms for which to skip the test: rocm
- Disabled by pytorch-bot[bot]

Within ~15 minutes, test_comprehensive_ones_like_cuda_int64 (__main__.TestInductorOpInfoCUDA) will be disabled in PyTorch CI for these platforms: rocm. Please verify that your test name looks correct, e.g., test_cuda_assert_async (__main__.TestCuda).
To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.
Platforms: case-insensitive, list, of, platforms
We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.
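As an illustration, a `Platforms:` line like the one above could be parsed case-insensitively along these lines. This is a minimal sketch: the function name, the regex, and the validation against the supported-platform list are assumptions for illustration, not the actual pytorch-bot implementation.

```python
import re

# Platforms the bot message says are supported.
SUPPORTED_PLATFORMS = {
    "asan", "dynamo", "inductor", "linux", "mac", "macos",
    "rocm", "slow", "win", "windows",
}

def parse_platforms(issue_body: str) -> set:
    """Extract the 'Platforms:' line from an issue body as a set of
    lowercase platform names. An empty set means no list was given,
    i.e. disable the test on all platforms."""
    match = re.search(r"^Platforms:\s*(.+)$", issue_body,
                      re.MULTILINE | re.IGNORECASE)
    if not match:
        return set()  # default: disable everywhere
    platforms = {p.strip().lower()
                 for p in match.group(1).split(",") if p.strip()}
    unknown = platforms - SUPPORTED_PLATFORMS
    if unknown:
        raise ValueError(f"Unsupported platforms: {sorted(unknown)}")
    return platforms

body = "Flaky on AMD runners.\nPlatforms: ROCm, Linux\n"
print(sorted(parse_platforms(body)))  # ['linux', 'rocm']
```

Case is normalized before comparison, matching the "case-insensitive" note in the bot's message.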
from pytorch.
Another case of trunk flakiness has been found here. The list of platforms [rocm] appears to contain all the recently affected platforms [rocm]. Either the change didn't propagate fast enough or the disable bot might be broken.
Resolving the issue because the test is no longer flaky after 700 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.
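The close heuristic the bot describes, enough clean reruns plus a stale issue, could be sketched as below. The function name, signature, and the thresholds-as-defaults are illustrative assumptions based only on the numbers in the comment above, not the bot's actual code.

```python
from datetime import datetime, timedelta, timezone

def can_close_disable_issue(num_reruns: int, num_failures: int,
                            last_updated: datetime,
                            min_reruns: int = 700,
                            stale_after: timedelta = timedelta(days=14)) -> bool:
    """Return True if a disable-test issue can be auto-resolved:
    the test passed every rerun, enough reruns have accumulated,
    and the issue has had no recent activity."""
    is_stale = datetime.now(timezone.utc) - last_updated >= stale_after
    return num_failures == 0 and num_reruns >= min_reruns and is_stale

# A run with 700 clean reruns on an issue untouched for 15 days closes:
old = datetime.now(timezone.utc) - timedelta(days=15)
print(can_close_disable_issue(700, 0, old))  # True
```

Keeping the thresholds as parameters makes the policy easy to tune without touching the decision logic.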
Related Issues (20)
- [BUG] NaN in gradients of scaled_dot_product_attention operation with mem_efficient backend
- Unnecessary warning when numpy not installed
- [RFC] Add Cpp Template for GEMM related ops via max-autotune for Inductor CPU
- MAX-Autotune Compilation Time Regression Due To Added MM Configs
- cnm
- DISABLED [WORKFLOW_NAME] / [PLATFORM_NAME] / [JOB_NAME]
- cnm
- [Dynamo] Support tracing through _get_current_dispatch_mode_stack
- Have config/env option to disable all PT2 caching
- [dynamo] fix nn.Module @property that accesses closure cells
- KINETO_USE_DAEMON causing issues
- `torch.compile` and complex numbers
- Support dynamo tracing weakref obj
- Migrate multiple/custom runner labels before deprecation
- torch._inductor.config.max_autotune_gemm_backends = "TRITON" crashes with Convolution layer
- ☂️ `torch.compile` generates slower code for LLMs than eager on ARM platform (M1/AARCH64)
- [ARM] `Vectorized<half>::loadu(x, 8)` yields slow code if `-fno-unsafe-math-optimizations` are used
- [FSDP2] _sharded_param_data is still on meta while sharded_param moved to cuda after calling initialize_parameters()
- [Distributed Checkpoint] When loading FSDP sharded checkpointing each rank needs all the checkpointing files
- [DTensor][Tensor Parallel] transformer test numerical issue when `dtype=torch.float32`