Yeah, keyword args are a reasonable solution, but I would require them only for long tensors (and optionally allow them for other types).
from pytorch.
@apaszke, my point is that we shouldn't have some constructors work for only some tensor types. That leads to non-obvious bugs because we write a lot of code that is generic in the type of the tensor, especially in nn. The convenience of sometimes not having to use a keyword argument is not worth the inconsistency.
So to be clear, I don't think we should make any arguments optional for only some tensor types.
I got bitten by this again in PR #127 -- it breaks with LongTensors.
I have a new proposal that avoids requiring keyword arguments:
We change Tensor.size() to return a new type, torch.Size, instead of torch.LongStorage. The torch.Size type provides some standard operations like indexing, but it's immutable. A tensor can be constructed from a torch.Size, so the following still works:
foo = torch.FloatTensor(tensor.size())
Some more examples:
>>> torch.FloatTensor(3, 4, 2).size()
torch.Size([3, 4, 2])
>>> torch.FloatTensor(3, 4, 2).size()[0]
3
>>> torch.FloatTensor(torch.Size([3, 4, 2]))
...
[torch.FloatTensor of size 3x4x2]
>>> torch.LongTensor(torch.LongStorage([1, 2, 3])) # as opposed to a LongTensor of size 1x2x3
1
2
3
[torch.LongTensor of size 3]
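A minimal, torch-independent sketch of what such a Size type could look like (the class and its repr here are purely illustrative, not the actual torch implementation):

```python
# Illustrative sketch: a Size type built as a tuple subclass, so it
# is immutable and supports indexing, slicing, and iteration, while
# remaining a distinct type that a constructor can dispatch on.

class Size(tuple):
    """Immutable shape container that behaves like a tuple."""
    def __repr__(self):
        return "torch.Size(%s)" % list(self)

s = Size([3, 4, 2])
print(s[0])                   # indexing works like a tuple
print(len(s))                 # so does len()
print(isinstance(s, tuple))   # True: usable anywhere a tuple is
```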
If you guys are OK with this, I'll make the change.
cc @apaszke, @soumith, @adamlerer, @fmassa
I think this is way more reasonable. Returning torch.LongStorage never made sense anyway.
This seems like an elegant solution. It'd probably be best to make torch.Size a subclass of tuple:
class Size(tuple):
pass
Btw, if you have a regular tuple or list of sizes, you can do torch.Tensor(*sizes) instead of torch.Tensor(torch.Size(sizes)).
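For illustration, here is the difference the star makes with a plain Python variadic function (make_tensor is a hypothetical stand-in for torch.Tensor, which takes sizes as positional arguments):

```python
# Hypothetical stand-in for a constructor called like torch.Tensor(3, 4, 2).
def make_tensor(*sizes):
    return "tensor of size %s" % "x".join(str(d) for d in sizes)

sizes = (3, 4, 2)
print(make_tensor(3, 4, 2))   # sizes passed directly
print(make_tensor(*sizes))    # *sizes unpacks the tuple: same call
```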
@adamlerer was advocating for making Tensor.size() simply return a tuple (as opposed to a subclass of tuple).
To construct a tensor with the same size as another, you would write:
tensor1 = torch.Tensor(*tensor2.size())
This seems reasonable to me too, but would require more changes to existing code.
I thought about it as well, but I dislike that it requires an additional star. I'd say that size() should return a special Size object that behaves just like a regular tuple (because it subclasses it), but has a clear meaning in our functions. If we make it just a regular tuple, I can see a lot of issues where people will forget about the star and will be very surprised that they got e.g. a 3-element tensor.
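The failure mode described above can be sketched with a toy constructor (construct is hypothetical; the real torch.Tensor dispatch is more involved): a plain tuple is indistinguishable from a sequence of data values, whereas a distinct Size subclass is not.

```python
class Size(tuple):
    """Marker type for shapes; otherwise a plain tuple."""

def construct(*args):
    # One positional argument that is a Size -> build by shape.
    if len(args) == 1 and isinstance(args[0], Size):
        return ("by-shape", tuple(args[0]))
    # One positional list/tuple that is NOT a Size -> treat as data,
    # which is the surprising path when the star is forgotten.
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        return ("from-data", list(args[0]))
    # Several ints -> build by shape.
    return ("by-shape", args)

shape = (3, 4, 2)
print(construct(Size(shape)))  # ('by-shape', (3, 4, 2))
print(construct(shape))        # ('from-data', [3, 4, 2]): a 3-element tensor!
print(construct(*shape))       # ('by-shape', (3, 4, 2)): star required
```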
I think this is fixed now