Comments (5)
This looks like a reason to me:
from pytorch.
Sorry to be the necromancer here.
The numbers for the charts above are a bit hard to replicate: PyPI download statistics have been disabled and then re-enabled in the meantime, and it's not entirely clear to me how reliable they were then or how reliable they are now.
So I just looked at the conda-cloud download numbers for 0.4.0 and 0.4.1, and it might be time to re-evaluate this.
PyTorch Version | Python 2.7 | Python 3.X
---|---|---
0.4.0 | 20,900 | 165,057
0.4.1 | 14,578 | 127,007
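As a quick sanity check on those conda numbers, the Python 2 share works out to roughly 10-11% per release (figures copied from the table above):

```python
# Conda-cloud download counts from the table above.
downloads = {
    "0.4.0": {"py2": 20900, "py3": 165057},
    "0.4.1": {"py2": 14578, "py3": 127007},
}

for version, counts in downloads.items():
    total = counts["py2"] + counts["py3"]
    share = counts["py2"] / total  # fraction of downloads on Python 2
    print(f"{version}: Python 2 share = {share:.1%}")
```

That gives about 11.2% for 0.4.0 and 10.3% for 0.4.1 — the basis for questioning whether legacy Python is still worth supporting.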
There may be other sources of data that I didn't see or don't have access to, and there may be other reasons to further support legacy python, but I thought I'd put this out here.
One can get consistent PyPI download numbers over the last 1.5 years from this BigQuery dataset: https://bigquery.cloud.google.com/dataset/the-psf:pypi
Running a query to get the download numbers for the last ~3 months (since 2018-06-15):
```sql
SELECT
  file.project,
  file.version,
  file.type,
  file.filename,
  COUNT(*) as total_downloads,
FROM
  TABLE_DATE_RANGE(
    [the-psf:pypi.downloads],
    TIMESTAMP("20180615"),
    CURRENT_TIMESTAMP()
  )
WHERE
  file.project in ('torch')
GROUP BY
  file.project, file.version, file.filename, file.type
ORDER BY total_downloads DESC
LIMIT 100
```
This gives:
file_project | file_version | file_type | file_filename | total_downloads |
---|---|---|---|---|
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp36-cp36m-manylinux1_x86_64.whl | 107464 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp35-cp35m-manylinux1_x86_64.whl | 76897 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-cp27mu-manylinux1_x86_64.whl | 64493 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl | 53572 |
torch | 0.1.2.post1 | sdist | torch-0.1.2.post1.tar.gz | 33083 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp36-cp36m-manylinux1_x86_64.whl | 25778 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-cp27mu-manylinux1_x86_64.whl | 23750 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp35-cp35m-manylinux1_x86_64.whl | 17091 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp36-cp36m-macosx_10_7_x86_64.whl | 15014 |
torch | 0.4.1.post2 | bdist_wheel | torch-0.4.1.post2-cp37-cp37m-manylinux1_x86_64.whl | 8977 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-cp27mu-manylinux1_x86_64.whl | 8479 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp37-cp37m-macosx_10_7_x86_64.whl | 6943 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-none-macosx_10_6_x86_64.whl | 6496 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp35-cp35m-manylinux1_x86_64.whl | 5504 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl | 4059 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp37-cp37m-manylinux1_x86_64.whl | 2852 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp35-cp35m-macosx_10_6_x86_64.whl | 2495 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-none-macosx_10_6_x86_64.whl | 1976 |
torch | 0.4.1 | bdist_wheel | torch-0.4.1-cp27-cp27m-manylinux1_x86_64.whl | 1841 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp36-cp36m-macosx_10_7_x86_64.whl | 1618 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp27-cp27m-manylinux1_x86_64.whl | 1210 |
torch | 0.4.0 | bdist_wheel | torch-0.4.0-cp35-cp35m-macosx_10_6_x86_64.whl | 1160 |
torch | 0.1.2 | sdist | torch-0.1.2.tar.gz | 908 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-none-macosx_10_6_x86_64.whl | 697 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp27-cp27m-manylinux1_x86_64.whl | 494 |
torch | 0.3.1 | bdist_wheel | torch-0.3.1-cp35-cp35m-macosx_10_6_x86_64.whl | 402 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp36-cp36m-macosx_10_7_x86_64.whl | 299 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp27-none-macosx_10_6_x86_64.whl | 254 |
torch | 0.3.0.post4 | bdist_wheel | torch-0.3.0.post4-cp35-cp35m-macosx_10_6_x86_64.whl | 210 |
On PyPI, Python 2.7 is still very much alive and kicking.
The Anaconda numbers are skewed because Anaconda brings its own Python, and a lot of users simply start with miniconda3 instead of miniconda2.
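A rough cross-check: tallying the 0.4.0/0.4.1 wheels from the table above by the Python tag in their filenames (the cp27* tags mark Python 2) puts Python 2 at about a quarter of downloads, consistent with the queries below. The counts are copied from the table; this is just a sketch:

```python
import re

# (filename, downloads) pairs for the 0.4.0/0.4.1 wheels in the table above.
wheels = [
    ("torch-0.4.1-cp36-cp36m-manylinux1_x86_64.whl", 107464),
    ("torch-0.4.1-cp35-cp35m-manylinux1_x86_64.whl", 76897),
    ("torch-0.4.1-cp27-cp27mu-manylinux1_x86_64.whl", 64493),
    ("torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl", 53572),
    ("torch-0.4.0-cp27-cp27mu-manylinux1_x86_64.whl", 23750),
    ("torch-0.4.0-cp35-cp35m-manylinux1_x86_64.whl", 17091),
    ("torch-0.4.1-cp36-cp36m-macosx_10_7_x86_64.whl", 15014),
    ("torch-0.4.1-cp37-cp37m-macosx_10_7_x86_64.whl", 6943),
    ("torch-0.4.1-cp27-none-macosx_10_6_x86_64.whl", 6496),
    ("torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl", 4059),
    ("torch-0.4.1-cp37-cp37m-manylinux1_x86_64.whl", 2852),
    ("torch-0.4.1-cp35-cp35m-macosx_10_6_x86_64.whl", 2495),
    ("torch-0.4.0-cp27-none-macosx_10_6_x86_64.whl", 1976),
    ("torch-0.4.1-cp27-cp27m-manylinux1_x86_64.whl", 1841),
    ("torch-0.4.0-cp27-cp27m-manylinux1_x86_64.whl", 1210),
    ("torch-0.4.0-cp35-cp35m-macosx_10_6_x86_64.whl", 1160),
]

totals = {"py2": 0, "py3": 0}
for filename, count in wheels:
    # First digit of the cpXY tag gives the major Python version.
    major = re.search(r"-cp(\d)", filename).group(1)
    totals["py2" if major == "2" else "py3"] += count

share = totals["py2"] / (totals["py2"] + totals["py3"])
print(f"Python 2 share: {share:.0%}")  # -> 26%
```

This omits the 0.4.1.post2 wheel and sdists, so it lands slightly above the ~24% figure the BigQuery aggregation reports, but it tells the same story.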
Indeed, PyPI gives a different picture.
After figuring out how to use BigQuery (I had to select "legacy" SQL...), I played around with it a bit.
I aggregated the Python/PyTorch version numbers to distinguish only between major Python versions (2/3) and minor PyTorch versions (i.e. ignoring suffixes such as "post1").
```sql
SELECT
  REGEXP_EXTRACT(file.version, r'^([0-9](?:\.[0-9]+)*)') as version,
  REGEXP_EXTRACT(details.python, r'^([2-3])\.[0-9].') as python_major,
  COUNT(*) as total_downloads,
FROM
  TABLE_DATE_RANGE(
    [the-psf:pypi.downloads],
    TIMESTAMP("20180615"),
    TIMESTAMP("20181015")
  )
WHERE
  file.project in ('torch')
  -- AND (file.version = '0.4.0' OR file.version = '0.4.1')
GROUP BY
  version, python_major
ORDER BY version, python_major ASC
```
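The two REGEXP_EXTRACT patterns behave like their Python `re` counterparts, so a quick sketch shows what the aggregation does to sample values:

```python
import re

# Same patterns as in the query: strip "postN" suffixes from the torch
# version, and keep only the major digit of the Python version.
version_re = re.compile(r"^([0-9](?:\.[0-9]+)*)")
python_re = re.compile(r"^([2-3])\.[0-9].")

print(version_re.match("0.4.1.post2").group(1))  # -> 0.4.1
print(version_re.match("0.3.0.post4").group(1))  # -> 0.3.0
print(python_re.match("2.7.15").group(1))        # -> 2
print(python_re.match("3.6.5").group(1))         # -> 3
```

Note the trailing `.` in the second pattern: it requires a character after the minor version, so a bare "3.6" would not match and would fall into the no-version bucket.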
Ignoring the requests that have no Python version set (assuming they're distributed like the other downloads), that puts Python 2 at about 24% for the 0.4.0 and 0.4.1 releases. So yeah, arguably still relevant. But it seems to be changing, finally. 🎉
@black-puppydog nice SQL skills, I might use that in the future.