
Enabling PyTorch on Google TPU

Home Page: https://pytorch.org/xla/release/1.13/index.html

License: Other


PyTorch/XLA

Current CI status: CircleCI

PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs. You can try it right now, for free, on a single Cloud TPU with Google Colab, and use it in production and on Cloud TPU Pods with Google Cloud.
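
For a first impression, here is a minimal sketch of using a TPU core as an ordinary PyTorch device. It is not taken from this README; it assumes `torch` and `torch_xla` are installed and that an XLA device (e.g. on Colab or a Cloud TPU VM) is reachable.

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()   # a TPU core, exposed as a regular PyTorch device
x = torch.randn(4, 4, device=device)
y = x @ x                  # operations on XLA tensors are recorded lazily
xm.mark_step()             # compile and execute the recorded graph on the device
print(y.cpu())
```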

Take a look at one of our Colab notebooks to quickly try different PyTorch networks running on Cloud TPUs and learn how to use Cloud TPUs as PyTorch devices:

The rest of this README covers:

Additional information on PyTorch/XLA, including a description of its semantics and functions, is available at PyTorch.org.

Running PyTorch on Cloud TPUs with Google Cloud Platform

Google Cloud Platform lets you deploy PyTorch networks running on Cloud TPUs. This guide is split into two parts:


Running on a Single Cloud TPU VM

Google Cloud offers TPU VMs for more transparent and easier access to the TPU hardware. This is our recommended way of running PyTorch/XLA on Cloud TPU. Please check out our Cloud TPU VM User Guide. To learn more about the Cloud TPU system architecture, please check out this doc.
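
For orientation, the following is a hedged sketch of a single-device training loop on a TPU VM; the model, synthetic data, and hyperparameters are placeholders and are not part of the user guide.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import torch_xla.core.xla_model as xm

device = xm.xla_device()                       # the TPU core as a PyTorch device
model = nn.Linear(128, 10).to(device)          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Tiny synthetic dataset so the sketch is self-contained.
dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=32)

model.train()
for data, target in loader:
    data, target = data.to(device), target.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    xm.mark_step()                             # materialize and run the traced graph
```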


How to Run on TPU VM Pods (distributed training)

If a single TPU VM does not suit your requirements, consider using a TPU Pod. A TPU Pod is a collection of TPU devices connected by dedicated high-speed network interfaces. Please check out our Cloud TPU VM Pod User Guide.
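
As an illustration, here is a hedged sketch of the multi-process pattern (one process per TPU core) using torch_xla.distributed.xla_multiprocessing; the model and synthetic data are placeholders, and a real job would shard its dataset (e.g. with a DistributedSampler).

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp


def _mp_fn(index):
    device = xm.xla_device()
    model = nn.Linear(128, 10).to(device)                     # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Each process builds its own synthetic data here; real jobs shard a shared dataset.
    dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 10, (256,)))
    loader = pl.MpDeviceLoader(DataLoader(dataset, batch_size=32), device)

    for data, target in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        xm.optimizer_step(optimizer)   # all-reduces gradients across cores, then steps


if __name__ == '__main__':
    xmp.spawn(_mp_fn, args=())         # launches one process per local TPU core
```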

Available images and wheels

The following pre-built docker images are available to run on Cloud TPU Nodes (see docker images for instructions):

* `gcr.io/tpu-pytorch/xla:r1.13_3.7`: The current stable version.
* `gcr.io/tpu-pytorch/xla:r1.12_3.7`: The 1.12 release version.
* `gcr.io/tpu-pytorch/xla:r1.11_3.7`: The 1.11 release version.
* `gcr.io/tpu-pytorch/xla:nightly_3.7`: Nightly version using Python 3.7.
* `gcr.io/tpu-pytorch/xla:nightly_3.7_YYYYMMDD` (e.g. `gcr.io/tpu-pytorch/xla:nightly_3.7_20220301`).

and for Cloud TPU VMs:

* `gcr.io/tpu-pytorch/xla:r1.13_3.8_tpuvm`: The current stable version.
* `gcr.io/tpu-pytorch/xla:r1.12_3.8_tpuvm`: The 1.12 release version.
* `gcr.io/tpu-pytorch/xla:nightly_3.8_tpuvm`: Nightly version using Python 3.8.
* `gcr.io/tpu-pytorch/xla:nightly_3.8_YYYYMMDD` (e.g. `gcr.io/tpu-pytorch/xla:nightly_3.8_20220301`).

We also have pre-built docker images to run on Cloud compute instances with GPUs (CUDA 11.2):

* `gcr.io/tpu-pytorch/xla:r1.13_3.7_cuda_11.2`: The current stable version.
* `gcr.io/tpu-pytorch/xla:r1.12_3.7_cuda_11.2`: The 1.12 release version.
* `gcr.io/tpu-pytorch/xla:nightly_3.7_cuda_11.2`: Nightly version using Python 3.7.
* `gcr.io/tpu-pytorch/xla:nightly_3.7_cuda_11.2_YYYYMMDD`.

Instructions for running on compute instances with GPUs are available in the documentation.

The following pre-built wheels are available for Cloud TPU Node:

  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-nightly-cp37-cp37m-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.10-cp37-cp37m-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl

Cloud TPU VM:

  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.13-cp38-cp38-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.12-cp38-cp38-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.11-cp38-cp38-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.10-cp38-cp38-linux_x86_64.whl
  • https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.9-cp38-cp38-linux_x86_64.whl

and for Colab:

  • https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl (TPU runtime for 1.13 release)
  • https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl (GPU runtime for 1.13 release)
  • https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl (TPU runtime for 1.12 release)
  • https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl (GPU runtime for 1.12 release)
  • https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl (TPU runtime for 1.11 release)
  • https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl (GPU runtime for 1.11 release)

You can also append +yyyymmdd after torch_xla-nightly to get the nightly wheel for a specific date. To get the companion PyTorch nightly wheel, replace torch_xla with torch in the wheel links above.

Note that for Cloud TPU VMs, you can update libtpu after installing the torch_xla wheel by running:

pip3 install torch_xla[tpuvm]
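
After installing the wheels, a quick sanity check (a hedged sketch; it requires that the TPU runtime is reachable from your VM or Colab session) is to allocate a tensor on the XLA device:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # fails if no XLA/TPU runtime is reachable
t = torch.ones(2, 2, device=device) + 1
xm.mark_step()
print(t)                                 # a tensor of 2s living on the XLA device
```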

API & Best Practices

In general, PyTorch/XLA follows the PyTorch APIs; some additional torch_xla-specific APIs are documented at:

Documentation for the latest release

Documentation for master branch

See the API Guide for best practices when writing networks that run on Cloud TPUs and Cloud TPU Pods.

Performance Profiling and Auto-Metrics Analysis

PyTorch/XLA provides a set of performance profiling tools and auto-metrics analysis; you can learn more in the following resources:
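
As a quick starting point (a hedged sketch, not a replacement for the linked resources), the auto-metrics report can be printed from a running program via the torch_xla.debug.metrics module:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(1024, 1024, device=device)
y = (x @ x).sum()
xm.mark_step()

# Dump counters and timing metrics (graph compiles, executions, transfers, ...).
print(met.metrics_report())
```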

Troubleshooting

If PyTorch/XLA isn't performing as expected, see the troubleshooting guide, which has suggestions for debugging and optimizing your network(s).

Providing Feedback

The PyTorch/XLA team is always happy to hear from users and OSS contributors! The best way to reach out is by filing an issue on this GitHub repository. Questions, bug reports, feature requests, build issues, etc. are all welcome!

Contributing

See the contribution guide.

Disclaimer

This repository is jointly operated and maintained by Google, Facebook and a number of individual contributors listed in the CONTRIBUTORS file. For questions directed at Facebook, please send an email to [email protected]. For questions directed at Google, please send an email to [email protected]. For all other questions, please open an issue in this repository.

