pytorch / data

A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.

License: BSD 3-Clause "New" or "Revised" License

Python 96.88% Shell 1.01% Batchfile 0.03% CMake 0.41% C++ 1.66%

data's Introduction

TorchData (see note below on current status)

Why TorchData? | Install guide | What are DataPipes? | Beta Usage and Feedback | Contributing | Future Plans

โš ๏ธ As of July 2023, we have paused active development on TorchData and have paused new releases. We have learnt a lot from building it and hearing from users, but also believe we need to re-evaluate the technical design and approach given how much the industry has changed since we began the project. During the rest of 2023 we will be re-evaluating our plans in this space. Please reach out if you suggestions or comments (please use #1196 for feedback).

torchdata is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines.

This library introduces composable Iterable-style and Map-style building blocks called DataPipes that work well out of the box with PyTorch's DataLoader. These built-in DataPipes have the necessary functionality to reproduce many different datasets in TorchVision and TorchText, namely loading files (from local or cloud storage), parsing, caching, transforming, filtering, and many more utilities. To understand the basic structure of DataPipes, please see What are DataPipes? below, and to see how DataPipes can be practically composed together into datasets, please see our examples.
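As a rough sketch of the composition idea, here is a pipeline of plain Python iterables standing in for DataPipes (the class names `Lister`, `Opener`, and `CsvParser` are illustrative stand-ins, not torchdata APIs):

```python
import csv
import io

class Lister:
    """Yields file names, like a file-listing DataPipe."""
    def __init__(self, names):
        self.names = names

    def __iter__(self):
        yield from self.names

class Opener:
    """Pairs each name with an open text stream (in-memory here)."""
    def __init__(self, source, contents):
        self.source = source
        self.contents = contents

    def __iter__(self):
        for name in self.source:
            yield name, io.StringIO(self.contents[name])

class CsvParser:
    """Parses each stream as CSV and yields rows one by one."""
    def __init__(self, source):
        self.source = source

    def __iter__(self):
        for name, stream in self.source:
            for row in csv.reader(stream):
                yield row

files = {"a.csv": "1,2\n3,4\n"}
pipe = CsvParser(Opener(Lister(files), files))
print(list(pipe))  # [['1', '2'], ['3', '4']]
```

Each stage wraps the previous one and transforms items as they stream through, which is the same shape a real `FileLister → FileOpener → CSVParser` chain takes in torchdata.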

On top of DataPipes, this library provides a new DataLoader2 that allows the execution of these data pipelines in various settings and execution backends (ReadingService). You can learn more about DataLoader2 in our full DataLoader2 documentation. Additional features, such as checkpointing and advanced control of randomness and determinism, are works in progress.

Note that because many features of the original DataLoader have been modularized into DataPipes, their source code lives as standard DataPipes in pytorch/pytorch rather than torchdata, to preserve backward-compatibility support and functional parity within torch. Regardless, you can still access them by importing them from torchdata.

Why composable data loading?

Over many years of feedback and organic community usage of the PyTorch DataLoader and Dataset, we've found that:

  1. The original DataLoader bundled too many features together, making them difficult to extend, manipulate, or replace. This has created a proliferation of use-case specific DataLoader variants in the community rather than an ecosystem of interoperable elements.
  2. Many libraries, including each of the PyTorch domain libraries, have rewritten the same data loading utilities over and over again. We can save OSS maintainers time and effort rewriting, debugging, and maintaining these commonly used elements.

These reasons inspired the creation of DataPipe and DataLoader2, with a goal to make data loading components more flexible and reusable.

Installation

Version Compatibility

The following table lists the corresponding torchdata versions and supported Python versions.

| torch            | torchdata      | python        |
| ---------------- | -------------- | ------------- |
| master / nightly | main / nightly | >=3.8, <=3.11 |
| 2.0.0            | 0.6.0          | >=3.8, <=3.11 |
| 1.13.1           | 0.5.1          | >=3.7, <=3.10 |
| 1.12.1           | 0.4.1          | >=3.7, <=3.10 |
| 1.12.0           | 0.4.0          | >=3.7, <=3.10 |
| 1.11.0           | 0.3.0          | >=3.7, <=3.10 |

Colab

Follow the instructions in this Colab notebook. The notebook also contains a simple usage example.

Local pip or conda

First, set up an environment. We will be installing a PyTorch binary as well as torchdata. If you're using conda, create a conda environment:

conda create --name torchdata
conda activate torchdata

If you wish to use venv instead:

python -m venv torchdata-env
source torchdata-env/bin/activate

Install torchdata:

Using pip:

pip install torchdata

Using conda:

conda install -c pytorch torchdata

You can then proceed to run our examples, such as the IMDb one.

From source

pip install .

If you'd like to include the S3 IO datapipes and aws-sdk-cpp, you may also follow the instructions here

In case building TorchData from source fails, install the nightly version of PyTorch following the linked guide on the contributing page.

From nightly

The nightly version of TorchData is also provided and updated daily from the main branch.

Using pip:

pip install --pre torchdata --extra-index-url https://download.pytorch.org/whl/nightly/cpu

Using conda:

conda install torchdata -c pytorch-nightly

What are DataPipes?

Early on, we observed widespread confusion between PyTorch Datasets that represented reusable loading tooling (e.g. TorchVision's ImageFolder) and those that represented pre-built iterators/accessors over actual data corpora (e.g. TorchVision's ImageNet). This led to an unfortunate pattern of siloed inheritance of data tooling rather than composition.

DataPipe is simply a renaming and repurposing of the PyTorch Dataset for composed usage. A DataPipe takes in some access function over Python data structures, __iter__ for IterDataPipes and __getitem__ for MapDataPipes, and returns a new access function with a slight transformation applied. For example, take a look at this JsonParser, which accepts an IterDataPipe over file names and raw streams, and produces a new iterator over the filenames and deserialized data:

import json

from torchdata.datapipes.iter import IterDataPipe

class JsonParserIterDataPipe(IterDataPipe):
    def __init__(self, source_datapipe, **kwargs) -> None:
        self.source_datapipe = source_datapipe
        self.kwargs = kwargs

    def __iter__(self):
        for file_name, stream in self.source_datapipe:
            data = stream.read()
            yield file_name, json.loads(data, **self.kwargs)

    def __len__(self):
        return len(self.source_datapipe)

You can see in this example how DataPipes can be easily chained together to compose graphs of transformations that reproduce sophisticated data pipelines, with streamed operation as a first-class citizen.
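To make the snippet above concrete, here is a self-contained run of the same parser over an in-memory stream; a minimal `IterDataPipe` stand-in replaces the torch base class so the example runs without torch installed:

```python
import io
import json

# Minimal stand-in so the pattern runs standalone; real code would
# subclass torchdata.datapipes.iter.IterDataPipe instead.
class IterDataPipe:
    pass

class JsonParserIterDataPipe(IterDataPipe):
    def __init__(self, source_datapipe, **kwargs) -> None:
        self.source_datapipe = source_datapipe
        self.kwargs = kwargs

    def __iter__(self):
        # Each item is a (file_name, stream) pair; yield the name with
        # the deserialized JSON payload.
        for file_name, stream in self.source_datapipe:
            yield file_name, json.loads(stream.read(), **self.kwargs)

    def __len__(self):
        return len(self.source_datapipe)

source = [("a.json", io.StringIO('{"x": 1}'))]
parsed = list(JsonParserIterDataPipe(source))
print(parsed)  # [('a.json', {'x': 1})]
```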

Under this naming convention, Dataset simply refers to a graph of DataPipes, and a dataset module like ImageNet can be rebuilt as a factory function returning the requisite composed DataPipes. Note that the vast majority of built-in features are implemented as IterDataPipes; we encourage the use of built-in IterDataPipes as much as possible, converting them to MapDataPipes only when necessary.

DataLoader2

A new, light-weight DataLoader2 is introduced to decouple the overloaded data-manipulation functionalities from torch.utils.data.DataLoader into DataPipe operations. In addition, certain features can only be achieved with DataLoader2, such as checkpointing/snapshotting and switching backend services to perform high-performance operations.

Please read the full documentation here.

Tutorial

A tutorial of this library is available here on the documentation site. It covers four topics: using DataPipes, working with DataLoader, implementing DataPipes, and working with Cloud Storage Providers.

There is also a tutorial available on how to work with the new DataLoader2.

Usage Examples

We provide a simple usage example in this Colab notebook. It can also be downloaded and executed locally as a Jupyter notebook.

In addition, there are several data loading implementations of popular datasets across different research domains that use DataPipes. You can find a few selected examples here.

Frequently Asked Questions (FAQ)

What should I do if the existing set of DataPipes does not do what I need?

You can implement your own custom DataPipe. If you believe your use case is common enough such that the community can benefit from having your custom DataPipe added to this library, feel free to open a GitHub issue. We will be happy to discuss!

What happens when the Shuffler DataPipe is used with DataLoader?

In order to enable shuffling, you need to add a Shuffler to your DataPipe pipeline. Then, by default, shuffling will happen at the point you specified, as long as you do not set shuffle=False within DataLoader.
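The key property is that the shuffle happens where the Shuffler sits in the pipeline, using a bounded buffer rather than materializing everything. A pure-Python sketch of that buffered-shuffle idea (not the actual Shuffler implementation):

```python
import random

def shuffler(source, buffer_size=3, seed=0):
    """Buffered streaming shuffle: keep up to buffer_size items in a
    buffer and yield one of them at random at each step."""
    rng = random.Random(seed)
    buf = []
    for item in source:
        buf.append(item)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    # Drain whatever remains in the buffer at the end.
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

shuffled = list(shuffler(range(6)))
print(sorted(shuffled) == list(range(6)))  # True: same items, new order
```

Note that a small buffer only shuffles locally; a larger `buffer_size` gives a more thorough shuffle at the cost of memory.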

What happens when the Batcher DataPipe is used with DataLoader?

If you choose to use Batcher while setting batch_size > 1 for DataLoader, your samples will be batched more than once. You should choose one or the other.
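A toy illustration of why that double-batching happens (`batcher` is a stand-in for the batching step, not the torchdata implementation):

```python
def batcher(source, batch_size):
    """Group items from an iterable into lists of batch_size."""
    batch = []
    for item in source:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit a final partial batch if any items remain
        yield batch

once = list(batcher(range(8), 2))
print(once)   # [[0, 1], [2, 3], [4, 5], [6, 7]]

# Batching the already-batched stream again (what happens when both the
# Batcher DataPipe and DataLoader's batch_size > 1 are used) nests batches:
twice = list(batcher(once, 2))
print(twice)  # [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
```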

Why are there fewer built-in MapDataPipes than IterDataPipes?

By design, there are fewer MapDataPipes than IterDataPipes to avoid duplicate implementations of the same functionalities as MapDataPipe. We encourage users to use the built-in IterDataPipe for various functionalities, and convert it to MapDataPipe as needed.

How is multiprocessing handled with DataPipes?

Multi-process data loading is still handled by the DataLoader; see the DataLoader documentation for more details. As of PyTorch version >= 1.12.0 (TorchData version >= 0.4.0), data sharding is automatically done for DataPipes within the DataLoader as long as a ShardingFilter DataPipe exists in your pipeline. Please see the tutorial for an example.
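A pure-Python sketch of the round-robin sharding idea behind ShardingFilter (not the actual implementation): each worker keeps every num_shards-th element starting at its own shard id, so the shards partition the stream with no overlap.

```python
def sharding_filter(source, num_shards, shard_id):
    """Yield only the elements assigned to this shard, round-robin."""
    for i, item in enumerate(source):
        if i % num_shards == shard_id:
            yield item

# Two "workers" over the same stream see disjoint halves of the data:
print(list(sharding_filter(range(10), 2, 0)))  # [0, 2, 4, 6, 8]
print(list(sharding_filter(range(10), 2, 1)))  # [1, 3, 5, 7, 9]
```

Because each shard skips elements rather than materializing the stream, this works on arbitrarily large or infinite pipelines.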

What is the upcoming plan for DataLoader?

DataLoader2 is in the prototype phase and more features are actively being developed. Please see the README file in torchdata/dataloader2. If you would like to experiment with it (or other prototype features), we encourage you to install the nightly version of this library.

Why is there an Error saying the specified DLL could not be found at the time of importing portalocker?

This error only occurs when running torchdata on Windows, and is a known problem with pywin32. You can find the cause and the solution in the link.

Contributing

We welcome PRs! See the CONTRIBUTING file.

Beta Usage and Feedback

We'd love to hear from and work with early adopters to shape our designs. Please reach out by raising an issue if you're interested in using this tooling for your project.

License

TorchData is BSD licensed, as found in the LICENSE file.

data's People

Contributors

alex-jg3, atalman, bushshrub, d4l3k, dahsh, danilbaibak, datumbox, ejguan, erip, gaikwadabhishek, gokulavasan, huydhn, jkulhanek, lezwon, miiirak, msaroufim, ninginthecloud, nivekt, osalpekar, pmeier, r-barnes, sebathomas, seemethere, svends9, tcapelle, tmbdev, vitalyfedyunin, wenleix, xunnanxu, ydaiming


data's Issues

[DataPipe] Ensure all DataPipes Meet Testing Requirements

🚀 Feature

We have many tests for existing DataPipes (both in PyTorch Core and TorchData). However, over time, they have become less organized. Moreover, as the testing requirements expand, older DataPipes may not have tests to cover the newly added requirements.

This issue aims to track the status of tests for all DataPipes.

Motivation

We want to ensure that test coverage for all DataPipes is complete, to reduce bugs and unexpected behavior.

Alternative

We also should create some testing templates for IterDataPipe and MapDataPipe that can be widely applied.

IterDataPipe Tracker

X - Done
NA - Not Applicable
Blank - Not Done/Unclear

Test definitions:
Functional - unit test to ensure that the DataPipe works properly with various input arguments
Reset - DataPipe can be reset/restart after being read
__len__ - the __len__ method is implemented whenever possible (or explicitly not implemented)
Serializable - DataPipe is serializable
Graph (future) - can be traversed as part of a DataPipe graph
Snapshot (future) - can be saved/loaded as a checkpoint/snapshot

Name Module Functional Test Reset __len__ Serializable (Pickable) Graph Snapshot
Batcher Core X X X X
Collator Core X X X X
Concater Core X X X X
Demultiplexer Core X X X X
FileLister Core X X X X
FileOpener Core X X X X
Filter Core X X X X
Forker Core X X X X
Grouper Core X X X
IterableWrapper Core X X X X
Mapper Core X X X X
Multiplexer Core X X X X
RoutedDecoder Core X X X X
Sampler Core X X X X
Shuffler Core X X X X
StreamReader Core X X X X
UnBatcher Core X X X
Zipper Core X X X X
BucketBatcher Data X X X X
CSVDictParser Data X X X X
CSVParser Data X X X X
Cycler Data X X X X
DataFrameMaker Data X X X X
Decompressor Data X X X X
Enumerator Data X X X X
FlatMapper Data X X X X
FSSpecFileLister Data X X X X
FSSpecFileOpener Data X X X X
FSSpecSaver Data X X X X
GDriveReader Data X X X X
HashChecker Data X X X X
Header Data X X X X
HttpReader Data X X X X
InMemoryCacheHolder Data X X X X
IndexAdder Data X X X X
IoPathFileLister Data X X X X
IoPathFileOpener Data X X X X
IoPathSaver Data X X X X
IterKeyZipper Data X X X X
JsonParser Data X X X X
LineReader Data X X X X
MapKeyZipper Data X X X X
OnDiskCacheHolder Data X X X X
OnlineReader Data X X X X
ParagraphAggregator Data X X X X
ParquetDataFrameLoader Data X X X X
RarArchiveLoader Data X X X X
Rows2Columnar Data X X X X
SampleMultiplexer Data X X X X
Saver Data X X X X
TarArchiveLoader Data X X X X
UnZipper Data X X X X
XzFileLoader Data X X X X
ZipArchiveLoader Data X X X X

MapDataPipe Tracker

X - Done
NA - Not Applicable
Blank - Not Done/Unclear

Name Module Functional Test __len__ Serializable (Pickable) Graph Snapshot
Batcher Core X X
Concater Core X X
Mapper Core X X X
SequenceWrapper Core X X X
Shuffler Core X X
Zipper Core X X

cc: @ejguan @VitalyFedyunin @NivekT

Create .pyi files for DataPipes

🚀 Feature

Create .pyi files for DataPipes to provide information for IDEs

As @ejguan mentioned, codegen is needed in order for all the comments and argument types to be automatically attached to the .pyi file. We also need to account for the fact that operations exist in both Core and TorchData.

Motivation

These files will allow IDEs to provide more information to users when they are using DataPipes, and will enable features such as code autocompletion.

cc: @VitalyFedyunin @ejguan

Improve debuggability

🚀 Feature

Currently, when iteration on a DataPipe starts and an error is raised, the traceback reports each __iter__ method, pointing to the DataPipe class file.
It's hard to figure out which part of the DataPipe graph is broken, especially when multiple instances of the same DataPipe exist in the pipeline.

Since developers would normally iterate over the sequence of DataPipes for debugging, we can't rely on DataLoader to handle this case.

I am not sure how to reference the self object from each iterator instance. https://docs.python.org/3/reference/expressions.html?highlight=generator#generator-iterator-methods

(I guess this is also one thing we need to think about: a singleton iterator should be able to reference back to the object.)

Also expose `torch` native datapipes in `torchdata.datapipes`

That would remove the need to import these from three different sources.

Before:

from torch.utils.data import IterDataPipe
from torch.utils.data.datapipes.iter import (
    Demultiplexer,
    Filter,
    Mapper,
    TarArchiveReader,
    Shuffler,
)
from torchdata.datapipes.iter import KeyZipper

After:

from torchdata.datapipes.iter import (
    KeyZipper,
    IterDataPipe,
    Demultiplexer,
    Filter,
    Mapper,
    TarArchiveReader,
    Shuffler,
)

I can send a PR if we want this.

fsspec support

🚀 Feature

Add a new loader similar to the iopath loader that uses fsspec.

https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/load/iopath.py

https://filesystem-spec.readthedocs.io/en/latest/

Motivation

It would be nice to have fsspec in addition to iopath for loading data from general data sources. A lot of projects already use it and support it which makes it a good to add to torchdata as well for uniform support.

PyTorch Lighting, Tensorboard and TorchX have support for fsspec already. It's quite easy to add support for a new storage provider and has many commons ones available already. Internally there's a Manifold provider which is used with many PyTorch/STL projects.

Alternatives

For common storage providers such as s3 there's generally already support for that in most projects though for custom / less used storage providers a user would have to implement support for each different system. iopath does provide a similar abstraction but it seems like fsspec generally has more OSS adoption so would be nice to have a unified interface across pytorch projects

Additional context

[BE] Add lazy_import

🚀 Feature

Currently, we use a try/except import inside each DataPipe class to make importing lazy, like:

try:
    from iopath.common.file_io import g_pathmgr
except ImportError:
    raise ModuleNotFoundError(
        "Package `iopath` is required to be installed to use this "
        "datapipe. Please use `pip install iopath` or `conda install iopath` "
        "to install the package"
    )

As more third-party libraries are used in TorchData to support different functionalities, we could add a method that lazily imports a module into the global namespace. Then, we wouldn't need to duplicate the import inside each class that uses the same third-party module.

Features needed:

  • Error message generation
  • Support from ... import ... as ...
  • Support submodule lazy import import xxx.yyy
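A minimal sketch of what such a helper could look like (the `lazy_import` name and signature are hypothetical; a real version would also need the `from ... import ... as ...` and submodule support listed above):

```python
import importlib

def lazy_import(module_name, pip_name=None):
    """Hypothetical helper: import a module by name, raising a uniform,
    actionable error message if it is not installed."""
    pip_name = pip_name or module_name
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise ModuleNotFoundError(
            f"Package `{pip_name}` is required to use this datapipe. "
            f"Please install it via `pip install {pip_name}` or "
            f"`conda install {pip_name}`."
        )

# A stdlib module imports fine:
json_mod = lazy_import("json")
print(json_mod.loads("[1, 2]"))  # [1, 2]
```

This centralizes the error-message generation, so each DataPipe only states which package it needs.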

Close all streams within DataPipe after reading (if DataPipe's output isn't a stream)

🚀 Feature

Close all streams within DataPipe after reading (if DataPipe's output isn't a stream). This is applicable to DataPipes such as CSVParserIterDataPipe and JsonParserIterDataPipe.

Motivation

This feature prevents streams that have been exhausted from remaining open. Since the end of the stream has been reached, the user must reset or reopen the stream before it can be reused. It makes sense to close such streams; users can re-open them outside the DataPipe if desired.

Alternatives

One alternative is to only close streams that are not seek-able. The users still have to re-open/reset those streams outside of the DataPipe.

Additional context

Note that if the DataPipe returns a stream (e.g. TarArchiveReader), the behavior will be different (it won't be closed) because it is uncertain when that output stream will be read.
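For illustration, a plain-Python sketch of the proposed behavior for a parser-style DataPipe (consume the stream fully, then close it, since its end has been reached anyway):

```python
import io

def parse_lines(source):
    """Yield (file_name, line) pairs, closing each input stream once
    it has been fully consumed -- the behavior proposed above."""
    for file_name, stream in source:
        try:
            for line in stream:
                yield file_name, line.rstrip("\n")
        finally:
            stream.close()

s = io.StringIO("a\nb\n")
out = list(parse_lines([("f.txt", s)]))
print(out)        # [('f.txt', 'a'), ('f.txt', 'b')]
print(s.closed)   # True: the exhausted stream was closed for the user
```

A DataPipe that instead yields streams (like TarArchiveReader) would skip the close, as noted above, since the output stream's consumption time is unknown.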

cc @ejguan @VitalyFedyunin

TODO

Issue generated from a TODO line

`python setup.py clean` doesn't work.

python setup.py clean couldn't remove the package from my environment, no matter whether I installed the package in develop mode or release mode.

In order to remove it, I have to use pip uninstall torchdata.

  • For develop mode (python setup.py develop), I have to re-install it using release mode then pip uninstall.

The setup.py script probably has a bug in its clean command.

cc: @NivekT

ImportError: cannot import name 'StreamWrapper' from 'torch.utils.data.datapipes.utils.common'

๐Ÿ› Bug

Seems like there are some messed-up import paths? Some things are torch.data whereas others are torchdata?

(3.7.11) tristanr@tristanr-arch2 ~> pip install -e git+https://github.com/pytorch/data#egg=torchdata
Obtaining torchdata from git+https://github.com/pytorch/data#egg=torchdata
  Cloning https://github.com/pytorch/data to ./venvs/3.7.11/src/torchdata
  Running command git clone --filter=blob:none -q https://github.com/pytorch/data /home/tristanr/venvs/3.7.11/src/torchdata
  Resolved https://github.com/pytorch/data to commit 2d94ebc6e95d4bd475a98e947781e58410386a10
  Preparing metadata (setup.py) ... done
Requirement already satisfied: requests in ./venvs/3.7.11/lib/python3.7/site-packages (from torchdata) (2.26.0)
Requirement already satisfied: torch in ./venvs/3.7.11/lib/python3.7/site-packages (from torchdata) (1.10.0)
Requirement already satisfied: certifi>=2017.4.17 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (2021.10.8)
Requirement already satisfied: charset-normalizer~=2.0.0 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (2.0.7)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (1.26.7)
Requirement already satisfied: idna<4,>=2.5 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (3.3)
Requirement already satisfied: typing-extensions in ./venvs/3.7.11/lib/python3.7/site-packages (from torch->torchdata) (3.10.0.2)
Installing collected packages: torchdata
  Attempting uninstall: torchdata
    Found existing installation: torchdata 0.2.0
    Uninstalling torchdata-0.2.0:
      Successfully uninstalled torchdata-0.2.0
  Running setup.py develop for torchdata
Successfully installed torchdata-0.1.0a0+2d94ebc
(3.7.11) tristanr@tristanr-arch2 ~> python -c "import torchdata"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/__init__.py", line 2, in <module>
    from . import datapipes
  File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/__init__.py", line 3, in <module>
    from . import iter
  File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/iter/__init__.py", line 24, in <module>
    from torchdata.datapipes.iter.load.online import (
  File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/iter/load/online.py", line 10, in <module>
    from torchdata.datapipes.utils import StreamWrapper
  File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/utils/__init__.py", line 2, in <module>
    from torch.utils.data.datapipes.utils.common import StreamWrapper
ImportError: cannot import name 'StreamWrapper' from 'torch.utils.data.datapipes.utils.common' (/home/tristanr/venvs/3.7.11/lib/python3.7/site-packages/torch/utils/data/datapipes/utils/common.py)

To Reproduce

Steps to reproduce the behavior:

  1. pip install -e git+https://github.com/pytorch/data#egg=torchdata
  2. python -c "import torchdata"

Expected behavior

No import error

Environment

PyTorch version: 1.10.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Arch Linux (x86_64)
GCC version: (GCC) 11.1.0
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.33

Python version: 3.7.11 (default, Nov 22 2021, 11:26:35)  [GCC 11.1.0] (64-bit runtime)
Python platform: Linux-5.14.16-arch1-1-x86_64-with-arch
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] botorch==0.5.1
[pip3] gpytorch==1.5.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.4
[pip3] torch==1.10.0
[pip3] torchdata==0.1.0a0+2d94ebc
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.11.1
[pip3] torchx==0.1.2.dev0
[conda] blas                      1.0                         mkl  
[conda] magma-cuda110             2.5.2                         1    pytorch
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-include               2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py39h7f8727e_0  
[conda] mkl_fft                   1.3.1            py39hd3c417c_0  
[conda] mkl_random                1.2.2            py39h51133e4_0  
[conda] mypy_extensions           0.4.3            py39h06a4308_0  
[conda] numpy                     1.20.3           py39hf144106_0  
[conda] numpy-base                1.20.3           py39h74d4b33_0  
[conda] numpydoc                  1.1.0              pyhd3eb1b0_1

Additional context

Use IoPath to download HTTP Url

Since IoPath can download from different sources (HTTP, S3, Google, etc.), we should add an IoPathDownloader to handle all of them.

Router for same functional API

🚀 Feature

We could support different DataPipes with the same functionality behind a single functional API, using a router DataPipe.
For open, we could support:

  • URL
  • IoPath
  • Local File

Option 1:
Based on the input (type), we could route it to the corresponding DataPipe, with an extra argument to functional_datapipe:

register_functional_api("open", router_fn=...)


@functional_datapipe("open", route_class=...)
class HTTPReader(IterDataPipe):
    ...

Option 2:
Implement a dispatcher for the functional API. This may be more suitable, but it needs to be designed carefully.

[RFC] Wrapper/Proxy for Stream

🚀 Feature

Discussed with @NivekT about the wrapper class for all streams:
Pros:

  • We can add a __del__ method to close the file stream automatically when ref count becomes 0 for wrapper. It would eliminate all warnings.
  • A wrapper class can unify the reading API for file streams. (For OnDiskCache, I would prefer a unified API to read stream, otherwise I have to handle all different cases)
    • Local file stream, we can use read() to read everything into memory
    • When we set stream=True for large file, the requests.Response doesn't support read. It only supports iter_content or __iter__ to read chunk by chunk.

Cons:

  • Thanks to @NivekT, it needs extra care about magic methods.

Reference: #35 (comment), #65 (comment)

cc: @VitalyFedyunin

fastai's DataBlock

This new data API looks great and has many similarities with the DataBlock api from fastai.
We have a discord channel for fastai dev and we would love to help/test/integrate these cool pipelines.

KeyZipper improvement

Currently multiple stacked KeyZipper would create a recursive data structure:

dp = KeyZipper(dp, ref_dp1, lambda x: x)
dp = KeyZipper(dp, ref_dp2, lambda x: x[0])
dp = KeyZipper(dp, ref_dp3, lambda x: x[0][0])

This is super annoying if we are using the same key for each KeyZipper. At the end, it yields `(((dp, ref_dp1), ref_dp2), ref_dp3)`.

We should either accept multiple reference DataPipes for KeyZipper to preserve the same key, or have some expand or collate function to convert the result to (dp, (ref_dp1, ref_dp2, ref_dp3)).

  • If we take multiple reference DataPipes and ref_key_fns, we need to figure out how to ensure the buffer doesn't blow up.
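For illustration, a small collate function that un-nests such a result (hypothetical, not part of torchdata; it assumes the leading `dp` item is not itself a 2-tuple, otherwise it would over-flatten):

```python
def flatten_zipped(nested):
    """Collate (((dp, r1), r2), r3) into (dp, (r1, r2, r3))."""
    refs = []
    # Peel off one reference per stacked KeyZipper, innermost last.
    while isinstance(nested, tuple) and len(nested) == 2:
        nested, ref = nested
        refs.append(ref)
    return nested, tuple(reversed(refs))

print(flatten_zipped(((("dp", "r1"), "r2"), "r3")))
# ('dp', ('r1', 'r2', 'r3'))
```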

cc: @VitalyFedyunin @NivekT

Error with HashCheckerIterDataPipe

๐Ÿ› Bug

Using HashCheckerIterDataPipe to implement an SST2 dataset within torchtext causes test failures for unittest_linux_py3.6 and for all Python versions on the Windows platform.

  • Here is the CircleCI link for all the test failures: failures.
  • Here is the Dataset implementation where the HashCheckerIterDataPipe is used: code pointer

I believe there may be changes to how io.seek() works from python 3.6 to 3.7 that could be causing the failures in unittest_linux_py3.6 and unittest_windows_py3.6. I'm not really sure why the other windows unit tests are failing.

To Reproduce

Steps to reproduce the behavior:

  1. Patch commit 62e6fb2 in Nayef211/text repo
  2. Create PR against pytorch/text repo
  3. Look at CircleCI unit test failures

Error for unittest_linux_py3.6 and unittest_windows_py3.6

self = <torchdata.datapipes.iter.util.hashchecker.HashCheckerIterDataPipe object at 0x7f937f867ba8>

    def __iter__(self):
    
        for file_name, stream in self.source_datapipe:
            if self.hash_type == "sha256":
                hash_func = hashlib.sha256()
            else:
                hash_func = hashlib.md5()
    
            while True:
                # Read by chunk to avoid filling memory
                chunk = stream.read(1024 ** 2)
                if not chunk:
                    break
                hash_func.update(chunk)
    
            # TODO(VitalyFedyunin): this will not work (or work crappy for non-seekable steams like http)
            if self.rewind:
>               stream.seek(0)
E               io.UnsupportedOperation: seek

env/lib/python3.6/site-packages/torchdata-0.1.0a0+7772406-py3.6.egg/torchdata/datapipes/iter/util/hashchecker.py:51: UnsupportedOperation

Link to Circle CI Error

Error for all other unittest_windows_py*

self = <torchdata.datapipes.iter.util.hashchecker.HashCheckerIterDataPipe object at 0x000001929F2B5548>

    def __iter__(self):
    
        for file_name, stream in self.source_datapipe:
            if self.hash_type == "sha256":
                hash_func = hashlib.sha256()
            else:
                hash_func = hashlib.md5()
    
            while True:
                # Read by chunk to avoid filling memory
                chunk = stream.read(1024 ** 2)
                if not chunk:
                    break
                hash_func.update(chunk)
    
            # TODO(VitalyFedyunin): this will not work (or work crappy for non-seekable steams like http)
            if self.rewind:
                stream.seek(0)
    
            if file_name not in self.hash_dict:
>               raise RuntimeError("Unspecified hash for file {}".format(file_name))
E               RuntimeError: Unspecified hash for file C:\Users\circleci\.torchtext\cache\SST2\SST-2\train.tsv

env\lib\site-packages\torchdata-0.1.0a0+7772406-py3.7.egg\torchdata\datapipes\iter\util\hashchecker.py:54: RuntimeError

Link to Circle CI Error

Expected behavior

Expect all tests to pass

Environment

Tests pass on devserver environment but fails on CircleCI.

Use iopath in `SaverIterDataPipe`

🚀 Feature

iopath provides an API that can replace open and supports saving to multiple destinations (including S3); we should probably reimplement SaverIterDataPipe to rely on iopath instead.

We could even then rename the functional API so that it's called save instead of save_to_disk.

Switch `FileLoader` to `FileOpener`

Should we change the name of FileLoader to FileOpener?

We split the file-loading functionality into three steps:

  • List file names in a directory: FileLister
  • Open file handles: FileLoader (I personally feel like the name is incorrect.)
  • Read files: dp.map(fn=lambda x: x.read(), input_col=1)
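The three steps above can be sketched with plain Python generators (illustrative stand-ins for FileLister, FileLoader/FileOpener, and the read map; not the torchdata implementations):

```python
import os
import tempfile

def file_lister(root):
    """Step 1: list file paths in a directory."""
    for name in sorted(os.listdir(root)):
        yield os.path.join(root, name)

def file_opener(paths):
    """Step 2: pair each path with an open file handle."""
    for path in paths:
        yield path, open(path, "r")

def read_all(pairs):
    """Step 3: read each handle fully, closing it afterwards."""
    for path, f in pairs:
        with f:
            yield path, f.read()

with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "a.txt"), "w") as f:
        f.write("hello")
    results = list(read_all(file_opener(file_lister(root))))
    print([(os.path.basename(p), text) for p, text in results])
    # [('a.txt', 'hello')]
```

Splitting listing from opening is what makes the middle step's name matter: it opens handles rather than loading contents, which is the argument for FileOpener over FileLoader.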

cc: @VitalyFedyunin @NivekT

Prevent (or at least flag) DataPipe attributes and methods that use existing functional datapipe names

🚀 Feature

We should have some checks/tests that flag when a DataPipe has an attribute/method that shares the same name as an existing functional datapipe. Those names are the ones defined inside the decorator @functional_datapipe('NAME'), such as map, batch, and zip.

For example, Batcher (or BatcherIterDataPipe) has the functional datapipe name batch. However, currently there is nothing to prevent other IterDataPipes from using batch as the name of an attribute or method.

The change in the following PR is a good example.

Motivation

If this feature is not implemented, then a DataPipe can have multiple attributes/methods with the same name, potentially causing confusion and bugs.

Alternatives

Ideally, we should be able to flag this issue during development (within IDEs).

If we cannot automatically prevent this during development, we can have a check in register_datapipe_as_function or a CI check that ensures all attributes and methods are compliant.

mypy also should be able to flag this issue if the .pyi file has a complete set of method interfaces for all built-in DataPipes (including those in TorchData). This becomes trickier for user-defined DataPipes.
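A sketch of what such a CI-style check could look like (`FUNCTIONAL_NAMES` and `find_collisions` are hypothetical; a real check would read the registered names from the functional-datapipe registry instead of a hard-coded set):

```python
# Names registered via @functional_datapipe(...) -- hard-coded here
# for illustration only.
FUNCTIONAL_NAMES = {"map", "batch", "zip", "shuffle", "filter"}

def find_collisions(cls):
    """Return attribute/method names on cls that collide with
    registered functional datapipe names."""
    return sorted(
        name for name in vars(cls)
        if not name.startswith("_") and name in FUNCTIONAL_NAMES
    )

class BadDataPipe:
    def batch(self, size):   # collides with the functional name "batch"
        pass

class GoodDataPipe:
    def batched(self, size): # no collision
        pass

print(find_collisions(BadDataPipe))   # ['batch']
print(find_collisions(GoodDataPipe))  # []
```

Run over all built-in DataPipe classes, a check like this would catch the Batcher-style collisions described above before they ship.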

[RFC] Disable the multiple Iterators per IterDataPipe (Make Iterator singleton)

This is the initial draft. I will complete it shortly.

The state of the iterator is attached to each IterDataPipe instance. This is super useful for:

  • Determinism
  • Snapshotting
  • Benchmarking -> It becomes easier to register each DataPipe since each has a different ID in the graph.

Implementation Options:

  • Each DataPipe has an attribute _iterator as the placeholder for __iter__ calls.
  • Implement __next__. (My preference)
    • It would make the instance picklable. Previously, the generator function (__iter__) was not picklable -> this helps multiprocessing and snapshotting.
    • __iter__ returns self (Forker(self) may be another option, not 100% sure).
    • IMO, this is super useful, as we can track the number of __next__ calls to do a fast-forward. The state of iteration is attached to the DataPipe instance rather than to a temporary instance created by __iter__, whose internal state we couldn't track. (We can easily track state such as the RNG, iteration number, and buffer, as they are attached to the self instance.)
    • Since the source DataPipe is attached to each DataPipe but the actual iteration happens at the iterator level, the graph constructed by DataLoaderV2 doesn't match the actual execution graph.

DataLoader should raise an error if there are two DataPipe instances with the same id in the graph. (Another option is for DataLoader to fork automatically.)
Users should use Forker whenever they want a single DataPipe to appear twice in the graph.
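A minimal sketch of the `__next__` option (class name is mine, not the RFC's): `__iter__` returns `self`, so the iteration state — including a step counter usable for fast-forwarding — lives on the instance itself rather than on a throwaway generator:

```python
class SingletonIterPipe:
    # Sketch: the instance is its own (singleton) iterator, so state such as
    # RNG, buffers, and position can be snapshotted or fast-forwarded.
    def __init__(self, source):
        self.source = source
        self._iterator = None  # placeholder attribute from option 1
        self.steps = 0         # number of __next__ calls, usable for fast-forward

    def __iter__(self):
        if self._iterator is None:
            self._iterator = iter(self.source)
        return self

    def __next__(self):
        if self._iterator is None:
            self._iterator = iter(self.source)
        value = next(self._iterator)  # StopIteration propagates naturally
        self.steps += 1
        return value
```

Because `iter(dp) is dp`, a second consumer of the same instance shares (and advances) the same state — exactly the behavior the RFC proposes to make explicit via Forker.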

cc: @VitalyFedyunin @NivekT

TarArchiveReader is not functioning with HTTPReader or GDriveReader

This issue was discovered as part of #40. The TarArchiveReader implementation is likely wrong:

  1. An error is raised when we attempt to use TarArchiveReader immediately after HTTPReader because the HTTP stream does not support the operation seek:
file_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
http_reader_dp = HttpReader(IterableWrapper([file_url]))
tar_dp = http_reader_dp.read_from_tar()
for fname, stream in tar_dp:
    print(f"{fname}: {stream.read()}")

It returns an error that looks something like this:

Traceback (most recent call last):
  File "/Users/ktse/data/test/test_stream.py", line 66, in <module>
    for fname, stream in tar_dp:
  File "/Users/.../data/torchdata/datapipes/iter/util/tararchivereader.py", line 62, in __iter__
    raise e
  File "/Users/.../data/torchdata/datapipes/iter/util/tararchivereader.py", line 48, in __iter__
    tar = tarfile.open(fileobj=cast(Optional[IO[bytes]], data_stream), mode=self.mode)
  File "/Users/.../miniconda3/envs/pytorch/lib/python3.9/tarfile.py", line 1609, in open
    saved_pos = fileobj.tell()
io.UnsupportedOperation: seek

Currently, you can work around this by downloading the file in advance (or caching it with OnDiskCacheHolderIterDataPipe). In those cases, TarArchiveReader works as intended.
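Another workaround sketch: buffer the non-seekable HTTP stream into memory before handing it to `tarfile`, which needs `tell()`/`seek()`. This is fine for small archives; for large ones, caching to disk as above is the better option:

```python
import io
import tarfile


def open_tar_from_stream(stream) -> tarfile.TarFile:
    # tarfile.open(fileobj=...) calls tell()/seek(), which raw HTTP streams
    # don't support. Buffering the whole stream into BytesIO makes it seekable
    # (at the cost of holding the entire archive in memory).
    buffered = io.BytesIO(stream.read())
    return tarfile.open(fileobj=buffered, mode="r:*")
```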

  2. TarArchiveReader also doesn't work with GDriveReader because of the return type:
amazon_review_url = "https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM"
gdrive_reader_dp = OnlineReader(IterableWrapper([amazon_review_url]))
tar_dp = gdrive_reader_dp.read_from_tar()

This is because validate_pathname_binary_tuple requires BufferedIOBase. Perhaps it should accept HTTP responses as well?

def validate_pathname_binary_tuple(data: Tuple[str, BufferedIOBase]):
    if not isinstance(data, tuple):
        raise TypeError("pathname binary data should be tuple type, but got {}".format(type(data)))
    if len(data) != 2:
        raise TypeError("pathname binary tuple length should be 2, but got {}".format(str(len(data))))
    if not isinstance(data[0], str):
        raise TypeError("pathname binary tuple should have string type pathname, but got {}".format(type(data[0])))
    if not isinstance(data[1], BufferedIOBase):
        raise TypeError(
            "pathname binary tuple should have BufferedIOBase based binary type, but got {}".format(type(data[1]))
        )

test/test_stream.py:None (test/test_stream.py)
test_stream.py:79: in <module>
    for fname, stream in tar_dp:
../torchdata/datapipes/iter/util/tararchivereader.py:43: in __iter__
    validate_pathname_binary_tuple(data)
../torchdata/datapipes/utils/common.py:74: in validate_pathname_binary_tuple
    raise TypeError(
E   TypeError: pathname binary tuple should have BufferedIOBase based binary type, but got <class 'urllib3.response.HTTPResponse'>
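One possible fix is to loosen the check from BufferedIOBase to any readable stream — urllib3's HTTPResponse is an io.IOBase subclass, for instance, but not a BufferedIOBase. A sketch (the `_relaxed` name is mine, not a proposed API):

```python
import io
from typing import Any, Tuple


def validate_pathname_binary_tuple_relaxed(data: Tuple[str, Any]) -> None:
    # Accept any readable stream instead of insisting on BufferedIOBase,
    # so HTTP/GDrive response objects pass validation too.
    if not (isinstance(data, tuple) and len(data) == 2):
        raise TypeError("pathname binary data should be a 2-tuple, but got {!r}".format(data))
    if not isinstance(data[0], str):
        raise TypeError("pathname should be str, but got {}".format(type(data[0])))
    if not (isinstance(data[1], io.IOBase) or hasattr(data[1], "read")):
        raise TypeError("binary stream should be readable, but got {}".format(type(data[1])))
```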

cc @VitalyFedyunin @ejguan

pip install torchdata

Is the PyPI package torchdata associated with this project? I naively ran pip install torchdata when trying to install this package and got an unexpected, unrelated project.

https://pypi.org/project/torchdata/

That doesn't seem to have any GitHub sources corresponding to it, so I'm wondering what's going on with that package.

Consider linting `# TODO` lines

TODO lines are frequently left unattended; we should consider requiring that TODO lines be bound to issues (like # TODO(issue_id)).
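A sketch of what the lint could look like. The exact convention (here, `TODO(#123)`) is a guess; the real rule would be whatever the project settles on:

```python
import re
from typing import List

_TODO = re.compile(r"\bTODO\b")
_TODO_WITH_ISSUE = re.compile(r"\bTODO\(#\d+\)")


def find_unbound_todos(source: str) -> List[int]:
    # Return 1-based line numbers of TODOs that don't reference an issue,
    # e.g. "# TODO fix later" fails while "# TODO(#123): fix" passes.
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if _TODO.search(line) and not _TODO_WITH_ISSUE.search(line)
    ]
```

This could run as a CI step over the repository and fail the build when the returned list is non-empty.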

datapipe serialization support / cloudpickle / parallel support

I've been looking at how we might go about supporting torchdata within TorchX and with components. I was wondering what the serialization options were for transforms and what that might look like.

There are a couple of common patterns that would be nice to support:

  • general data transforms (with potentially distributed preprocessing via torch elastic/ddp)
  • data splitting into train/validation sets
  • summary statistic computation

For the general transforms and handling arbitrary user data we were wondering how we might go about serializing the data pipes and transforms for use in a pipeline with TorchX.

There are a couple of options here:

  1. add serialization support to the transforms so you can serialize them (lambdas?)
  2. generate a .py file from a provided user function
  3. pickle the transform using something like cloudpickle/torch.package and load it in a trainer app
  4. ask the user to write a .py file that uses the datapipes as the transform and create a TorchX component (what we currently have)

Has there been any thought about how to support this well? Is there extra work that should be done here to make this better?

Are DataPipes guaranteed to be pickle safe and is there anything that needs to be done to support that?
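On pickle safety, the usual sticking point is lambdas: module-level functions and class-based transforms pickle with the stdlib (by reference), while lambdas do not — which is why option 3 reaches for cloudpickle, which serializes lambdas by value. A small illustration with hypothetical names (`MapPipe` is a stand-in for a DataPipe holding a user transform, not a torchdata class):

```python
import pickle


class MapPipe:
    # Minimal stand-in for a DataPipe that holds a user transform;
    # the instance is pickle-safe only if self.fn is.
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, xs):
        return [self.fn(x) for x in xs]


def is_picklable(obj) -> bool:
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False
```

So a pipeline built from named functions can be shipped to TorchX workers with plain pickle, but anything closing over lambdas (or locals) needs cloudpickle/torch.package or the generate-a-.py-file route.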

I was also wondering whether there are multiprocessing-based DataPipes and how that works, since this seems comparable. I did see https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py but didn't see any examples of how to use it to achieve traditional PyTorch DataLoader-style workers.

P.S. Should this be on the PyTorch discussion forums instead? It's half feature request, half questions, so I wasn't sure where best to put it.

cc @kiukchung
