fastai / fastai1

v1 of the fastai library. v2 is the current version. v1 is still supported for bug fixes, but will not receive new features.

Home Page: http://fastai1.fast.ai

License: Apache License 2.0


fastai1's Introduction

NB: this repo is for v1 of the fastai library. v2 is the current version. v1 is still supported for bug fixes, but will not receive new features.


fastai

The fastai library simplifies training fast and accurate neural nets using modern best practices. See the fastai website to get started. The library is based on research into deep learning best practices undertaken at fast.ai, and includes "out of the box" support for vision, text, tabular, and collab (collaborative filtering) models. For brief examples, see the examples folder; detailed examples are provided in the full documentation. For instance, here's how to train an MNIST model using resnet18 (from the vision example):

from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(1)

Note for course.fast.ai students

This document is written for fastai v1, which we use for version 3 of the course.fast.ai deep learning courses. If you're following along with a course at course18.fast.ai (i.e. the machine learning course, which isn't updated for v1) you need to use fastai 0.7; please follow the installation instructions here.

Installation

NB: fastai v1 currently supports Linux only, and requires PyTorch v1 and Python 3.6 or later. Windows support is at an experimental stage: it should work fine but it's much slower and less well tested. Since Macs don't currently have good Nvidia GPU support, we do not currently prioritize Mac development.

fastai-1.x can be installed with either the conda or pip package managers, and also from source. You can't simply run a bare install, since you first need to get the correct pytorch version installed; to get fastai-1.x installed, choose one of the installation recipes below using your favorite python package manager. Note that PyTorch v1 and Python 3.6 are the minimum version requirements.

It's highly recommended that you install fastai and its dependencies in a virtual environment (conda or otherwise), so that you don't interfere with system-wide python packages. This isn't mandatory, but if you experience problems with any dependency packages, please consider using a fresh virtual environment just for fastai.

Starting with pytorch-1.x you no longer need to install a special pytorch-cpu version: the normal pytorch package works both with and without a GPU. A cpu-only build is still available if you prefer it.

If you experience installation problems, please read about installation issues.

If you are planning on using fastai in the jupyter notebook environment, make sure to also install the corresponding packages.

More advanced installation issues, such as installing only partial dependencies are covered in a dedicated installation doc.

Conda Install

conda install -c pytorch -c fastai fastai=1.0.61

This will install the pytorch build with the latest cudatoolkit version. If you need a higher or lower CUDA XX build (e.g. CUDA 9.0), follow the instructions here to install the desired pytorch build.

Note that JPEG decoding can be a bottleneck, particularly if you have a fast GPU. You can optionally install an optimized JPEG decoder as follows (Linux):

conda uninstall --force jpeg libtiff -y
conda install -c conda-forge libjpeg-turbo pillow==6.0.0
CC="cc -mavx2" pip install --no-cache-dir -U --force-reinstall --no-binary :all: --compile pillow-simd

If you only care about faster JPEG decompression, the last command above can use either pillow or pillow-simd; the latter also speeds up other image processing operations. For the full story see Pillow-SIMD.
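If you want to confirm which Pillow build ended up active after the commands above: pillow-simd releases append a .postN suffix to the upstream version string, so inspecting PIL.__version__ distinguishes the two. A minimal sketch (is_pillow_simd is an illustrative helper, not part of fastai or Pillow):

```python
def is_pillow_simd(version: str) -> bool:
    """Heuristic: pillow-simd versions look like '7.0.0.post3',
    while stock Pillow versions look like '7.0.0'."""
    return ".post" in version

# At runtime you would check the installed build, e.g.:
#   import PIL; print(is_pillow_simd(PIL.__version__))
print(is_pillow_simd("7.0.0.post3"))  # True
print(is_pillow_simd("6.0.0"))        # False
```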

PyPI Install

pip install fastai==1.0.61

By default pip will install the latest pytorch with the latest cudatoolkit. If your hardware doesn't support the latest cudatoolkit, follow the instructions here to install a pytorch build that fits your hardware.

Bug Fix Install

If a bug fix was made in git and you can't wait until a new release is made, you can install the bleeding edge version of fastai with:

pip install git+https://github.com/fastai/fastai1.git

Developer Install

The following instructions will result in a pip editable install, so that you can git pull at any time and your environment will automatically get the updates:

git clone https://github.com/fastai/fastai1
cd fastai1
tools/run-after-git-clone
pip install -e ".[dev]"

Next, you can test that the build works by starting the jupyter notebook:

jupyter notebook

and executing an example notebook. For example load examples/tabular.ipynb and run it.

Please refer to CONTRIBUTING.md and Notes For Developers for more details on how to contribute to the fastai project.

Building From Source

If for any reason you can't use the prepackaged packages and have to build from source, this section is for you.

  1. To build pytorch from source follow the complete instructions. Remember to first install CUDA, CuDNN, and other required libraries as suggested - everything will be very slow without those libraries built into pytorch.

  2. Next, you will also need to build torchvision from source:

    git clone https://github.com/pytorch/vision
    cd vision
    python setup.py install
  3. When both pytorch and torchvision are installed, validate that you can load each of these libraries:

    import torch
    import torchvision

  4. Finally, proceed with the fastai installation as normal, either through the prepackaged pip or conda builds, or by installing from source ("the developer install") as explained in the sections above.

Installation Issues

If the installation process fails, first make sure your system is supported. And if the problem is still not addressed, please refer to the troubleshooting document.

If you encounter installation problems with conda, make sure you have the latest conda client (conda install will do an update too):

conda install conda

Is My System Supported?

  1. Python: You need to have python 3.6 or higher

  2. CPU or GPU

    The pytorch binary package comes with its own CUDA, CuDNN, NCCL, MKL, and other libraries, so you don't have to install NVIDIA's CUDA and related libraries system-wide unless you need them for something else. If you already have them installed, it doesn't matter which CUDA version is installed system-wide: your system could have CUDA 9.0 libraries, and you can still use a pytorch build with CUDA 10.0 libraries without any problem, since the pytorch binary package is self-contained.

    The only requirement is that you have installed and configured the NVIDIA driver correctly. Usually you can test that by running nvidia-smi. While it's possible that this application is not available on your system, it's very likely that if it doesn't work, then you don't have your NVIDIA drivers configured properly. And remember that a reboot is always required after installing NVIDIA drivers.

  3. Operating System:

    Since fastai-1.0 relies on pytorch-1.0, you need to be able to install pytorch-1.0 first.

    As of this moment pytorch.org's 1.0 version supports:

    Platform   GPU      CPU
    linux      binary   binary
    mac        source   binary
    windows    binary   binary

    Legend: binary = can be installed directly; source = needs to be built from source.

    If there is no pytorch preview conda or pip package available for your system, you may still be able to build it from source.

  4. How do you know which pytorch cuda version build to choose?

    It depends on the version of the installed NVIDIA driver. Here are the requirements for CUDA versions supported by pre-built pytorch releases:

    CUDA Toolkit   Minimum NVIDIA driver (Linux x86_64)
    CUDA 10.0      >= 410.00
    CUDA 9.0       >= 384.81
    CUDA 8.0       >= 367.48

    So if your NVIDIA driver version is below 384.81, you can only use CUDA 8.0. Of course, you can upgrade to more recent drivers if your card supports them.

    You can find a complete table with all variations here.

    If you use NVIDIA driver 410+, you most likely want to install the cudatoolkit=10.0 pytorch variant, via:

    conda install -c pytorch pytorch cudatoolkit=10.0

    or if you need a lower version, use one of:

    conda install -c pytorch pytorch cudatoolkit=8.0
    conda install -c pytorch pytorch cudatoolkit=9.0

    For other options refer to the complete list of the available pytorch variants.
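The driver-to-toolkit table above can be encoded as a tiny helper that picks the newest prebuilt CUDA variant your driver supports. This sketch uses only the three rows listed in this section (max_cuda_for_driver is a made-up name, not a fastai or pytorch API):

```python
# Minimum Linux x86_64 driver version for each prebuilt CUDA toolkit,
# newest toolkit first (values copied from the table above).
DRIVER_REQUIREMENTS = [
    ("10.0", 410.00),
    ("9.0", 384.81),
    ("8.0", 367.48),
]

def max_cuda_for_driver(driver_version: float):
    """Return the newest CUDA toolkit usable with this NVIDIA driver,
    or None if the driver is too old for any prebuilt pytorch."""
    for cuda, min_driver in DRIVER_REQUIREMENTS:
        if driver_version >= min_driver:
            return cuda
    return None

print(max_cuda_for_driver(430.50))  # 10.0
print(max_cuda_for_driver(390.00))  # 9.0
print(max_cuda_for_driver(360.00))  # None -> build from source or upgrade the driver
```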

Updates

In order to update your environment, simply install fastai in exactly the same way you did the initial installation.

The top-level files environment.yml and environment-cpu.yml belong to the old fastai (0.7); conda env update is no longer the way to update your fastai-1.x environment. These files remain because the fastai course-v2 video instructions rely on this setup. Eventually, once fastai course-v3 parts 1 and 2 are completed, they will probably be moved to where they belong - under old/.

Contribution guidelines

If you want to contribute to fastai, be sure to review the contribution guidelines. This project adheres to fastai's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please use the fastai forum for general questions and discussion.

The fastai project strives to abide by generally accepted best practices in open-source software development.

History

A detailed history of changes can be found here.

Copyright

Copyright 2017 onwards, fast.ai, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project's files except in compliance with the License. A copy of the License is provided in the LICENSE file in this repository.


fastai1's Issues

Runtime error while fine tuning LM with QRNN

Describe the bug

Getting the following error:

RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1607370141920/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.

Provide your installation details

=== Software === 
python        : 3.7.9
fastai        : 1.0.61
fastprogress  : 0.2.7
torch         : 1.7.1
nvidia driver : 430.50
torch cuda    : 10.1 / is available
torch cudnn   : 7603 / is enabled

=== Hardware === 
nvidia gpus   : 2
torch devices : 2
  - gpu0      : 48600MB | Quadro RTX 8000
  - gpu1      : 48601MB | Quadro RTX 8000

=== Environment === 
platform      : Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid
distro        : #30~18.04.1-Ubuntu SMP Fri Jan 17 06:14:09 UTC 2020
conda env     : ulmfit
python        : /home/anaconda3/envs/ulmfit/bin/python
sys.path      : /home/Documents/Urszula/zonage/ulmfit/language-models
/home/anaconda3/envs/ulmfit/lib/python37.zip
/home/anaconda3/envs/ulmfit/lib/python3.7
/home/anaconda3/envs/ulmfit/lib/python3.7/lib-dynload
/home/.local/lib/python3.7/site-packages
/home/anaconda3/envs/ulmfit/lib/python3.7/site-packages
/home/anaconda3/envs/ulmfit/lib/python3.7/site-packages/locket-0.2.1-py3.7.egg
/home/anaconda3/envs/ulmfit/lib/python3.7/site-packages/IPython/extensions

To Reproduce

Following the notebook https://github.com/piegu/language-models/blob/master/lm3-french-classifier-amazon.ipynb by @piegu

After loading my databunch

data_lm = load_data(path, f'{lang}_databunch_lm_aws_sp15_multifit_v2', bs=bs)

Setting params

config = awd_lstm_lm_config.copy()
config['qrnn'] = True
config['n_hid'] = 1550 #default 1152
config['n_layers'] = 4 #default 3

Running

perplexity = Perplexity()
learn_lm = language_model_learner(data_lm, AWD_LSTM, config=config, pretrained_fnames=lm_fns3, drop_mult=0.3, 
                                  metrics=[error_rate, accuracy, perplexity]).to_fp16()

The error occurs in

learn_lm.lr_find()

Expected behavior

LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.

Additional context
Full error:


/home/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/text/models/qrnn.py:104: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370141920/work/torch/csrc/autograd/variable.cpp:491.)
  z_gate.tanh_()
/home/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/text/models/qrnn.py:105: UserWarning: Output 1 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370141920/work/torch/csrc/autograd/variable.cpp:491.)
  f_gate.sigmoid_()

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-28-a437f15b10c2> in <module>
----> 1 learn_lm.fit_one_cycle(2, lr*10, wd=wd, moms=(0.8,0.7))

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, final_div, wd, callbacks, tot_epochs, start_epoch)
     21     callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor, pct_start=pct_start,
     22                                        final_div=final_div, tot_epochs=tot_epochs, start_epoch=start_epoch))
---> 23     learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
     24 
     25 def fit_fc(learn:Learner, tot_epochs:int=1, lr:float=defaults.lr,  moms:Tuple[float,float]=(0.95,0.85), start_pct:float=0.72,

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    198         else: self.opt.lr,self.opt.wd = lr,wd
    199         callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
--> 200         fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
    201 
    202     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/basic_train.py in fit(epochs, learn, callbacks, metrics)
     99             for xb,yb in progress_bar(learn.data.train_dl, parent=pbar):
    100                 xb, yb = cb_handler.on_batch_begin(xb, yb)
--> 101                 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
    102                 if cb_handler.on_batch_end(loss): break
    103 

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     24     if not is_listy(xb): xb = [xb]
     25     if not is_listy(yb): yb = [yb]
---> 26     out = model(*xb)
     27     out = cb_handler.on_loss_begin(out)
     28 

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
    115     def forward(self, input):
    116         for module in self:
--> 117             input = module(input)
    118         return input
    119 

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/text/models/awd_lstm.py in forward(self, input, from_embeddings)
    119         new_hidden,raw_outputs,outputs = [],[],[]
    120         for l, (rnn,hid_dp) in enumerate(zip(self.rnns, self.hidden_dps)):
--> 121             raw_output, new_h = rnn(raw_output, self.hidden[l])
    122             new_hidden.append(new_h)
    123             raw_outputs.append(raw_output)

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/text/models/qrnn.py in forward(self, inp, hid)
    156         if self.bidirectional: inp_bwd = inp.clone()
    157         for i, layer in enumerate(self.layers):
--> 158             inp, h = layer(inp, None if hid is None else hid[2*i if self.bidirectional else i])
    159             new_hid.append(h)
    160             if self.bidirectional:

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/anaconda3/envs/ulmfit/lib/python3.7/site-packages/fastai/text/models/qrnn.py in forward(self, inp, hid)
    103         else:                z_gate,f_gate        = y.chunk(2, dim=2)
    104         z_gate.tanh_()
--> 105         f_gate.sigmoid_()
    106         if self.zoneout and self.training:
    107             mask = dropout_mask(f_gate, f_gate.size(), self.zoneout).requires_grad_(False)

RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1607370141920/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch. 


TypeError: intercept_args() got an unexpected keyword argument 'persistent_workers'

Hi everyone, I am getting an error when creating a DataBunch object with the persistent_workers parameter of the torch Dataloader.

PyTorch version - 1.8.2

transforms = get_transforms(max_rotate=30, max_zoom=1.5, max_lighting=0.35)

data = (ObjectItemList.from_folder(img_path)
.split_by_rand_pct(0.1)
.label_from_func(bbox_class)
.transform(transforms, size=224, tfm_y=True)
.databunch(bs=4, num_workers=4, persistent_workers=True)
)


TypeError Traceback (most recent call last)
Cell In [7], line 3
1 transforms = get_transforms(max_rotate=30, max_zoom=1.5, max_lighting=0.35)
----> 3 data = (ObjectItemList.from_folder(img_path / "images")
4 .split_by_rand_pct(0.1)
5 .label_from_func(bbox_class)
6 .transform(transforms, size=448, tfm_y=True)
7 .databunch(bs=4, collate_fn=collate_fn, num_workers=4, persistent_workers=True)
8 )
9 data.chip_size = 448
10 data

File ~\AppData\Local\conda\envs\dec2022\lib\site-packages\fastai\data_block.py:553, in LabelLists.databunch(self, path, bs, val_bs, num_workers, dl_tfms, device, collate_fn, no_check, **kwargs)
551 "Create an DataBunch from self, path will override self.path, kwargs are passed to DataBunch.create."
552 path = Path(ifnone(path, self.path))
--> 553 data = self.x._bunch.create(self.train, self.valid, test_ds=self.test, path=path, bs=bs, val_bs=val_bs,
554 num_workers=num_workers, dl_tfms=dl_tfms, device=device, collate_fn=collate_fn, no_check=no_check, **kwargs)
555 if getattr(self, 'normalize', False):#In case a normalization was serialized
556 norm = self.normalize

File ~\AppData\Local\conda\envs\dec2022\lib\site-packages\fastai\basic_data.py:118, in DataBunch.create(cls, train_ds, valid_ds, test_ds, path, bs, val_bs, num_workers, dl_tfms, device, collate_fn, no_check, **dl_kwargs)
116 datasets = cls._init_ds(train_ds, valid_ds, test_ds)
117 val_bs = ifnone(val_bs, bs)
--> 118 dls = [DataLoader(d, b, shuffle=s, drop_last=s, num_workers=num_workers, **dl_kwargs) for d,b,s in
119 zip(datasets, (bs,val_bs,val_bs,val_bs), (True,False,False,False)) if d is not None]
120 return cls(*dls, path=path, device=device, dl_tfms=dl_tfms, collate_fn=collate_fn, no_check=no_check)

File ~\AppData\Local\conda\envs\dec2022\lib\site-packages\fastai\basic_data.py:118, in (.0)
116 datasets = cls._init_ds(train_ds, valid_ds, test_ds)
117 val_bs = ifnone(val_bs, bs)
--> 118 dls = [DataLoader(d, b, shuffle=s, drop_last=s, num_workers=num_workers, **dl_kwargs) for d,b,s in
119 zip(datasets, (bs,val_bs,val_bs,val_bs), (True,False,False,False)) if d is not None]
120 return cls(*dls, path=path, device=device, dl_tfms=dl_tfms, collate_fn=collate_fn, no_check=no_check)

TypeError: intercept_args() got an unexpected keyword argument 'persistent_workers'

fast.ai course - NameError: name 'widgets' is not defined on 01_intro

Unresolved on forum

(Also this is bold and I can't unbold it)
#hide_output
uploader = widgets.FileUpload()
uploader

NameError Traceback (most recent call last)
in
1 #hide_output
----> 2 uploader = widgets.FileUpload()
3 uploader

NameError: name 'widgets' is not defined

Hello, I didn't see a 'post a new thread' option on my interface (I'm new, and another user posted that new accounts don't have topic posting enabled until they are approved, which I assume is correct), so I am posting here. Images below. I ran the training for the Cats & Dogs code snippet and it was successful. However, when I tried to upload an image of a cat, the upload failed with the following error:

I'd appreciate any help to get this working, excited for the course, thanks!
(Screenshots: Screen Shot 2021-08-01 at 11 57 44 AM, Screen Shot 2021-08-01 at 11 57 39 AM)

hasattr(): attribute name must be string


TypeError Traceback (most recent call last)
in
----> 1 learn.fine_tune(10)

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\callback\schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
155 "Fine tune with freeze for freeze_epochs then with unfreeze from epochs using discriminative LR"
156 self.freeze()
--> 157 self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
158 base_lr /= 2
159 self.unfreeze()

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
110 scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
111 'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 112 self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
113
114 # Cell

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
216 self.opt.set_hypers(lr=self.lr if lr is None else lr)
217 self.n_epoch = n_epoch
--> 218 self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
219
220 def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
158
159 def _with_events(self, f, event_type, ex, final=noop):
--> 160 try: self(f'before_{event_type}'); f()
161 except ex: self(f'after_cancel_{event_type}')
162 self(f'after_{event_type}'); final()

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _do_fit(self)
207 for epoch in range(self.n_epoch):
208 self.epoch=epoch
--> 209 self._with_events(self._do_epoch, 'epoch', CancelEpochException)
210
211 def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
158
159 def _with_events(self, f, event_type, ex, final=noop):
--> 160 try: self(f'before_{event_type}'); f()
161 except ex: self(f'after_cancel_{event_type}')
162 self(f'after_{event_type}'); final()

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _do_epoch(self)
201
202 def _do_epoch(self):
--> 203 self._do_epoch_train()
204 self._do_epoch_validate()
205

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _do_epoch_train(self)
193 def _do_epoch_train(self):
194 self.dl = self.dls.train
--> 195 self._with_events(self.all_batches, 'train', CancelTrainException)
196
197 def _do_epoch_validate(self, ds_idx=1, dl=None):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
158
159 def _with_events(self, f, event_type, ex, final=noop):
--> 160 try: self(f'before_{event_type}'); f()
161 except ex: self(f'after_cancel_{event_type}')
162 self(f'after_{event_type}'); final()

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in all_batches(self)
164 def all_batches(self):
165 self.n_iter = len(self.dl)
--> 166 for o in enumerate(self.dl): self.one_batch(*o)
167
168 def _do_one_batch(self):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in one_batch(self, i, b)
189 b = self._set_device(b)
190 self._split(b)
--> 191 self._with_events(self._do_one_batch, 'batch', CancelBatchException)
192
193 def _do_epoch_train(self):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
158
159 def _with_events(self, f, event_type, ex, final=noop):
--> 160 try: self(f'before_{event_type}'); f()
161 except ex: self(f'after_cancel_{event_type}')
162 self(f'after_{event_type}'); final()

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _do_one_batch(self)
167
168 def _do_one_batch(self):
--> 169 self.pred = self.model(*self.xb)
170 self('after_pred')
171 if len(self.yb):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in __call__(self, event_name)
139
140 def ordered_cbs(self, event): return [cb for cb in self.cbs.sorted('order') if hasattr(cb, event)]
--> 141 def __call__(self, event_name): L(event_name).map(self._call_one)
142
143 def _call_one(self, event_name):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastcore\foundation.py in map(self, f, gen, *args, **kwargs)
152 def range(cls, a, b=None, step=None): return cls(range_of(a, b=b, step=step))
153
--> 154 def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
155 def argwhere(self, f, negate=False, **kwargs): return self._new(argwhere(self, f, negate, **kwargs))
156 def filter(self, f=noop, negate=False, gen=False, **kwargs):

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastcore\basics.py in map_ex(iterable, f, gen, *args, **kwargs)
664 res = map(g, iterable)
665 if gen: return res
--> 666 return list(res)
667
668 # Cell

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastcore\basics.py in __call__(self, *args, **kwargs)
649 if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
650 fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 651 return self.func(*fargs, **kwargs)
652
653 # Cell

c:\users\moham\appdata\local\programs\python\python39\lib\site-packages\fastai\learner.py in _call_one(self, event_name)
142
143 def _call_one(self, event_name):
--> 144 if not hasattr(event, event_name): raise Exception(f'missing {event_name}')
145 for cb in self.cbs.sorted('order'): cb(event_name)
146

TypeError: hasattr(): attribute name must be string

about pred_batch speed

Hi, I seem to be getting the same average inference speed using a loop of predict versus a single pred_batch. Is that normal? I thought batch prediction would be faster. Any help would be appreciated!
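For context on why pred_batch is normally faster: each predict call pays a fixed per-call cost (preprocessing setup, host-to-device transfer, kernel launch) that a single batched call pays only once. A toy cost model of that effect, with made-up numbers rather than fastai measurements:

```python
# Made-up costs, in arbitrary time units - for illustration only.
PER_CALL_OVERHEAD = 5.0  # paid once per predict()/pred_batch() call
PER_ITEM_COST = 1.0      # paid once per image actually scored

def cost_predict_loop(n_items: int) -> float:
    """A loop of single-item predict() calls pays the overhead n times."""
    return n_items * (PER_CALL_OVERHEAD + PER_ITEM_COST)

def cost_pred_batch(n_items: int) -> float:
    """One pred_batch() call pays the overhead exactly once."""
    return PER_CALL_OVERHEAD + n_items * PER_ITEM_COST

print(cost_predict_loop(64))  # 384.0
print(cost_pred_batch(64))    # 69.0
```

If a predict loop and pred_batch really measure the same, the per-item work (e.g. CPU-side image decoding) is probably dominating, so batching has little overhead to amortize.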

How to create static databunch from the images?

Hey authors,
I have a problem with the DataBunch: I create batches of images from my dataset, but the batches keep changing. I want fixed batches of images - every call to data.show_batch() shows a different batch, even though the batches are created from only 2 images, so repeated random batches are being shown.
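Regarding the question above: show_batch draws from the shuffled training DataLoader, so the order changes on every call. Fixing the random seed before sampling makes the draw reproducible; a stdlib-only sketch of the idea (sample_batch is illustrative - the fastai-side equivalent would be seeding the python/numpy/torch RNGs before creating the DataBunch, or showing the validation set, which is not shuffled):

```python
import random

items = ["img_a", "img_b"]  # stand-ins for the two images in the DataBunch

def sample_batch(seed=None):
    """Shuffle the items and return them as one 'batch'.
    With a fixed seed, the result is identical on every call."""
    rng = random.Random(seed)
    batch = items[:]
    rng.shuffle(batch)
    return batch

print(sample_batch(seed=42))  # same order every time this runs
print(sample_batch(seed=42))  # identical to the line above
```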

"Same file error" on resize_images when in place and image size < max_size

Describe the bug

SameFileError on resize_images when in place (same src and dest folder) and image size < max_size

Provide your installation details

To Reproduce

from fastai.vision.utils import *
from pathlib import Path
download_images(dest='.', urls=['https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/Image_created_with_a_mobile_phone.png/1200px-Image_created_with_a_mobile_phone.png'])
resize_images(path='.', dest='.', max_size=None)

=> always generates a shutil.SameFileError

Expected behavior

No error and no copy if the files are exactly the same. I will create a PR myself.
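The proposed fix amounts to guarding the copy: when the image is already within max_size and source and destination resolve to the same file, skip the copy instead of letting shutil raise. A minimal sketch of such a guard (safe_copy is a hypothetical helper, not the actual fastai patch):

```python
import shutil
from pathlib import Path

def safe_copy(src, dest):
    """Copy src to dest, but skip the copy (instead of raising
    shutil.SameFileError) when both resolve to the same file."""
    src, dest = Path(src), Path(dest)
    if src.resolve() == dest.resolve():
        return dest  # in-place "copy": nothing to do
    return Path(shutil.copy(src, dest))
```

Inside resize_images, images smaller than max_size would then go through this guard instead of a bare shutil.copy.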

Screenshots

Additional context

Forum where the issue was discussed : https://forums.fast.ai/t/same-file-path-error-while-resizing-images-lesson-1/97601/12
resize_images is used in lesson 1. Some beginners (like me) could have the same issue.

"urlsave() got an unexpected keyword argument 'timeout'" when using untar_data()

Describe the bug

While trying to download the COCO_SAMPLE dataset using fastai in Google Colab, the error message 'urlsave() got an unexpected keyword argument 'timeout'' is thrown.

Below is the code that was run :

from fastai.data.external import untar_data, URLs
coco_path = untar_data(URLs.COCO_SAMPLE)

and here is the error that is being thrown :
[screenshot: fastai_error]

Provide your installation details

=== Software === 
python       : 3.7.11
fastai       : 1.0.61
fastprogress : 0.2.7
torch        : 1.9.0+cu102
torch cuda   : 10.2 / is **Not available** 

=== Hardware === 
No GPUs available 

=== Environment === 
platform     : Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
distro       : #1 SMP Sat Jun 5 09:50:34 PDT 2021
conda env    : Unknown
python       : /usr/bin/python3
sys.path     : 
/content
/env/python
/usr/lib/python37.zip
/usr/lib/python3.7
/usr/lib/python3.7/lib-dynload
/usr/local/lib/python3.7/dist-packages
/usr/lib/python3/dist-packages
/usr/local/lib/python3.7/dist-packages/IPython/extensions
/root/.ipython
no supported gpus found on this system

To Reproduce

Please execute the piece of code provided in the description of the issue.

Expected behavior

Get the path to the COCO_SAMPLE dataset.

Screenshots

Additional context
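Possibly relevant: `from fastai.data.external import untar_data` is the fastai v2 import path, while the installation details above show fastai 1.0.61, so the traceback may come from a version-mismatched environment rather than a bug in either version (an assumption, not a confirmed diagnosis). A quick way to check which major version an environment resolves before choosing import paths:

```python
def parse_major(version: str) -> int:
    """Return the major component of a version string like '1.0.61'."""
    return int(version.split(".")[0])

# At runtime one could feed this e.g.
# importlib.metadata.version("fastai") (Python 3.8+) and branch on the
# result: major 1 => fastai.datasets imports, major 2 => fastai.data.external.
```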

fastai unet from binary data rather than files?

Hi,

I was wondering whether it is possible to run unet image segmentation training in fastai without image files.

All the code I have found sets up fastai with 2D image files for data and labels. For my particular application I would like to avoid creating 3-channel PNG images and instead use a numpy-style 2D array for each image, essentially greyscale.

There appear to be two potential routes:

  1. Subclass the Dataset class and create two objects each for train (data and segmentation) and validation (data and segmentation), then call DataBunch.create(train_ads, validation_ads) and use this DataBunch to create the unet. See the example in a Google Colab here.

Although the unet seems to be created successfully, the training call fails, and I am not sure why.

  2. Subclass ItemBase and ItemList. I have made some attempts, but the fastai code seems to rely heavily on paths in ItemList, and the instructions do not seem sufficient. The main issue is linking the data with the respective segmentations. The user can define a class for the labels; what this means is not very clear, but it probably calls that class's open function with some pathname parameters. As a result the databunch() call fails, and that is needed to set up the unet.

Any help is appreciated.

Regards
L
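For route 1, the Dataset side needs only the `__len__`/`__getitem__` protocol; a minimal sketch with plain nested lists standing in for numpy arrays (all names here are illustrative, not fastai API):

```python
class ArraySegmentationDataset:
    """Minimal Dataset-style wrapper pairing in-memory 2-D arrays with masks.
    Implements the __len__/__getitem__ protocol that torch.utils.data.Dataset
    (and fastai v1's DataBunch.create) expects."""
    def __init__(self, images, masks):
        assert len(images) == len(masks), "each image needs a matching mask"
        self.images, self.masks = images, masks
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        return self.images[i], self.masks[i]
```

If the Dataset works but training then fails, one thing worth checking (an assumption) is channel count: pretrained unet encoders expect 3-channel input, so a greyscale array usually needs to be repeated across 3 channels, or the first convolution replaced with a 1-channel one.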

Exception of pin_memory

I'm getting a consistent exception after running the following code:

from fastai.vision import *
import warnings
warnings.filterwarnings('ignore')

path = 'G:/DataScienceProject/Kaggle-Prostate-cANcer-graDe-Assessment/train'
folderList = os.listdir(path)
data = ImageDataBunch.from_folder(path,
                                  train=".",
                                  test="../cv",
                                  valid_pct=0.2,
                                  classes=folderList)

from fastai.metrics import error_rate  # 1 - accuracy
learn = create_cnn(data, models.resnet34, metrics=accuracy)
defaults.device = torch.device('cuda')
learn.fit_one_cycle(10)

Fastai version: 1.0.61

Exception:
RuntimeError: Caught RuntimeError in pin memory thread for device 0.
Original Traceback (most recent call last):
File "C:\Users\User\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\_utils\pin_memory.py", line 31, in _pin_memory_loop
data = pin_memory(data)
File "C:\Users\User\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in pin_memory
return [pin_memory(sample) for sample in data]
File "C:\Users\User\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in <listcomp>
return [pin_memory(sample) for sample in data]
File "C:\Users\User\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\_utils\pin_memory.py", line 47, in pin_memory
return data.pin_memory()
RuntimeError: error in LoadLibraryA

-Gilad
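"error in LoadLibraryA" raised from the pin-memory thread is a Windows-specific torch pattern; a commonly suggested workaround (an assumption, not a confirmed fix) is to disable pinned memory and worker processes when building the DataBunch. In fastai v1 the factory methods forward extra DataLoader keywords:

```python
data = ImageDataBunch.from_folder(path,
                                  train=".", test="../cv",
                                  valid_pct=0.2, classes=folderList,
                                  num_workers=0,     # no worker processes
                                  pin_memory=False)  # skip the pin-memory thread
```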

where can I set num_worker==0?

The problem is caused by multiple workers when I use the GPU (8 GB, Windows 10). How can I solve it? @fizx @kashif @ohmeow @edave

Exception ignored in: <bound method _MultiProcessingDataLoaderIter.__del__ of <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x0000025081F53748>>
Traceback (most recent call last):
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in __del__
self._shutdown_workers()
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\utils\data\dataloader.py", line 1161, in _shutdown_workers
self._worker_result_queue.put((None, None))
File "D:\rj\ana3\envs\tf2\lib\multiprocessing\queues.py", line 87, in put
self._start_thread()
File "D:\rj\ana3\envs\tf2\lib\multiprocessing\queues.py", line 169, in _start_thread
self._thread.start()
File "D:\rj\ana3\envs\tf2\lib\threading.py", line 846, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Traceback (most recent call last):
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\basic_train.py", line 101, in fit
loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\basic_train.py", line 26, in loss_batch
out = model(*xb)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 989, in forward
_, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 733, in forward
output_all_encoded_layers=output_all_encoded_layers)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 406, in forward
hidden_states = layer_module(hidden_states, attention_mask)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 391, in forward
attention_output = self.attention(hidden_states, attention_mask)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 349, in forward
self_output = self.self(input_tensor, attention_mask)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\pytorch_pretrained_bert\modeling.py", line 319, in forward
attention_probs = self.dropout(attention_probs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\modules\dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\nn\functional.py", line 983, in dropout
else _VF.dropout(input, p, training))
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 1.61 GiB already allocated; 4.94 GiB free; 1.64 GiB reserved in total by PyTorch)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/duanhong/elensdata/multi_label/NLP_BERT_multi_label/multi_label_train.py", line 226, in <module>
learner.lr_find()
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\train.py", line 41, in lr_find
learn.fit(epochs, start_lr, callbacks=[cb], wd=wd)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\basic_train.py", line 200, in fit
fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\basic_train.py", line 112, in fit
finally: cb_handler.on_train_end(exception)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\callback.py", line 323, in on_train_end
self('train_end', exception=exception)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\callback.py", line 251, in __call__
for cb in self.callbacks: self._call_and_update(cb, cb_name, **kwargs)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\callback.py", line 241, in _call_and_update
new = ifnone(getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs), dict())
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\callbacks\lr_finder.py", line 39, in on_train_end
self.learn.load('tmp', purge=False)
File "D:\rj\ana3\envs\tf2\lib\site-packages\fastai\basic_train.py", line 269, in load
state = torch.load(source, map_location=device)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\serialization.py", line 594, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\serialization.py", line 853, in _load
result = unpickler.load()
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\serialization.py", line 845, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "D:\rj\ana3\envs\tf2\lib\site-packages\torch\serialization.py", line 833, in load_tensor
storage = zip_file.get_storage_from_record(name, size, dtype).storage()
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:73] data. DefaultCPUAllocator: not enough memory: you tried to allocate 32452608 bytes. Buy new RAM!
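In fastai v1, num_workers can be passed to any DataBunch factory method, or to .databunch() in the data block API, and is forwarded to torch's DataLoader; 0 disables worker processes entirely, which sidesteps the Windows thread errors above. (Note the trace also shows CUDA and CPU out-of-memory errors, so reducing the batch size may be needed as well.) A sketch with the data block API (the list class and column names are illustrative):

```python
data = (TextList.from_csv(path, 'train.csv', cols='text')
        .split_by_rand_pct(0.1)
        .label_from_df(cols='label')
        .databunch(bs=16, num_workers=0))  # 0 => load in the main process
```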

Databunch object not getting saved

hey everyone ,
I tried saving my DataBunch object using the databunch.save method, but it gives me a ctypes error. Here is the code I used:

batch_size = 64

do_flip = True
flip_vert = True
max_rotate = 90
max_zoom = 1.1
max_lighting = 0.2
max_warp = 0.2
p_affine = 0.75
p_lighting = 0.75

tfms = get_transforms(do_flip=do_flip,
                      flip_vert=flip_vert,
                      max_rotate=max_rotate,
                      max_zoom=max_zoom,
                      max_lighting=max_lighting,
                      max_warp=max_warp,
                      p_affine=p_affine,
                      p_lighting=p_lighting)

train, valid = ObjectItemListSlide(train_images), ObjectItemListSlide(valid_images)
item_list = ItemLists(".", train, valid)
lls = item_list.label_from_func(lambda x: x.y, label_cls=SlideObjectCategoryList)
lls = lls.transform(tfms, tfm_y=True, size=patch_size)
data = lls.databunch(bs=batch_size, collate_fn=bb_pad_collate, num_workers=0).normalize()
The error:

ValueError Traceback (most recent call last)
in <module>
----> 1 data.save("test_data_bs8.pkl")

3 frames
/usr/local/lib/python3.7/dist-packages/fastai/basic_data.py in save(self, file)
153 warn("Serializing the DataBunch only works when you created it using the data block API.")
154 return
--> 155 try_save(self.label_list, self.path, file)
156
157 def add_test(self, items:Iterator, label:Any=None, tfms=None, tfm_y=None)->None:

/usr/local/lib/python3.7/dist-packages/fastai/torch_core.py in try_save(state, path, file)
414 #To avoid the warning that come from PyTorch about model not being checked
415 warnings.simplefilter("ignore")
--> 416 torch.save(state, target)
417 except OSError as e:
418 raise Exception(f"{e}\n Can't write {path/file}. Pass an absolute writable pathlib obj fname.")

/usr/local/lib/python3.7/dist-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)
378 if _use_new_zipfile_serialization:
379 with _open_zipfile_writer(opened_file) as opened_zipfile:
--> 380 _save(obj, opened_zipfile, pickle_module, pickle_protocol)
381 return
382 _legacy_save(obj, opened_file, pickle_module, pickle_protocol)

/usr/local/lib/python3.7/dist-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol)
587 pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)
588 pickler.persistent_id = persistent_id
--> 589 pickler.dump(obj)
590 data_value = data_buf.getvalue()
591 zip_file.write_record('data.pkl', data_value, len(data_value))

ValueError: ctypes objects containing pointers cannot be pickled
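"ctypes objects containing pointers cannot be pickled" usually means something reachable from the label lists holds a C-level handle; with ObjectItemListSlide a likely suspect is an open slide/file reader kept on the items (an assumption). A small helper to locate which attribute of an object breaks pickling:

```python
import pickle

def unpicklable_attrs(obj):
    """Return the attribute names on `obj` whose values cannot be pickled.
    Useful for finding which field of a custom ItemList (e.g. an open
    slide or file handle) breaks DataBunch.save."""
    bad = []
    for name, value in vars(obj).items():
        try:
            pickle.dumps(value)
        except Exception:
            bad.append(name)
    return bad
```

Dropping or recreating the offending handle before calling save (and reopening it after load) is one way around this kind of error.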
