deep-learning-with-pytorch / dlwpt-code

Code for the book Deep Learning with PyTorch by Eli Stevens, Luca Antiga, and Thomas Viehmann.

Home Page: https://www.manning.com/books/deep-learning-with-pytorch

Languages: Jupyter Notebook 98.86%, Python 1.09%, C++ 0.03%, Java 0.02%, CMake 0.01%
Topics: deep-learning, deep-neural-networks, python, python3, pytorch

dlwpt-code's People

Contributors: elistevens, lantiga

dlwpt-code's Issues

Possible error in fig. 6.6

The subgraph titled "D: TANH(-3*x - 1.5)" actually plots d = tanh(-3x + 1.5), and the C + D graph likewise indicates that D is tanh(-3x + 1.5). Page 150.
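
A quick numeric check of the sign discrepancy at x = 0 (plain PyTorch, for the record):

import torch

x = torch.tensor(0.0)
print(torch.tanh(-3 * x - 1.5))  # tensor(-0.9051): the subgraph title's formula
print(torch.tanh(-3 * x + 1.5))  # tensor(0.9051): what the plot actually shows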

Memory Errors During Prepping Cache

I can't get the provided code to work for the entire LUNA dataset; I keep running into memory errors thrown by the SimpleITK package. After studying the code for some time, I tried modifying it to free memory when things weren't needed, but I still can't get the cache-prepping code to work for the complete dataset (only subsets 0-3 together work). The cache-prepping code seems to accumulate more and more memory without writing much to my hard disk. I believe the diskcache isn't working as expected: write steps for the FanoutCache are timing out even with the timeout set to 1 s. Moreover, the specified shard number (64) implies concurrent writing that isn't shaped by the worker count. Could you explain the chosen FanoutCache configuration? What changes could mitigate memory problems during the prepcache step when running the complete dataset?
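
For reference, a hedged sketch of how the FanoutCache parameters under discussion could be tuned; the directory and values below are illustrative guesses, not the book's configuration:

from diskcache import FanoutCache

# Fewer shards mean fewer concurrent writers fighting over the disk, and
# a longer timeout gives slow writes a chance to finish instead of being
# abandoned (and retried) as with the 1 s setting described above.
cache = FanoutCache(
    'data-unversioned/cache/example',  # hypothetical cache directory
    shards=8,                          # vs. the 64 shards discussed above
    timeout=30,                        # seconds before a write gives up
    size_limit=2e11,                   # ~200 GB on-disk cap (illustrative)
)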

Errata: "magifying" glass

Figure 13.7 contains the phrase "magiying glass." Not sure if this is a typo (for "magnifying glass") or a joke (à la "automagically") :)

p2ch10_explore_data.ipynb import errors for util.disk

It seems that BytesType and BytesIO are no longer included in the diskcache 5.0.3 module:

ImportError                               Traceback (most recent call last)
<ipython-input-2-6816886bb81a> in <module>
----> 1 from p2ch10.dsets_edited import getCandidateInfoList, getCt, LunaDataset
      2 candidateInfo_list = getCandidateInfoList(requireOnDisk_bool=False)
      3 positiveInfo_list = [x for x in candidateInfo_list if x[0]]
      4 diameter_list = [x[1] for x in positiveInfo_list]

~/PyTorchBook/dlwpt-code/p2ch10/dsets_edited.py in <module>
     14 from torch.utils.data import Dataset
     15 
---> 16 from util.disk import getCache
     17 from util.util import XyzTuple, xyz2irc
     18 from util.logconf import logging

~/PyTorchBook/dlwpt-code/util/disk.py in <module>
      2 
      3 from diskcache import FanoutCache, Disk
----> 4 from diskcache.core import BytesType, MODE_BINARY, BytesIO
      5 
      6 from util.logconf import logging

ImportError: cannot import name 'BytesType' from 'diskcache.core'
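
A hedged patch for util/disk.py, assuming the only breakage is that diskcache 5.x dropped BytesType and stopped re-exporting BytesIO from diskcache.core; MODE_BINARY is still present there, BytesType was simply an alias for bytes, and BytesIO lives in the standard library:

# top of util/disk.py, adapted for diskcache >= 5.x
from io import BytesIO               # was previously re-exported by diskcache.core

from diskcache import FanoutCache, Disk
from diskcache.core import MODE_BINARY

BytesType = bytes                    # older diskcache defined BytesType = bytes

from util.logconf import logging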

Missing train_seg.py and train_cls.py

Hi,
I can't run the items in chapter 13 because at least two files are missing: train_cls.py and train_seg.py. I see these in the LiveBook. Am I supposed to edit the files from Git or Manning to match them? I don't understand. The line numbers don't match, so I think there is a store of working code that I don't see on Git or Manning.
I have been making continual edits to code and CSV files.
Is there an easier way? Is it a work in progress that I should wait a while longer for?
Jeff

_pickle.UnpicklingError: invalid load key, '\x00'

I am trying to run code from chapter 11 and seem to be getting this error: _pickle.UnpicklingError: invalid load key, '\x00'

[screenshots: the traceback passes through prepcache.py, util.py, three frames in dataloader.py, and _utils.py]
UnpicklingError: Caught UnpicklingError in DataLoader worker process 0.
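
A hedged workaround, assuming the invalid load key comes from a cache entry corrupted by an earlier interrupted prepcache run: delete the on-disk cache and let prepcache rebuild it (the path below matches util/disk.py's default but is worth verifying locally):

import shutil

# Remove the diskcache directory so the next prepcache run regenerates
# every entry from the raw .mhd/.raw files instead of unpickling junk.
shutil.rmtree('data-unversioned/cache', ignore_errors=True)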

where to place data?

Where am I supposed to put the data files?
Would the path be data/subset*/*.mhd or data-unversioned/subsets/subset*/*.mhd?

I also don't understand what the purpose of data-unversioned is. When I run the training scripts in ch11 and ch12, data-unversioned is created and inside there's a cache folder.

Typo in your PDF copy of Chapter 9?

Is this a typo in the PDF copy of Chapter 9?

Chapter 9 says, "If you want to jump ahead, you can use the code/p2ch09_explore_data.ipynb Jupyter Notebook to get started." But I can't find a code/p2ch09 folder in the GitHub repository.

typo

typo

Code Release for Chapter 13

The MEAP version of the book already includes Chapter 13 on image segmentation. When will the corresponding code for Chapter 13 be released? @elistevens, looking forward to the full code release for a more fruitful reading of the book.

A mistake in section 10.5.4 "Rendering the data" of the book

# In[7]:
%matplotlib inline
from p2ch10.vis import findNoduleSamples, showNodule
noduleSample_list = findNoduleSamples()

error: cannot import name 'findNoduleSamples' from 'p2ch10.vis'

This fails because p2ch10.vis does not contain findNoduleSamples or showNodule.

Correction:
# In[7]:
%matplotlib inline
from p2ch10.vis import findPositiveSamples, showCandidate
positiveSample_list = findPositiveSamples()

Expected object of scalar type Long but got scalar type Float for argument #3 'index'

  • Chapter: 4
  • Page: 82
  • Section: One-hot encoding
target_onehot = torch.zeros(target.shape[0], 10)
target_onehot.scatter_(1, target.unsqueeze(1), 1.0)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in 
      1 target_onehot = torch.zeros(target.shape[0], 10)
----> 2 target_onehot.scatter_(1, target.unsqueeze(1), 1.0)

RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3 'index' in call to _th_scatter_
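
A minimal fix, assuming target holds the class labels as floats (as loaded from the CSV): scatter_ requires an int64 index tensor, so cast before scattering.

import torch

target = torch.tensor([1.0, 0.0, 4.0])            # float labels, as in the error
target_onehot = torch.zeros(target.shape[0], 10)
# scatter_ expects a LongTensor index; .long() resolves the RuntimeError
target_onehot.scatter_(1, target.unsqueeze(1).long(), 1.0)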

Typos in Chapter 15 (mostly)

The inimitable @stas00 looked at Chapter 15 and found some errors/unclear parts. I'm collecting them here while I work on getting them into the errata. Thank you, Stas! Any remaining bad bits are mine, of course.

p466:

  • "// ...here we need to produce an output tensor from input"

doesn't make it clear that code has been removed, which makes it hard to understand where the variables in the subsequent lines suddenly appeared from. I'd write it as:

[...]
// here we need to produce an output tensor from input

i.e., it's not clear that the "..." merged into the comment is there to indicate that some code was snipped.

E.g., on page 471 it's loud and clear that code has been removed.

and later

  • "auto input_ = torch::tensor" is misindented

p469:

  • "When we look at the source code in the file"

it's not clear what "the file" is

  • "torch::nn::ReflectionPad2d(1)" is misindented

p470:

"In contrast to what we did (and should!) in Python"

I can see you're alluding to __call__ but it's not obvious.

First I thought it'd be better to say:

"In contrast to what we did (and should not!) in Python"

but it's still ambiguous, might be easier to say it in direct language.

"As explained in earlier chapters in Python we don't normally call forward explicitly, but in C++ we do."

or something like that.

p475:

"Another approach to is to reduce the footprint of each parameter and operation:"

typo: first "to" shouldn't be there.

p476:

"convolutions and linear layers as weighted averages, we may expect rounding errors to
typically cancel.19"

I think the correct expression is "to cancel out"

same goes for the footnote:

", leading to errors adding up rather than canceling."

==============

and earlier:

p87:

The caption:

"Figure 4.5 Transforming a 1D, multichannel dataset into a 2D, multichannel dataset by separating the date and hour of each sample into separate axes"

doesn't match the preceding text:

"Our goal will be to take a flat, 2D dataset and transform it into a 3D one, as shown in figure 4.5."

The caption most likely needs to say "2D to 3D".

p2ch12 (training.py)- Training stops without error

When I run "python -m training --balanced --epochs 11", the training process shuts down automatically during epoch 3 without an error message. I have tried many times and get the same result. It's confusing because there is no error message.
Environment:
Conda: 1.9.12
PyTorch: 1.7.0
CUDA: 10.2
RAM: 32 GB
GPU: RTX 2080 Ti
I think I'm hitting the same issue as #17.

[screenshot: 2021-01-20 144504]

requirements.txt referenced but unavailable

Thanks for making a really excellent book! Excited to dive in.

On pages xxv and 14 (section 1.5), there's mention of a requirements.txt file provided for use with pip, to ensure that all packages are the correct version.

I've looked around and have been unable to find this file, either within this repo or on the internet. Is it available, and if so, where?

A possible mistake about file path definition in ../p1ch8/1_convolution.ipynb

The CIFAR dataset is first introduced in Chapter 7 of the book, and in the code examples of Chapter 7 it is downloaded to the default file path '../data-unversioned/p1ch7/'. Then in the code examples of Chapter 8, the default file path is '../data-unversioned/p1ch6/', which triggers a second download. I guess it was correct in the preview version of the book.

Misleading code [3.8.2]

"We can easily verify that the two tensors share the same storage"

id(points.storage()) == id(points_t.storage())

is should be used instead, and it returns False because each storage() call creates a new wrapper object. The id() comparison is misleading: try id(float(3)) == id(float(4)), which can return True. See here.
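
A hedged demonstration of the point, plus a check that compares the underlying memory instead of Python object identity (both calls are standard PyTorch APIs):

import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points_t = points.t()

# id() on temporaries is misleading: the first float is freed before the
# second is allocated, so its address is often reused and this prints True.
print(id(float(3)) == id(float(4)))

# Each storage() call returns a fresh wrapper object, so `is` is False
# even though the memory is shared; compare the data pointers instead.
print(points.storage() is points_t.storage())                        # False
print(points.storage().data_ptr() == points_t.storage().data_ptr())  # True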

Project Goal

[screenshots of the relevant book passage]

Hello everyone.
I am a bit confused here. Is that term used somewhat incorrectly? Isn't object detection (in this case, nodule detection) concerned with identifying a nodule and drawing a bounding box around it? I thought that in our case we are dealing with a classification problem: telling whether a lump is a nodule or not.
Any help will be really appreciated. Thank you in advance.

torchvision not in requirements

Ch2: The first code we see uses torchvision. It is missing from the requirements file, and since the latest version isn't compatible with pytorch 1.7.1, the requirements should probably be updated to include it.

ImageCaptioning.pytorch Error

Tried running ImageCaptioning from chapter 2 (2.3.1, NeuralTalk2) and got this error; any idea what to do with it? CUDA is available, btw.

(dcv37) ivan:~/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch$ python eval.py --model ./data/FC/fc-model.pth --infos_path ./data/FC/fc-infos.pkl --image_folder ./data
DataLoaderRaw loading images from folder:  ./data
0
listing all images in directory ./data
DataLoaderRaw found  1  images
/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py:1625: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py:1614: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py:147: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  logprobs = F.log_softmax(self.logit(output))
Traceback (most recent call last):
  File "eval.py", line 134, in <module>
    vars(opt))
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/eval_utils.py", line 106, in eval_split
    seq, _ = model.sample(fc_feats, att_feats, eval_kwargs)
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py", line 160, in sample
    return self.sample_beam(fc_feats, att_feats, opt)
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py", line 144, in sample_beam
    xt = self.embed(Variable(it, requires_grad=False))
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
(dcv37) ivan@adsl-2080-server:~/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch$ python 
Python 3.7.7 (default, May  7 2020, 21:25:33) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> 'cuda' if torch.cuda.is_available() else 'cpu'
'cuda'
>>> 
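
A minimal reproduction of the device mismatch plus the usual fix, hedged as a sketch (the embedding below stands in for self.embed in models/FCModel.py):

import torch
import torch.nn as nn

embed = nn.Embedding(10, 4).cuda()       # model weights on the GPU
it = torch.zeros(3, dtype=torch.long)    # index tensor built on the CPU

# embed(it) raises the same "Expected ... cuda but got ... cpu" error;
# moving the indices to the weights' device first makes it work.
xt = embed(it.to(next(embed.parameters()).device))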

'Ct' object has no attribute 'build3dLungMask'?

In both "p2ch10_explore_data.ipynb" and "p2ch10/dsets.py", I am not able to find 'build3dLungMask'.

from p2ch10.dsets import getCt
ct = getCt(series_uid)
air_mask, lung_mask, dense_mask, denoise_mask, tissue_mask, body_mask = ct.build3dLungMask()

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-2630724d96f5> in <module>
1 from p2ch10.dsets import getCt
2 ct = getCt(series_uid)
----> 3 air_mask, lung_mask, dense_mask, denoise_mask, tissue_mask, body_mask = ct.build3dLungMask()
AttributeError: 'Ct' object has no attribute 'build3dLungMask'


ImportError: cannot import name 'BytesType' from 'diskcache.core'

Hello everyone,

I am trying to learn along with the book by building the cancer-detection project alongside it. However, I get the following error:

Traceback (most recent call last):

File "/home/sandip/code/dsets.py", line 16, in <module>
from util.disk import getCache

File "/home/sandip/code/util/disk.py", line 4, in <module>
from diskcache.core import BytesType, MODE_BINARY, BytesIO

ImportError: cannot import name 'BytesType' from 'diskcache.core' (/home/sandip/anaconda3/envs/breast_cancer/lib/python3.7/site-packages/diskcache/core.py)

Please guide me on how to fix the issue.

Thanks in advance

Memory leak in asyncio operation

Hi,
In p3ch15/request_batching_server.py there is excellent asyncio code, but I am getting a memory leak: GPU memory increases steadily, at a rate proportional to the number of inputs the server receives. Can you suggest how to debug this?
Thanks in advance.

Typo in your PDF copy of Chapter 10?

In chapter 10 on page 258, the Bash shell code explanation says "Counts the number of lines that end with 1, which indicates malignancy," but in fact a line ending with 1 doesn't indicate malignancy. It means the candidate is a nodule, which can be malignant or benign.

In a later section, you also have "0 for a candidate that is not an actual nodule, and 1 for a candidate that is a nodule, either malignant or benign." This is kind of confusing.
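
For concreteness, a hedged Python restatement of what the counted lines mean (the file is LUNA's candidates.csv, whose header is seriesuid,coordX,coordY,coordZ,class):

# Count candidates whose last column is 1: these are nodules, malignant
# *or* benign -- the flag does not indicate malignancy.
with open('candidates.csv') as f:
    next(f)  # skip the header row
    nodule_count = sum(1 for line in f if line.rstrip().endswith(',1'))
print(nodule_count)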

ch10 typo

Errata: Listing 11.2, minor style issue (missing whitespaces)

I took some notes regarding potential errata/suggestions while reading. As I am currently digitizing my handwritten notes, I thought it might be useful to share them.

In listing 11.12, I would put whitespace around the "=" for PEP 8 compliance and consistency.

[screenshot: Screen Shot 2021-01-15 at 5.37.48 PM]

Incorrect model class import

Hi,
I have read the whole book and think it is a great source for learning PyTorch, thanks for that!

I was trying to run the servers from the deployment chapter, but I think there is a mistake in the model import.

In file p3ch15/flask_server.py, line 8: should it be p2ch14.model instead of p2ch13.model_cls?

Where to find the latest checkpoints?

Hi,

I tried to test the p2ch14.nodule_analysis module, but I could not find the checkpoints data/part2/models/cls_2020-02-06_14.16.55_final-nodule-nonnodule.best.state and data/part2/models/seg_2020-01-26_19.45.12_w4d3c1-bal_1_nodupe-label_pos-d1_fn8-adam.best.state. Will these models be made available? Thanks!

Do cache paths need to be generated each time?

Hi,
This isn't really an issue, just a question.
Can I create the cache for each chapter only once? That is, if I have a large enough SSD, can I run prepcache on ch12, ch13, and ch14 and then be able to experiment with each chapter? Currently I only have enough space for a single chapter's cache, so I must clear the cache each time I switch chapters and redo prepcache. With a 2 TB SSD I could fit all three chapters. Is this possible, or does some database get matched up with the current cache? I see separate folders under cache, so I hope so.

Jeff

CT 3D Visual

Figured it would be helpful to interactively dig into the skeletons.

Requirement: plotly.

import os
import glob
import torch
import torch.nn as nn
import numpy as np
import SimpleITK as sitk

import plotly.graph_objs as go
from plotly.offline import plot  # for IDE (e.g. Spyder use)

#%%# Load data ###############################################################
_dir = r"C:\LUNA\\"  # replace with path to your 'subset*' directory
path = glob.glob(os.path.join(_dir, 'subset*\\*.mhd'))[0]

ct_mhd = sitk.ReadImage(path)
ct_a = np.array(sitk.GetArrayFromImage(ct_mhd), dtype=np.float32)
ct_a.clip(-1000, 1000, ct_a)

#%%# Downsample #######
ct_a_t = torch.from_numpy(ct_a)[None, None]  # add batch and channel dims
ctm = nn.MaxPool3d(4)(ct_a_t).numpy()[0][0]  # 4x downsample along each axis

#%%# Prepare to plot ##
a, b, c = ctm.shape
l = 16
X, Y, Z = np.mgrid[-l:l:a*1j, -l:l:b*1j, -l:l:c*1j]

#%%# Plot in browser #########################################################
fig = go.Figure(data=go.Volume(
    x=X.flatten(),
    y=Y.flatten(),
    z=Z.flatten(),
    value=ctm.flatten(),
    opacity=0.15, # small to see through surfaces
    surface_count=12, # larger -> better volume rendering
    colorscale='RdBu',
    ))
plot(fig, auto_open=True)

Data preprocessing in Part 2

Hello everyone.
I am a bit stuck in part 2 especially on data preparing.

  1. Why is it that in the candidates.csv some nodules are occurring more than once?
  2. Second, what specifically are Image coordinates, I really don't understand the logic of converting from world coordinates(what are they anyway? is it the Cartesian x, y, z system) to voxel coordinates and why should we care to convert between the two?
  3. Why are we manually splitting the dataset, can't we rely on PyTorch or sktlearn to randomly split it for us?
    Thank you please
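
Regarding question 2, a minimal sketch of the world-to-voxel conversion, mirroring what the book's util.xyz2irc does with the CT's origin, spacing, and direction metadata (the file path and coordinate below are illustrative):

import numpy as np
import SimpleITK as sitk

# World (patient) coordinates are millimeter positions stored in the LUNA
# CSVs; voxel coordinates are array indices into the CT volume.
ct_mhd = sitk.ReadImage('example.mhd')     # hypothetical file path
origin = np.array(ct_mhd.GetOrigin())      # mm position of voxel (0,0,0)
spacing = np.array(ct_mhd.GetSpacing())    # mm per voxel along X, Y, Z
direction = np.array(ct_mhd.GetDirection()).reshape(3, 3)

xyz = np.array([-56.08, -67.85, -311.92])  # illustrative annotation (mm)
cri = ((xyz - origin) @ np.linalg.inv(direction)) / spacing
irc = np.round(cri)[::-1].astype(int)      # reverse XYZ -> (index, row, col)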

Issue in the book

[screenshot of the book passage]

Page 272: in listing 10.12, should the mysterious "1((CO10-1))" be there?
I have checked the code implementation (the return ( statement in the repo), and that annotation is not there.

Also, is this the right place to raise issues about the book itself?

p2ch09?!

It looks like this one is missing.
;o)

Undermining batchnorm

The textbook takes a crack at batchnorm in section 8.5.4, citing a paper that "eliminates" the need for it by training a 10,000-layer network without BN through careful initialization, and claiming that BN mainly helps convergence speed and isn't a regularizer.

I'm afraid this isn't true; no initialization scheme can replace normalization and confer all of its benefits, including better generalization. What unfortunately doesn't seem to make it to the mainstream can be found in the research papers cited below.

TL;DR: (1) a smoother loss landscape guides optimization away from premature optima and flat regions; (2) weight length-direction decoupling; (3) layer-to-layer scale invariance. The greater stability permits dynamic learning rates and other advanced techniques that would otherwise break down.

Cannot import 'BytesType' from diskcache.core

I was trying to run "p2ch10_explore_data.ipynb" and hit an import error from diskcache.core:

ImportError: cannot import name 'BytesType'

It looks like diskcache has been updated and no longer provides 'BytesType'. Any alternatives for "util/disk.py"?

p11: Error when loading .mhd file

Getting the error below when running python -m p2ch11.training:

RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK-build/ITK/Modules/IO/Meta/src/itkMetaImageIO.cxx:491:
itk::ERROR: MetaImageIO(0x80234000): File cannot be read: /content/drive/My Drive/Data/Test/subset0/1.3.6.1.4.1.14519.5.2.1.6279.6001.317087518531899043292346860596.mhd for reading.
Reason: No such file or directory

NB: Running on GoogleColab

Do sound and audio files have to be the same length?

I looked at the audio chirp exercise posted as an extra example in chapter 4. It, and the discussion of making an array stack of sounds, seems to imply that the sounds should all have the same time length or be handled using an embedding. I am interested in the visual patterns of Morse code sounds and wrote some code exercises to render the patterns as numpy and tensor arrays, but I don't think they are going to work as deep-learning input, so I need help (see the waszee repo SignalStudies_RF if interested).

The problem I am wrestling with is how to standardize the audio patterns to feed the learning exercise. For example, the letter "e" is just a single dit long, but the letter "b" is dah dit dit dit long, and common patterns include strings like "CQ CQ de ....". The numpy files I built have a variable-length string of sound pulses that depends on the pulse pattern and the sending speed. The audio chirp example seems to say we can make the sounds the same length; I wonder if anybody has suggestions on how to handle the variable lengths to make common array images for all the sound patterns needed to decode them. I suspect the natural-language folks have dealt with this issue, since spoken words have different lengths, but I did not see any references here or in the text on where to look. Please offer suggestions on how to handle the embedding. I'm still learning how to tag targets to the patterns too. I am very new to PyTorch.
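
One common answer, offered as a sketch rather than a definitive recipe for this task: zero-pad each variable-length signal to the longest one in the batch, which torch.nn.utils.rnn.pad_sequence does directly (the signal lengths below are made up):

import torch
from torch.nn.utils.rnn import pad_sequence

# Three fake Morse-code clips of different lengths, in samples.
signals = [torch.randn(n) for n in (300, 120, 540)]

# Zero-pad to the longest clip so they stack into one (3, 540) batch;
# the mask records which positions are real audio vs. padding.
batch = pad_sequence(signals, batch_first=True)
mask = pad_sequence([torch.ones_like(s) for s in signals], batch_first=True)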

Suggestion: mentioning neural nets for ordinal regression in chapter 4

Just reading chapter 4: you mention on pg. 83 that ordinal targets are usually treated either as continuous ("metric" regression) or as nominal (conventional classification).

Just wanted to mention that there is growing interest in outfitting deep neural nets with the ability to treat targets as ordinal without assuming a continuous or nominal nature. Personally, I have worked on this topic here: https://github.com/Raschka-research-group/coral_pytorch. Our approach recasts the problem into binary classification tasks, but other approaches exist too.

I probably wouldn't discuss ordinal regression in detail in this chapter, but maybe adding a footnote would be nice (it could give some people an entry point, and save others some frustration/reinventing the wheel).

Kernel keeps crashing at the exact same point while training during the 2nd epoch (p2ch12.training.LunaTrainingApp)

I am trying to train the LUNA model using the data augmentation of chapter 12. The issue I am facing is that the kernel crashes every time near the end of the 2nd training epoch. The same behaviour occurs whether I run from a Jupyter notebook or the command line. If I check my resources during training (attached), there doesn't appear to be any memory shortage in RAM or GPU.

[screenshot: resource usage]

And here are the logs while training:

[screenshots: training logs]

After this, the training crashes. Can you please point out what the issue might be? I am running the exact same code except for a change in the path to the subset data I downloaded to my local machine.
I am running Windows 10, 32 GB RAM, 8 GB GPU.
I also tried num-workers = 4 and 6 with the same result (only slower), and decreased the batch size to 64; again the same thing.
During the 2nd epoch my system also seems to slow down (I experience some lag switching tabs/windows), but the task manager shows plenty of RAM left, as in the screenshot.

Any help would be appreciated, as I am new to deep learning and running a big model for the first time. Thank you.

prepcache in p2ch11 wrongly passes a sortby_str argument

Running prepcache.py from p2ch11 leads to the following error:

2020-09-27 16:09:11,916 INFO     pid:12612 __main__:043:main Starting LunaPrepCacheApp, Namespace(batch_size=1024, num_workers=8)
2020-09-27 16:09:15,968 INFO     pid:12612 p2ch11.dsets_edited:170:__init__ <p2ch11.dsets_edited.LunaDataset object at 0x7f4bc2f4baf0>: 548723 training samples
Traceback (most recent call last):
  File "/home/ahmed/anaconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/ahmed/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/HiSp-1/dlwpt/prepcache.py", line 62, in <module>
    LunaPrepCacheApp().main()
  File "/HiSp-1/dlwpt/prepcache.py", line 45, in main
    self.prep_dl = DataLoader(
TypeError: __init__() got an unexpected keyword argument 'sortby_str'

sortby_str is not an argument of PyTorch's DataLoader.
Removing the argument leads to a successful run, but what is the effect of having an unsorted cache?
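
A hedged sketch of one way to reconcile the call, assuming sortby_str belongs to the dataset (the book's LunaDataset accepts it) rather than to DataLoader, which has no such keyword; the names mirror the traceback above:

# hypothetical correction inside LunaPrepCacheApp.main in prepcache.py
self.prep_dl = DataLoader(
    LunaDataset(sortby_str='series_uid'),   # dataset-side sorting (assumption)
    batch_size=self.cli_args.batch_size,
    num_workers=self.cli_args.num_workers,
)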

A mistake in the book

In:  torch.le(torch.Tensor([[1, 2], [3, 4]]), torch.Tensor([[1, 1], [4, 4]]))
Out: tensor([[ True, False],
             [ True,  True]])

but in the preview version of the book:
Out: tensor([[1, 0],
             [1, 1]])
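
Both outputs are consistent: comparison ops returned uint8 tensors in older PyTorch (as in the book's preview) and have returned bool tensors since PyTorch 1.2. A quick check:

import torch

out = torch.le(torch.Tensor([[1, 2], [3, 4]]), torch.Tensor([[1, 1], [4, 4]]))
print(out)                  # tensor([[ True, False], [ True,  True]])
print(out.to(torch.uint8))  # tensor([[1, 0], [1, 1]], dtype=torch.uint8)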
