
fastai / course20

Deep Learning for Coders, 2020, the website

Home Page: https://book.fast.ai

License: Apache License 2.0

Makefile 0.29% Python 3.31% Jupyter Notebook 83.79% HTML 0.90% CSS 0.75% JavaScript 10.85% Shell 0.11%
deep-learning jupyter-notebook machine-learning python teaching

course20's Introduction

Welcome to fastai


Installing

You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook - click “Open in colab” at the top of any page to open it (be sure to change the Colab runtime to “GPU” to have it run fast!). See the fast.ai documentation on Using Colab for more information.

You can install fastai on your own machines with conda (highly recommended), as long as you’re running Linux or Windows (NB: Mac is not supported). For Windows, please see “Running on Windows” for important notes.

We recommend using miniconda (or miniforge). First install PyTorch using the conda line shown here, and then run:

conda install -c fastai fastai

To install with pip, use: pip install fastai.

If you plan to develop fastai yourself, or want to be on the cutting edge, you can use an editable install (if you do this, you should also use an editable install of fastcore to go with it). First install PyTorch, and then:

git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"

Learning fastai

The best way to get started with fastai (and deep learning) is to read the book, and complete the free course.

To see what’s possible with fastai, take a look at the Quick Start, which shows how to use around 5 lines of code to build an image classifier, an image segmentation model, a text sentiment model, a recommendation system, and a tabular model. For each of the applications, the code is much the same.

Read through the Tutorials to learn how to train your own models on your own datasets. Use the navigation sidebar to look through the fastai documentation. Every class, function, and method is documented here.

To learn about the design and motivation of the library, read the peer reviewed paper.

About fastai

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:

  • A new type dispatch system for Python along with a semantic type hierarchy for tensors
  • A GPU-optimized computer vision library which can be extended in pure Python
  • An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
  • A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training
  • A new data block API
  • And much more…
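The optimizer claim above ("implemented in 4–5 lines of code") can be illustrated with a toy pure-Python sketch. This is not fastai's actual implementation; all names here are illustrative. The idea is that an optimizer is just a composition of small "stepper" functions, so a new algorithm is a few lines:

```python
# A minimal sketch (not fastai's real code) of an optimizer refactored into
# composable stepper functions: each stepper transforms one parameter in place.

def sgd_step(p, lr, **kw):
    # plain gradient-descent update
    p["value"] -= lr * p["grad"]

def weight_decay(p, lr, wd=0.0, **kw):
    # decoupled weight decay, applied before the gradient step
    p["value"] *= 1 - lr * wd

class Optimizer:
    def __init__(self, params, steppers, **defaults):
        self.params, self.steppers, self.defaults = params, steppers, defaults
    def step(self):
        for p in self.params:
            for stepper in self.steppers:
                stepper(p, **self.defaults)

# "SGD with weight decay" is then just a composition of steppers:
params = [{"value": 1.0, "grad": 0.5}]
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.01)
opt.step()
print(round(params[0]["value"], 4))  # → 0.949
```

With this structure, a variant such as momentum would be another small stepper added to the list, rather than a whole new optimizer class.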

fastai is organized around two main design goals: to be approachable and rapidly productive, while also being deeply hackable and configurable. It is built on top of a hierarchy of lower-level APIs which provide composable building blocks. This way, a user wanting to rewrite part of the high-level API or add particular behavior to suit their needs does not have to learn how to use the lowest level.
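The "2-way callback system" mentioned above refers to callbacks that can both observe training state and change it. Here is a toy pure-Python training loop sketching the idea; this is not fastai's actual Learner/Callback API, and all names are illustrative:

```python
# Toy sketch of a two-way callback system: callbacks receive the shared
# training state and may either read it (Recorder) or mutate it (GradClip).

class Callback:
    def before_batch(self, state): pass
    def after_batch(self, state): pass

class GradClip(Callback):
    # a callback that *changes* training state: clamp the gradient
    def __init__(self, limit): self.limit = limit
    def before_batch(self, state):
        state["grad"] = max(-self.limit, min(self.limit, state["grad"]))

class Recorder(Callback):
    # a callback that *observes* training state: record the loss each batch
    def __init__(self): self.losses = []
    def after_batch(self, state): self.losses.append(state["loss"])

def fit(batches, callbacks, lr=0.1):
    state = {"weight": 0.0}
    for x, y in batches:
        pred = state["weight"] * x
        state["loss"] = (pred - y) ** 2
        state["grad"] = 2 * (pred - y) * x
        for cb in callbacks: cb.before_batch(state)   # may mutate the grad
        state["weight"] -= lr * state["grad"]
        for cb in callbacks: cb.after_batch(state)
    return state

rec = Recorder()
state = fit([(1.0, 2.0), (1.0, 2.0)], [GradClip(1.0), rec])
```

Because every callback sees the same mutable state at fixed points in the loop, techniques like gradient clipping, mixed precision, or metric recording can be added without rewriting the loop itself.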

Layered API

Migrating from other libraries

It’s very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with other libraries. Generally, you’ll be able to use all your existing data processing code, but will be able to reduce the amount of code you require for training, and more easily take advantage of modern best practices. Here are migration guides from some popular libraries to help you on your way:

Windows Support

Due to Python multiprocessing issues on Jupyter and Windows, num_workers of DataLoader is reset to 0 automatically to avoid Jupyter hanging. This makes tasks such as computer vision in Jupyter on Windows many times slower than on Linux. This limitation doesn’t exist if you use fastai from a script.

See this example to fully leverage the fastai API on Windows.

We recommend using Windows Subsystem for Linux (WSL) instead – if you do that, you can use the regular Linux installation approach, and you won’t have any issues with num_workers.
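If you do stay on native Windows, one common workaround is to choose num_workers based on the platform. The helper below is hypothetical (not part of fastai), just a sketch of the idea:

```python
import sys

def safe_num_workers(default=8):
    """Return a DataLoader num_workers value that won't hang Jupyter on Windows.

    On Windows, Python multiprocessing uses 'spawn', which interacts badly
    with Jupyter; loading in the main process (num_workers=0) avoids the hang.
    This helper is illustrative, not a fastai API.
    """
    return 0 if sys.platform == "win32" else default

# e.g. DataLoader(..., num_workers=safe_num_workers())
```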

Tests

To run the tests in parallel, launch:

nbdev_test

For all the tests to pass, you’ll need to install the dependencies specified as part of dev_requirements in settings.ini:

pip install -e .[dev]

Tests are written using nbdev; for example, see the documentation for test_eq.

Contributing

After you clone this repository, make sure you have run nbdev_install_hooks in your terminal. This installs Jupyter and git hooks that automatically clean, trust, and fix merge conflicts in notebooks.

After making changes in the repo, you should run nbdev_prepare and make any necessary changes until all the tests pass.

Docker Containers

For those interested, official Docker containers for this project can be found here.

course20's People

Contributors

abeomor avatar bxbrenden avatar dangro avatar datacrunchio avatar dependabot[bot] avatar ezeeetm avatar gopitk avatar hamelsmu avatar isaac-flath avatar janvdp avatar jchapman avatar joe-bender avatar joedockrill avatar joshua-paperspace avatar jph00 avatar kerrickstaley avatar mathemage avatar micstn avatar mone27 avatar nikhilmaddirala avatar prosoitos avatar rpasricha avatar svishnu88 avatar theisshe avatar v-ahuja avatar velaia avatar yubozhao avatar zerotosingularity avatar zzweig avatar


course20's Issues

FastaiSageMakerStack Template does not work

Hi,

The instructions in https://course.fast.ai/start_sagemaker do not work. The file https://s3-eu-west-1.amazonaws.com/mmcclean-public-files/sagemaker-fastai-notebook/sagemaker-cfn-course-v4.yaml returns AccessDenied or does not exist. I have tried to apply the fixes suggested in #60 to the original template (https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml), but nothing worked. Either the fastai kernel was not created, or I was not able to connect to it.

Thanks!

Cheers,

Peter

image_cat() not defined error

Hi,

What is image_cat() in the following code (given in lesson 1)?

img = PILImage.create(image_cat())
img.to_thumb(192)

It gives the following error when run:

NameError                                 Traceback (most recent call last)
in
----> 1 img = PILImage.create(image_cat())
      2 img.to_thumb(192)

NameError: name 'image_cat' is not defined

Bracket missing in line 45

A bracket is missing in line 45, def search_images_bing(key, term, min_sz=128, max_images=150):
I think it is causing an error.

FastaiSageMakerStack template (https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml) fails updating conda due to conda env update dependencies

Issue

Recently, a few people have faced problems getting set up on AWS SageMaker, due to a bug in the FastaiSageMakerStack template's conda env update dependencies, which results in an inconsistent conda env and manifests as the fastai kernel being unavailable for running notebooks.

Fixing this issue may help prevent new students from dropping off the valuable AWS SageMaker platform for running their fastai course notebooks.

The issue and its solution history are elaborated here:

https://forums.fast.ai/t/sagemaker-notebook-deployment-problem-no-fastai-kernel/88806/10

The current stack template (template URL):

https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml

The list of CloudFormation stack template links:

https://course.fast.ai/start_sagemaker#creating-the-sagemaker-notebook-instance

A stack template, which I have modified as shown below, successfully resolves the issue:

https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?filter=active&templateURL=https://fastai-cfn.s3.amazonaws.com/sagemaker-cfn-course-v4.yml&stackName=FastaiSageMakerStack

Solution

Update the stack template to the version below (it adds a 'conda update --force-reinstall conda -y' line after echo "Updating conda" in the OnCreate script, and removes the conda update section from the OnStart script):

Parameters:
  InstanceType:
    Type: String
    Default: ml.p2.xlarge
    AllowedValues:
      - ml.p3.2xlarge
      - ml.p2.xlarge
    Description: Enter the SageMaker Notebook instance type
  VolumeSize:
    Type: Number
    Default: 50
    Description: Enter the size of the EBS volume attached to the notebook instance
    MaxValue: 17592
    MinValue: 5
Resources:
  Fastai2SagemakerNotebookfastaiv4NotebookRoleA75B4C74:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: sagemaker.amazonaws.com
        Version: "2012-10-17"
      ManagedPolicyArns:
        - Fn::Join:
            - ""
            - - "arn:"
              - Ref: AWS::Partition
              - :iam::aws:policy/AmazonSageMakerFullAccess
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4NotebookRole/Resource
  Fastai2SagemakerNotebookfastaiv4LifecycleConfigD72E2247:
    Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
    Properties:
      NotebookInstanceLifecycleConfigName: fastai-v4LifecycleConfig
      OnCreate:
        - Content:
            Fn::Base64: >-
              #!/bin/bash


              set -e


              echo "Starting on Create script"


              sudo -i -u ec2-user bash <<EOF

              touch /home/ec2-user/SageMaker/.create-notebook

              EOF


              cat > /home/ec2-user/SageMaker/.fastai-install.sh <<\EOF

              #!/bin/bash

              set -e

              echo "Creating dirs and symlinks"

              mkdir -p /home/ec2-user/SageMaker/.cache

              mkdir -p /home/ec2-user/SageMaker/.fastai

              [ ! -L "/home/ec2-user/.cache" ] && ln -s /home/ec2-user/SageMaker/.cache /home/ec2-user/.cache

              [ ! -L "/home/ec2-user/.fastai" ] && ln -s /home/ec2-user/SageMaker/.fastai /home/ec2-user/.fastai


              echo "Updating conda"

              conda update --force-reinstall conda -y
            
              conda update -n base -c defaults conda -y

              conda update --all -y

              echo "Starting conda create command for fastai env"

              conda create -mqyp /home/ec2-user/SageMaker/.env/fastai python=3.6

              echo "Activate fastai conda env"

              conda init bash

              source ~/.bashrc

              conda activate /home/ec2-user/SageMaker/.env/fastai

              echo "Install ipython kernel and widgets"

              conda install ipywidgets ipykernel -y

              echo "Installing fastai lib"

              pip install -r /home/ec2-user/SageMaker/fastbook/requirements.txt

              pip install fastbook sagemaker

              echo "Installing Jupyter kernel for fastai"

              python -m ipykernel install --name 'fastai' --user

              echo "Finished installing fastai conda env"

              echo "Install Jupyter nbextensions"

              conda activate JupyterSystemEnv

              pip install jupyter_contrib_nbextensions

              jupyter contrib nbextensions install --user

              echo "Restarting jupyter notebook server"

              pkill -f jupyter-notebook

              rm /home/ec2-user/SageMaker/.create-notebook

              echo "Exiting install script"

              EOF


              chown ec2-user:ec2-user /home/ec2-user/SageMaker/.fastai-install.sh

              chmod 755 /home/ec2-user/SageMaker/.fastai-install.sh


              sudo -i -u ec2-user bash <<EOF

              nohup /home/ec2-user/SageMaker/.fastai-install.sh &

              EOF


              echo "Finishing on Create script"
      OnStart:
        - Content:
            Fn::Base64: >-
              #!/bin/bash


              set -e


              echo "Starting on Start script"


              sudo -i -u ec2-user bash << EOF

              if [[ -f /home/ec2-user/SageMaker/.create-notebook ]]; then
                  echo "Skipping as currently installing conda env"
              else
                  # create symlinks to EBS volume
                  echo "Creating symlinks"
                  ln -s /home/ec2-user/SageMaker/.fastai /home/ec2-user/.fastai
                  echo "Updating conda skipped in the fixed FastaiSageMakerStack template"
                  echo "Activate fastai conda env"
                  conda init bash
                  source ~/.bashrc
                  conda activate /home/ec2-user/SageMaker/.env/fastai
                  echo "Updating fastai packages"
                  pip install fastai fastcore sagemaker --upgrade
                  echo "Installing Jupyter kernel"
                  python -m ipykernel install --name 'fastai' --user
                  echo "Install Jupyter nbextensions"
                  conda activate JupyterSystemEnv
                  pip install jupyter_contrib_nbextensions
                  jupyter contrib nbextensions install --user
                  echo "Restarting jupyter notebook server"
                  pkill -f jupyter-notebook
                  echo "Finished setting up Jupyter kernel"
              fi

              EOF


              echo "Finishing on Start script"
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4LifecycleConfig
  Fastai2SagemakerNotebookfastaiv4NotebookInstance7C46E7E0:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType:
        Ref: InstanceType
      RoleArn:
        Fn::GetAtt:
          - Fastai2SagemakerNotebookfastaiv4NotebookRoleA75B4C74
          - Arn
      DefaultCodeRepository: https://github.com/fastai/fastbook
      LifecycleConfigName: fastai-v4LifecycleConfig
      NotebookInstanceName: fastai-v4
      VolumeSizeInGB:
        Ref: VolumeSize
    Metadata:
      aws:cdk:path: CdkFastaiv2SagemakerNbStack/Fastai2SagemakerNotebook/fastai-v4NotebookInstance
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=1.60.0,@aws-cdk/aws-iam=1.60.0,@aws-cdk/aws-sagemaker=1.60.0,@aws-cdk/cloud-assembly-schema=1.60.0,@aws-cdk/core=1.60.0,@aws-cdk/cx-api=1.60.0,@aws-cdk/region-info=1.60.0,jsii-runtime=node.js/v14.8.0
    Condition: CDKMetadataAvailable
Conditions:
  CDKMetadataAvailable:
    Fn::Or:
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ca-central-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-northwest-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-central-1
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-2
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-3
          - Fn::Equals:
              - Ref: AWS::Region
              - me-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - sa-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-2
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-2

Gradient how-to is no longer current

https://course.fast.ai/start_gradient describes the process to set up a Paperspace Gradient instance. However, most likely between the time the instructions were written and now, the process has changed. Moreover, it looks like free access to the terminal is gone; it now requires the "Pro" package at $8/month. You might want to update the instructions.

Google Colab now supporting Chapter 2 widgets

I remember, when working on last year's course, that the widgets you use in Chapter 2 (whatever chapter it was before) indeed did not work under Google Colab. But they now do, so you might want to take a look for yourselves.

Add images to fastbook

Some images are opened directly from the repo. These should be added to fastbook and made available using attributes.

Resources mentioned in the book

As I discussed with @jph00 on Discord, there are several resources mentioned in the book that are supposed to be added to the book website (originally mentioned to be https://book.fast.ai in the book; now it's the course link).

There are several mentions of resources that would be there on the website. Some of the things I found by searching the fastbook repo include:

  • answers to questionnaire solutions
  • bonus chapter on transformers
  • bonus chapter on generative image models
  • recommended tutorials that will be present for each chapter

There may be more mentions of resources, and I can update the issue if I find any additional ones.

Gradient instructions don't update the course materials

I went through the instructions for using Paperspace Gradient and set up notebooks etc as required.

Currently the instructions have you update the fastbook code with a git pull command in the terminal.

This fails to update the course-v4 files, which are in a separate folder. This matters because if you navigate into the course-v4 folder (to get the slimmed-down notebooks used in the course videos), all the code given to you automatically by Paperspace on initialisation of your machine makes reference to fastai2 instead of fastai.

So the documentation should probably be changed from:

Once you click on ‘Terminal’ a new window should open with a terminal. Type:

pip install fastai fastcore --upgrade
then

cd fastbook
git pull

Now you should close the terminal window.

to this:

Once you click on ‘Terminal’ a new window should open with a terminal. Type:

pip install fastai fastcore --upgrade
then

cd fastbook
git pull
cd ../course-v4
git pull
Now you should close the terminal window.

Unclear text

The first three chapters have been explicitly written in a way that will allow executives, product managers, etc. to understand the most important things they'll need to know about deep learning -- if that's you, just skip over the code in those sections.

Shouldn't this say, "if that's not you just skip over..." or better "if you're an experienced coder just skip over..."?

Instructions in CONTRIBUTING.md state to run nbdev_install_git_hooks as a first step, but it is not found

The instructions in CONTRIBUTING.md state:

Before anything else, please install the git hooks that run automatic scripts during each commit and merge to strip the notebooks of superfluous metadata (and avoid merge conflicts). After cloning the repository, run the following command inside it:

The command in question is nbdev_install_git_hooks. It is the only instance of this string in the repository, and I cannot find any script or file with that name. I have also found no issue mentioning this. Am I reading the instructions incorrectly?

A few mistakes in the readme.md text

  1. Under Who we are, at the end of the 2nd paragraph it is written:
    "He is the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization that built the course this course is based on."
    Was this intentional? It feels like a mistake to me.

  2. Under Who we are, at the last paragraph:
    " We always teaching through examples."

Add docs on how to use iko.ai to go through the course

iko.ai

There are several services that people can use to go through the fast.ai course, from notebook servers to execute notebooks, to Linux servers that give more control, to model deployment services.

This issue aims to add another option, https://iko.ai, that can be immensely helpful for those taking the course.

iko.ai offers real-time collaborative notebooks to train, track, package, deploy, and monitor your machine learning models. It lowers the barrier to entry and gives you leverage to do things that typically require a team of experts.

Here's a list of features that describe the platform. The guide on how to use it for the fast.ai course will be in a pull request. Feedback is welcome.

No-setup notebooks on your own Kubernetes clusters:

iko-cluster-choice-notebook

You can just start a fresh notebook server on your Kubernetes clusters from several Docker images that just work. You don’t need to troubleshoot environments, deal with NVIDIA drivers, or lose hours or days fixing your laptop, VMs, or broken environments. People without strong systems skills become autonomous instead of relying on others to fix their systems; for those with the necessary skills, it is work they no longer need to do.

Real-time collaborative notebooks:

rtc-attempt-2

You can share a notebook with other users and collectively edit it, seeing each other's cursors and selections. This is ideal for pair programming, troubleshooting, and tutoring, and a must when working remotely. We even use it for our meetings: each team member edits the same notebook with agenda items before the call on their own time. During the call, we go over them and edit, answer questions, and add snippets of code to reproduce bugs or implement proofs of concept.

Long-running notebooks that don't lose output, executed in the background:

async-notebook-on-cluster

Regular notebooks lose computation results when there is a disconnection, which often happens in long-running training jobs. You can schedule your notebooks in a fire-and-forget fashion, right from the notebook interface, without interrupting your flow, and watch their output stream from other devices, even on your phone. Choose the Docker image, choose the Kubernetes cluster, and just fire and forget. You can even save the resulting notebook as another notebook; but even if you do not, the notebooks have checkpoints (a feature we contributed back to the Jupyter community: jupyterlab/jupyterlab#5923).

AppBooks:

publish

You can run or have others run your notebooks with different values without changing your code. You can click “Publish” and have a parametrized version without changing cell metadata or adding tags. One use case is having a subject-matter expert tweak some domain specific parameters to train a model without being overwhelmed by the code, or the notebook interface. It becomes a form on top of the notebook with the parameters you want to expose.

Bring your own Kubernetes clusters:

clusters

You can use your own existing Kubernetes clusters from Google, Amazon, Microsoft, or DigitalOcean on iko.ai, which will use them for your notebook servers, your scheduled notebooks, and other workloads. You don’t have to worry about metered billing in yet another service. You can control access right from your cloud provider’s console or interface, and grant your own customers or users access to your clusters. We also automatically shut down inactive notebooks that have no computation running, to save on cloud spend.

Bring your own S3 buckets:

mounting-external-s3-buckets

You do not need to upload data to iko.ai to start working: you can just add an external S3 bucket and access it as if it were a filesystem, as if your files were on disk. This avoids polluting your code with S3-specific code (boto3, tensorflow_io) and reduces friction. It also ensures people work on the same data, and avoids common errors when working on local copies that were changed by accident.

Automatic Experiment Tracking:

You can’t improve what you cannot measure and track. Model training is an iterative process and involves varying approaches, parameters, and hyperparameters, which yield models with different metrics. You can’t keep all of this in your head or rely on ad-hoc notes. Your experiments are automatically tracked on iko.ai; everything is saved. This makes collaboration easier as well, because several team members can compare results and choose the best approach.

One-click model deployment:

deploy

You can deploy your model by clicking a button, instead of relying on a colleague to do it for you. You get a REST endpoint to invoke your model with requests, which makes it easy for developers to use these models without knowing anything about machine learning. You also get a convenience page to upload a CSV and get predictions, or enter JSON values and get predictions from that model. This is often needed for non-technical users who want a graphical interface on top of the model. The page also contains all the information about how that model was produced and which experiment produced it, so you know which parameters and metrics it came from.

One-click model packaging:

model_docker_image

You don’t need to worry about sending pickles and weights or using ssh and scp. You can click a button and iko.ai will package your model in a Docker image and push it to a registry of your choice. You can then take your model anywhere. If you have a client for which you do on-premises work, you can just docker pull your image there, and start that container and expose that model to other internal systems.

Model monitoring

model-monitoring

For each model you deploy, you get a live dashboard that shows you how it is behaving: how many requests, how many errors, etc. This lets you become aware as soon as something is wrong and fix it. Right now, we’re adding data logging so you can have access to data distributions and outliers. We’re also adding alerts, so you get notified when something changes above a certain threshold, and we are exposing model grading, so you can score your model when it gets something wrong or right and visualize its decaying performance. Automatic retraining is on the short-term roadmap.

Integrations with Streamlit, Voilà, and Superset

You can create dashboards and applications right from the notebook interface, as opposed to having someone provision a virtual machine on GCP, create the environment, push the code, start a web server, add authentication, and remember the IP address for a demo. You can also create Superset dashboards to work on your data.

APIs everywhere

You can use most of these features with an HTTP request, as opposed to going through the web interface. This is really important for instrumentation and integrations. We’re also adding webhooks soon, so iko.ai can emit and receive events to and from other external systems. One application of this is Slack alerts, for example, or automatic retraining based on events you choose.

Paperspace Gradient instructions seem incomplete - Unable to use notebooks

I set up the environment to use Paperspace Gradient using the instructions in https://course.fast.ai/start_gradient.

However, when attempting to run the first notebook (clean/01_intro), the very first cell throws the following error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-2-2b820b2b946f> in <module>
      1 #hide
      2 get_ipython().system('pip install -Uqq fastbook')
----> 3 import fastbook
      4 fastbook.setup_book()

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastbook/__init__.py in <module>
     15 except ModuleNotFoundError:
     16     warn("Missing Azure SDK - please run `pip install azure-cognitiveservices-search-imagesearch`")
---> 17 try: from nbdev.showdoc import *
     18 except ModuleNotFoundError: warn("Missing `nbdev` - please install it")
     19 try:

/opt/conda/envs/fastai/lib/python3.8/site-packages/nbdev/__init__.py in <module>
      5 if IN_IPYTHON:
      6     from .flags import *
----> 7     from .showdoc import show_doc
      8     #from .export import notebook2script

/opt/conda/envs/fastai/lib/python3.8/site-packages/nbdev/showdoc.py in <module>
     12 from nbconvert import HTMLExporter
     13 
---> 14 if IN_NOTEBOOK:
     15     from IPython.display import Markdown,display
     16     from IPython.core import page

NameError: name 'IN_NOTEBOOK' is not defined
