
This is an API Framework for AI models to be hosted locally or on the AI for Earth API Platform (https://github.com/microsoft/AIforEarth-API-Platform).

License: MIT License



Due to new features that have since been added to Azure Machine Learning, this repository is now deprecated.

AI for Earth - Creating APIs

These images and examples are meant to illustrate how to build containers for use in the AI for Earth API system. The following images and tags (versions) are available on Dockerhub:

  • mcr.microsoft.com/aiforearth/base-py

    • Available Tags
    • The latest base-py images are derived from several CUDA images:
      • 1.15 - nvidia/cuda:9.2-runtime-ubuntu16.04
      • 1.15-cuda-9.0 - nvidia/cuda:9.0-runtime-ubuntu16.04
      • 1.15-cuda-9.0-devel - nvidia/cuda:9.0-devel-ubuntu16.04
    • The base-py image can be built using any Ubuntu image of your choice by building with the optional BASE_IMAGE build argument.
      • Example of how to build with the CUDA 9.0 devel image (inside Containers):
        • docker build . -f base-py/Dockerfile -t base-py:1.15-cuda-9.0-devel --build-arg BASE_IMAGE=nvidia/cuda:9.0-devel-ubuntu16.04
  • mcr.microsoft.com/aiforearth/blob-py

    • Available Tags
    • The latest blob-py images are derived from the following CUDA image:
      • 1.5 - nvidia/cuda:9.2-runtime-ubuntu16.04
  • mcr.microsoft.com/aiforearth/base-r

    • Available Tags
    • The latest base-r images are derived from gdal:
      • 1.8 - osgeo/gdal:ubuntu-full-3.0.3
  • mcr.microsoft.com/aiforearth/blob-r

    • Available Tags
    • The latest blob-r images are derived from gdal:
      • 1.5 - osgeo/gdal:ubuntu-full-3.0.3

Notice

In addition to a running Docker environment, GPU images require the NVIDIA Docker package to support CUDA.
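
For example, a GPU image built from these bases is typically started with the NVIDIA runtime enabled. A minimal sketch, assuming NVIDIA Docker 2 and an illustrative image name:

docker run --runtime=nvidia -p 8081:80 your_gpu_image_name:1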

CUDA Toolkit

To view the license for the CUDA Toolkit included in the cuda base image, click here

CUDA Deep Neural Network library (cuDNN)

To view the license for cuDNN included in the cuda base image, click here

Contents

  1. Repo Layout
  2. Quickstart Tutorial
    1. Choose a base image or example
    2. Insert code to call your model
    3. Input handling
    4. Output handling
    5. Function decorator detail
    6. Create AppInsights instrumentation keys
    7. Install required packages
    8. Set environment variables
    9. Build and run your image
    10. Make requests
    11. Publish to Azure Container Registry
    12. Run your container in ACI
    13. FAQs
  3. Contributing

Repo Layout

  • Containers
    • base-py [Base AI for Earth Python image]
      • API hosting libraries
      • Azure Blob libraries
      • Monitoring libraries
    • base-r [Base AI for Earth R image]
      • API hosting libraries
      • Azure Blob libraries
      • Monitoring libraries
    • blob-py [Base AI for Earth Python image with Azure Blob mounting tools]
      • AI for Earth base-py base
      • AI for Earth Azure Blob mounting tools
    • blob-r [Base AI for Earth R image with Azure Blob mounting tools]
      • AI for Earth base-r base
      • AI for Earth Azure Blob mounting tools
  • Examples

Notes

  • Docker commands for the base and blob images must be run at the version level of the repo, e.g. docker build . -f base-py/Dockerfile. Example-specific Docker commands can be run from within each example's codebase.

Quickstart Tutorial

This quickstart will walk you through turning a model into an API. Starting with a trained model, we will containerize it, deploy it on Azure, and expose an endpoint to call the API. We will leverage Docker containers, Azure Application Insights, Azure Container Registry, and Azure Container Instances.

We are assuming that you have a trained model that you want to expose as an API. To begin, download or clone this repository to your local machine.

Create an Azure Resource Group

Throughout this quickstart tutorial, we recommend that you put all Azure resources created into a single new Resource Group. This will organize these related resources together and make it easy to remove them as a single group.

From the Azure Portal, click Create a resource from the left menu. Search the Marketplace for "Resource Group", select the resource group option and click Create.

Search for Resource Group

Use a descriptive resource group name, such as "ai4e_yourname_app_rg". Then click Create.

Resource Group

Machine Setup

You will need an active Azure subscription as well as a working Docker installation; an API tool such as Postman is also useful for testing.

Choose a base image or example

AI for Earth APIs are all built from an AI for Earth base image. You may use a base image directly or start with an example. The following sections will help you decide.

Base images

  • base-py
  • base-r
  • blob-py
  • blob-r

Examples

  • Basic Python API (Sync and Async)
  • Basic R API (Sync and Async)
  • Blob Python API
  • Synchronous PyTorch API
  • Asynchronous Tensorflow API

In general, if you're using Python, you will want to use an image or example based on the base-py or blob-py images. If you are using R, you will want one based on the base-r or blob-r images. The difference between them: each blob-* image contains everything that the corresponding base-* image contains, plus additional support for mounting Azure blob storage. This may be useful if you need to process, for example, a batch of images all at once; you can upload them all to Azure blob storage, the container in which your model is running can mount that storage, and your code can access it as if it were local storage.

Asynchronous (async) vs. Synchronous (sync) Endpoint

In addition to your language choice, think about whether your API call should be synchronous or asynchronous.

  • A synchronous API call will invoke your model, get results, and return immediately. This is a good paradigm to use if you want to perform classification with your model on a single image, for example.
  • An asynchronous API call should be used for long-running tasks, like processing a whole folder of images using your model and storing the results, or constructing a forecasting model from historical data that the user provides.

Asynchronous Implementation Examples

The following examples demonstrate async endpoints:

Synchronous Implementation Examples

The following examples demonstrate sync endpoints:

Input/Output Patterns

While input patterns can be used for either sync or async designs, your output design depends on your sync/async choice; we have therefore identified recommended approaches for each.

Input Recommendations

JSON

JSON is the recommended approach for data ingestion.

Binary Input

Many applications of AI apply models to image/binary inputs. Here are some approaches:

  • Send the image directly via request data. See the tensorflow example for how this is accomplished.
  • Upload your binary input to an Azure Blob, create a SAS key, and add a JSON field for it.
  • If you would like users to use your own Azure blob storage, we provide tools to mount blobs as local drives within your service. You may then use this virtual file system locally.
  • Serializing your payload is a very efficient method for transmission. BSON is an open-standard, binary-encoded serialization format for such purposes; a minimal sketch follows this list.
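
For illustration, here is a minimal client-side sketch of BSON-encoding a payload. It assumes pymongo's bson module is installed, and the endpoint path and field names are illustrative only:

import bson      # provided by the pymongo package
import requests

with open('sample.jpg', 'rb') as f:
    # Pack the binary image bytes alongside ordinary fields in one document.
    payload = bson.BSON.encode({'run_id': 'myrunid', 'image': f.read()})

requests.post('http://localhost:8081/v1/my_api/tasker/detect',
              data=payload,
              headers={'Content-Type': 'application/octet-stream'})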

Asynchronous Pattern

The preferred way of handling asynchronous API calls is to provide a task status endpoint to your users. When a request is submitted, a new taskId is immediately returned to the caller to track the status of their request as it is processed.

We have several tools to help with task tracking that you can use for local development and testing. These tools create a database within the service instance and are not recommended for production use.
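
For example, the local task tracker returns a status record similar to the following (illustrative values):

{
    "uuid": 6061,
    "status": "created",
    "timestamp": "2019-08-08 01:00:46",
    "endpoint": "uri"
}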

Once a task is completed, the user needs to retrieve the result of their service call. This can be accomplished in several ways:

  • Return a SAS-keyed URL to an Azure Blob Container via a call to the task endpoint.
  • Request that a writable SAS-keyed URL is provided as input to your API call. Indicate completion via the task interface and write the output to that URL.
  • If you would like users to use your own Azure blob storage, you can write directly to a virtually-mounted drive.

Examples

We have provided several examples that leverage these base images to make it easier for you to get started.

  • base-py: Start with this example if you are using Python, you don't need Azure blob storage integration, and none of the more specific examples below fit your scenario. It contains both synchronous and asynchronous endpoints and is a great example for asynchronous, long-running API calls.
  • base-r: Start with this example if you are using R.
  • blob-mount-py: Start with this example if you are using Python and you need Azure blob storage integration.
  • pytorch: This example is a modification of the base-py example, using a synchronous API call to call a PyTorch model. It is a great example to use if you are using PyTorch or if you are making a synchronous API call.
  • tensorflow: This example is a modification of the base-py example, using an asynchronous API call to call a TensorFlow model. It is a great example to use if you are using TensorFlow or if you are making an asynchronous API call.

After you've chosen the example that best fits your scenario, make a copy of that directory, which you can use as your working directory in which you apply your changes.

Insert code to call your model

Next, in your new working directory, we need to update the example that you chose with code to call your specific model. This should be done in the runserver.py file (if you are using a Python example) or the api_example.R file (if you are using an R example) in the my_api (or similarly named) subfolder.

Input handling

Your model has inputs and outputs. For example, let's consider a classification model that takes an image and classifies its contents as one of multiple species of animal. The input that you need to provide to this model is an image, and the output that you provide may be JSON-formatted text of the classifications and their confidence.

Some examples of how to send parameters as inputs into your APIs follow.

GET URL parameters

For GET operations, best practice dictates that a noun is used in the URL in the segment before the related parameter. An echo example is as follows.

Python and Flask
@ai4e_service.api_sync_func(api_path = '/echo/<string:text>', methods = ['GET'], maximum_concurrent_requests = 1000, trace_name = 'get:echo', kwargs = {'text'})
def echo(*args, **kwargs):
    return 'Echo: ' + kwargs['text']
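Assuming the API_PREFIX used later in this tutorial (/v1/my_api/tasker), a GET request to http://localhost:80/v1/my_api/tasker/echo/hello would return Echo: hello.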
R and Plumber
#* @param text The text to echo back
#* @get /echo/<text>
GetProcessDataTaskStatus<-function(text){
  print(text)
}

POST body

For non-trivial parameters, retrieve them from the body sent as part of the request. JSON is the preferred standard for API transmission. The following gives sample input, followed by Python and R usage.

Sample Input
{
    "container_uri": "https://myblobacct.blob.core.windows.net/user?st=2018-08-02T12%3A01%3A00Z&se=5200-08-03T12%3A01%3A00Z&sp=rwl&sv=2017-04-17&sr=c&sig=xxx",
    "run_id": "myrunid"
}
Python and Flask
from flask import Flask, request
import json

post_body = request.get_json()

print(post_body['run_id'])
print(post_body['container_uri'])
R and Plumber
library(jsonlite)

#* @post /process-data
ProcessDataAPI<-function(req, res){
  post_body <- req$postBody
  input_data <- fromJSON(post_body, simplifyDataFrame=TRUE)

  print(input_data$run_id)
  print(input_data$container_uri)
}

Output handling

Then, you need to send back your model's results as output. Two return types are important when dealing with hosted ML APIs: non-binary and binary.

Non-binary data

You may need to return non-binary data, like simple strings or numbers. The preferred method to return non-binary data is to use JSON.

Python and Flask
import json

def post(self):
    ret = {}
    ret['run_id'] = myrunid  # assumes myrunid was defined earlier by your model code
    ret['container_uri'] = 'https://myblobacct.blob.core.windows.net/user?st=2018-08-02T12%3A01%3A00Z&se=5200-08-03T12%3A01%3A00Z&sp=rwl&sv=2017-04-17&sr=c&sig=xxx'

    return json.dumps(ret)
R and Plumber
ProcessDataAPI<-function(req, res){
  post_body <- req$postBody
  input_data <- fromJSON(post_body, simplifyDataFrame=TRUE)

  # Return JSON containing run_id and container_uri
  data.frame(input_data$run_id, input_data$container_uri)
}

Binary data

You may also need to return binary data, like images.

Python and Flask
from io import BytesIO
import tifffile
from flask import request, send_file

ACCEPTED_CONTENT_TYPES = ['image/tiff', 'application/octet-stream']

if request.headers.get("Content-Type") in ACCEPTED_CONTENT_TYPES:
    tiff_file = tifffile.imread(BytesIO(request.data))
    # Do something with the tiff_file...
    prediction_stream = BytesIO()
    # Create your image to return...
    prediction_stream.seek(0)
    return send_file(prediction_stream)
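
When returning images this way, you may also want to pass an explicit mimetype, e.g. send_file(prediction_stream, mimetype='image/tiff'), so that clients receive the correct Content-Type header.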

Function decorator detail

We use function decorators to create APIs out of your functions, such as those that execute a model. Here, we will detail the two decorators and their parameters.

There are two decorators:

  • @ai4e_service.api_async_func, which wraps the function as an async/long-running API.
  • @ai4e_service.api_sync_func, which wraps the function as a sync API that returns right away.

Each decorator contains the following parameters:

  • api_path = '/': Specifies the endpoint of the API. This comes after the API_PREFIX value in the Dockerfile. For example, if the Dockerfile entry is ENV API_PREFIX=/v1/my_api/tasker and the api_path is as it is specified here, the complete endpoint of your API will be http://localhost:80/v1/my_api/tasker/
  • methods = ['POST']: Specifies the methods accepted by the API.
  • request_processing_function = process_request_data: Specifies the function to call before your endpoint function is called. This function will pre-process the request data and is located in your code. To work with request data, you must assign and return the request data as part of a dictionary that will be extracted later in your model function.
  • maximum_concurrent_requests = 5: If the number of requests exceeds this limit, a 503 is returned to the caller.
  • content_types = ['application/json']: An array of accepted content types. If the requested type is not found in the array, a 503 will be returned.
  • content_max_length = 1000: The maximum length of the request data (in bytes) permitted. If the length of the data exceeds this setting, a 503 will be returned.
  • trace_name = 'post:my_long_running_funct': A trace name to associate with this function. This allows you to search logs and metrics for this particular function.
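
Putting these parameters together, here is a hedged sketch of an async endpoint, based on the base-py example; process_request_data and the function body are illustrative, and ai4e_service is assumed to be in scope as in the examples:

def process_request_data(request):
    # Pre-process the request and return its data as a dictionary;
    # the wrapped function receives this dictionary via kwargs.
    return {'data': request.get_json()}

@ai4e_service.api_async_func(
    api_path = '/detect',
    methods = ['POST'],
    request_processing_function = process_request_data,
    maximum_concurrent_requests = 5,
    content_types = ['application/json'],
    content_max_length = 10000,
    trace_name = 'post:detect')
def detect(*args, **kwargs):
    data = kwargs.get('data')
    # Call your model with data here and record results via the task manager.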

Create AppInsights instrumentation keys

Application Insights is an Azure service for application performance management. We have integrated with Application Insights to provide advanced monitoring capabilities. You will need to generate both an Instrumentation key and an API key to use in your application.

The instrumentation key is for general logging and tracing. This is found under the "Properties" section for your Application Insights instance in the Azure portal.

Search for App Insights

Click Create, then choose a name for your Application Insights resource. For Application Type, choose General from the drop-down menu. For Resource Group, select "Use existing" and choose the resource group that you created earlier.

Create App Insights

Once your AppInsights resource has successfully deployed, navigate to the resource from your home screen, and locate the Instrumentation Key.

Get Instrumentation Key

Next, create a Live Metrics API key. Scroll down in the left menu to find API Access within Application Insights, and click Create API key. When creating the key, be sure to select "Authenticate SDK control channel".

Get API Key

Get API Key

Copy and store both of these keys in a safe place.

Install required packages

Now, let's look at the Dockerfile in your code. Update the Dockerfile to install any required packages. There are several ways to install packages. We cover popular ones here:

  • pip
RUN /usr/local/envs/ai4e_py_api/bin/pip install grpcio opencensus
  • apt-get
RUN apt-get install gfortran -y
  • R packages
RUN R -e 'install.packages("rgeos"); library(rgeos)'

Set environment variables

The Dockerfile also contains several environment variables that should be set for proper logging. You will need to add your two Application Insights keys here as well. Follow the instructions within the file.

# Application Insights keys and trace configuration
ENV APPINSIGHTS_INSTRUMENTATIONKEY=your_instrumentation_key_goes_here \
    LOCALAPPDATA=/app_insights_data \
    OCAGENT_TRACE_EXPORTER_ENDPOINT=localhost:55678

# The following variables will allow you to filter logs in AppInsights
ENV SERVICE_OWNER=AI4E_Test \
    SERVICE_CLUSTER=Local\ Docker \
    SERVICE_MODEL_NAME=base-py example \
    SERVICE_MODEL_FRAMEWORK=Python \
    SERVICE_MODEL_FRAMEOWRK_VERSION=3.6.6 \
    SERVICE_MODEL_VERSION=1.0

# The API_PREFIX is the URL path that will occur after your domain and before your endpoints
ENV API_PREFIX=/v1/my_api/tasker

You may modify other environment variables as well. In particular, you may want to change the environment variable API_PREFIX. We recommend using the format "/<version-number>/<api-name>/<function>" such as "/v1/my_api/tasker".

(Optional) Set up Azure blob storage

You will want to follow these steps if you are working from the blob-mount-py example. If you do not plan to use Azure blob storage in your app, skip ahead to Build and run your image.
First, you will need to create a new Azure Blob Container containing a file named config.csv. We also recommend using Azure Storage Explorer to aid in storage upload/download.

Create an Azure storage account by selecting "Storage Accounts" from the left menu and clicking the Add button. Make sure to select the resource group you previously created, and use a descriptive name for your storage account (must be lowercase letters or numbers). You may configure advanced options for your account here, or simply click "Review + create".

Create Storage Account

Click "Create" on the validation screen that appears. Once the storage account is deployed, click "Go to resource". You still need to create a container within your storage account. To do this, scroll down on the left menu of your storage account to click on "Blobs". Click the plus sign in the top left to create a new container.

Create Storage Account

Use a text editor to create an empty file named config.csv on your local machine. You can now navigate to your empty Azure container and upload the file as a blob.

Upload config.csv

Next, from within the Azure Portal or within Azure Storage Explorer, copy your blob's storage key. You can find your storage keys by clicking "Keys" on the left menu of your storage account.

Create Storage Account

You must also modify the blob_mount.json file as follows:

  • accountName: The name of your blob storage account.
  • accountKey: The storage account key that you copied.
  • containerName: The name of the container that you created within your storage account. This is the container that will be mapped locally.
  • mappedDirectory: The local path where your container will be mounted.

Note: You may map as many containers as you would like in this file. The blob mounter will mount all of them.
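
A hypothetical blob_mount.json with illustrative values (we assume a JSON array here so that multiple containers can be listed; the exact schema is defined by the blob-mount-py example):

[
  {
    "accountName": "mystorageaccount",
    "accountKey": "your_storage_account_key",
    "containerName": "mycontainer",
    "mappedDirectory": "/mnt/mycontainer"
  }
]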

Build and run your image

This section features a step-by-step guide to building and running your image.

Build your image

  1. Navigate to the directory containing your Dockerfile.
  2. Execute the docker build command:
docker build . -t your_custom_image_name:1

In the above command, -t denotes that you wish to tag your image with the name "your_custom_image_name" and the tag 1. Typically, tags represent the build number.

If you will be pushing your image to a repository, your docker build command will resemble:

docker build . -t your_registry_name.azurecr.io/your_custom_image_name:1

Run your image, locally

Run a container based on your image:

docker run -p 8081:80 "your_custom_image_name:1"

In the above command, the -p switch maps a local port to the container port, in the form -p host_port:container_port. The host_port is arbitrary and will be the port to which you issue requests. Ensure, however, that the container_port is exposed in the Dockerfile with the entry:

EXPOSE 80

TIP: Depending on your git settings and your operating system, the "docker run" command may fail with the error standard_init_linux.go:190: exec user process caused "no such file or directory". If this happens, you need to change the end-of-line characters in startup.sh to LF. One way to do this is using VS Code; open the startup.sh file and click on CRLF in the bottom right corner in the blue bar and select LF instead, then save.

If you find that there are errors and you need to go back and rebuild your docker container, run the following commands:

# This lists all of the docker processes running
docker ps

# Find the container ID in the list from the previous command, and replace <container-id> with that value to end the process
docker kill <container-id>

Then you can execute your docker build and docker run commands again. Additionally, on Windows, the Docker logs are located in your user account's AppData\Local\Docker folder (e.g., C:\Users\yourname\AppData\Local\Docker).

Make requests

Now that you have a local instance of your container running, you should issue requests against it and debug it locally. You may issue requests in whatever way you like, but we prefer using Postman to quickly test our endpoints.

Test endpoints

  1. Open Postman or your favorite API development tool.
  2. From your service code, determine which endpoint you want to test. If you are following one of our examples, the endpoint is: http://localhost:<host_port>/<my_api_prefix>/<route>. Also determine whether you will be issuing a POST or a GET (a scripted alternative to Postman is sketched after this list).
  3. In your API dev tool, select POST or GET and enter the endpoint you would like to test.
  4. If you are performing a POST, construct valid JSON for your endpoint and enter it into the body of the request. Alternatively, if you are POSTing an image, upload it to Postman (see screenshots below).
  5. Click "Send" or execute the call to your endpoint. The running container shell will contain any messages that are printed to the console.
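
If you prefer a script to Postman, here is a minimal sketch using Python's requests library; the port, endpoint path, and body are illustrative:

import requests

resp = requests.post(
    'http://localhost:8081/v1/my_api/tasker/example',
    json={'run_id': 'myrunid', 'container_uri': 'https://myblobacct.blob.core.windows.net/...'})
print(resp.status_code, resp.text)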

Posting JSON

From the Body tab, select "raw". Ensure that "JSON (application/json)" is selected from the drop-down option (note that a Content-type header is automatically added when you do this). Construct valid JSON in the window.

Post JSON

Posting an Image

In the Headers tab, create a Content-Type header of either image/jpeg or image/png. From the Body tab, select "binary". Upload your JPEG or PNG image here.

Post Image

Post Image
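
The same image request can be scripted; a sketch assuming a local sample.jpg and an illustrative endpoint path:

import requests

with open('sample.jpg', 'rb') as f:
    resp = requests.post('http://localhost:8081/v1/my_api/tasker/detect',
                         data=f.read(),
                         headers={'Content-Type': 'image/jpeg'})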

Publish to Azure Container Registry

If you haven't already, create an instance of Azure Container Registry (ACR) in your subscription.

  1. Log into the Azure Portal and click +Create a Resource. Click "Containers" and select "Container Registry". Select a name for your registry, and make sure you choose the same subscription and resource group you used for the AppInsights step above. Before creating the registry, make sure Admin user is set to "Enable". Create container registry

  2. After your Azure Container Registry is created, you can click on it to find your login server.

ACR Login Server

Note: if you just created an ACR (i.e., did not include an ACR uri in your previous build), you will need to tag your container image with your ACR uri.

  1. Tag your docker image:
docker tag your_custom_image_name:tag <your_login_server>/your_custom_image_name:tag
  2. Log into your ACR:
docker login --username <username> --password <password> <your_login_server>

You can find your admin ACR credentials in the Azure Portal.

ACR Credentials

  3. Push your image to your ACR:
docker push <your_login_server>/your_custom_image_name:tag

Run your container in Azure Container Instances

Running your container in ACI is straightforward. The easiest way is to open the Azure Portal to your ACR, select the repository and tag corresponding to the image you just pushed, click on it, and select "Run Instance". This will create a new ACI instance of your image.

Running Azure Container Instance

Issue requests to your hosted API

Now that your service is running within ACI, we can issue requests to it.

  1. Open the Azure Portal to your ACI. Click on the "Overview" tab and copy the "IP address".
  2. In your API tool, change localhost:port to the IP that you just copied.
  3. Issue your request.

To see logs/console output in your running container, click on "Containers" in your ACI in the Azure Portal and click the "Logs" tab. If you configured Application Insights, you may use that to review logs, identify issues, and view metrics.

Conclusion and Next Steps

Congratulations! You have successfully hosted a model in Azure and exposed it to be accessed as an API.

Cost implications

The services we used today are very reasonably priced. Here are the pricing details.

How to remove Azure resources

We hope you find this a valuable way to provide access to your machine learning model. But if you don't plan to use your API immediately and you want to release these resources in Azure to reduce your costs, you may do so. If you put all resources in a single resource group, then you can navigate to the Azure portal, click on "Resource Groups", and select the resource group that you have been using throughout this tutorial. From there, you can select "Delete resource group" and remove all of the resources at once. (If you didn't add them all to the same resource group, you can delete them all separately.)

Delete Resource Group

Next Steps

Upon completion of this quickstart tutorial, you may want to investigate the following.

  • Azure API Management: Integration with API Management will allow you to publish your APIs to external, partner, and employee developers securely and at scale.
  • Azure Kubernetes Services: If you expect significant traffic, you may want to consider a managed Kubernetes cluster instead of a single Azure container instance for hosting your model.

FAQs

  • What is "my_api_prefix"? "my_api_prefix" is a variable that denotes the prefix for all of your API endpoints. Typically, you would create a versioned path, so that you can easily make breaking changes in the future without harming existing users. A good prefix example would be: /v1/my_api/.
  • In the Python example, why is there an "AppInsights" and an "AI4EAppInsights" library? The Application Insights Python SDK is not an officially supported SDK, but it does provide great Flask integration. Because of this, our examples use the SDK's Flask integration, but we also provide a hardened (AI4EAppInsights) library that you should use for logging.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.


aiforearth-api-development's Issues

Feature Request: Add tips for quicker interactive app development

Hello, I'm making some updates to my application and am wondering if there are some suggestions that could be made for:

  1. hot reloading the application if a python file is changed
    I need to run docker build -t pytorchapp-prod -f ./Dockerfile-prod . and then docker run -it -p 8081:80 --runtime=nvidia pytorchapp-prod to reload any changes I've made to my app. This isn't so bad but can get a bit cumbersome.

I tried to mount the app folder in my host machine to the docker container with --mount type=bind,source="$(pwd)/pytorch_api",target=/app/pytorch_api and then add py-autoreload=3 in supervisord.conf to monitor for changes to application files (which I make in vscode on my host machine). However this doesn't seem to work and either the files aren't being monitored for changes or the changes are not being propagated to the application.

  2. (less important) sending logs from log = AI4EAppInsights to the terminal during development so that messages from calls to log.log_debug are visible.
    It'd be nice to do local development with the same logging statements that interface with the AppInsights service, for easier portability. Right now I'm replacing all these logging statements with print so that I can see them in the terminal logs to debug my app.

build error when installing appinsights

Following the tensorflow example, my Dockerfile runs the following commands and then fails when installing appinsights. The Dockerfile:

# Pull in the AI for Earth Base Image, so we can extract necessary libraries.
FROM mcr.microsoft.com/aiforearth/base-py:latest as ai4e_base

# Use any compatible Ubuntu-based image as your selected base image.
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
# Copy the AI4E tools and libraries to our container.
COPY --from=ai4e_base /ai4e_api_tools /ai4e_api_tools

# Add the AI4E API source directory to the PATH.
ENV PATH /usr/local/envs/ai4e_py_api/bin:$PATH
# Add the AI4E tools directory to the PYTHONPATH.
ENV PYTHONPATH="${PYTHONPATH}:/ai4e_api_tools"

# Install Miniconda, Flask, Supervisor, uwsgi
RUN ./ai4e_api_tools/requirements/install-api-hosting-reqs.sh

# Install Azure Blob SDK
RUN ./ai4e_api_tools/requirements/install-azure-blob.sh

# Install Application Insights
RUN ./ai4e_api_tools/requirements/install-appinsights.sh

Traceback:

# rave at rave-desktop in ~/CropMask_RCNN/app on git:master ✖︎ [10:02:09]
→ docker build . -t cropmask
Sending build context to Docker daemon  179.2MB
Step 1/23 : FROM mcr.microsoft.com/aiforearth/base-py:latest as ai4e_base
 ---> b96b2ebc8ea3
Step 2/23 : FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
 ---> 4ecbea4d32bd
Step 3/23 : COPY --from=ai4e_base /ai4e_api_tools /ai4e_api_tools
 ---> Using cache
 ---> 923a40ede187
Step 4/23 : ENV PATH /usr/local/envs/ai4e_py_api/bin:$PATH
 ---> Using cache
 ---> be940007fd2f
Step 5/23 : ENV PYTHONPATH="${PYTHONPATH}:/ai4e_api_tools"
 ---> Using cache
 ---> c1316f2a8527
Step 6/23 : RUN ./ai4e_api_tools/requirements/install-api-hosting-reqs.sh
 ---> Using cache
 ---> ffbf8e7fc9fa
Step 7/23 : RUN ./ai4e_api_tools/requirements/install-azure-blob.sh
 ---> Using cache
 ---> fe6d7a201594
Step 8/23 : RUN ./ai4e_api_tools/requirements/install-appinsights.sh
 ---> Running in 233013baa44c
Collecting applicationinsights
  Downloading https://files.pythonhosted.org/packages/a1/53/234c53004f71f0717d8acd37876e0b65c121181167057b9ce1b1795f96a0/applicationinsights-0.11.9-py2.py3-none-any.whl (58kB)
Installing collected packages: applicationinsights
Successfully installed applicationinsights-0.11.9
Collecting grpcio
  Downloading https://files.pythonhosted.org/packages/f2/5d/b434403adb2db8853a97828d3d19f2032e79d630e0d11a8e95d243103a11/grpcio-1.22.0-cp36-cp36m-manylinux1_x86_64.whl (2.2MB)
Collecting opencensus==0.6.0
  Downloading https://files.pythonhosted.org/packages/b8/79/466e39c5e81ec105bbbe42a5f85d5a5e27a75d629271af2dcc9408adcb12/opencensus-0.6.0-py2.py3-none-any.whl (124kB)
Collecting opencensus-ext-requests
  Downloading https://files.pythonhosted.org/packages/c7/ff/e12bdbed71ac483b70219b57af483f4783a2ab7b0cd60aea069e8c2d36a0/opencensus_ext_requests-0.1.2-py2.py3-none-any.whl
Collecting opencensus-ext-azure
  Downloading https://files.pythonhosted.org/packages/d4/87/643a1a068f066fa6a4a389526028a5a454d7c40bbdc65ea517e01014b3fa/opencensus_ext_azure-0.7.0-py2.py3-none-any.whl
Requirement already satisfied: six>=1.5.2 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from grpcio) (1.12.0)
Collecting opencensus-context<1.0.0,>=0.1.1 (from opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/2b/b7/720d4507e97aa3916ac47054cd75490de6b6148c46d8c2c487638f16ad95/opencensus_context-0.1.1-py2.py3-none-any.whl
Collecting google-api-core<2.0.0,>=1.0.0 (from opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/71/e5/7059475b3013a3c75abe35015c5761735ab224eb1b129fee7c8e376e7805/google_api_core-1.14.2-py2.py3-none-any.whl (68kB)
Collecting wrapt<2.0.0,>=1.0.0 (from opencensus-ext-requests)
  Downloading https://files.pythonhosted.org/packages/23/84/323c2415280bc4fc880ac5050dddfb3c8062c2552b34c2e512eb4aa68f79/wrapt-1.11.2.tar.gz
Requirement already satisfied: requests>=2.19.0 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from opencensus-ext-azure) (2.22.0)
Collecting psutil>=5.6.3 (from opencensus-ext-azure)
  Downloading https://files.pythonhosted.org/packages/1c/ca/5b8c1fe032a458c2c4bcbe509d1401dca9dda35c7fc46b36bb81c2834740/psutil-5.6.3.tar.gz (435kB)
Collecting contextvars; python_version >= "3.6" and python_version < "3.7" (from opencensus-context<1.0.0,>=0.1.1->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/83/96/55b82d9f13763be9d672622e1b8106c85acb83edd7cc2fa5bc67cd9877e9/contextvars-2.4.tar.gz
Collecting protobuf>=3.4.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/dc/0e/e7cdff89745986c984ba58e6ff6541bc5c388dd9ab9d7d312b3b1532584a/protobuf-3.9.0-cp36-cp36m-manylinux1_x86_64.whl (1.2MB)
Requirement already satisfied: setuptools>=34.0.0 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0) (41.0.1)
Collecting google-auth<2.0dev,>=0.4.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/c5/9b/ed0516cc1f7609fb0217e3057ff4f0f9f3e3ce79a369c6af4a6c5ca25664/google_auth-1.6.3-py2.py3-none-any.whl (73kB)
Requirement already satisfied: pytz in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0) (2019.2)
Collecting googleapis-common-protos<2.0dev,>=1.6.0 (from google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/eb/ee/e59e74ecac678a14d6abefb9054f0bbcb318a6452a30df3776f133886d7d/googleapis-common-protos-1.6.0.tar.gz
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (1.25.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (2019.6.16)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/envs/ai4e_py_api/lib/python3.6/site-packages (from requests>=2.19.0->opencensus-ext-azure) (3.0.4)
Collecting immutables>=0.9 (from contextvars; python_version >= "3.6" and python_version < "3.7"->opencensus-context<1.0.0,>=0.1.1->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/e3/91/bc4b34993ef77aabfd1546a657563576bdd437205fa24d4acaf232707452/immutables-0.9-cp36-cp36m-manylinux1_x86_64.whl (91kB)
Collecting cachetools>=2.0.0 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/2f/a6/30b0a0bef12283e83e58c1d6e7b5aabc7acfc4110df81a4471655d33e704/cachetools-3.1.1-py2.py3-none-any.whl
Collecting pyasn1-modules>=0.2.1 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/be/70/e5ea8afd6d08a4b99ebfc77bd1845248d56cfcf43d11f9dc324b9580a35c/pyasn1_modules-0.2.6-py2.py3-none-any.whl (95kB)
Collecting rsa>=3.1.4 (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Collecting pyasn1<0.5.0,>=0.4.6 (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus==0.6.0)
  Downloading https://files.pythonhosted.org/packages/6a/6e/209351ec34b7d7807342e2bb6ff8a96eef1fd5dcac13bdbadf065c2bb55c/pyasn1-0.4.6-py2.py3-none-any.whl (75kB)
Building wheels for collected packages: wrapt, psutil, contextvars, googleapis-common-protos
  Building wheel for wrapt (setup.py): started
  Building wheel for wrapt (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/d7/de/2e/efa132238792efb6459a96e85916ef8597fcb3d2ae51590dfd
  Building wheel for psutil (setup.py): started
  Building wheel for psutil (setup.py): finished with status 'error'
  ERROR: Complete output from command /usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3c3my185 --python-tag cp36:
  ERROR: running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.6
  creating build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
  creating build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
  running build_ext
  building 'psutil._psutil_linux' extension
  creating build/temp.linux-x86_64-3.6
  creating build/temp.linux-x86_64-3.6/psutil
  gcc -pthread -B /usr/local/envs/ai4e_py_api/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=563 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/envs/ai4e_py_api/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
  unable to execute 'gcc': No such file or directory
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for psutil
  Running setup.py clean for psutil
  Building wheel for contextvars (setup.py): started
  Building wheel for contextvars (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/a5/7d/68/1ebae2668bda2228686e3c1cf16f2c2384cea6e9334ad5f6de
  Building wheel for googleapis-common-protos (setup.py): started
  Building wheel for googleapis-common-protos (setup.py): finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/9e/3d/a2/1bec8bb7db80ab3216dbc33092bb7ccd0debfb8ba42b5668d5
Successfully built wrapt contextvars googleapis-common-protos
Failed to build psutil
ERROR: opencensus-ext-azure 0.7.0 has requirement opencensus<1.0.0,>=0.7.0, but you'll have opencensus 0.6.0 which is incompatible.
Installing collected packages: grpcio, immutables, contextvars, opencensus-context, protobuf, cachetools, pyasn1, pyasn1-modules, rsa, google-auth, googleapis-common-protos, google-api-core, opencensus, wrapt, opencensus-ext-requests, psutil, opencensus-ext-azure
  Running setup.py install for psutil: started
    Running setup.py install for psutil: finished with status 'error'
    ERROR: Complete output from command /usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-q2u74ciq/install-record.txt --single-version-externally-managed --compile:
    ERROR: running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
    creating build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
    running build_ext
    building 'psutil._psutil_linux' extension
    creating build/temp.linux-x86_64-3.6
    creating build/temp.linux-x86_64-3.6/psutil
    gcc -pthread -B /usr/local/envs/ai4e_py_api/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=563 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/envs/ai4e_py_api/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
    unable to execute 'gcc': No such file or directory
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command "/usr/local/envs/ai4e_py_api/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-hl7bjjdm/psutil/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-q2u74ciq/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-hl7bjjdm/psutil/
The command '/bin/sh -c ./ai4e_api_tools/requirements/install-appinsights.sh' returned a non-zero code: 1

Make the task ID in local development mode longer and persistent

When doing local development, currently an up to 4-digit number is given as the task ID. Can we make this a longer ID, since some APIs can stay in local mode for a long time?

It'd be great if the ID remains rather short, such as https://github.com/skorokithakis/shortuuid, if that won't create a problem.

Update June 12
Another feature request: is it possible to use a remote, persistent database to store the task ID even while in local mode? For some APIs that do not need scaling up, it's useful to be able to run the container on a single VM for a long time without having the task IDs reset after each restart.

Error in api_task_manager.UpdateTaskStatus when there's an exception in provided functions

There was an exception in my part of the app (an async API), but it seems the error was not propagated, and subsequent calls to the /task endpoint still show the job as "created":

{
    "uuid": 6061,
    "status": "created",
    "timestamp": "2019-08-08 01:00:46",
    "endpoint": "uri"
}

From the local log, the issue is

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/envs/ai4e_py_api/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/local/envs/ai4e_py_api/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/app/orchestrator_api/runserver.py", line 290, in _request_detections
    api_task_manager.UpdateTaskStatus(request_id,
UnboundLocalError: local variable 'request_id' referenced before assignment

Expected behavior: task status reflects the error in the custom code.

System Information Leak: Internal

AIforEarth-API-Development-master/Containers/common/blob_mounting/blob_mounter.py 44
An internal information leak occurs when system data or debugging information is sent to a local file, console, or screen via printing or logging.

Azure Container Instances don't work with Azure data blobs

I can't seem to mount Azure data blobs on ACI using Docker; it always runs into fuse mount failed. I can circumvent this locally by passing arguments to docker run (--cap-add SYS_ADMIN, --device /dev/fuse, --security-opt apparmor:unconfined). Is there a way to pass these arguments to Docker when launching a container instance?

Issue where self.tracer wasn't defined

I ran into an issue with the following code

where self.tracer was not defined.

if not isinstance(self.log, AI4EAppInsights):
    self.tracer = self.log.tracer

Above L42, I guess you could just add self.tracer = None to prevent this. Would you like me to make the PR?

when there is no detection above the threshold, render boxes errors

I'm using the tensorflow example to profile why rendering the boxes does not work on my own dataset (which I'll post in a separate issue in case anyone has suggestions). When I ran the suggested ResNet 50 faster RCNN model (http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_fgvc_2018_07_19.tar.gz) on this image https://farm3.staticflickr.com/2248/2195772708_716d50d8e9.jpg

I get this traceback because the scores are all too low to be over the 0.5 threshold. This results in an error because the draw_bounding_boxes_on_image function expects at least one box. A simple fix would be to not call the function if no scores are above the threshold and instead return the original image.
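
A sketch of that guard, placed in render_bounding_boxes before the drawing call:

    if len(display_boxes) == 0:
        return  # no detection above the threshold; leave the image unmodified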

Traceback

render_bounding_boxes(...
(0,)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in 
      1 render_bounding_boxes(
----> 2             boxes, scores, clsses, image, confidence_threshold=0.5)

 in render_bounding_boxes(boxes, scores, classes, image, label_map, confidence_threshold)
    110     display_boxes = np.array(display_boxes)
    111     print(display_boxes.shape)
--> 112     draw_bounding_boxes_on_image(image, display_boxes, display_str_list_list=display_strs)
    113 
    114 # the following two functions are from https://github.com/tensorflow/models/blob/master/research/object_detection/utils/visualization_utils.py

 in draw_bounding_boxes_on_image(image, boxes, color, thickness, display_str_list_list)
    140     return
    141   if len(boxes_shape) != 2 or boxes_shape[1] != 4:
--> 142     raise ValueError('Input must be of size [N, 4]')
    143   for i in range(boxes_shape[0]):
    144     display_str_list = ()

ValueError: Input must be of size [N, 4]
#%%

import tensorflow as tf
import numpy as np
import PIL.Image as Image
import PIL.ImageColor as ImageColor
import PIL.ImageDraw as ImageDraw
import PIL.ImageFont as ImageFont


# Core detection functions


def load_model(checkpoint):
    """Load a detection model (i.e., create a graph) from a .pb file.

    Args:
        checkpoint: .pb file of the model.

    Returns: the loaded graph.

    """
    print('tf_detector.py: Loading graph...')
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(checkpoint, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    print('tf_detector.py: Detection graph loaded.')

    return detection_graph


def open_image(image_bytes):
    """ Open an image in binary format using PIL.Image and convert to RGB mode
    Args:
        image_bytes: an image in binary format read from the POST request's body

    Returns:
        an PIL image object in RGB mode
    """
    image = Image.open(image_bytes)
    if image.mode not in ('RGBA', 'RGB'):
        raise AttributeError('Input image not in RGBA or RGB mode and cannot be processed.')
    if image.mode == 'RGBA':
        # Image.convert() returns a converted copy of this image
        image = image.convert(mode='RGB')
    return image


def generate_detections(detection_graph, image):
    """ Generates a set of bounding boxes with confidence and class prediction for one input image file.

    Args:
        detection_graph: an already loaded object detection inference graph.
        image: a PIL Image object

    Returns:
        boxes, scores, classes, and the image loaded from the input image_file - for one image
    """
    image_np = np.asarray(image, np.uint8)
    image_np = image_np[:, :, :3] # Remove the alpha channel

    #with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        image_np = np.expand_dims(image_np, axis=0)

        # get the operators
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        box = detection_graph.get_tensor_by_name('detection_boxes:0')
        score = detection_graph.get_tensor_by_name('detection_scores:0')
        clss = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')

        # performs inference
        (box, score, clss, num_detections) = sess.run(
            [box, score, clss, num_detections],
            feed_dict={image_tensor: image_np})

    return np.squeeze(box), np.squeeze(score), np.squeeze(clss), image  # these are lists of bboxes, scores etc


# Rendering functions


def render_bounding_boxes(boxes, scores, classes, image, label_map={}, confidence_threshold=0.5):
    """Renders bounding boxes, label and confidence on an image if confidence is above the threshold.

    Args:
        boxes, scores, classes:  outputs of generate_detections.
        image: PIL.Image object, output of generate_detections.
        label_map: optional, mapping the numerical label to a string name.
        confidence_threshold: threshold above which the bounding box is rendered.

    image is modified in place!

    """
    display_boxes = []
    display_strs = []  # list of list, one list of strings for each bounding box (to accommodate multiple labels)

    for box, score, clss in zip(boxes, scores, classes):
        if score > confidence_threshold:
            print('Confidence of detection greater than threshold: ', score)
            display_boxes.append(box)
            clss = int(clss)
            label = label_map[clss] if clss in label_map else str(clss)
            displayed_label = '{}: {}%'.format(label, round(100*score))
            display_strs.append([displayed_label])

    display_boxes = np.array(display_boxes)
    print(display_boxes.shape)
    draw_bounding_boxes_on_image(image, display_boxes, display_str_list_list=display_strs)

# the following two functions are from https://github.com/tensorflow/models/blob/master/research/object_detection/utils/visualization_utils.py

def draw_bounding_boxes_on_image(image,
                                 boxes,
                                 color='LimeGreen',
                                 thickness=4,
                                 display_str_list_list=()):
  """Draws bounding boxes on image.

  Args:
    image: a PIL.Image object.
    boxes: a 2 dimensional numpy array of [N, 4]: (ymin, xmin, ymax, xmax).
           The coordinates are in normalized format between [0, 1].
    color: color to draw bounding box. Default is LimeGreen.
    thickness: line thickness. Default value is 4.
    display_str_list_list: list of list of strings.
                           a list of strings for each bounding box.
                           The reason to pass a list of strings for a
                           bounding box is that it might contain
                           multiple labels.

  Raises:
    ValueError: if boxes is not a [N, 4] array
  """
  boxes_shape = boxes.shape
  if not boxes_shape:
    return
  if len(boxes_shape) != 2 or boxes_shape[1] != 4:
    raise ValueError('Input must be of size [N, 4]')
  for i in range(boxes_shape[0]):
    display_str_list = ()
    if display_str_list_list:
      display_str_list = display_str_list_list[i]
    draw_bounding_box_on_image(image, boxes[i, 0], boxes[i, 1], boxes[i, 2],
                               boxes[i, 3], color, thickness, display_str_list)


def draw_bounding_box_on_image(image,
                               ymin,
                               xmin,
                               ymax,
                               xmax,
                               color='red',
                               thickness=4,
                               display_str_list=(),
                               use_normalized_coordinates=True):
  """Adds a bounding box to an image.

  Bounding box coordinates can be specified in either absolute (pixel) or
  normalized coordinates by setting the use_normalized_coordinates argument.

  Each string in display_str_list is displayed on a separate line above the
  bounding box in black text on a rectangle filled with the input 'color'.
  If the top of the bounding box extends to the edge of the image, the strings
  are displayed below the bounding box.

  Args:
    image: a PIL.Image object.
    ymin: ymin of bounding box.
    xmin: xmin of bounding box.
    ymax: ymax of bounding box.
    xmax: xmax of bounding box.
    color: color to draw bounding box. Default is red.
    thickness: line thickness. Default value is 4.
    display_str_list: list of strings to display in box
                      (each to be shown on its own line).
    use_normalized_coordinates: If True (default), treat coordinates
      ymin, xmin, ymax, xmax as relative to the image.  Otherwise treat
      coordinates as absolute.
  """
  draw = ImageDraw.Draw(image)
  im_width, im_height = image.size
  if use_normalized_coordinates:
    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                  ymin * im_height, ymax * im_height)
  else:
    (left, right, top, bottom) = (xmin, xmax, ymin, ymax)
  draw.line([(left, top), (left, bottom), (right, bottom),
             (right, top), (left, top)], width=thickness, fill=color)
  try:
    font = ImageFont.truetype('arial.ttf', 24)
  except IOError:
    font = ImageFont.load_default()

  # If the total height of the display strings added to the top of the bounding
  # box exceeds the top of the image, stack the strings below the bounding box
  # instead of above.
  display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
  # Each display_str has a top and bottom margin of 0.05x.
  total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)

  if top > total_display_str_height:
    text_bottom = top
  else:
    text_bottom = bottom + total_display_str_height
  # Reverse list and print from bottom to top.
  for display_str in display_str_list[::-1]:
    text_width, text_height = font.getsize(display_str)
    margin = np.ceil(0.05 * text_height)
    draw.rectangle(
        [(left, text_bottom - text_height - 2 * margin), (left + text_width,
                                                          text_bottom)],
        fill=color)
    draw.text(
        (left + margin, text_bottom - text_height - margin),
        display_str,
        fill='black',
        font=font)
    text_bottom -= text_height - 2 * margin
#%%
model = load_model("./tf_iNat_api/faster_rcnn_resnet50_fgvc_2018_07_19/frozen_inference_graph.pb")

f = open("/home/rave/AIforEarth-API-Development/Examples/tensorflow/2195772708_716d50d8e9.jpg", 'rb')
image = open_image(f)

#%%
boxes, scores, clsses, image = generate_detections(model, image)

#%%
render_bounding_boxes(boxes, scores, clsses, image, confidence_threshold=0.5)

PIL/Pillow does not support the TIFF format, so the API can't be used with multichannel TIFF images

See python-pillow/Pillow#3984 and python-pillow/Pillow#1888

It doesn't look like the Pillow issues linked above will be fixed anytime soon, which means the API examples that use Pillow/PIL can't accept the TIFF format. All the models I've trained so far have been trained on int16 TIFF RGB files from Landsat (most geospatial raster data comes in TIFF format). So I adapted the API to read the submission with a different library, rasterio:

https://github.com/ecohydro/CropMask_RCNN/blob/master/app/keras_iNat_api/keras_detector.py#L17-L37

But I can't seem to open a byte array with anything except PIL/Pillow. When I run the function above and submit a TIFF to the Docker server, I get the following error:

{
    "TaskId": 3130,
    "Status": "failed: <class 'AttributeError'>\nTraceback (most recent call last):\n  File \"/app/keras_iNat_api/runserver.py\", line 60, in detect\n    arr_for_detection, image_for_drawing = keras_detector.open_image(image_bytes)\n  File \"./keras_detector.py\", line 34, in open_image\n    arr = reshape_as_image(img.read())\nAttributeError: '_GeneratorContextManager' object has no attribute 'read'\n",
    "Timestamp": "2019-07-22 17:33:14",
    "Endpoint": "uri"
}

Any tips on how to read a multichannel TIFF byte array? I'd like my API to accept this format, since most geospatial imagery users would prefer an API that accepts the format Landsat imagery comes in (TIFF).
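The AttributeError in the traceback suggests that a generator-based context manager (e.g. a function decorated with contextlib.contextmanager) was called without a with block, so img is the context manager itself rather than an opened dataset. As a minimal sketch, assuming rasterio is installed in the container (the function name open_tiff_bytes and the variable image_bytes are illustrative, not from the repo code), a multichannel TIFF can be read from raw bytes with rasterio.io.MemoryFile:

import rasterio
from rasterio.plot import reshape_as_image

def open_tiff_bytes(image_bytes):
    # MemoryFile wraps the raw bytes in an in-memory GDAL dataset; both it
    # and the dataset it opens are context managers, so read() must be
    # called inside the with blocks, not on the context manager itself.
    with rasterio.io.MemoryFile(image_bytes) as memfile:
        with memfile.open() as dataset:
            # dataset.read() returns (bands, rows, cols);
            # reshape_as_image reorders it to (rows, cols, bands).
            return reshape_as_image(dataset.read())

This returns a NumPy array in (rows, cols, bands) order without going through PIL/Pillow.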

Better message when task ID is not found (local development)

When calling the /task endpoint with a task ID that is not valid (e.g. a task ID from a previous session after the API was restarted, or simply an invalid ID), the message is malformed:

[screenshot: malformed /task response]

It would be good to report in the status field that the task ID was not found, and also to return a valid timestamp.
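For illustration, a well-formed not-found response might look like the following (the field names mirror the failed-task response above; the exact wording is a suggestion, not current framework behavior):

{
    "TaskId": 3130,
    "Status": "not found: no task with this ID exists in the current session",
    "Timestamp": "2019-07-22 17:33:14",
    "Endpoint": "uri"
}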
