ibm / predictive-model-on-watson-ml

This project forked from djccarew/watson-dojo-pm-tester

Create and deploy a predictive model using Watson Studio and Watson Machine Learning

Home Page: https://developer.ibm.com/patterns/create-and-deploy-a-scoring-model-to-predict-heartrate-failure/

License: Apache License 2.0


Deploy a model to predict heart failure with Watson Machine Learning

DISCLAIMER: This application is used for demonstrative and illustrative purposes only and does not constitute an offering that has gone through regulatory review.

This code pattern can be thought of as two distinct parts:

  1. A predictive model is built using AutoAI on IBM Cloud Pak for Data. The model is then deployed to the Watson Machine Learning service, where it can be accessed via a REST API.

  2. A Node.js web app that allows a user to input some data to be scored against the previous model.

When the reader has completed this Code Pattern, they will understand how to:

  • Build a predictive model with AutoAI on Cloud Pak for Data
  • Deploy the model to the IBM Watson Machine Learning service
  • Score some data against the model from a Node.js app, via an API call to the Watson Machine Learning service

Sample output

Here's an example of what the final web app looks like:

form

Architecture

  1. The developer creates a Cloud Pak for Data project.
  2. A model is created with AutoAI by uploading some data.
  3. Data is backed up and stored on Cloud Object Storage.
  4. The model is deployed using the Watson Machine Learning service.
  5. A Node.js web app is deployed on IBM Cloud. It calls the predictive model hosted on the Watson Machine Learning service.
  6. A user visits the web app, enters their information, and the predictive model returns a response.

"architecture diagram"

Prerequisites

NOTE: As of 10/16/2020, the Watson Machine Learning service on IBM Cloud is only available in the Dallas, London, Frankfurt, and Tokyo regions, not the Seoul or Sydney regions.

Steps

  1. Create an IBM Cloud API key
  2. Create a new Cloud Pak for Data project
  3. Build a model with AutoAI
  4. Deploy the model with WML
  5. Run the Node.js application

1. Create an IBM Cloud API key

To use the Watson Machine Learning service programmatically, we'll need an API key. Even though it isn't used until later, let's create one now.

Navigate to https://cloud.ibm.com/iam/apikeys and choose to create a new API key.

create api key

Give it a name and description, then click OK. Save the API key somewhere; you'll need it later.

generated api key
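
Later on, this API key is exchanged for a short-lived IAM access token before calling the model. Here's a minimal sketch of that exchange in Python (standard library only); the token endpoint and grant type are IBM Cloud IAM's documented values, but verify the response fields against the current IAM docs:

```python
# Sketch: exchange an IBM Cloud API key for an IAM access token.
import json
import urllib.parse
import urllib.request

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(api_key):
    """Return (headers, url-encoded body) for the IAM token exchange."""
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    body = urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    })
    return headers, body

def get_iam_token(api_key):
    headers, body = build_token_request(api_key)
    req = urllib.request.Request(IAM_TOKEN_URL, data=body.encode("utf-8"),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The returned access token is then sent as an Authorization: Bearer header on calls to the Watson Machine Learning service.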

2. Create a new Cloud Pak for Data project

Log into IBM's Cloud Pak for Data service (formerly known as Watson Studio). Once in, you'll land on the dashboard.

Create a new project by clicking Create a project.

new project

Choose an Empty project.

empty project

Enter a Name and associate the project with a Cloud Object Storage service.

empty project

NOTE: Creating a project in Watson Studio also creates a free-tier Object Storage service in your IBM Cloud account. Select the Free storage type to avoid fees.

At the project dashboard, click on the Assets tab and upload the data set associated with this repo, patientdataV6.csv.

upload data

3. Build a model with AutoAI

Now we're going to build a model from the data using IBM's AutoAI, a tool that automatically creates and tests multiple candidate models and surfaces the best one. Data science made easy!

Start by clicking on Add to project and choosing AutoAI experiment.

Add to project

Give it a Name and specify a Watson Machine Learning instance.

WML

Choose to use data from your project.

Choose data

Choose the patientdataV6.csv option.

data set

For the "What do you want to predict?" option, choose HEARTFAILURE.

right column

The experiment will take a few minutes to run. Once it completes, hover over the top option to make the Save as button appear, then click it.

experiment

Choose to save the experiment as a Model. You can optionally download a generated Jupyter Notebook that can be used to re-create the steps that were taken to create the model.

save

Your model will be saved. Click the dialog to view it in your project.

dialog

Once you're at the model overview, choose the Promote to deployment space button.

promote

4. Deploy the model with WML

To promote the model you must specify a deployment space. If no space exists yet, choose the New space + option to create one. This action associates the model with the space.

specify space

Navigate to the space using the hamburger menu (☰) on the top right and choose to View all spaces.

hamburger

Click on the space you saved the model to.

space

Choose to deploy the model by clicking the rocket ship icon.

deploy

Choose the Online deployment option and give it a name.

online

Your new deployment will appear.

new deployment

Click on the API reference tab and save the Endpoint. We'll be using this in our application.

endpoint
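
The deployed model accepts JSON scoring requests at that endpoint. A minimal sketch of building the request body follows (the input_data shape follows the WML v4 API; the field names below are illustrative placeholders, not the real column names; use the columns from patientdataV6.csv):

```python
import json

# Sketch of a WML v4 scoring request body. Field names here are
# illustrative placeholders; substitute the actual columns from
# patientdataV6.csv.
def build_scoring_payload(fields, values):
    """WML v4 online deployments expect this input_data shape."""
    return {"input_data": [{"fields": fields, "values": [values]}]}

fields = ["AGE", "BMI", "SEX"]   # placeholder subset of columns
values = [48, 27.4, "F"]         # one row to score
print(json.dumps(build_scoring_payload(fields, values)))
```

The payload is POSTed to the saved endpoint with an Authorization: Bearer header carrying an IAM token; the response contains a predictions array with the scored result.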

5. Run the Node.js application

You can deploy this application as a Cloud Foundry application to IBM Cloud by simply clicking the button below. This option creates a deployment pipeline, complete with a hosted GitLab project and DevOps toolchain.

Deploy to IBM Cloud

You may be prompted for an IBM Cloud API Key during this process. Use the Create (+) button to auto-fill this field and the others, then click the Deploy button to deploy the application.

pipeline

Before using the application, go to the Runtime section of the application, and in the Environment variables tab add your API_KEY and DEPLOYMENT_URL values from steps 1 and 4.

TIP Do NOT wrap these values with double quotes.

Once updated, your application will restart, and you can visit it by clicking on Visit App URL.

env vars

The app is fairly self-explanatory: simply fill in the data you want to score and click the Classify button to test how those figures score against our model. The model predicts the risk of heart failure for a patient with these medical characteristics.

risk

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ


predictive-model-on-watson-ml's Issues

TypeErrorTraceback (most recent call last)

When I run the second cell of Step 6.2 in the notebook, which is:

import json
import requests
from base64 import b64encode

token_url = url + "/v3/identity/token"

userAndPass = b64encode(bytes(username + ':' + password, "utf-8")).decode("ascii")
headers = { 'Authorization' : 'Basic %s' % userAndPass }

response = requests.request("GET", token_url, headers=headers)

watson_ml_token = json.loads(response.text)['token']
print(watson_ml_token)

I get the error:
TypeErrorTraceback (most recent call last)
in ()
5 token_url = url + "/v3/identity/token"
6
----> 7 userAndPass = b64encode(bytes(username + ':' + password, "utf-8")).decode("ascii")
8 headers = { 'Authorization' : 'Basic %s' % userAndPass }
9

TypeError: str() takes at most 1 argument (2 given)
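
A variant of the failing line that behaves the same on Python 2 and 3 is to encode the joined string with .encode("utf-8") instead of calling bytes(s, "utf-8"), which breaks on Python 2 where bytes is just an alias for str. A small self-contained sketch:

```python
from base64 import b64encode

# str.encode("utf-8") works identically on Python 2 and 3,
# unlike the two-argument bytes(s, "utf-8") call.
def basic_auth_header(username, password):
    user_and_pass = b64encode(
        (username + ":" + password).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic %s" % user_and_pass}

print(basic_auth_header("user", "pass"))
# {'Authorization': 'Basic dXNlcjpwYXNz'}
```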

Version conflicts with new Watson Studio Spark Python 3.6

To avoid current version conflicts in the default Spark Python 3.6 runtime environment:

tensorflow 1.13.1 requires tensorboard<1.14.0,>=1.13.0, which is not installed.
ibm-cos-sdk-core 2.4.3 has requirement urllib3<1.25,>=1.20, but you'll have urllib3 1.25.7 which is incompatible.
botocore 1.12.82 has requirement urllib3<1.25,>=1.20, but you'll have urllib3 1.25.7 which is incompatible.

Add

!pip uninstall -y botocore
!pip install 'botocore==1.12.253'
!pip uninstall -y urllib3
!pip install 'urllib3==1.24.3'
!pip uninstall -y tensorboard
!pip install 'tensorboard==1.13.1'

to the notebook
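
To sanity-check that the pins above actually satisfy the stated constraints (e.g. urllib3<1.25), here is a tiny comparison helper; note it assumes plain dotted numeric versions with no pre-release tags:

```python
# Minimal version comparison; assumes plain dotted numeric versions.
def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

def satisfies_upper_bound(installed, bound):
    """True if installed < bound, as in a 'urllib3<1.25' requirement."""
    return version_tuple(installed) < version_tuple(bound)

print(satisfies_upper_bound("1.25.7", "1.25"))  # False: conflicts
print(satisfies_upper_bound("1.24.3", "1.25"))  # True: the pin is fine
```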

Cannot connect ML instance to running app

I do not believe that this is an issue with this code pattern. It is likely a cloud infrastructure issue, but I am documenting it here.

When attempting to use the Machine Learning connection tab to connect to the deployed app, which is in Dev Advocacy/dev and Dallas:

image

This fails with no compatible app found:

image

When attempting to use the Cloud app connection tab to connect an existing compatible service, the ML service shows up:

image

But when the Connect button is clicked, there is an error Failed to get service plan:

image

TypeError: expected bytes, not str

Using Py3 we get an error with this:

headers = {'authorization': "Basic {}".format(b64encode(username + ":" + password).decode("ascii"))}

TypeError: expected bytes, not str

py2.7 print syntax doesn't work in py3.5

This will fail:
print "Number of training records: " + str(train_data.count())
with:

  File "<ipython-input-11-b5aca007116f>", line 6
    print "Number of training records: " + str(train_data.count())
                                       ^
SyntaxError: invalid syntax

Needs parentheses around the arguments to print().

Question. Where's this data from?

I'm looking for similar data, specifically for elderly people, so I wanted to know where it was downloaded from so I can search for similar datasets. Thanks

update for new UI

Some UI changes:
When you create a project, it no longer has you choose Data Science, so this is deprecated:

* Create a new project by clicking `+ New project` and choosing `Data Science`:

and you cannot choose Associated services +add -> machine learning, you need to choose Watson and then navigate to Machine Learning, so this is out-of-date:

* Click on the `Settings` tab for the project, scroll down to `Associated services` and click `+ Add service` ->  `Machine Learning`.

Create model artifact (abstraction layer) fails with Value error.

I've run all cells up to Create model artifact and verified model accuracy using test data.
The login cell, ml_repository_client.authorize(username, password), passes.
But when I run the model_artifact = MLRepositoryArtifact(....) cell, it fails with:
ValueError: Invalid input, pipeline_artifact can not be None for spark models.

Here's the entire output:

ValueErrorTraceback (most recent call last)
<ipython-input-22-c9e6cc205406> in <module>()
----> 1 model_artifact = MLRepositoryArtifact(model_rf, training_data=train_data, name="Heart Failure Prediction Model")

/usr/local/src/wml-libs.v17/spark-2.1/python-2.7/repository/mlrepositoryartifact/ml_repository_artifact.pyc in MLRepositoryArtifact(ml_artifact, training_data, training_dataref, training_target, feature_names, label_column_names, pipeline_artifact, name, meta_props)
     88                                        pipeline_artifact=pipeline_artifact,
     89                                        name=name,
---> 90                                        meta_props=meta_props)
     91     if lib_checker.installed_libs[SCIKIT]:
     92         if issubclass(type(ml_artifact), BaseEstimator):

/usr/local/src/wml-libs.v17/spark-2.1/python-2.7/repository/mlrepositoryartifact/ml_repository_artifact.pyc in _get_pipeline_model(pipeline_model, training_data, pipeline_artifact, name, meta_props)
    113 
    114 def _get_pipeline_model(pipeline_model, training_data, pipeline_artifact, name, meta_props):
--> 115     return SparkPipelineModelArtifact(pipeline_model, training_data=training_data, pipeline_artifact=pipeline_artifact, name=name, meta_props=meta_props)
    116 
    117 

/usr/local/src/wml-libs.v17/spark-2.1/python-2.7/repository/mlrepositoryartifact/spark_pipeline_model_artifact.pyc in __init__(self, ml_pipeline_model, training_data, uid, name, pipeline_artifact, meta_props)
     42                 type_identified = True
     43             else:
---> 44                 raise ValueError('Invalid input, pipeline_artifact can not be None for spark models.')
     45 
     46         if not type_identified and lib_checker.installed_libs[MLPIPELINE]:

ValueError: Invalid input, pipeline_artifact can not be None for spark models.

update for IAM token

The IAM token has replaced the Cloud Foundry username and password. The notebook and docs need to be changed to reflect this.

`!pip install --user -pixiedust==1.1.2` fails

Collecting pixiedust
Collecting astunparse (from pixiedust)
  Using cached https://files.pythonhosted.org/packages/8b/ea/d38686f1718e307d83673d905dcffd142640a31a217e4e76d1de78f21b20/astunparse-1.5.0-py2.py3-none-any.whl
Collecting geojson (from pixiedust)
  Using cached https://files.pythonhosted.org/packages/8d/39/231105abbfd2332f108cdbfe736e56324949fa9e80e536ae60a082cf96a9/geojson-2.4.0-py2.py3-none-any.whl
Collecting colour (from pixiedust)
  Using cached https://files.pythonhosted.org/packages/74/46/e81907704ab203206769dee1385dc77e1407576ff8f50a0681d0a6b541be/colour-0.1.5-py2.py3-none-any.whl
Collecting mpld3 (from pixiedust)
Collecting lxml (from pixiedust)
  Using cached https://files.pythonhosted.org/packages/eb/e2/02d18a1b3021b65409dd860f91cf0d68d79900f172bb3cc93cff21c3c951/lxml-4.2.5-cp35-cp35m-manylinux1_x86_64.whl
Collecting markdown (from pixiedust)
  Using cached https://files.pythonhosted.org/packages/6d/7d/488b90f470b96531a3f5788cf12a93332f543dbab13c423a5e7ce96a0493/Markdown-2.6.11-py2.py3-none-any.whl
Collecting wheel<1.0,>=0.23.0 (from astunparse->pixiedust)
  Using cached https://files.pythonhosted.org/packages/81/30/e935244ca6165187ae8be876b6316ae201b71485538ffac1d718843025a9/wheel-0.31.1-py2.py3-none-any.whl
Collecting six<2.0,>=1.6.1 (from astunparse->pixiedust)
  Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Installing collected packages: wheel, six, astunparse, geojson, colour, mpld3, lxml, markdown, pixiedust
Exception:
Traceback (most recent call last):
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/commands/install.py", line 342, in run
    prefix=options.prefix_path,
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/req/req_set.py", line 784, in install
    **kwargs
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/req/req_install.py", line 851, in install
    self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/req/req_install.py", line 1064, in move_wheel_files
    isolated=self.isolated,
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/wheel.py", line 247, in move_wheel_files
    prefix=prefix,
  File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/pip/locations.py", line 153, in distutils_scheme
    i.finalize_options()
  File "/opt/ibm/conda/miniconda3/lib/python3.5/distutils/command/install.py", line 251, in finalize_options
    raise DistutilsOptionError("can't combine user with prefix, "
distutils.errors.DistutilsOptionError: can't combine user with prefix, exec_prefix/home, or install_(plat)base

Testing Application returns 503 error

The ML model is built and works inside the Notebook using the APIs. The ML dashboard shows:
Deployment Type | online
Deployment Status | ACTIVE

The app is connected to the ML service.
But when I try to Score in the app GUI, I get the following:

guierror503

and the server app logs show:

errorlog
