
ibm / monitor-wml-model-with-watson-openscale


Monitor performance, fairness, and quality of a WML model with AI OpenScale APIs

Home Page: https://developer.ibm.com/patterns/monitor-performance-fairness-and-quality-of-a-wml-model-with-ai-openscale-apis

License: Apache License 2.0

Language: Jupyter Notebook 100.00%
Topics: ai, watson-machine-learning, jupyter-notebook, ibm-watson-studio

monitor-wml-model-with-watson-openscale's Issues

Having issues with WML service credentials

Hi, I tried to run the Jupyter notebook. Everything was fine until I reached the Bind machine learning engines section, where I got the error below:


KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
----> 1 binding_uid = ai_client.data_mart.bindings.add('WML instance', WatsonMachineLearningInstance(WML_CREDENTIALS))
      2 if binding_uid is None:
      3     binding_uid = ai_client.data_mart.bindings.get_details()['service_bindings'][0]['metadata']['guid']
      4 bindings_details = ai_client.data_mart.bindings.get_details()
      5 ai_client.data_mart.bindings.list()

/opt/conda/envs/Python36/lib/python3.6/site-packages/ibm_ai_openscale/engines/watson_machine_learning/instance.py in __init__(self, service_credentials)
     31
     32     validate_type(service_credentials['apikey'], 'service_credentials.apikey', str, True)
---> 33     validate_type(service_credentials['username'], 'service_credentials.username', str, True)
     34     validate_type(service_credentials['password'], 'service_credentials.password', str, True)
     35     AIInstance.__init__(self, service_credentials['instance_id'], service_credentials, WMLConsts.SERVICE_TYPE)

KeyError: 'username'

Note that my machine learning instance uses an IAM token (API key), so its credentials do not contain a username and password. Moreover, the Save and deploy the model section worked just fine using the same credentials:

from watson_machine_learning_client import WatsonMachineLearningAPIClient
import json

wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)
......

published_model_details = wml_client.repository.store_model(model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline)

By the way, I ran the notebook in Watson Studio with Python 3.6; the ibm-ai-openscale and pyspark versions are the same as in the notebook. Could you please take a look? Thank you.
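For context, here is a hedged sketch of the two credential shapes involved; the field values are placeholders and the exact keys come from the WML service credentials in IBM Cloud:

# IAM-based credentials (what newer WML instances provide) -- no username/password,
# which is why the binding call above raises KeyError: 'username'.
WML_CREDENTIALS = {
    "apikey": "***",
    "instance_id": "***",
    "url": "https://us-south.ml.cloud.ibm.com"
}

# Legacy-style credentials expected by this version of WatsonMachineLearningInstance.
WML_CREDENTIALS_LEGACY = {
    "instance_id": "***",
    "url": "https://us-south.ml.cloud.ibm.com",
    "username": "***",
    "password": "***"
}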

Typos in notebook

Hi,
Found some typos in the notebook:

  1. In the "load training data from github" section, the link is broken and needs to be replaced with: !wget https://raw.githubusercontent.com/IBM/monitor-wml-model-with-watson-openscale/master/data/german_credit_data_biased_training.csv


  2. In the "set up datamart" section there is an if statement that checks DB2_CREDENTIALS, which is never defined -- I assume it should be DB_CREDENTIALS (see the sketch after this list).
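A minimal sketch of the corrected check, assuming DB_CREDENTIALS is the variable the rest of the notebook expects:

# Broken: DB2_CREDENTIALS is never defined in the notebook.
# Corrected: reference the variable the setup instructions actually define.
if DB_CREDENTIALS is None:
    print('No postgres credentials supplied. Using internal datamart')
else:
    ai_client.data_mart.setup(db_credentials=DB_CREDENTIALS)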

The Jupyter notebook does not work

Below is the error seen when running the cell with this code:

wml_models = wml_client.repository.get_details()
model_uid = None
for model_in in wml_models['models']['resources']:
    if MODEL_NAME == model_in['entity']['name']:
        model_uid = model_in['metadata']['guid']
        break

if model_uid is None:
    print("Storing model ...")

    published_model_details = wml_client.repository.store_model(model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline)
    model_uid = wml_client.repository.get_model_uid(published_model_details)
    print("Done")

2019-07-15 06:27:06,712 - watson_machine_learning_client.wml_client_error - WARNING - Publishing model failed.
Reason: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Server': 'nginx', 'Date': 'Mon, 15 Jul 2019 06:27:06 GMT', 'Content-Type': 'application/json', 'Content-Length': '166', 'Connection': 'keep-alive', 'X-Frame-Options': 'DENY', 'X-Content-Type-Options': 'nosniff', 'X-XSS-Protection': '1', 'Pragma': 'no-cache', 'Cache-Control': 'private, no-cache, no-store, must-revalidate', 'X-WML-User-Client': 'PythonClient', 'x-global-transaction-id': '423161n6m4d1d3f9f6c13759kew16c2e9f7b', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'})
HTTP response body: {"trace":"423161n6m4d1d3f9f6c13759kew16c2e9f7b","errors":[{"code":"invalid_framework_input","message":"The framework value specified: mllib, 2.4 is not supported."}]}
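The error indicates the declared framework (mllib 2.4) is not accepted by that WML instance. A hedged sketch of pinning the framework metadata when storing the model, using the meta names from the older watson-machine-learning-client; the Spark version the service accepts depends on the plan and region:

from watson_machine_learning_client import WatsonMachineLearningAPIClient

wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)

# Declare a framework version the service accepts (e.g. Spark/mllib 2.3 rather
# than 2.4); alternatively, run the notebook on a matching Spark kernel.
model_props = {
    wml_client.repository.ModelMetaNames.NAME: MODEL_NAME,
    wml_client.repository.ModelMetaNames.FRAMEWORK_NAME: "mllib",
    wml_client.repository.ModelMetaNames.FRAMEWORK_VERSION: "2.3"
}

published_model_details = wml_client.repository.store_model(
    model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline)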

Problem in the notebook step sequence

At the "Enable quality monitoring" step there is a call to enable quality monitoring with threshold and minrecords, the problem is that it gives error when you try to run it

I found out that you need to run the "Insert historical payloads" section before that to make it work; more precisely, what needs to run first is the payload logging store call: subscription.payload_logging.store(records=recordsList)
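A hedged sketch of the required ordering; the PayloadRecord arguments are placeholders for a real scoring request/response pair:

from ibm_ai_openscale.supporting_classes import PayloadRecord
import time

# 1. Store at least one payload record so OpenScale learns the output schema ...
records_list = [PayloadRecord(request=scoring_payload, response=scoring_response, response_time=72)]
subscription.payload_logging.store(records=records_list)

# 2. ... and only then enable quality monitoring.
time.sleep(10)
subscription.quality_monitoring.enable(threshold=0.7, min_records=50)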

No longer works due to changes of IBM Cloud

When running this code in the notebook:

time.sleep(10)
subscription.quality_monitoring.enable(threshold=0.7, min_records=50)

Received errors:

MissingValue: No "output_data_schema" provided.
Reason: Column predictedLabel cannot be found in output_data_schema. Check if this column name is valid. Make sure that payload has been logged to populate schema.
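A hedged workaround sketch: score the deployment at least once so the payload log can populate output_data_schema (including the predictedLabel column) before enabling the monitor. The fields and values below are an illustrative subset of the German credit data, not the full model schema:

# Illustrative subset of the feature columns; a real payload must include every model input.
fields = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose", "LoanAmount"]
values = [["no_checking", 13, "credits_paid_to_date", "car_new", 1343]]
scoring_payload = {"fields": fields, "values": values}

# scoring_url comes from the deployment details.
scoring_response = wml_client.deployments.score(scoring_url, scoring_payload)

# Make sure the payload was logged (explicitly, if automatic logging is not set up),
# then retry enabling the monitor.
import time
time.sleep(10)
subscription.quality_monitoring.enable(threshold=0.7, min_records=50)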

Having issues running the notebook in my environment

I have an instance of Spark created in my cloud and use it as the runtime for this notebook. I get the following error and am not able to resolve it. I would like to demo OpenScale; can someone please take a look at the error and fix it? I reached out to Lukasz Cmielowski and he asked me to open this issue. Thank you.

Here is the code I get the error in:
from pyspark.ml.classification import RandomForestClassifier
classifier = RandomForestClassifier(featuresCol="features")

pipeline = Pipeline(stages=[si_CheckingStatus, si_CreditHistory, si_EmploymentDuration, si_ExistingSavings, si_ForeignWorker, si_Housing, si_InstallmentPlans, si_Job, si_LoanPurpose, si_OthersOnLoan, si_OwnsProperty, si_Sex, si_Telephone, si_Label, va_features, classifier, label_converter])
model = pipeline.fit(train_data)

Here is the error:
Py4JJavaError Traceback (most recent call last)
/usr/local/src/spark21master/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:

Py4JJavaError: An error occurred while calling o163.transform.
: java.lang.IllegalArgumentException: Data type StringType is not supported.
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$transformSchema$1.apply(VectorAssembler.scala:121)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$transformSchema$1.apply(VectorAssembler.scala:117)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.ml.feature.VectorAssembler.transformSchema(VectorAssembler.scala:117)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
at org.apache.spark.ml.feature.VectorAssembler.transform(VectorAssembler.scala:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:812)

During handling of the above exception, another exception occurred:

IllegalArgumentException Traceback (most recent call last)
in ()
3
4 pipeline = Pipeline(stages=[si_CheckingStatus, si_CreditHistory, si_EmploymentDuration, si_ExistingSavings, si_ForeignWorker, si_Housing, si_InstallmentPlans, si_Job, si_LoanPurpose, si_OthersOnLoan, si_OwnsProperty, si_Sex, si_Telephone, si_Label, va_features, classifier, label_converter])
----> 5 model = pipeline.fit(train_data)

/usr/local/src/spark21master/spark/python/pyspark/ml/base.py in fit(self, dataset, params)
62 return self.copy(params)._fit(dataset)
63 else:
---> 64 return self._fit(dataset)
65 else:
66 raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/usr/local/src/spark21master/spark/python/pyspark/ml/pipeline.py in _fit(self, dataset)
104 if isinstance(stage, Transformer):
105 transformers.append(stage)
--> 106 dataset = stage.transform(dataset)
107 else: # must be an Estimator
108 model = stage.fit(dataset)

/usr/local/src/spark21master/spark/python/pyspark/ml/base.py in transform(self, dataset, params)
103 return self.copy(params)._transform(dataset)
104 else:
--> 105 return self._transform(dataset)
106 else:
107 raise ValueError("Params must be a param map but got %s." % type(params))

/usr/local/src/spark21master/spark/python/pyspark/ml/wrapper.py in _transform(self, dataset)
250 def _transform(self, dataset):
251 self._transfer_params_to_java()
--> 252 return DataFrame(self._java_obj.transform(dataset._jdf), dataset.sql_ctx)
253
254

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:

/usr/local/src/spark21master/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
77 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
78 if s.startswith('java.lang.IllegalArgumentException: '):
---> 79 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
80 raise
81 return deco

IllegalArgumentException: 'Data type StringType is not supported.'
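The VectorAssembler is being handed a raw string column. A minimal hedged sketch of the intended pattern, with illustrative column names: index every categorical string column with a StringIndexer and list only indexed/numeric columns in the assembler's inputCols:

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

# Index categorical string columns (two shown for brevity).
si_Sex = StringIndexer(inputCol="Sex", outputCol="Sex_IX")
si_Housing = StringIndexer(inputCol="Housing", outputCol="Housing_IX")

# The assembler must receive only numeric columns: the *_IX outputs plus native
# numeric features. A raw StringType column here raises
# "Data type StringType is not supported".
va_features = VectorAssembler(
    inputCols=["Sex_IX", "Housing_IX", "LoanDuration", "LoanAmount"],
    outputCol="features")

classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_Sex, si_Housing, va_features, classifier])
model = pipeline.fit(train_data)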

Error while running FPGrowth Model in Spark

Getting this error while trying to run a piece of code. The code runs fine when using a different sample dataset.


fp_growth = FPGrowth(itemsCol="country", minSupport=0.1, minConfidence=0.5)
model = fp_growth.fit(grouped_orders)

Py4JJavaError Traceback (most recent call last)
in
1 fp_growth = FPGrowth(itemsCol="country", minSupport=0.1, minConfidence=0.5)
----> 2 model = fp_growth.fit(grouped_orders)

5 frames
/usr/local/lib/python3.8/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o164.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 35.0 failed 1 times, most recent failure: Lost task 0.0 in stage 35.0 (TID 33) (973340009f58 executor driver): org.apache.spark.SparkException: Items in a transaction must be unique but got WrappedArray(Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany).
at org.apache.spark.mllib.fpm.FPGrowth.$anonfun$genFreqItems$1(FPGrowth.scala:249)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
at org.apache.spark.mllib.fpm.FPGrowth.genFreqItems(FPGrowth.scala:254)
at org.apache.spark.mllib.fpm.FPGrowth.run(FPGrowth.scala:219)
at org.apache.spark.ml.fpm.FPGrowth.$anonfun$genericFit$1(FPGrowth.scala:180)
at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
at org.apache.spark.ml.fpm.FPGrowth.genericFit(FPGrowth.scala:162)
at org.apache.spark.ml.fpm.FPGrowth.fit(FPGrowth.scala:159)
at org.apache.spark.ml.fpm.FPGrowth.fit(FPGrowth.scala:129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.SparkException: Items in a transaction must be unique but got WrappedArray(Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany, Germany).
at org.apache.spark.mllib.fpm.FPGrowth.$anonfun$genFreqItems$1(FPGrowth.scala:249)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
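FPGrowth requires the items within each transaction to be unique. A hedged sketch of deduplicating before fitting; orders_df and the order_id/country column names are assumptions based on the snippet above:

from pyspark.sql import functions as F
from pyspark.ml.fpm import FPGrowth

# collect_set() (or F.array_distinct() on an existing array column) drops the
# duplicate country values that trigger "Items in a transaction must be unique".
grouped_orders = orders_df.groupBy("order_id").agg(F.collect_set("country").alias("country"))

fp_growth = FPGrowth(itemsCol="country", minSupport=0.1, minConfidence=0.5)
model = fp_growth.fit(grouped_orders)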

Add instructions to define DB_CREDENTIALS

It is not clear that DB_CREDENTIALS must be defined, but the notebook will fail if it is not:

under Set up datamart

try:
    data_mart_details = ai_client.data_mart.get_details()
    if 'internal_database' in data_mart_details and data_mart_details['internal_database']:
        if KEEP_MY_INTERNAL_POSTGRES:
            print('Using existing internal datamart.')
        else:
            if DB_CREDENTIALS is None:
                print('No postgres credentials supplied. Using existing internal datamart')
            else:
                print('Switching to external datamart')
                ai_client.data_mart.delete(force=True)
                ai_client.data_mart.setup(db_credentials=DB_CREDENTIALS)
    else:
        print('Using existing external datamart')
except:
    if DB_CREDENTIALS is None:
        print('Setting up internal datamart')
        ai_client.data_mart.setup(internal_db=True)
    else:
        print('Setting up external datamart')
        try:
            ai_client.data_mart.setup(db_credentials=DB_CREDENTIALS)
        except:
            print('Setup failed, trying Db2 setup')
            ai_client.data_mart.setup(db_credentials=DB_CREDENTIALS, schema=DB_CREDENTIALS['username'])
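A hedged sketch of the definition the instructions could add near the top of the notebook; the credential fields are placeholders for actual IBM Cloud service credentials:

# Option 1: keep the free internal datamart.
DB_CREDENTIALS = None
KEEP_MY_INTERNAL_POSTGRES = True

# Option 2: supply external database credentials (placeholder structure shown;
# paste the real service credentials from IBM Cloud instead).
# DB_CREDENTIALS = {
#     "hostname": "***",
#     "port": 31234,
#     "username": "***",
#     "password": "***",
#     "db": "***"
# }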

Changes for new `Databases for PostgreSQL`

We no longer have Compose for PostgreSQL and will need to update the notebook and instructions to use Databases for PostgreSQL.
The version of the Watson OpenScale Python SDK will need to be updated.

utils.py create_connection_string() fails on IBM Cloud `Databases for PostgreSQL` credentials

Previously, a user was able to use the Compose for PostgreSQL offering from IBM Cloud, but that offering is no longer available. I documented it with this diagram:
https://github.com/IBM/monitor-custom-ml-engine-with-watson-openscale/blob/master/doc/source/images/ChooseComposePostgres.png

Now, we need the Databases for PostgreSQL version.
The credential structure is different, however, and the utils.py:create_connection_string() function fails with this exception trace:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-6-df89728f4ac4> in <module>()
----> 1 create_postgres_schema(postgres_credentials=POSTGRES_CREDENTIALS, schema_name=SCHEMA_NAME)

/opt/conda/envs/DSX-Python35/lib/python3.5/site-packages/ibm_ai_openscale/utils/utils.py in create_postgres_schema(postgres_credentials, schema_name)
    273     import psycopg2
    274 
--> 275     conn_string = create_connection_string(postgres_credentials)
    276     conn = psycopg2.connect(conn_string)
    277     conn.autocommit = True

/opt/conda/envs/DSX-Python35/lib/python3.5/site-packages/ibm_ai_openscale/utils/utils.py in create_connection_string(postgres_credentials, db_name)
    291 
    292 def create_connection_string(postgres_credentials, db_name='compose'):
--> 293     hostname = postgres_credentials['uri'].split('@')[1].split(':')[0]
    294     port = postgres_credentials['uri'].split('@')[1].split(':')[1].split('/')[0]
    295     user = postgres_credentials['uri'].split('@')[0].split('//')[1].split(':')[0]

KeyError: 'uri'
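A hedged workaround sketch: build the connection string directly from the newer Databases for PostgreSQL credential layout instead of the flat 'uri' key the old Compose credentials provided. The nested key names below reflect the typical service-credential JSON and should be checked against the actual credentials object (the new service also generally requires SSL):

def create_databases_for_postgresql_connection_string(credentials, db_name='ibmclouddb'):
    # Typical layout: credentials['connection']['postgres'] holds the hosts,
    # authentication, and database entries (verify against your credentials).
    pg = credentials['connection']['postgres']
    hostname = pg['hosts'][0]['hostname']
    port = pg['hosts'][0]['port']
    user = pg['authentication']['username']
    password = pg['authentication']['password']
    dbname = pg.get('database', db_name)
    return "host={} port={} dbname={} user={} password={}".format(
        hostname, port, dbname, user, password)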
