
Toolkit for Apache Spark ML covering feature clean-up, a feature importance calculation suite, information gain selection, distributed SMOTE, model selection and training, hyperparameter optimization and selection, and model interpretability.

License: Other


automl-toolkit's Introduction

Databricks Labs AutoML Toolkit

Release Notes | Python API Docs | Python Artifact | Developer Docs | Python Docs | Analysis Tools Docs | Demo | Release Artifacts | Contributors

This Databricks Labs project is an end-to-end supervised learning solution, not officially supported by Databricks, for automating:

  • Feature clean-up
    • Advanced NA fill, covariance calculations, collinearity determination, outlier filtering, and data casting
  • Feature Importance calculation suite
    • RandomForest or XGBoost determinations
  • Feature Interaction with Information Gain selection
  • Feature vectorization
  • Advanced train/test split techniques (including Distributed SMOTE (KSample))
  • Model selection and training
  • Hyperparameter optimization and selection
    • Hyperspace, Genetic, and MBO-based selection
  • Batch Prediction through serialized SparkML Pipelines
  • Logging of model results and training runs (using MLFlow)
  • Model interpretability (including distributed Shapley values)

This package utilizes Apache Spark ML and currently supports the following model family types:

  • Decision Trees (Regressor and Classifier)
  • Gradient Boosted Trees (Regressor and Classifier)
  • Random Forest (Regressor and Classifier)
  • Linear Regression
  • Logistic Regression
  • Multi-Layer Perceptron Classifier
  • Support Vector Machines
  • XGBoost (Regressor and Classifier)

NOTE: With the upgrade to Spark 3 (Scala 2.12), LightGBM is no longer supported, but it will be added in a future release.

Documentation

Scala API documentation can be found here

Python API documentation can be found here

Analytics Package API documentation can be found here

Installing - Recommended!

Databricks Labs AutoML can be pulled from Maven Central with the following coordinates. For example, to install version 0.8.1:

<dependency>
  <groupId>com.databricks.labs</groupId>
  <artifactId>automl-toolkit_2.12</artifactId>
  <version>0.8.1</version>
</dependency>
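
For SBT builds, the equivalent dependency in build.sbt (the standard SBT form of the same Maven Central coordinates shown above) would be:

libraryDependencies += "com.databricks.labs" %% "automl-toolkit" % "0.8.1"

The %% operator appends the Scala binary version, resolving to the automl-toolkit_2.12 artifact.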

Building

Databricks Labs AutoML can be built with either SBT or Maven.

This package requires Java 1.8.x and Scala 2.12.x to be installed on your system prior to building.

After cloning this repo onto your local system, navigate to the root directory and execute either:

Maven Build
mvn clean install -DskipTests
SBT Build
sbt package

If there is any StackOverflowError during the build, adjust the stack size on your computer's JVM. Example:

#For Maven
export MAVEN_OPTS=-Xss2m
#For SBT
export SBT_OPTS="-Xss2M"

The -DskipTests flag in the Maven command above skips unit test execution (it is not recommended to run the unit tests in local mode against this package, as they are asynchronous and incredibly CPU-intensive for this code base).

Setup

Once the artifact has been built, upload it to the Databricks workspace through either the DBFS API or the GUI. Once loaded into the workspace, use either the Libraries API or the GUI to attach the .jar to the cluster.

NOTE: It is not recommended to attach this library to all clusters on the account.

Use of an ML Runtime cluster configuration is highly advised, as it provides management of the dependent libraries and configurations 'out of the box'.

Attach the following libraries to the cluster:

  • The AutoML toolkit jar built above (automatedml_2.12-((version)).jar).
  • If using the PySpark API for the toolkit, the .whl file for the PySpark API.

IMPORTANT NOTE: As of release 0.7.1, the MLflow libraries on PyPI and Maven are NO LONGER NEEDED. Attaching them to your cluster WILL prevent the run from logging and will throw an exception. DO NOT ATTACH EITHER OF THEM.

Getting Started

This package provides a number of different levels of API interaction, from the highest-level "default only" FamilyRunner to low-level APIs that allow highly customizable workflows to be created for automated ML tuning and inference.

Since v0.6.0, we have included an API for working with pipeline semantics around feature engineering steps and full predict pipelines. For the purposes of a quick-start intro, the example below uses the highest-level API access point.

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner
import org.apache.spark.ml.PipelineModel

val data = spark.table("ben_demo.adult_data")
val overrides = Map(
  "labelCol" -> "label",
  "mlFlowLoggingFlag" -> false,
  "scalingFlag" -> true,
  "oneHotEncodeFlag" -> true,
  "pipelineDebugFlag" -> true
)
val randomForestConfig = ConfigurationGenerator
        .generateConfigFromMap("RandomForest", "classifier", overrides)

val runner = FamilyRunner(data, Array(randomForestConfig)).executeWithPipeline()

runner.bestPipelineModel("RandomForest").transform(data)

//Serialize it
runner.bestPipelineModel("RandomForest").write.overwrite().save("tmp/predict-pipeline-1")

// Load it for running inference
val pipelineModel = PipelineModel.load("tmp/predict-pipeline-1")
val predictDf = pipelineModel.transform(data)

This example takes the default configuration for all of the application parameters (except those overridden in the overrides Map) and executes the data preparation tasks, feature vectorization, and automatic tuning of each specified model family (here, a single RandomForest configuration). At the conclusion of each run, the results and model artifacts are logged to the MLflow location specified in the configuration.

For a listing of all available parameter overrides and their functionality, see the Developer Docs

Inference via MLflow Run ID

It is also possible to use the MLflow run ID for inference, provided MLflow logging was turned on during training. For usage, see this
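
A minimal sketch of what run-ID-based retrieval can look like is below. The class and accessor names used here (bestMlFlowRunId, PipelineModelInference.getPipelineModelByMlFlowRunId, and the loggingConfig field) are assumptions drawn from the pipeline semantics of this toolkit and should be verified against the Developer Docs for your release:

import com.databricks.labs.automl.pipeline.inference.PipelineModelInference

// Assumes a FamilyRunner trained via executeWithPipeline() with MLflow
// logging enabled (mlFlowLoggingFlag -> true), reusing the quick-start names.
val runId = runner.bestMlFlowRunId("RandomForest")     // assumed accessor for the best run's ID
val loggingConfig = randomForestConfig.loggingConfig   // assumed MLflow logging config field
val fittedPipeline =
  PipelineModelInference.getPipelineModelByMlFlowRunId(runId, loggingConfig)
fittedPipeline.transform(data)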

For all available pipeline APIs, please see the Developer Docs

Feedback

Issues with the application? Found a bug? Have a great idea for an addition? Feel free to file an issue or contact Ben

Contributing

Have a great idea that you want to add? Fork the repo and submit a PR!

Legal Information

This software is provided as-is and is not officially supported by Databricks through customer technical support channels. Support, questions, and feature requests can be communicated via email -> [email protected] or through the Issues page of this repo. Please see the legal agreement and understand that issues with the use of this code will not be answered or investigated by Databricks Support.

Core Contribution team

  • Lead Developer: Ben Wilson, Practice Leader, Databricks
  • Developer: Daniel Tomes, RSA Practice Leader, Databricks
  • Developer: Jas Bali, Sr. Solutions Consultant, Databricks
  • Developer: Mary Grace Moesta, Customer Success Engineer, Databricks
  • Developer: Nick Senno, Resident Solutions Architect, Databricks

automl-toolkit's People

Contributors

bali0019, benwilson2, geeksheikh, marygracemoesta, nathanknox, nsenno-dbr, wesley84



automl-toolkit's Issues

Issue with DataSplitUtility repartition(0)

When following this tutorial, I encounter the following error during feature selection thrown by DataSplitUtility:
java.lang.IllegalArgumentException: requirement failed: Number of partitions (0) must be positive.

The only thing I do differently from the tutorial is setting tunerTrainSplitMethod to "chronological", as in:

Map(
  ...
  "tunerTrainSplitMethod" -> "chronological",
  "tunerTrainSplitChronologicalColumn" -> "id",
  "tunerTrainSplitChronologicalRandomPercentage" -> 0.25,
  ...
)

Any ideas on how to fix the issue?

I am using:

  • Spark 3.2.0
  • Hadoop 3.3.1
  • Scala 2.12.15
  • automl-toolkit 0.8.1

Feature interaction: evaluation scoring on original input fields is too slow

Hey guys, I'm reading the source code, and I would like to sincerely thank you for all the work you've done here, and for making all of the code public too. But I noticed that some of the code in "FeatureInteraction" runs too slowly, for example:

val nominalScores = nominalFields.map { x =>
  x -> ColumnScoreData(
    scoreColumn(
      df,
      modelType,
      x,
      getFieldType("nominal"),
      totalRecordCount
    ),
    "nominal"
  )
}.toMap

val continuousScores = continuousFields.map { x =>
  x -> ColumnScoreData(
    scoreColumn(
      df,
      modelType,
      x,
      getFieldType("continuous"),
      totalRecordCount
    ),
    "continuous"
  )
}.toMap

Are there any suggestions for parallelism? Looking forward to your reply!
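
One possible direction, as a sketch only: score the columns concurrently with Scala parallel collections. This assumes scoreColumn is thread-safe, that the driver can submit several Spark jobs at once, and that df is worth caching; all identifiers (nominalFields, continuousFields, scoreColumn, getFieldType, ColumnScoreData, modelType, df, totalRecordCount) are taken from the snippet above:

// Cache once so each concurrent scoring pass does not re-read the source.
df.cache()

// .par (available without extra imports on Scala 2.12) maps over the columns
// on the driver's fork-join pool, so the per-column scores run as concurrent
// Spark jobs instead of strictly sequentially; .seq converts the result back
// to an ordinary Map as before.
val nominalScores = nominalFields.par.map { x =>
  x -> ColumnScoreData(
    scoreColumn(df, modelType, x, getFieldType("nominal"), totalRecordCount),
    "nominal"
  )
}.seq.toMap

val continuousScores = continuousFields.par.map { x =>
  x -> ColumnScoreData(
    scoreColumn(df, modelType, x, getFieldType("continuous"), totalRecordCount),
    "continuous"
  )
}.seq.toMap

Spark will run the submitted jobs in parallel up to the available task slots; a FAIR scheduler pool can keep one long column score from starving the others.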

Fix example notebook so it works out of the box

Current example does not work.
https://github.com/databrickslabs/automl-toolkit/blob/master/demos/AutoMLPresentationDemo.dbc

Issues:

  • Load data
  • Parameterize hard-coded path names
    • Experiment name is hardcoded
      • Cmd 15 has experiment name hardcoded to /Users/[email protected]/autoMLTraining
      • Use dbutils.notebook.getContext().tags("user") to parameterize user home dir
    • Cmd 16 - dbfs:/tmp/tomes/ml/automl/models/$projectName/
    • Cmd 28 - /tmp/tomes/ml/automl/inference/auto_ml_demo
  • xgboost error - see below.
  • Cmd 3 High Level Process diagram minor misspelling: infernece -> inference
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:1102)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:1100)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:1100)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2592)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2503)
	at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:2107)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1506)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:2106)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply$mcV$sp(SparkParallelismTracker.scala:131)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.TaskFailedListener$$anon$1.run(SparkParallelismTracker.scala:130)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:893)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2243)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2265)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2284)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2309)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:961)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:960)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:309)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:171)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:151)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:62)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:61)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:61)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.x$4$lzycompute(BinaryClassificationMetrics.scala:155)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.x$4(BinaryClassificationMetrics.scala:146)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.confusions$lzycompute(BinaryClassificationMetrics.scala:148)

How to use in Databricks?

Hello, I followed the instructions in this repo and was able to build using SBT. I installed it on my cluster using the GUI, but I still have an error when importing. If you could provide guidance, that would be great.

Thanks!

'FeatureImportance' object has no attribute 'run_feature_importances'

I am trying to run the Python example on Databricks.
When I get to this line, I get the error in the subject:
fi_importances = FI.run_feature_importances("XGBoost", "classifier", dataframe, 20.0, "count", generic_overrides)

I have attached the wheel file pyAutoML-0.2.0-py3-none-any.whl to my cluster.

Will Python APIs be provided?

It seems that the current version only supports Scala APIs for building automation flows. Will Python APIs be provided?

Issue with DropColumnsTransformer when split is "chronological"

Since yesterday, I have been trying FamilyRunner, and it works past the DropColumnsTransformer stage as long as I don't use the "chronological" split method -- but it fails in DataSplitUtility.split as reported here.
The error I get with FamilyRunner is different from the above. In my understanding, DropColumnsTransformer drops the tunerTrainSplitChronologicalColumn despite the fact that I add it to fieldsToIgnoreInVector.

In my understanding, columns in fieldsToIgnoreInVector should be left untouched by all transformers, but it doesn't seem to be the case. It is possible to spot the problem with the debug flag. In my experiment, tunerTrainSplitChronologicalColumn -> "id_col", but it is not present in the step output dataset:

...
Output dataset schema: root
 |-- label_col: integer (nullable = true)
 |-- automl_internal_id: long (nullable = false)
 |-- features: vector (nullable = true)

=== End of class com.databricks.labs.automl.pipeline.DropColumnsTransformer Pipeline Stage log <==

I will look deeper into this and open a PR to fix it.

java.lang.ArrayIndexOutOfBoundsException when executing `FamilyRunner`

Hi,
I get java.lang.ArrayIndexOutOfBoundsException: 1 when I execute the FamilyRunner or AutomationRunner.
I used the practice example in README.md.
How can I solve this problem?

val data = spark.table("DF")

val overrides = Map("labelCol" -> "class")

val randomForestConfig = ConfigurationGenerator.generateConfigFromMap("RandomForest", "classifier", overrides)
val gbtConfig = ConfigurationGenerator.generateConfigFromMap("GBT", "classifier", overrides)
val logConfig = ConfigurationGenerator.generateConfigFromMap("LogisticRegression", "classifier", overrides)


val runner = FamilyRunner(data, Array(randomForestConfig, gbtConfig, logConfig)).execute()

And the error is below:

java.lang.ArrayIndexOutOfBoundsException: 1
	at com.databricks.labs.automl.utils.WorkspaceDirectoryValidation.validate(WorkspaceDirectoryValidation.scala:96)
	at com.databricks.labs.automl.utils.WorkspaceDirectoryValidation$.apply(WorkspaceDirectoryValidation.scala:123)
	at com.databricks.labs.automl.executor.DataPrep.prepData(DataPrep.scala:275)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:125)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:119)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at com.databricks.labs.automl.executor.FamilyRunner.execute(FamilyRunner.scala:119)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-983:1)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-983:48)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw.<init>(command-983:50)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw.<init>(command-983:52)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw.<init>(command-983:54)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw.<init>(command-983:56)
	at line3fa913e91f964622bbee0641bf7664fb138.$read.<init>(command-983:58)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$.<init>(command-983:62)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$.<clinit>(command-983)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval$.$print$lzycompute(<notebook>:7)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval$.$print(<notebook>:6)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
	at java.lang.Thread.run(Thread.java:748)

License

Hi,
What is the plan for the license of this project? Will it become Apache 2.0 like Delta Lake?

Also will there be a Spark 3.0/Scala 2.12 release?

Thanks

java.lang.NoSuchMethodError: org.mlflow.api.proto.Service$CreateRun$Builder.setRunName

Hi, after the FamilyRunner completes, I get the error below:

java.lang.NoSuchMethodError: org.mlflow.api.proto.Service$CreateRun$Builder.setRunName(Ljava/lang/String;)Lorg/mlflow/api/proto/Service$CreateRun$Builder;

at com.databricks.labs.automl.tracking.MLFlowTracker.com$databricks$labs$automl$tracking$MLFlowTracker$$generateMlFlowRun(MLFlowTracker.scala:148)
	at com.databricks.labs.automl.tracking.MLFlowTracker.logBest(MLFlowTracker.scala:401)
	at com.databricks.labs.automl.tracking.MLFlowTracker.logMlFlowDataAndModels(MLFlowTracker.scala:352)
	at com.databricks.labs.automl.AutomationRunner.logResultsToMlFlow(AutomationRunner.scala:1291)
	at com.databricks.labs.automl.AutomationRunner.liftedTree1$1(AutomationRunner.scala:1439)
	at com.databricks.labs.automl.AutomationRunner.executeTuning(AutomationRunner.scala:1438)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:129)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:119)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at com.databricks.labs.automl.executor.FamilyRunner.execute(FamilyRunner.scala:119)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-1020:5)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-1020:53)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw.<init>(command-1020:55)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw.<init>(command-1020:57)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw.<init>(command-1020:59)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw.<init>(command-1020:61)
	at linea339e92b41aa489e83cc214c9c04f05540.$read.<init>(command-1020:63)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$.<init>(command-1020:67)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$.<clinit>(command-1020)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval$.$print$lzycompute(<notebook>:7)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval$.$print(<notebook>:6)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
	at java.lang.Thread.run(Thread.java:748)

And my code is:

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner

val sourceData = spark.read.load("<DATA>")
val overrides = Map(
  "labelCol" -> "is_attributed",
  "mlFlowExperimentName" -> "<User-Defined-Name>",
  "mlFlowTrackingURI" -> "<Databricks Host URI>",
  "mlFlowAPIToken" -> dbutils.notebook.getContext().apiToken.get,
  "mlFlowModelSaveDirectory" -> "<User-Defined-Directory>",
  "inferenceConfigSaveLocation" -> "<User-Defined-Directory>",
  "tunerParallelism" -> 30
)
val randomForestConfig = ConfigurationGenerator.generateConfigFromMap("RandomForest", "classifier", overrides)
val gbtConfig = ConfigurationGenerator.generateConfigFromMap("GBT", "classifier", overrides)
val logConfig = ConfigurationGenerator.generateConfigFromMap("LogisticRegression", "classifier", overrides)

val runner = FamilyRunner(sourceData, Array(logConfig)).execute()

Additionally, I installed libraries on my cluster; the items are below:

  • automatedml_2_11_0_5_1.jar (JAR) - Installed - dbfs:/FileStore/jars/0391c7b8_92d3_4a41_92e4_1456ab5d4d54-automatedml_2_11_0_5_1-3990a.jar
  • azureml (PyPI) - Uninstall pending restart
  • Hyperopt (PyPI) - Installed
  • keras (PyPI) - Installed
  • koalas (PyPI) - Installed
  • ml.combust.mleap:mleap-spark_2.11:0.14.0 (Maven) - Installed
  • mleap (PyPI) - Installed
  • mlflow (PyPI) - Installed
  • org.mlflow:mlflow-client:1.2.0 (Maven) - Installed
  • org.mlflow:mlflow-scoring:1.2.0 (Maven) - Installed
  • seaborn (PyPI) - Installed
  • sklearn (PyPI) - Installed
  • xgboost (PyPI) - Installed
  • xgboost4j_spark_0_90.jar (JAR) - Installed - dbfs:/FileStore/jars/2afc2977_6cc0_4511_8b70_555882caa8af-xgboost4j_spark_0_90-b50ca.jar

How to install AutoML-Toolkit for Python in Databricks?

This module is very promising.
I want to use it, but I don't know how I can get the .whl file.

As stated in the Python installation document,
"Currently, this library exists as a .whl file in the /dist directory."

Where can I find the .whl file, and how do I add it in Databricks?

Many thanks in advance.
