spark-iforest's People

Contributors

dependabot[bot], titicaca


spark-iforest's Issues

Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.Client

In Spark local mode it works fine, but in Spark YARN mode it fails.

When I run the following code:
val predictions = model.transform(dataset) // the code before this line works fine

//The error information:
Name: java.lang.NoClassDefFoundError
Message: Lcom/sun/jersey/api/client/Client;
StackTrace: at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at org.apache.spark.util.SizeEstimator$.getClassInfo(SizeEstimator.scala:330)
at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:222)
at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:201)
at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:69)
at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1039)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1030)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:970)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1030)
at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:793)
at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1351)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:122)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1488)
at org.apache.spark.ml.iforest.IForestModel.transform(IForest.scala:77)
at org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:305)
at org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:305)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
at org.apache.spark.ml.PipelineModel.transform(Pipeline.scala:305)
... 46 elided
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.Client
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Then I googled this error and found that my $SPARK_HOME/jars/ directory already contains these jersey-related jars:
jersey-client-2.22.2.jar jersey-container-servlet-core-2.22.2.jar jersey-server-2.22.2.jar jetty-util-6.1.26.cloudera.4.jar
jersey-common-2.22.2.jar jersey-guava-2.22.2.jar jets3t-0.9.3.jar
jersey-container-servlet-2.22.2.jar jersey-media-jaxb-2.22.2.jar jetty-6.1.26.cloudera.4.jar

So I set up the SparkSession creation like this:
val spark = SparkSession
.builder()
.master("yarn") // yarn mode
.appName("iforest example")
.config("spark.jars", "../spark-iforest-2.2.0.jar,../jersey-client-2.22.2.jar")
.getOrCreate()

But it didn't work; the error was exactly the same.

Generated trees are repeated after numTrees/2

Tested using the data
[1.0, 20.0, 200.5, 0.002
2.3, 50.0, 300.75, 0.009
1.3, 20.4, 100.9, 0.0045
10.3, 200.4, 1000.9, 10.45]

and the command
IForest iForest = new IForest().setNumTrees(4)
.setContamination(0.3)
.setBootstrap(false)
.setMaxDepth(2)
.setSeed(123456L);

resulting in the following trees:
tree[0]
featureIndex: 1
featureValue: 119.1657

-leftChild
	featureIndex: 1
	featureValue: 40.0932

tree[1]
featureIndex: 0
featureValue: 2.48332

-leftChild
	featureIndex: 0
	featureValue: 2.1810

tree[2]
featureIndex: 1
featureValue: 119.1657

-leftChild
	featureIndex: 1
	featureValue: 40.0932

tree[3]
featureIndex: 0
featureValue: 2.48332

-leftChild
	featureIndex: 0
	featureValue: 2.18104

So tree[2] and tree[3] are identical copies of tree[0] and tree[1]. Worth mentioning that increasing the number of trees results in similar behavior, just later, i.e. if numTrees=10 then the trees repeat after 5.

I thought it had to do with the seed, but while debugging I found that trees [2,3] are not generated anew at all; at some point they seem to be copied from the first ones.

spark-iforest for streaming data?

Hi,

We tried spark-iforest on batch data and it seems to be working fine, but we would like to make it work for streaming data as well. Please let us know how we can get it to work.

Thanks,
Bhushan
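
A minimal sketch of one way to run an already-fitted model on streaming data, assuming Structured Streaming with foreachBatch (Spark 2.4+); each micro-batch is a bounded DataFrame, so transform() behaves as in the batch case. The paths, the schema, and the assumption that the incoming data already carries an assembled "features" vector column are placeholders, not a verified recipe.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField
from pyspark.ml.linalg import VectorUDT
from pyspark_iforest.ml.iforest import IForestModel

spark = SparkSession.builder.appName("iforest-streaming-sketch").getOrCreate()

# a model fitted offline on batch data (placeholder path)
model = IForestModel.load("/tmp/iforest_model")

# assume the incoming files already contain an assembled "features" vector column
schema = StructType([StructField("features", VectorUDT())])
stream_df = spark.readStream.schema(schema).parquet("/tmp/incoming")

def score_batch(batch_df, batch_id):
    # skip empty micro-batches; otherwise score them exactly like a batch DataFrame
    if batch_df.rdd.isEmpty():
        return
    model.transform(batch_df).write.mode("append").parquet("/tmp/scored")

query = stream_df.writeStream.foreachBatch(score_batch).start()
query.awaitTermination()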

Parameter approxQuantileRelativeError not usable when running with Spark 3

Setting approxQuantileRelativeError via the Scala or Python API of the spark3 branch returns the following error. Is there an easy way to fix this? Thank you in advance!

java.lang.UnsupportedOperationException: The default jsonEncode only supports string, vector and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double.
  at org.apache.spark.ml.param.Param.jsonEncode(params.scala:100)
  at org.apache.spark.ml.util.Instrumentation.$anonfun$logParams$2(Instrumentation.scala:109)
  at scala.Option.map(Option.scala:230)
  at org.apache.spark.ml.util.Instrumentation.$anonfun$logParams$1(Instrumentation.scala:106)
  at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
  at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
  at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
  at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
  at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
  at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
  at org.apache.spark.ml.util.Instrumentation.logParams(Instrumentation.scala:105)
  at org.apache.spark.ml.iforest.IForest.$anonfun$fit$1(IForest.scala:503)
  at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
  at scala.util.Try$.apply(Try.scala:213)
  at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
  at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:495)
  ... 47 elided

deployment methods - pypi or maven

Hi,
it would be great if there were a PyPI package or Maven coordinates, so that organizations behind firewalls and with security controls can use the library.

library import

Hi!
I successfully built the jar package, but I can't import the library with from pyspark_iforest.ml.iforest import *. Should I add something? I found this and tried:

  • adding the jar to the Spark config: conf = SparkConf(); conf.set('spark.jars', '/content/spark-2.4.8-bin-hadoop2.7/jars/spark-iforest-2.4.0.jar')
  • adding it to the environment: os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /content/spark-2.4.8-bin-hadoop2.7/jars/spark-iforest-2.4.0.jar pyspark-shell'
    But nothing helps.

BTW, what version of PySpark should we use?
You can replicate it here: google colab. Thank you!
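
For what it's worth, the jar names in this thread suggest matching the library version to the Spark version (spark-iforest-2.4.0 with Spark/PySpark 2.4, the spark3 branch for Spark 3). Below is a minimal sketch of a setup that should make the import resolvable, reusing the Colab-style path from the question; it is illustrative, not a verified fix. Two things have to hold at the same time: the jar must be on the JVM classpath when the SparkSession is created, and the pyspark_iforest Python package from the repo's python/ directory must be importable (e.g. pip-installed or shipped with --py-files).

from pyspark.sql import SparkSession

jar = "/content/spark-2.4.8-bin-hadoop2.7/jars/spark-iforest-2.4.0.jar"

# the jar has to be known before the JVM starts, so configure it on the builder
# (or pass --jars to spark-submit) rather than after getOrCreate()
spark = (SparkSession.builder
         .appName("iforest-import-sketch")
         .config("spark.jars", jar)
         .getOrCreate())

# the Python wrapper is a separate package: install the repo's python/ directory
# (e.g. `pip install ./spark-iforest/python`) so this import resolves
from pyspark_iforest.ml.iforest import IForest

iforest = IForest(contamination=0.3, maxDepth=2)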

Error saving the model: scala.NotImplementedError: The default jsonEncode only supports string, vector and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double.

Hi!
I would like to save the obtained model with the code
model.write.overwrite().save("myisolatedforestmodel")
but I bumped into the error:
scala.NotImplementedError: The default jsonEncode only supports string, vector and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double.
Can you please check what the issue is?

Does pyspark-iforest not support Spark 2.2.0?

My Spark version is 2.2.1. After building the jar from the v2.2.0 release and installing the pyspark-iforest package from the python directory, I get the following error when using it:

Traceback (most recent call last):
File "/appcom/apps/hduser0011/wangwenbin520/test_iforest.py", line 19, in <module>
 model = iforest.fit(df) File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/ml/base.py", line 64, in fit 
File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/ml/wrapper.py", line 265, in _fit 
File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/ml/wrapper.py", line 261, in _fit_java 
File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/ml/wrapper.py", line 124, in _transfer_params_to_java 
File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/ml/wrapper.py", line 113, in _make_java_param_pair 
File "/appcom/spark-2.2.1/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/appcom/spark-2.2.1/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco 
File "/appcom/spark-2.2.1/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319,
 in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o80.getParam. : java.util.NoSuchElementException: Param approxQuantileRelativeError does not exist. 
at org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:601) at
 org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:601) at
 scala.Option.getOrElse(Option.scala:121) at 
org.apache.spark.ml.param.Params$class.getParam(params.scala:600) at 
org.apache.spark.ml.PipelineStage.getParam(Pipeline.scala:42) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at
 java.lang.reflect.Method.invoke(Method.java:498) at 
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at 
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at 
py4j.Gateway.invoke(Gateway.java:280) at 
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at 
py4j.commands.CallCommand.execute(CallCommand.java:79) at 
py4j.GatewayConnection.run(GatewayConnection.java:214) at 
java.lang.Thread.run(Thread.java:745)

So the current Python package does not support Spark 2.2.1, is that right?

Regarding ApproxQuantile

Can you please tell me what the purpose of "ApproxQuantile" is? I could not find it in the sklearn implementation of the iForest algorithm, and I also can't find it (or I may have missed it) in the paper. The value of ApproxQuantile was giving a GC overhead error on my large dataset (130 million examples). Now I set it to 0.5 and it runs successfully, but I am confused about what setting the value to 0.5 actually does. Can you please tell?
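
For context, a small illustrative sketch of the underlying Spark call (variable names are placeholders): the library derives its prediction threshold from a quantile of the anomaly scores at probability 1 - contamination (see the next issue), and relativeError controls how approximate that quantile may be. A relativeError of 0 asks for the exact quantile, which is memory-hungry on large data; 0.5 allows a very coarse approximation, so the threshold (and hence which points get flagged) may shift, while the anomaly scores themselves are unaffected.

contamination = 0.1

# exact but expensive vs. coarse but cheap quantile of the score column
exact = scored_df.approxQuantile("anomalyScore", [1 - contamination], 0.0)
loose = scored_df.approxQuantile("anomalyScore", [1 - contamination], 0.5)
print(exact, loose)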

approxQuantile still asks for the exact quantile

First of all, many thanks for this library. I'm trying to use it in my setting, and I got it to work; however, I fail on larger datasets. While investigating I noticed the following:

Despite using org.apache.spark.sql.DataFrameStatFunctions.approxQuantile(), I think you still ask for the exact quantile in your transform(). The reason is that you pass 0 as the third parameter (relativeError) and thus don't allow for any approximation here.

val threshold = scoreDataset.stat.approxQuantile($(anomalyScoreCol),
      Array(1 - $(contamination)), 0)

See
approxQuantile @ spark.apache.org:
"relativeError - The relative target precision to achieve (>= 0). If set to zero, the exact quantiles are computed, which could be very expensive. Note that values greater than 1 are accepted but give the same result as 1."

Any chance you would allow parametrizing this relativeError in your IForest, please?

issue with saving the model with python on windows

I would really like to thank you for this amazing work.

However, I'm trying to save the model so I can use it later with Spark Streaming, but somehow I'm not able to do this and I'm getting an error saying "The default jsonEncode only supports string, vector, and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double".

Here is my code:

temp_path = tempfile.mkdtemp()
iforest_path = temp_path + "\iforest"
iforest_model.save(iforest_path)

I would really appreciate it if you can help, thanks in advance.

Data type StringType is not supported.

dataset.show // my data looks like this
+-------+---+---+---+---+---+---+---+---+---+----+
| _c0|_c1|_c2|_c3|_c4|_c5|_c6|_c7|_c8|_c9|_c10|
+-------+---+---+---+---+---+---+---+---+---+----+
|1000025| 5| 1| 1| 1| 2| 1| 3| 1| 1| 2|
|1002945| 5| 4| 4| 5| 7| 10| 3| 2| 1| 2|
|1015425| 3| 1| 1| 1| 2| 2| 3| 1| 1| 2|
|1016277| 6| 8| 8| 1| 3| 4| 3| 7| 1| 2|
|1017023| 4| 1| 1| 3| 2| 1| 3| 1| 1| 2|
|1017122| 8| 10| 10| 8| 7| 10| 9| 7| 1| 4|
|1018099| 1| 1| 1| 1| 2| 10| 3| 1| 1| 2|
|1018561| 2| 1| 2| 1| 2| 1| 3| 1| 1| 2|
|1033078| 2| 1| 1| 1| 2| 1| 1| 1| 5| 2|
|1033078| 4| 2| 1| 1| 2| 1| 2| 1| 1| 2|
|1035283| 1| 1| 1| 1| 1| 1| 3| 1| 1| 2|
|1036172| 2| 1| 1| 1| 2| 1| 2| 1| 1| 2|
|1041801| 5| 3| 3| 3| 2| 3| 4| 4| 1| 4|
|1043999| 1| 1| 1| 1| 2| 3| 3| 1| 1| 2|
|1044572| 8| 7| 5| 10| 7| 9| 5| 5| 4| 4|
|1047630| 7| 4| 6| 4| 6| 1| 4| 3| 1| 4|
|1048672| 4| 1| 1| 1| 2| 1| 2| 1| 1| 2|
|1049815| 4| 1| 1| 1| 2| 1| 3| 1| 1| 2|
|1050670| 10| 7| 7| 6| 4| 10| 4| 1| 2| 4|
|1050718| 6| 1| 1| 1| 2| 1| 3| 1| 1| 2|
+-------+---+---+---+---+---+---+---+---+---+----+
only showing top 20 rows

// when I run this code...
val pipeline = new Pipeline().setStages(Array(indexer, assembler, iForest))
val model = pipeline.fit(dataset)
val predictions = model.transform(dataset)

Name: java.lang.IllegalArgumentException
Message: Data type StringType is not supported.
StackTrace: at org.apache.spark.ml.feature.VectorAssembler$$anonfun$transformSchema$1.apply(VectorAssembler.scala:121)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$transformSchema$1.apply(VectorAssembler.scala:117)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.ml.feature.VectorAssembler.transformSchema(VectorAssembler.scala:117)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:184)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:184)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:184)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:136)

// My data does not contain any string-type columns...
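
This VectorAssembler error usually just means the columns are typed as strings (e.g. a CSV read without schema inference), even if the values look numeric. A hedged PySpark sketch of the usual fix, following the _c0.._c10 layout shown above (the same cast works in Scala):

from pyspark.sql.functions import col
from pyspark.ml.feature import VectorAssembler

# cast every feature column to double before assembling; _c0 is kept as an id here
feature_cols = ["_c{}".format(i) for i in range(1, 11)]
dataset_num = dataset.select(col("_c0"), *[col(c).cast("double").alias(c) for c in feature_cols])

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
assembled = assembler.transform(dataset_num)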

JavaPackage is not callable - pyspark

hi,

I followed your guidelines and got:

iforest = IForest(contamination=0.3, maxDepth=2)
Traceback (most recent call last):
File "/Users/htayeb/miniconda2/envs/cs/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2878, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
iforest = IForest(contamination=0.3, maxDepth=2)
File "/Users/htayeb/miniconda2/envs/cs/lib/python2.7/site-packages/pyspark/init.py", line 110, in wrapper
return func(self, **kwargs)
File "/Users/htayeb/miniconda2/envs/cs/lib/python2.7/site-packages/pyspark_iforest/ml/iforest.py", line 245, in init
self._java_obj = self._new_java_obj("org.apache.spark.ml.iforest.IForest", self.uid)
File "/Users/htayeb/miniconda2/envs/cs/lib/python2.7/site-packages/pyspark/ml/wrapper.py", line 67, in _new_java_obj
return java_obj(*java_args)
TypeError: 'JavaPackage' object is not callable

any idea why?

I am using python 2.7 and the spark context is alive.
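
"'JavaPackage' object is not callable" almost always means the JVM behind the running SparkContext does not have org.apache.spark.ml.iforest.IForest on its classpath, so py4j hands back a plain JavaPackage instead of the class. A small diagnostic sketch (illustrative only):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# if this prints None, or a list without the spark-iforest jar, the JVM was started
# without it; the session (and in a notebook, usually the kernel) has to be restarted
# with the jar configured, since adding it after the JVM is up will not fix this error
print(spark.sparkContext.getConf().get("spark.jars"))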

org.apache.spark.ml.linalg.SparseVector cannot be cast to org.apache.spark.ml.linalg.DenseVector???

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.ml.iforest.IForest$$anonfun$fit$1.apply(IForest.scala:539)
at org.apache.spark.ml.iforest.IForest$$anonfun$fit$1.apply(IForest.scala:495)
at org.apache.spark.ml.util.Instrumentation$$anonfun$11.apply(Instrumentation.scala:183)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:183)
at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:495)
at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:334)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
at org.apache.spark.examples.ml.IForestExample$.main(IForestExample.scala:58)
at org.apache.spark.examples.ml.IForestExample.main(IForestExample.scala)
Caused by: java.lang.ClassCastException: org.apache.spark.ml.linalg.SparseVector cannot be cast to org.apache.spark.ml.linalg.DenseVector

Unexpected keyword argument `seed` when initializing IForest via Python API

When I try to initialize the model (as per Python API convention) using the below statement -

from pyspark_iforest.ml.iforest import IForest

iforest = IForest(
    numTrees=100,
    maxSamples=256,
    maxFeatures=len(features),
    contamination=0.025,
    bootstrap=False,
    seed=42,
)

I receive the error - TypeError: __init__() got an unexpected keyword argument 'seed'. However, when the seed argument is removed, everything works as usual.

Note: Used spark3 branch to generate the distribution .tar.gz file.
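
A possible workaround until the constructor accepts seed (a sketch based on the setSeed call used elsewhere in this thread, not a confirmed fix): construct the estimator without the seed and set it afterwards.

from pyspark_iforest.ml.iforest import IForest

features = ["f1", "f2", "f3"]  # hypothetical feature list, for illustration only

iforest = IForest(
    numTrees=100,
    maxSamples=256,
    maxFeatures=len(features),
    contamination=0.025,
    bootstrap=False,
)
iforest.setSeed(42)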

Maven build error

Hi everyone!

I'm trying to build the jar with Maven in Google Colab and this error appears. Any suggestions? Maybe some requirements are missing? Maybe I should change the Maven or Scala versions?
Notebook is here tap

[ERROR] error: scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
[ERROR] at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:17)
[ERROR] at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:18)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:53)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
[ERROR] at scala.reflect.internal.Mirrors$RootsBase.getRequiredClass(Mirrors.scala:105)
[ERROR] at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
[ERROR] at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
[ERROR] at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1394)
[ERROR] at scala.tools.nsc.Global$Run.<init>(Global.scala:1215)
[ERROR] at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
[ERROR] at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
[ERROR] at scala.tools.nsc.Driver.process(Driver.scala:51)
[ERROR] at scala.tools.nsc.Driver.main(Driver.scala:64)
[ERROR] at scala.tools.nsc.Main.main(Main.scala)
[ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[ERROR] at scala_maven_executions.MainHelper.runMain(MainHelper.java:164)
[ERROR] at scala_maven_executions.MainWithArgsInFile.main(MainWithArgsInFile.java:26)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.209 s
[INFO] Finished at: 2022-06-22T12:32:49Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.3.2:compile (default) on project spark-iforest: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Getting NoSuchMethodError in IForest.fit

Hi, I'm getting the exact same error as issue #5 in version 2.4.0 with Spark 2.4 at runtime, although the tests all pass.

iforest = IForest(contamination=0.3, maxDepth=2)
model = iforest.fit(nz_df)

Results:

Py4JJavaError: An error occurred while calling o161.fit.
: java.lang.NoSuchMethodError: org.apache.spark.ml.util.Instrumentation$.instrumented(Lscala/Function1;)Ljava/lang/Object;
	at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:495)
	at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:334)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:748)

Any ideas what the problem could be?

Threshold calculation doesn't consider the training (fitted) data, so the anomaly score doesn't work for new data

Hi,

I would like to thank you first for implementing the library. Before integrating it into our Spark project we went to test it. We used the same dataset with scikit-learn and with this library, and this library doesn't work for new data (anomalies): it labels them as normal. I guess it calculates the threshold with respect to the new data, which results in declaring only a few of them as anomalies and the rest as "0", even though the scores are quite high (between 0.60 and 0.65).

If I build a new test set by appending the anomalous data to the training data and predict on that, it correctly labels them as anomalies. So I think the problem lies in the threshold calculation.

Here is the example

# assumed setup, not shown in the original report:
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark_iforest.ml.iforest import IForest
spark = SparkSession.builder.getOrCreate()
rng = np.random.RandomState(42)  # any fixed seed; the report does not show how rng was created

# Generate train data
X = 0.3 * rng.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
Xtrain = map(lambda x: Vectors.dense(x), X_train)
dfltrain = pd.DataFrame(list(Xtrain))
dfnbtrain = spark.createDataFrame(dfltrain,["features"])

# Generate some regular novel observations
X = 0.3 * rng.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
Xtest = map(lambda x: Vectors.dense(x), X_test)
dfltest = pd.DataFrame(list(Xtest))
dfnbtest = spark.createDataFrame(dfltest,["features"])

# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
XOutliers = map(lambda x: Vectors.dense(x), X_outliers)
dflOutliers = pd.DataFrame(list(XOutliers))
dfnbOutliers = spark.createDataFrame(dflOutliers,["features"])
# Init an IForest Object
iforest = IForest(maxSamples=100)
iforest.setSeed(42)
# Fit on a given data frame
model = iforest.fit(df)
y_pred_train = model.transform(dfnbtrain)
y_pred_test = model.transform(dfnbtest)
y_pred_outliers = model.transform(dfnbOutliers)


print("Accuracy:",y_pred_test.groupby("prediction").count().collect()[0]["count"]/y_pred_test.count())
#Accuracy: 0.9
print("Accuracy:",y_pred_outliers.groupby("prediction").count().collect()[1]["count"]/y_pred_outliers.count())
#Accuracy: 0.1

As you can see, for self-generated and obvious anomalies it makes wrong predictions even though the scores are quite high:

[Row(features=DenseVector([-0.4882, -3.3723]), anomalyScore=0.6670575241125163, prediction=0.0),
Row(features=DenseVector([-3.7972, 3.7012]), anomalyScore=0.6854332724633916, prediction=1.0),
Row(features=DenseVector([2.6878, 1.5678]), anomalyScore=0.6374480738168643, prediction=0.0),
Row(features=DenseVector([-0.7284, -2.6136]), anomalyScore=0.6566451502774648, prediction=0.0),
Row(features=DenseVector([-2.7485, -1.9981]), anomalyScore=0.5795116354078663, prediction=0.0),
Row(features=DenseVector([0.3938, 1.7168]), anomalyScore=0.5973296265149223, prediction=0.0),
Row(features=DenseVector([1.2816, -1.7605]), anomalyScore=0.6347619060759053, prediction=0.0),
Row(features=DenseVector([3.6389, 1.9032]), anomalyScore=0.6140531002586469, prediction=0.0),
Row(features=DenseVector([0.4348, 0.8938]), anomalyScore=0.6523854328525718, prediction=0.0),
Row(features=DenseVector([-0.6432, -2.0182]), anomalyScore=0.6014666752642815, prediction=0.0),
Row(features=DenseVector([-1.1522, 2.0628]), anomalyScore=0.6080298691384164, prediction=0.0),
Row(features=DenseVector([-3.8849, -3.0714]), anomalyScore=0.6730886352051572, prediction=0.0),
Row(features=DenseVector([-3.632, -3.6742]), anomalyScore=0.6730886352051572, prediction=0.0),
Row(features=DenseVector([2.8437, 1.6293]), anomalyScore=0.6342865718344055, prediction=0.0),
Row(features=DenseVector([-0.2066, -3.2173]), anomalyScore=0.6725254553370426, prediction=0.0),
Row(features=DenseVector([-0.0671, -0.2122]), anomalyScore=0.6600221092919962, prediction=0.0),
Row(features=DenseVector([-2.6144, -0.5292]), anomalyScore=0.6569516722831921, prediction=0.0),
Row(features=DenseVector([-0.812, 0.9268]), anomalyScore=0.6534675412783653, prediction=0.0),
Row(features=DenseVector([1.0807, -3.6376]), anomalyScore=0.6764378581357128, prediction=1.0),
Row(features=DenseVector([-1.0031, 1.0069]), anomalyScore=0.6521343894913213, prediction=0.0)]

By appending it to the training data and predicting, it labels them correctly.

So I think you need to move the threshold computation to the model-fitting part.
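
A sketch of the workaround being described (illustrative only, reusing the variable names from the example above): compute the threshold once from the training scores and apply that fixed threshold to new data yourself, instead of relying on the per-call threshold inside transform().

from pyspark.sql import functions as F

contamination = 0.1  # same meaning as the IForest parameter

# score the training data and freeze a threshold from it
train_scored = model.transform(dfnbtrain)
threshold = train_scored.approxQuantile("anomalyScore", [1.0 - contamination], 0.01)[0]

# apply that fixed threshold to any new data, ignoring transform()'s own prediction
def predict_with_fixed_threshold(df_new):
    scored = model.transform(df_new)
    return scored.withColumn(
        "prediction_fixed",
        (F.col("anomalyScore") > F.lit(threshold)).cast("double"),
    )

y_pred_outliers_fixed = predict_with_fixed_threshold(dfnbOutliers)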

Compile for Pyspark 3.x.x

Hi,
I've used your package on PySpark 2.4.0 and it works amazingly well. Thank you for your incredible work.
Due to some project constraints I now need to use PySpark 3.x.x. Is this even possible? If so, how should I proceed?

Best Regards,
Diogo Ribeiro

Cannot load iforest model

Hi @titicaca, I am trying to run the example pyspark code given in the Readme on my Pyspark notebook, but it returns an error while loading the model. I am using Spark v3.3.0 with Python 3 on Windows 10. I am using the spark3 branch.

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-3-e07a4e1b49a8> in <module>
     52 
     53 # Load a fitted model from a model path
---> 54 loaded_model = IForestModel.load(model_path)

C:\ProgramData\Anaconda3\lib\site-packages\pyspark\ml\util.py in load(cls, path)
    351     def load(cls, path: str) -> RL:
    352         """Reads an ML instance from the input path, a shortcut of `read().load(path)`."""
--> 353         return cls.read().load(path)
    354 
    355 

C:\ProgramData\Anaconda3\lib\site-packages\pyspark_iforest\ml\iforest.py in load(self, path)
     97         if not isinstance(path, basestring):
     98             raise TypeError("path should be a basestring, got type %s" % type(path))
---> 99         java_obj = self._jread.load(path)
    100         if not hasattr(self._clazz, "_from_java"):
    101             raise NotImplementedError("This Java ML type cannot be loaded into Python currently: %r"

C:\ProgramData\Anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
   1319 
   1320         answer = self.gateway_client.send_command(command)
-> 1321         return_value = get_return_value(
   1322         answer, self.gateway_client, self.target_id, self.name)
   1323 

C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
    188     def deco(*a: Any, **kw: Any) -> Any:
    189         try:
--> 190             return f(*a, **kw)
    191         except Py4JJavaError as e:
    192             converted = convert_exception(e.java_exception)

C:\ProgramData\Anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o193.load.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 22.0 failed 1 times, most recent failure: Lost task 5.0 in stage 22.0 (TID 94) (xxxxxxx executor driver): java.util.concurrent.ExecutionException: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: Private member cannot be accessed from type "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection".
	at org.sparkproject.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
	at org.sparkproject.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
	at org.sparkproject.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
	at org.sparkproject.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
	at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.waitForValue(LocalCache.java:3620)
	at org.sparkproject.guava.cache.LocalCache$Segment.waitForLoadingValue(LocalCache.java:2362)
	at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2251)
	at org.sparkproject.guava.cache.LocalCache.get(LocalCache.java:4000)
	at org.sparkproject.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
	at org.sparkproject.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:1437)
	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:205)
	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:39)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1363)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1360)
	at org.apache.spark.sql.execution.DeserializeToObjectExec.$anonfun$doExecute$1(objects.scala:97)
	at org.apache.spark.sql.execution.DeserializeToObjectExec.$anonfun$doExecute$1$adapted(objects.scala:96)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:877)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:877)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.sql.execution.SQLExecutionRDD.$anonfun$compute$1(SQLExecutionRDD.scala:52)
	at org.apache.spark.sql.internal.SQLConf$.withExistingConf(SQLConf.scala:158)
	at org.apache.spark.sql.execution.SQLExecutionRDD.compute(SQLExecutionRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:136)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: Private member cannot be accessed from type "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection".
	at org.apache.spark.sql.errors.QueryExecutionErrors$.compilerError(QueryExecutionErrors.scala:533)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:1502)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1587)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1584)
	at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
	at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
	at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
	at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
	... 36 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2293)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1021)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1020)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:304)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:171)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:151)
	at org.apache.spark.rdd.OrderedRDDFunctions.$anonfun$sortByKey$1(OrderedRDDFunctions.scala:64)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
	at org.apache.spark.ml.iforest.IForestModel$.org$apache$spark$ml$iforest$IForestModel$$loadTreeNodes(IForest.scala:249)
	at org.apache.spark.ml.iforest.IForestModel$IForestModelReader.load(IForest.scala:305)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.util.concurrent.ExecutionException: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: Private member cannot be accessed from type "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection".
	at org.sparkproject.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
	at org.sparkproject.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
	at org.sparkproject.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
	at org.sparkproject.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
	at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.waitForValue(LocalCache.java:3620)
	at org.sparkproject.guava.cache.LocalCache$Segment.waitForLoadingValue(LocalCache.java:2362)
	at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2251)
	at org.sparkproject.guava.cache.LocalCache.get(LocalCache.java:4000)
	at org.sparkproject.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
	at org.sparkproject.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:1437)
	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:205)
	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:39)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1363)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1360)
	at org.apache.spark.sql.execution.DeserializeToObjectExec.$anonfun$doExecute$1(objects.scala:97)
	at org.apache.spark.sql.execution.DeserializeToObjectExec.$anonfun$doExecute$1$adapted(objects.scala:96)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:877)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:877)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.sql.execution.SQLExecutionRDD.$anonfun$compute$1(SQLExecutionRDD.scala:52)
	at org.apache.spark.sql.internal.SQLConf$.withExistingConf(SQLConf.scala:158)
	at org.apache.spark.sql.execution.SQLExecutionRDD.compute(SQLExecutionRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:136)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 66, Column 8: Private member cannot be accessed from type "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection".
	at org.apache.spark.sql.errors.QueryExecutionErrors$.compilerError(QueryExecutionErrors.scala:533)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:1502)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1587)
	at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1584)
	at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
	at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
	at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
	at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
	... 36 more

About the anomaly score

Hi, thanks for your implementation of IForest in Spark. I have a little question about the anomaly score at prediction time. In the original paper, the lower the score is, the more normal the sample is. But in your description you mention it the opposite way. I just want to be sure.
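
For reference, the anomaly score defined in the original isolation forest paper (Liu, Ting, Zhou) is

s(x, n) = 2^(-E(h(x)) / c(n)),   with   c(n) = 2 H(n - 1) - 2(n - 1)/n   and   H(i) ≈ ln(i) + 0.5772156649,

where E(h(x)) is the average path length of x over the trees and n is the sub-sampling size. Under this definition a score close to 1 indicates an anomaly and a score well below 0.5 indicates a normal point, i.e. higher scores are more anomalous.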

TypeError: 'JavaPackage' object is not callable

I am running the python example provided, but got the following error:
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
File "/Users/kzhang/repositories/flightaware/data/ml_features/spark-iforest.py", line 19, in
iforest = IForest(contamination=0.3, maxDepth=2)
File "/Users/kzhang/anaconda3/lib/python3.7/site-packages/pyspark/init.py", line 110, in wrapper
return func(self, **kwargs)
File "/Users/kzhang/anaconda3/lib/python3.7/site-packages/pyspark_iforest/ml/iforest.py", line 245, in init
self._java_obj = self._new_java_obj("org.apache.spark.ml.iforest.IForest", self.uid)
File "/Users/kzhang/anaconda3/lib/python3.7/site-packages/pyspark/ml/wrapper.py", line 67, in _new_java_obj
return java_obj(*java_args)
TypeError: 'JavaPackage' object is not callable

Error while initializing the model

I have followed the instructions and successfully installed all the requirements, but I am unable to initialize the model.

from pyspark_iforest.ml.iforest import *

# Init an IForest Object

iforest = IForest(contamination=0.3, maxDepth=2)

error:


Py4JJavaError Traceback (most recent call last)
in
2
3 # Init an IForest Object
----> 4 iforest = IForest(contamination=0.3, maxDepth=2)

~/spark-3.0.0-preview2-bin-hadoop2.7/python/pyspark/__init__.py in wrapper(self, *args, **kwargs)
109 raise TypeError("Method %s forces keyword arguments." % func.__name__)
110 self._input_kwargs = kwargs
--> 111 return func(self, **kwargs)
112 return wrapper
113

~/opt/anaconda3/lib/python3.7/site-packages/pyspark_iforest/ml/iforest.py in __init__(self, featuresCol, predictionCol, anomalyScore, numTrees, maxSamples, maxFeatures, maxDepth, contamination, bootstrap, approxQuantileRelativeError)
243
244 super(IForest, self).__init__()
--> 245 self._java_obj = self._new_java_obj("org.apache.spark.ml.iforest.IForest", self.uid)
246 self._setDefault(numTrees=100, maxSamples=1.0, maxFeatures=1.0, maxDepth=10, contamination=0.1,
247 bootstrap=False, approxQuantileRelativeError=0.)

~/spark-3.0.0-preview2-bin-hadoop2.7/python/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
67 java_obj = getattr(java_obj, name)
68 java_args = [_py2java(sc, arg) for arg in args]
---> 69 return java_obj(*java_args)
70
71 @staticmethod

~/spark-3.0.0-preview2-bin-hadoop2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
1552 answer = self._gateway_client.send_command(command)
1553 return_value = get_return_value(
-> 1554 answer, self._gateway_client, None, self._fqn)
1555
1556 for temp_arg in temp_args:

~/spark-3.0.0-preview2-bin-hadoop2.7/python/pyspark/sql/utils.py in deco(*a, **kw)
96 def deco(*a, **kw):
97 try:
---> 98 return f(*a, **kw)
99 except py4j.protocol.Py4JJavaError as e:
100 converted = convert_exception(e.java_exception)

~/spark-3.0.0-preview2-bin-hadoop2.7/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(

Py4JJavaError: An error occurred while calling None.org.apache.spark.ml.iforest.IForest.
: java.lang.NoClassDefFoundError: org/apache/spark/ml/util/MLWritable$class
at org.apache.spark.ml.iforest.IForest.<init>(IForest.scala:335)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

PySpark iforest.fit() method error

Spark version: 2.3.2
Python: 3.6

Exception ignored in: <object repr() failed>
Traceback (most recent call last):
File "/usr/hdp/current/spark-client/python/pyspark/ml/wrapper.py", line 105, in del
SparkContext._active_spark_context._gateway.detach(self._java_obj)
AttributeError: 'IForest' object has no attribute '_java_obj'


Py4JJavaError Traceback (most recent call last)
in
24
25 # train the model
---> 26 model = iforest.fit(powerDF4)
27
28 # model summary

/usr/hdp/current/spark-client/python/pyspark/ml/base.py in fit(self, dataset, params)
130 return self.copy(params)._fit(dataset)
131 else:
--> 132 return self._fit(dataset)
133 else:
134 raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/usr/hdp/current/spark-client/python/pyspark/ml/wrapper.py in _fit(self, dataset)
286
287 def _fit(self, dataset):
--> 288 java_model = self._fit_java(dataset)
289 model = self._create_model(java_model)
290 return self._copyValues(model)

/usr/hdp/current/spark-client/python/pyspark/ml/wrapper.py in _fit_java(self, dataset)
283 """
284 self._transfer_params_to_java()
--> 285 return self._java_obj.fit(dataset._jdf)
286
287 def _fit(self, dataset):

/usr/hdp/current/spark-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:

/usr/hdp/current/spark-client/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()

/usr/hdp/current/spark-client/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(

Py4JJavaError: An error occurred while calling o850.fit.
: java.lang.NoSuchMethodError: org.apache.spark.ml.util.Instrumentation$.instrumented(Lscala/Function1;)Ljava/lang/Object;
at org.apache.spark.ml.iforest.IForest.fit(IForest.scala:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)

"Task not serializable" when loading trained model

Hello, I have saved a trained model and upon trying to load it I get the following error: https://pastebin.com/raw/jG2BRSwV

Traceback (most recent call last):
  File "/home/orestisk/Downloads/Netflow pipeline/distributed_netflow_inference.py", line 308, in <module>
    runPipeline(df)
  File "/home/orestisk/Downloads/Netflow pipeline/distributed_netflow_inference.py", line 168, in runPipeline
    model=IForestModel.load('spark_netflow_pickled_files/iforest')
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/ml/util.py", line 332, in load
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark_iforest/ml/iforest.py", line 95, in load
    java_obj = self._jread.load(path)
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/py4j-0.10.9.2-src.zip/py4j/java_gateway.py", line 1309, in __call__
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/py4j-0.10.9.2-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o629.load.
: org.apache.spark.SparkException: Task not serializable
  at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:416)
  at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:406)
  at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
  at org.apache.spark.SparkContext.clean(SparkContext.scala:2477)
  at org.apache.spark.rdd.RDD.$anonfun$map$1(RDD.scala:422)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
  at org.apache.spark.rdd.RDD.map(RDD.scala:421)
  at org.apache.spark.ml.iforest.IForestModel$.org$apache$spark$ml$iforest$IForestModel$$loadTreeNodes(IForest.scala:245)
  at org.apache.spark.ml.iforest.IForestModel$IForestModelReader.load(IForest.scala:305)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
  at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
  at py4j.Gateway.invoke(Gateway.java:282)
  at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
  at py4j.commands.CallCommand.execute(CallCommand.java:79)
  at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
  at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.NotSerializableException: org.apache.log4j.Logger
Serialization stack:
  - object not serializable (class: org.apache.log4j.Logger, value: org.apache.log4j.Logger@3b90bf18)
  - field (class: org.apache.spark.ml.iforest.IForestModel$, name: org$apache$spark$ml$iforest$IForestModel$$logger, type: class org.apache.log4j.Logger)
  - object (class org.apache.spark.ml.iforest.IForestModel$, org.apache.spark.ml.iforest.IForestModel$@1eeb4532)
  - element of array (index: 0)
  - array (class [Ljava.lang.Object;, size 1)
  - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
  - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class org.apache.spark.ml.iforest.IForestModel$, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic org/apache/spark/ml/iforest/IForestModel$.$anonfun$loadTreeNodes$2:(Lorg/apache/spark/ml/iforest/IForestModel$;Lscala/Tuple2;)Lscala/Tuple2;, instantiatedMethodType=(Lscala/Tuple2;)Lscala/Tuple2;, numCaptured=1])
  - writeReplace data (class: java.lang.invoke.SerializedLambda)
  - object (class org.apache.spark.ml.iforest.IForestModel$$$Lambda$3992/1149006421, org.apache.spark.ml.iforest.IForestModel$$$Lambda$3992/1149006421@e4b740)
  at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
  at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
  at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
  at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:413)
  ... 22 more

Is there any workaround for this? The README Python API example also fails on IForestModel.load. I am using Spark 3.2; here is my pom.xml as well: https://pastebin.com/raw/excYS1v6

Edit: I tried Spark 3.0 and I'm getting the same error.
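
For context, the serialization stack above points at a non-transient org.apache.log4j.Logger held by the IForestModel companion object being captured by the lambda used in loadTreeNodes. The sketch below only illustrates the usual way such a field is made closure-safe in Spark code; it is not a confirmed patch for this repository, and the object and method names are hypothetical.

import org.apache.log4j.Logger

// Hypothetical illustration: a logger declared @transient lazy is never written
// out by Java serialization, so an object holding it can be captured by Spark
// closures; each JVM simply re-creates the logger on first use.
object ClosureSafeLogging extends Serializable {
  @transient private lazy val logger: Logger = Logger.getLogger(getClass.getName)

  def logInfo(msg: String): Unit = logger.info(msg)
}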

Any update plan for Spark 2.4?

Hi,

There are several API changes in Spark 2.4 that break this package. Do you have a plan to update the version?

thanks,
Richard

backward (Spark) support

In my environment I sadly cannot upgrade Spark, and we're still on 2.2.0. It takes a lot of hacks to get the library running with pyspark and approxQuantileRelativeError (both features were added only with the library's upgrade to 2.4).

I wonder whether it would be difficult to keep the library somewhat backward compatible and also support a few older Spark versions.

No version for Spark 1.x

I am attempting to modify the code to be compatible with Spark 1.5, but I have discovered that many of the classes are features of Spark 2.x. Is there a plan to provide a version for Spark 1.x?

Pyspark Streaming Iforest

Error when loading a model using PipelineModel.load() in PySpark.

model = PipelineModel.load(path_to_model)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/util.py", line 362, in load
    return cls.read().load(path)
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 242, in load
    return JavaMLReader(self.cls).load(path)
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/util.py", line 304, in load
    return self._clazz._from_java(java_obj)
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 299, in _from_java
    py_stages = [JavaParams._from_java(s) for s in java_stage.stages()]
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 299, in <listcomp>
    py_stages = [JavaParams._from_java(s) for s in java_stage.stages()]
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/wrapper.py", line 227, in _from_java
    py_type = __get_class(stage_name)
  File "/home/users/spark-2.4.6-bin-hadoop2.7/python/pyspark/ml/wrapper.py", line 221, in __get_class
    m = __import__(module)
ModuleNotFoundError: No module named 'pyspark.ml.iforest'
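
For what it's worth, the traceback shows the failure happening in pyspark's Java-to-Python class mapping (it looks for a pyspark.ml.iforest module that the Python wrapper does not provide), not in the JVM reader itself. As a hedged sanity check, the same saved pipeline should normally be readable from the Scala side, assuming a Spark/jar combination where the reader works; the path below is a placeholder.

import org.apache.spark.ml.PipelineModel

// Placeholder path; requires an active SparkSession and the spark-iforest jar
// on the classpath. If this succeeds, the saved model itself is fine and the
// problem is confined to the Python-side class lookup.
val loaded: PipelineModel = PipelineModel.load("/path/to/saved/pipeline_model")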

load model

/usr/local/spark/python/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
65 java_obj = getattr(java_obj, name)
66 java_args = [_py2java(sc, arg) for arg in args]
---> 67 return java_obj(*java_args)
68
69 @staticmethod

TypeError: 'JavaPackage' object is not callable

java.lang.ArrayIndexOutOfBoundsException:0

In my IDE I am using this algorithm on streaming data. When the data sender is not sending anything and only the Spark Streaming job is running, the call loadedPipelineModel.transform(df) throws java.lang.ArrayIndexOutOfBoundsException: 0. I traced it to line 104 of the iForest code:
103 threshold = scoreDataset.stat.approxQuantile($(anomalyScoreCol),
104 Array(1 - $(contamination)), $(approxQuantileRelativeError))(0)

However, running the same code in the shell does not raise this error.
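
One workaround, sketched below under the assumption that the job processes micro-batches, is simply to skip batches that contain no rows, since approxQuantile over zero rows returns an empty array and indexing it with (0) throws the exception above. The helper name is hypothetical, and this only sidesteps the symptom; it is not a fix inside the library.

import org.apache.spark.ml.PipelineModel
import org.apache.spark.sql.DataFrame

// Hypothetical guard: only score non-empty micro-batches.
def scoreIfNonEmpty(model: PipelineModel, batch: DataFrame): Option[DataFrame] =
  if (batch.take(1).isEmpty) None else Some(model.transform(batch))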

Getting Py4JJavaError

When I run the demo:

from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
import tempfile


conf = SparkConf().setAppName('ansonzhou_test').setAll([
                                        ('spark.executor.memory', '8g'),
                                        ('spark.executor.memoryOverhead', '16g'),
                                        ('spark.executor.cores', '3'),
                                        ('spark.num.executors', '64'),
                                        ('spark.driver.memory', '8g'),
                                        ('spark.debug.maxToStringFields', '1000')
                                        ])
spark = SparkSession.builder.config(conf = conf).enableHiveSupport().getOrCreate()

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([7.0, 9.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]

from pyspark_iforest.ml.iforest import *

# Init an IForest Object
iforest = IForest(contamination=0.3, maxDepth=2)

# Assumed (the original post omits this step): build the DataFrame that is fitted below
df = spark.createDataFrame(data, ["features"])

# Fit on a given data frame
model = iforest.fit(df)

I am getting the following error, and I don't know how to deal with it.

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-8-9cac942af2f5> in <module>
      8 
      9 # Fit on a given data frame
---> 10 model = iforest.fit(df)

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/ml/base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/ml/wrapper.py in _fit(self, dataset)
    293 
    294     def _fit(self, dataset):
--> 295         java_model = self._fit_java(dataset)
    296         model = self._create_model(java_model)
    297         return self._copyValues(model)

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/ml/wrapper.py in _fit_java(self, dataset)
    289         :return: fitted Java model
    290         """
--> 291         self._transfer_params_to_java()
    292         return self._java_obj.fit(dataset._jdf)
    293 

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/ml/wrapper.py in _transfer_params_to_java(self)
    125                 self._java_obj.set(pair)
    126             if self.hasDefault(param):
--> 127                 pair = self._make_java_param_pair(param, self._defaultParamMap[param])
    128                 pair_defaults.append(pair)
    129         if len(pair_defaults) > 0:

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/ml/wrapper.py in _make_java_param_pair(self, param, value)
    111         sc = SparkContext._active_spark_context
    112         param = self._resolveParam(param)
--> 113         java_param = self._java_obj.getParam(param.name)
    114         java_value = _py2java(sc, value)
    115         return java_param.w(java_value)

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/anaconda3/envs/ansonzhou/lib/python3.7/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o145.getParam.
: java.util.NoSuchElementException: Param approxQuantileRelativeError does not exist.
	at org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:729)
	at org.apache.spark.ml.param.Params$$anonfun$getParam$2.apply(params.scala:729)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.ml.param.Params$class.getParam(params.scala:728)
	at org.apache.spark.ml.PipelineStage.getParam(Pipeline.scala:42)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)

featureIdx shuffling results in wrong featureIndex in the tree

I am testing with the following data
[1.0, 2.0, 2.5, 0.2,
2.3, 5.0, 0.75, 0.9,
1.3, 2.4, 1.9, 0.45,
10.3, 20.4, 10.9, 10.45]

and the following default parameters
IForest iForest = new IForest().setNumTrees(5)
.setMaxSamples(3)
.setContamination(0.3)
.setBootstrap(false)
.setMaxDepth(2)
.setSeed(123456L);

The trees are as follows:
tree[0]
featureIndex: 1
featureValue: 9.417743965315601
tree[1]
featureIndex: 0
featureValue: 4.977936221311794
tree[2]
featureIndex: 0
featureValue: 4.866908154888555
tree[3]
featureIndex: 1
featureValue: 9.391937564448492
tree[4]
featureIndex: 0
featureValue: 20.26467549071234

The final tree has a featureValue outside the range of values for featureIndex=0, i.e. outside 1.0 to 10.3.

I tracked the issue to line 553 in IForest.scala, where the feature indices are shuffled. This reordering seems to be lost afterwards, which later results in a wrong attrIndex: the attrIndex is based on the shuffled data, not the original one.
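
To make the expected invariant concrete, here is a toy sketch (not the library's code) of drawing a random split: whichever index is chosen, the split value is sampled between the min and max of that same column, so it can never fall outside the column's range the way tree[4] does above.

import scala.util.Random

// Toy sketch: pick a feature index, then sample the split value strictly from
// that column's observed range. Data is row-major: data(i)(j) = feature j of row i.
def randomSplit(data: Array[Array[Double]], rng: Random): (Int, Double) = {
  val attrIndex = rng.nextInt(data.head.length)
  val column = data.map(row => row(attrIndex))
  val (lo, hi) = (column.min, column.max)
  (attrIndex, lo + rng.nextDouble() * (hi - lo))
}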

How to use on Databricks.

Use the spark3 branch.

Step 1

Package spark-iforest jar and deploy it into spark lib

cd spark-iforest/

mvn clean package -DskipTests

cp target/spark-iforest-<version>.jar $SPARK_HOME/jars/

After generating this jar file, install it using the Databricks API (or the UI option to install a JAR).

Step 2.

Package pyspark-iforest and install it via pip; skip this step if you don't need the Python package.

cd spark-iforest/python

python setup.py sdist

pip install dist/pyspark-iforest-<version>.tar.gz

Then install the Python wrapper with the command above.
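
After both steps, one quick way to confirm the jar is actually attached to the cluster is to instantiate the estimator from a Scala notebook cell; if the import fails, the jar installation from Step 1 did not take effect. The parameter values below are arbitrary.

import org.apache.spark.ml.iforest.IForest

// Arbitrary example parameters; a successful import and construction means the
// spark-iforest classes are visible to this cluster.
val iforest = new IForest()
  .setNumTrees(100)
  .setContamination(0.1)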

Bug: incorrect position swap

Could you check this section of code? It seems incorrect.

// select randomly a feature index
attrIndex = constantFeatures(rng.nextInt(numFeatures - constantFeatureIndex) +
constantFeatureIndex)
val features = Array.tabulate(data.length)( i => data(i)(attrIndex))
attrMin = features.min
attrMax = features.max
if (attrMin == attrMax) {
// swap constant feature index with non-constant feature index
val tmp = constantFeatures(attrIndex)
constantFeatures(constantFeatureIndex) = tmp
constantFeatures(attrIndex) = constantFeatures(constantFeatureIndex)
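
For reference, a conventional in-place swap of the two slots would look like the sketch below (same variable names as the snippet above). This only illustrates the reporter's point that one of the two values is lost; it is not a maintainer-confirmed patch.

// Save one slot before it is overwritten; the quoted code instead writes tmp
// into constantFeatures(constantFeatureIndex) first and then copies it back,
// so both slots end up holding the original constantFeatures(attrIndex).
def swapConstantFeature[A](constantFeatures: Array[A],
                           attrIndex: Int,
                           constantFeatureIndex: Int): Unit = {
  val tmp = constantFeatures(attrIndex)
  constantFeatures(attrIndex) = constantFeatures(constantFeatureIndex)
  constantFeatures(constantFeatureIndex) = tmp
}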

The library doesn't work with Spark 3.0.0

Apparently the package doesn't work with Spark 3.0.0, as it depends on an older version of Hadoop and Spark, as pointed out in https://survival8.blogspot.com/p/isolation-forest-implementation-using.html.

The error I receive (An error occurred while calling None.org.apache.spark.ml.iforest.IForest.) after compiling the jar file and installing the Python version via pip can be reproduced by running:

from pyspark_iforest.ml.iforest import *
 IForest(contamination=0.3, maxDepth=2)

Is there any plan to update the library so that it also works with Spark 3?
