
REST job server for Apache Spark


spark-jobserver's Introduction


Join the chat at https://gitter.im/spark-jobserver/spark-jobserver

spark-jobserver provides a RESTful interface for submitting and managing Apache Spark jobs, jars, and job contexts. This repo contains the complete Spark job server project, including unit tests and deploy scripts. It was originally started at Ooyala, but this is now the main development repo.

Other useful links: Troubleshooting, cluster, YARN client, YARN on EMR, Mesos, JMX tips.

Also see Chinese docs / 中文.


Users

(Please add yourself to this list!)

Spark Job Server is included in Datastax Enterprise!

Features

  • "Spark as a Service": Simple REST interface (including HTTPS) for all aspects of job, context management
  • Support for Spark SQL, Hive, Streaming Contexts/jobs and custom job contexts! See Contexts.
  • Python, Scala, and Java (see TestJob.java) support
  • LDAP Auth support via Apache Shiro integration
  • Separate JVM per SparkContext for isolation (EXPERIMENTAL)
  • Supports sub-second low-latency jobs via long-running job contexts
  • Start and stop job contexts for RDD sharing and low-latency jobs; change resources on restart
  • Kill running jobs via stop context and delete job
  • Separate jar uploading step for faster job startup
  • Asynchronous and synchronous job API. Synchronous API is great for low latency jobs!
  • Works with standalone Spark clusters as well as Mesos, YARN (client mode), and EMR
  • Job and jar info is persisted via a pluggable DAO interface
  • Named Objects (such as RDDs or DataFrames) can be cached and retrieved by name, improving object sharing and reuse among jobs
  • Supports Scala 2.11 and 2.12
  • Support for supervise mode of Spark (EXPERIMENTAL)
  • Possible to be deployed in an HA setup of multiple jobservers (beta)

Version Information

Version Spark Version Scala Version
0.8.1 2.2.0 2.11
0.10.2 2.4.4 2.11
0.11.1 2.4.4 2.11, 2.12

For release notes, look in the notes/ directory.

Due to the sunset of Bintray all previous release binaries were deleted. Jobserver had to migrate to the JFrog Platform and only recent releases are available there. To use Spark Jobserver in your SBT project, please include the following resolver in your build.sbt file:

resolvers += "Artifactory" at "https://sparkjobserver.jfrog.io/artifactory/jobserver/"

For more information, check the section Creating a project manually assuming that you already have sbt project structure below.

If you need non-released jars, please visit Jitpack - they provide non-release jar builds for any Git repo. :)

Getting Started with Spark Job Server

The easiest way to get started is to try the Docker container which prepackages a Spark distribution with the job server and lets you start and deploy it.

Alternatives:

  • Build and run Job Server in local development mode within SBT. NOTE: This does NOT work for YARN, and in fact is only recommended with spark.master set to local[*]. Please deploy if you want to try with YARN or other real cluster.
  • Deploy job server to a cluster. There are two alternatives (see the deployment section):
    • server_deploy.sh deploys job server to a directory on a remote host.
    • server_package.sh deploys job server to a local directory, from which you can deploy the directory, or create a .tar.gz for Mesos or YARN deployment.
  • EC2 Deploy scripts - follow the instructions in EC2 to spin up a Spark cluster with job server and an example application.
  • EMR Deploy instructions - follow the instructions in EMR

NOTE: Spark Job Server can optionally run SparkContexts in their own, forked JVM process when the config option spark.jobserver.context-per-jvm is set to true. This option does not currently work for SBT/local dev mode. See Deployment section for more info.
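
For example, a minimal sketch of enabling this option in your job server config file (the exact file depends on your deployment, e.g. <environment>.conf):

spark {
  jobserver {
    context-per-jvm = true
  }
}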

Development mode

The example walk-through below shows you how to use the job server with an included example job, by running the job server in local development mode in SBT. This is not an example of usage in production.

You need to have SBT installed.

To set the current version, do something like this:

export VER=`sbt version | tail -1 | cut -f2`

From SBT shell, simply type "reStart". This uses a default configuration file. An optional argument is a path to an alternative config file. You can also specify JVM parameters after "---". Including all the options looks like this:

job-server-extras/reStart /path/to/my.conf --- -Xmx8g

Note that reStart (SBT Revolver) forks the job server in a separate process. If you make a code change, simply type reStart again at the SBT shell prompt; it will compile your changes and restart the job server, enabling very fast turnaround cycles.

NOTE2: You cannot do sbt reStart from the OS shell. SBT will start job server and immediately kill it.

For example jobs, see the job-server-tests/ project/folder.

When you use reStart, the log file goes to job-server/job-server-local.log. There is also an environment variable EXTRA_JAR for adding a jar to the classpath.
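
For example (a sketch; the jar path is purely illustrative), set the variable before launching SBT:

export EXTRA_JAR=/tmp/my-extra-lib.jar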

WordCountExample walk-through

Package Jar - Send to Server

First, to package the test jar containing the WordCountExample: sbt job-server-tests/package. Then go ahead and start the job server using the instructions above.

Let's upload the jar:

curl -X POST localhost:8090/binaries/test -H "Content-Type: application/java-archive" --data-binary @job-server-tests/target/scala-2.12/job-server-tests_2.12-$VER.jar
OK⏎

Ad-hoc Mode - Single, Unrelated Jobs (Transient Context)

The above jar is uploaded as app test. Next, let's start an ad-hoc word count job, meaning that the job server will create its own SparkContext, and return a job ID for subsequent querying:

curl -d "input.string = a b c a b see" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample"
{
  "duration": "Job not done yet",
  "classPath": "spark.jobserver.WordCountExample",
  "startTime": "2016-06-19T16:27:12.196+05:30",
  "context": "b7ea0eb5-spark.jobserver.WordCountExample",
  "status": "STARTED",
  "jobId": "5453779a-f004-45fc-a11d-a39dae0f9bf4"
}⏎

NOTE: If you want to feed in a text file config and POST using curl, you want the --data-binary option, otherwise curl will munge your line separator chars. Like:

curl --data-binary @my-job-config.json "localhost:8090/jobs?appName=..."

NOTE2: If you want to send in UTF-8 chars, make sure you pass in a proper header to CURL for the encoding, otherwise it may assume an encoding which is not what you expect.
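
For instance, one possibility is to state the charset explicitly in the Content-Type header (a sketch; the exact media type depends on your payload):

curl -H "Content-Type: text/plain; charset=UTF-8" --data-binary @my-job-config.json "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample"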

From this point, you could asynchronously query the status and results:

curl localhost:8090/jobs/5453779a-f004-45fc-a11d-a39dae0f9bf4
{
  "duration": "6.341 secs",
  "classPath": "spark.jobserver.WordCountExample",
  "startTime": "2015-10-16T03:17:03.127Z",
  "context": "b7ea0eb5-spark.jobserver.WordCountExample",
  "result": {
    "a": 2,
    "b": 2,
    "c": 1,
    "see": 1
  },
  "status": "FINISHED",
  "jobId": "5453779a-f004-45fc-a11d-a39dae0f9bf4"
}⏎

Note that you could append &sync=true when you POST to /jobs to get the results back in one request, but for real clusters and most jobs this may be too slow.

You can also append &timeout=XX to extend the request timeout for sync=true requests.

Persistent Context Mode - Faster & Required for Related Jobs

Another way of running this job is in a pre-created context. Start a new context:

curl -d "" "localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m"
OK⏎

You can verify that the context has been created:

curl localhost:8090/contexts
["test-context"]⏎

Now let's run the job in the context and get the results back right away:

curl -d "input.string = a b c a b see" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true"
{
  "result": {
    "a": 2,
    "b": 2,
    "c": 1,
    "see": 1
  }
}⏎

Note the addition of context= and sync=true.

Debug mode

Spark job server is started using SBT Revolver (which forks a new JVM), so debugging directly in an IDE is not feasible. To enable debugging, the Spark job server should be started from the SBT shell with the following Java options:

job-server-extras/reStart /absolute/path/to/your/dev.conf --- -Xdebug -Xrunjdwp:transport=dt_socket,address=15000,server=y,suspend=y

The above command starts a remote debugging server on port 15000. The Spark job server is not started until a debugging client (IntelliJ, Eclipse, telnet, ...) connects to the exposed port.

In your IDE you just have to start a remote debugging session against the port defined above. Once the client connects to the debugging server, the Spark job server starts and you can add breakpoints and debug requests.

Note that you might need to adjust some server parameters to avoid short Spray/Akka/Spark timeouts; add the following values to your dev.conf:

spark {
  jobserver {
    # Dev debug timeouts
    context-creation-timeout = 1000000 s
    yarn-context-creation-timeout = 1000000 s
    default-sync-timeout = 1000000 s
  }

  context-settings {
    # Dev debug timeout
    context-init-timeout = 1000000 s
  }
}
akka.http.server {
      # Debug timeouts
      idle-timeout = infinite
      request-timeout = infinite
}

Additionally, you might have to increase the Akka timeouts by adding the query parameter timeout=1000000 to your HTTP requests:

curl -d "input.string = a b c a b see" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&sync=true&timeout=100000"

Create a Job Server Project

Creating a project from scratch using giter8 template

There is a giter8 template available at https://github.com/spark-jobserver/spark-jobserver.g8

$ sbt new spark-jobserver/spark-jobserver.g8

Answer the questions to generate a project structure. The generated project contains a word count example Spark job using both the old API and the new one.

$ cd /path/to/project/directory
$ sbt package

Now you can remove the example application and start adding your own.

Creating a project manually assuming that you already have sbt project structure

In your build.sbt, add this to use the job server jar:

    resolvers += "Artifactory" at "https://sparkjobserver.jfrog.io/artifactory/jobserver/"

    libraryDependencies += "spark.jobserver" %% "job-server-api" % "0.11.1" % "provided"

If a SQL or Hive job/context is desired, you also want to pull in job-server-extras:

libraryDependencies += "spark.jobserver" %% "job-server-extras" % "0.11.1" % "provided"

For most use cases it's better to have the dependencies be "provided" because you don't want SBT assembly to include the whole job server jar.

To create a job that can be submitted through the job server, the job must implement the SparkJob trait. Your job will look like:

object SampleJob extends SparkJob {
    override def runJob(sc: SparkContext, jobConfig: Config): Any = ???
    override def validate(sc: SparkContext, config: Config): SparkJobValidation = ???
}
  • runJob contains the implementation of the Job. The SparkContext is managed by the JobServer and will be provided to the job through this method. This relieves the developer from the boiler-plate configuration management that comes with the creation of a Spark job and allows the Job Server to manage and re-use contexts.
  • validate allows for an initial validation of the context and any provided configuration. If the context and configuration are OK to run the job, returning spark.jobserver.SparkJobValid lets the job execute; otherwise returning spark.jobserver.SparkJobInvalid(reason) prevents the job from running and provides a means to convey the reason of failure. In this case, the call immediately returns an HTTP/1.1 400 Bad Request status code. validate helps you prevent running jobs that would eventually fail due to missing or wrong configuration, saving both time and resources. A minimal concrete sketch follows below.
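
For illustration, here is a minimal sketch of a concrete job against this (old) API, modeled on the word count walk-through above (the object name and the input.string config key come from that example; exact imports may vary by job server version):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import scala.util.Try
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

object WordCountSampleJob extends SparkJob {
  // Reject the job early if the expected config key is missing.
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    Try(config.getString("input.string"))
      .map(_ => SparkJobValid)
      .getOrElse(SparkJobInvalid("No input.string config param"))

  // Count word occurrences in the provided input string; the returned Map is serialized to JSON.
  override def runJob(sc: SparkContext, config: Config): Any =
    sc.parallelize(config.getString("input.string").split(" ").toSeq).countByValue()
}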

NEW SparkJob API

Note: As of version 0.7.0, a new SparkJob API that is significantly better than the old SparkJob API will take over. Existing jobs should continue to compile against the old spark.jobserver.SparkJob API, but this will be deprecated in the future. Note that jobs built before 0.7.0 will need to be recompiled; older jobs may not work with the current SJS examples. The new API looks like this:

object WordCountExampleNewApi extends NewSparkJob {
  type JobData = Seq[String]
  type JobOutput = collection.Map[String, Long]

  def runJob(sc: SparkContext, runtime: JobEnvironment, data: JobData): JobOutput =
    sc.parallelize(data).countByValue

  def validate(sc: SparkContext, runtime: JobEnvironment, config: Config):
    JobData Or Every[ValidationProblem] = {
    Try(config.getString("input.string").split(" ").toSeq)
      .map(words => Good(words))
      .getOrElse(Bad(One(SingleProblem("No input.string param"))))
  }
}

It is much more type safe, separates context configuration, job ID, named objects, and other environment variables into a separate JobEnvironment input, and allows the validation method to return specific data for the runJob method. See the WordCountExample and LongPiJob for examples.

Let's try running our sample job with an invalid configuration:

curl -i -d "bad.input=abc" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample"
HTTP/1.1 400 Bad Request
Server: spray-can/1.3.4
Date: Thu, 14 Sep 2017 12:01:37 GMT
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=UTF-8
Content-Length: 738

{
  "status": "VALIDATION FAILED",
  "result": {
    "message": "One(SparkJobInvalid(No input.string config param))",
    "errorClass": "java.lang.Throwable",
    "stack": "java.lang.Throwable: One(SparkJobInvalid(No input.string config param))\n\tat spark.jobserver.JobManagerActor$$anonfun$getJobFuture$4.apply(JobManagerActor.scala:327)\n\tat scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)\n\tat scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:748)\n"
  }
}

NEW SparkJob API with Spark v2.1

When deploying Spark JobServer with a Spark v2.x cluster, you can create a SparkSession context, which enables Spark SQL and Hive support:

curl -i -d "" 'http://localhost:8090/contexts/sql-context-1?num-cpu-cores=2&memory-per-node=512M&context-factory=spark.jobserver.context.SessionContextFactory'

To use spark.jobserver.context.SessionContextFactory, the Spark JobServer application should extend SparkSessionJob. Here is an example:

import com.typesafe.config.Config
import org.apache.spark.sql.SparkSession
import org.scalactic._
import spark.jobserver.SparkSessionJob
import spark.jobserver.api.{JobEnvironment, SingleProblem, ValidationProblem}

import scala.util.Try

object WordCountExampleSparkSession extends SparkSessionJob {
  type JobData = Seq[String]
  type JobOutput = collection.Map[String, Long]

  override def runJob(sparkSession: SparkSession, runtime: JobEnvironment, data: JobData): JobOutput =
    sparkSession.sparkContext.parallelize(data).countByValue

  override def validate(sparkSession: SparkSession, runtime: JobEnvironment, config: Config):
  JobData Or Every[ValidationProblem] = {
    Try(config.getString("input.string").split(" ").toSeq)
      .map(words => Good(words))
      .getOrElse(Bad(One(SingleProblem("No input.string param"))))
  }
}

Dependency jars

For Java/Scala applications you have a couple of options to package and upload dependency jars.

  • The easiest is to use something like sbt-assembly to produce a fat jar. Be sure to mark the Spark and job-server dependencies as "provided" so it won't blow up the jar size. This works well if the number of dependencies is not large.
  • When the dependencies are sizeable and/or you don't want to load them with every different job, you can package the dependencies separately and use one of several options:
    • Use the dependent-jar-uris context configuration param. Then the jar gets loaded for every job.
    • dependent-jar-uris can also be used as a job configuration param when submitting a job. On an ad-hoc context this has the same effect as the dependent-jar-uris context configuration param. On a persistent context the jars will be loaded for the current job and then for every subsequent job executed on that context.
      curl -d "" "localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m"
      OK⏎
      
      curl "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true" -d '{
          dependent-jar-uris = ["file:///myjars/deps01.jar", "file:///myjars/deps02.jar"],
          input.string = "a b c a b see"
      }'
      
      The jars /myjars/deps01.jar & /myjars/deps02.jar (present only on the SJS node) will be loaded and made available to the Spark driver & executors. Please note that only the file, local, ftp and http protocols will work (URIs are added to the standard Java class loader). Recent changes also allow using the names of binaries that were uploaded to the Jobserver.
    • Use the --packages option with Maven coordinates with server_start.sh.
    • Recent changes also allow you to use new parameters for the POST /jobs request:
      POST /jobs?cp=someURI,binName1,binName2&mainClass=some.main.Class
      
      cp accepts a list of binary names (under which you uploaded binaries to the Jobserver) and URIs; mainClass is the main class of your application. The main advantage of this approach over dependent-jar-uris is that you don't need to specify which jar is the main one and can simply send all needed jars in one list. A hedged curl sketch follows below.
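
      For example, a sketch of such a request, reusing the test binary uploaded in the walk-through above (the class name is taken from that example):

      curl -d "input.string = a b c a b see" "localhost:8090/jobs?cp=test&mainClass=spark.jobserver.WordCountExample&sync=true"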

Named Objects

Using Named RDDs

Initially, the job server only supported Named RDDs. For backwards compatibility and convenience, the following is still supported even though it is now possible to use the more generic Named Object support described in the next section.

Named RDDs are a way to easily share RDDs among jobs. Using this facility, computed RDDs can be cached with a given name and later on retrieved. To use this feature, the SparkJob needs to mix in NamedRddSupport:

object SampleNamedRDDJob  extends SparkJob with NamedRddSupport {
    override def runJob(sc:SparkContext, jobConfig: Config): Any = ???
    override def validate(sc:SparkContext, config: Config): SparkJobValidation = ???
}

Then in the implementation of the job, RDDs can be stored with a given name:

this.namedRdds.update("french_dictionary", frenchDictionaryRDD)

Other jobs running in the same context can retrieve and use this RDD later on:

val rdd = this.namedRdds.get[(String, String)]("french_dictionary").get

(Note the explicit type provided to get; it allows casting the retrieved RDD, which otherwise is of type RDD[_].)

For jobs that depend on a named RDD, it's good practice to check for the existence of the NamedRDD in the validate method as explained earlier:

def validate(sc:SparkContext, config: Config): SparkJobValidation = {
  ...
  val rdd = this.namedRdds.get[(Long, scala.Seq[String])]("dictionary")
  if (rdd.isDefined) SparkJobValid else SparkJobInvalid(s"Missing named RDD [dictionary]")
}

Using Named Objects

Named Objects are a way to easily share RDDs, DataFrames or other objects among jobs. Using this facility, computed objects can be cached with a given name and later on retrieved. To use this feature, the SparkJob needs to mix in NamedObjectSupport. It is also necessary to define implicit persisters for each desired type of named object. For convenience, we have provided implementations for RDD persistence and for DataFrame persistence (defined in job-server-extras):

object SampleNamedObjectJob  extends SparkJob with NamedObjectSupport {

  implicit def rddPersister[T] : NamedObjectPersister[NamedRDD[T]] = new RDDPersister[T]
  implicit val dataFramePersister = new DataFramePersister

    override def runJob(sc:SparkContext, jobConfig: Config): Any = ???
    override def validate(sc:SparkContext, config: Config): SparkJobValidation = ???
}

Then in the implementation of the job, RDDs can be stored with a given name:

this.namedObjects.update("rdd:french_dictionary", NamedRDD(frenchDictionaryRDD, forceComputation = false, storageLevel = StorageLevel.NONE))

DataFrames can be stored like so:

this.namedObjects.update("df:some df", NamedDataFrame(frenchDictionaryDF, forceComputation = false, storageLevel = StorageLevel.NONE))

It is advisable to use different name prefixes for different types of objects to avoid confusion.

Another job running in the same context can retrieve and use these objects later on:

val NamedRDD(frenchDictionaryRDD, _ ,_) = namedObjects.get[NamedRDD[(String, String)]]("rdd:french_dictionary").get

val NamedDataFrame(frenchDictionaryDF, _, _) = namedObjects.get[NamedDataFrame]("df:some df").get

(Note the explicit type provided to get; it allows casting the retrieved RDD/DataFrame object to the proper result type.)

For jobs that depend on named objects, it's good practice to check for the existence of the NamedObject in the validate method as explained earlier:

def validate(sc:SparkContext, config: Config): SparkJobValidation = {
  ...
  val obj = this.namedObjects.get("dictionary")
  if (obj.isDefined) SparkJobValid else SparkJobInvalid(s"Missing named object [dictionary]")
}

HTTPS / SSL Configuration

Server authentication

To activate server authentication and ssl communication, set these flags in your application.conf file (Section 'akka.http.server'):

  ssl-encryption = on
  # absolute path to keystore file
  keystore = "/some/path/sjs.jks"
  keystorePW = "changeit"

You will need a keystore that contains the server certificate. The bare minimum is achieved with this command which creates a self-signed certificate:

 keytool -genkey -keyalg RSA -alias jobserver -keystore ~/sjs.jks -storepass changeit -validity 360 -keysize 2048

You may place the keystore anywhere. Here is an example of a simple curl command that utilizes ssl:

curl -k https://localhost:8090/contexts

The -k flag tells curl to "Allow connections to SSL sites without certs". Export your server certificate and import it into the client's truststore to fully utilize ssl security.
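
One way to do that with keytool could look like this (a sketch; the alias, keystore and password follow the example above, while the certificate and truststore file names are illustrative):

keytool -export -alias jobserver -keystore ~/sjs.jks -storepass changeit -file jobserver.cer
keytool -import -alias jobserver -file jobserver.cer -keystore ~/client-truststore.jks -storepass changeit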

Client authentication

Client authentication can be enabled by simply pointing Job Server to a valid Trust Store. As for server authentication, this is done by setting appropriate values in the application.conf. The minimum set of parameters to enable client authentication consists of:

  truststore = "/some/path/server-truststore.jks"
  truststorePW = "changeit"

Note, client authentication implies server authentication, therefore client authentication will only be enabled once server authentication is activated.

Access Control

By default, access to the Job Server is not limited. Basic authentication (username and password) support is provided via the Apache Shiro framework or Keycloak. Both authentication frameworks have to be explicitly activated in the configuration file.

After the configuration, you can provide credentials via basic auth. Here is an example of a simple curl command that authenticates a user and uses ssl (you may want to use -H to hide the credentials, this is just a simple example to get you started):

curl -k --basic --user 'user:pw' https://localhost:8090/contexts

Shiro Authentication

The Shiro Authenticator can be activated in the configuration file by changing the authentication provider and providing a shiro configuration file.

access-control {
  provider = spark.jobserver.auth.ShiroAccessControl

  # absolute path to shiro config file, including file name
  shiro.config.path = "/some/path/shiro.ini"
}

Shiro-specific configuration options should be placed into the file specified by the config option 'shiro.config.path' (e.g. a 'shiro.ini' file as shown above). Here is an example that configures LDAP with user group verification:

# use this for basic ldap authorization, without group checking
# activeDirectoryRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
# use this for checking group membership of users based on the 'member' attribute of the groups:
activeDirectoryRealm = spark.jobserver.auth.LdapGroupRealm
# search base for ldap groups (only relevant for LdapGroupRealm):
activeDirectoryRealm.contextFactory.environment[ldap.searchBase] = dc=xxx,dc=org
# allowed groups (only relevant for LdapGroupRealm):
activeDirectoryRealm.contextFactory.environment[ldap.allowedGroups] = "cn=group1,ou=groups", "cn=group2,ou=groups"
activeDirectoryRealm.contextFactory.environment[java.naming.security.credentials] = password
activeDirectoryRealm.contextFactory.url = ldap://localhost:389
activeDirectoryRealm.userDnTemplate = cn={0},ou=people,dc=xxx,dc=org

cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager

securityManager.cacheManager = $cacheManager

Make sure to edit the url, credentials, userDnTemplate, ldap.allowedGroups and ldap.searchBase settings in accordance with your local setup.

The Shiro authenticator is able to perform fine-grained user authorization. Permissions are extracted from the provided user roles. Each role that matches a known permission is added to the authenticated user. Unknown roles are ignored. For a list of available permissions see Permissions.

Keycloak Authentication

The Keycloak Authenticator can be activated in the configuration file by changing the authentication provider and providing a keycloak configuration.

access-control {
  provider = spark.jobserver.auth.KeycloakAccessControl

  keycloak {
    authServerUrl = "https://example.com"
    realmName = "master"
    client = "client"
    clientSecret = "secret"
  }
}
The keycloak settings are:

  • authServerUrl - The URL to reach the keycloak instance (mandatory)
  • realmName - The realm to authenticate against (mandatory)
  • client - The client to authenticate against (mandatory)
  • clientSecret - An according client secret, if it exists (optional)

For better performance, authentication requests against Keycloak can be cached locally.

The Keycloak authenticator is able to perform fine-grained user authorization. Permissions are extracted from the provided client's roles. Each client role that matches a known permission is added to the authenticated user. Unknown client roles are ignored. For a list of available permissions see Permissions.

Important: If no client role matches a permission, the user is assigned the ALLOW_ALL role.

User Authorization

Spark job server implements a basic authorization management system to control access to single resources. By default, users always have access to all resources (ALLOW_ALL). Authorization is implemented by checking the permissions of a user with the required permissions of an endpoint. For a detailed list of all available permissions see Permissions.

Deployment

See also running on cluster, YARN client, on EMR and running on Mesos.

Manual steps

  1. Copy config/local.sh.template to <environment>.sh and edit as appropriate. NOTE: be sure to set SPARK_VERSION if you need to compile against a different version.
  2. Copy config/shiro.ini.template to shiro.ini and edit as appropriate. NOTE: only required when access-control.provider = spark.jobserver.auth.ShiroAccessControl
  3. Copy config/local.conf.template to <environment>.conf and edit as appropriate.
  4. bin/server_deploy.sh <environment> -- this packages the job server along with config files and pushes it to the remotes you have configured in <environment>.sh
  5. On the remote server, start it in the deployed directory with server_start.sh and stop it with server_stop.sh

The server_start.sh script uses spark-submit under the hood and may be passed any of the standard extra arguments from spark-submit.

NOTE: Under the hood, the deploy scripts generate an assembly jar from the job-server-extras project. Generating assemblies from other projects may not include all the necessary components for job execution.

Context per JVM

Each context can be a separate process launched using SparkLauncher, if context-per-jvm is set to true. This can be especially desirable when you want to run many contexts at once, or for certain types of contexts such as StreamingContexts which really need their own processes.

Also, the extra processes talk to the master HTTP process via random ports using the Akka Cluster gossip protocol. If for some reason the separate processes cause issues, set spark.jobserver.context-per-jvm to false, which will cause the job server to use a single JVM for all contexts.

Among the known issues:

  • Launched contexts do not shut down by themselves. You need to manually kill each separate process, or do -X DELETE /contexts/<context-name>

Log files are separated out for each context (assuming context-per-jvm is true) in their own subdirs under the LOG_DIR configured in settings.sh in the deployed directory.

Note: to test out the deploy to a local staging dir, or package the job server for Mesos, use bin/server_package.sh <environment>.

Configuring Spark Jobserver backend

Please visit setting up dao documentation page. Currently supported backend options:

  • H2
  • MySQL
  • PostgreSQL
  • HDFS for binaries with Zookeeper for metadata
  • HDFS for binaries with H2/MySQL/PostgreSQL for metadata

HA Deployment (beta)

It is possible to run multiple Spark Jobservers in a highly available setup. For documentation of such a setup, refer to the Jobserver HA documentation.

Chef

There is also a Chef cookbook which can be used to deploy Spark Jobserver.

Architecture

The job server is intended to be run as one or more independent processes, separate from the Spark cluster (though it very well may be co-located with, say, the master).

At first glance, it seems many of these functions (e.g. job management) could be integrated into the Spark standalone master. While this is true, we believe there are many significant reasons to keep it separate:

  • We want the job server to work for Mesos and YARN as well
  • Spark and Mesos masters are organized around "applications" or contexts, but the job server supports running many discrete "jobs" inside a single context
  • We want it to support Shark functionality in the future
  • Loose coupling allows for flexible HA arrangements (multiple job servers targeting same standalone master, or possibly multiple Spark clusters per job server)

Flow diagrams are checked into the doc/ subdirectory. .diagram files are for websequencediagrams.com; check them out, they really will help you understand the flow of messages between actors.

API

A comprehensive (manually created) swagger specification of the spark jobserver WebApi can be found here.

Binaries

GET /binaries               - lists all current binaries
GET /binaries/<appName>     - gets info about the last binary uploaded under this name (app-name, binary-type, upload-time)
POST /binaries/<appName>    - upload a new binary file
DELETE /binaries/<appName>  - delete defined binary

When POSTing new binaries, the content-type header must be set to one of the types supported by the subclasses of the BinaryType trait. e.g. "application/java-archive", "application/python-egg" or "application/python-wheel". If you are using curl command, then you must pass for example "-H 'Content-Type: application/python-wheel'".
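
For example, a sketch of uploading a Python egg (the binary name and file are illustrative):

curl -X POST localhost:8090/binaries/my_py_job -H "Content-Type: application/python-egg" --data-binary @my_py_job.egg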

Contexts

GET /contexts               - lists all current contexts
GET /contexts/<name>        - gets info about a context, such as the spark UI url
POST /contexts/<name>       - creates a new context
DELETE /contexts/<name>     - stops a context and all jobs running in it. Additionally, you can pass ?force=true to stop a context forcefully. This is equivalent to killing the application from SparkUI (works for spark standalone only).
PUT /contexts?reset=reboot  - shuts down all contexts and re-loads only the contexts from config. Use ?sync=false to execute asynchronously.

Spark context configuration params can follow POST /contexts/<name> as query params. See section below for more details.

Jobs

Jobs submitted to the job server must implement a SparkJob trait. It has a main runJob method which is passed a SparkContext and a typesafe Config object. Results returned by the method are made available through the REST API.

GET /jobs                - Lists the last N jobs
POST /jobs               - Starts a new job, use ?sync=true to wait for results
GET /jobs/<jobId>        - Gets the result or status of a specific job
DELETE /jobs/<jobId>     - Kills the specified job
GET /jobs/<jobId>/config - Gets the job configuration

For additional information on POST /jobs check out submitting jobs documentation.

For details on the Typesafe config format used for input (JSON also works), see the Typesafe Config docs.

Data

It is sometimes necessary to programmatically upload files to the server. Use these paths to manage such files:

GET /data                - Lists previously uploaded files that were not yet deleted
POST /data/<prefix>      - Uploads a new file, the full path of the file on the server is returned, the
                           prefix is the prefix of the actual filename used on the server (a timestamp is
                           added to ensure uniqueness)
DELETE /data/<filename>  - Deletes the specified file (only if under control of the JobServer)
PUT /data?reset=reboot   - Deletes all uploaded files. Use ?sync=false to execute asynchronously.

These files are uploaded to the server and are stored in a local temporary directory where the JobServer runs. The POST command returns the full pathname and filename of the uploaded file so that later jobs can work with this just the same as with any other server-local file. A job could therefore add this file to HDFS or distribute it to worker nodes via the SparkContext.addFile command. For files that are larger than a few hundred MB, it is recommended to manually upload these files to the server or to directly add them to your HDFS.
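
As an illustration, here is a minimal Scala sketch of picking such a file up inside a job, where serverLocalPath stands for the full path returned by POST /data:

// Ship the server-local file to the executors via the SparkContext.
sc.addFile(serverLocalPath)
// On the executors, resolve the local copy of the file by its name.
val localPath = org.apache.spark.SparkFiles.get(new java.io.File(serverLocalPath).getName)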

Data API Example

$ curl -d "Test data file api" http://localhost:8090/data/test_data_file_upload.txt
{
  "result": {
    "filename": "/tmp/spark-jobserver/upload/test_data_file_upload.txt-2016-07-04T09_09_57.928+05_30.dat"
  }
}

$ curl http://localhost:8090/data
["/tmp/spark-jobserver/upload/test_data_file_upload.txt-2016-07-04T09_09_57.928+05_30.dat"]

$ curl -X DELETE http://localhost:8090/data/%2Ftmp%2Fspark-jobserver%2Fupload%2Ftest_data_file_upload.txt-2016-07-04T09_09_57.928%2B05_30.dat
OK

$ curl http://localhost:8090/data
[]

Note: Both POST and DELETE requests take URI-encoded file names.

Context configuration

A number of context-specific settings can be controlled when creating a context (POST /contexts) or running an ad-hoc job (which creates a context on the spot), for example adding URLs of dependent jars for a context:

POST '/contexts/my-new-context?dependent-jar-uris=file:///some/path/of/my-foo-lib.jar'

NOTE: Only the latest dependent-jar-uris (btw it's jar-uris, not jars-uri) takes effect. You can specify multiple URIs by comma-separating them, like this:

&dependent-jar-uris=file:///path/a.jar,file:///path/b.jar

When creating a context via POST /contexts, the query params are used to override the default configuration in spark.context-settings. For example,

POST /contexts/my-new-context?num-cpu-cores=10

would override the default spark.context-settings.num-cpu-cores setting.

When starting a job, and the context= query param is not specified, then an ad-hoc context is created. Any settings specified in spark.context-settings will override the defaults in the job server config when it is started up.

Any Spark configuration param can be overridden either in POST /contexts query params, or through the spark.context-settings job configuration. In addition, num-cpu-cores maps to spark.cores.max, and memory-per-node maps to spark.executor.memory. Therefore the following are all equivalent:

POST /contexts/my-new-context?num-cpu-cores=10

POST /contexts/my-new-context?spark.cores.max=10

or in the job config when using POST /jobs,

spark.context-settings {
    spark.cores.max = 10
}

User impersonation for an already Kerberos authenticated user is supported via spark.proxy.user query param:

POST /contexts/my-new-context?spark.proxy.user=<user-to-impersonate>

However, whenever the flag access-control.shiro.use-as-proxy-user is set to on (and Shiro is used as provider) then this parameter is ignored and the name of the authenticated user is always used as the value of the spark.proxy.user parameter when creating contexts.

To pass settings directly to the sparkConf that do not use the "spark." prefix "as-is", use the "passthrough" section.

spark.context-settings {
    spark.cores.max = 10
    passthrough {
      some.custom.hadoop.config = "192.168.1.1"
    }
}

To add to the underlying Hadoop configuration in a Spark context, add the hadoop section to the context settings

spark.context-settings {
    hadoop {
        mapreduce.framework.name = "Foo"
    }
}

stop-context-on-job-error=true can be passed to a context if you want the context to stop immediately after the first error is reported by a job. The default value is false, but for StreamingContextFactory the default is true. You can also change the default value globally in the context-settings section of application.conf.
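
For instance, a sketch of changing it in the job server config, following the spark.context-settings convention shown above:

spark.context-settings {
    stop-context-on-job-error = true
}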

For the exact context configuration parameters, see JobManagerActor docs as well as application.conf.

Other configuration settings

For all of the Spark Job Server configuration settings, see job-server/src/main/resources/application.conf.

Job Result Serialization

The result returned by the SparkJob runJob method is serialized by the job server into JSON for routes that return the result (POST /jobs with sync=true, GET /jobs/<jobId>). Currently the following types can be serialized properly:

  • String, Int, Long, Double, Float, Boolean
  • Scala Maps with string keys (non-string keys may be converted to strings)
  • Scala Seqs
  • Arrays
  • Anything that implements Product (Option, case classes) -- they will be serialized as lists
  • Subclasses of java.util.List
  • Subclasses of java.util.Map with string key values (non-string keys may be converted to strings)
  • Maps, Seqs, Java Maps and Java Lists may contain nested values of any of the above
  • If a job result is of scala's Stream[Byte] type it will be serialised directly as a chunk encoded stream. This is useful if your job result payload is large and may cause a timeout serialising as objects. Beware, this will not currently work as desired with context-per-jvm=true configuration, since it would require serialising Stream[_] blob between processes. For now use Stream[_] job results in context-per-jvm=false configuration, pending potential future enhancements to support this in context-per-jvm=true mode.

If we encounter a data type that is not supported, then the entire result will be serialized to a string.
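
For instance, a sketch of a runJob return value that these rules can serialize (nested Maps, Seqs and primitives; the keys are illustrative):

override def runJob(sc: SparkContext, config: Config): Any =
  Map(
    "words"  -> Seq("a", "b", "see"),                  // Scala Seq
    "counts" -> Map("a" -> 2, "b" -> 2, "see" -> 1),   // Map with string keys
    "done"   -> true                                   // primitive
  )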

HTTP Override

Spark Job Server offers HTTP override functionality. Often reverse proxies and firewalls implement access limitations on, for example, DELETE and PUT requests. HTTP override allows overcoming these limitations by wrapping, for example, a DELETE request in a POST request.

Requesting the destruction of a context can be accomplished through HTTP override using the following syntax:

$ curl -X POST "localhost:8090/contexts/test_context?_method=DELETE"

Here, a DELETE request is passed to Spark Job Server "through" a POST request.

Clients

The Spark Jobserver project has a Python binding package. This can be used to quickly develop Python applications that interact with Spark Jobserver programmatically.

Contribution and Development

Contributions via Github Pull Request are welcome. Please start by taking a look at the contribution guidelines and check the TODO for some contribution ideas.

  • If you need to build with a specific scala version use ++x.xx.x followed by the regular command, for instance: sbt ++2.12.12 job-server/compile
  • From the "master" project, please run "test" to ensure nothing is broken.
    • You may need to set SPARK_LOCAL_IP to localhost to ensure Akka port can bind successfully
    • Note for Windows users: very few tests fail on Windows. Thus, run testOnly -- -l WindowsIgnore from SBT shell to ignore them.
  • Logging for tests goes to "job-server-test.log". To see test logging in console also, add the following to your log4j.properties (job-server/src/test/resources/log4j.properties)
log4j.rootLogger=INFO, LOGFILE, console

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=[%d] %-5p %.26c [%X{testName}] [%X{akkaSource}] - %m%n
  • Run sbt clean coverage test to check the code coverage and improve it. You can generate reports by running sbt coverageReport or sbt coverageAggregate for the full overview.
    • Windows users: run ; coverage ; testOnly -- -l WindowsIgnore ; coverageReport from SBT shell.
  • Please run scalastyle to ensure your code changes don't break the style guide.
  • Do "reStart" from SBT for quick restarts of the job server process
  • Please update the g8 template if you change the SparkJob API

Profiling software generously provided by

YourKit supports open source projects with its full-featured Java Profiler.

Contact

For user/dev questions, we use a Google group for discussions: https://groups.google.com/forum/#!forum/spark-jobserver

Please report bugs/problems to: https://github.com/spark-jobserver/spark-jobserver/issues

License

Apache 2.0, see LICENSE.md

TODO

  • More debugging for classpath issues

  • Add Swagger support. See the spray-swagger project.

  • Implement an interactive SQL window. See: spark-admin

  • Stream the current job progress via a Listener

  • Add routes to return stage info for a job. Persist it via DAO so that we can always retrieve stage / performance info even for historical jobs. This would be pretty kickass.


spark-jobserver's Issues

Unable to log debugging messages

I'd like to output messages using println from a running job for debugging, but I'm not sure how to do this. I tried using println within a job, and was hoping to see the output in job-server-local.log, but it is not appearing there. Is there a different API that should be used for printing messages to the log? Thank you.

HA Failover for Job Server

From what I can tell, the JobDAOs are for keeping the important mutable state. If a DAO was implemented to keep configs in Redis or PostgreSQL, would the job server be stateless enough to properly work with multiple instances of the jobserver backed by a central DB?

If so, then I could POST a jar to one instance and run the app from a separate instance. Correct?

I've noticed that the SQL DAO can take JDBC configuration in the ooyala repo, which seems promising.

JobErroredOut messages not delivered to subscribers

I have been testing my notification subscription branch and saw that JobErroredOut messages were not being received by the JobStatusActor and subsequently the subscribers of the event.

I tested the master without my modifications and found 3 dead letters.

[2014-10-08 10:13:00,555] INFO akka.actor.LocalActorRef [] [akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob/status-actor] - Message [spark.jobserver.CommonMessages$JobErroredOut] from Actor[akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob#-1723753377] to Actor[akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob/status-actor#1190504260] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

The one above is more important than the next two; I am not sure if the next two are cause for any concern, however the one above causes subscribers to not be notified. Before I dig into this, I wanted to make sure this is a valid issue.

Best,
Amit

The other two dead letters

---------------8<-------------------
[2014-10-08 10:13:00,555] INFO a.actor.DeadLetterActorRef [] [akka://JobServer/deadLetters] - Message [spark.jobserver.JobInfoActor$JobConfigStored$] from Actor[akka://JobServer/user/job-info#-1512739567] to Actor[akka://JobServer/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

---------------8<-------------------
[2014-10-08 10:13:00,556] INFO akka.actor.LocalActorRef [] [akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob/status-actor] - Message [spark.jobserver.CommonMessages$Unsubscribe] from Actor[akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob#-1723753377] to Actor[akka://JobServer/user/context-supervisor/bb071d29-spark.jobserver.MyErrorJob/status-actor#1190504260] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

custom SparkConf

Is it possible to pass in custom SparkConf values (i.e. not set globally across the entire job-server), such as within the job that extends the SparkJob trait, or in the POST /contexts?
The README.md implies that the POST /contexts always assumes spark.xxxx settings.
An example of a standalone Spark job might be set up as follows, with some custom Hadoop configs (like Cassandra Hadoop, Elasticsearch, etc.):

val conf = new SparkConf()
                  .setAppName("test")
                  .setMaster(sparkMaster)
conf.set("es.nodes", "10.1.1.1")
val sc = new SparkContext(conf)

or

val conf = new SparkConf().set("cassandra.connection.host", "localhost")

Spark Streaming context

I am trying to do the word count example on a Flume stream, so I tried to modify the existing word count example code. Compilation completes successfully in sbt, but when I try to run the jar on the job server using the Hue interface, it gives an error and the job server is shut down.

object WordCountExample extends SparkJob {
  def main(args: Array[String]) {
    val sc = new SparkContext("spark://bdvm01.ejada.com:7077", "WordCountExample")

    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    Try(config.getString("host"))
      .map(x => SparkJobValid)
      .getOrElse(SparkJobInvalid("No host/port config params"))
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val batchInterval = Milliseconds(2000)
    val ssc = new StreamingContext(sc, batchInterval)
    val stream = FlumeUtils.createStream(ssc, config.getString("host"), config.getString("port").toInt)
    val body = stream.map(e => new String(e.event.getBody.array))
    val counts = body.flatMap(line => line.toLowerCase.replaceAll("[^a-zA-Z0-9\\s]", "").split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFiles("/usr/local/output/output")

    ssc.start()
    ssc.awaitTermination()
  }
}

and that's the error:

job-server[ERROR] Uncaught error from thread [JobServer-akka.actor.default-dispatcher-4] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
job-server[ERROR] java.lang.NoClassDefFoundError: org/apache/spark/streaming/Milliseconds$
job-server[ERROR] at spark.jobserver.WordCountExample$.runJob(WordCountExample.scala:38)
job-server[ERROR] at spark.jobserver.WordCountExample$.runJob(WordCountExample.scala:23)
job-server[ERROR] at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:228)
job-server[ERROR] at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
job-server[ERROR] at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
job-server[ERROR] at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
job-server[ERROR] at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
job-server[ERROR] Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.Milliseconds$
job-server[ERROR] at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
job-server[ERROR] at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
job-server[ERROR] at java.security.AccessController.doPrivileged(Native Method)
job-server[ERROR] at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
job-server[ERROR] at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
job-server[ERROR] at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
job-server[ERROR] ... 11 more
job-server ... finished with exit code 255

So does that mean that the job server doesn't support streaming yet?

org.apache.spark.storage.BlockManagerId : local class incompatible

Hi,

does anyone see this issue? What causes it? How do I fix it?

[2015-02-16 20:21:49,875] ERROR Remoting [] [Remoting] - org.apache.spark.storage.BlockManagerId; local class incompatible: stream classdesc serialVersionUID = -7366074099953117729, local class serialVersionUID = 2439208141545036836
java.io.InvalidClassException: org.apache.spark.storage.BlockManagerId; local class incompatible: stream classdesc serialVersionUID = -7366074099953117729, local class serialVersionUID = 2439208141545036836
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:621)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at akka.serialization.JavaSerializer$$anonfun$1.apply(Serializer.scala:136)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at akka.serialization.JavaSerializer.fromBinary(Serializer.scala:136)
at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
at scala.util.Try$.apply(Try.scala:161)
at akka.serialization.Serialization.deserialize(Serialization.scala:98)
at akka.remote.MessageSerializer$.deserialize(MessageSerializer.scala:23)
at akka.remote.DefaultMessageDispatcher.payload$lzycompute$1(Endpoint.scala:58)
at akka.remote.DefaultMessageDispatcher.payload$1(Endpoint.scala:58)
at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:76)
at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:937)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Spark-jobserver running over Standalone HA cluster

Hi,

I have configured a Spark's Standalone cluster, using Zookeeper to provide HA. That is, I have 2 Spark master nodes connected to a Zookeeper quorum, one of them is ACTIVE and the other one is in STANDBY mode. So, if Node1 (active node) goes down, Node2 (standby node) becomes the active node. When Node 1 recovers, detects there is another active node (through Zookeeper) and stays in standby mode.

I am trying to connect Spark-Jobserver to this configuration, but I can't set the "spark.master" property in the conf file to point to the whole cluster (that is spark.master = "spark://node1:7077,node2:7077").
If I run this configuration, server_start fails:
ERROR ka.actor.OneForOneStrategy [] [akka://JobServer/user/IO-HTTP/host-connector-0] - Illegal URI host, unexpected character ':' at position 5: node1:7077,node2
akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
Caused by: spray.http.IllegalUriException: Illegal URI host, unexpected character ':' at position 5: node1:7077,node2
at spray.http.Uri$.fail(Uri.scala:775)

I have also tried to set it without spark's master port number (that is spark.master = "spark://node1,node2"), but I get:
akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
Caused by: java.lang.RuntimeException: Could not parse Master URL: 'node1,node2'
at spark.jobserver.SparkWebUiActor.getSparkHostName(SparkWebUiActor.scala:86)

Is there any way I can do that?

I have tried to set just one of the two masters (that is spark.master = "spark://node1:7077") and it works great, but if Node1 goes down, then spark-jobserver can't reassign itself to Node2 (the new active master).

Thanks in advance

Test are failing on master

I just cloned the repo and tried to build the assembly, resulting in 25 failed tests.

[error] Failed: Total 106, Failed 25, Errors 0, Passed 81
[error] Failed tests:
[error] spark.jobserver.JobManagerActorAdHocSpec
[error] spark.jobserver.JobManagerActorSpec
[error] spark.jobserver.LocalContextSupervisorSpec
[error] sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 202 s, completed Sep 15, 2014 5:29:18 PM

Server does not return valid JSON

Many responses from the server are returned with content-type application/json, but only contain OK (without quotes) as the body.

This makes many clients fail, as OK is not valid JSON. It would be enough to either

  • return "OK" with quotes, or
  • change the content-type to text/plain for those responses
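
A small illustration with spray-json (one of the server's own dependencies) of why the bare body trips up JSON clients:

// A quoted string parses as JSON; the bare token OK does not.
import spray.json._

JsonParser("\"OK\"")   // JsString("OK")
JsonParser("OK")       // throws a parsing exception: not valid JSON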

sbt not able to resolve dependencies related to Cassandra connector

Hi,

I am new to spark-jobserver and trying to write code which will connect to Cassandra from spark-jobserver. For that I am trying to include the dependencies for the spark-cassandra-connector.

In /apps/spark-jobserver-master/project/plugins.sbt I have included the plugin as follows:

addSbtPlugin("com.datastax.spark" % "spark-cassandra-connector-java_2.10" % "1.1.0" withSources() withJavadoc())

When I run the sbt command at /apps/spark-jobserver-master/ it gives me this error:


[info] Loading global plugins from /home/cloud/.sbt/0.13/plugins
[info] Updating {file:/home/cloud/.sbt/0.13/plugins/}global-plugins...
[info] Resolving com.datastax.cassandra#cassandra-driver-core;2.0.2 ...
[warn] module not found: com.datastax.cassandra#cassandra-driver-core;2.0.2
[warn] ==== typesafe-ivy-releases: tried
[warn] http://repo.typesafe.com/typesafe/ivy-releases/com.datastax.cassandra/cassandra-driver-core/scala_2.10/sbt_0.13/2.0.2/ivys/ivy.xml
[warn] ==== sbt-plugin-releases: tried
[warn] http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.datastax.cassandra/cassandra-driver-core/scala_2.10/sbt_0.13/2.0.2/ivys/ivy.xml
[warn] ==== local: tried
[warn] /home/cloud/.ivy2/local/com.datastax.cassandra/cassandra-driver-core/scala_2.10/sbt_0.13/2.0.2/ivys/ivy.xml
[warn] ==== public: tried
[warn] http://repo1.maven.org/maven2/com/datastax/cassandra/cassandra-driver-core_2.10_0.13/2.0.2/cassandra-driver-core-2.0.2.pom
[warn] ==== sbt-plugin-releases: tried
[warn] http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.datastax.cassandra/cassandra-driver-core/scala_2.10/sbt_0.13/2.0.2/ivys/ivy.xml
[warn] ==== bintray-sbt-plugin-releases: tried
[warn] http://dl.bintray.com/content/sbt/sbt-plugin-releases/com.datastax.cassandra/cassandra-driver-core/scala_2.10/sbt_0.13/2.0.2/ivys/ivy.xml
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: com.datastax.cassandra#cassandra-driver-core;2.0.2: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes.
[warn] com.datastax.cassandra:cassandra-driver-core:2.0.2 (sbtVersion=0.13, scalaVersion=2.10)
[warn]
sbt.ResolveException: unresolved dependency: com.datastax.cassandra#cassandra-driver-core;2.0.2: not found

[error] sbt.ResolveException: unresolved dependency: com.datastax.cassandra#cassandra-driver-core;2.0.2: not found
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? q


Am I missing something here? Do I need to update any other file for this?
Can someone help me solve this issue?

Thanks in advance.
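
A minimal sketch of the usual alternative, assuming the connector is wanted as a library dependency of the job code itself rather than as an sbt plugin (which is what project/plugins.sbt declares); the coordinates and version below are simply the ones from the log and may need adjusting:

// In the job project's build.sbt (not project/plugins.sbt):
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector-java" % "1.1.0"

addSbtPlugin is only for sbt build plugins, which is why sbt rewrites the coordinates with the sbt_0.13/scala_2.10 attributes seen in the warnings above and then fails to resolve them.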

Failure to read an HDFS text file and then saveAsTextFile from a spark-jobserver job, due to the HDFS _temporary folder not being found.

I am failing to execute a basic read-from-HDFS / process / write-to-HDFS Spark job from the Spark job server.

We are running the latest version of the job-server from the master branch with Spark 1.1, Mesos 0.20 and HDFS from CDH 5.1 (the same issue is visible with Spark 1.0.2).

The code of the process(..) method works successfully on Spark/Mesos when executed from the spark-shell, but fails when bundled in a SparkJob and run from the Spark job server.

class JobServerTest extends SparkJob {

  override def runJob(sparkContext: SparkContext, jobConfig: Config): Any =
    new Tool().process(sparkContext, jobConfig.getString("destinationFoler"))

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

}

class Tool extends Serializable{

  case class Data (part: Int, word1: String, amount1: Double, amount2: Double, letter1: String, word2: String) extends Serializable

  def process(sparkContext: SparkContext, destinationFolder: String) =
    sparkContext.textFile("hdfs://vm28-hulk-priv:8020/svend/test-jobserver/source")
      .map(_.split(","))
      .filter(_.length == 6)
      .map(raw => new Data(raw(0).toInt, raw(1), raw(2).toDouble, raw(3).toDouble, raw(4), raw(5)) )

      // some silly operations just to shuffle + process data
      .map (data => data.copy(amount1 = data.amount2 * Math.sqrt(data.amount1)))
      .groupBy(_.part)
      .flatMap { case (part, data) => data}
      .saveAsTextFile(destinationFolder)
}

On each Mesos slave node, the following stack trace is visible:

14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 16 non-empty blocks out of 16 blocks
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 6 ms
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 2 ms
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 1 ms
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 2 remote fetches in 15 ms
14/09/22 11:27:33 INFO SendingConnection: Initiating connection to [vm21-hulk-priv.mtl.mnubo.com/10.237.241.21:54104]
14/09/22 11:27:33 INFO SendingConnection: Connected to [vm21-hulk-priv.mtl.mnubo.com/10.237.241.21:54104], 2 messages pending
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 2 remote fetches in 12 ms
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 2 remote fetches in 16 ms
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 2 remote fetches in 16 ms
14/09/22 11:27:33 INFO ConnectionManager: Accepted connection from [vm21-hulk-priv.mtl.mnubo.com/10.237.241.21:32981]
14/09/22 11:27:33 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 2 remote fetches in 22 ms
14/09/22 11:27:33 ERROR Executor: Exception in task 12.0 in stage 0.0 (TID 28)
java.io.IOException: The temporary job-output directory file:/svend/test-jobserver/dest6/_temporary doesn't exist!
    at org.apache.hadoop.mapred.FileOutputCommitter.getWorkPath(FileOutputCommitter.java:250)
    at org.apache.hadoop.mapred.FileOutputFormat.getTaskOutputPath(FileOutputFormat.java:240)
    at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:116)
    at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:89)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:980)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:974)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
14/09/22 11:27:33 ERROR Executor: Exception in tas
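
Worth noting from the trace: the output committer is looking under file:/svend/test-jobserver/dest6/_temporary, i.e. on the local filesystem, so the destination path appears to have been resolved against the default filesystem instead of HDFS. A sketch of the same call with a fully qualified destination, reusing the namenode URI from the source path above:

// Same call as in runJob, but with an explicit hdfs:// destination so the output
// committer does not fall back to the local filesystem.
new Tool().process(sparkContext, "hdfs://vm28-hulk-priv:8020/svend/test-jobserver/dest6")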

The data read in that folder is just a bunch of random csv lines, generated as follows (from spark-shell):

class generator() {

    val letters = 'a' to 'z'        
    val rand = new scala.util.Random()

    def letter = letters(rand.nextInt(26))
    def word = 1 to rand.nextInt(15) map (_ => letter) mkString
    def amount = rand.nextGaussian * rand.nextInt(100)
    def part = rand.nextInt(10)

    def line = part :: word :: amount :: amount :: letter :: word :: Nil mkString(",")
    def lines = (1 to 500000).foldLeft(List[String]()) { case (acc, _) => line :: acc } 

}

sc.parallelize(1 to 16, 16).map (_ => new generator()).flatMap (_.lines).saveAsTextFile("/svend/test-jobserver/source")

Job Server Concurrent Requests

Hi, we are testing Spark Job Server version 0.4.1 in YARN mode with the FAIR job scheduler. Currently we have an issue with concurrent request testing. The problem is that the server can only run 6 identical concurrent requests at a time. It refuses to accept any more, with the error below:
{
"status": "NO SLOTS AVAILABLE",
"result": "Too many running jobs (6) for job context 'con'"
}
Each request just queries the same cached RDD in memory via SQLContext and prints the row count to the output. Could you please advise whether there are any restrictions on running many queries at the same time?

Question: How to run the Spark SQL example SqlTestJob?

Hi,
How do I run the example spark-jobserver/job-server-tests/src/spark.jobserver/SqlTestJob.scala? What's the right way to start a Spark SQL job? I got an error when I typed:

curl -d '{"sql":"select first from addresses"}' 'localhost:8090/jobs?
appName=sqltest01&classPath=spark.jobserver.SqlTestJob&sync=true':
{
"status": "ERROR",
"result": "Invalid job type for this context"
}

Please help me,
Thank you very much~~

Support yarn-cluster mode

Hi, there are some issues regarding concurrency and multi-tenancy with the jobserver & Spark:

  1. It is impossible to create multiple SparkContexts inside one JVM (SPARK-2243).
  2. There is no way to kill a particular job when submitting several jobs within a single SparkContext.

For YARN mode it is possible to submit a job remotely from code (http://blog.sequenceiq.com/blog/2014/08/22/spark-submit-in-java/) and to kill the job through the YARN API. How difficult would it be to implement such functionality inside spark-jobserver?

Support for named Models

We currently have support for Named RDDs, but MLlib returns Models (KMeansModel, MatrixFactorizationModel, GeneralizedLinearModel, etc.).

I think it would be useful to have a mechanism to cache those.
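
A purely hypothetical sketch of what such a mechanism could look like, mirroring the getOrElseCreate style of the existing Named RDDs support; the class and method names here are illustrative, not part of the jobserver API:

import scala.collection.mutable

// Hypothetical named-object cache for MLlib models (or any object), keyed by name.
class NamedModels {
  private val models = mutable.Map.empty[String, AnyRef]

  // Return the cached model for `name`, computing and caching it if absent.
  def getOrElseCreate[M <: AnyRef](name: String)(create: => M): M = synchronized {
    models.getOrElseUpdate(name, create).asInstanceOf[M]
  }

  def get[M <: AnyRef](name: String): Option[M] = synchronized {
    models.get(name).map(_.asInstanceOf[M])
  }

  def destroy(name: String): Unit = synchronized { models.remove(name) }

  def getNames: Seq[String] = synchronized { models.keys.toSeq }
}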

Server tested/running on 1.2.0

I notice the max version of Spark upon which the server claims to run is 1.1.0. Assuming development is still ongoing for this project, it should be tested and verified for 1.2.0 and the README updated

Spark Job Server with Java error

I'm using Spark with Java, and I want to use the Spark Job Server. For this I followed everything in this link: https://github.com/spark-jobserver/spark-jobserver

This is the Scala class in my project:

package fr.aid.cim.spark

import _root_.spark.jobserver.SparkJob
import _root_.spark.jobserver.SparkJobValid
import _root_.spark.jobserver.SparkJobValidation
import com.typesafe.config._
import fr.aid.cim.spark.JavaCount
import org.apache.spark._
import org.apache.spark.api.java.JavaSparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object JavaWord extends SparkJob {
  def main(args: Array[String]) {
    val ctx = new SparkContext("local[4]", "JavaWordCount")
    val config = ConfigFactory.parseString("")

    val results = runJob(ctx, config)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val jsc = new JavaSparkContext(sc)
    val j = new JavaCount()
    j.Mafonction(jsc)
  }
}
And this is the Java class "word count":

package fr.aid.cim.spark;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
import java.io.Serializable;
import java.util.Arrays;
import java.util.regex.Pattern;

public final class JavaCount implements Serializable {
    public static Object main(String[] args) throws Exception {
        return null;
    }

    public Object Mafonction(JavaSparkContext sc) {
        String s = "a a a a b b c a";
        JavaPairRDD<String, Integer> lines = sc.parallelize(Arrays.asList(s.split(" ")))
            .mapToPair(new PairFunction<String, String, Integer>() {
                @Override
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<String, Integer>(s, 1);
                }
            })
            .reduceByKey(new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer i1, Integer i2) {
                    return i1 + i2;
                }
            });
        return lines.collect();
    }
}
But when I execute it I get curl: (52) Empty reply from server, with this error in the spark-jobserver log:

job-server[ERROR] Uncaught error from thread [JobServer-akka.actor.default-dispatcher-13] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
job-server[ERROR] java.lang.IncompatibleClassChangeError: Implementing class
job-server[ERROR] at java.lang.ClassLoader.defineClass1(Native Method)
job-server[ERROR] at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
job-server[ERROR] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
job-server[ERROR] at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
job-server[ERROR] at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
job-server[ERROR] at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
job-server[ERROR] at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
job-server[ERROR] at java.security.AccessController.doPrivileged(Native Method)
job-server[ERROR] at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
job-server[ERROR] at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
job-server[ERROR] at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
job-server[ERROR] at fr.aid.cim.spark.JavaWord$.runJob(JavaWord.scala:31)
job-server[ERROR] at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:222)
job-server[ERROR] at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
job-server[ERROR] at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
job-server[ERROR] at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
job-server[ERROR] at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
job-server[ERROR] at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
job-server ... finished with exit code 255

Using Datastax spark-cassandra-connector with JobServer

Hi,

Still learning the jobserver; it is a great project, so thank you for open sourcing it.

I would like to use the spark-cassandra-connector from Datastax with the JobServer. I have used it outside of the JobServer before, but I need to be able to set some properties on the SparkContext. I would normally do something like this:

  /*
   * Before creating the `SparkContext`, set the `cassandra.connection.host` 
   * property to the address of one of the Cassandra nodes.
   */
  val conf = new SparkConf(true).set("spark.cassandra.connection.host", "127.0.0.1")

  /*
   * Set the port to connect to.  If using embedded instance set to 9142 else
   * default to 9042.
   */
  conf.set("spark.cassandra.connection.native.port", "9042")

And then do val sc = new SparkContext(sparkMasterHost, "demo", conf)

With the JobServer, as I understand it, the context is managed on my behalf and is provided to the job's runJob method via the SparkJob trait.

I know I can define the properties I need in $SPARK_HOME/conf/spark-defaults.conf, but that would bind all jobs to these settings. I'm also not clear whether, even if I define them there, the JobServer will pass them through in the context it provides.

Any guidance on how and where to place these for use with the JobServer would be appreciated. Is it simply a matter of adding them to a config and providing it to the JobServer when it is started, and if so, is there any particular format required?

TIA for the assistance.

-Todd

Support job cancellation via cancelJobGroup

Hi,
I notice you will support job cancellation via cancelJobGroup in the future. But in my application, when I use this feature, it doesn't work for me; it can't cancel the running jobs.
I want to know whether it's my usage mistake, or whether there is a bug.
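
For what it's worth, a minimal sketch of the underlying Spark API outside the jobserver; one common usage mistake is that setJobGroup is thread-local, so the group must be set in the same thread that submits the job (the group id and numbers below are arbitrary):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("cancel-demo"))

// The job group is a thread-local property: set it in the thread that runs the action.
val worker = new Thread(new Runnable {
  def run(): Unit = {
    sc.setJobGroup("demo-group", "long running job", interruptOnCancel = true)
    try sc.parallelize(1 to 1000000, 8).map { i => Thread.sleep(1); i }.count()
    catch { case e: Exception => println("job cancelled: " + e.getMessage) }
  }
})
worker.start()

Thread.sleep(2000)
sc.cancelJobGroup("demo-group")   // cancels every job submitted under this group id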

which is the primary github repo?

I was wondering if somebody could clear up some confusion: is this the new GitHub repo for the open source project,
or is it still ooyala/spark-jobserver?

I was going to fork from the master, and wanted to make sure I was working with the project that is still the most active.

Cannot find class SparkJobValidation in the job-server-tests-0.4.1.jar

Hi,

I am trying to directly submit the test job to the Spark cluster as below, but it failed.

$ bin/spark-submit --class "spark.jobserver.WordCountExample" --master "spark://127.0.0.1:7077" "/home/localadmin/test/job-server-tests-0.4.1.jar"

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Exception in thread "main" java.lang.NoClassDefFoundError: spark/jobserver/SparkJobValidation
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2693)
at java.lang.Class.privateGetMethodRecursive(Class.java:3040)
at java.lang.Class.getMethod0(Class.java:3010)
at java.lang.Class.getMethod(Class.java:1776)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:301)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: spark.jobserver.SparkJobValidation
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 8 more

Here is how I build the spark-jobserver code:

spark-jobserver-0.4.1$ sudo sbt package

This generates the jar files.

I could not run this:
$ bin/server_package.sh xxx

as I get test errors (too long, so not pasted here).

Why can it not be submitted to Spark? I thought all the dependencies would be packaged into the final jar file.

Thanks in advance.

Unable to specify SparkConf defaults.

I'm not sure if I've overlooked something, but I can't find a way to specify default configuration values for SparkConf other than via the context url. For example, I would like to specify "spark.driver.port=39393" as a default configuration value instead of having to provide it via "/contexts/name?spark.driver.port=39393". At first I thought I would be able to specify them in the jobserver conf file, but after looking at the code it doesn't seem to copy from there. Perhaps there should be a section of the jobserver configuration for setting default values? I can issue a pull request if needed, but I would like to hear everyone's thoughts on it.
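
To make the proposal concrete, a hypothetical sketch of what such a section could do; the config path "spark.context-defaults" is invented for illustration and is not an existing jobserver setting:

import com.typesafe.config.Config
import org.apache.spark.SparkConf
import scala.collection.JavaConverters._

// Hypothetical: copy every entry under spark.context-defaults into the SparkConf
// used to build a context, e.g. driver.port = 39393 becomes spark.driver.port=39393.
def applyContextDefaults(conf: SparkConf, config: Config): SparkConf = {
  val path = "spark.context-defaults"
  if (config.hasPath(path)) {
    for (entry <- config.getConfig(path).entrySet().asScala) {
      conf.set("spark." + entry.getKey, entry.getValue.unwrapped().toString)
    }
  }
  conf
}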

Deploy to a local staging dir failed

Hi,
After modifying the config/local.sh and config/local.conf files to use local settings, the deployment of spark-jobserver fails with the following error. Note that I am using:

  1. Mac OS Maverick
  2. Java 1.7.0
  3. Spark 1.2.0

$ ./server_start.sh

./server_start.sh: line 52: kill: (9343) - No such process
Spark assembly has been built with Hive, including Datanucleus jars on classpath
glide:job-server pdharma$ log4j:WARN No such property [datePattern] in org.apache.log4j.RollingFileAppender.
Exception in thread "main" java.lang.NoSuchMethodError: com.typesafe.config.Config.getDuration(Ljava/lang/String;Ljava/util/concurrent/TimeUnit;)J
at akka.util.Helpers$ConfigOps$.akka$util$Helpers$ConfigOps$$getDuration$extension(Helpers.scala:125)
at akka.util.Helpers$ConfigOps$.getMillisDuration$extension(Helpers.scala:120)
at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:171)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:504)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at spark.jobserver.JobServer$$anonfun$main$1.apply(JobServer.scala:68)
at spark.jobserver.JobServer$$anonfun$main$1.apply(JobServer.scala:68)
at spark.jobserver.JobServer$.start(JobServer.scala:47)
at spark.jobserver.JobServer$.main(JobServer.scala:68)
at spark.jobserver.JobServer.main(JobServer.scala)

Can't deploy jobserver on localhost

With Spark 1.1.1 and spark-jobserver 0.4.1, I'm facing test failures with the config/production.sh below:

 ## copied from local.sh.template                                                                                                                                 

# Environment and deploy file                                                                          
# For use with bin/server_deploy, bin/server_package etc.                                              
DEPLOY_HOSTS="localhost"                                                                               

APP_USER=spark                                                                                         
APP_GROUP=spark                                                                                        
# optional SSH Key to login to deploy server                                                           
#SSH_KEY=/path/to/keyfile.pem                                                                          
INSTALL_DIR=/home/spark/job-server                                                                     
LOG_DIR=/var/log/job-server                                                                            
PIDFILE=spark-jobserver.pid                                                                            
SPARK_HOME=/usr/local/spark-1.1.1                                                                      
SPARK_CONF_HOME=$SPARK_HOME/conf                                                                    
# Only needed for Mesos deploys                                                                     
SPARK_EXECUTOR_URI=/packup/repo.softwares/JVM/BigData/spark-1.1.1.tgz 
$ bin/server_deploy.sh production
Deploying job server to localhost...
[info] Loading project definition from /packup/workspace.programming/workspace.scala/spark-jobserver/project
Missing bintray credentials /home/prayagupd/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/prayagupd/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/prayagupd/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/packup/workspace.programming/workspace.scala/spark-jobserver/)
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobFileDAO.scala message=File line length exceeds 110 characters line=138
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=23 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=24 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=25 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=26 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=28 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=36 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=37 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=38 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=39 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=40 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=41 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=42 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=43 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=48 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=49 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/io/JobSqlDAO.scala message=Public method must have explicit type line=50 column=8
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/JobManagerActor.scala message=File line length exceeds 110 characters line=102
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server/src/spark.jobserver/JobManagerActor.scala message=Avoid using return line=186 column=6
Processed 22 file(s)
Found 0 errors
Found 19 warnings
Finished in 11 ms
[success] created: /packup/workspace.programming/workspace.scala/spark-jobserver/job-server/target/scalastyle-result.xml
Processed 5 file(s)
Found 0 errors
Found 0 warnings
Finished in 1 ms
[success] created: /packup/workspace.programming/workspace.scala/spark-jobserver/job-server-tests/target/scalastyle-result.xml
[warn] Credentials file /home/prayagupd/.bintray/.credentials does not exist
[warn] Credentials file /home/prayagupd/.bintray/.credentials does not exist
[info] Updating {file:/packup/workspace.programming/workspace.scala/spark-jobserver/}job-server-api...
[info] Updating {file:/packup/workspace.programming/workspace.scala/spark-jobserver/}akka-app...
[info] Resolving org.scala-lang#scala-library;2.10.3 ...
[info] Done updating.
[info] Resolving org.scala-lang#scala-library;2.10.3 ...
[info] Done updating.
[warn] Credentials file /home/prayagupd/.bintray/.credentials does not exist
[info] Updating {file:/packup/workspace.programming/workspace.scala/spark-jobserver/}job-server-tests...
[info] Resolving com.typesafe#config;1.0.0 ...
[info] Updating {file:/packup/workspace.programming/workspace.scala/spark-jobserver/}job-server...
[info] Resolving org.scala-lang#scala-library;2.10.3 ...
[info] Done updating.
[info] Resolving io.spray#spray-util;1.2.1 ...
warning file=/packup/workspace.programming/workspace.scala/spark-jobserver/job-server-api/src/spark.jobserver/SparkJob.scala message=Method name does not match the regular expression '^([+-/\\*]|[a-z][A-Za-z0-9]*)$' line=10 column=6
Processed 2 file(s)
Found 0 errors
Found 1 warnings
Finished in 1 ms
[success] created: /packup/workspace.programming/workspace.scala/spark-jobserver/job-server-api/target/scalastyle-result.xml
[info] Resolving com.h2database#h2;1.3.170 ...
[info] Compiling 5 Scala sources to /packup/workspace.programming/workspace.scala/spark-jobserver/job-server-tests/target/classes...
[info] Resolving org.scala-lang#scala-library;2.10.3 ...
[info] Done updating.
Processed 11 file(s)
Found 0 errors
Found 0 warnings
Finished in 0 ms
[success] created: /packup/workspace.programming/workspace.scala/spark-jobserver/akka-app/target/scalastyle-result.xml
[info] Compiling 6 Scala sources to /packup/workspace.programming/workspace.scala/spark-jobserver/job-server/target/classes...
[info] Packaging /packup/workspace.programming/workspace.scala/spark-jobserver/job-server-tests/target/job-server-tests-0.4.1.jar ...
[info] Done packaging.
[info] Compiling 4 Scala sources to /packup/workspace.programming/workspace.scala/spark-jobserver/job-server/target/classes...
[info] Compiling 4 Scala sources to /packup/workspace.programming/workspace.scala/spark-jobserver/job-server/target/test-classes...
[info] Including from cache: slf4j-api-1.7.2.jar
[info] Including from cache: spray-util-1.2.1.jar
[info] Including from cache: spray-httpx-1.2.1.jar
[info] Including from cache: netty-3.6.6.Final.jar
[info] Including from cache: joda-convert-1.2.jar
[info] Including from cache: spray-http-1.2.1.jar
[info] Including from cache: parboiled-scala_2.10-1.1.6.jar
[info] Including from cache: config-1.0.0.jar
[info] Including from cache: mimepull-1.9.4.jar
[info] Including from cache: spray-json_2.10-1.2.5.jar
[info] Including from cache: shapeless_2.10-1.2.4.jar
[info] Including from cache: joda-time-2.1.jar
[info] Including from cache: spray-client-1.2.1.jar
[info] Including from cache: parboiled-core-1.1.6.jar
[info] Including from cache: metrics-core-2.2.0.jar
[info] Including from cache: spray-can-1.2.1.jar
[info] Including from cache: spray-routing-1.2.1.jar
[info] Including from cache: spray-io-1.2.1.jar
[info] Including from cache: slick_2.10-2.0.2-RC1.jar
[info] Including from cache: h2-1.3.170.jar
[info] JobStatusActorSpec:
[info] JobStatusActor 
[info] - should return empty sequence if there is no job infos
[info] - should return error if non-existing job is unsubscribed
[info] - should not initialize a job more than two times
[info] - should be informed JobStarted until it is unsubscribed
[info] - should be ok to subscribe beofore job init
[info] - should be informed JobValidationFailed once
[info] - should be informed JobFinished until it is unsubscribed
[info] - should be informed JobErroredOut until it is unsubscribed
[info] - should update status correctly
[info] - should update JobValidationFailed status correctly
[info] - should update JobErroredOut status correctly
[info] NamedRddsSpec:
[info] NamedRdds 
[info] - get() should return None when RDD does not exist
[info] - get() should return Some(RDD) when it exists
[info] - destroy() should do nothing when RDD with given name doesn't exist
[info] - destroy() should destroy an RDD that exists
[info] - getNames() should return names of all managed RDDs
[info] - getOrElseCreate() should call generator function if RDD does not exist
[info] - getOrElseCreate() should not call generator function, should return existing RDD if one exists
[info] - update() should replace existing RDD
[info] - should include underlying exception when error occurs
[info] SparkJobSpec:
[info] Sample tests for default validation && method 
[info] - should return valid
[info] - should return invalid if one of them is invalid
[info] - should return invalid if both of them are invalid with the first message
[ERROR] [02/14/2015 06:56:15.772] [test-akka.actor.default-dispatcher-6] [akka.dispatch.Dispatcher] null
java.lang.NullPointerException
    at spark.jobserver.JobManagerActor.spark$jobserver$JobManagerActor$$postEachJob(JobManagerActor.scala:257)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$2.applyOrElse(JobManagerActor.scala:239)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$2.applyOrElse(JobManagerActor.scala:232)
    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:434)
    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:433)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:385)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[ERROR] [02/14/2015 06:56:15.772] [test-akka.actor.default-dispatcher-2] [akka.dispatch.Dispatcher] null
java.lang.NullPointerException
    at spark.jobserver.JobManagerActor.spark$jobserver$JobManagerActor$$postEachJob(JobManagerActor.scala:257)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$2.applyOrElse(JobManagerActor.scala:239)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$2.applyOrElse(JobManagerActor.scala:232)
    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:434)
    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:433)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:385)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[info] JobManagerActorAdHocSpec:
[info] error conditions 
[info] - should return errors if appName does not match
[info] - should return error message if classPath does not match *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.JobManagerActor$Initialized
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply$mcV$sp(JobManagerSpec.scala:76)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should error out if loading garbage jar *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg while waiting for NoSuchClass
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:327)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply$mcV$sp(JobManagerSpec.scala:86)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply(JobManagerSpec.scala:81)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply(JobManagerSpec.scala:81)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should error out if job validation fails *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected class spark.jobserver.CommonMessages$JobValidationFailed, found class spark.jobserver.CommonMessages$NoSuchClass$
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:413)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply$mcV$sp(JobManagerSpec.scala:95)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply(JobManagerSpec.scala:89)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply(JobManagerSpec.scala:89)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] starting jobs 
[info] - should start job and return result successfully (all events) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply$mcV$sp(JobManagerSpec.scala:113)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply(JobManagerSpec.scala:107)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply(JobManagerSpec.scala:107)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should start job more than one time and return result successfully (all events) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply$mcV$sp(JobManagerSpec.scala:124)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply(JobManagerSpec.scala:118)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply(JobManagerSpec.scala:118)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should start job and return results (sync route) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Did not get JobResult
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply$mcV$sp(JobManagerSpec.scala:141)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply(JobManagerSpec.scala:135)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply(JobManagerSpec.scala:135)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should start job and return JobStarted (async) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply$mcV$sp(JobManagerSpec.scala:153)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply(JobManagerSpec.scala:147)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply(JobManagerSpec.scala:147)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should return error if job throws an error *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobErroredOut
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply$mcV$sp(JobManagerSpec.scala:163)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply(JobManagerSpec.scala:157)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply(JobManagerSpec.scala:157)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - job should get jobConfig passed in to StartJob message *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Did not get JobResult
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply$mcV$sp(JobManagerSpec.scala:175)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply(JobManagerSpec.scala:167)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply(JobManagerSpec.scala:167)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should properly serialize case classes and other job jar classes *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected: Did not get JobResult but got unexpected message JobResult(8f3ca434-11d2-45f7-8702-38046f7739b5,ArrayBuffer(foo))
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:346)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply$mcV$sp(JobManagerSpec.scala:188)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply(JobManagerSpec.scala:181)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply(JobManagerSpec.scala:181)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should refuse to start a job when too many jobs in the context are running *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Expected a message but didn't get one!
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply$mcV$sp(JobManagerSpec.scala:212)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply(JobManagerSpec.scala:196)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply(JobManagerSpec.scala:196)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should start a job that's an object rather than class *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected: Did not get JobResult but got unexpected message JobStarted(2c9d6a4a-7aa0-4226-a8b1-15dcd60e5444,test,2015-02-14T06:56:13.741-06:00)
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:346)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply$mcV$sp(JobManagerSpec.scala:238)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply(JobManagerSpec.scala:231)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply(JobManagerSpec.scala:231)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] WebApiSpec:
[info] jars routes 
[info] - should list all jars
[info] - should respond with OK if jar uploaded successfully
[info] - should respond with bad request if jar formatted incorrectly
[info] list jobs 
[info] - should list jobs correctly
[info] /jobs routes 
[info] - should respond with bad request if jobConfig cannot be parsed
[info] - should merge user passed jobConfig with default jobConfig
[info] - async route should return 202 if job starts successfully
[info] - adhoc job of sync route should return 200 and result
[info] - should be able to take a timeout param
[info] - adhoc job started successfully of async route should return 202
[info] - should be able to query job result from /jobs/<id> route
[info] - should be able to query job config from /jobs/<id>/config route
[info] - should respond with 404 Not Found from /jobs/<id>/config route if jobId does not exist
[info] - should respond with 404 Not Found if context does not exist
[info] - should respond with 404 Not Found if app or class not found
[info] - sync route should return Ok with ERROR in JSON response if job failed
[info] serializing complex data types 
[info] - should be able to serialize nested Seq's and Map's within Map's to JSON
[info] - should be able to serialize Seq's with different types to JSON
[info] - should be able to serialize base types (eg float, numbers) to JSON
[info] - should convert non-understood types to string
[info] context routes 
[info] - should list all contexts
[info] - should respond with 404 Not Found if stopping unknown context
[info] - should return OK if stopping known context
[info] - should respond with bad request if starting an already started context
[info] - should return OK if starting a new context
[info] spark alive workers 
[info] - should return OK
[info] SparkWebUiActorSpec:
[info] SparkWebUiActor 
[info] - should get worker info
[info] JobResultActorSpec:
[info] JobResultActor 
[info] - should return error if non-existing jobs are asked
[info] - should get back existing result
[info] - should be informed only once by subscribed result
[info] - should not be informed unsubscribed result
[info] - should not publish if do not subscribe to JobResult events
[info] - should return error if non-existing subscription is unsubscribed
[info] JobSqlDAOSpec:
[info] save and get the jars 
[info] - should be able to save one jar and get it back
[info] - should be able to retrieve the jar file
[info] saveJobConfig() and getJobConfigs() tests 
[info] - should provide an empty map on getJobConfigs() for an empty CONFIGS table
[info] - should save and get the same config
[info] - should be able to get previously saved config
[info] - Save a new config, bring down DB, bring up DB, should get configs from DB
[info] Basic saveJobInfo() and getJobInfos() tests 
[info] - should provide an empty map on getJobInfos() for an empty JOBS table
[info] - should save a new JobInfo and get the same JobInfo
[info] - should be able to get previously saved JobInfo
[info] - Save another new jobInfo, bring down DB, bring up DB, should JobInfos from DB
[info] - saving a JobInfo with the same jobId should update the JOBS table
[info] JobManagerActorSpec:
[info] error conditions 
[info] - should return errors if appName does not match
[info] - should return error message if classPath does not match *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.JobManagerActor$Initialized
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply$mcV$sp(JobManagerSpec.scala:76)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should error out if loading garbage jar *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg while waiting for NoSuchClass
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:327)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply$mcV$sp(JobManagerSpec.scala:86)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply(JobManagerSpec.scala:81)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$8.apply(JobManagerSpec.scala:81)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should error out if job validation fails *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected class spark.jobserver.CommonMessages$JobValidationFailed, found class spark.jobserver.CommonMessages$NoSuchClass$
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:413)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply$mcV$sp(JobManagerSpec.scala:95)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply(JobManagerSpec.scala:89)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$9.apply(JobManagerSpec.scala:89)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] starting jobs 
[info] - should start job and return result successfully (all events) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply$mcV$sp(JobManagerSpec.scala:113)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply(JobManagerSpec.scala:107)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$10.apply(JobManagerSpec.scala:107)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should start job more than one time and return result successfully (all events) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply$mcV$sp(JobManagerSpec.scala:124)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply(JobManagerSpec.scala:118)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$11.apply(JobManagerSpec.scala:118)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should start job and return results (sync route) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Did not get JobResult
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply$mcV$sp(JobManagerSpec.scala:141)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply(JobManagerSpec.scala:135)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$12.apply(JobManagerSpec.scala:135)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should start job and return JobStarted (async) *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply$mcV$sp(JobManagerSpec.scala:153)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply(JobManagerSpec.scala:147)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$13.apply(JobManagerSpec.scala:147)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - should return error if job throws an error *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobErroredOut
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply$mcV$sp(JobManagerSpec.scala:163)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply(JobManagerSpec.scala:157)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$14.apply(JobManagerSpec.scala:157)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - job should get jobConfig passed in to StartJob message *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Did not get JobResult
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply$mcV$sp(JobManagerSpec.scala:175)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply(JobManagerSpec.scala:167)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$15.apply(JobManagerSpec.scala:167)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should properly serialize case classes and other job jar classes *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (4 seconds) during expectMsg: Did not get JobResult
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply$mcV$sp(JobManagerSpec.scala:188)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply(JobManagerSpec.scala:181)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$16.apply(JobManagerSpec.scala:181)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should refuse to start a job when too many jobs in the context are running *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg: Expected a message but didn't get one!
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:345)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply$mcV$sp(JobManagerSpec.scala:212)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply(JobManagerSpec.scala:196)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$17.apply(JobManagerSpec.scala:196)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] - should start a job that's an object rather than class *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected: Did not get JobResult but got unexpected message JobStarted(a962d12c-2ef4-4c39-92dc-5807c7eed5b1,test,2015-02-14T06:57:50.865-06:00)
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgPF(TestKit.scala:346)
[info]   at akka.testkit.TestKit.expectMsgPF(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply$mcV$sp(JobManagerSpec.scala:238)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply(JobManagerSpec.scala:231)
[info]   at spark.jobserver.JobManagerSpec$$anonfun$3$$anonfun$apply$mcV$sp$19.apply(JobManagerSpec.scala:231)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   at org.scalatest.FunSpec$class.invokeWithFixture$1(FunSpec.scala:1597)
[info]   ...
[info] starting jobs 
[info] - jobs should be able to cache RDDs and retrieve them through getPersistentRDDs *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected class spark.jobserver.JobManagerActor$Initialized, found class spark.jobserver.CommonMessages$JobStarted
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:413)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$2.apply$mcV$sp(JobManagerActorSpec.scala:18)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$2.apply(JobManagerActorSpec.scala:16)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$2.apply(JobManagerActorSpec.scala:16)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] - jobs should be able to cache and retrieve RDDs by name *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected class spark.jobserver.JobManagerActor$Initialized, found class spark.jobserver.CommonMessages$NoJobSlotsAvailable
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:413)
[info]   at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info]   at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$3.apply$mcV$sp(JobManagerActorSpec.scala:34)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$3.apply(JobManagerActorSpec.scala:32)
[info]   at spark.jobserver.JobManagerActorSpec$$anonfun$2$$anonfun$apply$mcV$sp$3.apply(JobManagerActorSpec.scala:32)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info]   ...
[info] LocalContextSupervisorSpec:
[info] context management 
[info] - should list empty contexts at startup
[info] - can add contexts from jobConfig *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected List(olap-demo), found ArrayBuffer()
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:328)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply$mcV$sp(LocalContextSupervisorSpec.scala:77)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(LocalContextSupervisorSpec.scala:73)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(LocalContextSupervisorSpec.scala:73)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info]   ...
[info] - should be able to add multiple new contexts *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg while waiting for ContextInitialized
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:327)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$3.apply$mcV$sp(LocalContextSupervisorSpec.scala:83)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$3.apply(LocalContextSupervisorSpec.scala:80)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$3.apply(LocalContextSupervisorSpec.scala:80)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info]   ...
[info] - should be able to stop contexts already running *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected ContextInitialized, found ContextInitError(akka.pattern.AskTimeoutException: Timed out)
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:328)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$4.apply$mcV$sp(LocalContextSupervisorSpec.scala:96)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$4.apply(LocalContextSupervisorSpec.scala:93)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$4.apply(LocalContextSupervisorSpec.scala:93)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info]   ...
[info] - should return NoSuchContext if attempt to stop nonexisting context *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected NoSuchContext, found ContextInitError(akka.pattern.AskTimeoutException: Timed out)
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:328)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$5.apply$mcV$sp(LocalContextSupervisorSpec.scala:110)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$5.apply(LocalContextSupervisorSpec.scala:108)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$5.apply(LocalContextSupervisorSpec.scala:108)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info]   ...
[info] - should not allow creation of an already existing context *** FAILED ***
[info]   java.lang.AssertionError: assertion failed: expected ContextInitialized, found ContextInitError(akka.pattern.AskTimeoutException: Timed out)
[info]   at scala.Predef$.assert(Predef.scala:179)
[info]   at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:328)
[info]   at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info]   at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$6.apply$mcV$sp(LocalContextSupervisorSpec.scala:115)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$6.apply(LocalContextSupervisorSpec.scala:113)
[info]   at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$6.apply(LocalContextSupervisorSpec.scala:113)
[info]   at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info]   at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info]   ...
[info] SparkJobUtilsSpec:
[info] SparkJobUtils.configToSparkConf 
[info] - should translate num-cpu-cores and memory-per-node properly
[info] - should add other arbitrary settings
[info] JobInfoActorSpec:
[info] JobInfoActor 
[info] - should store a job configuration
[info] - should return a job configuration when the jobId exists
[info] - should return error if jobId does not exist
[error] Failed: Total 106, Failed 31, Errors 0, Passed 75
[error] Failed tests:
[error]     spark.jobserver.JobManagerActorAdHocSpec
[error]     spark.jobserver.JobManagerActorSpec
[error]     spark.jobserver.LocalContextSupervisorSpec
[error] (job-server/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 374 s, completed Feb 14, 2015 6:58:31 AM
Assembly failed

Similar issue at ooyala/spark-jobserver#59

cc/ @velvia

sbt test error akka.pattern.AskTimeoutException: Timed out

I am using 0.4.1 and I get this error when I run "sbt test"

[info] JobManagerActorAdHocSpec:
[info] error conditions
[info] - should return errors if appName does not match
[info] - should return error message if classPath does not match *** FAILED ***
[info] java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.JobManagerActor$Initialized
[info] at scala.Predef$.assert(Predef.scala:179)
[info] at akka.testkit.TestKitBase$class.expectMsgClass_internal(TestKit.scala:412)
[info] at akka.testkit.TestKitBase$class.expectMsgClass(TestKit.scala:399)
[info] at akka.testkit.TestKit.expectMsgClass(TestKit.scala:707)
[info] at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply$mcV$sp(JobManagerSpec.scala:76)
[info] at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info] at spark.jobserver.JobManagerSpec$$anonfun$2$$anonfun$apply$mcV$sp$7.apply(JobManagerSpec.scala:73)
[info] at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info] at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info] at spark.jobserver.JobManagerSpec.withFixture(JobManagerSpec.scala:36)
[info] ...
[info] Exception encountered when attempting to run suite spark.jobserver.JobManagerActorAdHocSpec: Timed out *** ABORTED ***
[info] akka.pattern.AskTimeoutException: Timed out
[info] at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
[info] at akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)
[info] at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
[info] at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
[info] at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:455)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.executeBucket$1(Scheduler.scala:407)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:411)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
[info] at java.lang.Thread.run(Thread.java:745)
[info] ...
[info] LocalContextSupervisorSpec:
[info] context management
[info] - should list empty contexts at startup
[info] - can add contexts from jobConfig *** FAILED ***
[info] java.lang.AssertionError: assertion failed: expected List(olap-demo), found ArrayBuffer()
[info] at scala.Predef$.assert(Predef.scala:179)
[info] at akka.testkit.TestKitBase$class.expectMsg_internal(TestKit.scala:328)
[info] at akka.testkit.TestKitBase$class.expectMsg(TestKit.scala:314)
[info] at akka.testkit.TestKit.expectMsg(TestKit.scala:707)
[info] at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply$mcV$sp(LocalContextSupervisorSpec.scala:77)
[info] at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(LocalContextSupervisorSpec.scala:73)
[info] at spark.jobserver.LocalContextSupervisorSpec$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(LocalContextSupervisorSpec.scala:73)
[info] at org.scalatest.FunSpec$$anon$1.apply(FunSpec.scala:1600)
[info] at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
[info] at spark.jobserver.LocalContextSupervisorSpec.withFixture(LocalContextSupervisorSpec.scala:41)
[info] ...
[info] Exception encountered when attempting to run suite spark.jobserver.LocalContextSupervisorSpec: Timed out *** ABORTED ***
[info] akka.pattern.AskTimeoutException: Timed out
[info] at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
[info] at akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)
[info] at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
[info] at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
[info] at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:455)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.executeBucket$1(Scheduler.scala:407)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:411)
[info] at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
[info] at java.lang.Thread.run(Thread.java:745)
[info] ...
[error] Failed: Total 81, Failed 6, Errors 0, Passed 75
[error] Failed tests:
[error] spark.jobserver.JobManagerActorAdHocSpec
[error] spark.jobserver.JobManagerActorSpec
[error] spark.jobserver.LocalContextSupervisorSpec
[error] sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 162 s, completed 24 Mar, 2015 11:47:08 AM

CDH 5.3 Spark 1.2 NoSuchMethodError

We have been running Spark Jobserver for some time. We just upgraded one of our clusters to CDH 5.3 (and consequently Spark 1.2). I did a git pull, and from sbt it actually seems to run fine. When I try to run it from server_start.sh I get the exception below. Any thoughts are greatly appreciated.

Uncaught error from thread [JobServer-akka.actor.default-dispatcher-4] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
java.lang.NoSuchMethodError: akka.actor.ActorContext.dispatcher()Lscala/concurrent/ExecutionContextExecutor;
at spark.jobserver.LocalContextSupervisorActor.spark$jobserver$LocalContextSupervisorActor$$startContext(LocalContextSupervisorActor.scala:161)
at spark.jobserver.LocalContextSupervisorActor$$anonfun$spark$jobserver$LocalContextSupervisorActor$$addContextsFromConfig$2$$anonfun$apply$1.apply(LocalContextSupervisorActor.scala:186)
at spark.jobserver.LocalContextSupervisorActor$$anonfun$spark$jobserver$LocalContextSupervisorActor$$addContextsFromConfig$2$$anonfun$apply$1.apply(LocalContextSupervisorActor.scala:183)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at spark.jobserver.LocalContextSupervisorActor$$anonfun$spark$jobserver$LocalContextSupervisorActor$$addContextsFromConfig$2.apply(LocalContextSupervisorActor.scala:183)
at spark.jobserver.LocalContextSupervisorActor$$anonfun$spark$jobserver$LocalContextSupervisorActor$$addContextsFromConfig$2.apply(LocalContextSupervisorActor.scala:182)
at scala.util.Success.foreach(Try.scala:205)
at spark.jobserver.LocalContextSupervisorActor.spark$jobserver$LocalContextSupervisorActor$$addContextsFromConfig(LocalContextSupervisorActor.scala:182)
at spark.jobserver.LocalContextSupervisorActor$$anonfun$wrappedReceive$1.applyOrElse(LocalContextSupervisorActor.scala:87)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at ooyala.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorStack.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at ooyala.common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:26)
at ooyala.common.akka.Slf4jLogging$class.ooyala$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:35)
at ooyala.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogging.scala:25)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at ooyala.common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:24)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Accept error: could not accept new connection java.io.IOException: Too many open files

Hi,

I saw tons of the errors below on the Spark job server when my app sent multiple requests to the server.

Is this a bug? How can I fix it?

[2015-01-27 16:40:59,292] ERROR akka.io.TcpListener [] [akka://JobServer/system/IO-TCP/selectors/$a/0] - Accept error: could not accept new connection
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
at akka.io.TcpListener.acceptAllPending(TcpListener.scala:96)
at akka.io.TcpListener$$anonfun$bound$1.applyOrElse(TcpListener.scala:76)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Unhandled exception causes spark job server to crash (java.lang.ClassNotFoundException: org.apache.spark.SparkContext)

Hi,

I debugged the Spark job server in IntelliJ; when the client posted jobs to it, it crashed with the logs below.

One of the reasons is that it is unable to find SparkContext:
java.lang.ClassNotFoundException: org.apache.spark.SparkContext

But if I deploy the same code with the bin/server_deploy.sh command and run it with ./server_start.sh, it does not report this error.

So how can I make it not crash in the debugger?

Connected to the target VM, address: '127.0.0.1:37764', transport: 'socket'
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
SLF4J: Failed to load class "org.slf4j.impl.StaticMDCBinder".
SLF4J: Defaulting to no-operation MDCAdapter implementation.
SLF4J: See http://www.slf4j.org/codes.html#no_static_mdc_binder for further details.
Uncaught error from thread [JobServer-akka.actor.default-dispatcher-14] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
java.lang.NoClassDefFoundError: Lorg/apache/spark/SparkContext;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2575)
at java.lang.Class.getDeclaredField(Class.java:2060)
at akka.actor.ActorCell.lookupAndSetField(ActorCell.scala:602)
at akka.actor.ActorCell.setActorFields(ActorCell.scala:627)
at akka.actor.ActorCell.clearActorFields(ActorCell.scala:620)
at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:211)
at akka.actor.dungeon.FaultHandling$class.handleChildTerminated(FaultHandling.scala:283)
at akka.actor.ActorCell.handleChildTerminated(ActorCell.scala:338)
at akka.actor.dungeon.DeathWatch$class.watchedActorTerminated(DeathWatch.scala:62)
at akka.actor.ActorCell.watchedActorTerminated(ActorCell.scala:338)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:424)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.dispatch.Mailbox.run(Mailbox.scala:218)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:385)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
Disconnected from the target VM, address: '127.0.0.1:37764', transport: 'socket'
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkContext
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 20 more
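
A possible cause, assuming the job server build marks Spark as a "provided" dependency: server_start.sh puts the installed Spark jars on the classpath, while an IDE run configuration that only uses the compile classpath never sees org.apache.spark.SparkContext. A minimal build.sbt sketch for local debugging only (the version is illustrative, not taken from the project's build):

// build.sbt sketch (hypothetical): put spark-core on the compile classpath so an
// IDE or "sbt run" session can resolve org.apache.spark.SparkContext.
// A deployed assembly would normally keep this dependency as "provided".
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"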

Running "sbt package ." gives the error below; can anyone give some hints?

[info] Packaging /Users/xiaods/Downloads/spark-jobserver-0.5.0/job-server/target/job-server-0.5.0.jar ...
[info] Done packaging.
[success] Total time: 364 s, completed 2015-3-23 14:58:35
[error] Expected letter
[error] Expected symbol
[error] Expected '!'
[error] Expected '+'
[error] Expected '++'
[error] Expected 'debug'
[error] Expected 'info'
[error] Expected 'warn'
[error] Expected 'error'
[error] Expected ';'
[error] Expected end of input.
[error] Expected '--'
[error] Expected 'show'
[error] Expected 'all'
[error] Expected '*'
[error] Expected '{'
[error] Expected project ID
[error] Expected configuration
[error] Expected key
[error] Expected '-'
[error] .
[error] ^

In yarn-client mode, the SparkContext can't be recreated after a crash

In yarn-client mode I hit a problem: the AM crashed, so the whole SparkContext crashed, but the SparkContext was not recreated. My new incoming jobs submitted to the Spark jobserver were accepted but never ran, because the SparkContext no longer existed.
I want to know whether the Spark jobserver has a SparkContext-recreation feature. If not, would you consider adding this feature in the future?

What is the recommended size of the return data set

Hi,
I have a jobserver job that takes a bunch of filter parameters and returns the filtered dataset.
However, my return dataset is on the order of multiple GB.
Can I send the result as a map (without choking the job server with an OOM)?
Or is it better to save it to HDFS and just send back the path of the saved files?

Thanks
Manas
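
One common pattern for results this large is to persist them to HDFS inside the job and return only the path, so the job server never has to serialize the full dataset. A rough sketch against the SparkJob API; the input path, keyword filter, and output location are hypothetical:

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object FilterAndSaveJob extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // Hypothetical filter: keep input lines containing the requested keyword.
    val keyword = config.getString("input.keyword")
    val filtered = sc.textFile("hdfs:///data/events").filter(_.contains(keyword))

    // Write the multi-GB result to HDFS instead of returning it to the caller ...
    val outputPath = s"hdfs:///results/filtered-$keyword-${System.currentTimeMillis}"
    filtered.saveAsTextFile(outputPath)

    // ... and return only the small path string as the job result.
    Map("status" -> "saved", "path" -> outputPath)
  }
}

The caller then reads the files straight from HDFS (or hands the path to a follow-up job) instead of pulling gigabytes through the job server's HTTP response.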

Multiple Applications (Spark Contexts) Concurrently Fail With Broadcast Error

We are unable to run more than one application at a time using Spark 1.0.0 on CDH5. We submit two applications using two different SparkContexts on the same Spark Master. The Spark Master was started using the following command and parameters and is running in standalone mode:

/usr/java/jdk1.7.0_55-cloudera/bin/java 
  -XX:MaxPermSize=128m 
  -Djava.net.preferIPv4Stack=true 
  -Dspark.akka.logLifecycleEvents=true 
  -Xms8589934592 
  -Xmx8589934592 
  org.apache.spark.deploy.master.Master 
    --ip ip-10-186-155-45.ec2.internal

When we submit this application by itself, it finishes and all of the data comes out fine. The problem occurs when we try to run another application while an existing application is still processing; we get an error stating that the Spark contexts were shut down prematurely.




The errors can be viewed in the following pastebins. All IP addresses have been changed to 1.1.1.1 for security reasons.
Notice that on the top of the logs we have printed out the spark config stuff for reference.

The working logs: Working Pastebin
The broken logs: Broken Pastebin

We have also included the worker logs. For the second app, we see 7 additional directories in the work/app/ directory: 0/ 1/ 2/ 3/ 4/ 5/ 6/. There are then two different groups of errors: the first three form one group and the other four form the other group.

Worker log for broken app group 1: Broken App Group 1
Worker log for broken app group 2: Broken App Group 2
Worker log for working app: available upon request.

The two different errors are the last lines of both groups and are:

Received LaunchTask command but executor was null
Slave registration failed: Duplicate executor ID: 4

tl;dr
We are unable to run more than one application in the same spark master using different spark contexts. The only errors we see are broadcast errors.

The JVM process of the Spark job server (Spray HTTP server) goes down without any logs under a stress test

We have the Spark job server running together with a standalone Spark cluster on one machine, which has 8 cores and 14 GB of memory.

We tried to run jobs against it every second, but after about 30 jobs the Spark job server process died without any logs, so we don't know what caused this.

Here is what I adjusted in the code and configuration:

  1. add more log output where the PoisonPill is called
  2. add the "verbose-error-messages = on" to spray.can.server in application.conf
  3. change log4j.rootLogger in log4j-server.properties as "log4j.rootLogger=DEBUG, LOGFILE"
  4. add below JAVA option in server_start.sh
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=$appdir/heapdump

But I still could not figure out from the logs why the Spray server goes down without leaving any log output.

Please point me to how I can find out why the job server process dies.

Thanks in advance.

Trying to use spark-jobserver to fetch updated values from cassandra

Hi,

My requirement is:

I am trying to fetch data from Cassandra using spark-jobserver.
I have written one job which is able to do the above task by running:

curl -X POST 'localhost:8090/jobs?appName=app&classPath=spark.jobserver.my'

Now I want to fetch data from Cassandra so that I get the updated values in Cassandra every hour.
Can someone please tell me how to achieve this in spark-jobserver?
Because once I run the above command, the job finishes after fetching the data.

Do I need to run the above POST command every time (every hour) to fetch data?
If I do that, it will create a new job every hour.
Is it possible to rerun one job more than once?

Thanks in advance.
-Sumant
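
For what it's worth: each POST to /jobs does create a new job, so an hourly refresh is usually done by re-submitting the same appName/classPath on a schedule (cron, or a small scheduler in the client). A client-side sketch that simply repeats the curl call above as an empty HTTP POST every hour; the host, appName, and classPath are copied from that command:

import java.net.{HttpURLConnection, URL}
import java.util.concurrent.{Executors, TimeUnit}

object HourlyJobSubmitter {
  // Same endpoint as the curl command above; adjust host, appName and classPath as needed.
  private val jobsUrl = "http://localhost:8090/jobs?appName=app&classPath=spark.jobserver.my"

  private def submitJob(): Unit = {
    val conn = new URL(jobsUrl).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.getOutputStream.close()          // empty POST body, as in the curl example
    println(s"Submitted job, HTTP ${conn.getResponseCode}")
    conn.disconnect()
  }

  def main(args: Array[String]): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    val task = new Runnable { def run(): Unit = submitJob() }
    // Each run shows up as a new job ID; reuse a long-running context if the
    // hourly jobs should share cached RDDs between runs.
    scheduler.scheduleAtFixedRate(task, 0, 1, TimeUnit.HOURS)
  }
}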

How to set executor-num in yarn-client mode?

I am using spark-jobserver in yarn-client mode. I can only set num-cpu-cores and memory-per-node, but how can I set the number of executors? (Right now the default executor count is always 2.) Config on either the client side or the server side is fine for me.
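
Not an authoritative answer, but the SparkJobUtilsSpec output earlier in this log ("should add other arbitrary settings") suggests that context settings beyond num-cpu-cores and memory-per-node get copied into the SparkConf. A small sketch of that passthrough idea, assuming spark.* keys from a context configuration reach the SparkConf, so spark.executor.instances would be what YARN uses for the executor count:

import com.typesafe.config.ConfigFactory
import org.apache.spark.SparkConf
import scala.collection.JavaConverters._

object ContextConfSketch extends App {
  // Hypothetical context configuration; the spark.* key is the "arbitrary setting".
  val contextConfig = ConfigFactory.parseString(
    """num-cpu-cores = 2
      |memory-per-node = 512m
      |spark.executor.instances = 4
      |""".stripMargin)

  // Mimic the passthrough: copy every spark.* key into the SparkConf.
  val conf = new SparkConf(false).setAppName("yarn-client-context")
  for (entry <- contextConfig.entrySet().asScala if entry.getKey.startsWith("spark."))
    conf.set(entry.getKey, contextConfig.getString(entry.getKey))

  println(conf.get("spark.executor.instances"))   // prints 4
}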

NamedRDD is not working on 0.5.0

I just upgraded from 0.4.1 to 0.5.0 without any other changes, and the RDDs are no longer cached.

I did a test where, within the same job, I did an update and a get. The get failed to find any cached RDD even though the update went through without any errors.

How can I cache the RDDs like in the previous release?

Thank you,
Hung
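
For reference, a minimal sketch of the pattern being described, using the NamedRddSupport mixin; this assumes the namedRdds.update/get API and is meant as the intended usage, not as a confirmation that it still behaves the same on 0.5.0 (which is exactly what this issue questions):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import spark.jobserver.{NamedRddSupport, SparkJob, SparkJobValid, SparkJobValidation}

object CachedRddJob extends SparkJob with NamedRddSupport {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // Cache an RDD under a name so later jobs in the same context can reuse it.
    val numbers: RDD[Int] = sc.parallelize(1 to 1000)
    namedRdds.update("numbers", numbers)

    // Retrieve it again by name (here within the same job, as in the test described above).
    namedRdds.get[Int]("numbers").map(_.count()).getOrElse(-1L)
  }
}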

Test fails with SparkContext has been shutdown

Hi all,

I am trying to run server_deploy.sh. It seems that it fails to create the SparkContext for the first test, which uses spark.jobserver.WordCountExample; I'm not sure what the problem might be, as I can run that example separately.

Here is my job-server-test.log:

[2015-01-27 13:13:44,630] INFO  .jobserver.JobManagerActor [] [akka://test/user/$a] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:13:44,869] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$a/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:13:44,872] INFO  k.jobserver.JobResultActor [] [akka://test/user/$a/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:13:45,035] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:13:45,035] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:13:45,040] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:13:45,051] INFO  .jobserver.JobManagerActor [] [akka://test/user/$b] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:13:45,055] INFO  k.jobserver.JobResultActor [] [akka://test/user/$b/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:13:45,059] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$b/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:13:45,229] WARN  rg.apache.spark.util.Utils [] [akka://test/user/$b] - Your hostname, tamas-laptop resolves to a loopback address: 127.0.1.1; using 10.1.3.213 instead (on interface eth0)
[2015-01-27 13:13:45,230] WARN  rg.apache.spark.util.Utils [] [akka://test/user/$b] - Set SPARK_LOCAL_IP if you need to bind to another address
[2015-01-27 13:13:45,609] INFO  ache.spark.SecurityManager [] [akka://test/user/$b] - Changing view acls to: tja01
[2015-01-27 13:13:45,610] INFO  ache.spark.SecurityManager [] [akka://test/user/$b] - Changing modify acls to: tja01
[2015-01-27 13:13:45,611] INFO  ache.spark.SecurityManager [] [akka://test/user/$b] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:13:45,908] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$b] - Slf4jLogger started
[2015-01-27 13:13:45,999] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:13:46,450] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://[email protected]:34516]
[2015-01-27 13:13:46,463] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$b] - Successfully started service 'sparkDriver' on port 34516.
[2015-01-27 13:13:46,498] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$b] - Registering MapOutputTracker
[2015-01-27 13:13:46,525] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$b] - Registering BlockManagerMaster
[2015-01-27 13:13:46,560] INFO  k.storage.DiskBlockManager [] [akka://test/user/$b] - Created local directory at /tmp/spark-local-20150127131346-c001
[2015-01-27 13:13:46,572] INFO  .spark.storage.MemoryStore [] [akka://test/user/$b] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:13:47,301] WARN  doop.util.NativeCodeLoader [] [akka://test/user/$b] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[2015-01-27 13:13:47,548] INFO  pache.spark.HttpFileServer [] [akka://test/user/$b] - HTTP File server directory is /tmp/spark-34bbc817-ea02-4871-924d-dfb34763e456
[2015-01-27 13:13:47,564] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$b] - Starting HTTP Server
[2015-01-27 13:13:47,799] INFO  clipse.jetty.server.Server [] [akka://test/user/$b] - jetty-8.1.14.v20131031
[2015-01-27 13:13:47,825] INFO  y.server.AbstractConnector [] [akka://test/user/$b] - Started [email protected]:60616
[2015-01-27 13:13:47,826] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$b] - Successfully started service 'HTTP file server' on port 60616.
[2015-01-27 13:13:53,012] INFO  clipse.jetty.server.Server [] [akka://test/user/$b] - jetty-8.1.14.v20131031
[2015-01-27 13:13:53,034] INFO  y.server.AbstractConnector [] [akka://test/user/$b] - Started [email protected]:34008
[2015-01-27 13:13:53,034] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$b] - Successfully started service 'SparkUI' on port 34008.
[2015-01-27 13:13:53,040] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$b] - Started SparkUI at http://10.1.3.213:34008
[2015-01-27 13:13:53,294] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://[email protected]:34516/user/HeartbeatReceiver
[2015-01-27 13:13:53,528] INFO  .NettyBlockTransferService [] [akka://test/user/$b] - Server created on 46782
[2015-01-27 13:13:53,532] INFO  storage.BlockManagerMaster [] [akka://test/user/$b] - Trying to register BlockManager
[2015-01-27 13:13:53,535] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:46782 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 46782)
[2015-01-27 13:13:53,540] INFO  storage.BlockManagerMaster [] [akka://test/user/$b] - Registered BlockManager
[2015-01-27 13:13:53,851] INFO  .jobserver.RddManagerActor [] [akka://test/user/$b/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:13:53,853] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:13:53,853] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:13:53,854] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:13:53,854] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:13:53,870] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:13:53,871] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:13:53,871] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:13:53,871] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:13:53,871] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:13:53,872] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:13:53,872] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:13:53,872] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:13:53,872] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:13:53,872] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:13:53,873] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:13:53,874] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:13:53,875] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:13:53,932] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://10.1.3.213:34008
[2015-01-27 13:13:53,935] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:13:54,996] INFO  apOutputTrackerMasterActor [] [akka://test/user/$b] - MapOutputTrackerActor stopped!
[2015-01-27 13:13:55,082] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:13:55,083] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:13:55,085] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:13:55,099] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:13:55,099] INFO  rovider$RemotingTerminator [] [akka.tcp://[email protected]:34516/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:13:55,100] INFO  .jobserver.JobManagerActor [] [akka://test/user/$c] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:13:55,101] INFO  rovider$RemotingTerminator [] [akka.tcp://[email protected]:34516/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:13:55,102] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$c/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:13:55,102] INFO  k.jobserver.JobResultActor [] [akka://test/user/$c/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:13:55,137] INFO  ache.spark.SecurityManager [] [akka://test/user/$c] - Changing view acls to: tja01
[2015-01-27 13:13:55,137] INFO  ache.spark.SecurityManager [] [akka://test/user/$c] - Changing modify acls to: tja01
[2015-01-27 13:13:55,138] INFO  ache.spark.SecurityManager [] [akka://test/user/$c] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:13:55,169] INFO  rovider$RemotingTerminator [] [akka.tcp://[email protected]:34516/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:13:55,249] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$c] - Slf4jLogger started
[2015-01-27 13:13:55,259] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:13:55,280] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:49266]
[2015-01-27 13:13:55,282] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$c] - Successfully started service 'sparkDriver' on port 49266.
[2015-01-27 13:13:55,283] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$c] - Registering MapOutputTracker
[2015-01-27 13:13:55,285] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$c] - Registering BlockManagerMaster
[2015-01-27 13:13:55,287] INFO  k.storage.DiskBlockManager [] [akka://test/user/$c] - Created local directory at /tmp/spark-local-20150127131355-fede
[2015-01-27 13:13:55,288] INFO  .spark.storage.MemoryStore [] [akka://test/user/$c] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:13:55,290] INFO  pache.spark.HttpFileServer [] [akka://test/user/$c] - HTTP File server directory is /tmp/spark-443848ba-82ea-4c59-bc86-16a71dc8bd65
[2015-01-27 13:13:55,291] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$c] - Starting HTTP Server
[2015-01-27 13:13:55,292] INFO  clipse.jetty.server.Server [] [akka://test/user/$c] - jetty-8.1.14.v20131031
[2015-01-27 13:13:55,299] INFO  y.server.AbstractConnector [] [akka://test/user/$c] - Started [email protected]:58253
[2015-01-27 13:13:55,299] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$c] - Successfully started service 'HTTP file server' on port 58253.
[2015-01-27 13:14:00,325] INFO  clipse.jetty.server.Server [] [akka://test/user/$c] - jetty-8.1.14.v20131031
[2015-01-27 13:14:00,333] INFO  y.server.AbstractConnector [] [akka://test/user/$c] - Started [email protected]:58755
[2015-01-27 13:14:00,334] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$c] - Successfully started service 'SparkUI' on port 58755.
[2015-01-27 13:14:00,334] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$c] - Started SparkUI at http://localhost:58755
[2015-01-27 13:14:00,410] INFO  pache.spark.util.AkkaUtils [] [akka://test/user/$c] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:49266/user/HeartbeatReceiver
[2015-01-27 13:14:00,417] INFO  .NettyBlockTransferService [] [akka://test/user/$c] - Server created on 55801
[2015-01-27 13:14:00,417] INFO  storage.BlockManagerMaster [] [akka://test/user/$c] - Trying to register BlockManager
[2015-01-27 13:14:00,418] INFO  ge.BlockManagerMasterActor [] [akka://test/user/$c] - Registering block manager localhost:55801 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 55801)
[2015-01-27 13:14:00,418] INFO  storage.BlockManagerMaster [] [akka://test/user/$c] - Registered BlockManager
[2015-01-27 13:14:00,431] INFO  .jobserver.RddManagerActor [] [akka://test/user/$c/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:00,433] INFO  .jobserver.JobManagerActor [] [akka://test/user/$c] - Loading class no.such.class for app notajar
[2015-01-27 13:14:00,462] INFO  .apache.spark.SparkContext [] [akka://test/user/$c] - Added JAR /tmp/InMemoryDAO722024764359244436.jar at http://10.1.3.213:58253/jars/InMemoryDAO722024764359244436.jar with timestamp 1422364440461
[2015-01-27 13:14:00,469] INFO  util.ContextURLClassLoader [] [akka://test/user/$c] - Added URL file:/tmp/InMemoryDAO722024764359244436.jar to ContextURLClassLoader
[2015-01-27 13:14:00,470] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$c] - Loading object no.such.class$ using loader spark.jobserver.util.ContextURLClassLoader@7bbe1c14
[2015-01-27 13:14:00,472] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$c] - Loading class no.such.class using loader spark.jobserver.util.ContextURLClassLoader@7bbe1c14
[2015-01-27 13:14:00,473] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:00,473] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:00,473] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:00,473] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:00,484] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:00,485] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:00,485] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:00,485] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:00,486] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:00,486] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:00,486] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:00,486] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:00,487] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:00,487] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:00,487] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:00,487] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:00,488] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:00,488] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:00,488] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:00,489] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:00,489] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:00,489] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:00,490] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:00,491] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:00,491] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:00,491] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:00,491] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:00,543] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:58755
[2015-01-27 13:14:00,544] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:01,603] INFO  apOutputTrackerMasterActor [] [akka://test/user/$c] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:01,616] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:01,616] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:01,617] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:01,620] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:01,622] INFO  .jobserver.JobManagerActor [] [akka://test/user/$d] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:01,623] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49266/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:01,625] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49266/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:01,627] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$d/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:01,628] INFO  k.jobserver.JobResultActor [] [akka://test/user/$d/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:01,636] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49266/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:01,646] INFO  ache.spark.SecurityManager [] [akka://test/user/$d] - Changing view acls to: tja01
[2015-01-27 13:14:01,663] INFO  ache.spark.SecurityManager [] [akka://test/user/$d] - Changing modify acls to: tja01
[2015-01-27 13:14:01,663] INFO  ache.spark.SecurityManager [] [akka://test/user/$d] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:01,800] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$d] - Slf4jLogger started
[2015-01-27 13:14:01,816] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:01,864] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:44175]
[2015-01-27 13:14:01,865] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$d] - Successfully started service 'sparkDriver' on port 44175.
[2015-01-27 13:14:01,866] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$d] - Registering MapOutputTracker
[2015-01-27 13:14:01,868] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$d] - Registering BlockManagerMaster
[2015-01-27 13:14:01,869] INFO  k.storage.DiskBlockManager [] [akka://test/user/$d] - Created local directory at /tmp/spark-local-20150127131401-3d1f
[2015-01-27 13:14:01,870] INFO  .spark.storage.MemoryStore [] [akka://test/user/$d] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:01,872] INFO  pache.spark.HttpFileServer [] [akka://test/user/$d] - HTTP File server directory is /tmp/spark-98b99407-490b-4beb-9da9-3c753cd2e8b9
[2015-01-27 13:14:01,872] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$d] - Starting HTTP Server
[2015-01-27 13:14:01,874] INFO  clipse.jetty.server.Server [] [akka://test/user/$d] - jetty-8.1.14.v20131031
[2015-01-27 13:14:01,882] INFO  y.server.AbstractConnector [] [akka://test/user/$d] - Started SocketConnector@0.0.0.0:48214
[2015-01-27 13:14:01,882] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$d] - Successfully started service 'HTTP file server' on port 48214.
[2015-01-27 13:14:06,911] INFO  clipse.jetty.server.Server [] [akka://test/user/$d] - jetty-8.1.14.v20131031
[2015-01-27 13:14:06,919] INFO  y.server.AbstractConnector [] [akka://test/user/$d] - Started SelectChannelConnector@0.0.0.0:52772
[2015-01-27 13:14:06,919] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$d] - Successfully started service 'SparkUI' on port 52772.
[2015-01-27 13:14:06,920] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$d] - Started SparkUI at http://localhost:52772
[2015-01-27 13:14:06,980] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:44175/user/HeartbeatReceiver
[2015-01-27 13:14:06,981] INFO  .NettyBlockTransferService [] [akka://test/user/$d] - Server created on 49782
[2015-01-27 13:14:06,981] INFO  storage.BlockManagerMaster [] [akka://test/user/$d] - Trying to register BlockManager
[2015-01-27 13:14:06,982] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:49782 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 49782)
[2015-01-27 13:14:06,982] INFO  storage.BlockManagerMaster [] [akka://test/user/$d] - Registered BlockManager
[2015-01-27 13:14:06,991] INFO  .jobserver.RddManagerActor [] [akka://test/user/$d/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:06,992] INFO  .jobserver.JobManagerActor [] [akka://test/user/$d] - Loading class spark.jobserver.WordCountExample for app demo
[2015-01-27 13:14:06,993] INFO  .apache.spark.SparkContext [] [akka://test/user/$d] - Added JAR /tmp/InMemoryDAO8320471003519538213.jar at http://10.1.3.213:48214/jars/InMemoryDAO8320471003519538213.jar with timestamp 1422364446993
[2015-01-27 13:14:07,000] INFO  util.ContextURLClassLoader [] [akka://test/user/$d] - Added URL file:/tmp/InMemoryDAO8320471003519538213.jar to ContextURLClassLoader
[2015-01-27 13:14:07,000] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$d] - Loading object spark.jobserver.WordCountExample$ using loader spark.jobserver.util.ContextURLClassLoader@237ac531
[2015-01-27 13:14:07,006] INFO  .jobserver.JobManagerActor [] [akka://test/user/$d] - Starting Spark job 1f1b4a8c-edde-49c0-855d-238efb9ff8f0 [spark.jobserver.WordCountExample]...
[2015-01-27 13:14:07,006] INFO  k.jobserver.JobResultActor [] [akka://test/user/$d/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 1f1b4a8c-edde-49c0-855d-238efb9ff8f0
[2015-01-27 13:14:07,007] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:07,008] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:07,009] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:07,009] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:07,010] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:07,019] WARN  .jobserver.JobManagerActor [] [] - Exception from job 1f1b4a8c-edde-49c0-855d-238efb9ff8f0: 
java.lang.Throwable: No input.string config param
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:213)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:07,023] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:07,024] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:07,025] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:07,025] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:07,025] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:07,026] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:07,026] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:07,026] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:07,026] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:07,027] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:07,027] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:07,027] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:07,028] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:07,028] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:07,028] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:07,029] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:07,029] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:07,029] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:07,029] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:07,030] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:07,030] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:07,031] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:07,031] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:07,082] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:52772
[2015-01-27 13:14:07,083] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:08,135] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:08,138] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:08,139] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:08,139] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:08,141] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:08,142] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44175/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:08,143] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44175/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:08,150] INFO  .jobserver.JobManagerActor [] [akka://test/user/$e] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:08,152] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44175/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:08,153] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$e/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:08,154] INFO  k.jobserver.JobResultActor [] [akka://test/user/$e/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:08,172] INFO  ache.spark.SecurityManager [] [akka://test/user/$e] - Changing view acls to: tja01
[2015-01-27 13:14:08,172] INFO  ache.spark.SecurityManager [] [akka://test/user/$e] - Changing modify acls to: tja01
[2015-01-27 13:14:08,172] INFO  ache.spark.SecurityManager [] [akka://test/user/$e] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:08,279] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$e] - Slf4jLogger started
[2015-01-27 13:14:08,287] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:08,302] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:56015]
[2015-01-27 13:14:08,303] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$e] - Successfully started service 'sparkDriver' on port 56015.
[2015-01-27 13:14:08,304] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$e] - Registering MapOutputTracker
[2015-01-27 13:14:08,305] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$e] - Registering BlockManagerMaster
[2015-01-27 13:14:08,306] INFO  k.storage.DiskBlockManager [] [akka://test/user/$e] - Created local directory at /tmp/spark-local-20150127131408-2dbe
[2015-01-27 13:14:08,307] INFO  .spark.storage.MemoryStore [] [akka://test/user/$e] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:08,309] INFO  pache.spark.HttpFileServer [] [akka://test/user/$e] - HTTP File server directory is /tmp/spark-ef2db7f2-dfce-4e9d-8ac5-aa9f6386f6bf
[2015-01-27 13:14:08,309] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$e] - Starting HTTP Server
[2015-01-27 13:14:08,311] INFO  clipse.jetty.server.Server [] [akka://test/user/$e] - jetty-8.1.14.v20131031
[2015-01-27 13:14:08,312] INFO  y.server.AbstractConnector [] [akka://test/user/$e] - Started SocketConnector@0.0.0.0:43668
[2015-01-27 13:14:08,313] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$e] - Successfully started service 'HTTP file server' on port 43668.
[2015-01-27 13:14:13,331] INFO  clipse.jetty.server.Server [] [akka://test/user/$e] - jetty-8.1.14.v20131031
[2015-01-27 13:14:13,342] INFO  y.server.AbstractConnector [] [akka://test/user/$e] - Started SelectChannelConnector@0.0.0.0:43117
[2015-01-27 13:14:13,343] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$e] - Successfully started service 'SparkUI' on port 43117.
[2015-01-27 13:14:13,343] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$e] - Started SparkUI at http://localhost:43117
[2015-01-27 13:14:13,408] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:56015/user/HeartbeatReceiver
[2015-01-27 13:14:13,411] INFO  .NettyBlockTransferService [] [akka://test/user/$e] - Server created on 36598
[2015-01-27 13:14:13,411] INFO  storage.BlockManagerMaster [] [akka://test/user/$e] - Trying to register BlockManager
[2015-01-27 13:14:13,412] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:36598 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 36598)
[2015-01-27 13:14:13,412] INFO  storage.BlockManagerMaster [] [akka://test/user/$e] - Registered BlockManager
[2015-01-27 13:14:13,419] INFO  .jobserver.RddManagerActor [] [akka://test/user/$e/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:13,420] INFO  .jobserver.JobManagerActor [] [akka://test/user/$e] - Loading class spark.jobserver.WordCountExample for app demo
[2015-01-27 13:14:13,423] INFO  .apache.spark.SparkContext [] [akka://test/user/$e] - Added JAR /tmp/InMemoryDAO2098972855628514058.jar at http://10.1.3.213:43668/jars/InMemoryDAO2098972855628514058.jar with timestamp 1422364453423
[2015-01-27 13:14:13,429] INFO  util.ContextURLClassLoader [] [akka://test/user/$e] - Added URL file:/tmp/InMemoryDAO2098972855628514058.jar to ContextURLClassLoader
[2015-01-27 13:14:13,429] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$e] - Loading object spark.jobserver.WordCountExample$ using loader spark.jobserver.util.ContextURLClassLoader@23f9a8ff
[2015-01-27 13:14:13,433] INFO  .jobserver.JobManagerActor [] [akka://test/user/$e] - Starting Spark job 6bd35ede-445c-4303-95c8-61f8e3d6689b [spark.jobserver.WordCountExample]...
[2015-01-27 13:14:13,433] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:13,433] INFO  k.jobserver.JobResultActor [] [akka://test/user/$e/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 6bd35ede-445c-4303-95c8-61f8e3d6689b
[2015-01-27 13:14:13,437] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:13,437] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:13,437] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:13,438] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:13,449] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:13,449] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:13,450] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:13,450] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:13,450] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:13,450] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:13,451] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:13,451] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:13,451] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:13,451] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:13,452] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:13,452] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:13,452] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:13,452] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:13,453] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:13,453] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:13,453] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:13,453] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:13,454] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:13,454] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:13,454] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:13,454] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:13,454] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:13,507] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:43117
[2015-01-27 13:14:13,507] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:13,570] WARN  .jobserver.JobManagerActor [] [] - Exception from job 6bd35ede-445c-4303-95c8-61f8e3d6689b: 
org.apache.spark.SparkException: SparkContext has been shutdown
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1277)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:780)
    at spark.jobserver.WordCountExample$.runJob(WordCountExample.scala:32)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:219)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:14,561] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:14,565] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:14,565] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:14,566] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:14,566] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:14,568] INFO  .jobserver.JobManagerActor [] [akka://test/user/$f] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:14,569] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:56015/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:14,573] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:56015/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:14,576] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$f/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:14,576] INFO  k.jobserver.JobResultActor [] [akka://test/user/$f/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:14,581] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:56015/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:14,593] INFO  ache.spark.SecurityManager [] [akka://test/user/$f] - Changing view acls to: tja01
[2015-01-27 13:14:14,593] INFO  ache.spark.SecurityManager [] [akka://test/user/$f] - Changing modify acls to: tja01
[2015-01-27 13:14:14,593] INFO  ache.spark.SecurityManager [] [akka://test/user/$f] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:14,635] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$f] - Slf4jLogger started
[2015-01-27 13:14:14,642] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:14,656] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:42388]
[2015-01-27 13:14:14,657] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$f] - Successfully started service 'sparkDriver' on port 42388.
[2015-01-27 13:14:14,657] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$f] - Registering MapOutputTracker
[2015-01-27 13:14:14,658] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$f] - Registering BlockManagerMaster
[2015-01-27 13:14:14,659] INFO  k.storage.DiskBlockManager [] [akka://test/user/$f] - Created local directory at /tmp/spark-local-20150127131414-bc68
[2015-01-27 13:14:14,659] INFO  .spark.storage.MemoryStore [] [akka://test/user/$f] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:14,660] INFO  pache.spark.HttpFileServer [] [akka://test/user/$f] - HTTP File server directory is /tmp/spark-50d4eb9b-ac3c-43d9-ba59-dbd30c5576bc
[2015-01-27 13:14:14,660] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$f] - Starting HTTP Server
[2015-01-27 13:14:14,661] INFO  clipse.jetty.server.Server [] [akka://test/user/$f] - jetty-8.1.14.v20131031
[2015-01-27 13:14:14,662] INFO  y.server.AbstractConnector [] [akka://test/user/$f] - Started SocketConnector@0.0.0.0:37978
[2015-01-27 13:14:14,662] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$f] - Successfully started service 'HTTP file server' on port 37978.
[2015-01-27 13:14:19,675] INFO  clipse.jetty.server.Server [] [akka://test/user/$f] - jetty-8.1.14.v20131031
[2015-01-27 13:14:19,691] INFO  y.server.AbstractConnector [] [akka://test/user/$f] - Started SelectChannelConnector@0.0.0.0:48028
[2015-01-27 13:14:19,692] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$f] - Successfully started service 'SparkUI' on port 48028.
[2015-01-27 13:14:19,692] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$f] - Started SparkUI at http://localhost:48028
[2015-01-27 13:14:19,754] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:42388/user/HeartbeatReceiver
[2015-01-27 13:14:19,768] INFO  .NettyBlockTransferService [] [akka://test/user/$f] - Server created on 34558
[2015-01-27 13:14:19,768] INFO  storage.BlockManagerMaster [] [akka://test/user/$f] - Trying to register BlockManager
[2015-01-27 13:14:19,769] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:34558 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 34558)
[2015-01-27 13:14:19,769] INFO  storage.BlockManagerMaster [] [akka://test/user/$f] - Registered BlockManager
[2015-01-27 13:14:19,779] INFO  .jobserver.JobManagerActor [] [akka://test/user/$f] - Loading class spark.jobserver.WordCountExample for app demo
[2015-01-27 13:14:19,780] INFO  .jobserver.RddManagerActor [] [akka://test/user/$f/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:19,788] INFO  .apache.spark.SparkContext [] [akka://test/user/$f] - Added JAR /tmp/InMemoryDAO6787955189121296071.jar at http://10.1.3.213:37978/jars/InMemoryDAO6787955189121296071.jar with timestamp 1422364459787
[2015-01-27 13:14:19,793] INFO  util.ContextURLClassLoader [] [akka://test/user/$f] - Added URL file:/tmp/InMemoryDAO6787955189121296071.jar to ContextURLClassLoader
[2015-01-27 13:14:19,794] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$f] - Loading object spark.jobserver.WordCountExample$ using loader spark.jobserver.util.ContextURLClassLoader@6fdc750b
[2015-01-27 13:14:19,796] INFO  .jobserver.JobManagerActor [] [akka://test/user/$f] - Starting Spark job dea82364-5c96-4f85-bc0b-333ebd3cf3d4 [spark.jobserver.WordCountExample]...
[2015-01-27 13:14:19,796] INFO  k.jobserver.JobResultActor [] [akka://test/user/$f/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID dea82364-5c96-4f85-bc0b-333ebd3cf3d4
[2015-01-27 13:14:19,796] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:19,796] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:19,796] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:19,796] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:19,797] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:19,810] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:19,810] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:19,810] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:19,810] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:19,810] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:19,811] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:19,812] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:19,812] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:19,812] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:19,812] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:19,812] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:19,813] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:19,814] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:19,873] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:48028
[2015-01-27 13:14:19,873] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:19,877] INFO  .apache.spark.SparkContext [] [] - Starting job: collect at WordCountExample.scala:32
[2015-01-27 13:14:19,878] WARN  .jobserver.JobManagerActor [] [] - Exception from job dea82364-5c96-4f85-bc0b-333ebd3cf3d4: 
java.lang.NullPointerException
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:780)
    at spark.jobserver.WordCountExample$.runJob(WordCountExample.scala:32)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:219)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:20,936] INFO  apOutputTrackerMasterActor [] [akka://test/user/$f] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:20,940] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:20,941] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:20,942] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:20,943] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:20,946] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:42388/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:20,947] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:42388/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:20,951] INFO  .jobserver.JobManagerActor [] [akka://test/user/$g] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:20,954] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$g/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:20,954] INFO  k.jobserver.JobResultActor [] [akka://test/user/$g/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:20,962] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:42388/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:20,966] INFO  ache.spark.SecurityManager [] [akka://test/user/$g] - Changing view acls to: tja01
[2015-01-27 13:14:20,966] INFO  ache.spark.SecurityManager [] [akka://test/user/$g] - Changing modify acls to: tja01
[2015-01-27 13:14:20,967] INFO  ache.spark.SecurityManager [] [akka://test/user/$g] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:21,020] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$g] - Slf4jLogger started
[2015-01-27 13:14:21,026] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:21,035] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:33466]
[2015-01-27 13:14:21,036] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$g] - Successfully started service 'sparkDriver' on port 33466.
[2015-01-27 13:14:21,036] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$g] - Registering MapOutputTracker
[2015-01-27 13:14:21,037] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$g] - Registering BlockManagerMaster
[2015-01-27 13:14:21,038] INFO  k.storage.DiskBlockManager [] [akka://test/user/$g] - Created local directory at /tmp/spark-local-20150127131421-1f64
[2015-01-27 13:14:21,038] INFO  .spark.storage.MemoryStore [] [akka://test/user/$g] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:21,040] INFO  pache.spark.HttpFileServer [] [akka://test/user/$g] - HTTP File server directory is /tmp/spark-a4cc300f-faf3-4065-9763-b07ba8f68cc9
[2015-01-27 13:14:21,040] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$g] - Starting HTTP Server
[2015-01-27 13:14:21,041] INFO  clipse.jetty.server.Server [] [akka://test/user/$g] - jetty-8.1.14.v20131031
[2015-01-27 13:14:21,042] INFO  y.server.AbstractConnector [] [akka://test/user/$g] - Started SocketConnector@0.0.0.0:50655
[2015-01-27 13:14:21,042] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$g] - Successfully started service 'HTTP file server' on port 50655.
[2015-01-27 13:14:26,056] INFO  clipse.jetty.server.Server [] [akka://test/user/$g] - jetty-8.1.14.v20131031
[2015-01-27 13:14:26,066] INFO  y.server.AbstractConnector [] [akka://test/user/$g] - Started SelectChannelConnector@0.0.0.0:49996
[2015-01-27 13:14:26,073] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$g] - Successfully started service 'SparkUI' on port 49996.
[2015-01-27 13:14:26,073] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$g] - Started SparkUI at http://localhost:49996
[2015-01-27 13:14:26,107] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:33466/user/HeartbeatReceiver
[2015-01-27 13:14:26,110] INFO  .NettyBlockTransferService [] [akka://test/user/$g] - Server created on 45721
[2015-01-27 13:14:26,110] INFO  storage.BlockManagerMaster [] [akka://test/user/$g] - Trying to register BlockManager
[2015-01-27 13:14:26,111] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:45721 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 45721)
[2015-01-27 13:14:26,112] INFO  storage.BlockManagerMaster [] [akka://test/user/$g] - Registered BlockManager
[2015-01-27 13:14:26,118] INFO  .jobserver.RddManagerActor [] [akka://test/user/$g/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:26,118] INFO  .jobserver.JobManagerActor [] [akka://test/user/$g] - Loading class spark.jobserver.WordCountExample for app demo
[2015-01-27 13:14:26,119] INFO  .apache.spark.SparkContext [] [akka://test/user/$g] - Added JAR /tmp/InMemoryDAO8452775048902259754.jar at http://10.1.3.213:50655/jars/InMemoryDAO8452775048902259754.jar with timestamp 1422364466119
[2015-01-27 13:14:26,123] INFO  util.ContextURLClassLoader [] [akka://test/user/$g] - Added URL file:/tmp/InMemoryDAO8452775048902259754.jar to ContextURLClassLoader
[2015-01-27 13:14:26,123] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$g] - Loading object spark.jobserver.WordCountExample$ using loader spark.jobserver.util.ContextURLClassLoader@4eb7a276
[2015-01-27 13:14:26,126] INFO  .jobserver.JobManagerActor [] [akka://test/user/$g] - Starting Spark job f22f07f7-1c5c-4c8e-848b-513d8ba83a21 [spark.jobserver.WordCountExample]...
[2015-01-27 13:14:26,126] INFO  k.jobserver.JobResultActor [] [akka://test/user/$g/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID f22f07f7-1c5c-4c8e-848b-513d8ba83a21
[2015-01-27 13:14:26,126] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:26,127] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:26,131] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:26,132] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:26,132] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:26,143] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:26,143] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:26,143] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:26,144] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:26,145] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:26,146] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:26,147] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:26,147] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:26,148] INFO  .apache.spark.SparkContext [] [] - Starting job: collect at WordCountExample.scala:32
[2015-01-27 13:14:26,168] INFO  ark.scheduler.DAGScheduler [] [] - Registering RDD 1 (map at WordCountExample.scala:32)
[2015-01-27 13:14:26,170] INFO  ark.scheduler.DAGScheduler [] [] - Got job 0 (collect at WordCountExample.scala:32) with 4 output partitions (allowLocal=false)
[2015-01-27 13:14:26,171] INFO  ark.scheduler.DAGScheduler [] [] - Final stage: Stage 1(collect at WordCountExample.scala:32)
[2015-01-27 13:14:26,172] INFO  ark.scheduler.DAGScheduler [] [] - Parents of final stage: List(Stage 0)
[2015-01-27 13:14:26,177] INFO  ark.scheduler.DAGScheduler [] [] - Missing parents: List(Stage 0)
[2015-01-27 13:14:26,198] INFO  ark.scheduler.DAGScheduler [] [] - Submitting Stage 0 (MappedRDD[1] at map at WordCountExample.scala:32), which has no missing parents
[2015-01-27 13:14:26,198] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:49996
[2015-01-27 13:14:26,199] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:26,351] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(2384) called with curMem=0, maxMem=714866688
[2015-01-27 13:14:26,354] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0 stored as values in memory (estimated size 2.3 KB, free 681.7 MB)
[2015-01-27 13:14:26,398] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(1716) called with curMem=2384, maxMem=714866688
[2015-01-27 13:14:26,398] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0_piece0 stored as bytes in memory (estimated size 1716.0 B, free 681.7 MB)
[2015-01-27 13:14:26,401] INFO  k.storage.BlockManagerInfo [] [] - Added broadcast_0_piece0 in memory on localhost:45721 (size: 1716.0 B, free: 681.7 MB)
[2015-01-27 13:14:26,402] INFO  storage.BlockManagerMaster [] [] - Updated info of block broadcast_0_piece0
[2015-01-27 13:14:26,404] INFO  .apache.spark.SparkContext [] [] - Created broadcast 0 from broadcast at DAGScheduler.scala:838
[2015-01-27 13:14:26,425] INFO  ark.scheduler.DAGScheduler [] [] - Submitting 4 missing tasks from Stage 0 (MappedRDD[1] at map at WordCountExample.scala:32)
[2015-01-27 13:14:26,427] INFO  cheduler.TaskSchedulerImpl [] [] - Adding task set 0.0 with 4 tasks
[2015-01-27 13:14:26,451] INFO  ark.scheduler.DAGScheduler [] [] - Job 0 failed: collect at WordCountExample.scala:32, took 0.301746 s
[2015-01-27 13:14:26,451] WARN  .jobserver.JobManagerActor [] [] - Exception from job f22f07f7-1c5c-4c8e-848b-513d8ba83a21: 
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:702)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:701)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:701)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1428)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundPostStop(DAGScheduler.scala:1375)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:241)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:26,476] INFO  k.scheduler.TaskSetManager [] [] - Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1330 bytes)
[2015-01-27 13:14:26,480] INFO  k.scheduler.TaskSetManager [] [] - Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1337 bytes)
[2015-01-27 13:14:26,481] INFO  k.scheduler.TaskSetManager [] [] - Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1340 bytes)
[2015-01-27 13:14:26,482] INFO  k.scheduler.TaskSetManager [] [] - Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1337 bytes)
[2015-01-27 13:14:26,494] ERROR ka.actor.OneForOneStrategy [] [akka://sparkDriver/user/LocalBackendActor] - Task org.apache.spark.executor.Executor$TaskRunner@656a0389 rejected from java.util.concurrent.ThreadPoolExecutor@130e4b63[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@656a0389 rejected from java.util.concurrent.ThreadPoolExecutor@130e4b63[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.executor.Executor.launchTask(Executor.scala:128)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$reviveOffers$1.apply(LocalBackend.scala:78)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$reviveOffers$1.apply(LocalBackend.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.local.LocalActor.reviveOffers(LocalBackend.scala:76)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:58)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:26,495] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:33466/user/HeartbeatReceiver
[2015-01-27 13:14:26,503] ERROR ka.actor.OneForOneStrategy [] [akka://sparkDriver/user/LocalBackendActor] - Actor not found for: ActorSelection[Anchor(akka://sparkDriver/), Path(/user/HeartbeatReceiver)]
akka.actor.PostRestartException: exception post restart (class java.util.concurrent.RejectedExecutionException)
    at akka.actor.dungeon.FaultHandling$$anonfun$6.apply(FaultHandling.scala:249)
    at akka.actor.dungeon.FaultHandling$$anonfun$6.apply(FaultHandling.scala:247)
    at akka.actor.dungeon.FaultHandling$$anonfun$handleNonFatalOrInterruptedException$1.applyOrElse(FaultHandling.scala:302)
    at akka.actor.dungeon.FaultHandling$$anonfun$handleNonFatalOrInterruptedException$1.applyOrElse(FaultHandling.scala:297)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at akka.actor.dungeon.FaultHandling$class.finishRecreate(FaultHandling.scala:247)
    at akka.actor.dungeon.FaultHandling$class.faultRecreate(FaultHandling.scala:76)
    at akka.actor.ActorCell.faultRecreate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:459)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka://sparkDriver/), Path(/user/HeartbeatReceiver)]
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
    at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.scala$concurrent$impl$Promise$DefaultPromise$$dispatchOrAddCallback(Promise.scala:280)
    at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:270)
    at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:63)
    at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:80)
    at org.apache.spark.util.AkkaUtils$.makeDriverRef(AkkaUtils.scala:213)
    at org.apache.spark.executor.Executor.startDriverHeartbeater(Executor.scala:369)
    at org.apache.spark.executor.Executor.<init>(Executor.scala:122)
    at org.apache.spark.scheduler.local.LocalActor.<init>(LocalBackend.scala:53)
    at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:96)
    at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:96)
    at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:343)
    at akka.actor.Props.newActor(Props.scala:252)
    at akka.actor.ActorCell.newActor(ActorCell.scala:552)
    at akka.actor.dungeon.FaultHandling$class.finishRecreate(FaultHandling.scala:234)
    ... 11 more
[2015-01-27 13:14:27,261] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:27,268] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:27,268] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:27,269] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:27,270] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:27,274] INFO  .jobserver.JobManagerActor [] [akka://test/user/$h] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:27,274] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33466/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:27,275] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33466/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:27,279] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$h/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:27,280] INFO  k.jobserver.JobResultActor [] [akka://test/user/$h/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:27,287] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33466/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:27,290] INFO  ache.spark.SecurityManager [] [akka://test/user/$h] - Changing view acls to: tja01
[2015-01-27 13:14:27,290] INFO  ache.spark.SecurityManager [] [akka://test/user/$h] - Changing modify acls to: tja01
[2015-01-27 13:14:27,290] INFO  ache.spark.SecurityManager [] [akka://test/user/$h] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:27,378] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$h] - Slf4jLogger started
[2015-01-27 13:14:27,385] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:27,399] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:36349]
[2015-01-27 13:14:27,400] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$h] - Successfully started service 'sparkDriver' on port 36349.
[2015-01-27 13:14:27,401] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$h] - Registering MapOutputTracker
[2015-01-27 13:14:27,402] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$h] - Registering BlockManagerMaster
[2015-01-27 13:14:27,403] INFO  k.storage.DiskBlockManager [] [akka://test/user/$h] - Created local directory at /tmp/spark-local-20150127131427-db39
[2015-01-27 13:14:27,403] INFO  .spark.storage.MemoryStore [] [akka://test/user/$h] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:27,404] INFO  pache.spark.HttpFileServer [] [akka://test/user/$h] - HTTP File server directory is /tmp/spark-bc54f317-76a1-4fc5-b3b3-7681d6a8010a
[2015-01-27 13:14:27,404] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$h] - Starting HTTP Server
[2015-01-27 13:14:27,405] INFO  clipse.jetty.server.Server [] [akka://test/user/$h] - jetty-8.1.14.v20131031
[2015-01-27 13:14:27,410] INFO  y.server.AbstractConnector [] [akka://test/user/$h] - Started SocketConnector@0.0.0.0:42366
[2015-01-27 13:14:27,410] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$h] - Successfully started service 'HTTP file server' on port 42366.
[2015-01-27 13:14:32,426] INFO  clipse.jetty.server.Server [] [akka://test/user/$h] - jetty-8.1.14.v20131031
[2015-01-27 13:14:32,440] INFO  y.server.AbstractConnector [] [akka://test/user/$h] - Started SelectChannelConnector@0.0.0.0:51422
[2015-01-27 13:14:32,440] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$h] - Successfully started service 'SparkUI' on port 51422.
[2015-01-27 13:14:32,441] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$h] - Started SparkUI at http://localhost:51422
[2015-01-27 13:14:32,481] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:36349/user/HeartbeatReceiver
[2015-01-27 13:14:32,511] INFO  .NettyBlockTransferService [] [akka://test/user/$h] - Server created on 56071
[2015-01-27 13:14:32,512] INFO  storage.BlockManagerMaster [] [akka://test/user/$h] - Trying to register BlockManager
[2015-01-27 13:14:32,512] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:56071 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 56071)
[2015-01-27 13:14:32,512] INFO  storage.BlockManagerMaster [] [akka://test/user/$h] - Registered BlockManager
[2015-01-27 13:14:32,518] INFO  .jobserver.RddManagerActor [] [akka://test/user/$h/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:32,518] INFO  .jobserver.JobManagerActor [] [akka://test/user/$h] - Loading class spark.jobserver.WordCountExample for app demo
[2015-01-27 13:14:32,519] INFO  .apache.spark.SparkContext [] [akka://test/user/$h] - Added JAR /tmp/InMemoryDAO9098106076186307131.jar at http://10.1.3.213:42366/jars/InMemoryDAO9098106076186307131.jar with timestamp 1422364472519
[2015-01-27 13:14:32,523] INFO  util.ContextURLClassLoader [] [akka://test/user/$h] - Added URL file:/tmp/InMemoryDAO9098106076186307131.jar to ContextURLClassLoader
[2015-01-27 13:14:32,523] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$h] - Loading object spark.jobserver.WordCountExample$ using loader spark.jobserver.util.ContextURLClassLoader@45ea5f99
[2015-01-27 13:14:32,525] INFO  .jobserver.JobManagerActor [] [akka://test/user/$h] - Starting Spark job 251b8900-7e2b-48df-a14f-0356d7fbe72d [spark.jobserver.WordCountExample]...
[2015-01-27 13:14:32,525] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:32,525] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:32,526] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:32,529] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:32,529] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:32,540] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:32,540] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:32,541] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:32,543] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:32,543] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:32,544] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:32,544] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:32,544] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:32,545] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:32,545] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:32,545] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:32,546] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:32,546] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:32,546] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:32,546] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:32,547] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:32,547] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:32,547] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:32,548] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:32,548] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:32,549] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:32,549] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:32,549] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:32,550] INFO  .apache.spark.SparkContext [] [] - Starting job: collect at WordCountExample.scala:32
[2015-01-27 13:14:32,551] INFO  ark.scheduler.DAGScheduler [] [] - Registering RDD 1 (map at WordCountExample.scala:32)
[2015-01-27 13:14:32,552] INFO  ark.scheduler.DAGScheduler [] [] - Got job 0 (collect at WordCountExample.scala:32) with 4 output partitions (allowLocal=false)
[2015-01-27 13:14:32,552] INFO  ark.scheduler.DAGScheduler [] [] - Final stage: Stage 1(collect at WordCountExample.scala:32)
[2015-01-27 13:14:32,552] INFO  ark.scheduler.DAGScheduler [] [] - Parents of final stage: List(Stage 0)
[2015-01-27 13:14:32,554] INFO  ark.scheduler.DAGScheduler [] [] - Missing parents: List(Stage 0)
[2015-01-27 13:14:32,556] INFO  ark.scheduler.DAGScheduler [] [] - Submitting Stage 0 (MappedRDD[1] at map at WordCountExample.scala:32), which has no missing parents
[2015-01-27 13:14:32,559] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(2384) called with curMem=0, maxMem=714866688
[2015-01-27 13:14:32,560] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0 stored as values in memory (estimated size 2.3 KB, free 681.7 MB)
[2015-01-27 13:14:32,562] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(1716) called with curMem=2384, maxMem=714866688
[2015-01-27 13:14:32,563] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0_piece0 stored as bytes in memory (estimated size 1716.0 B, free 681.7 MB)
[2015-01-27 13:14:32,564] INFO  k.storage.BlockManagerInfo [] [] - Added broadcast_0_piece0 in memory on localhost:56071 (size: 1716.0 B, free: 681.7 MB)
[2015-01-27 13:14:32,565] INFO  storage.BlockManagerMaster [] [] - Updated info of block broadcast_0_piece0
[2015-01-27 13:14:32,566] INFO  .apache.spark.SparkContext [] [] - Created broadcast 0 from broadcast at DAGScheduler.scala:838
[2015-01-27 13:14:32,568] INFO  ark.scheduler.DAGScheduler [] [] - Submitting 4 missing tasks from Stage 0 (MappedRDD[1] at map at WordCountExample.scala:32)
[2015-01-27 13:14:32,568] INFO  cheduler.TaskSchedulerImpl [] [] - Adding task set 0.0 with 4 tasks
[2015-01-27 13:14:32,572] INFO  k.scheduler.TaskSetManager [] [] - Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1330 bytes)
[2015-01-27 13:14:32,573] INFO  k.scheduler.TaskSetManager [] [] - Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1337 bytes)
[2015-01-27 13:14:32,574] INFO  k.scheduler.TaskSetManager [] [] - Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1340 bytes)
[2015-01-27 13:14:32,576] INFO  k.scheduler.TaskSetManager [] [] - Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1337 bytes)
[2015-01-27 13:14:32,582] INFO  he.spark.executor.Executor [] [] - Running task 0.0 in stage 0.0 (TID 0)
[2015-01-27 13:14:32,583] INFO  he.spark.executor.Executor [] [] - Running task 1.0 in stage 0.0 (TID 1)
[2015-01-27 13:14:32,583] INFO  he.spark.executor.Executor [] [] - Running task 2.0 in stage 0.0 (TID 2)
[2015-01-27 13:14:32,585] INFO  he.spark.executor.Executor [] [] - Running task 3.0 in stage 0.0 (TID 3)
[2015-01-27 13:14:32,592] INFO  he.spark.executor.Executor [] [] - Fetching http://10.1.3.213:42366/jars/InMemoryDAO9098106076186307131.jar with timestamp 1422364472519
[2015-01-27 13:14:32,600] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:51422
[2015-01-27 13:14:32,601] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:32,602] INFO  ark.scheduler.DAGScheduler [] [] - Job 0 failed: collect at WordCountExample.scala:32, took 0.051083 s
[2015-01-27 13:14:32,602] WARN  .jobserver.JobManagerActor [] [] - Exception from job 251b8900-7e2b-48df-a14f-0356d7fbe72d: 
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:702)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:701)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:701)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1428)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundPostStop(DAGScheduler.scala:1375)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:32,624] INFO  rg.apache.spark.util.Utils [] [] - Fetching http://10.1.3.213:42366/jars/InMemoryDAO9098106076186307131.jar to /tmp/fetchFileTemp167125581953179899.tmp
[2015-01-27 13:14:32,732] INFO  he.spark.executor.Executor [] [] - Adding file:/tmp/spark-1e3520d7-33c3-4152-a51f-fe5cbb931e0c/InMemoryDAO9098106076186307131.jar to class loader
[2015-01-27 13:14:32,832] INFO  he.spark.executor.Executor [] [] - Finished task 2.0 in stage 0.0 (TID 2). 840 bytes result sent to driver
[2015-01-27 13:14:32,832] INFO  he.spark.executor.Executor [] [] - Finished task 3.0 in stage 0.0 (TID 3). 840 bytes result sent to driver
[2015-01-27 13:14:32,832] INFO  he.spark.executor.Executor [] [] - Finished task 0.0 in stage 0.0 (TID 0). 840 bytes result sent to driver
[2015-01-27 13:14:32,832] INFO  he.spark.executor.Executor [] [] - Finished task 1.0 in stage 0.0 (TID 1). 840 bytes result sent to driver
[2015-01-27 13:14:32,834] ERROR cheduler.TaskSchedulerImpl [] [] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@5df7a803 rejected from java.util.concurrent.ThreadPoolExecutor@2e378d27[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:32,837] ERROR cheduler.TaskSchedulerImpl [] [] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@4ac62bc0 rejected from java.util.concurrent.ThreadPoolExecutor@2e378d27[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:32,838] ERROR cheduler.TaskSchedulerImpl [] [] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@66ce9af7 rejected from java.util.concurrent.ThreadPoolExecutor@2e378d27[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:32,839] ERROR cheduler.TaskSchedulerImpl [] [] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@1f9bbca8 rejected from java.util.concurrent.ThreadPoolExecutor@2e378d27[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:33,654] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:33,660] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:33,660] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:33,661] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:33,665] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:33,665] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36349/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:33,666] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36349/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:33,668] INFO  .jobserver.JobManagerActor [] [akka://test/user/$i] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:33,669] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$i/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:33,670] INFO  k.jobserver.JobResultActor [] [akka://test/user/$i/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:33,678] INFO  ache.spark.SecurityManager [] [akka://test/user/$i] - Changing view acls to: tja01
[2015-01-27 13:14:33,679] INFO  ache.spark.SecurityManager [] [akka://test/user/$i] - Changing modify acls to: tja01
[2015-01-27 13:14:33,679] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36349/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:33,679] INFO  ache.spark.SecurityManager [] [akka://test/user/$i] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:33,738] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$i] - Slf4jLogger started
[2015-01-27 13:14:33,745] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:33,758] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:33782]
[2015-01-27 13:14:33,759] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$i] - Successfully started service 'sparkDriver' on port 33782.
[2015-01-27 13:14:33,763] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$i] - Registering MapOutputTracker
[2015-01-27 13:14:33,764] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$i] - Registering BlockManagerMaster
[2015-01-27 13:14:33,765] INFO  k.storage.DiskBlockManager [] [akka://test/user/$i] - Created local directory at /tmp/spark-local-20150127131433-710d
[2015-01-27 13:14:33,765] INFO  .spark.storage.MemoryStore [] [akka://test/user/$i] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:33,767] INFO  pache.spark.HttpFileServer [] [akka://test/user/$i] - HTTP File server directory is /tmp/spark-caa38a12-b787-41e3-aaf5-fdbdc1d8e77d
[2015-01-27 13:14:33,767] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$i] - Starting HTTP Server
[2015-01-27 13:14:33,768] INFO  clipse.jetty.server.Server [] [akka://test/user/$i] - jetty-8.1.14.v20131031
[2015-01-27 13:14:33,795] INFO  y.server.AbstractConnector [] [akka://test/user/$i] - Started SocketConnector@0.0.0.0:36798
[2015-01-27 13:14:33,795] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$i] - Successfully started service 'HTTP file server' on port 36798.
[2015-01-27 13:14:38,843] INFO  clipse.jetty.server.Server [] [akka://test/user/$i] - jetty-8.1.14.v20131031
[2015-01-27 13:14:38,854] INFO  y.server.AbstractConnector [] [akka://test/user/$i] - Started SelectChannelConnector@0.0.0.0:44998
[2015-01-27 13:14:38,854] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$i] - Successfully started service 'SparkUI' on port 44998.
[2015-01-27 13:14:38,854] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$i] - Started SparkUI at http://localhost:44998
[2015-01-27 13:14:38,873] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:33782/user/HeartbeatReceiver
[2015-01-27 13:14:38,875] INFO  .NettyBlockTransferService [] [akka://test/user/$i] - Server created on 38722
[2015-01-27 13:14:38,875] INFO  storage.BlockManagerMaster [] [akka://test/user/$i] - Trying to register BlockManager
[2015-01-27 13:14:38,875] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:38722 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 38722)
[2015-01-27 13:14:38,876] INFO  storage.BlockManagerMaster [] [akka://test/user/$i] - Registered BlockManager
[2015-01-27 13:14:38,880] INFO  .jobserver.RddManagerActor [] [akka://test/user/$i/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:38,881] INFO  .jobserver.JobManagerActor [] [akka://test/user/$i] - Loading class spark.jobserver.MyErrorJob for app demo
[2015-01-27 13:14:38,881] INFO  .apache.spark.SparkContext [] [akka://test/user/$i] - Added JAR /tmp/InMemoryDAO7796836459895580557.jar at http://10.1.3.213:36798/jars/InMemoryDAO7796836459895580557.jar with timestamp 1422364478881
[2015-01-27 13:14:38,884] INFO  util.ContextURLClassLoader [] [akka://test/user/$i] - Added URL file:/tmp/InMemoryDAO7796836459895580557.jar to ContextURLClassLoader
[2015-01-27 13:14:38,884] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$i] - Loading object spark.jobserver.MyErrorJob$ using loader spark.jobserver.util.ContextURLClassLoader@7af89f7a
[2015-01-27 13:14:38,885] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$i] - Loading class spark.jobserver.MyErrorJob using loader spark.jobserver.util.ContextURLClassLoader@7af89f7a
[2015-01-27 13:14:38,887] INFO  .jobserver.JobManagerActor [] [akka://test/user/$i] - Starting Spark job 6950182c-d954-4168-bf4f-17dcb284ecbb [spark.jobserver.MyErrorJob]...
[2015-01-27 13:14:38,887] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:38,888] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:38,888] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:38,888] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:38,888] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:38,890] WARN  .jobserver.JobManagerActor [] [] - Exception from job 6950182c-d954-4168-bf4f-17dcb284ecbb: 
java.lang.IllegalArgumentException: Foobar
    at spark.jobserver.MyErrorJob.runJob(SparkTestJobs.scala:14)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:219)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:38,899] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:38,900] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:38,900] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:38,901] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:38,901] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:38,902] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:38,902] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:38,903] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:38,903] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:38,904] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:38,904] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:38,904] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:38,905] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:38,905] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:38,906] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:38,906] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:38,907] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:38,907] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:38,908] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:38,908] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:38,909] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:38,909] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:38,910] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:38,961] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:44998
[2015-01-27 13:14:38,962] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:40,015] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:40,020] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:40,021] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:40,022] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:40,024] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:40,026] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33782/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:40,027] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33782/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:40,028] INFO  .jobserver.JobManagerActor [] [akka://test/user/$j] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:40,029] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$j/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:40,029] INFO  k.jobserver.JobResultActor [] [akka://test/user/$j/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:40,035] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:33782/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:40,036] INFO  ache.spark.SecurityManager [] [akka://test/user/$j] - Changing view acls to: tja01
[2015-01-27 13:14:40,036] INFO  ache.spark.SecurityManager [] [akka://test/user/$j] - Changing modify acls to: tja01
[2015-01-27 13:14:40,036] INFO  ache.spark.SecurityManager [] [akka://test/user/$j] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:40,079] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$j] - Slf4jLogger started
[2015-01-27 13:14:40,085] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:40,093] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:36268]
[2015-01-27 13:14:40,094] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$j] - Successfully started service 'sparkDriver' on port 36268.
[2015-01-27 13:14:40,094] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$j] - Registering MapOutputTracker
[2015-01-27 13:14:40,095] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$j] - Registering BlockManagerMaster
[2015-01-27 13:14:40,096] INFO  k.storage.DiskBlockManager [] [akka://test/user/$j] - Created local directory at /tmp/spark-local-20150127131440-ca6f
[2015-01-27 13:14:40,096] INFO  .spark.storage.MemoryStore [] [akka://test/user/$j] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:40,097] INFO  pache.spark.HttpFileServer [] [akka://test/user/$j] - HTTP File server directory is /tmp/spark-f9795fb2-7372-49d7-95d7-4dec42b9af62
[2015-01-27 13:14:40,097] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$j] - Starting HTTP Server
[2015-01-27 13:14:40,098] INFO  clipse.jetty.server.Server [] [akka://test/user/$j] - jetty-8.1.14.v20131031
[2015-01-27 13:14:40,099] INFO  y.server.AbstractConnector [] [akka://test/user/$j] - Started SocketConnector@0.0.0.0:43259
[2015-01-27 13:14:40,099] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$j] - Successfully started service 'HTTP file server' on port 43259.
[2015-01-27 13:14:45,111] INFO  clipse.jetty.server.Server [] [akka://test/user/$j] - jetty-8.1.14.v20131031
[2015-01-27 13:14:45,117] INFO  y.server.AbstractConnector [] [akka://test/user/$j] - Started SelectChannelConnector@0.0.0.0:35186
[2015-01-27 13:14:45,117] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$j] - Successfully started service 'SparkUI' on port 35186.
[2015-01-27 13:14:45,117] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$j] - Started SparkUI at http://localhost:35186
[2015-01-27 13:14:45,138] INFO  pache.spark.util.AkkaUtils [] [akka://test/user/$j] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:36268/user/HeartbeatReceiver
[2015-01-27 13:14:45,143] INFO  .NettyBlockTransferService [] [akka://test/user/$j] - Server created on 41968
[2015-01-27 13:14:45,143] INFO  storage.BlockManagerMaster [] [akka://test/user/$j] - Trying to register BlockManager
[2015-01-27 13:14:45,143] INFO  ge.BlockManagerMasterActor [] [akka://test/user/$j] - Registering block manager localhost:41968 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 41968)
[2015-01-27 13:14:45,144] INFO  storage.BlockManagerMaster [] [akka://test/user/$j] - Registered BlockManager
[2015-01-27 13:14:45,148] INFO  .jobserver.RddManagerActor [] [akka://test/user/$j/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:45,148] INFO  .jobserver.JobManagerActor [] [akka://test/user/$j] - Loading class spark.jobserver.ConfigCheckerJob for app demo
[2015-01-27 13:14:45,149] INFO  .apache.spark.SparkContext [] [akka://test/user/$j] - Added JAR /tmp/InMemoryDAO9202477903903725816.jar at http://10.1.3.213:43259/jars/InMemoryDAO9202477903903725816.jar with timestamp 1422364485149
[2015-01-27 13:14:45,152] INFO  util.ContextURLClassLoader [] [akka://test/user/$j] - Added URL file:/tmp/InMemoryDAO9202477903903725816.jar to ContextURLClassLoader
[2015-01-27 13:14:45,152] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$j] - Loading object spark.jobserver.ConfigCheckerJob$ using loader spark.jobserver.util.ContextURLClassLoader@600b9fdc
[2015-01-27 13:14:45,153] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$j] - Loading class spark.jobserver.ConfigCheckerJob using loader spark.jobserver.util.ContextURLClassLoader@600b9fdc
[2015-01-27 13:14:45,154] INFO  .jobserver.JobManagerActor [] [akka://test/user/$j] - Starting Spark job 4c1e37e3-fee8-4603-bdf7-e1607ec0c5c0 [spark.jobserver.ConfigCheckerJob]...
[2015-01-27 13:14:45,154] INFO  k.jobserver.JobResultActor [] [akka://test/user/$j/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 4c1e37e3-fee8-4603-bdf7-e1607ec0c5c0
[2015-01-27 13:14:45,154] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:45,155] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:45,155] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:45,156] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$j/status-actor] - Job 4c1e37e3-fee8-4603-bdf7-e1607ec0c5c0 started
[2015-01-27 13:14:45,159] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:45,160] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:45,170] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:45,171] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:45,172] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:45,173] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:45,224] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:35186
[2015-01-27 13:14:45,224] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:46,277] INFO  apOutputTrackerMasterActor [] [akka://test/user/$j] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:46,281] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:46,281] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:46,282] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:46,283] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:46,284] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36268/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:46,285] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36268/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:46,287] INFO  .jobserver.JobManagerActor [] [akka://test/user/$k] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:46,288] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$k/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:46,288] INFO  k.jobserver.JobResultActor [] [akka://test/user/$k/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:46,294] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:36268/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:46,296] INFO  ache.spark.SecurityManager [] [akka://test/user/$k] - Changing view acls to: tja01
[2015-01-27 13:14:46,296] INFO  ache.spark.SecurityManager [] [akka://test/user/$k] - Changing modify acls to: tja01
[2015-01-27 13:14:46,296] INFO  ache.spark.SecurityManager [] [akka://test/user/$k] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:46,334] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$k] - Slf4jLogger started
[2015-01-27 13:14:46,342] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:46,357] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:57257]
[2015-01-27 13:14:46,358] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$k] - Successfully started service 'sparkDriver' on port 57257.
[2015-01-27 13:14:46,358] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$k] - Registering MapOutputTracker
[2015-01-27 13:14:46,359] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$k] - Registering BlockManagerMaster
[2015-01-27 13:14:46,361] INFO  k.storage.DiskBlockManager [] [akka://test/user/$k] - Created local directory at /tmp/spark-local-20150127131446-cb64
[2015-01-27 13:14:46,361] INFO  .spark.storage.MemoryStore [] [akka://test/user/$k] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:46,362] INFO  pache.spark.HttpFileServer [] [akka://test/user/$k] - HTTP File server directory is /tmp/spark-2b07d776-fab3-47d9-8173-03ca0decbee5
[2015-01-27 13:14:46,362] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$k] - Starting HTTP Server
[2015-01-27 13:14:46,363] INFO  clipse.jetty.server.Server [] [akka://test/user/$k] - jetty-8.1.14.v20131031
[2015-01-27 13:14:46,372] INFO  y.server.AbstractConnector [] [akka://test/user/$k] - Started SocketConnector@0.0.0.0:57449
[2015-01-27 13:14:46,372] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$k] - Successfully started service 'HTTP file server' on port 57449.
[2015-01-27 13:14:51,431] INFO  clipse.jetty.server.Server [] [akka://test/user/$k] - jetty-8.1.14.v20131031
[2015-01-27 13:14:51,448] INFO  y.server.AbstractConnector [] [akka://test/user/$k] - Started SelectChannelConnector@0.0.0.0:38517
[2015-01-27 13:14:51,448] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$k] - Successfully started service 'SparkUI' on port 38517.
[2015-01-27 13:14:51,448] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$k] - Started SparkUI at http://localhost:38517
[2015-01-27 13:14:51,491] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:57257/user/HeartbeatReceiver
[2015-01-27 13:14:51,499] INFO  .NettyBlockTransferService [] [akka://test/user/$k] - Server created on 40291
[2015-01-27 13:14:51,500] INFO  storage.BlockManagerMaster [] [akka://test/user/$k] - Trying to register BlockManager
[2015-01-27 13:14:51,500] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:40291 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 40291)
[2015-01-27 13:14:51,501] INFO  storage.BlockManagerMaster [] [akka://test/user/$k] - Registered BlockManager
[2015-01-27 13:14:51,510] INFO  .jobserver.RddManagerActor [] [akka://test/user/$k/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:51,510] INFO  .jobserver.JobManagerActor [] [akka://test/user/$k] - Loading class spark.jobserver.ZookeeperJob for app demo
[2015-01-27 13:14:51,511] INFO  .apache.spark.SparkContext [] [akka://test/user/$k] - Added JAR /tmp/InMemoryDAO575330615886762321.jar at http://10.1.3.213:57449/jars/InMemoryDAO575330615886762321.jar with timestamp 1422364491511
[2015-01-27 13:14:51,514] INFO  util.ContextURLClassLoader [] [akka://test/user/$k] - Added URL file:/tmp/InMemoryDAO575330615886762321.jar to ContextURLClassLoader
[2015-01-27 13:14:51,514] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$k] - Loading object spark.jobserver.ZookeeperJob$ using loader spark.jobserver.util.ContextURLClassLoader@3129517
[2015-01-27 13:14:51,515] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$k] - Loading class spark.jobserver.ZookeeperJob using loader spark.jobserver.util.ContextURLClassLoader@3129517
[2015-01-27 13:14:51,517] INFO  .jobserver.JobManagerActor [] [akka://test/user/$k] - Starting Spark job 1e0b5ae7-83e6-4f73-bda1-5f5ffa34f537 [spark.jobserver.ZookeeperJob]...
[2015-01-27 13:14:51,517] INFO  k.jobserver.JobResultActor [] [akka://test/user/$k/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 1e0b5ae7-83e6-4f73-bda1-5f5ffa34f537
[2015-01-27 13:14:51,517] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:51,517] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:51,517] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:51,518] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:51,519] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:51,529] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:51,530] INFO  .apache.spark.SparkContext [] [] - Starting job: collect at SparkTestJobs.scala:74
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:51,530] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:51,531] INFO  ark.scheduler.DAGScheduler [] [] - Got job 0 (collect at SparkTestJobs.scala:74) with 4 output partitions (allowLocal=false)
[2015-01-27 13:14:51,531] INFO  ark.scheduler.DAGScheduler [] [] - Final stage: Stage 0(collect at SparkTestJobs.scala:74)
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:51,531] INFO  ark.scheduler.DAGScheduler [] [] - Parents of final stage: List()
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:51,531] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:51,532] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:51,532] INFO  ark.scheduler.DAGScheduler [] [] - Missing parents: List()
[2015-01-27 13:14:51,533] INFO  ark.scheduler.DAGScheduler [] [] - Submitting Stage 0 (FilteredRDD[1] at filter at SparkTestJobs.scala:74), which has no missing parents
[2015-01-27 13:14:51,535] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(1680) called with curMem=0, maxMem=714866688
[2015-01-27 13:14:51,536] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0 stored as values in memory (estimated size 1680.0 B, free 681.7 MB)
[2015-01-27 13:14:51,537] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(1219) called with curMem=1680, maxMem=714866688
[2015-01-27 13:14:51,538] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0_piece0 stored as bytes in memory (estimated size 1219.0 B, free 681.7 MB)
[2015-01-27 13:14:51,539] INFO  k.storage.BlockManagerInfo [] [] - Added broadcast_0_piece0 in memory on localhost:40291 (size: 1219.0 B, free: 681.7 MB)
[2015-01-27 13:14:51,539] INFO  storage.BlockManagerMaster [] [] - Updated info of block broadcast_0_piece0
[2015-01-27 13:14:51,540] INFO  .apache.spark.SparkContext [] [] - Created broadcast 0 from broadcast at DAGScheduler.scala:838
[2015-01-27 13:14:51,543] INFO  ark.scheduler.DAGScheduler [] [] - Submitting 4 missing tasks from Stage 0 (FilteredRDD[1] at filter at SparkTestJobs.scala:74)
[2015-01-27 13:14:51,544] INFO  cheduler.TaskSchedulerImpl [] [] - Adding task set 0.0 with 4 tasks
[2015-01-27 13:14:51,545] INFO  k.scheduler.TaskSetManager [] [] - Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1340 bytes)
[2015-01-27 13:14:51,546] INFO  k.scheduler.TaskSetManager [] [] - Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1397 bytes)
[2015-01-27 13:14:51,547] INFO  k.scheduler.TaskSetManager [] [] - Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1397 bytes)
[2015-01-27 13:14:51,548] INFO  k.scheduler.TaskSetManager [] [] - Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1399 bytes)
[2015-01-27 13:14:51,548] INFO  he.spark.executor.Executor [] [] - Running task 0.0 in stage 0.0 (TID 0)
[2015-01-27 13:14:51,548] INFO  he.spark.executor.Executor [] [] - Running task 1.0 in stage 0.0 (TID 1)
[2015-01-27 13:14:51,549] INFO  he.spark.executor.Executor [] [] - Fetching http://10.1.3.213:57449/jars/InMemoryDAO575330615886762321.jar with timestamp 1422364491511
[2015-01-27 13:14:51,549] INFO  he.spark.executor.Executor [] [] - Running task 2.0 in stage 0.0 (TID 2)
[2015-01-27 13:14:51,550] INFO  he.spark.executor.Executor [] [] - Running task 3.0 in stage 0.0 (TID 3)
[2015-01-27 13:14:51,563] INFO  rg.apache.spark.util.Utils [] [] - Fetching http://10.1.3.213:57449/jars/InMemoryDAO575330615886762321.jar to /tmp/fetchFileTemp3338585275882279297.tmp
[2015-01-27 13:14:51,570] INFO  he.spark.executor.Executor [] [] - Adding file:/tmp/spark-10e2421c-dfd6-4e05-b5dd-8dc0d291dfb3/InMemoryDAO575330615886762321.jar to class loader
[2015-01-27 13:14:51,578] INFO  he.spark.executor.Executor [] [] - Finished task 2.0 in stage 0.0 (TID 2). 617 bytes result sent to driver
[2015-01-27 13:14:51,579] INFO  he.spark.executor.Executor [] [] - Finished task 1.0 in stage 0.0 (TID 1). 617 bytes result sent to driver
[2015-01-27 13:14:51,580] INFO  he.spark.executor.Executor [] [] - Finished task 0.0 in stage 0.0 (TID 0). 617 bytes result sent to driver
[2015-01-27 13:14:51,580] INFO  he.spark.executor.Executor [] [] - Finished task 3.0 in stage 0.0 (TID 3). 692 bytes result sent to driver
[2015-01-27 13:14:51,584] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:38517
[2015-01-27 13:14:51,584] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:51,585] INFO  ark.scheduler.DAGScheduler [] [] - Job 0 failed: collect at SparkTestJobs.scala:74, took 0.054222 s
[2015-01-27 13:14:51,585] WARN  .jobserver.JobManagerActor [] [] - Exception from job 1e0b5ae7-83e6-4f73-bda1-5f5ffa34f537: 
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:702)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:701)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:701)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1428)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundPostStop(DAGScheduler.scala:1375)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:14:51,588] INFO  k.scheduler.TaskSetManager [] [] - Finished task 3.0 in stage 0.0 (TID 3) in 35 ms on localhost (1/4)
[2015-01-27 13:14:51,588] INFO  k.scheduler.TaskSetManager [] [] - Finished task 1.0 in stage 0.0 (TID 1) in 43 ms on localhost (2/4)
[2015-01-27 13:14:51,589] INFO  k.scheduler.TaskSetManager [] [] - Finished task 2.0 in stage 0.0 (TID 2) in 43 ms on localhost (3/4)
[2015-01-27 13:14:51,590] INFO  k.scheduler.TaskSetManager [] [] - Finished task 0.0 in stage 0.0 (TID 0) in 45 ms on localhost (4/4)
[2015-01-27 13:14:51,591] INFO  cheduler.TaskSchedulerImpl [] [] - Removed TaskSet 0.0, whose tasks have all completed, from pool 
[2015-01-27 13:14:52,637] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:52,640] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:52,640] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:52,641] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:52,642] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:52,642] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:57257/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:52,643] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:57257/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:52,643] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:52,647] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$l/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:52,647] INFO  k.jobserver.JobResultActor [] [akka://test/user/$l/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:52,654] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:57257/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:52,657] INFO  ache.spark.SecurityManager [] [akka://test/user/$l] - Changing view acls to: tja01
[2015-01-27 13:14:52,658] INFO  ache.spark.SecurityManager [] [akka://test/user/$l] - Changing modify acls to: tja01
[2015-01-27 13:14:52,658] INFO  ache.spark.SecurityManager [] [akka://test/user/$l] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:52,688] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$l] - Slf4jLogger started
[2015-01-27 13:14:52,694] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:52,702] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:46813]
[2015-01-27 13:14:52,703] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$l] - Successfully started service 'sparkDriver' on port 46813.
[2015-01-27 13:14:52,704] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$l] - Registering MapOutputTracker
[2015-01-27 13:14:52,704] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$l] - Registering BlockManagerMaster
[2015-01-27 13:14:52,705] INFO  k.storage.DiskBlockManager [] [akka://test/user/$l] - Created local directory at /tmp/spark-local-20150127131452-6998
[2015-01-27 13:14:52,705] INFO  .spark.storage.MemoryStore [] [akka://test/user/$l] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:52,706] INFO  pache.spark.HttpFileServer [] [akka://test/user/$l] - HTTP File server directory is /tmp/spark-549dfa56-7df5-477f-9e46-9a66cf7aa198
[2015-01-27 13:14:52,706] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$l] - Starting HTTP Server
[2015-01-27 13:14:52,707] INFO  clipse.jetty.server.Server [] [akka://test/user/$l] - jetty-8.1.14.v20131031
[2015-01-27 13:14:52,708] INFO  y.server.AbstractConnector [] [akka://test/user/$l] - Started [email protected]:45111
[2015-01-27 13:14:52,708] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$l] - Successfully started service 'HTTP file server' on port 45111.
[2015-01-27 13:14:57,722] INFO  clipse.jetty.server.Server [] [akka://test/user/$l] - jetty-8.1.14.v20131031
[2015-01-27 13:14:57,734] INFO  y.server.AbstractConnector [] [akka://test/user/$l] - Started [email protected]:55198
[2015-01-27 13:14:57,734] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$l] - Successfully started service 'SparkUI' on port 55198.
[2015-01-27 13:14:57,735] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$l] - Started SparkUI at http://localhost:55198
[2015-01-27 13:14:57,775] INFO  pache.spark.util.AkkaUtils [] [akka://test/user/$l] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:46813/user/HeartbeatReceiver
[2015-01-27 13:14:57,777] INFO  .NettyBlockTransferService [] [akka://test/user/$l] - Server created on 35147
[2015-01-27 13:14:57,777] INFO  storage.BlockManagerMaster [] [akka://test/user/$l] - Trying to register BlockManager
[2015-01-27 13:14:57,778] INFO  ge.BlockManagerMasterActor [] [akka://test/user/$l] - Registering block manager localhost:35147 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 35147)
[2015-01-27 13:14:57,778] INFO  storage.BlockManagerMaster [] [akka://test/user/$l] - Registered BlockManager
[2015-01-27 13:14:57,782] INFO  .jobserver.RddManagerActor [] [akka://test/user/$l/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:14:57,783] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Loading class spark.jobserver.SleepJob for app demo
[2015-01-27 13:14:57,783] INFO  .apache.spark.SparkContext [] [akka://test/user/$l] - Added JAR /tmp/InMemoryDAO4651919385015759310.jar at http://10.1.3.213:45111/jars/InMemoryDAO4651919385015759310.jar with timestamp 1422364497783
[2015-01-27 13:14:57,786] INFO  util.ContextURLClassLoader [] [akka://test/user/$l] - Added URL file:/tmp/InMemoryDAO4651919385015759310.jar to ContextURLClassLoader
[2015-01-27 13:14:57,786] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$l] - Loading object spark.jobserver.SleepJob$ using loader spark.jobserver.util.ContextURLClassLoader@2691238a
[2015-01-27 13:14:57,787] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$l] - Loading class spark.jobserver.SleepJob using loader spark.jobserver.util.ContextURLClassLoader@2691238a
[2015-01-27 13:14:57,788] INFO  k.jobserver.JobResultActor [] [akka://test/user/$l/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 7805b79d-4ecf-4f84-93cf-3e0a42b39843
[2015-01-27 13:14:57,789] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Starting Spark job 7805b79d-4ecf-4f84-93cf-3e0a42b39843 [spark.jobserver.SleepJob]...
[2015-01-27 13:14:57,789] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Loading class spark.jobserver.SleepJob for app demo
[2015-01-27 13:14:57,789] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:57,789] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Starting Spark job 4319e2b4-e13c-4c3a-857a-e5a94b88b225 [spark.jobserver.SleepJob]...
[2015-01-27 13:14:57,789] INFO  k.jobserver.JobResultActor [] [akka://test/user/$l/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 4319e2b4-e13c-4c3a-857a-e5a94b88b225
[2015-01-27 13:14:57,789] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:14:57,790] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Loading class spark.jobserver.SleepJob for app demo
[2015-01-27 13:14:57,790] INFO  .jobserver.JobManagerActor [] [akka://test/user/$l] - Starting Spark job 7d45a9fb-4120-47e3-884c-f9836530aad2 [spark.jobserver.SleepJob]...
[2015-01-27 13:14:57,791] INFO  k.jobserver.JobResultActor [] [akka://test/user/$l/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID 7d45a9fb-4120-47e3-884c-f9836530aad2
[2015-01-27 13:14:57,792] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$l/status-actor] - Job 7805b79d-4ecf-4f84-93cf-3e0a42b39843 started
[2015-01-27 13:14:57,792] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:14:57,792] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:14:57,793] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:14:57,793] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:14:57,804] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:14:57,804] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:14:57,805] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:14:57,805] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:14:57,806] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:14:57,806] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:14:57,806] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:14:57,807] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:14:57,808] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:14:57,860] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:55198
[2015-01-27 13:14:57,860] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:14:58,913] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:14:58,917] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:14:58,918] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:14:58,921] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:14:58,922] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:14:58,926] INFO  .jobserver.JobManagerActor [] [akka://test/user/$m] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:14:58,929] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:46813/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:14:58,930] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:46813/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:14:58,946] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$m/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:14:58,947] INFO  k.jobserver.JobResultActor [] [akka://test/user/$m/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:14:58,961] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:46813/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:14:58,966] INFO  ache.spark.SecurityManager [] [akka://test/user/$m] - Changing view acls to: tja01
[2015-01-27 13:14:58,967] INFO  ache.spark.SecurityManager [] [akka://test/user/$m] - Changing modify acls to: tja01
[2015-01-27 13:14:58,967] INFO  ache.spark.SecurityManager [] [akka://test/user/$m] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:14:59,002] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$m] - Slf4jLogger started
[2015-01-27 13:14:59,008] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:14:59,015] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:44346]
[2015-01-27 13:14:59,016] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$m] - Successfully started service 'sparkDriver' on port 44346.
[2015-01-27 13:14:59,016] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$m] - Registering MapOutputTracker
[2015-01-27 13:14:59,017] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$m] - Registering BlockManagerMaster
[2015-01-27 13:14:59,018] INFO  k.storage.DiskBlockManager [] [akka://test/user/$m] - Created local directory at /tmp/spark-local-20150127131459-34a1
[2015-01-27 13:14:59,018] INFO  .spark.storage.MemoryStore [] [akka://test/user/$m] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:14:59,019] INFO  pache.spark.HttpFileServer [] [akka://test/user/$m] - HTTP File server directory is /tmp/spark-596dd07f-798d-408a-8851-11d23a582ba6
[2015-01-27 13:14:59,019] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$m] - Starting HTTP Server
[2015-01-27 13:14:59,019] INFO  clipse.jetty.server.Server [] [akka://test/user/$m] - jetty-8.1.14.v20131031
[2015-01-27 13:14:59,020] INFO  y.server.AbstractConnector [] [akka://test/user/$m] - Started [email protected]:46487
[2015-01-27 13:14:59,021] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$m] - Successfully started service 'HTTP file server' on port 46487.
[2015-01-27 13:15:04,034] INFO  clipse.jetty.server.Server [] [akka://test/user/$m] - jetty-8.1.14.v20131031
[2015-01-27 13:15:04,046] INFO  y.server.AbstractConnector [] [akka://test/user/$m] - Started [email protected]:39195
[2015-01-27 13:15:04,046] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$m] - Successfully started service 'SparkUI' on port 39195.
[2015-01-27 13:15:04,047] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$m] - Started SparkUI at http://localhost:39195
[2015-01-27 13:15:04,077] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:44346/user/HeartbeatReceiver
[2015-01-27 13:15:04,080] INFO  .NettyBlockTransferService [] [akka://test/user/$m] - Server created on 55389
[2015-01-27 13:15:04,080] INFO  storage.BlockManagerMaster [] [akka://test/user/$m] - Trying to register BlockManager
[2015-01-27 13:15:04,081] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:55389 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 55389)
[2015-01-27 13:15:04,081] INFO  storage.BlockManagerMaster [] [akka://test/user/$m] - Registered BlockManager
[2015-01-27 13:15:04,085] INFO  .jobserver.RddManagerActor [] [akka://test/user/$m/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:15:04,086] INFO  .jobserver.JobManagerActor [] [akka://test/user/$m] - Loading class spark.jobserver.SimpleObjectJob for app demo
[2015-01-27 13:15:04,086] INFO  .apache.spark.SparkContext [] [akka://test/user/$m] - Added JAR /tmp/InMemoryDAO7626494765007423818.jar at http://10.1.3.213:46487/jars/InMemoryDAO7626494765007423818.jar with timestamp 1422364504086
[2015-01-27 13:15:04,089] INFO  util.ContextURLClassLoader [] [akka://test/user/$m] - Added URL file:/tmp/InMemoryDAO7626494765007423818.jar to ContextURLClassLoader
[2015-01-27 13:15:04,089] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$m] - Loading object spark.jobserver.SimpleObjectJob$ using loader spark.jobserver.util.ContextURLClassLoader@6a0245ff
[2015-01-27 13:15:04,091] INFO  .jobserver.JobManagerActor [] [akka://test/user/$m] - Starting Spark job a6b4c092-f5cb-44a4-ad4f-f04cf1a9d903 [spark.jobserver.SimpleObjectJob]...
[2015-01-27 13:15:04,091] INFO  k.jobserver.JobResultActor [] [akka://test/user/$m/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID a6b4c092-f5cb-44a4-ad4f-f04cf1a9d903
[2015-01-27 13:15:04,091] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:15:04,092] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:15:04,092] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$m/status-actor] - Job a6b4c092-f5cb-44a4-ad4f-f04cf1a9d903 started
[2015-01-27 13:15:04,092] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:15:04,092] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:15:04,092] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:15:04,104] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:15:04,105] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:15:04,105] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:15:04,106] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:15:04,106] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:15:04,106] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:15:04,106] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:15:04,107] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:15:04,107] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:15:04,107] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:15:04,107] INFO  .apache.spark.SparkContext [] [] - Starting job: collect at SparkTestJobs.scala:81
[2015-01-27 13:15:04,107] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:15:04,108] INFO  ark.scheduler.DAGScheduler [] [] - Got job 0 (collect at SparkTestJobs.scala:81) with 4 output partitions (allowLocal=false)
[2015-01-27 13:15:04,108] INFO  ark.scheduler.DAGScheduler [] [] - Final stage: Stage 0(collect at SparkTestJobs.scala:81)
[2015-01-27 13:15:04,108] INFO  ark.scheduler.DAGScheduler [] [] - Parents of final stage: List()
[2015-01-27 13:15:04,108] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:15:04,108] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:15:04,109] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:15:04,110] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:15:04,110] INFO  ark.scheduler.DAGScheduler [] [] - Missing parents: List()
[2015-01-27 13:15:04,110] INFO  ark.scheduler.DAGScheduler [] [] - Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at SparkTestJobs.scala:80), which has no missing parents
[2015-01-27 13:15:04,112] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(1128) called with curMem=0, maxMem=714866688
[2015-01-27 13:15:04,112] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0 stored as values in memory (estimated size 1128.0 B, free 681.7 MB)
[2015-01-27 13:15:04,114] INFO  .spark.storage.MemoryStore [] [] - ensureFreeSpace(872) called with curMem=1128, maxMem=714866688
[2015-01-27 13:15:04,114] INFO  .spark.storage.MemoryStore [] [] - Block broadcast_0_piece0 stored as bytes in memory (estimated size 872.0 B, free 681.7 MB)
[2015-01-27 13:15:04,115] INFO  k.storage.BlockManagerInfo [] [akka://test/user/$m] - Added broadcast_0_piece0 in memory on localhost:55389 (size: 872.0 B, free: 681.7 MB)
[2015-01-27 13:15:04,115] INFO  storage.BlockManagerMaster [] [] - Updated info of block broadcast_0_piece0
[2015-01-27 13:15:04,116] INFO  .apache.spark.SparkContext [] [] - Created broadcast 0 from broadcast at DAGScheduler.scala:838
[2015-01-27 13:15:04,119] INFO  ark.scheduler.DAGScheduler [] [] - Submitting 4 missing tasks from Stage 0 (ParallelCollectionRDD[0] at parallelize at SparkTestJobs.scala:80)
[2015-01-27 13:15:04,119] INFO  cheduler.TaskSchedulerImpl [] [] - Adding task set 0.0 with 4 tasks
[2015-01-27 13:15:04,123] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1273 bytes)
[2015-01-27 13:15:04,124] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1277 bytes)
[2015-01-27 13:15:04,125] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1277 bytes)
[2015-01-27 13:15:04,126] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1277 bytes)
[2015-01-27 13:15:04,126] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Running task 0.0 in stage 0.0 (TID 0)
[2015-01-27 13:15:04,126] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Running task 1.0 in stage 0.0 (TID 1)
[2015-01-27 13:15:04,126] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Fetching http://10.1.3.213:46487/jars/InMemoryDAO7626494765007423818.jar with timestamp 1422364504086
[2015-01-27 13:15:04,127] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Running task 3.0 in stage 0.0 (TID 3)
[2015-01-27 13:15:04,127] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Running task 2.0 in stage 0.0 (TID 2)
[2015-01-27 13:15:04,146] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$m] - Fetching http://10.1.3.213:46487/jars/InMemoryDAO7626494765007423818.jar to /tmp/fetchFileTemp1421123736807285146.tmp
[2015-01-27 13:15:04,150] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Adding file:/tmp/spark-d89424f0-9e65-4956-b565-56eff909606e/InMemoryDAO7626494765007423818.jar to class loader
[2015-01-27 13:15:04,161] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Finished task 0.0 in stage 0.0 (TID 0). 594 bytes result sent to driver
[2015-01-27 13:15:04,161] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Finished task 3.0 in stage 0.0 (TID 3). 598 bytes result sent to driver
[2015-01-27 13:15:04,161] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Finished task 1.0 in stage 0.0 (TID 1). 598 bytes result sent to driver
[2015-01-27 13:15:04,161] INFO  he.spark.executor.Executor [] [akka://test/user/$m] - Finished task 2.0 in stage 0.0 (TID 2). 598 bytes result sent to driver
[2015-01-27 13:15:04,163] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:39195
[2015-01-27 13:15:04,163] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:15:04,163] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Finished task 0.0 in stage 0.0 (TID 0) in 40 ms on localhost (1/4)
[2015-01-27 13:15:04,164] ERROR cheduler.TaskSchedulerImpl [] [akka://test/user/$m] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@f4a7f52 rejected from java.util.concurrent.ThreadPoolExecutor@52dcd527[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:15:04,166] ERROR cheduler.TaskSchedulerImpl [] [akka://test/user/$m] - Exception in statusUpdate
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.scheduler.TaskResultGetter$$anon$2@7341d044 rejected from java.util.concurrent.ThreadPoolExecutor@52dcd527[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.apache.spark.scheduler.TaskResultGetter.enqueueSuccessfulTask(TaskResultGetter.scala:47)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:301)
    at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$liftedTree2$1$1.apply(TaskSchedulerImpl.scala:298)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.TaskSchedulerImpl.liftedTree2$1(TaskSchedulerImpl.scala:298)
    at org.apache.spark.scheduler.TaskSchedulerImpl.statusUpdate(TaskSchedulerImpl.scala:283)
    at org.apache.spark.scheduler.local.LocalActor$$anonfun$receiveWithLogging$1.applyOrElse(LocalBackend.scala:61)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.local.LocalActor.aroundReceive(LocalBackend.scala:43)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:15:04,167] INFO  ark.scheduler.DAGScheduler [] [] - Job 0 failed: collect at SparkTestJobs.scala:81, took 0.059633 s
[2015-01-27 13:15:04,167] WARN  .jobserver.JobManagerActor [] [] - Exception from job a6b4c092-f5cb-44a4-ad4f-f04cf1a9d903: 
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:702)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:701)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:701)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1428)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundPostStop(DAGScheduler.scala:1375)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:241)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:15:04,169] INFO  k.scheduler.TaskSetManager [] [akka://test/user/$m] - Finished task 3.0 in stage 0.0 (TID 3) in 43 ms on localhost (2/4)
[2015-01-27 13:15:05,216] INFO  apOutputTrackerMasterActor [] [akka://test/user/$m] - MapOutputTrackerActor stopped!
[2015-01-27 13:15:05,220] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:15:05,221] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:15:05,222] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:15:05,223] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:15:05,226] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44346/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:15:05,227] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44346/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:15:05,228] INFO  .jobserver.JobManagerActor [] [akka://test/user/$n] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:15:05,230] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$n/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:15:05,230] INFO  k.jobserver.JobResultActor [] [akka://test/user/$n/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:15:05,239] INFO  ache.spark.SecurityManager [] [akka://test/user/$n] - Changing view acls to: tja01
[2015-01-27 13:15:05,239] INFO  ache.spark.SecurityManager [] [akka://test/user/$n] - Changing modify acls to: tja01
[2015-01-27 13:15:05,239] INFO  ache.spark.SecurityManager [] [akka://test/user/$n] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:15:05,247] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:44346/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:15:05,268] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$n] - Slf4jLogger started
[2015-01-27 13:15:05,272] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:15:05,280] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:60401]
[2015-01-27 13:15:05,280] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$n] - Successfully started service 'sparkDriver' on port 60401.
[2015-01-27 13:15:05,281] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$n] - Registering MapOutputTracker
[2015-01-27 13:15:05,281] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$n] - Registering BlockManagerMaster
[2015-01-27 13:15:05,282] INFO  k.storage.DiskBlockManager [] [akka://test/user/$n] - Created local directory at /tmp/spark-local-20150127131505-8cc1
[2015-01-27 13:15:05,282] INFO  .spark.storage.MemoryStore [] [akka://test/user/$n] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:15:05,283] INFO  pache.spark.HttpFileServer [] [akka://test/user/$n] - HTTP File server directory is /tmp/spark-14aad482-5263-4167-b7a5-b17c2083c756
[2015-01-27 13:15:05,283] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$n] - Starting HTTP Server
[2015-01-27 13:15:05,284] INFO  clipse.jetty.server.Server [] [akka://test/user/$n] - jetty-8.1.14.v20131031
[2015-01-27 13:15:05,285] INFO  y.server.AbstractConnector [] [akka://test/user/$n] - Started [email protected]:58628
[2015-01-27 13:15:05,285] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$n] - Successfully started service 'HTTP file server' on port 58628.
[2015-01-27 13:15:10,299] INFO  clipse.jetty.server.Server [] [akka://test/user/$n] - jetty-8.1.14.v20131031
[2015-01-27 13:15:10,305] INFO  y.server.AbstractConnector [] [akka://test/user/$n] - Started [email protected]:40894
[2015-01-27 13:15:10,305] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$n] - Successfully started service 'SparkUI' on port 40894.
[2015-01-27 13:15:10,306] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$n] - Started SparkUI at http://localhost:40894
[2015-01-27 13:15:10,332] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:60401/user/HeartbeatReceiver
[2015-01-27 13:15:10,334] INFO  .NettyBlockTransferService [] [akka://test/user/$n] - Server created on 59467
[2015-01-27 13:15:10,334] INFO  storage.BlockManagerMaster [] [akka://test/user/$n] - Trying to register BlockManager
[2015-01-27 13:15:10,335] INFO  ge.BlockManagerMasterActor [] [akka://test/user/$n] - Registering block manager localhost:59467 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 59467)
[2015-01-27 13:15:10,335] INFO  storage.BlockManagerMaster [] [akka://test/user/$n] - Registered BlockManager
[2015-01-27 13:15:10,339] INFO  .jobserver.RddManagerActor [] [akka://test/user/$n/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:15:10,340] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:15:10,340] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:15:10,341] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:15:10,341] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:15:10,357] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:15:10,357] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:15:10,357] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:15:10,358] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:15:10,359] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:15:10,360] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:15:10,360] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:15:10,360] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:15:10,360] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:15:10,360] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:15:10,361] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:15:10,361] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:15:10,361] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:15:10,412] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:40894
[2015-01-27 13:15:10,413] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:15:11,466] INFO  apOutputTrackerMasterActor [] [akka://test/user/$n] - MapOutputTrackerActor stopped!
[2015-01-27 13:15:11,471] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:15:11,472] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:15:11,473] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:15:11,473] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:15:11,475] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:60401/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:15:11,475] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:60401/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:15:11,476] INFO  .jobserver.JobManagerActor [] [akka://test/user/$o] - Starting actor spark.jobserver.JobManagerActor
[2015-01-27 13:15:11,477] INFO  k.jobserver.JobStatusActor [] [akka://test/user/$o/status-actor] - Starting actor spark.jobserver.JobStatusActor
[2015-01-27 13:15:11,478] INFO  k.jobserver.JobResultActor [] [akka://test/user/$o/result-actor] - Starting actor spark.jobserver.JobResultActor
[2015-01-27 13:15:11,491] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:60401/system/remoting-terminator] - Remoting shut down.
[2015-01-27 13:15:11,494] INFO  ache.spark.SecurityManager [] [akka://test/user/$o] - Changing view acls to: tja01
[2015-01-27 13:15:11,494] INFO  ache.spark.SecurityManager [] [akka://test/user/$o] - Changing modify acls to: tja01
[2015-01-27 13:15:11,494] INFO  ache.spark.SecurityManager [] [akka://test/user/$o] - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tja01); users with modify permissions: Set(tja01)
[2015-01-27 13:15:11,526] INFO  ka.event.slf4j.Slf4jLogger [] [akka://test/user/$o] - Slf4jLogger started
[2015-01-27 13:15:11,530] INFO  Remoting [] [Remoting] - Starting remoting
[2015-01-27 13:15:11,537] INFO  Remoting [] [Remoting] - Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:49686]
[2015-01-27 13:15:11,537] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$o] - Successfully started service 'sparkDriver' on port 49686.
[2015-01-27 13:15:11,538] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$o] - Registering MapOutputTracker
[2015-01-27 13:15:11,538] INFO  org.apache.spark.SparkEnv [] [akka://test/user/$o] - Registering BlockManagerMaster
[2015-01-27 13:15:11,539] INFO  k.storage.DiskBlockManager [] [akka://test/user/$o] - Created local directory at /tmp/spark-local-20150127131511-863c
[2015-01-27 13:15:11,539] INFO  .spark.storage.MemoryStore [] [akka://test/user/$o] - MemoryStore started with capacity 681.8 MB
[2015-01-27 13:15:11,540] INFO  pache.spark.HttpFileServer [] [akka://test/user/$o] - HTTP File server directory is /tmp/spark-8b9fef95-4f46-43e8-83bc-5bd23eeb11e1
[2015-01-27 13:15:11,540] INFO  rg.apache.spark.HttpServer [] [akka://test/user/$o] - Starting HTTP Server
[2015-01-27 13:15:11,541] INFO  clipse.jetty.server.Server [] [akka://test/user/$o] - jetty-8.1.14.v20131031
[2015-01-27 13:15:11,542] INFO  y.server.AbstractConnector [] [akka://test/user/$o] - Started [email protected]:33339
[2015-01-27 13:15:11,542] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$o] - Successfully started service 'HTTP file server' on port 33339.
[2015-01-27 13:15:16,556] INFO  clipse.jetty.server.Server [] [akka://test/user/$o] - jetty-8.1.14.v20131031
[2015-01-27 13:15:16,563] INFO  y.server.AbstractConnector [] [akka://test/user/$o] - Started [email protected]:57192
[2015-01-27 13:15:16,563] INFO  rg.apache.spark.util.Utils [] [akka://test/user/$o] - Successfully started service 'SparkUI' on port 57192.
[2015-01-27 13:15:16,564] INFO  rg.apache.spark.ui.SparkUI [] [akka://test/user/$o] - Started SparkUI at http://localhost:57192
[2015-01-27 13:15:16,641] INFO  pache.spark.util.AkkaUtils [] [] - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:49686/user/HeartbeatReceiver
[2015-01-27 13:15:16,646] INFO  .NettyBlockTransferService [] [akka://test/user/$o] - Server created on 34437
[2015-01-27 13:15:16,646] INFO  storage.BlockManagerMaster [] [akka://test/user/$o] - Trying to register BlockManager
[2015-01-27 13:15:16,646] INFO  ge.BlockManagerMasterActor [] [] - Registering block manager localhost:34437 with 681.8 MB RAM, BlockManagerId(<driver>, localhost, 34437)
[2015-01-27 13:15:16,647] INFO  storage.BlockManagerMaster [] [akka://test/user/$o] - Registered BlockManager
[2015-01-27 13:15:16,651] INFO  .jobserver.RddManagerActor [] [akka://test/user/$o/rdd-manager-actor] - Starting actor spark.jobserver.RddManagerActor
[2015-01-27 13:15:16,651] INFO  .jobserver.JobManagerActor [] [akka://test/user/$o] - Loading class spark.jobserver.CacheRddByNameJob for app demo
[2015-01-27 13:15:16,652] INFO  .apache.spark.SparkContext [] [akka://test/user/$o] - Added JAR /tmp/InMemoryDAO5654339605225412024.jar at http://10.1.3.213:33339/jars/InMemoryDAO5654339605225412024.jar with timestamp 1422364516652
[2015-01-27 13:15:16,654] INFO  util.ContextURLClassLoader [] [akka://test/user/$o] - Added URL file:/tmp/InMemoryDAO5654339605225412024.jar to ContextURLClassLoader
[2015-01-27 13:15:16,654] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$o] - Loading object spark.jobserver.CacheRddByNameJob$ using loader spark.jobserver.util.ContextURLClassLoader@27dec5ab
[2015-01-27 13:15:16,655] INFO  spark.jobserver.JarUtils$ [] [akka://test/user/$o] - Loading class spark.jobserver.CacheRddByNameJob using loader spark.jobserver.util.ContextURLClassLoader@27dec5ab
[2015-01-27 13:15:16,656] INFO  .jobserver.JobManagerActor [] [akka://test/user/$o] - Starting Spark job f52f0dd3-874b-48e3-9d59-a5b9d312adb3 [spark.jobserver.CacheRddByNameJob]...
[2015-01-27 13:15:16,657] INFO  k.jobserver.JobResultActor [] [akka://test/user/$o/result-actor] - Added receiver Actor[akka://test/system/testActor1#-2027067696] to subscriber list for JobID f52f0dd3-874b-48e3-9d59-a5b9d312adb3
[2015-01-27 13:15:16,657] INFO  .jobserver.JobManagerActor [] [] - Starting job future thread
[2015-01-27 13:15:16,657] WARN  .jobserver.RddManagerActor [] [] - Shutting down spark.jobserver.RddManagerActor
[2015-01-27 13:15:16,657] WARN  k.jobserver.JobResultActor [] [] - Shutting down spark.jobserver.JobResultActor
[2015-01-27 13:15:16,657] WARN  k.jobserver.JobStatusActor [] [] - Shutting down spark.jobserver.JobStatusActor
[2015-01-27 13:15:16,658] INFO  .jobserver.JobManagerActor [] [] - Shutting down SparkContext test
[2015-01-27 13:15:16,663] WARN  .jobserver.JobManagerActor [] [] - Exception from job f52f0dd3-874b-48e3-9d59-a5b9d312adb3: 
akka.pattern.AskTimeoutException: Recipient[Actor[akka://test/user/$o/rdd-manager-actor#1992115446]] had already been terminated.
    at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
    at spark.jobserver.JobServerNamedRdds.getOrElseCreate(JobServerNamedRdds.scala:24)
    at spark.jobserver.CacheRddByNameJob.runJob(SparkTestJobs.scala:57)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:219)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2015-01-27 13:15:16,669] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/static,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors/json,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/executors,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment/json,null}
[2015-01-27 13:15:16,670] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/environment,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage/json,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/storage,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
[2015-01-27 13:15:16,671] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages/json,null}
[2015-01-27 13:15:16,672] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/stages,null}
[2015-01-27 13:15:16,672] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
[2015-01-27 13:15:16,672] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
[2015-01-27 13:15:16,672] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
[2015-01-27 13:15:16,672] INFO  ver.handler.ContextHandler [] [] - stopped o.e.j.s.ServletContextHandler{/jobs,null}
[2015-01-27 13:15:16,724] INFO  rg.apache.spark.ui.SparkUI [] [] - Stopped Spark web UI at http://localhost:57192
[2015-01-27 13:15:16,724] INFO  ark.scheduler.DAGScheduler [] [] - Stopping DAGScheduler
[2015-01-27 13:15:17,777] INFO  apOutputTrackerMasterActor [] [] - MapOutputTrackerActor stopped!
[2015-01-27 13:15:17,781] INFO  .spark.storage.MemoryStore [] [] - MemoryStore cleared
[2015-01-27 13:15:17,781] INFO  spark.storage.BlockManager [] [] - BlockManager stopped
[2015-01-27 13:15:17,782] INFO  storage.BlockManagerMaster [] [] - BlockManagerMaster stopped
[2015-01-27 13:15:17,783] INFO  .apache.spark.SparkContext [] [] - Successfully stopped SparkContext
[2015-01-27 13:15:17,788] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49686/system/remoting-terminator] - Shutting down remote daemon.
[2015-01-27 13:15:17,789] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49686/system/remoting-terminator] - Remote daemon shut down; proceeding with flushing remote transports.
[2015-01-27 13:15:17,799] INFO  rovider$RemotingTerminator [] [akka.tcp://sparkDriver@localhost:49686/system/remoting-terminator] - Remoting shut down.

akka.pattern.AskTimeoutException

Hi,

I see tons of the error below when I run a test on a local job server:

Send request: http://devsparkcluster.cloudapp.net/jobs?appName=job-server-tests&classPath=spark.jobserver.WordCountExample
{
  "status": "ERROR",
  "result": {
    "message": "Timed out",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)", "akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)", "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:455)", "akka.actor.LightArrayRevolverScheduler$$anon$12.executeBucket$1(Scheduler.scala:407)", "akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:411)", "akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)", "java.lang.Thread.run(Thread.java:745)"]
  }
}

Here is my complete configuration (I have increased the values for context-creation-timeout and akka.actor.creation-timeout):

/spark-jobserver-0.4.1/job-server/src/main/resources$ cat application.conf

# Settings for safe local mode development
spark {
  master = "local[4]"

  # spark web UI port
  webUrlPort = 8080

  logConf = true

  jobserver {
    port = 8090
    bind-address = "0.0.0.0"

    # Number of job results to keep per JobResultActor/context
    job-result-cache-size = 5000

    jobdao = spark.jobserver.io.JobFileDAO

    filedao {
      rootdir = /tmp/spark-jobserver/filedao/data
    }

    # Time out for job server to wait while creating contexts
    context-creation-timeout = 180 s

    # A zero-arg class implementing spark.jobserver.util.SparkContextFactory
    context-factory = spark.jobserver.util.DefaultSparkContextFactory
  }

  # predefined Spark contexts
  # Below is an example, but do not uncomment it. Everything defined here is carried over to
  # deploy-time configs, so they will be created in all environments. :(
  contexts {
    # abc-demo {
    #   num-cpu-cores = 4        # Number of cores to allocate. Required.
    #   memory-per-node = 1024m  # Executor memory per node, -Xmx style eg 512m, 1G, etc.
    # }
    # define additional contexts here
  }

  # Default settings for ad hoc as well as manually created contexts
  # You can add any Spark config params here, for example, spark.mesos.coarse = true
  context-settings {
    num-cpu-cores = 8      # Number of cores to allocate. Required.
    memory-per-node = 1G   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
    # max-jobs-per-context = 4  # Max # of jobs to run at the same time
  }
}

akka {
  # Use SLF4J/logback for deployed environment logging
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  log-config-on-start = on
  stdout-loglevel = "DEBUG"
  jvm-exit-on-fatal-error = on

  timeout = 180

  io {
    tcp {
      nr-of-selectors = 10
      trace-logging = on
    }

    udp {
      nr-of-selectors = 10
      trace-logging = on
    }

    udp-connected {
      trace-logging = on
    }
  }

  actor {
    creation-timeout = 180 s
    debug {
      autoreceive = on
      lifecycle = on
      fsm = on
      unhandled = on
      receive = on
      event-stream = on
    }
  }

  remote {
    log-sent-messages = on
    log-received-messages = on
  }
}

# check the reference.conf in spray-can/src/main/resources for all defined settings
spray.can.server {
  # uncomment the next line for making this an HTTPS example
  ssl-encryption = on

  idle-timeout = 20 s
  request-timeout = 15 s
  pipelining-limit = 2  # for maximum performance (prevents StopReading / ResumeReading messages to the IOBridge)

  # Needed for HTTP/1.0 requests with missing Host headers
  default-host-header = "spray.io:8765"

  verbose-error-messages = on
}

Where am I wrong, and what causes the timeout? I know the startJob failed, due to this error in the getJobFeture call:

debug=> getJobFeture failed for job ab4dcf01-94ef-480c-9014-2d783ac8aabe : error - org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 6, 10.0.0.7): java.io.IOException: unexpected exception type
java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1538)
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1025)
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1896)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:159)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Driver stacktrace:

But I don't know whether this is caused by the Spark cluster. If it is, why do errors in the Spark cluster affect the job server, and in particular why do they show up as the odd AskTimeoutException?

Thanks in advance for any help.
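
A note on why a failed Spark stage can surface as an Akka timeout: the stack in the REST response above comes from Akka's ask pattern (akka.pattern.PromiseActorRef), i.e. the HTTP layer asks a job actor for a result and only waits a bounded time for a reply. The standalone sketch below (plain Akka, not jobserver's actual internals; actor and message names are made up for illustration) reproduces the same errorClass when the asked actor never replies, which is effectively what happens when the underlying job keeps failing or hanging:

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// An actor that never replies, standing in for a job that has failed or hung.
class SilentJobActor extends Actor {
  def receive = { case _ => () }
}

object AskTimeoutDemo extends App {
  val system = ActorSystem("demo")
  val jobActor = system.actorOf(Props(new SilentJobActor), "job")
  implicit val timeout: Timeout = Timeout(3.seconds)

  // The "ask" waits at most 3 seconds for a reply. Since none ever comes, the
  // future fails with akka.pattern.AskTimeoutException -- the same errorClass
  // reported in the REST response above.
  try {
    Await.result(jobActor ? "run", 5.seconds)
  } catch {
    case e: akka.pattern.AskTimeoutException => println(s"ask timed out: ${e.getMessage}")
  } finally {
    system.terminate()
  }
}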

Best way to implement Spark streams

Hi all,

Just wondering if someone could point me in the right direction for interacting with streams in a runJob method. We have a streaming job that is passed the streaming context in runJob, but streams behave differently from single RDDs, so they never really return any value.

Thanks,
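
One possible pattern is to treat runJob as the place that wires up the DStream pipeline and starts the context, returning immediately rather than waiting for a result; each batch then pushes its output to a sink (store, named object, etc.) instead of being returned. The sketch below assumes a jobserver version that ships a StreamingContext-based job trait (spark.jobserver.SparkStreamingJob in later releases); the trait and package names are the assumption here, so check the API of the version you actually run:

import com.typesafe.config.Config
import org.apache.spark.streaming.StreamingContext
import spark.jobserver.{SparkJobValid, SparkJobValidation, SparkStreamingJob}  // assumed trait names

object StreamingWordCount extends SparkStreamingJob {
  def validate(ssc: StreamingContext, config: Config): SparkJobValidation = SparkJobValid

  def runJob(ssc: StreamingContext, config: Config): Any = {
    // Wire up the pipeline; a DStream never "returns" data to the caller.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .print()              // or write each batch to an external store

    ssc.start()             // start processing; the context keeps running
    "streaming started"     // return right away; manage the context via the REST API
  }
}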

Mesos master urls no longer recognized

Specifically, the error is:

Caused by: java.lang.RuntimeException: Could not parse Master URL: 'mesos://zk://10.1.1.101:2181/mesos'

Previously this worked just fine; it appears that merging in this changeset is the cause. Will try to see if I can fix it up myself (tomorrow), but wanted to report it first.

Unable to use yarn-client as master

Setting master to 'yarn-client' in the config results in this error during startup:

2015-01-05 22:06:55,765 ERROR [JobServer-akka.actor.default-dispatcher-5] actor.OneForOneStrategy (Slf4jLogger.scala:apply$mcV$sp(66)) - exception during creation
akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:218)
at akka.actor.ActorCell.create(ActorCell.scala:578)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.dispatch.Mailbox.run(Mailbox.scala:218)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at akka.util.Reflect$.instantiate(Reflect.scala:65)
at akka.actor.Props.newActor(Props.scala:337)
at akka.actor.ActorCell.newActor(ActorCell.scala:534)
at akka.actor.ActorCell.create(ActorCell.scala:560)
... 9 more
Caused by: java.lang.RuntimeException: Could not parse Master URL: 'yarn-client'
at spark.jobserver.SparkWebUiActor.getSparkHostName(SparkWebUiActor.scala:85)
at spark.jobserver.SparkWebUiActor.<init>(SparkWebUiActor.scala:40)
... 17 more

Sorted maps return unsorted from REST response

A simple example is a sorted word count.

In the Spark job:

val myTop = someRDD.takeOrdered(500)(WordCountOrdering)
myTop.toMap  // this is sorted, but the result from WebApi comes back re-mapped into arbitrary order

I also see that the following resultToMap function isn't used:

https://github.com/spark-jobserver/spark-jobserver/blob/master/job-server/src/spark.jobserver/WebApi.scala#L364

If I have a moment I can look into it, but wanted to put it on the issues radar.
Need to dig into: ctx.complete(resultToTable(result))
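
For reference, the ordering is already lost before the result reaches WebApi: toMap on more than a handful of entries builds an unordered HashMap. A small standalone sketch (names are illustrative only) of two ways to keep the order, either an insertion-ordered ListMap or simply returning the sorted pairs themselves:

import scala.collection.immutable.ListMap

object SortedResultDemo extends App {
  // Five entries, already sorted by count (descending).
  val counts = Seq("spark" -> 50, "job" -> 40, "server" -> 30, "rest" -> 20, "api" -> 10)

  val unordered = counts.toMap         // becomes a HashMap: iteration order is arbitrary
  val ordered   = ListMap(counts: _*)  // keeps insertion (i.e. sorted) order
  val asPairs   = counts               // a Seq of pairs keeps order and is unambiguous as JSON

  println(unordered)
  println(ordered)
  println(asPairs)
}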
