
impatient's Introduction

Cascading

Thanks for using Cascading.

Cascading 3.3

Cascading 3 includes a few major changes and additions from prior major releases:

  • Complete rewrite of the platform query planner and improvements to the planner API
  • Addition of Apache Tez as a supported runtime platform
  • Changes to the Tap/Scheme generic type signatures to support portability

These changes aim to simplify the creation of bindings to new platform implementations and to improve the performance of the resulting applications.

General Information:

For project documentation and community support, visit: cascading.org

To download a pre-built distribution, visit http://cascading.org/downloads/, or use Maven (described below).

The project includes nine Cascading jar files:

  • cascading-core-x.y.z.jar - all Cascading Core class files
  • cascading-xml-x.y.z.jar - all Cascading XML operations class files
  • cascading-expression-x.y.z.jar - all Cascading Janino expression operations class files
  • cascading-local-x.y.z.jar - all Cascading Local in-memory mode class files
  • cascading-hadoop-x.y.z.jar - all Cascading Hadoop 1.x MapReduce mode class files
  • cascading-hadoop2-io-x.y.z.jar - all Cascading Hadoop 2.x HDFS and IO related class files
  • cascading-hadoop2-mr1-x.y.z.jar - all Cascading Hadoop 2.x MapReduce mode class files
  • cascading-hadoop2-tez-x.y.z.jar - all Cascading Hadoop 2.x Tez mode class files
  • cascading-hadoop2-tez-stats-x.y.z.jar - all Cascading Tez YARN timeline server class files

These class jars, along with the tests, source, and javadoc jars, are all available via the Conjars.org Maven repository.

Hadoop 1.x mode is where the Cascading application should run on a Hadoop MapReduce cluster.

Hadoop 2.x MR1 mode is the same as above but for Hadoop 2.x releases.

Hadoop 2.x Tez mode is where the Cascading application should run on an Apache Tez DAG cluster.

Local mode is where the Cascading application will run locally in memory without any Hadoop dependencies or cluster distribution. This implementation has minimal to no robustness in low memory situations, by design.

As of Cascading 3.x, all of the above jar files are built against Java 1.7. Prior versions of Cascading were built against Java 1.6.
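To illustrate Local mode, here is a minimal sketch of a flow that copies a text file, using the local-mode Tap and Scheme classes from the cascading-local jar (file paths and the class name are hypothetical):

```java
import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.local.LocalFlowConnector;
import cascading.pipe.Pipe;
import cascading.scheme.local.TextLine;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.local.FileTap;

public class LocalCopy
  {
  public static void main( String[] args )
    {
    // source and sink Taps over plain local files (paths are hypothetical)
    Tap inTap = new FileTap( new TextLine(), "data/input.txt" );
    Tap outTap = new FileTap( new TextLine(), "output/copy.txt", SinkMode.REPLACE );

    // a single pass-through Pipe; no Hadoop dependencies are involved
    Pipe copy = new Pipe( "copy" );

    // plan and run the Flow entirely in local memory
    Flow flow = new LocalFlowConnector( new Properties() ).connect( inTap, outTap, copy );
    flow.complete();
    }
  }
```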

Extensions, the SDK, and DSLs

A number of projects based on Cascading, as well as extensions to it, are available.

Visit the Cascading Extensions page for a current list.

Or download the Cascading SDK which includes many pre-built binaries.

Of note are three top level projects:

  • Fluid - A fluent Java API for Cascading that is compatible with the default API.
  • Lingual - ANSI SQL and JDBC on Cascading
  • Pattern - Machine Learning scoring and PMML support with Cascading


Versioning

Cascading stable releases are always of the form x.y.z, where z is the current maintenance release.

x.y.z releases are maintenance releases. No public incompatible API changes will be made, but in an effort to fix bugs, remediation may entail throwing new Exceptions.

x.y releases are minor releases. New features are added. No public incompatible API changes will be made on the core processing APIs (Pipes, Functions, etc), but in an effort to resolve inconsistencies, minor semantic changes may be necessary.

It is important to note that we reserve the right to make breaking changes to the new query planner API throughout the 3.x releases. This allows us to respond to bugs and performance issues without issuing new major releases. Cascading 4.0 will keep the public query planner APIs stable.

The source and tags for all stable releases can be found here: https://github.com/Cascading/cascading

WIP (work in progress) releases are fully tested builds of code not yet deemed fully stable. On every build by our continuous integration servers, the WIP build number is increased. Successful builds are then tagged and published.

The WIP releases are always of the form x.y.z-wip-n, where x.y.z will be the next stable release version the WIP releases are leading up to. n is the current successfully tested build.

The source, working branches, and tags for all WIP releases can be found here: https://github.com/cwensel/cascading

Or downloaded from here: http://cascading.org/wip/

When a WIP is deemed stable and ready for production use, it will be published as a x.y.z release, and made available from the http://cascading.org/downloads/ page.

Writing and Running Tests

Comprehensive tests should be written against the cascading.PlatformTestCase.

When running tests built against the PlatformTestCase, the local cluster can be disabled (if enabled by the test) by setting:

-Dtest.cluster.enabled=false

From Gradle, to run a single test case:

> gradle :cascading-hadoop2-mr1:platformTest --tests=*.FieldedPipesPlatformTest -i

or a single test method:

> gradle :cascading-hadoop2-mr1:platformTest --tests=*.FieldedPipesPlatformTest.testNoGroup -i

Debugging the 3.x Planner

The new 3.0 planner has a much improved debugging framework.

When running tests, set the following:

-Dtest.traceplan.enabled=true

If you are on Mac OS X and have installed GraphViz, dot files can be converted to pdf on the fly. To enable, set:

-Dutil.dot.to.pdf.enabled=true

Optionally, for standalone applications, statistics and tracing can be enabled selectively with the following properties:

  • cascading.planner.stats.path - outputs detailed statistics on time spent by the planner
  • cascading.planner.plan.path - basic planner information
  • cascading.planner.plan.transforms.path - detailed information for each rule
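For example, assuming these are honored as JVM system properties, a standalone application could be launched with all three pointing at a trace directory (the paths, jar name, and main class are hypothetical):

```
java -Dcascading.planner.stats.path=/tmp/planner/stats \
     -Dcascading.planner.plan.path=/tmp/planner/plan \
     -Dcascading.planner.plan.transforms.path=/tmp/planner/transforms \
     -cp your-app.jar your.Main
```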

Contributing and Reporting Issues

See CONTRIBUTING.md at https://github.com/Cascading/cascading.

Using with Maven/Ivy

It is strongly recommended that developers pull Cascading from our Maven-compatible jar repository, Conjars.org.

The latest public and WIP (work in progress) releases are available there.

When creating tests, make sure to add any of the relevant above dependencies to your test scope or equivalent configuration along with the cascading-platform dependency.

Note that the cascading-platform compile dependency contains no classes; you must pull the tests dependency with the tests classifier.

See http://cascading.org/downloads/#maven for example Maven pom dependency settings.

Source and Javadoc artifacts (using the appropriate classifier) are also available through Conjars.

Note that cascading-hadoop, cascading-hadoop2-mr1, and cascading-hadoop2-tez declare a provided dependency on the Hadoop jars, so the Hadoop jars are not, typically, pulled into your application packaging.
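As a sketch, a Maven pom for a Hadoop 2 MR1 project might declare dependencies like the following (the version numbers are placeholders; check Conjars for current releases):

```xml
<dependency>
  <groupId>cascading</groupId>
  <artifactId>cascading-core</artifactId>
  <version>x.y.z</version>
</dependency>
<dependency>
  <groupId>cascading</groupId>
  <artifactId>cascading-hadoop2-mr1</artifactId>
  <version>x.y.z</version>
</dependency>
<!-- the Hadoop jars themselves are expected to be provided by the cluster -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.x.y</version>
  <scope>provided</scope>
</dependency>
<!-- cascading-platform carries no classes; pull its tests via the classifier -->
<dependency>
  <groupId>cascading</groupId>
  <artifactId>cascading-platform</artifactId>
  <version>x.y.z</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>
```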

Building and IDE Integration

For most cases, building Cascading is unnecessary as it has been pre-built, tested, and published to our Maven repository (above).

To build Cascading, run the following in the shell:

> git clone https://github.com/cascading/cascading.git
> cd cascading
> gradle build

Cascading requires at least Gradle 2.7 and Java 1.7 to build.

To use an IDE like IntelliJ, run the following to create IntelliJ project files:

> gradle idea

Similarly for Eclipse:

> gradle eclipse

Using with Apache Hadoop

First confirm you are using a supported version of Apache Hadoop by checking the Compatibility page.

To use Cascading with Hadoop, we suggest packing the cascading-core and cascading-hadoop2-mr1 jar files, along with all third-party libraries, into the lib folder of your job jar and executing your job via $HADOOP_HOME/bin/hadoop jar your.jar <your args>.

For example, your job jar would look like this (via jar -tf your.jar):

/<all your class and resource files>
/lib/cascading-core-x.y.z.jar
/lib/cascading-hadoop2-mr1-x.y.z.jar
/lib/cascading-hadoop2-io-x.y.z.jar
/lib/cascading-expression-x.y.z.jar
/lib/<cascading third-party jar files>

Hadoop will unpack the jar locally and remotely (in the cluster) and add any libraries in lib to the classpath. This is a feature specific to Hadoop.
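As a sketch, a Gradle jar task could assemble this layout by copying the runtime dependencies into lib (this assumes the Gradle java plugin; the configuration name and main class are assumptions that may differ in your build):

```groovy
jar {
  description = 'Assembles a Hadoop-ready job jar with dependencies under lib/'

  // place all runtime dependency jars inside the job jar's lib folder
  into( 'lib' ) {
    from configurations.runtime
  }

  manifest {
    attributes( 'Main-Class': 'your.Main' ) // hypothetical main class
  }
}
```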

impatient's People

Contributors

ceteri, cwensel, dvryaboy, fs111, rdesmond, stmcpherson, supreetoberoi, zac-hopkinson


impatient's Issues

part1 doesn't compile

On d9d5eec

fish:part1 dirkraft$ gradle jar
:compileJava

FAILURE: Build failed with an exception.

* What went wrong:
Could not resolve all dependencies for configuration ':providedCompile'.
> Artifact 'org.codehaus.jackson:jackson-jaxrs:1.7.1@jar' not found.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 4.209 secs
fish:part1 dirkraft$

Error when running the impatient example in IDEA

My Hadoop version is hadoop-2.5.0-cdh5.2.0. It works fine when I:

  1. start HDFS and YARN
  2. follow the tutorial and use the command: hadoop jar ./build/libs/impatient.jar data/rain.txt output/rain

Then I tried running the demo in IDEA. I use a Maven project; my pom.xml contains:

<hadoop.version>2.5.0-cdh5.2.0</hadoop.version>
<cascading.version>2.6.1</cascading.version>

with dependencies on driven:driven-plugin:1.2-eap-5 and, at ${cascading.version}, cascading-core, cascading-local, cascading-hadoop, cascading-hadoop2-mr1, cascading-xml, and cascading-platform (test scope). Then I copied Main.java and ran it. Since I set inPath and outPath to the local filesystem, I did not start Hadoop. This error happens:

Exception in thread "main" cascading.flow.FlowException: unhandled exception
at cascading.flow.BaseFlow.complete(BaseFlow.java:918)
at com.zqh.cascading.impatient.Copy.main(Copy.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.mapred.LocalJobRunner.<init>(Lorg/apache/hadoop/conf/Configuration;)V
at org.apache.hadoop.mapred.LocalClientProtocolProvider.create(LocalClientProtocolProvider.java:42)
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:472)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:450)
at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:107)
at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:207)
at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:150)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:124)
at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:43)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

There are also errors when I start Hadoop and run it. Does this error mean I am using the wrong Hadoop version, since my Cascading dependency is cascading-hadoop2-mr1? Does Cascading not support MR2 or YARN?

Build fails with Java 7, Gradle 2.0, and Hadoop 2.4.1 on Ubuntu 14.04

When I try to build:

me@computer:~/github/Impatient$ ~/dev/gradle-2.0/bin/gradle build.gradle    

FAILURE: Build failed with an exception.

* Where:
Build file '/home/me/github/Impatient/part1/build.gradle' line: 43

* What went wrong:
A problem occurred evaluating project ':part1'.
> You can't change configuration 'providedCompile' because it is already resolved!

I have cleared my .m2 repo, but that didn't help.

My version info

Gradle

me@computer:~/github/Impatient$ ~/dev/gradle-2.0/bin/gradle --version

------------------------------------------------------------
Gradle 2.0
------------------------------------------------------------

Build time:   2014-07-01 07:45:34 UTC
Build number: none
Revision:     b6ead6fa452dfdadec484059191eb641d817226c

Groovy:       2.3.3
Ant:          Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM:          1.7.0_55 (Oracle Corporation 24.51-b03)
OS:           Linux 3.13.0-32-generic amd64

Hadoop

me@computer:~/github/Impatient$ ~/dev/hadoop-2.4.1/bin/hadoop version
Hadoop 2.4.1
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins on 2014-06-21T05:43Z
Compiled with protoc 2.5.0
From source with checksum bb7ac0a3c73dc131f4844b873c74b630
This command was run using /home/me/dev/hadoop-2.4.1/share/hadoop/common/hadoop-common-2.4.1.jar

Cannot get Gradle to build

$ gradle clean jar

FAILURE: Build failed with an exception.

  • Where:
    Build file '/home/lina/dev/workspaces/Impatient/part1/build.gradle' line: 31
  • What went wrong:
    A problem occurred evaluating root project 'part1'.
    Cause: You must specify a urls for a Maven repo.
  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 1.785 secs

Some details:

$ gradle --version

Gradle 1.0-milestone-3

Gradle build time: Thursday, September 8, 2011 4:06:52 PM UTC
Groovy: 1.8.6
Ant: Apache Ant(TM) version 1.8.2 compiled on December 3 2011
Ivy: non official version
JVM: 1.6.0_24 (Sun Microsystems Inc. 20.0-b12)
OS: Linux 3.2.0-31-generic amd64

I tried this solution, but it didn't work:
http://www.baselogic.com/blog/development/gradle-repositories-syntax-changed-cause-url-maven-repository/

issue in "gradle clean jar"

While typing the command gradle clean jar I got this. By the way, I have Hadoop 1.2.1 and Cascading 2.6; I put the Cascading jars in hadoop/lib, and the Cascading Impatient documentation says it works with Hadoop 1 and Hadoop 2.
