alluxio / alluxio

Alluxio, data orchestration for analytics and machine learning in the cloud

Home Page: https://www.alluxio.io

License: Apache License 2.0

Shell 1.09% Java 94.52% HTML 0.02% JavaScript 0.06% Python 0.03% Roff 0.04% Go 1.47% Dockerfile 0.02% TypeScript 2.09% Makefile 0.01% Handlebars 0.02% SCSS 0.07% C 0.05% C++ 0.30% Rust 0.21%
alluxio memory-speed hadoop spark presto tensorflow data-analysis data-orchestration virtual-distributed-filesystem

alluxio's Introduction


Badges: Slack | Release | Docker Pulls | Documentation | OpenSSF Scorecard | Twitter Follow | License

What is Alluxio

Alluxio (formerly known as Tachyon) is a virtual distributed storage system. It bridges the gap between computation frameworks and storage systems, enabling applications to connect to numerous storage systems through a common interface. Read more in the Alluxio Overview.

The Alluxio project originated from a research project called Tachyon at AMPLab, UC Berkeley, which was the data layer of the Berkeley Data Analytics Stack (BDAS). For more details, please refer to Haoyuan Li's PhD dissertation Alluxio: A Virtual Distributed File System.

Who Uses Alluxio

Alluxio is used in production to manage petabytes of data at many leading companies, with the largest deployment exceeding 3,000 nodes. You can find more use cases at Powered by Alluxio or visit our first community conference (Data Orchestration Summit) to learn from other community members!

Who Owns and Manages Alluxio Project

The Alluxio Open Source Foundation is the owner of the Alluxio project. Project operation is handled by the Alluxio Project Management Committee (PMC). You can check out more details on its structure and how to join the Alluxio PMC here.

Community and Events

Please use the following to reach members of the community:

Download Alluxio

Binary download

Prebuilt binaries are available for download at https://www.alluxio.io/download.

Docker

Download and start an Alluxio master and a worker. More details can be found in the documentation.

# Create a network for connecting Alluxio containers
$ docker network create alluxio_nw
# Create a volume for storing ufs data
$ docker volume create ufs
# Launch the Alluxio master
$ docker run -d --net=alluxio_nw \
    -p 19999:19999 \
    --name=alluxio-master \
    -v ufs:/opt/alluxio/underFSStorage \
    alluxio/alluxio master
# Launch the Alluxio worker
$ export ALLUXIO_WORKER_RAMDISK_SIZE=1G
$ docker run -d --net=alluxio_nw \
    --shm-size=${ALLUXIO_WORKER_RAMDISK_SIZE} \
    --name=alluxio-worker \
    -v ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.worker.ramdisk.size=${ALLUXIO_WORKER_RAMDISK_SIZE} -Dalluxio.master.hostname=alluxio-master" \
    alluxio/alluxio worker

MacOS Homebrew

$ brew install alluxio

Quick Start

Please follow the Guide to Get Started to run a simple example with Alluxio.

Report a Bug

To report bugs, suggest improvements, or request new features, please open a GitHub Issue. If you are not sure whether you have run into a bug or simply have a general question about Alluxio, post your question in the Alluxio Slack channel.

Depend on Alluxio

The Alluxio project provides several client artifacts for external projects to depend on the Alluxio client:

  • Artifact alluxio-shaded-client is generally recommended for projects using the Alluxio client. Its jar is self-contained (all dependencies are included in shaded form to prevent dependency conflicts) and is therefore larger than the following two artifacts.
  • Artifact alluxio-core-client-fs provides the Alluxio Java file system API to access all Alluxio-specific functionality. This artifact is included in alluxio-shaded-client.
  • Artifact alluxio-core-client-hdfs provides an HDFS-compatible file system API. This artifact is included in alluxio-shaded-client.

Here is an example of declaring a dependency on alluxio-shaded-client using Maven:

<dependency>
  <groupId>org.alluxio</groupId>
  <artifactId>alluxio-shaded-client</artifactId>
  <version>2.6.0</version>
</dependency>
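
For reference, here is a minimal, hypothetical sketch of reading and writing a file through the Alluxio Java file system API shipped in alluxio-shaded-client (the path, file contents, and class name are made up for illustration; consult the client API documentation for the authoritative interface):

import alluxio.AlluxioURI;
import alluxio.client.file.FileInStream;
import alluxio.client.file.FileOutStream;
import alluxio.client.file.FileSystem;

public class AlluxioClientExample {
  public static void main(String[] args) throws Exception {
    // Picks up cluster settings (e.g. alluxio.master.hostname) from alluxio-site.properties or -D options
    FileSystem fs = FileSystem.Factory.get();
    AlluxioURI path = new AlluxioURI("/example.txt");  // hypothetical path

    // Write a small file through Alluxio
    try (FileOutStream out = fs.createFile(path)) {
      out.write("hello alluxio".getBytes());
    }

    // Read it back
    byte[] buf = new byte[64];
    try (FileInStream in = fs.openFile(path)) {
      int n = in.read(buf);
      System.out.println(new String(buf, 0, n));
    }
  }
}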

Contributing

Contributions via GitHub pull requests are gladly accepted from their original authors. Along with any pull request, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means, you agree to license the material under the project's open source license and warrant that you have the legal authority to do so. For a more detailed step-by-step guide, please read how to contribute to Alluxio. For new contributors, please take two new contributor tasks.

For advanced feature requests and contributions, the Alluxio core team hosts regular online meetings with community users and developers to iterate on the project in two special interest groups:

  • Alluxio and AI workloads: e.g., running TensorFlow and PyTorch on Alluxio through the POSIX API. Check out the meeting notes
  • Alluxio and Presto workloads: e.g., running Presto on Alluxio. Check out the meeting notes

Subscribe to our public calendar to join us.

Useful Links

alluxio's People

Contributors

aaudiber, apc999, bf8086, bradyoo, calvinjia, dbw9580, dcapwell, dongche, gjhkael, gpang, haoyuan, horizonnet, hsaputra, ifcharming, jiacheliu3, jja725, jsimsa, luoli523, luqqiu, madanadit, maobaolong, peisun1115, riversand9, ronggu, saltylin, ssz1997, xenorith, yupeng9, yuzhu, zacblanco


alluxio's Issues

Do I need to add additional dependencies for tachyon-client?

I use Tachyon as a client in my application. I added the dependencies below to my Maven project.

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>2.10.4</scala.version>
    <akka.version>2.3.9</akka.version>
    <org.slf4j.version>1.7.5</org.slf4j.version>
    <org.tachyonproject.version>0.7.0</org.tachyonproject.version>
    <file_encoding>UTF-8</file_encoding>
</properties>

<dependencies>
    <dependency>
        <groupId>org.tachyonproject</groupId>
        <artifactId>tachyon-servers</artifactId>
        <version>${org.tachyonproject.version}</version>
    </dependency>
    <dependency>
        <groupId>org.tachyonproject</groupId>
        <artifactId>tachyon-client</artifactId>
        <version>${org.tachyonproject.version}</version>
    </dependency>
    <dependency>
        <groupId>org.tachyonproject</groupId>
        <artifactId>tachyon-common</artifactId>
        <version>${org.tachyonproject.version}</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-actors</artifactId>
        <version>${scala.version}</version>
    </dependency>
</dependencies>

When I run my application on a worker machine, it throws an exception:

15/07/31 14:23:35 INFO : Opening stream from underlayer fs: /local/server/tachyon-0.7.0/underFSStorage/tmp/tachyon/data/5
15/07/31 14:23:35 WARN : No Under File System Factory implementation supports the path /local/server/tachyon-0.7.0/underFSStorage/tmp/tachyon/data/5
Exception in thread "main" java.lang.IllegalArgumentException: No Under File System Factory found for: /local/server/tachyon-0.7.0/underFSStorage/tmp/tachyon/data/5
    at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:109)
    at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:99)
    at tachyon.client.RemoteBlockInStream.setupStreamFromUnderFs(RemoteBlockInStream.java:347)
    at tachyon.client.RemoteBlockInStream.read(RemoteBlockInStream.java:237)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at java.io.BufferedReader.fill(BufferedReader.java:154)
    at java.io.BufferedReader.read(BufferedReader.java:175)
    at scala.io.BufferedSource$$anonfun$iter$1$$anonfun$apply$mcI$sp$1.apply$mcI$sp(BufferedSource.scala:38)
    at scala.io.Codec.wrap(Codec.scala:68)
    at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38)
    at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38)
    at scala.collection.Iterator$$anon$9.next(Iterator.scala:162)
    at scala.collection.Iterator$$anon$17.hasNext(Iterator.scala:511)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.io.Source.hasNext(Source.scala:226)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.io.Source.foreach(Source.scala:178)
    at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:320)
    at scala.io.Source.addString(Source.scala:178)
    at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:286)
    at scala.io.Source.mkString(Source.scala:178)
    at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:288)
    at scala.io.Source.mkString(Source.scala:178)
    at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:290)
    at scala.io.Source.mkString(Source.scala:178)
    at org.test.Test$.main(Test.scala:43)
    at org.test.Test.main(Test.scala)

If I use the jar file "tachyon-assemblies-0.7.0-jar-with-dependencies.jar" from the Tachyon source package, it runs well. So I guess my question is: do I need to add additional dependencies for the Tachyon client?

TachyonWorker HeartbeatThread replaced with ScheduledExecutorService

TachyonWorker's mHeartbeatThread is implemented with a while-true loop that sleeps for a time interval. ScheduledExecutorService's scheduleAtFixedRate or scheduleWithFixedDelay methods can create and execute tasks with a fixed delay, which may be better than sleeping.
Is it a good idea? Thanks a lot.
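
For illustration, here is a rough, hypothetical sketch of that suggestion (the names are placeholders, not the actual TachyonWorker code): the while-true/sleep loop is replaced by a ScheduledExecutorService that runs the heartbeat with a fixed delay.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatSchedulerSketch {
  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Instead of `while (true) { heartbeat(); Thread.sleep(intervalMs); }`,
    // schedule the heartbeat with a fixed delay between the end of one run
    // and the start of the next.
    scheduler.scheduleWithFixedDelay(
        () -> System.out.println("worker heartbeat"),  // placeholder for the real heartbeat call
        0, 1, TimeUnit.SECONDS);

    Thread.sleep(5_000);      // let the demo run a few heartbeats
    scheduler.shutdownNow();  // analogous to stopping the heartbeat thread
  }
}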

[alluxio.logger.type] - java.net.BindException: Address already in use: connect

2016-04-01 13:49:15,625 ERROR [alluxio.logger.type] - java.net.SocketException: No buffer space available (maximum connections reached?): connect
alluxio.org.apache.thrift.transport.TTransportException: java.net.SocketException: No buffer space available (maximum connections reached?): connect
at alluxio.org.apache.thrift.transport.TSocket.open(TSocket.java:226)
at alluxio.org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at alluxio.client.block.BlockWorkerClient.connectOperation(BlockWorkerClient.java:210)
at alluxio.client.block.BlockWorkerClient.connect(BlockWorkerClient.java:304)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:291)
at alluxio.client.block.BlockWorkerClient.sessionHeartbeat(BlockWorkerClient.java:408)
at alluxio.client.block.BlockWorkerClient.periodicHeartbeat(BlockWorkerClient.java:424)
at alluxio.client.block.BlockWorkerClientHeartbeatExecutor.heartbeat(BlockWorkerClientHeartbeatExecutor.java:34)
at alluxio.client.block.BlockWorkerClient.beforeDisconnect(BlockWorkerClient.java:169)
at alluxio.AbstractClient.disconnect(AbstractClient.java:197)
at alluxio.AbstractClient.close(AbstractClient.java:221)
at alluxio.client.block.BlockStoreContext.releaseWorkerClient(BlockStoreContext.java:249)
at alluxio.client.block.AlluxioBlockStore.promote(AlluxioBlockStore.java:253)
at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:384)
at alluxio.client.file.FileInStream.checkAndAdvanceBlockInStream(FileInStream.java:236)
at alluxio.client.file.FileInStream.read(FileInStream.java:161)
at org.apache.avro.io.BinaryDecoder$InputStreamByteSource.readRaw(BinaryDecoder.java:824)
at org.apache.avro.io.BinaryDecoder.doReadBytes(BinaryDecoder.java:349)
at org.apache.avro.io.BinaryDecoder.readFixed(BinaryDecoder.java:302)
at org.apache.avro.io.Decoder.readFixed(Decoder.java:150)
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:100)
at org.apache.avro.file.DataFileStream.<init>(DataFileStream.java:84)
at alluxio.Test.testRead(Test.java:47)
at alluxio.Test.main(Test.java:35)
Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:73)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at alluxio.org.apache.thrift.transport.TSocket.open(TSocket.java:221)
... 23 more

I don't know how to set the worker's host

Dear team,
Since the way to get the worker host has been changed to:
String resolvedWorkerHost = NetworkUtils.getLocalHostName();
LOG.info("Resolved local TachyonWorker host to " + resolvedWorkerHost);
I do not know how to set the worker's host to the value I want. Is there a better solution?

Thanks

loadufs is not ignoring /tachyon directory

Added more debug info to see what is happening:

Current LIST: [/tachyon]
Path to check outList = /
Path to check outList = hdfs://stanley///tachyon
Path to check outList = hdfs://stanley///tmp
Path to check outList = hdfs://stanley/tachyon//tachyon/data
Path to check outList = hdfs://stanley/tachyon//tachyon/journal
Path to check outList = hdfs://stanley/tachyon//tachyon/workers
Path to check outList = hdfs://stanley/tmp//tmp/xyz
Path to check outList = hdfs://stanley/tachyon/data//tachyon/data/2
Path to check outList = hdfs://stanley/tachyon/journal//tachyon/journal/_format_1392074367056
Path to check outList = hdfs://stanley/tachyon/journal//tachyon/journal/image.data
Path to check outList = hdfs://stanley/tachyon/journal//tachyon/journal/log.data
Path to check outList = hdfs://stanley/tachyon/workers//tachyon/workers/1392068000011
Path to be included = /tmp/xyz
Path to be included = /tachyon/data/2
Path to be included = /tachyon/journal/_format_1392074367056
Path to be included = /tachyon/journal/image.data
Path to be included = /tachyon/journal/log.data

How to determine whether a file exists?

How do I determine whether a file exists? Why was the function "client.exist(...)" deprecated?

TachyonFileSystem tfs = TachyonFileSystemFactory.get();
FileInfo fileInfo = tfs.getInfo(file);
if (fileInfo.getFileId() != null) {
    return true;
} else {
    return false;
}
Is the above correct?

If the argument is a String file path, how should I determine whether the file exists?
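
As a side note, newer Alluxio client releases expose a direct existence check; below is a minimal sketch, assuming the alluxio-core-client-fs FileSystem API rather than the old TachyonFileSystem used above (the path is hypothetical):

import alluxio.AlluxioURI;
import alluxio.client.file.FileSystem;

public class ExistsCheckSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.Factory.get();
    AlluxioURI path = new AlluxioURI("/path/to/check");  // hypothetical path
    // exists() returns false for a missing path instead of throwing
    System.out.println("exists? " + fs.exists(path));
  }
}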

Would you please add http server for tachyon?

Would you please add an HTTP server for Tachyon?
Could we use Tachyon as distributed image storage?
We would need an http/https/http2 server so that users can access these files via http/https/http2 requests.
For example, we store an image called demo.jpg in Tachyon.
We want to access the file via

http://ip-address:8080/path/to/demo.jpg

About starting Tachyon on EC2

Hey all,

I'm working on deploying Tachyon on EC2, but found the error below. Seems like a bug?
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

By the way, I modified some values in conf/ec2.yml with my preferred settings, like ami, vpc-id, subnet-id, az, keypair, and keypath.

I am sure the key exists on my local machine (Mac OS, ~/.ssh/tachyon.pem).
Any suggestions or support for this case? Thanks in advance.

Error logs#############################

An unexpected error occurred when executing the action on the
'TachyonMaster' machine. Please report this as a bug:

The key pair 'tachyon.pem' does not exist

/var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/middlewares/expects.rb:6:in response_call' /var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/middlewares/response_parser.rb:8:inresponse_call'
/var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/connection.rb:372:in response' /var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/connection.rb:236:inrequest'
/var/root/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/sax_parser_connection.rb:35:in request' /var/root/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/connection.rb:7:inrequest'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/compute.rb:525:in _request' /var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/compute.rb:520:inrequest'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/requests/compute/run_instances.rb:139:in run_instances' /var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/models/compute/servers.rb:158:insave_many'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/models/compute/server.rb:201:in save' /var/root/.vagrant.d/gems/gems/fog-core-1.35.0/lib/fog/core/collection.rb:51:increate'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/run_instance.rb:102:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/elb_register_instance.rb:16:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/warn_networks.rb:14:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/synced_folders.rb:86:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/provision.rb:80:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:95:in block in finalize_action' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in block in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:inbusy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/call.rb:53:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/connect_aws.rb:43:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/config_validate.rb:25:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/handle_box.rb:56:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in block in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:inbusy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:214:inaction_raw'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:191:in block in action' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/environment.rb:516:inlock'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:inaction'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'

An unexpected error occurred when executing the action on the
'TachyonWorker1' machine. Please report this as a bug:

The key pair 'tachyon.pem' does not exist

/var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/middlewares/expects.rb:6:in response_call' /var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/middlewares/response_parser.rb:8:inresponse_call'
/var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/connection.rb:372:in response' /var/root/.vagrant.d/gems/gems/excon-0.45.4/lib/excon/connection.rb:236:inrequest'
/var/root/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/sax_parser_connection.rb:35:in request' /var/root/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/connection.rb:7:inrequest'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/compute.rb:525:in _request' /var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/compute.rb:520:inrequest'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/requests/compute/run_instances.rb:139:in run_instances' /var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/models/compute/servers.rb:158:insave_many'
/var/root/.vagrant.d/gems/gems/fog-aws-0.7.6/lib/fog/aws/models/compute/server.rb:201:in save' /var/root/.vagrant.d/gems/gems/fog-core-1.35.0/lib/fog/core/collection.rb:51:increate'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/run_instance.rb:102:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/elb_register_instance.rb:16:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/warn_networks.rb:14:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/synced_folders.rb:86:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/provision.rb:80:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:95:in block in finalize_action' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in block in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:inbusy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/call.rb:53:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /var/root/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/connect_aws.rb:43:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/config_validate.rb:25:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/handle_box.rb:56:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:incall'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in block in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:inbusy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in run' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:214:inaction_raw'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:191:in block in action' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/environment.rb:516:inlock'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:in call' /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:inaction'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'

About writing files

Well, I see that at present the append operation is not supported. But there seems to be no way to prevent a user from getting and writing the file again (and the worse case is multiple writes, possibly in a map/reduce fail-and-retry environment?).

In any of these cases the file is corrupted: blocks are lost, multiple copies of blocks end up on different worker nodes, or blocks contain data from different writes. In a word, it does not work.

So, though at present the only workable usage seems to me to be a one-time, single-client write, we probably need at least a check to prevent users from corrupting their files.

Though a local fs might behave somewhat similarly, at least it provides a write type that leads users to think about what they are doing. Moreover, this is a distributed environment, so this behavior does not seem exactly right; e.g., HDFS prevents a second write stream.

TFsShell methods copyFromLocal and touch are using the full URI instead of the path

Like mkdir, the copyFromLocal and touch methods should pass a path to the tachyonClient. Current state of TFsShell#touch:

    String path = argv[1];
    String file = Utils.getFilePath(path);
    TachyonFS tachyonClient = TachyonFS.get(Utils.validatePath(path));
    TachyonFile tFile = tachyonClient.getFile(tachyonClient.createFile(path));

As you can see, the file variable is not being used; instead the full URI (path) is passed.
If I patch the last line as follows: TachyonFile tFile = tachyonClient.getFile(tachyonClient.createFile(file)); it works.

Usage of try cache?

I don't see what the use of try cache might be. It just logs a warning and, other than that, silently goes through the flow with probably no data, or only part of the data, actually written. Users probably never know their data is corrupted in this mode (I just can't imagine a useful case where they don't care about this). And since there is already must cache, shouldn't this silent-ignore case be handled by users themselves on top of must cache mode? Moreover, they could then choose whether they want to clean up the file or just leave the file as it is, like the current behavior.

Delete file does not respect lock?

It seems to me that the current lockBlock op only prevents a block from being removed by LRU eviction, while, if a user issues a delete-file command, the worker storage side code does not take the lock status into consideration.

Redundant HdfsFileIn/OutputStream?

Hi

It seems to me that, in TFS, FileOutStream is used directly, while FileInStream is wrapped by HdfsFileInStream and passed to the Hadoop FSDataInputStream, and HdfsFileOutStream is not used.

However, isn't the FileInStream interface enough to construct a Hadoop FSDataInputStream? Shouldn't the extra methods beyond those defined by the Hadoop InputStream be implemented by FSDataInputStream? Or is there some defect in FileInStream that would break the chain?

TachyonFile ReadByteBuffer vs. getInStream usage

Are these two read approaches just meant to provide alternatives for the user? I am not sure what the design goal is, but getInStream can handle loading data from the underfs when the remote worker does not have the data either, while ReadByteBuffer does not seem to have this feature. Should they be matched, or is there a specific reason for doing it this way?

0.4.0 Tests.

When updating everything to 0.4.0, I consistently see an error when Worker.stop() is called inside LocalTachyonCluster.java.

Running on OpenJDK 7 against F20 dependencies.


Uncaught java.lang.ThreadDeath exception in thread "Thread-21" in a method java.lang.Thread.sleep() with signature (J)V
Exception in thread "Thread-21" java.lang.ThreadDeath
at java.lang.Thread.stop(Thread.java:835) [jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.3.0.fc20.x86_64/jre/lib/rt.jar!/java/lang/Thread.class]
at tachyon.Worker.stop(Worker.java:163) [file:/home/tstclair/work/spaces/tachyon/tachyon-rpm/tachyon-e536d57d321718b525c14ab5b143a3318325523e/target/classes/tachyon/Worker.class]
at tachyon.LocalTachyonCluster.stop(LocalTachyonCluster.java:160) [file:/home/tstclair/work/spaces/tachyon/tachyon-rpm/tachyon-e536d57d321718b525c14ab5b143a3318325523e/target/test-classes/tachyon/LocalTachyonCluster.class]
at tachyon.MasterInfoTest.after(MasterInfoTest.java:38) [file:/home/tstclair/work/spaces/tachyon/tachyon-rpm/tachyon-e536d57d321718b525c14ab5b143a3318325523e/target/test-classes/tachyon/MasterInfoTest.class]

Licensing Checks

The files listed below are missing license clauses (ASL2)

report generated via:
mock --clean --init -r fedora-rawhide-x86_64 && fedora-review -m fedora-rawhide-x86_64 -n tachyon

Instructions listed: https://fedoraproject.org/wiki/SIGs/bigdata/packaging/Tachyon#Building_RPM


Unknown or generated

/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/clear-cache.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/format.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/killall.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/mount-ramfs-linux.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/mount-ramfs-mac.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/mount-ramfs.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/mount.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/mvn-sync.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/restart-failed-worker.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/restart-failed-workers.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/run-all-tests.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/run-tests.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/slaves.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/start-local.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/start-master.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/start-safe.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/start-worker.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/start.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/stop.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/tachyon-config.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/tachyon-ls.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/tachyon-rm.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/bin/thrift-gen.sh
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/BlockInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/CheckpointInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/CommonUtils.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Constants.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/DataServer.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/DataServerMessage.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Format.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/HeartbeatExecutor.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/HeartbeatThread.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Inode.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/InodeFile.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/InodeFolder.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/InodeRawTable.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/InodeType.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/KryoFactory.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Log4jFileAppender.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/LogType.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Master.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterClient.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterClientHeartbeatExecutor.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterLogReader.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterLogWriter.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterServiceHandler.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/MasterWorkerInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Pair.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/PrefixList.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/SubsumeHdfs.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/UnderFileSystem.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/UnderFileSystemHdfs.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/UnderFileSystemSingleLocal.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/UserInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Users.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Version.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/Worker.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/WorkerClient.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/WorkerClientHeartbeatExecutor.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/WorkerServiceHandler.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/WorkerSpaceCounter.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/WorkerStorage.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/BlockInStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/BlockOutStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/EmptyBlockInStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/FileInStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/FileOutStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/InStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/OutStream.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/RawColumn.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/RawTable.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/ReadType.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/TachyonFS.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/TachyonFile.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/client/WriteType.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/command/TFsShell.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/command/Utils.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/conf/CommonConf.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/conf/MasterConf.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/conf/UserConf.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/conf/Utils.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/conf/WorkerConf.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/examples/BasicOperations.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/examples/BasicRawTableOperations.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/examples/Performance.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/hadoop/TFS.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/hadoop/TFileInputStreamHdfs.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/hadoop/TFileOutputStreamHdfs.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/hadoop/Utils.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/BlockInfoException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/ClientBlockInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/ClientFileInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/ClientRawTableInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/ClientWorkerInfo.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/Command.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/CommandType.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/FailedToCheckpointException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/FileAlreadyExistException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/FileDoesNotExistException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/InvalidPathException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/MasterService.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/NetAddress.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/NoLocalWorkerException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/OutOfMemoryForPinFileException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/SuspectedFileSizeException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/TableColumnException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/TableDoesNotExistException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/TachyonException.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/thrift/WorkerService.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/web/UIWebServer.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/web/WebInterfaceBrowseServlet.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/web/WebInterfaceGeneralServlet.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/main/java/tachyon/web/WebInterfaceMemoryServlet.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/BlockInfoTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/DataServerTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/InodeFileTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/InodeFolderTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/InodeRawTableTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/LocalTachyonCluster.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/MasterClientTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/MasterInfoTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/MasterLogWriterTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/TestUtils.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/UserInfoTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/WorkerServiceHandlerTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/BlockInStreamTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/FileInStreamTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/FileOutStreamTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/RawColumnTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/RawTableTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/TachyonFSTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/client/TachyonFileTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/command/TFsShellTest.java
/var/lib/mock/fedora-rawhide-x86_64/root/builddir/build/BUILD/tachyon-0.3.0/src/test/java/tachyon/hadoop/HadoopCompatibleFSTest.java

Version 0.6.4 error: could not find or load main class tachyon.Format

tachyon-0.6.4]# mvn -Dhadoop.version=2.3.0 clean install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] Tachyon Project Parent
[INFO] Tachyon Project Core
[INFO] Tachyon Project Client
[INFO] Tachyon Project Assemblies
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Tachyon Project Parent 0.6.4
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ tachyon-parent ---
[INFO] Deleting /home/cloudwave/tachyon-0.6.4/target
[INFO]
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ tachyon-parent ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.13:check (checkstyle) @ tachyon-parent ---
[INFO] Starting audit...
Audit done.

[INFO]
[INFO] --- license-maven-plugin:2.9:check (default) @ tachyon-parent ---
[INFO] Checking licenses...
[INFO]
[INFO] >>> maven-source-plugin:2.3:jar (attach-sources) > generate-sources @ tachyon-parent >>>
[INFO]
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ tachyon-parent ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.13:check (checkstyle) @ tachyon-parent ---
[INFO] Starting audit...
Audit done.

[INFO]
[INFO] --- license-maven-plugin:2.9:check (default) @ tachyon-parent ---
[INFO] Checking licenses...
[INFO]
[INFO] <<< maven-source-plugin:2.3:jar (attach-sources) < generate-sources @ tachyon-parent <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:jar (attach-sources) @ tachyon-parent ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.9:jar (attach-javadoc) @ tachyon-parent ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ tachyon-parent ---
[INFO] Installing /home/cloudwave/tachyon-0.6.4/pom.xml to /root/.m2/repository/org/tachyonproject/tachyon-parent/0.6.4/tachyon-parent-0.6.4.pom
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Tachyon Project Core 0.6.4
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ tachyon ---
[INFO] Deleting /home/cloudwave/tachyon-0.6.4/core/target
[INFO]
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ tachyon ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.13:check (checkstyle) @ tachyon ---
[INFO] Starting audit...
Audit done.

[INFO]
[INFO] --- license-maven-plugin:2.9:check (default) @ tachyon ---
[INFO] Checking licenses...
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ tachyon ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ tachyon ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 163 source files to /home/cloudwave/tachyon-0.6.4/core/target/classes
[WARNING] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/util/CommonUtils.java:[43,16] sun.misc.Cleaner is an internal proprietary API and may be removed in a future release
[WARNING] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/util/CommonUtils.java:[44,18] sun.nio.ch.DirectBuffer is an internal proprietary API and may be removed in a future release
[WARNING] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/util/CommonUtils.java:[126,7] sun.misc.Cleaner is an internal proprietary API and may be removed in a future release
[WARNING] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/util/CommonUtils.java:[126,27] sun.nio.ch.DirectBuffer is an internal proprietary API and may be removed in a future release
[INFO] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/UnderFileSystemHdfs.java: Some input files use or override a deprecated API.
[INFO] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/UnderFileSystemHdfs.java: Recompile with -Xlint:deprecation for details.
[INFO] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/thrift/ClientFileInfo.java: Some input files use unchecked or unsafe operations.
[INFO] /home/cloudwave/tachyon-0.6.4/core/src/main/java/tachyon/thrift/ClientFileInfo.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ tachyon ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ tachyon ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 61 source files to /home/cloudwave/tachyon-0.6.4/core/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.14:test (default-test) @ tachyon ---
[INFO] Surefire report directory: /home/cloudwave/tachyon-0.6.4/core/target/surefire-reports


T E S T S

Running tachyon.master.DependencyTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.168 sec
Running tachyon.master.InodeFileTest
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.047 sec
Running tachyon.master.EditLogOperationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec
Running tachyon.master.RawTablesTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec
Running tachyon.master.BlockInfoTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running tachyon.master.PinTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.962 sec
Running tachyon.master.InodeFolderTest
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 sec
Running tachyon.master.JournalTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 294.573 sec
Running tachyon.master.MasterInfoTest
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 316.419 sec
Running tachyon.master.MasterFaultToleranceTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.624 sec
Running tachyon.master.MasterClientTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.481 sec
Running tachyon.PrefixListTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec
Running tachyon.TachyonURITest
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec
Running tachyon.PairTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running tachyon.UserInfoTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running tachyon.io.WriterReaderTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running tachyon.io.UtilsTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running tachyon.command.UtilsTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running tachyon.command.TFsShellTest
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 302.889 sec
Running tachyon.hadoop.fs.TestDFSIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.085 sec
Running tachyon.hadoop.UtilsTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running tachyon.hadoop.GlusterFSTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running tachyon.hadoop.TFSTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.582 sec
Running tachyon.hadoop.HdfsFileInputStreamTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.662 sec
Running tachyon.util.CommonUtilsTest
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.595 sec
Running tachyon.util.UnderfsUtilsTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.503 sec
Running tachyon.util.NetworkUtilsTest
Tests run: 3, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec <<< FAILURE!
replaceHostNameTest(tachyon.util.NetworkUtilsTest) Time elapsed: 0.006 sec <<< FAILURE!
java.lang.AssertionError: expected:<hdfs://localhost.localdomain:9000/dir> but was:<hdfs://localhost:9000/dir>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at tachyon.util.NetworkUtilsTest.replaceHostNameTest(NetworkUtilsTest.java:40)

resolveHostNameTest(tachyon.util.NetworkUtilsTest) Time elapsed: 0 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:<localhost[.localdomain]> but was:<localhost[]>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at tachyon.util.NetworkUtilsTest.resolveHostNameTest(NetworkUtilsTest.java:48)

getFqdnHostTest(tachyon.util.NetworkUtilsTest) Time elapsed: 0 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:<localhost[.localdomain]> but was:<localhost[]>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at tachyon.util.NetworkUtilsTest.getFqdnHostTest(NetworkUtilsTest.java:53)

Running tachyon.worker.WorkerServiceHandlerTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.15 sec
Running tachyon.worker.BlockHandlerLocalTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.647 sec
Running tachyon.worker.WorkerStorageTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.667 sec
Running tachyon.worker.hierarchy.EvictStrategyTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.822 sec
Running tachyon.worker.hierarchy.HierarchyStoreTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.845 sec
Running tachyon.worker.hierarchy.AllocateStrategyTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.068 sec
Running tachyon.worker.hierarchy.StorageTierTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.933 sec
Running tachyon.worker.hierarchy.StorageDirTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.511 sec
Running tachyon.worker.DataServerTest
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.683 sec
Running tachyon.conf.UtilsTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running tachyon.UnderFileSystemTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running tachyon.client.LocalBlockInStreamTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.14 sec
Running tachyon.client.TachyonFSTest
Tests run: 30, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.2 sec <<< FAILURE!
toStringTest(tachyon.client.TachyonFSTest) Time elapsed: 0.001 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:tachyon://localhost[]/127.0.0.1:19998 but was:tachyon://localhost[.localdomain]/127.0.0.1:19998
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at tachyon.client.TachyonFSTest.toStringTest(TachyonFSTest.java:355)

Running tachyon.client.TachyonFileUpdateTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.818 sec
Running tachyon.client.FileInStreamTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.627 sec
Running tachyon.client.TachyonFileTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.559 sec
Running tachyon.client.table.RawTableTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.029 sec
Running tachyon.client.table.RawColumnTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.014 sec
Running tachyon.client.BlockInStreamTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.122 sec
Running tachyon.client.FileOutStreamTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.663 sec
Running tachyon.client.RemoteBlockInStreamTest
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.722 sec
Running tachyon.retry.ExponentialBackoffRetryTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec

Results :

Failed tests:
NetworkUtilsTest.replaceHostNameTest:40 expected:hdfs://localhost.localdomain:9000/dir but was:hdfs://localhost:9000/dir
NetworkUtilsTest.resolveHostNameTest:48 expected:<localhost[.localdomain]> but was:<localhost[]>
NetworkUtilsTest.getFqdnHostTest:53 expected:<localhost[.localdomain]> but was:<localhost[]>
TachyonFSTest.toStringTest:355 expected:tachyon://localhost[]/127.0.0.1:19998 but was:tachyon://localhost[.localdomain]/127.0.0.1:19998

Tests run: 351, Failures: 4, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Tachyon Project Parent ............................. SUCCESS [ 4.509 s]
[INFO] Tachyon Project Core ............................... FAILURE [25:42 min]
[INFO] Tachyon Project Client ............................. SKIPPED
[INFO] Tachyon Project Assemblies ......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:47 min
[INFO] Finished at: 2015-05-25T16:45:23+08:00
[INFO] Final Memory: 51M/1365M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.14:test (default-test) on project tachyon: There are test failures.
[ERROR]
[ERROR] Please refer to /home/cloudwave/tachyon-0.6.4/core/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :tachyon

Test Failures in TFsShellTest when using Thrift 0.9 on v0.2.1

I'm still digging through the details, but they all seem to revolve around expected exceptions.

e.g.
tachyon.command.TFsShellTest
mkdirInvalidPathTest(tachyon.command.TFsShellTest) Time elapsed: 0.103 sec <<< FAILURE!
java.lang.AssertionError: Expected exception: tachyon.thrift.InvalidPathException
at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:32)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

refactor tachyon-env.sh

tachyon-env.sh serves two purposes: setting the Java executable path and setting Java command-line options. While we still need the first function, the static Java options need to move out of there.

We really need a key/value style of configuration, either JSON pairs or something like Hadoop's core-site.xml, that contains the runtime configuration. This helps in the following areas:

  1. Construct the Configuration object from well-structured key/value pairs, so we no longer have to conditionally parse system properties when constructing the UnderFileSystem.

  2. Change configuration dynamically, something static Java command-line options cannot do (see the sketch below).
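As an illustration only (the class name TachyonConfSketch and the file conf/tachyon-site.properties are hypothetical, not part of the project), here is a minimal Java sketch of the kind of key/value loader this proposal has in mind:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public final class TachyonConfSketch {
  private final Properties mProps = new Properties();

  /** Loads runtime settings from a simple key=value properties file. */
  public TachyonConfSketch(String path) throws IOException {
    try (FileInputStream in = new FileInputStream(path)) {
      mProps.load(in);
    }
  }

  /** Returns the configured value, or the default if the key is absent. */
  public String get(String key, String defaultValue) {
    return mProps.getProperty(key, defaultValue);
  }

  public static void main(String[] args) throws IOException {
    // e.g. a file containing: tachyon.underfs.address=hdfs://namenode:9000
    TachyonConfSketch conf = new TachyonConfSketch("conf/tachyon-site.properties");
    System.out.println(conf.get("tachyon.underfs.address", "/tmp"));
  }
}

A file like this can also be re-read at runtime, which addresses the second point above in a way static command-line options cannot.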

Improve the LRU algorithm?

It seems to me that the current approach to finding blocks for eviction is to loop through the list to find the oldest block, and then loop again. Might this benefit from some kind of sorted structure? A rough sketch follows.
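As a sketch only (block IDs and sizes here are hypothetical, and this is not the project's evictor), keeping blocks in access order avoids the repeated linear scans; java.util.LinkedHashMap with accessOrder=true already maintains that ordering:

import java.util.LinkedHashMap;
import java.util.Map;

public final class LruBlockIndexSketch {
  // accessOrder=true moves an entry to the tail on every get()/put(),
  // so the first entry is always the least recently used block.
  private final LinkedHashMap<Long, Long> mBlockSizes =
      new LinkedHashMap<Long, Long>(16, 0.75f, true);

  public synchronized void touch(long blockId, long sizeBytes) {
    mBlockSizes.put(blockId, sizeBytes);
  }

  /** Returns the least recently used block id, or null if there are no blocks. */
  public synchronized Long pickVictim() {
    for (Map.Entry<Long, Long> entry : mBlockSizes.entrySet()) {
      return entry.getKey();
    }
    return null;
  }

  public synchronized void remove(long blockId) {
    mBlockSizes.remove(blockId);
  }
}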

Split core into multiple jars

All the code is in core, so when a user depends on it, all server-related dependencies get pulled in. The current client module tries to address this by filtering out dependencies, but that isn't enough, since it also ships code (server code) that may not work without those dependencies.

I propose that we split core into three modules: client (should this be named tachyon for backwards compatibility?), server, and example; these modules should contain their respective classes.

With this split, it should be easier for users to use the client libraries directly, since the example module would function like user code and dependencies would be reduced to the bare minimum.

RFC: C/Python binding

To expand the ecosystem, we need to provide C/Python bindings so projects that are written in other languages can use Tachyon.

Permissions issues on tachyon.worker.data.folder (/mnt/ramdisk/tachyonworker)

Issues #90 and #91 were actually side effects of starting the daemons as different system/daemon users that did not have permission to write to tachyon.worker.data.folder. For now, it makes sense to have the daemons share a common group, to create subfolders with 775 permissions (vs. 755), and to allow administrators to set the group parameter.

e.g.
user:hdfs (namenode & datanode)
user:yarn (resourcemanager & nodemanager)
user:tachyon|hdfs (master & slave)

The shared group would be 'hadoop' or 'hdfs'.

At present I can verify that this all works when set up by hand.

The issue is that the user 'yarn' fails when trying to write to /mnt/ramdisk/tachyonworker/users.
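For illustration only (the path and the group name "hadoop" are examples, and this is not project code), creating the worker directory with 775 permissions and a shared group looks roughly like this in Java:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.GroupPrincipal;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFilePermissions;

public final class WorkerDirSetupSketch {
  public static void main(String[] args) throws IOException {
    Path usersDir = Paths.get("/mnt/ramdisk/tachyonworker/users");
    Files.createDirectories(usersDir);

    // rwxrwxr-x corresponds to the proposed 775 permissions.
    Files.setPosixFilePermissions(usersDir,
        PosixFilePermissions.fromString("rwxrwxr-x"));

    // Assign the shared group (requires sufficient privileges to change group).
    GroupPrincipal group = usersDir.getFileSystem()
        .getUserPrincipalLookupService().lookupPrincipalByGroupName("hadoop");
    Files.getFileAttributeView(usersDir, PosixFileAttributeView.class)
        .setGroup(group);
  }
}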

Tachyon's performance question

We want to use Tachyon, so we evaluated its performance with TestDFSIO.java from hadoop-mapreduce-client-jobclient-2.6.0-tests.jar. We found that Tachyon was a bit better than HDFS (with the OS cache) in local sequential read and write, roughly equal in remote read, and worse in local random read and backward read. Can we improve Tachyon's raw read and write performance by modifying some configuration items or by other means, or is our test method incorrect?

Test Result

1. Initial local read, 24 x 1 GB files (unit: seconds)

                      sequence read   random read                                 backward read
Tachyon on HDFS       144.86          failed                                      failed
Tachyon on own FS     78.357          245.057 (the file read wasn't in memory)    failed
HDFS                  128.598         167.697                                     329.518

2. Local read with files cached in memory, 24 x 1 GB files (unit: seconds)

           sequence read   random read   backward read
Tachyon    11.649          52.79         46.877
HDFS       15.6            27.889        27.412

3. Remote read, 12 x 1 GB files (unit: seconds)

           time
Tachyon    1113.811
HDFS       1078.591

4. Write, 24 x 1 GB files, Tachyon write type CACHE_THROUGH, HDFS dfs.replication = 1 (unit: seconds)

                      time
Tachyon on HDFS       99.39
Tachyon on own FS     73.042
HDFS                  76.212

Test Environment

Master: blade server with CPU 1.9 GHz x 24 and RAM 16 GB x 2, running NameNode, DataNode, Tachyon master, and Tachyon worker
Slave1: blade server with CPU 1.9 GHz x 24 and RAM 16 GB x 1, running DataNode and Tachyon worker
Slave2: blade server with CPU 1.9 GHz x 24 and RAM 16 GB x 1, running DataNode and Tachyon worker
Software version: Tachyon 0.6.3, Hadoop 2.6.0

Test Method

We modified TestDFSIO.java, adding an -fs parameter to pass in our Tachyon filesystem, and tested Tachyon like this (refer to my GitHub for details):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -libjars /home/michael/tachyon/tachyon-0.6.3/client/target/tachyon-client-0.6.3-jar-with-dependencies.jar -write -nrFiles 36 -fileSize 1000 -fs tachyon://master:19998

[ERROR] Alternative tachyon.thrift.FileDoesNotExistException is a subclass of alternative org.apache.thrift.TException

While updating to the latest sources, I receive a compilation error on the modified catch statement from merged pull request #65, on line 486 of TachyonFS.java.

[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.261s
[INFO] Finished at: Wed Dec 04 21:09:44 CST 2013
[INFO] Final Memory: 22M/247M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project tachyon: Compilation failure
[ERROR] /home/tstclair/work/packaging/fedora/amplab-tachyon/tachyon-6a0a12d8c1f64dab1d084690cd4716fa77813e6b/src/main/java/tachyon/client/TachyonFS.java:[486,63] Alternatives in a multi-catch statement cannot be related by subclassing
[ERROR] Alternative tachyon.thrift.FileDoesNotExistException is a subclass of alternative org.apache.thrift.TException
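As an illustrative fragment only (doThriftCall and the IOException wrapping are hypothetical, not the actual TachyonFS code around line 486): with Thrift 0.9 the generated tachyon.thrift.FileDoesNotExistException extends org.apache.thrift.TException, and javac rejects a multi-catch whose alternatives are related by subclassing. Two ways to restructure the catch:

// Rejected by javac, because the alternatives are related by subclassing:
//   catch (FileDoesNotExistException | TException e) { ... }

// Option 1: the superclass already covers the subclass, so catch it alone.
try {
  doThriftCall();
} catch (TException e) {
  throw new IOException(e.getMessage(), e);
}

// Option 2: separate catch blocks, if the subclass needs different handling.
try {
  doThriftCall();
} catch (FileDoesNotExistException e) {
  // e.g. treat "file does not exist" as a non-fatal result
} catch (TException e) {
  throw new IOException(e.getMessage(), e);
}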

Link github with Jira

Now that we have commits referencing Jira tickets, it would be better to link them for easier tracking.

Specifying a bad port for TACHYON_MASTER_ADDRESS can cause an OutOfMemoryError

If I specify my TACHYON_MASTER_ADDRESS with port 19999 instead of 19998 then I get the following after a few seconds:

Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:679)
    at tachyon.master.MasterClient.connect(MasterClient.java:163)
    at tachyon.master.MasterClient.worker_register(MasterClient.java:780)
    at tachyon.worker.WorkerStorage.<init>(WorkerStorage.java:298)
    at tachyon.worker.Worker.<init>(Worker.java:65)
    at tachyon.worker.Worker.createWorker(Worker.java:167)
    at tachyon.worker.Worker.main(Worker.java:218)

My setup is a bit odd, so I'm not sure how reproducible this is, but it seems bad regardless.

0.4.0 MR write cache Exception

When running (as any user):

hadoop jar wordcount.jar org.myorg.WordCount tachyon://localhost:19998/user/tstclair/input tachyon://localhost:19998/foobar1

I see the following error on write:

13/12/17 13:18:52 INFO mapreduce.Job: Task Id : attempt_1387216123041_0036_r_000000_0, Status : FAILED
Error: java.io.IOException: Can not write cache.
at tachyon.client.BlockOutStream.write(BlockOutStream.java:159)
at tachyon.client.FileOutStream.write(FileOutStream.java:128)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:59)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:81)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:96)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:511)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:440)
at org.myorg.WordCount$Reduce.reduce(WordCount.java:83)
at org.myorg.WordCount$Reduce.reduce(WordCount.java:77)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

MR Input caching.

When running:

$ hadoop jar wordcount.jar org.myorg.WordCount tachyon://localhost:19998/user/tstclair/input /user/tstclair/output2

I would have expected that providing a Tachyon URL for the input would cause the input splits to be cached. However, when running:

$ sudo runuser tachyon -s /bin/bash /bin/bash -c "tachyon.sh tfs ls /user/tstclair/input"
45.06 KB 12-16-2013 13:55:09:693 Not In Memory /user/tstclair/input/constitution.txt

The "Input Data" does not appear to be cached. Given the above 'hadoop command' I fully expect the output to not be cached, but I'm curious about the input.

Compare Tachyon with HDFS

Hi all,
From the Tachyon paper (SoCC '14), I can see that the paper compares Tachyon with MemHDFS, and as it says, MemHDFS runs over ramfs. How do I install MemHDFS (over ramfs)? I configured "dfs.data.dir" to use a ramfs directory, but it does not work. Do you have any ideas about it?

Not able to run wordcount with Tachyon

I am trying to run Hadoop WordCount on Tachyon. I followed this link, but when I run the wordcount jar with the command below

hadoop jar HadoopWordCount-0.0.1-SNAPSHOT-jar-with-dependencies.jar edu.WordCount -libjars tachyon-client-0.7.1-jar-with-dependencies.jar tachyon://tachyon_ip:19998/wordcountsample/word2.txtaa /OUT/wcTachyon

Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class tachyon.hadoop.TFS not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2112)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2578)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)

Can you please let me know what the issue is?
I have set the properties in core-site and hdfs-site.
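A hedged sketch of the usual fix (the property name fs.tachyon.impl and the class tachyon.hadoop.TFS are the ones documented for Tachyon 0.7.x; adjust to your version): the tachyon:// scheme must be mapped to the client FileSystem class, and the tachyon-client jar must be on the classpath of both the submitting JVM and the tasks, not only passed via -libjars.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class TfsSchemeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Map the tachyon:// scheme to its FileSystem implementation; this can
    // also be set in core-site.xml.
    conf.set("fs.tachyon.impl", "tachyon.hadoop.TFS");

    // This lookup fails with ClassNotFoundException for tachyon.hadoop.TFS
    // unless the tachyon-client jar is actually on the classpath.
    FileSystem fs = new Path("tachyon://tachyon_ip:19998/").getFileSystem(conf);
    System.out.println("Loaded filesystem: " + fs.getClass().getName());
  }
}

If a standalone check like this works but the MapReduce tasks still fail, the client jar typically also needs to be visible to the task JVMs (for example via HADOOP_CLASSPATH or the distributed cache), not only to the job submitter.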

TFS and TFShell don't respect fault-tolerant mode with ZooKeeper

This prevents the client from using any of the Tachyon masters; instead, the client needs to know which Tachyon master is the active one.

I think TFS is really important since it is used by Spark as the underlying FileSystem.

So far I have verified that TFShell definitely doesn't work.

White list and pin list features not finished?

Hi

According to my understanding, the purpose of the white list is to allow some files to reside in memory by default (although, given the write types, I don't understand how this is supposed to work), and the pin list is to force some files to always stay in memory (is there any memory swap-out mechanism at present, or a design that loads the files into memory by default from, say, HDFS when the cluster restarts?).

However, it seems to me that neither of them has been fully implemented; there is just a flag bit being passed around. Or did I miss some code?

File rename operation has no effect in WorkerStorage.addCheckpoint

When running a test like bin/tachyon runTest Basic THROUGH, exceptions are thrown.

envs:
jdk1.7.0_55
os: CentOS release 6.3

Stack trace:
FailedToCheckpointException(message:Failed to rename /data/index/Deploy/test/tachyon-0.5.0/libexec/../underfs/tmp/tachyon/workers/1416468000001/2/3 to /data/index/Deploy/test/tachyon-0.5.0/libexec/../underfs/tmp/tachyon/data/3)
at tachyon.worker.WorkerStorage.addCheckpoint(WorkerStorage.java:400)
at tachyon.worker.WorkerServiceHandler.addCheckpoint(WorkerServiceHandler.java:46)
at tachyon.thrift.WorkerService$Processor$addCheckpoint.getResult(WorkerService.java:876)
at tachyon.thrift.WorkerService$Processor$addCheckpoint.getResult(WorkerService.java:860)
at tachyon.org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at tachyon.org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at tachyon.org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
at tachyon.org.apache.thrift.server.Invocation.run(Invocation.java:18)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
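As an illustrative sketch only (this is not the WorkerStorage code): one robust way to move a checkpoint file is to attempt an atomic rename first and fall back to copy-plus-delete when the source and destination are on different filesystems, which is the kind of situation that produces a FailedToCheckpointException like the one above.

import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class RenameFallbackSketch {
  public static void moveCheckpoint(Path src, Path dst) throws IOException {
    Path parent = dst.getParent();
    if (parent != null) {
      Files.createDirectories(parent);
    }
    try {
      // Fast path: an atomic rename within the same filesystem.
      Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      // Fallback when src and dst are on different filesystems/mounts.
      Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
      Files.delete(src);
    }
  }
}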

Cannot start tachyon with 0.5.0

  1. Untar the release package
  2. cp tachyon-env.sh.template tachyon-env.sh
  3. modify "export TACHYON_UNDERFS_ADDRESS=/tmp"
  4. ./bin/tachyon-start.sh local
    print:
    Killed 0 processes
    Killed 0 processes
    Connection to localhost... root@localhost's password:
    Killed 0 processes
    Connection to localhost closed.
    Formatting RamFS: /mnt/ramdisk (1gb)
    Starting master @ localhost
    Starting worker @ hdh139
    and nothing in log/

What's wrong? Thanks.

Tachyon workers failing to connect to Master

I downloaded Tachyon 0.4.1 and followed the instructions listed here on a 3 node CentOS 7 cluster - http://tachyon-project.org/Running-Tachyon-on-a-Cluster.html

I'm very familiar with Hadoop and the Tachyon instructions appear to reflect the same way you would setup a Hadoop cluster if you're going to use a master and slaves file. However, the first thing I noticed was that there was no conf/master file. So I created one. I then put the FQDN of my Tachyon Master in it and edited the conf/slaves file and listed my 2 Tachyon Workers, each on a new line.

I then scp'd the entire tachyon directory to each node in the cluster, shelled into my Tachyon Master node, ran "bin/tachyon format" (which appeared to work well), and then ran "bin/tachyon-start.sh all Mount".

The Master's log looks good, but the UI won't come up. I shelled into the Workers and checked their logs and they both are full of the following error:
2014-07-09 06:30:41,752 ERROR WORKER_LOGGER (MasterClient.java:connect) - Failed to connect (1) to master localhost/127.0.0.1:19998 : java.net.ConnectException: Connection refused

To me, this looks like the worker is being started and being passed "localhost" as the Master hostname instead of the value that is in my conf/master file. I can confirm that the conf/master file is accurate on each one of the workers.

Deploy move away from Uber Jar

Deployment as an uber jar makes some things simple, but it makes working with Hadoop more problematic. Currently, in order to upgrade Hadoop, I need to recompile Tachyon just to pick up the jar changes. Also, uber jars get in the way of supporting Hadoop's ServiceLoader (i.e., not needing to update Hadoop configs to use Tachyon).

There are a few things we can do to make this easier:

  1. The tar packages dependencies and tachyon.jar as separate jars under a /lib dir. This would include Hadoop client jars (2.x, with a different tar for 1.x), but admins can delete them to swap the Hadoop version.
  2. The tar packages dependencies and tachyon.jar as separate jars under a /lib dir. Hadoop is not included; instead we try to load it by calling the hadoop classpath command.
  3. The tar packages dependencies and tachyon.jar as separate jars under a /lib dir. No Hadoop on the classpath; the user adds it.

Clean up those corrupted files?

At present, it seems there are many ways to end up with corrupted files in Tachyon.

For example, with MUST_CACHE mode, if the user doesn't catch the exception and close the stream (which is the basic operation example's current behavior), the file is left in an incomplete state. Another case is an in-memory file being lost because the file on the RAM disk was lost (remount on restart, power loss).

For the first case, I am not sure whether that should be handled by FileOutStream when closing the file, for example by setting the complete flag or cleaning up the file.

For both cases, there should be a tool to check the filesystem and clean up those files.
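For the first case, a try-with-resources pattern guarantees the stream is closed even when the write fails; this is a generic Java sketch (not Tachyon's client API), using a plain OutputStream for illustration:

import java.io.IOException;
import java.io.OutputStream;

public final class SafeWriteSketch {
  /** Writes the data and closes the stream on every path, even on failure. */
  public static void writeFully(OutputStream out, byte[] data) throws IOException {
    try (OutputStream stream = out) {
      stream.write(data);
    }
    // If write() threw, the file may still be incomplete in the store, which
    // is why a separate fsck-style cleanup tool (as proposed above) is useful.
  }
}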

Paths with spaces in Hadoop cause an error

Hi,
I am testing Tachyon with CDH 4.3. When I try to load a folder whose path contains spaces, I get the following error (and the folder isn't loaded).

The folder is a Hive time-partition folder:
./bin/tachyon loadufs tachyon://e-00006658.melicloud.com:19998 hdfs://namenode.melidoop.com:8020 /data/dejavu2
Exception in thread "main" java.io.IOException: InvalidPathException(message:Path /data/dejavu2/ds=2011-05-25 03%3A00%3A00 is invalid.)
at tachyon.master.MasterClient.user_getFileId(MasterClient.java:461)
at tachyon.client.TachyonFS.getFileId(TachyonFS.java:647)
at tachyon.client.TachyonFS.exist(TachyonFS.java:419)
at tachyon.util.UnderfsUtil.getInfo(UnderfsUtil.java:82)
at tachyon.util.UnderfsUtil.main(UnderfsUtil.java:109)
Caused by: InvalidPathException(message:Path /data/dejavu2/ds=2011-05-25 03%3A00%3A00 is invalid.)
at tachyon.thrift.MasterService$user_getFileId_result$user_getFileId_resultStandardScheme.read(MasterService.java:20728)
at tachyon.thrift.MasterService$user_getFileId_result$user_getFileId_resultStandardScheme.read(MasterService.java:20706)
at tachyon.thrift.MasterService$user_getFileId_result.read(MasterService.java:20650)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at tachyon.thrift.MasterService$Client.recv_user_getFileId(MasterService.java:739)
at tachyon.thrift.MasterService$Client.user_getFileId(MasterService.java:726)
at tachyon.master.MasterClient.user_getFileId(MasterClient.java:459)
... 4 more

Looking a little further, I traced the error to here: https://github.com/amplab/tachyon/blob/master/main/src/main/java/tachyon/util/CommonUtils.java#L543

Thanks,
Gabriel.

Webserver conf is dependent on src directory structure.

Inside src/main/java/tachyon/web/UIWebServer.java, there is a hard-coded relative path into the repo's source tree. It probably makes sense to switch to a configurable variable.

File warPath = new File(CommonConf.get().TACHYON_HOME + "/src/main/java/tachyon/web/resources");
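A hedged sketch of what "configurable" could look like (the system property name tachyon.web.resources is hypothetical, not an existing setting): read the location from a property and keep the current path only as the default.

// Fall back to the current hard-coded location when the property is unset.
String webResources = System.getProperty(
    "tachyon.web.resources",
    CommonConf.get().TACHYON_HOME + "/src/main/java/tachyon/web/resources");
File warPath = new File(webResources);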
