
Amazon DynamoDB Storage Backend for JanusGraph

JanusGraph: Distributed Graph Database is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. JanusGraph is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. -- JanusGraph Homepage

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications. -- AWS DynamoDB Homepage

JanusGraph + DynamoDB = Distributed Graph Database - Cluster Host Management


Features

The following is a list of features of the Amazon DynamoDB Storage Backend for JanusGraph.

  • AWS managed authentication and authorization.
  • Configure table prefix to allow multiple graphs to be stored in a single account in the same region.
  • Full graph traversals with rate-limited table scans.
  • Flexible data model allows configuration between single-item and multiple-item model based on graph size and utilization.
  • Test graph locally with DynamoDB Local.
  • Integrated with JanusGraph metrics.
  • JanusGraph 0.2.0 and TinkerPop 3.2.6 compatibility.
  • Upgrade compatibility from Titan 1.0.0.

Getting Started

This example populates a JanusGraph database backed by DynamoDB Local using the Marvel Universe Social Graph. The graph has a vertex for each comic book character, with an edge to each comic book in which that character appeared.

Load a subset of the Marvel Universe Social Graph

  1. Install the prerequisites (Git, JDK 1.8, Maven, Docker, wget, gpg) of this tutorial. The command below uses a convenience script for Amazon Linux on EC2 instances to install Git, OpenJDK 1.8, Maven, Docker, and Docker Compose. It adds the ec2-user to the docker group so that you can execute Docker commands without sudo. Log out and back in for the group change to take effect on ec2-user.

    curl https://raw.githubusercontent.com/awslabs/dynamodb-janusgraph-storage-backend/master/src/test/resources/install-reqs.sh | bash
    exit
  2. Clone the repository and change directories.

    git clone https://github.com/awslabs/dynamodb-janusgraph-storage-backend.git && cd dynamodb-janusgraph-storage-backend
  3. Use Docker and Docker Compose to bake DynamoDB Local into a container and start Gremlin Server with the DynamoDB Storage Backend for JanusGraph installed.

    docker build -t awslabs/dynamodblocal ./src/test/resources/dynamodb-local-docker \
    && src/test/resources/install-gremlin-server.sh \
    && cp server/dynamodb-janusgraph-storage-backend-*.zip src/test/resources/dynamodb-janusgraph-docker \
    && mvn docker:build -Pdynamodb-janusgraph-docker \
    && docker-compose -f src/test/resources/docker-compose.yml up -d \
    && docker exec -i -t dynamodb-janusgraph /var/jg/bin/gremlin.sh
  4. After the Gremlin shell starts, set it up to execute commands remotely.

    :remote connect tinkerpop.server conf/remote.yaml session
    :remote console
  5. Load the first 100 lines of the Marvel graph using the Gremlin shell.

    com.amazon.janusgraph.example.MarvelGraphFactory.load(graph, 100, false)
  6. Print the characters and the comic-books they appeared in where the characters had a weapon that was a shield or claws.

    g.V().has('weapon', within('shield','claws')).as('weapon', 'character', 'book').select('weapon', 'character','book').by('weapon').by('character').by(__.out('appeared').values('comic-book'))
  7. Print the characters and the comic-books they appeared in where the characters had a weapon that was not a shield or claws.

    g.V().has('weapon').has('weapon', without('shield','claws')).as('weapon', 'character', 'book').select('weapon', 'character','book').by('weapon').by('character').by(__.out('appeared').values('comic-book'))
  8. Print a sorted list of the characters that appear in comic-book AVF 4.

    g.V().has('comic-book', 'AVF 4').in('appeared').values('character').order()
  9. Print a sorted list of the characters that appear in comic-book AVF 4 that have a weapon that is not a shield or claws.

    g.V().has('comic-book', 'AVF 4').in('appeared').has('weapon', without('shield','claws')).values('character').order()
  10. Exit remote mode, then press Control-C to quit the Gremlin shell.

    :remote console
  11. Clean up the composed Docker containers.

    docker-compose -f src/test/resources/docker-compose.yml stop

Load the Graph of the Gods

  1. Repeat steps 3 and 4 of the Marvel graph section, cleaning up the server directory beforehand with rm -rf server.

  2. Load the Graph of the Gods.

    GraphOfTheGodsFactory.loadWithoutMixedIndex(graph, true)
  3. Now you can follow the rest of the JanusGraph Getting Started documentation, starting from the Global Graph Indices section. See the scriptEngines/gremlin-groovy/scripts list element in the Gremlin Server YAML file for more information about what is in scope in the remote environment.

  4. Alternatively, repeat steps 1 through 8 of the Marvel graph section and follow the examples in the TinkerPop documentation. Skip the TinkerGraph.open() step, as the remote execution environment already has a graph variable set up. TinkerPop has other tutorials available as well.

Run Gremlin on Gremlin Server in EC2 using CloudFormation templates

The DynamoDB Storage Backend for JanusGraph includes CloudFormation templates that create a VPC and an EC2 instance in that VPC, install Gremlin Server with the DynamoDB Storage Backend for JanusGraph, and start the Gremlin Server Websocket endpoint. Also included are templates that create the graph's DynamoDB tables. The Network ACL of the VPC includes just enough access to allow:

  • you to connect to the instance using SSH and create tunnels (SSH inbound)
  • the EC2 instance to download yum updates from central repositories (HTTP outbound)
  • the EC2 instance to download your dynamodb.properties file and the Gremlin Server package from S3 (HTTPS outbound)
  • the EC2 instance to connect to DynamoDB (HTTPS outbound)
  • the ephemeral ports required to support the data flow above, in each direction

Requirements for running this CloudFormation template include the following.

  • An SSH key for EC2 instances must exist in the region where you plan to create the Gremlin Server stack.
  • You need permission to call the ec2:DescribeKeyPairs API when creating a stack from the AWS console.
  • You need to have created an IAM role in the region that has S3 Read access and DynamoDB full access, the very minimum policies required to run this CloudFormation stack. S3 read access is required to provide the dynamodb.properties file to the stack in cloud-init. DynamoDB full access is required because the DynamoDB Storage Backend for JanusGraph can create and delete tables, and read and write data in those tables.
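As a sketch of that trust relationship, the role's assume-role policy document would include ec2.amazonaws.com as a trusted service (the AmazonDynamoDBFullAccess and AmazonS3ReadOnlyAccess managed policies are attached to the role separately):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```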

Note: this CloudFormation template downloads the JanusGraph packages available on the JanusGraph downloads page, then builds and adds the DynamoDB Storage Backend for JanusGraph together with its dependencies.

CloudFormation Template table

Below you can find a list of CloudFormation templates discussed in this document, and links to launch each stack in CloudFormation and to view the stack in the designer.

Template name Description View
Single-Item Model Tables Set up six graph tables with the single item data model. View
Multiple-Item Model Tables Set up six graph tables with the multiple item data model. View
Gremlin Server on DynamoDB Set up a VPC and an EC2 instance running Gremlin Server with the DynamoDB Storage Backend for JanusGraph. View

Instructions to Launch CloudFormation Stacks

  1. Choose between the single and multiple item data models and create your graph tables with the corresponding CloudFormation template above by downloading it and passing it to the CloudFormation console. Note: the configuration provided in src/test/resources/dynamodb.properties assumes that you will deploy the stack in us-west-2 and that you will use the multiple item model.
  2. Inspect the latest version of the Gremlin Server on DynamoDB stack in the third row above.
  3. Download the template from the third row to your computer and use it to create the Gremlin Server on DynamoDB stack.
  4. On the Select Template page, name your Gremlin Server stack and select the CloudFormation template that you just downloaded.
  5. On the Specify Parameters page, you need to specify the following:
  • EC2 Instance Type
  • The Gremlin Server port, default 8182.
  • The S3 URL to your dynamodb.properties configuration file
  • The name of your pre-existing EC2 SSH key. Be sure to chmod 400 your key file, as SSH will refuse to use a key whose permissions are too open.
  • The network whitelist for the SSH protocol. You will need to allow incoming connections via SSH to enable the SSH tunnels that will secure Websockets connections to Gremlin Server.
  • The path to an IAM role that has the minimum amount of privileges to run this CloudFormation script and run Gremlin Server with the DynamoDB Storage Backend for JanusGraph. This role will require S3 read to get the dynamodb.properties file, and DynamoDB full access to create tables and read and write items in those tables. This IAM role needs to be created with an STS trust relationship that includes ec2.amazonaws.com as an identity provider. The easiest way to do this is to create a new role on the IAM console, select Amazon EC2 from the AWS Service Role list in the accordion, and add the AmazonDynamoDBFullAccess and AmazonS3ReadOnlyAccess managed policies.
  6. On the Options page, click Next.
  7. On the Review page, select "I acknowledge that this template might cause AWS CloudFormation to create IAM resources." Then, click Create.
  8. Start the Gremlin console on the host through SSH. You can copy and paste the GremlinShell output of the CloudFormation template and run it on your command line.
  9. Repeat steps 4 and onwards of the Marvel graph section above.
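Once the stack is running, the SSH tunnel that secures the Websockets connection follows the usual local-forwarding pattern; a hypothetical command (the key path and hostname below are placeholders for your stack's values) looks like:

```
ssh -o ServerAliveInterval=50 -nNT -L 8182:localhost:8182 \
    -i ${HOME}/.ec2/your-key.pem ec2-user@<ec2-public-dns>
```

With the tunnel open, a local Gremlin console can connect to localhost:8182 as if Gremlin Server were running locally.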

Data Model

The Amazon DynamoDB Storage Backend for JanusGraph has a flexible data model that allows clients to select the data model for each JanusGraph backend table. Clients can configure tables to use either a single-item model or a multiple-item model.

Single-Item Model

The single-item model uses a single DynamoDB item to store all values for a single key. In terms of JanusGraph backend implementations, the key becomes the DynamoDB hash key, and each column becomes an attribute name and the column value is stored in the respective attribute value.

This is the most efficient implementation, but beware of the 400 KB size limit DynamoDB imposes on items. It is best to only use this on tables you are sure will not surpass the item size limit. Graphs with low vertex degree and a low number of items per index can take advantage of this implementation.

Multiple-Item Model

The multiple-item model uses multiple DynamoDB items to store all values for a single key. In terms of JanusGraph backend implementations, the key becomes the DynamoDB hash key, and each column becomes the range key in its own item. Each column value is stored in its own attribute.

The multiple-item model is less efficient than the single-item model during initial graph loads, but it works around the 400 KB item size limit. The multiple-item model uses range Query calls instead of GetItem calls to get the necessary column values.
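As an illustrative dynamodb.properties fragment (the per-store data-model option is described in the configuration tables below), you could mix the two models by store, using MULTI for the potentially large stores and SINGLE for a small one:

```properties
storage.backend=com.amazon.janusgraph.diskstorage.dynamodb.DynamoDBStoreManager
# MULTI is recommended for stores that can grow past the 400 KB item limit
storage.dynamodb.stores.edgestore.data-model=MULTI
storage.dynamodb.stores.graphindex.data-model=MULTI
# SINGLE can suit small stores that stay well under the item size limit
storage.dynamodb.stores.janusgraph_ids.data-model=SINGLE
```

Note that data-model has FIXED mutability, so it cannot be changed after the database is first opened.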

DynamoDB Specific Configuration

Each configuration option has a certain mutability level that governs whether and how it can be modified after the database is opened for the first time. The following listing describes the mutability levels.

  1. FIXED - Once the database has been opened, these configuration options cannot be changed for the entire life of the database
  2. GLOBAL_OFFLINE - These options can only be changed for the entire database cluster at once when all instances are shut down
  3. GLOBAL - These options can only be changed globally across the entire database cluster
  4. MASKABLE - These options are global but can be overwritten by a local configuration file
  5. LOCAL - These options can only be provided through a local configuration file

Leading namespace names are shortened, and spaces are sometimes inserted in long strings, to make sure the tables below are formatted correctly.

General DynamoDB Configuration Parameters

All of the following parameters are in the storage (s) namespace, and most are in the storage.dynamodb (s.d) namespace subset.

Name Description Datatype Default Value Mutability
s.backend The primary persistence provider used by JanusGraph. To use DynamoDB you must set this to com.amazon.janusgraph.diskstorage. dynamodb.DynamoDBStoreManager String LOCAL
s.d.prefix A prefix to put before the JanusGraph table name. This allows clients to have multiple graphs in the same AWS DynamoDB account in the same region. String jg LOCAL
s.d.metrics-prefix Prefix on the codahale metric names emitted by DynamoDBDelegate. String d LOCAL
s.d.force-consistent-read This feature sets the force consistent read property on DynamoDB calls. Boolean true LOCAL
s.d.enable-parallel-scan This feature changes the scan behavior from a sequential scan (with consistent key order) to a segmented, parallel scan. Enabling this feature will make full graph scans faster, but it may cause this backend to be incompatible with Titan's OLAP library. Boolean false LOCAL
s.d.max-self-throttled-retries The number of retries that the backend should attempt and self-throttle. Integer 60 LOCAL
s.d.initial-retry-millis The amount of time to initially wait (in milliseconds) when retrying self-throttled DynamoDB API calls. Integer 25 LOCAL
s.d.control-plane-rate The rate in permits per second at which to issue DynamoDB control plane requests (CreateTable, UpdateTable, DeleteTable, ListTables, DescribeTable). Double 10 LOCAL
s.d.native-locking Set this to false if you need to use JanusGraph's locking mechanism for remote lock expiry. Boolean true LOCAL
s.d.use-titan-ids Set this to true if you are migrating from Titan to JanusGraph so that you do not have to copy your titan_ids table. Boolean false LOCAL
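As a hedged example, a dynamodb.properties fragment combining several of the options above might read:

```properties
storage.backend=com.amazon.janusgraph.diskstorage.dynamodb.DynamoDBStoreManager
# store several graphs in one account and region by varying the prefix
storage.dynamodb.prefix=jg
storage.dynamodb.force-consistent-read=true
storage.dynamodb.enable-parallel-scan=false
storage.dynamodb.control-plane-rate=10
```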

DynamoDB KeyColumnValue Store Configuration Parameters

Some configurations require specifications for each of the JanusGraph backend Key-Column-Value stores. Here is a list of the default JanusGraph backend Key-Column-Value stores:

  • edgestore
  • graphindex
  • janusgraph_ids (this used to be called titan_ids in Titan)
  • system_properties
  • systemlog
  • txlog

Any store you define in the umbrella storage.dynamodb.stores.* namespace whose name starts with ulog_ will be used for user-defined transaction logs.

Again, if you opt out of storage-native locking with the storage.dynamodb.native-locking = false configuration, you will need to configure the data model, initial capacity and rate limiters for the three following stores:

  • edgestore_lock_
  • graphindex_lock_
  • system_properties_lock_

You can configure the initial read and write capacity, rate limits, scan limits and data model for each KCV graph store. You can always scale up and down the read and write capacity of your tables in the DynamoDB console. If you have a write once, read many workload, or you are running a bulk data load, it is useful to adjust the capacity of edgestore and graphindex tables as necessary in the DynamoDB console, and decreasing the allocated capacity and rate limiters afterwards.
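For example, before a bulk load you might raise the backend's initial write capacity and write rate limiter on the edgestore and graphindex stores; the values below are illustrative only, and the provisioned capacity of existing tables still has to be scaled in the DynamoDB console:

```properties
storage.dynamodb.stores.edgestore.initial-capacity-write=100
storage.dynamodb.stores.edgestore.write-rate=100
storage.dynamodb.stores.graphindex.initial-capacity-write=100
storage.dynamodb.stores.graphindex.write-rate=100
```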

For details about these Key-Column-Value stores, please see Store Mapping and JanusGraph Data Model. All of these configuration parameters are in the storage.dynamodb.stores (s.d.s) umbrella namespace subset. In the tables below these configurations have the text t where the JanusGraph store name should go.

When upgrading from Titan 1.0.0, you will need to set the ids.store-name configuration to titan_ids to avoid re-using id ranges that are already assigned.

Name Description Datatype Default Value Mutability
s.d.s.t.data-model SINGLE means that all the values for a given key are put into a single DynamoDB item. A SINGLE is efficient because all the updates for a single key can be done atomically. However, the tradeoff is that DynamoDB has a 400k limit per item so it cannot hold much data. MULTI means that each 'column' is used as a range key in DynamoDB so a key can span multiple items. A MULTI implementation is slightly less efficient than SINGLE because it must use DynamoDB Query rather than a direct lookup. It is HIGHLY recommended to use MULTI for edgestore and graphindex unless your graph has very low max degree. String MULTI FIXED
s.d.s.t.initial-capacity-read Define the initial read capacity for a given DynamoDB table. Make sure to replace the t with your actual store name. Integer 4 LOCAL
s.d.s.t.initial-capacity-write Define the initial write capacity for a given DynamoDB table. Make sure to replace the t with your actual store name. Integer 4 LOCAL
s.d.s.t.read-rate The max number of reads per second. Double 4 LOCAL
s.d.s.t.write-rate Used to throttle write rate of given table. The max number of writes per second. Double 4 LOCAL
s.d.s.t.scan-limit The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a key in LastEvaluatedKey to apply in a subsequent operation, so that you can pick up where you left off. Also, if the processed data set size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a key in LastEvaluatedKey to apply in a subsequent operation to continue the operation. Integer 10000 LOCAL

DynamoDB Client Configuration Parameters

All of these configuration parameters are in the storage.dynamodb.client (s.d.c) namespace subset, and are related to the DynamoDB SDK client configuration.

Name Description Datatype Default Value Mutability
s.d.c.connection-timeout The amount of time to wait (in milliseconds) when initially establishing a connection before giving up and timing out. Integer 60000 LOCAL
s.d.c.connection-ttl The expiration time (in milliseconds) for a connection in the connection pool. Integer 60000 LOCAL
s.d.c.connection-max The maximum number of allowed open HTTP connections. Integer 10 LOCAL
s.d.c.retry-error-max The maximum number of retry attempts for failed retryable requests (ex: 5xx error responses from services). Integer 0 LOCAL
s.d.c.use-gzip Sets whether gzip compression should be used. Boolean false LOCAL
s.d.c.use-reaper Sets whether the IdleConnectionReaper is to be started as a daemon thread. Boolean true LOCAL
s.d.c.user-agent The HTTP user agent header to send with all requests. String LOCAL
s.d.c.endpoint Sets the service endpoint to use for connecting to DynamoDB. String LOCAL
s.d.c.signing-region Sets the signing region to use for signing requests to DynamoDB. Required. String LOCAL
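For example, to point the client at DynamoDB Local instead of the real service, a sketch (assuming DynamoDB Local on its default port 8000) might be:

```properties
storage.dynamodb.client.endpoint=http://localhost:8000
storage.dynamodb.client.signing-region=us-west-2
```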

DynamoDB Client Proxy Configuration Parameters

All of these configuration parameters are in the storage.dynamodb.client.proxy (s.d.c.p) namespace subset, and are related to the DynamoDB SDK client proxy configuration.

Name Description Datatype Default Value Mutability
s.d.c.p.domain The optional Windows domain name for configuring an NTLM proxy. String LOCAL
s.d.c.p.workstation The optional Windows workstation name for configuring NTLM proxy support. String LOCAL
s.d.c.p.host The optional proxy host the client will connect through. String LOCAL
s.d.c.p.port The optional proxy port the client will connect through. String LOCAL
s.d.c.p.username The optional proxy user name to use if connecting through a proxy. String LOCAL
s.d.c.p.password The optional proxy password to use when connecting through a proxy. String LOCAL
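A hypothetical fragment routing the client through an HTTP proxy (the host, port, and credentials below are placeholders):

```properties
storage.dynamodb.client.proxy.host=proxy.example.com
storage.dynamodb.client.proxy.port=3128
storage.dynamodb.client.proxy.username=proxyuser
storage.dynamodb.client.proxy.password=proxypass
```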

DynamoDB Client Socket Configuration Parameters

All of these configuration parameters are in the storage.dynamodb.client.socket (s.d.c.s) namespace subset, and are related to the DynamoDB SDK client socket configuration.

Name Description Datatype Default Value Mutability
s.d.c.s.buffer-send-hint The optional size hint (in bytes) for the low level TCP send buffer. Integer 1048576 LOCAL
s.d.c.s.buffer-recv-hint The optional size hint (in bytes) for the low level TCP receive buffer. Integer 1048576 LOCAL
s.d.c.s.timeout The amount of time to wait (in milliseconds) for data to be transferred over an established, open connection before the connection times out and is closed. Long 50000 LOCAL
s.d.c.s.tcp-keep-alive Sets whether or not to enable TCP KeepAlive support at the socket level. Not used at the moment. Boolean LOCAL

DynamoDB Client Executor Configuration Parameters

All of these configuration parameters are in the storage.dynamodb.client.executor (s.d.c.e) namespace subset, and are related to the DynamoDB SDK client executor / thread-pool configuration.

Name Description Datatype Default Value Mutability
s.d.c.e.core-pool-size The core number of threads for the DynamoDB async client. Integer 25 LOCAL
s.d.c.e.max-pool-size The maximum allowed number of threads for the DynamoDB async client. Integer 50 LOCAL
s.d.c.e.keep-alive The time limit for which threads may remain idle before being terminated for the DynamoDB async client. Integer LOCAL
s.d.c.e.max-queue-length The maximum size of the executor queue before requests start getting run in the caller. Integer 1024 LOCAL
s.d.c.e.max-concurrent-operations The number of threads expected to be using a single JanusGraph instance. Used to allocate threads to batch operations. Integer 1 LOCAL

DynamoDB Client Credential Configuration Parameters

All of these configuration parameters are in the storage.dynamodb.client.credentials (s.d.c.c) namespace subset, and are related to the DynamoDB SDK client credential configuration.

Name Description Datatype Default Value Mutability
s.d.c.c.class-name Specify the fully qualified class that implements AWSCredentialsProvider or AWSCredentials. String com.amazonaws.auth. BasicAWSCredentials LOCAL
s.d.c.c.constructor-args Comma separated list of strings to pass to the credentials constructor. String accessKey,secretKey LOCAL
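Instead of passing keys to BasicAWSCredentials, you could point the backend at the AWS SDK's DefaultAWSCredentialsProviderChain, which has a no-argument constructor and resolves credentials from the environment, system properties, profile files, or instance roles; a sketch:

```properties
storage.dynamodb.client.credentials.class-name=com.amazonaws.auth.DefaultAWSCredentialsProviderChain
storage.dynamodb.client.credentials.constructor-args=
```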

Upgrading from Titan 1.0.0

Earlier versions of this software supported Titan 1.0.0. This software supports upgrading from the DynamoDB Storage Backend for Titan 1.0.0 by following the steps to update your configuration below.

  1. Set the JanusGraph configuration option ids.store-name=titan_ids. This allows you to reuse your titan_ids table.
  2. Update the classpath to the DynamoDB Storage Backend to use the latest package name, storage.backend=com.amazon.janusgraph.diskstorage.dynamodb.DynamoDBStoreManager .
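Put together, an upgrade configuration sketch looks like the following; the use-titan-ids option from the table above is included on the assumption that you want to keep Titan-compatible id allocation:

```properties
ids.store-name=titan_ids
storage.backend=com.amazon.janusgraph.diskstorage.dynamodb.DynamoDBStoreManager
storage.dynamodb.use-titan-ids=true
```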

Run all tests against DynamoDB Local on an EC2 Amazon Linux AMI

  1. Install dependencies. For Amazon Linux:

    sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo \
      -O /etc/yum.repos.d/epel-apache-maven.repo
    sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
    sudo yum update -y && sudo yum upgrade -y
    sudo yum install -y apache-maven sqlite-devel git java-1.8.0-openjdk-devel
    sudo alternatives --set java /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
    sudo alternatives --set javac /usr/lib/jvm/java-1.8.0-openjdk.x86_64/bin/javac
    git clone https://github.com/awslabs/dynamodb-janusgraph-storage-backend.git
    cd dynamodb-janusgraph-storage-backend && mvn install
  2. Start a screen session so that you can log out of the EC2 instance while the tests run.

  3. Run the single-item data model tests.

    mvn verify -P integration-tests \
    -Dexclude.category=com.amazon.janusgraph.testcategory.MultipleItemTestCategory \
    -Dinclude.category="**/*.java" > o 2>&1
  4. Run the multiple-item data model tests.

    mvn verify -P integration-tests \
    -Dexclude.category=com.amazon.janusgraph.testcategory.SingleItemTestCategory \
    -Dinclude.category="**/*.java" > o 2>&1
  5. Run other miscellaneous tests.

    mvn verify -P integration-tests -Dinclude.category="**/*.java" \
        -Dgroups=com.amazon.janusgraph.testcategory.IsolateRemainingTestsCategory > o 2>&1
  6. Exit the screen with CTRL-A D and logout of the EC2 instance.

  7. Monitor the CPU usage of your EC2 instance in the EC2 console. The single-item tests may take at least 1 hour and the multiple-item tests may take at least 2 hours to run. When CPU usage drops to zero, the tests are done.

  8. Log back into the EC2 instance and resume the screen with screen -r to review the test results.

    cd target/surefire-reports && grep testcase *.xml | grep -v "\/"
  9. Terminate the instance when done.

dynamodb-janusgraph-storage-backend's People

Contributors

amcp, apivovarov, avram, dependabot[bot], f2006, finalj2, khanshariquem, mikedias, ngrigoriev, warrn, weicfd


dynamodb-janusgraph-storage-backend's Issues

Very slow graph insertions during local development

Is it normal that inserting < 20 vertices and < 30 edges between them takes 1-2 minutes to complete? I am running a test locally with dynamodb-titan100-storage-backend:1.0.0 against DynamoDB Local. The graph.tx().commit() takes several minutes to complete on a 2014 MacBook Pro.

Is this expected? Do you have an idea of where the bottleneck is (Titan, Dynamo)? Can I do anything to speed this up?

After inserting those nodes, queries against the index take ~1s at first and then a few milliseconds once the local cache has been populated.

Thanks,
Ingo

Unable to SSH on port 8182

After running the CloudFormation script, the Security Group applied to the EC2 instance only allows access on port 22. Even if I add a custom rule to allow TCP traffic on 8182 the request is still denied:

ssh -o ServerAliveInterval=50 -i ${HOME}/.ec2/SSH.pem -p 8182 [email protected]
ssh: connect to host ec2-xx-xx-xx-xx.compute-1.amazonaws.com port 8182: Connection refused

I am able to ssh on port 22 but not 8182. There were no errors during the CloudFormation process.

Also note that the SSH command provided in "outputs" does not specify -p 8182 and attempts to connect to port 22 by default.

What do I need to do to connect on port 8182?

Creating tunnel from localhost to gremlin server timed out

Hi, I have been following this wiki for Launch DynamoDB Storage Backend for Titan: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.TitanDB.GremlinServerEC2.html

The stack has been deployed and I am stuck at Step 9. The error message I had is:

    ssh -o ServerAliveInterval=50 -nNT -L8182:localhost:8182 -i ${HOME}/.ec2/dynamodb-gremlin-server.pem.txt [email protected]
    ssh: connect to host ec2-54-167-134-221.compute-1.amazonaws.com port 22: Operation timed out

I have the dynamodb-gremlin-server.pem SSH key created on AWS and downloaded locally, but I don't have the ${HOME}/.ec2/ dir locally, so I had to manually create the ~/.ec2 folder and copy the key into it. Not sure if I missed anything, or if the timeout error is related to this.

Please let me know if you have any suggestions. Thank you very much!!

how to enable userSuppliedId?

by calling :> graph.features(), I found that

VertexFeatures
-- MultiProperties: true
-- MetaProperties: true
-- AddVertices: true
-- RemoveVertices: true
-- UserSuppliedIds: false
-- AddProperty: true
-- RemoveProperty: true
-- NumericIds: true
-- StringIds: false
-- UuidIds: false
-- CustomIds: false
-- AnyIds: false

I want to enable "userSuppliedId" so that I can have a vertex with id like "http://this/is/id".
Is it possible?

java.lang.NoSuchMethodError

After git clone and mvn install, running mvn test -Pstart-dynamodb-local in one shell and mvn test -Psingle-integration-tests -Ddynamodb-partitions=1 -Ddynamodb-control-plane-rate=10000 -Ddynamodb-unlimited-iops=true -Dproperties-file=src/test/resources/dynamodb-local.properties in another fails with

Caused by: java.lang.NoSuchMethodError: org.apache.http.conn.ssl.SSLConnectionSocketFactory.<init>(Ljavax/net/ssl/SSLContext;Ljavax/net/ssl/HostnameVerifier;)V
    at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.<init>(SdkTLSSocketFactory.java:56)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.getPreferredSocketFactory(ApacheConnectionManagerFactory.java:87)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:65)
    at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:58)
    at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:50)
    at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:38)
    at com.amazonaws.http.AmazonHttpClient.<init>(AmazonHttpClient.java:213)
    at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:145)
    at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:136)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.<init>(AmazonDynamoDBClient.java:453)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.<init>(AmazonDynamoDBClient.java:429)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.<init>(DynamoDBDelegate.java:174)
    at com.amazon.titan.diskstorage.dynamodb.Client.<init>(Client.java:145)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:90)
    at sun.reflect.GeneratedConstructorAccessor24.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
    at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)

NullPointerException in TitanBlueprintsGraph$GraphTransaction

I have a code structure that looks like this

try {
  //add new vertices, edges and retrieve and update existing ones.
  ...
  graph.graph().tx().commit()
} catch(Throwable ex){
    graph.graph().tx().rollback()
    Preconditions.checkState(false, "Error occurred while parsing rosters", ex)
}

I get this error at random times (usually after around 100k commits)

792893 [SIGHUP handler] WARN  com.thinkaurelius.titan.graphdb.database.StandardTitanGraph  - Unable to close transaction standardtitantx[0x342ec625]
java.lang.IllegalArgumentException: The transaction has already been closed
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1345)
    at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsTransaction$1.commit(TitanBlueprintsTransaction.java:172)
    at org.apache.tinkerpop.gremlin.structure.Transaction$CLOSE_BEHAVIOR$1.accept(Transaction.java:174)
    at org.apache.tinkerpop.gremlin.structure.Transaction$CLOSE_BEHAVIOR$1.accept(Transaction.java:171)
    at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsGraph$GraphTransaction.close(TitanBlueprintsGraph.java:288)
    at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsTransaction$1.close(TitanBlueprintsTransaction.java:203)
    at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsTransaction.close(TitanBlueprintsTransaction.java:235)
    at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.closeInternal(StandardTitanGraph.java:202)
    at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.access$600(StandardTitanGraph.java:78)
    at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph$ShutdownThread.start(StandardTitanGraph.java:803)
    at java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:102)
    at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
    at java.lang.Shutdown.runHooks(Shutdown.java:123)
    at java.lang.Shutdown.sequence(Shutdown.java:167)
    at java.lang.Shutdown.exit(Shutdown.java:212)
    at java.lang.Terminator$1.handle(Terminator.java:52)
    at sun.misc.Signal$1.run(Signal.java:212)
    at java.lang.Thread.run(Thread.java:745)
[WARNING] 
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsGraph$GraphTransaction.isOpen(TitanBlueprintsGraph.java:278)
    at org.apache.tinkerpop.gremlin.structure.Transaction$READ_WRITE_BEHAVIOR$1.accept(Transaction.java:209)
    at org.apache.tinkerpop.gremlin.structure.Transaction$READ_WRITE_BEHAVIOR$1.accept(Transaction.java:206)
    at org.apache.tinkerpop.gremlin.structure.util.AbstractTransaction.rollback(AbstractTransaction.java:104)
    at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
    at com.kapsoft.isportsdb.parsers.baseball.RetroSheetDataParser$_parseRostersAndAddToGraph_closure3$_closure9.doCall(RetroSheetDataParser.groovy:581)
    at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
    at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
    at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:292)
    at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
    at groovy.lang.Closure.call(Closure.java:423)
    at groovy.lang.Closure.call(Closure.java:439)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2027)
    at org.codehaus.groovy.runtime.dgm$161.doMethodInvoke(Unknown Source)
    at com.kapsoft.isportsdb.parsers.baseball.RetroSheetDataParser$_parseRostersAndAddToGraph_closure3.doCall(RetroSheetDataParser.groovy:541)
    at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
    at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
    at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:292)
    at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
    at groovy.lang.Closure.call(Closure.java:423)
    at groovy.lang.Closure.call(Closure.java:439)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2027)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2012)
    at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2053)
    at org.codehaus.groovy.runtime.dgm$162.doMethodInvoke(Unknown Source)
    at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
    at com.kapsoft.isportsdb.parsers.baseball.RetroSheetDataParser.parseRostersAndAddToGraph(RetroSheetDataParser.groovy:539)
    at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
    at com.kapsoft.isportsdb.parsers.baseball.RetroSheetDataParser.parse(RetroSheetDataParser.groovy:110)
    at com.kapsoft.isportsdb.parsers.baseball.MainRetroSheetDataParser.main(MainRetroSheetDataParser.java:25)
    ... 6 more

While this seems like a Titan issue, I have run the same script against the BerkeleyDB backend and it worked multiple times with no issue. Therefore, I assume something in your code is causing this NullPointerException.

My configuration file looks like this:

#general Titan configuration
gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
storage.setup-wait=60000
storage.buffer-size=1024
# Metrics configuration - http://s3.thinkaurelius.com/docs/titan/1.0.0/titan-config-ref.html#_metrics
#metrics.enabled=true
#metrics.prefix=t
# Required; specify logging interval in milliseconds
#metrics.csv.interval=500
#metrics.csv.directory=metrics
# Turn off titan retries as we batch and have our own exponential backoff strategy.
storage.write-time=2 ms
storage.read-time=2 ms
storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager

#Amazon DynamoDB Storage Backend for Titan configuration
storage.dynamodb.force-consistent-read=true
# should be the graph name rexster/graphs/graph/graph-name
storage.dynamodb.prefix=v100
storage.dynamodb.metrics-prefix=d
storage.dynamodb.enable-parallel-scans=false
storage.dynamodb.max-self-throttled-retries=60
storage.dynamodb.control-plane-rate=10

# DynamoDB client configuration: credentials
storage.dynamodb.client.credentials.class-name=com.amazonaws.auth.DefaultAWSCredentialsProviderChain
storage.dynamodb.client.credentials.constructor-args=

# DynamoDB client configuration: endpoint (below, set to DynamoDB Local as invoked by mvn test -Pstart-dynamodb-local).
# You can change the endpoint to point to production DynamoDB regions.
storage.dynamodb.client.endpoint=https://dynamodb.us-east-1.amazonaws.com

# max http connections - not recommended to use more than 250 connections in DynamoDB Local
storage.dynamodb.client.connection-max=250
# turn off sdk retries
storage.dynamodb.client.retry-error-max=0

# DynamoDB client configuration: thread pool
storage.dynamodb.client.executor.core-pool-size=25
# Do not need more threads in thread pool than the number of http connections
storage.dynamodb.client.executor.max-pool-size=250
storage.dynamodb.client.executor.keep-alive=600000
storage.dynamodb.client.executor.max-concurrent-operations=1
# should be at least as large as the storage.buffer-size
storage.dynamodb.client.executor.max-queue-length=1024

# 750 read/write capacity units is the maximum equal read and write capacity that can be set on one
# table before it is split into two or more partitions for IOPS. If you will have more than one Rexster
# server accessing the same graph, set the read-rate and write-rate properties commensurately lower
# than the read and write capacity of the backend tables.

storage.dynamodb.stores.edgestore.capacity-read=100
storage.dynamodb.stores.edgestore.capacity-write=100
storage.dynamodb.stores.edgestore.read-rate=100
storage.dynamodb.stores.edgestore.write-rate=100
storage.dynamodb.stores.edgestore.scan-limit=10000

storage.dynamodb.stores.graphindex.capacity-read=100
storage.dynamodb.stores.graphindex.capacity-write=100
storage.dynamodb.stores.graphindex.read-rate=100
storage.dynamodb.stores.graphindex.write-rate=100
storage.dynamodb.stores.graphindex.scan-limit=10000

storage.dynamodb.stores.systemlog.capacity-read=10
storage.dynamodb.stores.systemlog.capacity-write=10
storage.dynamodb.stores.systemlog.read-rate=10
storage.dynamodb.stores.systemlog.write-rate=10
storage.dynamodb.stores.systemlog.scan-limit=10000

storage.dynamodb.stores.titan_ids.capacity-read=10
storage.dynamodb.stores.titan_ids.capacity-write=10
storage.dynamodb.stores.titan_ids.read-rate=10
storage.dynamodb.stores.titan_ids.write-rate=10
storage.dynamodb.stores.titan_ids.scan-limit=10000

storage.dynamodb.stores.system_properties.capacity-read=10
storage.dynamodb.stores.system_properties.capacity-write=10
storage.dynamodb.stores.system_properties.read-rate=10
storage.dynamodb.stores.system_properties.write-rate=10
storage.dynamodb.stores.system_properties.scan-limit=10000

storage.dynamodb.stores.txlog.capacity-read=10
storage.dynamodb.stores.txlog.capacity-write=10
storage.dynamodb.stores.txlog.read-rate=10
storage.dynamodb.stores.txlog.write-rate=10
storage.dynamodb.stores.txlog.scan-limit=10000

# elasticsearch config that is required to run GraphOfTheGods
index.search.backend=elasticsearch
index.search.directory=/tmp/searchindex
index.search.elasticsearch.client-only=false
index.search.elasticsearch.local-mode=true
index.search.elasticsearch.interface=NODE

What are your thoughts on this?

Project stopped building - Could not resolve dependencies for project - No versions available for com.amazonaws:DynamoDBLocal:jar

In the last couple of days, the mvn install of this project has stopped working. I generally operate within Docker containers, so my environment is reasonably well managed and consistent. But yesterday I started getting the error below.

I'm seeing it both inside and outside of Docker containers, and I've noticed others mentioning this problem.

Could it be a coincidence that this was reported?

[ERROR] Failed to execute goal on project dynamodb-titan100-storage-backend: Could not resolve dependencies for project com.amazonaws:dynamodb-titan100-storage-backend:jar:1.0.0: Failed to collect dependencies for [com.amazonaws:aws-java-sdk-dynamodb:jar:[1.10.5.1,2.0.0) (compile), com.amazonaws:DynamoDBLocal:jar:[1.10.5.1,2.0.0) (compile), com.thinkaurelius.titan:titan-core:jar:1.0.0 (compile), com.thinkaurelius.titan:titan-test:jar:1.0.0 (test), com.thinkaurelius.titan:titan-es:jar:1.0.0 (test), com.codahale.metrics:metrics-core:jar:3.0.1 (compile), au.com.bytecode:opencsv:jar:2.4 (compile), com.fasterxml.jackson.datatype:jackson-datatype-json-org:jar:2.5.3 (compile), org.slf4j:slf4j-log4j12:jar:1.7.5 (compile), org.apache.tinkerpop:gremlin-core:jar:3.0.1-incubating (compile), org.apache.tinkerpop:gremlin-groovy:jar:3.0.1-incubating (test), org.apache.tinkerpop:gremlin-test:jar:3.0.1-incubating (test), org.apache.tinkerpop:gremlin-console:jar:3.0.1-incubating (test)]: No versions available for com.amazonaws:DynamoDBLocal:jar:[1.10.5.1,2.0.0) within specified range -> [Help 1]

Any ideas?

Understanding pricing when using DynamoDB Titan

Hi,

this is not an issue report, rather a fuzzy question.

I would like to get a good understanding of the price (in terms of $) of using DynamoDB Titan. For this, I need to understand when DynamoDB Titan does reads and writes. Right now I am pretty clueless.

Ideally I would like to run a testcase which adds some vertices, edges and then does a rather simple traversal and then see how many reads and writes were done. Any ideas of how I can achieve this? Possibly through metrics?
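One way to start measuring this is sketched below, using the metrics keys that appear (commented out) in this backend's sample configuration files; treat the exact values as assumptions:

```properties
# Assumed sketch: enable Titan metrics and the backend's per-call counters.
metrics.enabled=true
metrics.prefix=t
# CSV reporter: logging interval in milliseconds and output directory.
metrics.csv.interval=500
metrics.csv.directory=metrics
# Prefix for the DynamoDB storage backend's own call counters.
storage.dynamodb.metrics-prefix=d
```

With these switches on, the DynamoDB API calls the backend makes should show up as named counters in the CSV files, which you can then compare against the consumed-capacity metrics DynamoDB itself reports in CloudWatch.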

If it turns out I can't extract this information myself, I would very much appreciate a brief explanation of when DynamoDB Titan performs reads and writes.

Thank you!

random data inconsistency when thread using dynamodb-titan-storage-backend library is interrupted in commit phase

Hello,

We have encountered an issue with how this dynamodb-titan-storage-backend library handles interrupts to the parallelMutate method in DynamoDBDelegate.java.
As can be seen in the method (https://github.com/awslabs/dynamodb-titan-storage-backend/blob/1.0.0/src/main/java/com/amazon/titan/diskstorage/dynamodb/DynamoDBDelegate.java#L285), interrupts to this method by Titan are not handled gracefully. As per line 300, "// fail out because titan does not poll this thread for interrupted anywhere".

We are using the Hystrix library (https://github.com/Netflix/Hystrix) to protect our service from overloading. Every Titan transaction runs in a separate Hystrix command in our app. These commands are interrupted on timeout, so if any latency problem occurs, we fail fast. This interruption may occur at various points in a TitanDB transaction. However, when the interrupt occurs, we get the following error from the parallelMutate method (trace shortened here for brevity, as indicated by "..."; the full stack trace can be found at http://pastebin.com/LLzZXkiY):


13:19:13 [ERROR] [c.t.t.g.database.StandardTitanGraph] [] Could not commit transaction [9] due to exception
com.thinkaurelius.titan.core.TitanException: Could not execute operation due to backend exception
    at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:44) ~[titan-core-1.0.0.jar:na]
    at com.thinkaurelius.titan.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:87) ~[titan-core-1.0.0.jar:na]
...
Caused by: com.amazon.titan.diskstorage.dynamodb.BackendRuntimeException: was interrupted during parallelMutate
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.parallelMutate(DynamoDBDelegate.java:301) ~[dynamodb-titan100-storage-backend-1.0.0.jar:na]
...
13:19:13 [INFO ] [c.h.c.c.t.TitanTransactionManager] [] TRANSACTION - doRollback()
13:19:13 [ERROR] [c.h.c.c.t.TitanTransactionManager] [] Commit exception overridden by rollback exception
com.thinkaurelius.titan.core.TitanException: Could not commit transaction due to exception during persistence
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:1363) ~[titan-core-1.0.0.jar:na]
    at com.hybris.caas.category.titan.TitanTransactionManager.doCommit(TitanTransactionManager.java:126) ~[classes/:na]
...
Caused by: com.thinkaurelius.titan.core.TitanException: Could not execute operation due to backend exception
    at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:44) ~[titan-core-1.0.0.jar:na]
    at com.thinkaurelius.titan.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:87) ~[titan-core-1.0.0.jar:na]
...
Caused by: com.thinkaurelius.titan.diskstorage.PermanentBackendException: Permanent exception while executing backend operation CacheMutation
    at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69) ~[titan-core-1.0.0.jar:na]
    at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42) ~[titan-core-1.0.0.jar:na]
    ... 40 common frames omitted
Caused by: com.amazon.titan.diskstorage.dynamodb.BackendRuntimeException: was interrupted during parallelMutate
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.parallelMutate(DynamoDBDelegate.java:301) ~[dynamodb-titan100-storage-backend-1.0.0.jar:na]
...

We did some further investigation and found that if the interruption occurs during the commit phase, it may lead to data inconsistency.
That inconsistency may show up as a mismatch between Titan's composite index and the real data (a vertex is returned by the index but is no longer present in the database), or as an incomplete vertex missing some of the provided properties. In essence, while our thread is interrupted, the data is still inserted into the DynamoDB table during the operation. However, the rollback then fails due to the backend exception caused by the parallelMutate issue mentioned above.

We created a sample project to reproduce the problem, though to be fair it can be quite hard to reproduce consistently.
With this sample project (https://github.com/marcinczernecki/titan-dynamodb-hystrix-issue), we were able to reproduce it once or twice, but we had to run it over and over all day to achieve that. It's a Spring Boot application that creates an index (if it's not present) and runs Hystrix commands that wrap a Titan transaction with a single vertex creation. The command has a configurable timeout (persistenceTimeout), so if you choose a proper value for this timeout, it should be interrupted during the commit phase (depending on your network latency to DynamoDB).
When all commands finish, we check our data consistency within the DynamoDB table. Most of the time it seems to work OK, but sometimes it leaves some inconsistency in the DynamoDB table. The inconsistency may be more likely to occur on the first run, when the tables in DynamoDB don't exist yet (but that's not always the case).

When we use Cassandra as our Titan backend we do not have this problem, and we use the same application and Titan settings there.

While we could potentially increase the persistenceTimeout to a large value, this is not a suitable option for us. Extending the timeout would be contrary to our main goal, which is to help our service stay responsive with Hystrix. Besides, it would likely hurt our application's performance, which may make it impractical for our use case.

Can this method (parallelMutate, and wherever else applicable) please be altered to handle interrupts gracefully?
As it stands, this issue even causes the resulting rollback to fail, leaving our DynamoDB table data inconsistent/out-of-sync with our application. This can result in unexpected data in our tables, "ghost vertices" in our graphs, etc.
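A minimal sketch of the pattern the reporter is asking for — not the library's actual code, just an assumed shape: when interrupted while waiting on a batch of in-flight mutations, keep waiting until the batch drains (so no write is silently half-applied) and restore the interrupt flag for the caller (e.g. a Hystrix timer) instead of throwing away the batch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GracefulInterrupt {

    // Wait for every in-flight future even if this thread is interrupted,
    // then restore the interrupt flag for the caller instead of swallowing it.
    static <T> List<T> awaitAll(List<Future<T>> futures) {
        List<T> results = new ArrayList<>();
        boolean interrupted = false;
        for (Future<T> f : futures) {
            while (true) {
                try {
                    results.add(f.get());
                    break;
                } catch (InterruptedException e) {
                    interrupted = true; // remember the interrupt, but let the batch finish
                } catch (ExecutionException e) {
                    throw new RuntimeException(e.getCause());
                }
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt(); // re-assert the flag for the caller
        }
        return results;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int n = i;
            futures.add(pool.submit(() -> {
                Thread.sleep(50); // stand-in for a DynamoDB mutation
                return n;
            }));
        }
        Thread.currentThread().interrupt(); // simulate a Hystrix timeout interrupt
        System.out.println(awaitAll(futures));    // all four "writes" complete
        System.out.println(Thread.interrupted()); // interrupt flag was preserved
        pool.shutdown();
    }
}
```

The key design choice is that the interrupt is recorded rather than acted on mid-batch, so the backend's view of the data stays consistent and the caller still observes the timeout through the restored flag.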

Many thanks,

  • Krzysztof

Serialization issues with AWS Java SDK 1.10.62

After our build system automatically pulled the latest AWS Java SDK for DynamoDB, 1.10.62, we are facing serious data consistency issues after creating some vertices and edges with the new version.

It looks like this SDK version introduced a new way of JSON marshalling, which seems to cause problems at least in combination with Titan. I am pasting the exception details here in the hope that anyone with deeper insight into the DynamoDB/Titan interaction can make sense of it.

We are using the Titan DynamoDB storage backend from a Java EE application running on the WildFly application server; we faced the issues with both the local DynamoDB and the real cloud service.

With aws-java-sdk 1.10.62, we get the following exception during initialization:
Caused by: java.lang.IllegalArgumentException: Encountered missing datatype registration for number: 1295741633
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
    at com.thinkaurelius.titan.graphdb.database.serialize.StandardSerializer.getDataType(StandardSerializer.java:187)
    at com.thinkaurelius.titan.graphdb.database.serialize.StandardSerializer.readClassAndObject(StandardSerializer.java:267)
    at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration.staticBuffer2Object(KCVSConfiguration.java:250)
    at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration.toMap(KCVSConfiguration.java:183)
    at com.thinkaurelius.titan.diskstorage.configuration.backend.KCVSConfiguration.asReadConfiguration(KCVSConfiguration.java:190)
    at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1422)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:74)
The configuration passed to TitanFactory is basically the property file given in the examples.

After forcing the SDK version back to 1.10.61, the graph can be initialized again; however, Titan throws exceptions on operations that involve loading relations. Some data seems to be corrupted, and trying to delete the vertices that cannot be loaded also fails with the following exception:
java.lang.IllegalArgumentException: Invalid ASCII encoding offset: 654
    at com.thinkaurelius.titan.graphdb.database.serialize.attribute.StringSerializer.read(StringSerializer.java:105)
    at com.thinkaurelius.titan.graphdb.database.serialize.attribute.StringSerializer.read(StringSerializer.java:24)
    at com.thinkaurelius.titan.graphdb.database.serialize.StandardSerializer.readObjectInternal(StandardSerializer.java:241)
    at com.thinkaurelius.titan.graphdb.database.serialize.StandardSerializer.readObject(StandardSerializer.java:229)
    at com.thinkaurelius.titan.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:191)
    at com.thinkaurelius.titan.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:181)
    at com.thinkaurelius.titan.graphdb.database.EdgeSerializer.parseRelation(EdgeSerializer.java:117)
    at com.thinkaurelius.titan.graphdb.database.EdgeSerializer.readRelation(EdgeSerializer.java:60)
    at com.thinkaurelius.titan.graphdb.transaction.RelationConstructor.readRelation(RelationConstructor.java:61)
    at com.thinkaurelius.titan.graphdb.transaction.RelationConstructor$1$1.next(RelationConstructor.java:46)
    at com.thinkaurelius.titan.graphdb.transaction.RelationConstructor$1$1.next(RelationConstructor.java:34)
    at com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.remove(AbstractVertex.java:87)
    at com.thinkaurelius.titan.graphdb.vertices.StandardVertex.remove(StandardVertex.java:92)
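Forcing the SDK back to 1.10.61, as described above, can be expressed in Maven by pinning the dependency instead of relying on the open-ended version range; a hypothetical pom.xml fragment, since the reporter's actual build file isn't shown:

```xml
<!-- Hypothetical fragment: pin the SDK instead of the open-ended
     range [1.10.5.1,2.0.0) used by the storage backend. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <version>1.10.61</version>
</dependency>
```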

Performance related Question

Hi,

We have a very simple model, with a single vertex and a single edge, and use the DynamoDB backend. We have increased the table read capacity.

When we try to get around 3,000 records from the server, it takes 20-25 seconds. Is that normal? Should we make any additional configuration changes?

Regards,
Burak Tugan

Adding support for AWS Elasticsearch as an indexing backend

Currently this library does not support AWS Elasticsearch as an indexing backend, due to the issue documented here and here. AWS ES only provides an HTTP(S) interface, not TCP transport. If an ES client that supports HTTP(S), such as Jest, were used, this library could be used against AWS ES.

Issues programmatically sending script requests over ws

I'm not certain whether this is best posted on the TinkerPop 3 JIRA, the Titan 1.x GitHub, or here, so you'll have to forgive me if this is out of place.

I'm having an immense amount of trouble sending a command via WebSockets using C++. Logically this shouldn't be too hard, though I think the API is changing so drastically that none of the documentation I can find has an accurate explanation of the message structure/format.

I'm using the easywsclient library, and an example of my code (and format) is as follows:

#include <memory>
#include <string>
#include "easywsclient.hpp"

using easywsclient::WebSocket;
using std::string;

std::unique_ptr<WebSocket> ws(WebSocket::from_url("ws://SERVER:8182/"));

string command = "\"[1,2,3,4]\"";
string mime = "\"application/json\"";
string language = "\"gremlin-groovy\"";
string session = "\"\"";
string bindings = "\"\"";
string rebindings = "\"\"";
string op = "\"eval\"";
string processor = "\"org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor\"";
string accept = "\"application/json\"";
string executeHandler;
string request = "\"655BD810-B41E-429D-B78F-3CC5F3B8E9BA\"";

string msg = mime + "|-{" +
    "\"requestId\": " + request + "," +
    "\"op\": " + op + ","  +
    "\"processor\": " + processor + "," +
    "\"args\": {"  +
        "\"gremlin\": " + command + "," +
        "\"bindings\": " + bindings + "," +
        "\"language\": " + language + "," +
        "\"rebindings\": " + rebindings +
    "}" +
"}";

ws->send(msg);
//** then ws->poll and ws->dispatch to retrieve the response

The erroneous response I get is:

{"requestId":null,"status":{"message":"","code":499,"attributes":{}},"result":{"data":"Invalid OpProcessor requested [null]","meta":{}}}

I cannot seem to find any variation of op or processor that returns anything but the above.
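For comparison, here is a sketch of what a Gremlin Server 3.0 request frame usually looks like, based on the TinkerPop driver documentation rather than this repository (verify against your server version): text frames are plain JSON without the mime prefix, processor is the registered short name ("session" for the SessionOpProcessor, or an empty string for sessionless eval) rather than the fully-qualified class name, bindings is an object, and session requests need an args.session UUID. The UUIDs below are placeholders:

```json
{
  "requestId": "655BD810-B41E-429D-B78F-3CC5F3B8E9BA",
  "op": "eval",
  "processor": "session",
  "args": {
    "gremlin": "[1,2,3,4]",
    "language": "gremlin-groovy",
    "bindings": {},
    "session": "00000000-0000-0000-0000-000000000000"
  }
}
```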

Any help would be greatly appreciated!

Titan on DynamoDB

I am trying to work with RDF in Titan on DynamoDB. I have everything working on a local instance, but have been unable to figure out the proper configuration to connect to DynamoDB in AWS. I followed the steps in the AWS Git project documentation for connecting to a local instance. All this was done on a t2.medium instance in AWS running Ubuntu. The instance has an IAM role that gives it access to DynamoDB. I do not want to use access keys to pass credentials, as those would have to be tied to my AWS account, which I don't want.

My most recent configuration attempt looks like:

InstanceProfileCredentialsProvider creds = new InstanceProfileCredentialsProvider();
BaseConfiguration bConf = new BaseConfiguration();
bConf.setProperty("storage.backend","com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager");
bConf.setProperty("storage.dynamodb.client.endpoint", "https://dynamodb.us-west-1.amazonaws.com");
bConf.setProperty("storage.dynamodb.client.credentials", creds);
bConf.setProperty("index.search.backend","elasticsearch");
bConf.setProperty("index.search.directory","/tmp/searchindex");
bConf.setProperty("index.search.elasticsearch.client-only","false");
bConf.setProperty("index.search.elasticsearch.local-mode","true");
bConf.setProperty("index.search.elasticsearch.interface","NODE");

The stacktrace that I get is:

Exception in thread "main" com.thinkaurelius.titan.core.TitanException: Could not open global configuration
    at com.thinkaurelius.titan.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:399)
    at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1277)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
    at com.nggdemo.Sesame_Titan_Dynamo.Initializer.main(Initializer.java:38)
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: DescribeTable_titan_system_properties The security token included in the request is invalid. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: OF3H7MIH8JSBU9PCREQAVPQH3FVV4KQNSO5AEMVJF66Q9ASUAAJG)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.processDynamoDBAPIException(DynamoDBDelegate.java:215)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.describeTable(DynamoDBDelegate.java:637)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.describeTable(DynamoDBDelegate.java:627)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTableAndWaitForActive(DynamoDBDelegate.java:829)
    at com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore.ensureStore(AbstractDynamoDBStore.java:62)
    at com.amazon.titan.diskstorage.dynamodb.MetricStore.ensureStore(MetricStore.java:47)
    at com.amazon.titan.diskstorage.dynamodb.TableNameDynamoDBStoreFactory.create(TableNameDynamoDBStoreFactory.java:52)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:196)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:1)
    at com.thinkaurelius.titan.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:387)
    ... 4 more
Caused by: com.amazonaws.AmazonServiceException: The security token included in the request is invalid. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: OF3H7MIH8JSBU9PCREQAVPQH3FVV4KQNSO5AEMVJF66Q9ASUAAJG)
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1776)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1075)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.describeTable(DynamoDBDelegate.java:635)
    ... 12 more

Hopefully this is enough information without being overwhelming. If you need other information, please let me know.
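The UnrecognizedClientException above suggests the provider object set on storage.dynamodb.client.credentials was not picked up. For comparison, this backend's sample configuration passes credentials as a provider class name plus constructor arguments rather than a provider instance; an instance-profile equivalent might look like the following sketch (verify the keys against the sample dynamodb.properties):

```properties
storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
storage.dynamodb.client.endpoint=https://dynamodb.us-west-1.amazonaws.com
# Provider class name + empty constructor args, instead of a provider object:
storage.dynamodb.client.credentials.class-name=com.amazonaws.auth.InstanceProfileCredentialsProvider
storage.dynamodb.client.credentials.constructor-args=
```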

build error on 'mvn install'

(ubuntu 15.04)
sudo apt-get install maven2
mvn install

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan
[INFO]    task-segment: [install]
[INFO] ------------------------------------------------------------------------
[INFO] [resources:resources {execution: default-resources}]
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error building POM (may not be this project's POM).


Project ID: com.sun.jersey:jersey-project:pom:1.9

Reason: Cannot find parent: net.java:jvnet-parent for project: com.sun.jersey:jersey-project:pom:1.9 for project com.sun.jersey:jersey-project:pom:1.9


[INFO] ------------------------------------------------------------------------
[INFO] For more information, run Maven with the -e switch
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12 seconds
[INFO] Finished at: Sun Aug 23 02:32:18 PDT 2015
[INFO] Final Memory: 50M/333M
[INFO] ------------------------------------------------------------------------

Start dynamodb local fail

It worked without any problem before, but today I started it as usual and got this error. I did not change anything. I started DynamoDB Local like this:
$ mvn test -Pstart-dynamodb-local
and got this error:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[INFO]
[INFO] --- exec-maven-plugin:1.2:exec (default) @ dynamodb-titan100-storage-backend ---
Error: Could not find or load main class com.amazonaws.services.dynamodbv2.local.main.ServerRunner
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.888 s
[INFO] Finished at: 2016-01-08T10:13:42+07:00
[INFO] Final Memory: 14M/157M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec (default) on project dynamodb-titan100-storage-backend: Command execution failed. Process exited with an error: 1(Exit value: 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
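The "Could not find or load main class com.amazonaws.services.dynamodbv2.local.main.ServerRunner" error usually means the DynamoDBLocal jar was never resolved into the local Maven repository, which is consistent with the s3:// transfer warnings above. A minimal diagnostic sketch, assuming the default ~/.m2 repository location and the DynamoDBLocal version named later in this page's logs (1.11.0.1); adjust both to your build:

```shell
#!/bin/sh
# Check whether the DynamoDBLocal jar actually exists in the local Maven
# repository. The path below follows the standard Maven layout for the
# com.amazonaws:DynamoDBLocal artifact; the version is an assumption
# taken from the mvn install log on this page.
check_dynamodblocal_jar() {
  repo="$1"   # local Maven repository root, e.g. "$HOME/.m2/repository"
  jar="$repo/com/amazonaws/DynamoDBLocal/1.11.0.1/DynamoDBLocal-1.11.0.1.jar"
  if [ -f "$jar" ]; then
    echo "found"
  else
    echo "missing"
  fi
}

case "$(check_dynamodblocal_jar "$HOME/.m2/repository")" in
  missing) echo "DynamoDBLocal jar missing; retry with: mvn -U test -Pstart-dynamodb-local" ;;
  found)   echo "DynamoDBLocal jar present; rerun mvn with -X to see why it is not on the classpath" ;;
esac
```

If the jar is missing, `mvn -U` forces Maven to retry the resolutions whose failures were cached, instead of waiting for the update interval mentioned in the warnings to elapse.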

Missing directories after installation

About one month ago I successfully installed dynamodb-titan-storage-backend on my development machine (Mac OS X) using these instructions: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.TitanDB.DownloadingAndRunning.html.

As I try to move to a testing server, I am now unable to perform the install again. I have tried an additional installation on my dev machine, and the same problem occurs there as well.

Result: the dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1 directory is missing most subdirectories.

During installation, mvn install appears to complete successfully (step 2 of the instructions above), but closer inspection of the output shows a number of resources that cannot be accessed:

[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodb-local/release): Cannot access s3://dynamodb-local/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodb-local/snapshot): Cannot access s3://dynamodb-local/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory

Step 5 of the instructions (src/test/resources/install-gremlin-server.sh) produces similar warnings about resources that cannot be accessed. After the "Build Success 13.607s" message, the process stalls for a couple of minutes, then completes with the same warnings. Its output is the second code block below.

Did some resource the server installer is trying to download get moved in the past month?
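The "missing most subdirectories" symptom can be confirmed by checking the unpacked server directory against a typical Gremlin Server layout. This is only a sketch: bin, conf, and lib are assumed subdirectory names based on the standard Gremlin Server distribution, and the server path is the one named in this report.

```shell
#!/bin/sh
# Report which of the expected server subdirectories exist. The names
# bin/conf/lib are assumptions from the standard Gremlin Server layout;
# adjust the list to whatever your install script is supposed to produce.
check_server_layout() {
  dir="$1"
  missing=0
  for d in bin conf lib; do
    if [ -d "$dir/$d" ]; then
      echo "$d: present"
    else
      echo "$d: MISSING"
      missing=1
    fi
  done
  return "$missing"
}

SERVER_DIR="server/dynamodb-titan100-storage-backend-1.0.0-hadoop1"
check_server_layout "$SERVER_DIR" \
  || echo "server directory incomplete -- the failed s3 downloads above are the likely cause"
```

A mostly-MISSING result here, combined with the s3:// transfer warnings, points at the downloads rather than at the unpacking step.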

mvn install output:

$ mvn install
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodb-local/release): Cannot access s3://dynamodb-local/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodb-local/snapshot): Cannot access s3://dynamodb-local/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ dynamodb-titan100-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ dynamodb-titan100-storage-backend ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 58 source files to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/classes
[INFO] /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java uses or overrides a deprecated API.
[INFO] /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java: Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ dynamodb-titan100-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 9 resources
[INFO] 
[INFO] --- maven-dependency-plugin:2.10:copy-dependencies (copy-dependencies) @ dynamodb-titan100-storage-backend ---
[INFO] Copying lucene-sandbox-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-sandbox-4.10.4.jar
[INFO] Copying sqlite4java-win32-x64-1.0.392.dll to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-win32-x64-1.0.392.dll
[INFO] Copying groovy-json-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-json-2.4.1-indy.jar
[INFO] Copying groovy-console-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-console-2.4.1.jar
[INFO] Copying commons-io-2.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-io-2.3.jar
[INFO] Copying log4j-core-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-core-2.1.jar
[INFO] Copying gmetric4j-1.0.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gmetric4j-1.0.3.jar
[INFO] Copying joda-time-2.8.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/joda-time-2.8.1.jar
[INFO] Copying groovy-xml-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-xml-2.4.1.jar
[INFO] Copying mockito-core-1.10.19.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/mockito-core-1.10.19.jar
[INFO] Copying sqlite4java-win32-x86-1.0.392.dll to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-win32-x86-1.0.392.dll
[INFO] Copying jsr305-3.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jsr305-3.0.0.jar
[INFO] Copying jetty-client-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-client-8.1.12.v20130726.jar
[INFO] Copying metrics-graphite-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-graphite-3.0.1.jar
[INFO] Copying commons-math-2.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-math-2.2.jar
[INFO] Copying groovy-swing-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-swing-2.4.1.jar
[INFO] Copying slf4j-log4j12-1.7.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/slf4j-log4j12-1.7.5.jar
[INFO] Copying h2-1.3.171.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/h2-1.3.171.jar
[INFO] Copying snakeyaml-1.15.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/snakeyaml-1.15.jar
[INFO] Copying commons-logging-1.1.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-logging-1.1.1.jar
[INFO] Copying gremlin-console-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-console-3.0.1-incubating.jar
[INFO] Copying javax.servlet-3.0.0.v201112011016.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javax.servlet-3.0.0.v201112011016.jar
[INFO] Copying netty-all-4.0.28.Final.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/netty-all-4.0.28.Final.jar
[INFO] Copying groovy-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-2.4.1.jar
[INFO] Copying hppc-0.7.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hppc-0.7.1.jar
[INFO] Copying javax.json-1.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javax.json-1.0.jar
[INFO] Copying jetty-continuation-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-continuation-8.1.12.v20130726.jar
[INFO] Copying dom4j-1.6.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/dom4j-1.6.1.jar
[INFO] Copying lucene-core-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-core-4.10.4.jar
[INFO] Copying jackson-annotations-2.5.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-annotations-2.5.0.jar
[INFO] Copying hamcrest-all-1.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hamcrest-all-1.3.jar
[INFO] Copying opencsv-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/opencsv-2.4.jar
[INFO] Copying ivy-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ivy-2.3.0.jar
[INFO] Copying antlr4-runtime-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr4-runtime-4.1.jar
[INFO] Copying groovy-jsr223-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-jsr223-2.4.1-indy.jar
[INFO] Copying gremlin-groovy-test-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-groovy-test-3.0.1-incubating.jar
[INFO] Copying commons-lang-2.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-lang-2.6.jar
[INFO] Copying aws-java-sdk-s3-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-s3-1.11.58.jar
[INFO] Copying titan-test-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-test-1.0.0.jar
[INFO] Copying jackson-databind-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-databind-2.5.3.jar
[INFO] Copying metrics-core-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-core-3.0.1.jar
[INFO] Copying log4j-api-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-api-2.1.jar
[INFO] Copying gremlin-shaded-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-shaded-3.0.1-incubating.jar
[INFO] Copying lucene-spatial-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-spatial-4.10.4.jar
[INFO] Copying javassist-3.16.1-GA.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javassist-3.16.1-GA.jar
[INFO] Copying jcabi-log-0.14.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcabi-log-0.14.jar
[INFO] Copying gremlin-groovy-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-groovy-3.0.1-incubating.jar
[INFO] Copying high-scale-lib-1.1.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/high-scale-lib-1.1.4.jar
[INFO] Copying titan-core-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-core-1.0.0.jar
[INFO] Copying DynamoDBLocal-1.11.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/DynamoDBLocal-1.11.0.1.jar
[INFO] Copying guava-18.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/guava-18.0.jar
[INFO] Copying org.abego.treelayout.core-1.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/org.abego.treelayout.core-1.0.1.jar
[INFO] Copying lucene-queryparser-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-queryparser-4.10.4.jar
[INFO] Copying lucene-suggest-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-suggest-4.10.4.jar
[INFO] Copying titan-es-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-es-1.0.0.jar
[INFO] Copying easymock-3.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/easymock-3.1.jar
[INFO] Copying metrics-ganglia-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-ganglia-3.0.1.jar
[INFO] Copying lucene-memory-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-memory-4.10.4.jar
[INFO] Copying gremlin-driver-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-driver-3.0.1-incubating.jar
[INFO] Copying log4j-1.2.17.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-1.2.17.jar
[INFO] Copying jetty-http-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-http-8.1.12.v20130726.jar
[INFO] Copying asm-commons-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/asm-commons-4.1.jar
[INFO] Copying aws-java-sdk-core-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-core-1.11.58.jar
[INFO] Copying libsqlite4java-osx-1.0.392.dylib to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-osx-1.0.392.dylib
[INFO] Copying jackson-datatype-json-org-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-datatype-json-org-2.5.3.jar
[INFO] Copying gprof-0.3.1-groovy-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gprof-0.3.1-groovy-2.4.jar
[INFO] Copying jline-2.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jline-2.12.jar
[INFO] Copying jbcrypt-0.3m.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jbcrypt-0.3m.jar
[INFO] Copying stringtemplate-3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/stringtemplate-3.2.jar
[INFO] Copying jetty-io-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-io-8.1.12.v20130726.jar
[INFO] Copying jetty-util-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-util-8.1.12.v20130726.jar
[INFO] Copying lucene-misc-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-misc-4.10.4.jar
[INFO] Copying xml-apis-1.0.b2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xml-apis-1.0.b2.jar
[INFO] Copying jcl-over-slf4j-1.7.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcl-over-slf4j-1.7.12.jar
[INFO] Copying antlr-2.7.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr-2.7.7.jar
[INFO] Copying groovy-groovysh-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-groovysh-2.4.1-indy.jar
[INFO] Copying elasticsearch-1.5.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/elasticsearch-1.5.1.jar
[INFO] Copying groovy-sql-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-sql-2.4.1-indy.jar
[INFO] Copying httpcore-4.3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/httpcore-4.3.2.jar
[INFO] Copying reflections-0.9.9-RC1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/reflections-0.9.9-RC1.jar
[INFO] Copying javatuples-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javatuples-1.2.jar
[INFO] Copying hamcrest-core-1.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hamcrest-core-1.3.jar
[INFO] Copying antlr-runtime-3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr-runtime-3.2.jar
[INFO] Copying slf4j-api-1.7.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/slf4j-api-1.7.5.jar
[INFO] Copying cglib-nodep-2.2.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/cglib-nodep-2.2.2.jar
[INFO] Copying jcabi-manifests-1.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcabi-manifests-1.1.jar
[INFO] Copying jmespath-java-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jmespath-java-1.11.58.jar
[INFO] Copying jetty-server-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-server-8.1.12.v20130726.jar
[INFO] Copying commons-lang3-3.3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-lang3-3.3.2.jar
[INFO] Copying tinkergraph-gremlin-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/tinkergraph-gremlin-3.0.1-incubating.jar
[INFO] Copying aws-java-sdk-kms-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-kms-1.11.58.jar
[INFO] Copying gremlin-core-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-core-3.0.1-incubating.jar
[INFO] Copying commons-configuration-1.10.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-configuration-1.10.jar
[INFO] Copying commons-collections-3.2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-collections-3.2.1.jar
[INFO] Copying json-lib-2.3-jdk15.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/json-lib-2.3-jdk15.jar
[INFO] Copying commons-codec-1.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-codec-1.7.jar
[INFO] Copying nekohtml-1.9.16.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/nekohtml-1.9.16.jar
[INFO] Copying gbench-0.4.3-groovy-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gbench-0.4.3-groovy-2.4.jar
[INFO] Copying json-20090211_1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/json-20090211_1.jar
[INFO] Copying commons-beanutils-1.8.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-beanutils-1.8.0.jar
[INFO] Copying asm-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/asm-4.1.jar
[INFO] Copying gremlin-test-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-test-3.0.1-incubating.jar
[INFO] Copying ion-java-1.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ion-java-1.0.1.jar
[INFO] Copying commons-cli-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-cli-1.2.jar
[INFO] Copying xercesImpl-2.9.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xercesImpl-2.9.1.jar
[INFO] Copying objenesis-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/objenesis-2.1.jar
[INFO] Copying lucene-analyzers-common-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-analyzers-common-4.10.4.jar
[INFO] Copying jackson-core-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-core-2.5.3.jar
[INFO] Copying groovy-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-2.4.1-indy.jar
[INFO] Copying groovy-templates-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-templates-2.4.1.jar
[INFO] Copying mockito-all-1.8.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/mockito-all-1.8.5.jar
[INFO] Copying http-builder-0.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/http-builder-0.7.jar
[INFO] Copying junit-benchmarks-0.7.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/junit-benchmarks-0.7.0.jar
[INFO] Copying libsqlite4java-linux-i386-1.0.392.so to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-linux-i386-1.0.392.so
[INFO] Copying lucene-join-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-join-4.10.4.jar
[INFO] Copying junit-4.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/junit-4.12.jar
[INFO] Copying jackson-dataformat-cbor-2.6.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-dataformat-cbor-2.6.6.jar
[INFO] Copying ezmorph-1.0.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ezmorph-1.0.6.jar
[INFO] Copying aws-java-sdk-dynamodb-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-dynamodb-1.11.58.jar
[INFO] Copying randomizedtesting-runner-2.0.8.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/randomizedtesting-runner-2.0.8.jar
[INFO] Copying sqlite4java-1.0.392.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-1.0.392.jar
[INFO] Copying lucene-grouping-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-grouping-4.10.4.jar
[INFO] Copying oncrpc-1.0.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/oncrpc-1.0.7.jar
[INFO] Copying xml-resolver-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xml-resolver-1.2.jar
[INFO] Copying lucene-queries-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-queries-4.10.4.jar
[INFO] Copying lucene-highlighter-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-highlighter-4.10.4.jar
[INFO] Copying httpclient-4.3.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/httpclient-4.3.5.jar
[INFO] Copying spatial4j-0.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/spatial4j-0.4.1.jar
[INFO] Copying libsqlite4java-linux-amd64-1.0.392.so to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-linux-amd64-1.0.392.so
[INFO] 
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ dynamodb-titan100-storage-backend ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 54 source files to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ dynamodb-titan100-storage-backend ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ dynamodb-titan100-storage-backend ---
[INFO] Building jar: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dynamodb-titan100-storage-backend-1.0.0.jar
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ dynamodb-titan100-storage-backend ---
[INFO] Installing /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dynamodb-titan100-storage-backend-1.0.0.jar to /Users/patrick/.m2/repository/com/amazonaws/dynamodb-titan100-storage-backend/1.0.0/dynamodb-titan100-storage-backend-1.0.0.jar
[INFO] Installing /Users/patrick/TrashThis/dynamodb-titan-storage-backend/pom.xml to /Users/patrick/.m2/repository/com/amazonaws/dynamodb-titan100-storage-backend/1.0.0/dynamodb-titan100-storage-backend-1.0.0.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.355 s
[INFO] Finished at: 2016-11-21T09:12:10-03:00
[INFO] Final Memory: 44M/414M
[INFO] ------------------------------------------------------------------------

src/test/resources/install-gremlin-server.sh output:

$ src/test/resources/install-gremlin-server.sh
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ dynamodb-titan100-storage-backend ---
[INFO] Deleting /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.422 s
[INFO] Finished at: 2016-11-21T08:50:15-03:00
[INFO] Final Memory: 6M/77M
[INFO] ------------------------------------------------------------------------
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/release was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-release-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Failure to transfer com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from s3://dynamodblocal/snapshot was cached in the local repository, resolution will not be reattempted until the update interval of maven-s3-snapshot-repo has elapsed or updates are forced. Original error: Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodb-local/release): Cannot access s3://dynamodb-local/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodb-local/snapshot): Cannot access s3://dynamodb-local/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ dynamodb-titan100-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ dynamodb-titan100-storage-backend ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 58 source files to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/classes
[INFO] /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java uses or overrides a deprecated API.
[INFO] /Users/patrick/TrashThis/dynamodb-titan-storage-backend/src/main/java/com/amazon/titan/diskstorage/dynamodb/Client.java: Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ dynamodb-titan100-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 9 resources
[INFO] 
[INFO] --- maven-dependency-plugin:2.10:copy-dependencies (copy-dependencies) @ dynamodb-titan100-storage-backend ---
[INFO] Copying lucene-sandbox-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-sandbox-4.10.4.jar
[INFO] Copying sqlite4java-win32-x64-1.0.392.dll to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-win32-x64-1.0.392.dll
[INFO] Copying groovy-json-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-json-2.4.1-indy.jar
[INFO] Copying groovy-console-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-console-2.4.1.jar
[INFO] Copying commons-io-2.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-io-2.3.jar
[INFO] Copying log4j-core-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-core-2.1.jar
[INFO] Copying gmetric4j-1.0.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gmetric4j-1.0.3.jar
[INFO] Copying joda-time-2.8.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/joda-time-2.8.1.jar
[INFO] Copying groovy-xml-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-xml-2.4.1.jar
[INFO] Copying mockito-core-1.10.19.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/mockito-core-1.10.19.jar
[INFO] Copying sqlite4java-win32-x86-1.0.392.dll to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-win32-x86-1.0.392.dll
[INFO] Copying jsr305-3.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jsr305-3.0.0.jar
[INFO] Copying jetty-client-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-client-8.1.12.v20130726.jar
[INFO] Copying metrics-graphite-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-graphite-3.0.1.jar
[INFO] Copying commons-math-2.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-math-2.2.jar
[INFO] Copying groovy-swing-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-swing-2.4.1.jar
[INFO] Copying slf4j-log4j12-1.7.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/slf4j-log4j12-1.7.5.jar
[INFO] Copying h2-1.3.171.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/h2-1.3.171.jar
[INFO] Copying snakeyaml-1.15.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/snakeyaml-1.15.jar
[INFO] Copying commons-logging-1.1.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-logging-1.1.1.jar
[INFO] Copying gremlin-console-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-console-3.0.1-incubating.jar
[INFO] Copying javax.servlet-3.0.0.v201112011016.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javax.servlet-3.0.0.v201112011016.jar
[INFO] Copying netty-all-4.0.28.Final.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/netty-all-4.0.28.Final.jar
[INFO] Copying groovy-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-2.4.1.jar
[INFO] Copying hppc-0.7.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hppc-0.7.1.jar
[INFO] Copying javax.json-1.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javax.json-1.0.jar
[INFO] Copying jetty-continuation-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-continuation-8.1.12.v20130726.jar
[INFO] Copying dom4j-1.6.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/dom4j-1.6.1.jar
[INFO] Copying lucene-core-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-core-4.10.4.jar
[INFO] Copying jackson-annotations-2.5.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-annotations-2.5.0.jar
[INFO] Copying hamcrest-all-1.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hamcrest-all-1.3.jar
[INFO] Copying opencsv-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/opencsv-2.4.jar
[INFO] Copying ivy-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ivy-2.3.0.jar
[INFO] Copying antlr4-runtime-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr4-runtime-4.1.jar
[INFO] Copying groovy-jsr223-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-jsr223-2.4.1-indy.jar
[INFO] Copying gremlin-groovy-test-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-groovy-test-3.0.1-incubating.jar
[INFO] Copying commons-lang-2.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-lang-2.6.jar
[INFO] Copying aws-java-sdk-s3-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-s3-1.11.58.jar
[INFO] Copying titan-test-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-test-1.0.0.jar
[INFO] Copying jackson-databind-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-databind-2.5.3.jar
[INFO] Copying metrics-core-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-core-3.0.1.jar
[INFO] Copying log4j-api-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-api-2.1.jar
[INFO] Copying gremlin-shaded-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-shaded-3.0.1-incubating.jar
[INFO] Copying lucene-spatial-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-spatial-4.10.4.jar
[INFO] Copying javassist-3.16.1-GA.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javassist-3.16.1-GA.jar
[INFO] Copying jcabi-log-0.14.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcabi-log-0.14.jar
[INFO] Copying gremlin-groovy-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-groovy-3.0.1-incubating.jar
[INFO] Copying high-scale-lib-1.1.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/high-scale-lib-1.1.4.jar
[INFO] Copying titan-core-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-core-1.0.0.jar
[INFO] Copying DynamoDBLocal-1.11.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/DynamoDBLocal-1.11.0.1.jar
[INFO] Copying guava-18.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/guava-18.0.jar
[INFO] Copying org.abego.treelayout.core-1.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/org.abego.treelayout.core-1.0.1.jar
[INFO] Copying lucene-queryparser-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-queryparser-4.10.4.jar
[INFO] Copying lucene-suggest-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-suggest-4.10.4.jar
[INFO] Copying titan-es-1.0.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/titan-es-1.0.0.jar
[INFO] Copying easymock-3.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/easymock-3.1.jar
[INFO] Copying metrics-ganglia-3.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/metrics-ganglia-3.0.1.jar
[INFO] Copying lucene-memory-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-memory-4.10.4.jar
[INFO] Copying gremlin-driver-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-driver-3.0.1-incubating.jar
[INFO] Copying log4j-1.2.17.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/log4j-1.2.17.jar
[INFO] Copying jetty-http-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-http-8.1.12.v20130726.jar
[INFO] Copying asm-commons-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/asm-commons-4.1.jar
[INFO] Copying aws-java-sdk-core-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-core-1.11.58.jar
[INFO] Copying libsqlite4java-osx-1.0.392.dylib to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-osx-1.0.392.dylib
[INFO] Copying jackson-datatype-json-org-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-datatype-json-org-2.5.3.jar
[INFO] Copying gprof-0.3.1-groovy-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gprof-0.3.1-groovy-2.4.jar
[INFO] Copying jline-2.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jline-2.12.jar
[INFO] Copying jbcrypt-0.3m.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jbcrypt-0.3m.jar
[INFO] Copying stringtemplate-3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/stringtemplate-3.2.jar
[INFO] Copying jetty-io-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-io-8.1.12.v20130726.jar
[INFO] Copying jetty-util-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-util-8.1.12.v20130726.jar
[INFO] Copying lucene-misc-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-misc-4.10.4.jar
[INFO] Copying xml-apis-1.0.b2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xml-apis-1.0.b2.jar
[INFO] Copying jcl-over-slf4j-1.7.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcl-over-slf4j-1.7.12.jar
[INFO] Copying antlr-2.7.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr-2.7.7.jar
[INFO] Copying groovy-groovysh-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-groovysh-2.4.1-indy.jar
[INFO] Copying elasticsearch-1.5.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/elasticsearch-1.5.1.jar
[INFO] Copying groovy-sql-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-sql-2.4.1-indy.jar
[INFO] Copying httpcore-4.3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/httpcore-4.3.2.jar
[INFO] Copying reflections-0.9.9-RC1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/reflections-0.9.9-RC1.jar
[INFO] Copying javatuples-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/javatuples-1.2.jar
[INFO] Copying hamcrest-core-1.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/hamcrest-core-1.3.jar
[INFO] Copying antlr-runtime-3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/antlr-runtime-3.2.jar
[INFO] Copying slf4j-api-1.7.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/slf4j-api-1.7.5.jar
[INFO] Copying cglib-nodep-2.2.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/cglib-nodep-2.2.2.jar
[INFO] Copying jcabi-manifests-1.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jcabi-manifests-1.1.jar
[INFO] Copying jmespath-java-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jmespath-java-1.11.58.jar
[INFO] Copying jetty-server-8.1.12.v20130726.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jetty-server-8.1.12.v20130726.jar
[INFO] Copying commons-lang3-3.3.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-lang3-3.3.2.jar
[INFO] Copying tinkergraph-gremlin-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/tinkergraph-gremlin-3.0.1-incubating.jar
[INFO] Copying aws-java-sdk-kms-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-kms-1.11.58.jar
[INFO] Copying gremlin-core-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-core-3.0.1-incubating.jar
[INFO] Copying commons-configuration-1.10.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-configuration-1.10.jar
[INFO] Copying commons-collections-3.2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-collections-3.2.1.jar
[INFO] Copying json-lib-2.3-jdk15.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/json-lib-2.3-jdk15.jar
[INFO] Copying commons-codec-1.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-codec-1.7.jar
[INFO] Copying nekohtml-1.9.16.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/nekohtml-1.9.16.jar
[INFO] Copying gbench-0.4.3-groovy-2.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gbench-0.4.3-groovy-2.4.jar
[INFO] Copying json-20090211_1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/json-20090211_1.jar
[INFO] Copying commons-beanutils-1.8.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-beanutils-1.8.0.jar
[INFO] Copying asm-4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/asm-4.1.jar
[INFO] Copying gremlin-test-3.0.1-incubating.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/gremlin-test-3.0.1-incubating.jar
[INFO] Copying ion-java-1.0.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ion-java-1.0.1.jar
[INFO] Copying commons-cli-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/commons-cli-1.2.jar
[INFO] Copying xercesImpl-2.9.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xercesImpl-2.9.1.jar
[INFO] Copying objenesis-2.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/objenesis-2.1.jar
[INFO] Copying lucene-analyzers-common-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-analyzers-common-4.10.4.jar
[INFO] Copying jackson-core-2.5.3.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-core-2.5.3.jar
[INFO] Copying groovy-2.4.1-indy.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-2.4.1-indy.jar
[INFO] Copying groovy-templates-2.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/groovy-templates-2.4.1.jar
[INFO] Copying mockito-all-1.8.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/mockito-all-1.8.5.jar
[INFO] Copying http-builder-0.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/http-builder-0.7.jar
[INFO] Copying junit-benchmarks-0.7.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/junit-benchmarks-0.7.0.jar
[INFO] Copying libsqlite4java-linux-i386-1.0.392.so to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-linux-i386-1.0.392.so
[INFO] Copying lucene-join-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-join-4.10.4.jar
[INFO] Copying junit-4.12.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/junit-4.12.jar
[INFO] Copying jackson-dataformat-cbor-2.6.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/jackson-dataformat-cbor-2.6.6.jar
[INFO] Copying ezmorph-1.0.6.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/ezmorph-1.0.6.jar
[INFO] Copying aws-java-sdk-dynamodb-1.11.58.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/aws-java-sdk-dynamodb-1.11.58.jar
[INFO] Copying randomizedtesting-runner-2.0.8.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/randomizedtesting-runner-2.0.8.jar
[INFO] Copying sqlite4java-1.0.392.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/sqlite4java-1.0.392.jar
[INFO] Copying lucene-grouping-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-grouping-4.10.4.jar
[INFO] Copying oncrpc-1.0.7.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/oncrpc-1.0.7.jar
[INFO] Copying xml-resolver-1.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/xml-resolver-1.2.jar
[INFO] Copying lucene-queries-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-queries-4.10.4.jar
[INFO] Copying lucene-highlighter-4.10.4.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/lucene-highlighter-4.10.4.jar
[INFO] Copying httpclient-4.3.5.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/httpclient-4.3.5.jar
[INFO] Copying spatial4j-0.4.1.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/spatial4j-0.4.1.jar
[INFO] Copying libsqlite4java-linux-amd64-1.0.392.so to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dependencies/libsqlite4java-linux-amd64-1.0.392.so
[INFO] 
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ dynamodb-titan100-storage-backend ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 54 source files to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ dynamodb-titan100-storage-backend ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ dynamodb-titan100-storage-backend ---
[INFO] Building jar: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dynamodb-titan100-storage-backend-1.0.0.jar
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ dynamodb-titan100-storage-backend ---
[INFO] Installing /Users/patrick/TrashThis/dynamodb-titan-storage-backend/target/dynamodb-titan100-storage-backend-1.0.0.jar to /Users/patrick/.m2/repository/com/amazonaws/dynamodb-titan100-storage-backend/1.0.0/dynamodb-titan100-storage-backend-1.0.0.jar
[INFO] Installing /Users/patrick/TrashThis/dynamodb-titan-storage-backend/pom.xml to /Users/patrick/.m2/repository/com/amazonaws/dynamodb-titan100-storage-backend/1.0.0/dynamodb-titan100-storage-backend-1.0.0.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.607 s
[INFO] Finished at: 2016-11-21T08:50:30-03:00
[INFO] Final Memory: 45M/323M
[INFO] ------------------------------------------------------------------------
~/TrashThis/dynamodb-titan-storage-backend/server ~/TrashThis/dynamodb-titan-storage-backend
mkdir: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs: File exists
src/test/resources/install-gremlin-server.sh: line 59: pushd: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/lib: No such file or directory
mv: rename joda-time-1.6.2.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs/joda-time-1.6.2.jar: No such file or directory
mv: rename jackson-annotations-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs/jackson-annotations-2.3.0.jar: No such file or directory
mv: rename jackson-core-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs/jackson-core-2.3.0.jar: No such file or directory
mv: rename jackson-databind-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs/jackson-databind-2.3.0.jar: No such file or directory
mv: rename jackson-datatype-json-org-2.3.0.jar to /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/badlibs/jackson-datatype-json-org-2.3.0.jar: No such file or directory
~/TrashThis/dynamodb-titan-storage-backend
cp: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/gremlin-server.yaml: No such file or directory
cp: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/gremlin-server-local.yaml: No such file or directory
cp: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/dynamodb.properties: No such file or directory
cp: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/dynamodb-local.properties: No such file or directory
cp: /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/bin/gremlin-server-service.sh: No such file or directory

Change directories to the server root:
cd server/dynamodb-titan100-storage-backend-1.0.0-hadoop1

Start Gremlin Server against us-east-1 with the following command (uses the default credential provider chain):
bin/gremlin-server.sh /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/gremlin-server.yaml

Start Gremlin Server against DynamoDB Local with the following command (remember to start DynamoDB Local first with mvn test -Pstart-dynamodb-local):
bin/gremlin-server.sh /Users/patrick/TrashThis/dynamodb-titan-storage-backend/server/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/gremlin-server-local.yaml

Connect to Gremlin Server using the Gremlin console:
bin/gremlin.sh

Connect to the graph on Gremlin Server:
:remote connect tinkerpop.server conf/remote.yaml

zip error: Nothing to do! (try: zip -rq dynamodb-titan100-storage-backend-1.0.0-hadoop1.zip . -i dynamodb-titan100-storage-backend-1.0.0-hadoop1)
src/test/resources/install-gremlin-server.sh: line 93: popd: directory stack empty

Add DynamoDB to Maven Central

I'm excited about using TitanDB backed by DynamoDB; however, I'm unable to get the project up and running. When running `mvn install`, DynamoDB Local cannot be found.

No versions available for com.amazonaws:DynamoDBLocal:jar:[1.10, 2.0.0) within specified range

Can we expect DynamoDB Local to be added to Maven Central anytime soon?
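For reference, the resolution failure above comes from a dependency declared with an open-ended version range. A sketch of what that declaration looks like (the exact coordinates and surrounding pom.xml content may differ from the project's):

```xml
<!-- Sketch: a version-range dependency like this is what Maven tries to
     resolve; ranges require maven-metadata.xml from a repository, which is
     why the build fails when DynamoDBLocal is not in any reachable repo. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>DynamoDBLocal</artifactId>
  <version>[1.10, 2.0.0)</version>
</dependency>
```

A workaround, until the artifact is on Maven Central, is adding AWS's DynamoDB Local repository to the pom or installing the jar into the local repository by hand.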

Titan DynamoDB does not release all acquired locks on commit (via gremlin)

This issue is explained in detail on Stack Overflow. Something appears to be wrong in the DynamoDB backend specifically (the BerkeleyDB backend looks OK).

The problem is that a lock is not released at the correct time, so when subsequent transactions run you get:
tx 0x705eafda280e already locked key-column ( 8- 0- 0- 0- 0- 0- 0-128, 80-160) when tx 0x70629e1d56bf tried to lock

When the offending transaction is run, I can see in the logs that 3 locks are acquired when opening the transaction:

titan_server_1 | 120479 [gremlin-server-exec-3] TRACE com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore - acquiring lock on ( 8- 0- 0- 0- 0- 0- 0-128, 80-160) at 123552624951495
titan_server_1 | 120489 [gremlin-server-exec-3] TRACE com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore - acquiring lock on ( 6-137-160- 48- 46- 48- 46-177, 0) at 123552635424334
titan_server_1 | 120489 [gremlin-server-exec-3] TRACE com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore - acquiring lock on ( 6-137-160- 48- 46- 48- 46-178, 0) at 123552635704705

...but only 2 are released when committing it:

titan_server_1 | 120722 [gremlin-server-exec-3] DEBUG com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreTransaction - commit id:0x705eafda280e
titan_server_1 | 120722 [gremlin-server-exec-3] TRACE com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore - Expiring ( 6-137-160- 48- 46- 48- 46-177, 0) in tx 0x705eafda280e because of EXPLICIT
titan_server_1 | 120722 [gremlin-server-exec-3] TRACE com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore - Expiring ( 6-137-160- 48- 46- 48- 46-178, 0) in tx 0x705eafda280e because of EXPLICIT
titan_server_1 | 120722 [gremlin-server-exec-3] DEBUG org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor - Preparing to iterate results from - RequestMessage{, requestId=09f27811-dcc3-4e53-a749-22828d34997f, op='eval', processor='', args={gremlin=g.V().hasLabel("databaseMetadata").has("version", "0.0.1").property("version", "0.0.2").next();g.tx().commit();, batchSize=64}} - in thread [gremlin-server-exec-3]

Could someone please take a look at this? An indication that it's being investigated would be appreciated, as I get the impression this project doesn't receive much attention (apologies if that's not the case).
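The invariant the logs above are expected to show (every lock acquired in a transaction is expired at commit) can be expressed as a small sketch. `TxLockTracker` and its method names are illustrative only, not the backend's actual `AbstractDynamoDBStore` API:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: track every key-column a transaction locks, and
// release them all at commit. The expected invariant is that the number of
// locks released on commit equals the number acquired, leaving none held.
final class TxLockTracker {
    private final Set<String> held = new LinkedHashSet<>();

    // Record a lock acquired by this transaction.
    void acquire(String keyColumn) {
        held.add(keyColumn);
    }

    // Release (expire) every held lock; returns the set that was released.
    Set<String> releaseAllOnCommit() {
        Set<String> released = new LinkedHashSet<>(held);
        held.clear();
        return released;
    }

    int heldCount() {
        return held.size();
    }
}
```

A regression test built around a tracker like this would catch the mismatch shown above, where three locks are acquired but only two "Expiring" lines appear at commit.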

Problem starting Rexster

I'm able to get the Gremlin command line working, and I was moving on to the Rexster server next, but I keep getting hung up on the error below. It looks like it's related to Jackson (which I updated to 2.5.4 to get the command line working). Does anyone have an idea how to resolve the dump below?

sh-3.2# server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/bin/rexster.sh --start -c /Users/ericodom/Projects/titan-dynamodb/src/test/resources/rexster-local.xml
/Users/ericodom/Projects/titan-dynamodb
0 [main] INFO com.tinkerpop.rexster.Application - .:Welcome to Rexster:.
68 [main] INFO com.tinkerpop.rexster.server.RexsterProperties - Using [/Users/ericodom/Projects/titan-dynamodb/src/test/resources/rexster-local.xml] as configuration source.
74 [main] INFO com.tinkerpop.rexster.Application - Rexster is watching [/Users/ericodom/Projects/titan-dynamodb/src/test/resources/rexster-local.xml] for change.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/ericodom/Projects/titan-dynamodb/server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/ericodom/Projects/titan-dynamodb/server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/ext/dynamodb-titan054-storage-backend/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
289 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - Could not load graph v054. Please check the XML configuration.
289 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
com.tinkerpop.rexster.config.GraphConfigurationException: GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:142)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:97)
at com.tinkerpop.rexster.Application.main(Application.java:189)
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:33)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:124)
... 5 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
... 12 more
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.enable([Lcom/fasterxml/jackson/core/JsonParser$Feature;)Lcom/fasterxml/jackson/databind/ObjectMapper;
at com.amazonaws.internal.config.InternalConfig.<clinit>(InternalConfig.java:43)
at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:304)
at com.amazonaws.util.VersionInfoUtils.userAgent(VersionInfoUtils.java:139)
at com.amazonaws.util.VersionInfoUtils.initializeUserAgent(VersionInfoUtils.java:134)
at com.amazonaws.util.VersionInfoUtils.getUserAgent(VersionInfoUtils.java:95)
at com.amazonaws.ClientConfiguration.<clinit>(ClientConfiguration.java:53)
at com.amazon.titan.diskstorage.dynamodb.Constants.<clinit>(Constants.java:114)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.getPort(DynamoDBStoreManager.java:66)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:83)
... 17 more
291 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
java.lang.IllegalArgumentException: Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:33)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:124)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:97)
at com.tinkerpop.rexster.Application.main(Application.java:189)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
... 12 more
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.enable([Lcom/fasterxml/jackson/core/JsonParser$Feature;)Lcom/fasterxml/jackson/databind/ObjectMapper;
at com.amazonaws.internal.config.InternalConfig.<clinit>(InternalConfig.java:43)
at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:304)
at com.amazonaws.util.VersionInfoUtils.userAgent(VersionInfoUtils.java:139)
at com.amazonaws.util.VersionInfoUtils.initializeUserAgent(VersionInfoUtils.java:134)
at com.amazonaws.util.VersionInfoUtils.getUserAgent(VersionInfoUtils.java:95)
at com.amazonaws.ClientConfiguration.<clinit>(ClientConfiguration.java:53)
at com.amazon.titan.diskstorage.dynamodb.Constants.<clinit>(Constants.java:114)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.getPort(DynamoDBStoreManager.java:66)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:83)
... 17 more
298 [main] INFO com.tinkerpop.rexster.server.metrics.HttpReporterConfig - Configured HTTP Metric Reporter.
1155 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - HTTP/REST thread pool configuration: kernal[4 / 4] worker[8 / 8]
1156 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for HTTP/REST.
1214 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Rexster Server running on: [http://localhost:8182]
1214 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for RexPro.
1214 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro thread pool configuration: kernal[4 / 4] worker[8 / 8]
1216 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Rexster configured with no security.
1217 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro Server bound to [0.0.0.0:8184]
1223 [main] INFO com.tinkerpop.rexster.server.ShutdownManager - Bound shutdown socket to /127.0.0.1:8183. Starting listener thread for shutdown requests.

TimeoutException when adding new vertices during waitForIDBlockGetter

Hi,

I am seeing the following timeout exception when trying to add new vertices to my Titan graph. This happens against the real DynamoDB service (https://dynamodb.us-west-2.amazonaws.com), not DynamoDB Local. I am using com.amazonaws:dynamodb-titan100-storage-backend:1.0.0.

Any ideas what is causing this?

Thanks,
Ingo

ERROR [2015-12-15 22:04:17,220] io.dropwizard.jersey.errors.LoggingExceptionMapper: Error handling a request: 37c9f0de45e92a75
! java.util.concurrent.TimeoutException: null
! at java.util.concurrent.FutureTask.get(FutureTask.java:205) ~[na:1.8.0_65]
! at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:129) ~[app.jar:1.0.0-SNAPSHOT]
! ... 75 common frames omitted
! Causing: com.thinkaurelius.titan.core.TitanException: ID block allocation on partition(3)-namespace(0) timed out in 2.000 min
! at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:150) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:172) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:198) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:320) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:169) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:140) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.assignID(StandardTitanGraph.java:437) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addVertex(StandardTitanTx.java:507) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addVertex(StandardTitanTx.java:525) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.addVertex(StandardTitanTx.java:521) ~[app.jar:1.0.0-SNAPSHOT]
! at com.thinkaurelius.titan.graphdb.tinkerpop.TitanBlueprintsGraph.addVertex(TitanBlueprintsGraph.java:150) ~[app.jar:1.0.0-SNAPSHOT]

ClassNotFoundException: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex

Hi,

I am trying to configure DynamoDB Local as backend for my Titan database. Up until now I have been using BerkeleyDB.

My build.gradle file looks like this:

plugins {
  id 'groovy'
  id 'com.github.johnrengelman.shadow' version '1.2.1'
  id 'net.saliman.cobertura' version '2.2.7'
}

archivesBaseName = '....'
apply plugin: 'groovy'

repositories {
    mavenCentral()
    jcenter()
    flatDir {
        dirs 'lib'
    }
    maven {
        url "https://jitpack.io"
    }
    maven {
        url "http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release"
    }
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.5'
    compile 'commons-configuration:commons-configuration:1.10'
    compile 'com.thinkaurelius.titan:titan-core:1.0.0'
    compile 'com.thinkaurelius.titan:titan-berkeleyje:1.0.0'
    compile 'com.amazonaws:dynamodb-titan100-storage-backend:1.0.0'
    compile 'com.amazonaws:aws-java-sdk-dynamodb:1.10.45'
    compile 'com.amazonaws:DynamoDBLocal:1.10.20'
    compile fileTree(dir: 'lib', include: ['*.jar'])
    testCompile "org.spockframework:spock-core:1.0-groovy-2.4"
    testRuntime 'org.objenesis:objenesis:2.2'
    testRuntime "com.github.cglib.cglib:cglib-nodep:5503bcca74"
}

I configure my Graph instance like this: TitanFactory.open("dynamodblocal.properties"), where that properties file is basically this file (https://github.com/awslabs/dynamodb-titan-storage-backend/blob/1.0.0/src/test/resources/dynamodb-local.properties) except that I have removed the last 5 lines, which relate to index search.

I do not want to perform any index search, and thus do not need ElasticSearch; that is why I removed those lines from the properties file.
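For reference, a minimal configuration along these lines should be all that remains after stripping the index lines (backend class and endpoint taken from the repository's DynamoDB Local examples; the file name is simply whatever is passed to TitanFactory.open):

```properties
# Hypothetical minimal dynamodblocal.properties with no index backend configured
storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
storage.dynamodb.client.endpoint=http://localhost:4567
```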

When I run my application, an exception is thrown when opening the properties file:

java.lang.IllegalArgumentException: Could not find implementation class: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
        at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:47)
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)
        at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:460)
        at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:147)
        at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1805)
        at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:123)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:62)
        at com.....Titan.getInstance(Titan.groovy:63)
        at com.....node.Control.<init>(Control.groovy:25)
        at com.....node.ControlSpec.$spock_initializeFields(ControlSpec.groovy:34)

        Caused by:
        java.lang.ClassNotFoundException: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
            at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
            at java.lang.Class.forName(Class.java:264)
            at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:42)

I do not understand why the ElasticSearchIndex class is fetched, since I have not specified it anywhere.

Just to verify that this doesn't have to do with my code, I have changed the properties file to the following:

storage.backend=berkeleyje
storage.directory=/tmp/foo

And this works just fine.

I can avoid the exception by adding titan-es to my build.gradle file, but I would like to avoid that as I am not using ElasticSearch.

Any ideas how I can avoid including titan-es in my project?
Thanks!

This traverser does not support merging: org.apache.tinkerpop.gremlin.process.traversal.traverser.O_Traverser

On my EC2 instance, executing steps 12 and 13 of the README.md's Getting Started section produces the following error:

gremlin> :> g.V().has('comic-book', 'AVF 4').in('appeared').values('character').order()
This traverser does not support merging: org.apache.tinkerpop.gremlin.process.traversal.traverser.O_Traverser
Display stack trace? [yN] y
org.apache.tinkerpop.gremlin.groovy.plugin.RemoteException: This traverser does not support merging: org.apache.tinkerpop.gremlin.process.traversal.traverser.O_Traverser
	at org.apache.tinkerpop.gremlin.console.groovy.plugin.DriverRemoteAcceptor.submit(DriverRemoteAcceptor.java:116)
	at org.apache.tinkerpop.gremlin.console.commands.SubmitCommand.execute(SubmitCommand.groovy:41)
	at org.codehaus.groovy.tools.shell.Shell.execute(Shell.groovy:101)
	at org.codehaus.groovy.tools.shell.Groovysh.super$2$execute(Groovysh.groovy)
	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:130)
	at org.codehaus.groovy.tools.shell.Groovysh.executeCommand(Groovysh.groovy:254)
	at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:153)
	at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:119)
	at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:94)
	at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:130)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:150)
	at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:123)
	at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:58)
	at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:130)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:150)
	at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:82)
	at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
	at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:144)
	at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
	at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:303)

Signature Error moving to production storage on DynamoDB

After setting up credentials for access in us-east-1 and following the instructions to switch storage from DynamoDB Local to the production DynamoDB service, I get the following error when trying to open a graph:

AmazonServiceException The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

The Canonical String for this request should have been
'POST
/

content-length:39
content-type:application/x-amz-json-1.0
host:dynamodb.us-east-1.amazonaws.com
user-agent:aws-sdk-java/1.10.26 Mac_OS_X/10.10.5 Java_HotSpot(TM)_64-Bit_Server_VM/25.40-b25/1.8.0_40
x-amz-date:20151015T054332Z
x-amz-target:DynamoDB_20120810.DescribeTable

content-length;content-type;host;user-agent;x-amz-date;x-amz-target
bexxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9d'

The String-to-Sign should have been
'AWS4-HMAC-SHA256
20151015T054332Z
20151015/us-east-1/dynamodb/aws4_request
c5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx94'
 (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: InvalidSignatureException; Request ID: 47XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXJG)  com.amazonaws.http.AmazonHttpClient.handleErrorResponse (AmazonHttpClient.java:1181)

How to add storage level caching between Titan and DynamoDB?

[This was also posted on http://stackoverflow.com/q/34623041/1769636]

I would like to introduce a read/write-through cache between DynamoDB and my application to store Titan DB query results, vertices, and edges.

I see two solutions to this:
1. Implicit caching done directly by the Titan/DynamoDB library. Classes like ParallelScanner could be changed to read from AWS ElastiCache first; the change would have to cover both read and write operations to ensure consistency.
2. Explicit caching done by the application before invoking the Titan/Gremlin API.
The first option seems the more fine-grained, cross-cutting, and generic of the two.

Does something like this already exist, perhaps for other storage backends?
Is there a reason it doesn't? Graph database applications tend to be very read-intensive, so cross-instance caching seems like a significant feature for speeding up queries.
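As a sketch of the second (explicit) option: a read-/write-through layer can be quite small. Here the loader function stands in for a Titan/Gremlin query and the in-memory map for an external cache such as ElastiCache; all names are illustrative, none of this is backend API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.Function;

// Illustrative application-level read-/write-through cache. The loader is
// invoked only on a cache miss; writes go to the backing store first so the
// cache never holds a value the store does not.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for a Gremlin/Titan query

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Serve from cache, falling back to the loader on a miss (read-through).
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    // Write-through: update the backing store, then the cache.
    public void put(K key, V value, BiConsumer<K, V> writer) {
        writer.accept(key, value);
        cache.put(key, value);
    }
}
```

Because the wrapper never touches backend internals, the same pattern would work in front of any storage backend, at the cost of the cross-cutting coverage the implicit option would give.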

Thanks,
Ingo

g = TitanFactory.open(conf) throws an error

I am trying to load a subset of the Marvel Universe Social Graph and am stuck on step 5.
I assume that by 'Titan DynamoDB Storage Backend in the Gremlin shell' you mean the bin/gremlin.sh command that comes with the Titan installation; let me know if that's not the case.

~/projects/titan-0.5.4-hadoop2 bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(_)-oOOo-----
18:35:17 WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
gremlin> conf = new BaseConfiguration()
==>org.apache.commons.configuration.BaseConfiguration@398f3f27
gremlin> conf.setProperty("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager")
==>null
gremlin> conf.setProperty("storage.dynamodb.client.endpoint", "http://localhost:4567")
==>null
gremlin> conf.setProperty("index.search.backend", "elasticsearch")
==>null
gremlin> conf.setProperty("index.search.directory", "/tmp/searchindex")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.client-only", "false")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.local-mode", "true")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.inteface", "NODE")
==>null
gremlin> g = TitanFactory.open(conf)
Could not find implementation class: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
Display stack trace? [yN]
java.lang.IllegalArgumentException: Could not find implementation class: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
        at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:47)
        at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
        at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
        at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
        at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
        at com.thinkaurelius.titan.core.TitanFactory$open.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
        at groovysh_evaluate.run(groovysh_evaluate:84)
        at groovysh_evaluate$run.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
        at groovysh_evaluate$run.call(Unknown Source)
        at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:67)
        at org.codehaus.groovy.tools.shell.Interpreter$evaluate.call(Unknown Source)
        at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:152)
        at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:114)
        at org.codehaus.groovy.tools.shell.Shell$leftShift$0.call(Unknown Source)
        at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:88)

        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1079)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:128)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:148)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:100)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:272)
        at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:52)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:137)
        at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:57)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1079)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:128)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:148)
        at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:66)
        at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.<init>(Console.java:78)
        at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.<init>(Console.java:91)
        at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.main(Console.java:95)
Caused by: java.lang.ClassNotFoundException: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:191)
        at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:42)
        ... 52 more
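The ClassNotFoundException indicates the DynamoDB backend classes are not on the stock Titan distribution's classpath. The tutorial's Gremlin shell is instead launched from a checkout of the storage backend itself, roughly as follows (repository URL and Maven profile name as they appear elsewhere in these issues):

```
git clone https://github.com/awslabs/dynamodb-titan-storage-backend.git
cd dynamodb-titan-storage-backend
mvn install
# starts a Gremlin shell with the backend and its dependencies on the classpath
mvn test -Pstart-gremlin
```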

Opening graph fails while updating provisioned capacity on dynamodb table

The DynamoDB Titan backend doesn't handle a table in an updating state when opening a new graph. This is a defect in DynamoDBDelegate.createTableAndWaitForActive: when it checks the table's state, if the table exists but is not ACTIVE it still attempts createTable, and you get the exception "ResourceInUseException: Table already exists".

It should only attempt createTable if the table does not exist. If the table is in the CREATING or UPDATING state, it should simply wait.

This is particularly an issue when using Titan on EMR: each new task opens a new graph, so if you are provisioning more capacity in DynamoDB at the same time, all the new tasks fail with this exception.
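The proposed fix reduces to a small state machine. A hedged sketch (the enum and method names are illustrative, not the actual DynamoDBDelegate code):

```java
// Hypothetical sketch of the decision the reporter proposes: create the
// table only when it is absent; if it exists but is not yet ACTIVE, wait.
public class TableActivationSketch {
    enum TableState { ABSENT, CREATING, UPDATING, ACTIVE }

    enum Action { CREATE, WAIT, NONE }

    static Action nextAction(TableState state) {
        switch (state) {
            case ABSENT:
                return Action.CREATE; // e.g. DescribeTable threw ResourceNotFoundException
            case CREATING:
            case UPDATING:
                return Action.WAIT;   // table exists: poll until ACTIVE, never re-create
            default:
                return Action.NONE;   // already ACTIVE, nothing to do
        }
    }
}
```

With this ordering, an UPDATING table (capacity being re-provisioned) leads to a wait-and-poll loop instead of a doomed createTable call.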

Question about calculateItemSizeInBytes

Hi team,

I was doing some testing with your methods for calculating the size of a DynamoDB item, and I'm getting results beyond 400 KB, which is DynamoDB's maximum item size.
My data is composed of Strings, Numbers, and Dates only. I suspect the problem is that numbers are always assumed to be 21 bytes in size.
Have you checked this?
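For comparison, DynamoDB documents number storage as approximately one byte per two significant digits plus one byte of overhead, which is far below a flat 21 bytes for small numbers. A sketch of that estimate (the helper name is illustrative, not from the backend):

```java
import java.math.BigDecimal;

// Hedged sketch: estimate a DynamoDB number attribute's storage cost as
// roughly one byte per two significant digits plus one byte, per the
// documented approximation, rather than a flat 21 bytes.
public class NumberSizeSketch {
    static int estimateNumberSizeBytes(BigDecimal n) {
        // significant digits of the value, ignoring sign and trailing zeros
        String significand = n.stripTrailingZeros().unscaledValue().abs().toString();
        // one byte per two digits (rounded up), plus one byte of overhead
        return (significand.length() + 1) / 2 + 1;
    }
}
```

Under this approximation a six-digit number costs about 4 bytes, so a flat 21-byte assumption would overestimate items that are mostly small numbers by a factor of five or more.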

Thanks,
Hugo

Error: Could not find or load main class com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console

I am trying to follow the directions as found here. On Step 4, when I run mvn test -Pstart-gremlin, it fails with Error: Could not find or load main class com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console. This is after a successful mvn install and successfully running DynamoDB Local in another console.

Running mvn -version returns the following:

C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend>mvn -version
Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T06:57:37-05:00)
Maven home: C:\Users\<<<username>>>\code\apache-maven-3.3.3\bin\..
Java version: 1.8.0_05, vendor: Oracle Corporation
Java home: C:\Program Files\Java\jdk1.8.0_05\jre
Default locale: en_US, platform encoding: Cp1252
OS name: "windows 7", version: "6.1", arch: "amd64", family: "dos"

C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend>

Here is the full output of Steps 2 and 4 (Step 3 seems to be working):

C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend>mvn install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
Downloading: http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release/com/amazonaws/aws-java-sdk-dynamodb/maven-metadata.xml
Downloading: https://repo.maven.apache.org/maven2/com/amazonaws/aws-java-sdk-dynamodb/maven-metadata.xml
Downloaded: https://repo.maven.apache.org/maven2/com/amazonaws/aws-java-sdk-dynamodb/maven-metadata.xml (3 KB at 1.2 KB/sec)
Downloading: https://repo.maven.apache.org/maven2/com/amazonaws/DynamoDBLocal/maven-metadata.xml
Downloading: http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release/com/amazonaws/DynamoDBLocal/maven-metadata.xml
Downloaded: http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release/com/amazonaws/DynamoDBLocal/maven-metadata.xml (408 B at 0.6 KB/sec)
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[INFO]
[INFO] --- maven-enforcer-plugin:1.4:enforce (enforce-maven) @ dynamodb-titan054-storage-backend ---
[INFO]
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ dynamodb-titan054-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ dynamodb-titan054-storage-backend ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ dynamodb-titan054-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO]
[INFO] --- maven-dependency-plugin:2.10:copy-dependencies (copy-dependencies) @ dynamodb-titan054-storage-backend ---
[INFO] com.amazonaws:aws-java-sdk-kms:jar:1.10.20 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http:jar:2.1.2 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java-win32-x64:dll:1.0.392 already exists in destination.
[INFO] com.google.protobuf:protobuf-java:jar:2.5.0 already exists in destination.
[INFO] org.apache.lucene:lucene-join:jar:4.8.1 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-core:jar:0.5.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-http-client:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-json:jar:1.9 already exists in destination.
[INFO] joda-time:joda-time:jar:2.8.1 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-base:jar:1.1 already exists in destination.
[INFO] org.apache.ant:ant-launcher:jar:1.8.3 already exists in destination.
[INFO] org.apache.hadoop:hadoop-client:jar:2.2.0 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-flow-rdf:jar:1.1 already exists in destination.
[INFO] org.apache.commons:commons-math:jar:2.2 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-rcm:jar:2.1.2 already exists in destination.
[INFO] org.semarglproject:semargl-core:jar:0.4 already exists in destination.
[INFO] org.semarglproject:semargl-rdfa:jar:0.4 already exists in destination.
[INFO] org.apache.hadoop:hadoop-common:jar:2.2.0 already exists in destination.
[INFO] com.thinkaurelius.groovy-shaded-asm:groovy-shaded-asm:jar:1.8.9 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-flow:jar:1.1 already exists in destination.
[INFO] commons-logging:commons-logging:jar:1.1.1 already exists in destination.
[INFO] org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016 already exists in destination.
[INFO] javax.servlet:javax.servlet-api:jar:3.0.1 already exists in destination.
[INFO] com.sun.jersey:jersey-core:jar:1.9 already exists in destination.
[INFO] org.apache.httpcomponents:httpclient-cache:jar:4.2.5 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-s3:jar:1.10.20 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-core:jar:1.10.20 already exists in destination.
[INFO] org.glassfish.gmbal:gmbal-api-only:jar:3.0.0-b023 already exists in destination.
[INFO] org.apache.lucene:lucene-spatial:jar:4.8.1 already exists in destination.
[INFO] com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9 already exists in destination.
[INFO] org.eclipse.jetty:jetty-continuation:jar:8.1.12.v20130726 already exists in destination.
[INFO] dom4j:dom4j:jar:1.6.1 already exists in destination.
[INFO] com.carrotsearch:hppc:jar:0.6.0 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-client:jar:2.2.0 already exists in destination.
[INFO] org.apache.lucene:lucene-highlighter:jar:4.8.1 already exists in destination.
[INFO] org.glassfish.external:management-api:jar:3.0.0-b012 already exists in destination.
[INFO] au.com.bytecode:opencsv:jar:2.4 already exists in destination.
[INFO] commons-net:commons-net:jar:3.1 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http-server:jar:2.1.2 already exists in destination.
[INFO] org.antlr:antlr4-runtime:jar:4.1 already exists in destination.
[INFO] com.github.jsonld-java:jsonld-java-sesame:jar:0.3 already exists in destination.
[INFO] org.codehaus.jackson:jackson-jaxrs:jar:1.8.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-repository-api:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-repository-sparql:jar:2.7.10 already exists in destination.
[INFO] org.tukaani:xz:jar:1.0 already exists in destination.
[INFO] org.codehaus.groovy:groovy:jar:1.8.9 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-sail-graph:jar:2.5.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-n3:jar:2.7.10 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.2.0 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-framework:jar:2.1.2 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.2.0 already exists in destination.
[INFO] org.easymock:easymock:jar:3.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-api:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-nquads:jar:2.7.10 already exists in destination.
[INFO] org.xerial.snappy:snappy-java:jar:1.0.4.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-datatypes:jar:2.7.10 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-java:jar:2.5.0 already exists in destination.
[INFO] log4j:log4j:jar:1.2.17 already exists in destination.
[INFO] com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-osx:dylib:1.0.392 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-dynamodb:jar:1.10.20 already exists in destination.
[INFO] org.eclipse.jetty:jetty-util:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.lucene:lucene-sandbox:jar:4.8.1 already exists in destination.
[INFO] xml-apis:xml-apis:jar:1.0.b2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-languages:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-trig:jar:2.7.10 already exists in destination.
[INFO] com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 already exists in destination.
[INFO] org.apache.httpcomponents:httpclient:jar:4.3.6 already exists in destination.
[INFO] antlr:antlr:jar:2.7.7 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.2.0 already exists in destination.
[INFO] org.apache.ant:ant:jar:1.8.3 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-core:jar:2.5.0 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-server-common:jar:2.2.0 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-test:jar:2.5.0 already exists in destination.
[INFO] net.fortytwo.sesametools:repository-sail:jar:1.8 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-api:jar:2.2.0 already exists in destination.
[INFO] cglib:cglib-nodep:jar:2.2.2 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-common:jar:2.2.0 already exists in destination.
[INFO] org.restlet.jse:org.restlet:jar:2.1.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-ntriples:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-server:jar:1.9 already exists in destination.
[INFO] org.apache.zookeeper:zookeeper:jar:3.4.5 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.2.0 already exists in destination.
[INFO] com.thoughtworks.paranamer:paranamer:jar:2.3 already exists in destination.
[INFO] com.tinkerpop:pipes:jar:2.5.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-api:jar:2.7.10 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-test:jar:0.5.4 already exists in destination.
[INFO] org.apache.lucene:lucene-grouping:jar:4.8.1 already exists in destination.
[INFO] asm:asm-analysis:jar:3.2 already exists in destination.
[INFO] commons-configuration:commons-configuration:jar:1.6 already exists in destination.
[INFO] commons-digester:commons-digester:jar:1.8 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-binary:jar:2.7.10 already exists in destination.
[INFO] org.apache.lucene:lucene-memory:jar:4.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryalgebra-evaluation:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-trix:jar:2.7.10 already exists in destination.
[INFO] commons-collections:commons-collections:jar:3.2.1 already exists in destination.
[INFO] commons-codec:commons-codec:jar:1.7 already exists in destination.
[INFO] commons-cli:commons-cli:jar:1.2 already exists in destination.
[INFO] org.objenesis:objenesis:jar:2.1 already exists in destination.
[INFO] com.google.guava:guava:jar:15.0 already exists in destination.
[INFO] org.elasticsearch:elasticsearch-hadoop-mr:jar:2.0.0 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-es:jar:0.5.4 already exists in destination.
[INFO] com.carrotsearch:junit-benchmarks:jar:0.7.0 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-linux-i386:so:1.0.392 already exists in destination.
[INFO] concurrent:concurrent:jar:1.3.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-inferencer:jar:2.7.10 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-annotations:jar:2.2.3 already exists in destination.
[INFO] org.codehaus.jettison:jettison:jar:1.3.3 already exists in destination.
[INFO] org.glassfish:javax.servlet:jar:3.1 already exists in destination.
[INFO] com.spatial4j:spatial4j:jar:0.4.1 already exists in destination.
[INFO] commons-httpclient:commons-httpclient:jar:3.1 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-linux-amd64:so:1.0.392 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.2.0 already exists in destination.
[INFO] org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8 already exists in destination.
[INFO] aopalliance:aopalliance:jar:1.0 already exists in destination.
[INFO] org.apache.logging.log4j:log4j-core:jar:2.1 already exists in destination.
[INFO] info.ganglia.gmetric4j:gmetric4j:jar:1.0.3 already exists in destination.
[INFO] org.mockito:mockito-core:jar:1.10.19 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java-win32-x86:dll:1.0.392 already exists in destination.
[INFO] org.apache.lucene:lucene-codecs:jar:4.8.1 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-test:jar:2.5.0 already exists in destination.
[INFO] org.eclipse.jetty:jetty-client:jar:8.1.12.v20130726 already exists in destination.
[INFO] com.codahale.metrics:metrics-graphite:jar:3.0.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-serql:jar:2.7.10 already exists in destination.
[INFO] org.slf4j:slf4j-log4j12:jar:1.7.5 already exists in destination.
[INFO] org.apache.lucene:lucene-queries:jar:4.8.1 already exists in destination.
[INFO] org.apache.lucene:lucene-misc:jar:4.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-rdfjson:jar:2.7.10 already exists in destination.
[INFO] org.apache.lucene:lucene-queryparser:jar:4.8.1 already exists in destination.
[INFO] com.google.inject:guice:jar:3.0 already exists in destination.
[INFO] org.elasticsearch:elasticsearch:jar:1.2.1 already exists in destination.
[INFO] com.esotericsoftware.kryo:kryo:jar:2.22 already exists in destination.
[INFO] org.apache.hadoop:hadoop-auth:jar:2.2.0 already exists in destination.
[INFO] org.glassfish:javax.json:jar:1.0 already exists in destination.
[INFO] com.sun.jersey.contribs:jersey-guice:jar:1.9 already exists in destination.
[INFO] org.openrdf.sesame:sesame-query:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-client:jar:1.9 already exists in destination.
[INFO] org.apache.lucene:lucene-core:jar:4.8.1 already exists in destination.
[INFO] org.apache.lucene:lucene-suggest:jar:4.8.1 already exists in destination.
[INFO] com.google.code.findbugs:jsr305:jar:1.3.9 already exists in destination.
[INFO] asm:asm-util:jar:3.2 already exists in destination.
[INFO] jline:jline:jar:0.9.94 already exists in destination.
[INFO] org.apache.ivy:ivy:jar:2.3.0 already exists in destination.
[INFO] javax.xml.bind:jaxb-api:jar:2.2.2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-sparql:jar:2.7.10 already exists in destination.
[INFO] net.fortytwo:linked-data-sail:jar:1.1 already exists in destination.
[INFO] asm:asm-commons:jar:3.2 already exists in destination.
[INFO] javax.activation:activation:jar:1.1 already exists in destination.
[INFO] commons-io:commons-io:jar:2.1 already exists in destination.
[INFO] org.apache.lucene:lucene-analyzers-common:jar:4.8.1 already exists in destination.
[INFO] com.codahale.metrics:metrics-core:jar:3.0.1 already exists in destination.
[INFO] com.amazonaws:DynamoDBLocal:jar:1.10.5.1 already exists in destination.
[INFO] org.apache.logging.log4j:log4j-api:jar:2.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-api:jar:2.7.10 already exists in destination.
[INFO] javax.inject:javax.inject:jar:1 already exists in destination.
[INFO] com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.2.3 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http-servlet:jar:2.1.2 already exists in destination.
[INFO] commons-beanutils:commons-beanutils-core:jar:1.8.0 already exists in destination.
[INFO] com.github.stephenc.high-scale-lib:high-scale-lib:jar:1.1.4 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-hadoop:jar:0.5.4 already exists in destination.
[INFO] org.abego.treelayout:org.abego.treelayout.core:jar:1.0.1 already exists in destination.
[INFO] org.mockito:mockito-all:jar:1.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-model:jar:2.7.10 already exists in destination.
[INFO] com.codahale.metrics:metrics-ganglia:jar:3.0.1 already exists in destination.
[INFO] org.codehaus.jackson:jackson-core-asl:jar:1.8.8 already exists in destination.
[INFO] stax:stax-api:jar:1.0.1 already exists in destination.
[INFO] asm:asm:jar:3.2 already exists in destination.
[INFO] org.eclipse.jetty:jetty-http:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.codehaus.jackson:jackson-xc:jar:1.8.3 already exists in destination.
[INFO] commons-lang:commons-lang:jar:2.4 already exists in destination.
[INFO] org.eclipse.jetty:jetty-io:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.commons:commons-compress:jar:1.4.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-http-protocol:jar:2.7.10 already exists in destination.
[INFO] org.semarglproject:semargl-rdf:jar:0.4 already exists in destination.
[INFO] com.tinkerpop:frames:jar:2.5.0 already exists in destination.
[INFO] asm:asm-tree:jar:3.2 already exists in destination.
[INFO] org.mortbay.jetty:jetty-util:jar:6.1.26 already exists in destination.
[INFO] org.reflections:reflections:jar:0.9.9-RC1 already exists in destination.
[INFO] org.hamcrest:hamcrest-core:jar:1.3 already exists in destination.
[INFO] org.javassist:javassist:jar:3.18.0-GA already exists in destination.
[INFO] org.apache.hadoop:hadoop-hdfs:jar:2.2.0 already exists in destination.
[INFO] org.slf4j:slf4j-api:jar:1.7.5 already exists in destination.
[INFO] org.openrdf.sesame:sesame-util:jar:2.7.10 already exists in destination.
[INFO] xmlenc:xmlenc:jar:0.52 already exists in destination.
[INFO] org.fusesource.jansi:jansi:jar:1.5 already exists in destination.
[INFO] org.eclipse.jetty:jetty-server:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.commons:commons-lang3:jar:3.3.2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryresultio-sparqlxml:jar:2.7.10 already exists in destination.
[INFO] com.github.jsonld-java:jsonld-java:jar:0.3 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-core:jar:2.2.3 already exists in destination.
[INFO] org.apache.hadoop:hadoop-annotations:jar:2.2.0 already exists in destination.
[INFO] org.apache.httpcomponents:httpcore:jar:4.3.3 already exists in destination.
[INFO] org.semarglproject:semargl-sesame:jar:0.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryalgebra-model:jar:2.7.10 already exists in destination.
[INFO] commons-beanutils:commons-beanutils:jar:1.7.0 already exists in destination.
[INFO] junit:junit:jar:4.11 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-hbase:jar:0.5.4 already exists in destination.
[INFO] com.fasterxml.jackson.datatype:jackson-datatype-json-org:jar:2.2.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-turtle:jar:2.7.10 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-groovy:jar:2.5.0 already exists in destination.
[INFO] colt:colt:jar:1.2.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-memory:jar:2.7.10 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-databind:jar:2.2.3 already exists in destination.
[INFO] com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.2.3 already exists in destination.
[INFO] com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.2.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-rdfxml:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-nativerdf:jar:2.7.10 already exists in destination.
[INFO] net.fortytwo.sesametools:common:jar:1.8 already exists in destination.
[INFO] com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.0.8 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java:jar:1.0.392 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryresultio-api:jar:2.7.10 already exists in destination.
[INFO] org.acplt:oncrpc:jar:1.0.7 already exists in destination.
[INFO] org.json:json:jar:20090211 already exists in destination.
[INFO] org.apache.avro:avro:jar:1.7.4 already exists in destination.
[INFO] com.sun.jersey:jersey-grizzly2:jar:1.9 already exists in destination.
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ dynamodb-titan054-storage-backend ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ dynamodb-titan054-storage-backend ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ dynamodb-titan054-storage-backend ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ dynamodb-titan054-storage-backend ---
[INFO] Installing C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend\target\dynamodb-titan054-storage-backend-1.0.0.jar to C:\Users\<<<username>>>\.m2\repository\com\amazonaws\dynamodb-titan054-storage-backend\1.0.0\dynamodb-titan054-storage-backend-1.0.0.jar
[INFO] Installing C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend\pom.xml to C:\Users\<<<username>>>\.m2\repository\com\amazonaws\dynamodb-titan054-storage-backend\1.0.0\dynamodb-titan054-storage-backend-1.0.0.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.997 s
[INFO] Finished at: 2015-09-22T19:48:29-05:00
[INFO] Final Memory: 26M/113M
[INFO] ------------------------------------------------------------------------

C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend>mvn test -Pstart-gremlin
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Amazon DynamoDB Storage Backend for Titan 1.0.0
[INFO] ------------------------------------------------------------------------
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-release-repo (s3://dynamodblocal/release): Cannot access s3://dynamodblocal/release with type default using the available connector factories: BasicRepositoryConnectorFactory
[WARNING] Could not transfer metadata com.amazonaws:aws-java-sdk-dynamodb/maven-metadata.xml from/to maven-s3-snapshot-repo (s3://dynamodblocal/snapshot): Cannot access s3://dynamodblocal/snapshot with type default using the available connector factories: BasicRepositoryConnectorFactory
[INFO]
[INFO] --- maven-enforcer-plugin:1.4:enforce (enforce-maven) @ dynamodb-titan054-storage-backend ---
[INFO]
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ dynamodb-titan054-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ dynamodb-titan054-storage-backend ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ dynamodb-titan054-storage-backend ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO]
[INFO] --- maven-dependency-plugin:2.10:copy-dependencies (copy-dependencies) @ dynamodb-titan054-storage-backend ---
[INFO] com.amazonaws:aws-java-sdk-kms:jar:1.10.20 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http:jar:2.1.2 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java-win32-x64:dll:1.0.392 already exists in destination.
[INFO] com.google.protobuf:protobuf-java:jar:2.5.0 already exists in destination.
[INFO] org.apache.lucene:lucene-join:jar:4.8.1 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-core:jar:0.5.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-http-client:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-json:jar:1.9 already exists in destination.
[INFO] joda-time:joda-time:jar:2.8.1 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-base:jar:1.1 already exists in destination.
[INFO] org.apache.ant:ant-launcher:jar:1.8.3 already exists in destination.
[INFO] org.apache.hadoop:hadoop-client:jar:2.2.0 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-flow-rdf:jar:1.1 already exists in destination.
[INFO] org.apache.commons:commons-math:jar:2.2 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-rcm:jar:2.1.2 already exists in destination.
[INFO] org.semarglproject:semargl-core:jar:0.4 already exists in destination.
[INFO] org.semarglproject:semargl-rdfa:jar:0.4 already exists in destination.
[INFO] org.apache.hadoop:hadoop-common:jar:2.2.0 already exists in destination.
[INFO] com.thinkaurelius.groovy-shaded-asm:groovy-shaded-asm:jar:1.8.9 already exists in destination.
[INFO] net.fortytwo.ripple:ripple-flow:jar:1.1 already exists in destination.
[INFO] commons-logging:commons-logging:jar:1.1.1 already exists in destination.
[INFO] org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016 already exists in destination.
[INFO] javax.servlet:javax.servlet-api:jar:3.0.1 already exists in destination.
[INFO] com.sun.jersey:jersey-core:jar:1.9 already exists in destination.
[INFO] org.apache.httpcomponents:httpclient-cache:jar:4.2.5 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-s3:jar:1.10.20 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-core:jar:1.10.20 already exists in destination.
[INFO] org.glassfish.gmbal:gmbal-api-only:jar:3.0.0-b023 already exists in destination.
[INFO] org.apache.lucene:lucene-spatial:jar:4.8.1 already exists in destination.
[INFO] com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9 already exists in destination.
[INFO] org.eclipse.jetty:jetty-continuation:jar:8.1.12.v20130726 already exists in destination.
[INFO] dom4j:dom4j:jar:1.6.1 already exists in destination.
[INFO] com.carrotsearch:hppc:jar:0.6.0 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-client:jar:2.2.0 already exists in destination.
[INFO] org.apache.lucene:lucene-highlighter:jar:4.8.1 already exists in destination.
[INFO] org.glassfish.external:management-api:jar:3.0.0-b012 already exists in destination.
[INFO] au.com.bytecode:opencsv:jar:2.4 already exists in destination.
[INFO] commons-net:commons-net:jar:3.1 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http-server:jar:2.1.2 already exists in destination.
[INFO] org.antlr:antlr4-runtime:jar:4.1 already exists in destination.
[INFO] com.github.jsonld-java:jsonld-java-sesame:jar:0.3 already exists in destination.
[INFO] org.codehaus.jackson:jackson-jaxrs:jar:1.8.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-repository-api:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-repository-sparql:jar:2.7.10 already exists in destination.
[INFO] org.tukaani:xz:jar:1.0 already exists in destination.
[INFO] org.codehaus.groovy:groovy:jar:1.8.9 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-sail-graph:jar:2.5.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-n3:jar:2.7.10 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.2.0 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-framework:jar:2.1.2 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.2.0 already exists in destination.
[INFO] org.easymock:easymock:jar:3.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-api:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-nquads:jar:2.7.10 already exists in destination.
[INFO] org.xerial.snappy:snappy-java:jar:1.0.4.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-datatypes:jar:2.7.10 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-java:jar:2.5.0 already exists in destination.
[INFO] log4j:log4j:jar:1.2.17 already exists in destination.
[INFO] com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-osx:dylib:1.0.392 already exists in destination.
[INFO] com.amazonaws:aws-java-sdk-dynamodb:jar:1.10.20 already exists in destination.
[INFO] org.eclipse.jetty:jetty-util:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.lucene:lucene-sandbox:jar:4.8.1 already exists in destination.
[INFO] xml-apis:xml-apis:jar:1.0.b2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-languages:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-trig:jar:2.7.10 already exists in destination.
[INFO] com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 already exists in destination.
[INFO] org.apache.httpcomponents:httpclient:jar:4.3.6 already exists in destination.
[INFO] antlr:antlr:jar:2.7.7 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.2.0 already exists in destination.
[INFO] org.apache.ant:ant:jar:1.8.3 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-core:jar:2.5.0 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-server-common:jar:2.2.0 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-test:jar:2.5.0 already exists in destination.
[INFO] net.fortytwo.sesametools:repository-sail:jar:1.8 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-api:jar:2.2.0 already exists in destination.
[INFO] cglib:cglib-nodep:jar:2.2.2 already exists in destination.
[INFO] org.apache.hadoop:hadoop-yarn-common:jar:2.2.0 already exists in destination.
[INFO] org.restlet.jse:org.restlet:jar:2.1.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-ntriples:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-server:jar:1.9 already exists in destination.
[INFO] org.apache.zookeeper:zookeeper:jar:3.4.5 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.2.0 already exists in destination.
[INFO] com.thoughtworks.paranamer:paranamer:jar:2.3 already exists in destination.
[INFO] com.tinkerpop:pipes:jar:2.5.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-api:jar:2.7.10 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-test:jar:0.5.4 already exists in destination.
[INFO] org.apache.lucene:lucene-grouping:jar:4.8.1 already exists in destination.
[INFO] asm:asm-analysis:jar:3.2 already exists in destination.
[INFO] commons-configuration:commons-configuration:jar:1.6 already exists in destination.
[INFO] commons-digester:commons-digester:jar:1.8 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-binary:jar:2.7.10 already exists in destination.
[INFO] org.apache.lucene:lucene-memory:jar:4.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryalgebra-evaluation:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-trix:jar:2.7.10 already exists in destination.
[INFO] commons-collections:commons-collections:jar:3.2.1 already exists in destination.
[INFO] commons-codec:commons-codec:jar:1.7 already exists in destination.
[INFO] commons-cli:commons-cli:jar:1.2 already exists in destination.
[INFO] org.objenesis:objenesis:jar:2.1 already exists in destination.
[INFO] com.google.guava:guava:jar:15.0 already exists in destination.
[INFO] org.elasticsearch:elasticsearch-hadoop-mr:jar:2.0.0 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-es:jar:0.5.4 already exists in destination.
[INFO] com.carrotsearch:junit-benchmarks:jar:0.7.0 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-linux-i386:so:1.0.392 already exists in destination.
[INFO] concurrent:concurrent:jar:1.3.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-inferencer:jar:2.7.10 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-annotations:jar:2.2.3 already exists in destination.
[INFO] org.codehaus.jettison:jettison:jar:1.3.3 already exists in destination.
[INFO] org.glassfish:javax.servlet:jar:3.1 already exists in destination.
[INFO] com.spatial4j:spatial4j:jar:0.4.1 already exists in destination.
[INFO] commons-httpclient:commons-httpclient:jar:3.1 already exists in destination.
[INFO] com.almworks.sqlite4java:libsqlite4java-linux-amd64:so:1.0.392 already exists in destination.
[INFO] org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.2.0 already exists in destination.
[INFO] org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8 already exists in destination.
[INFO] aopalliance:aopalliance:jar:1.0 already exists in destination.
[INFO] org.apache.logging.log4j:log4j-core:jar:2.1 already exists in destination.
[INFO] info.ganglia.gmetric4j:gmetric4j:jar:1.0.3 already exists in destination.
[INFO] org.mockito:mockito-core:jar:1.10.19 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java-win32-x86:dll:1.0.392 already exists in destination.
[INFO] org.apache.lucene:lucene-codecs:jar:4.8.1 already exists in destination.
[INFO] com.tinkerpop.blueprints:blueprints-test:jar:2.5.0 already exists in destination.
[INFO] org.eclipse.jetty:jetty-client:jar:8.1.12.v20130726 already exists in destination.
[INFO] com.codahale.metrics:metrics-graphite:jar:3.0.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-serql:jar:2.7.10 already exists in destination.
[INFO] org.slf4j:slf4j-log4j12:jar:1.7.5 already exists in destination.
[INFO] org.apache.lucene:lucene-queries:jar:4.8.1 already exists in destination.
[INFO] org.apache.lucene:lucene-misc:jar:4.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-rdfjson:jar:2.7.10 already exists in destination.
[INFO] org.apache.lucene:lucene-queryparser:jar:4.8.1 already exists in destination.
[INFO] com.google.inject:guice:jar:3.0 already exists in destination.
[INFO] org.elasticsearch:elasticsearch:jar:1.2.1 already exists in destination.
[INFO] com.esotericsoftware.kryo:kryo:jar:2.22 already exists in destination.
[INFO] org.apache.hadoop:hadoop-auth:jar:2.2.0 already exists in destination.
[INFO] org.glassfish:javax.json:jar:1.0 already exists in destination.
[INFO] com.sun.jersey.contribs:jersey-guice:jar:1.9 already exists in destination.
[INFO] org.openrdf.sesame:sesame-query:jar:2.7.10 already exists in destination.
[INFO] com.sun.jersey:jersey-client:jar:1.9 already exists in destination.
[INFO] org.apache.lucene:lucene-core:jar:4.8.1 already exists in destination.
[INFO] org.apache.lucene:lucene-suggest:jar:4.8.1 already exists in destination.
[INFO] com.google.code.findbugs:jsr305:jar:1.3.9 already exists in destination.
[INFO] asm:asm-util:jar:3.2 already exists in destination.
[INFO] jline:jline:jar:0.9.94 already exists in destination.
[INFO] org.apache.ivy:ivy:jar:2.3.0 already exists in destination.
[INFO] javax.xml.bind:jaxb-api:jar:2.2.2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryparser-sparql:jar:2.7.10 already exists in destination.
[INFO] net.fortytwo:linked-data-sail:jar:1.1 already exists in destination.
[INFO] asm:asm-commons:jar:3.2 already exists in destination.
[INFO] javax.activation:activation:jar:1.1 already exists in destination.
[INFO] commons-io:commons-io:jar:2.1 already exists in destination.
[INFO] org.apache.lucene:lucene-analyzers-common:jar:4.8.1 already exists in destination.
[INFO] com.codahale.metrics:metrics-core:jar:3.0.1 already exists in destination.
[INFO] com.amazonaws:DynamoDBLocal:jar:1.10.5.1 already exists in destination.
[INFO] org.apache.logging.log4j:log4j-api:jar:2.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-api:jar:2.7.10 already exists in destination.
[INFO] javax.inject:javax.inject:jar:1 already exists in destination.
[INFO] com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.2.3 already exists in destination.
[INFO] org.glassfish.grizzly:grizzly-http-servlet:jar:2.1.2 already exists in destination.
[INFO] commons-beanutils:commons-beanutils-core:jar:1.8.0 already exists in destination.
[INFO] com.github.stephenc.high-scale-lib:high-scale-lib:jar:1.1.4 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-hadoop:jar:0.5.4 already exists in destination.
[INFO] org.abego.treelayout:org.abego.treelayout.core:jar:1.0.1 already exists in destination.
[INFO] org.mockito:mockito-all:jar:1.8.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-model:jar:2.7.10 already exists in destination.
[INFO] com.codahale.metrics:metrics-ganglia:jar:3.0.1 already exists in destination.
[INFO] org.codehaus.jackson:jackson-core-asl:jar:1.8.8 already exists in destination.
[INFO] stax:stax-api:jar:1.0.1 already exists in destination.
[INFO] asm:asm:jar:3.2 already exists in destination.
[INFO] org.eclipse.jetty:jetty-http:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.codehaus.jackson:jackson-xc:jar:1.8.3 already exists in destination.
[INFO] commons-lang:commons-lang:jar:2.4 already exists in destination.
[INFO] org.eclipse.jetty:jetty-io:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.commons:commons-compress:jar:1.4.1 already exists in destination.
[INFO] org.openrdf.sesame:sesame-http-protocol:jar:2.7.10 already exists in destination.
[INFO] org.semarglproject:semargl-rdf:jar:0.4 already exists in destination.
[INFO] com.tinkerpop:frames:jar:2.5.0 already exists in destination.
[INFO] asm:asm-tree:jar:3.2 already exists in destination.
[INFO] org.mortbay.jetty:jetty-util:jar:6.1.26 already exists in destination.
[INFO] org.reflections:reflections:jar:0.9.9-RC1 already exists in destination.
[INFO] org.hamcrest:hamcrest-core:jar:1.3 already exists in destination.
[INFO] org.javassist:javassist:jar:3.18.0-GA already exists in destination.
[INFO] org.apache.hadoop:hadoop-hdfs:jar:2.2.0 already exists in destination.
[INFO] org.slf4j:slf4j-api:jar:1.7.5 already exists in destination.
[INFO] org.openrdf.sesame:sesame-util:jar:2.7.10 already exists in destination.
[INFO] xmlenc:xmlenc:jar:0.52 already exists in destination.
[INFO] org.fusesource.jansi:jansi:jar:1.5 already exists in destination.
[INFO] org.eclipse.jetty:jetty-server:jar:8.1.12.v20130726 already exists in destination.
[INFO] org.apache.commons:commons-lang3:jar:3.3.2 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryresultio-sparqlxml:jar:2.7.10 already exists in destination.
[INFO] com.github.jsonld-java:jsonld-java:jar:0.3 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-core:jar:2.2.3 already exists in destination.
[INFO] org.apache.hadoop:hadoop-annotations:jar:2.2.0 already exists in destination.
[INFO] org.apache.httpcomponents:httpcore:jar:4.3.3 already exists in destination.
[INFO] org.semarglproject:semargl-sesame:jar:0.4 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryalgebra-model:jar:2.7.10 already exists in destination.
[INFO] commons-beanutils:commons-beanutils:jar:1.7.0 already exists in destination.
[INFO] junit:junit:jar:4.11 already exists in destination.
[INFO] com.thinkaurelius.titan:titan-hbase:jar:0.5.4 already exists in destination.
[INFO] com.fasterxml.jackson.datatype:jackson-datatype-json-org:jar:2.2.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-turtle:jar:2.7.10 already exists in destination.
[INFO] com.tinkerpop.gremlin:gremlin-groovy:jar:2.5.0 already exists in destination.
[INFO] colt:colt:jar:1.2.0 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-memory:jar:2.7.10 already exists in destination.
[INFO] com.fasterxml.jackson.core:jackson-databind:jar:2.2.3 already exists in destination.
[INFO] com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.2.3 already exists in destination.
[INFO] com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.2.3 already exists in destination.
[INFO] org.openrdf.sesame:sesame-rio-rdfxml:jar:2.7.10 already exists in destination.
[INFO] org.openrdf.sesame:sesame-sail-nativerdf:jar:2.7.10 already exists in destination.
[INFO] net.fortytwo.sesametools:common:jar:1.8 already exists in destination.
[INFO] com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.0.8 already exists in destination.
[INFO] com.almworks.sqlite4java:sqlite4java:jar:1.0.392 already exists in destination.
[INFO] org.openrdf.sesame:sesame-queryresultio-api:jar:2.7.10 already exists in destination.
[INFO] org.acplt:oncrpc:jar:1.0.7 already exists in destination.
[INFO] org.json:json:jar:20090211 already exists in destination.
[INFO] org.apache.avro:avro:jar:1.7.4 already exists in destination.
[INFO] com.sun.jersey:jersey-grizzly2:jar:1.9 already exists in destination.
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ dynamodb-titan054-storage-backend ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ dynamodb-titan054-storage-backend ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- exec-maven-plugin:1.2:exec (default) @ dynamodb-titan054-storage-backend ---
Error: Could not find or load main class com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.269 s
[INFO] Finished at: 2015-09-22T19:49:33-05:00
[INFO] Final Memory: 24M/237M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec (default) on project dynamodb-titan054-storage-backend: Command execution failed. Process exited with an error: 1(Exit value: 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

C:\Users\<<<username>>>\code\dynamodb-titan-storage-backend>

Graph processing progressively gets slower on Titan + DynamoDB (local) as more vertices/edges are added

I posted this on SO, but this might be a more relevant place to ask.

I am working with Titan 1.0, using the AWS DynamoDB Local implementation as the storage backend on a 16GB machine. My use case involves periodically generating graphs containing vertices and edges on the order of 120K. Every time I generate a new graph in memory, I check the graph stored in the DB and either (i) add vertices/edges that do not exist, or (ii) update properties if they already exist (existence is determined by the label and a 'Value' attribute). Note that the 'Value' property is indexed. Transactions are committed in batches of 500 vertices.

Problem: this process gets slower with each new graph (the 1st graph finished in 45 minutes against an initially empty DB, the 2nd took 2.5 hours, the 3rd 3.5 hours, the 4th 6 hours, the 5th 10 hours, and so on). Within a single graph, processing is fairly quick at the start but progressively slows down: initial batches take 2-4 seconds, while later batches of the same size (500 nodes) take hundreds of seconds, and I sometimes see 1000-2000 seconds for a batch. This is the processing time alone (see the approach below); the commit itself consistently takes 8-10 seconds. I configured the JVM heap size to 10G, and I notice that the running app eventually uses all of it.

Question: Is this behavior expected? It seems to me something is wrong here (in my config or my approach?). Any help or suggestions would be greatly appreciated.

Approach:

Starting from the root node of the in-memory graph, I retrieve all child nodes and maintain a queue
For each child node, I check whether it exists in the DB; if not, I create a new node, then update some properties

Vertex dbVertex = dbgraph.traversal().V()
        .has(currentVertexInMem.label(), "Value",
                (String) currentVertexInMem.value("Value"))
        .tryNext()
        .orElseGet(() -> createVertex(dbgraph, currentVertexInMem));

if (dbVertex != null) {
    // Update properties
    updateVertexProperties(dbgraph, currentVertexInMem, dbVertex);
}

// Add edge if necessary
if (parentDBVertex != null) {
    GraphTraversal<Vertex, Edge> edgeIt = graph.traversal().V(parentDBVertex).outE()
            .has("EdgeProperty1", eProperty1)  // eProperty1 is a String input parameter
            .has("EdgeProperty2", eProperty2); // eProperty2 is a Long input parameter

    boolean doCreateEdge = true;
    Edge e = null;
    while (edgeIt.hasNext()) {
        e = edgeIt.next();
        if (e.inVertex().equals(dbVertex)) {
            doCreateEdge = false;
            break;
        }
    }

    if (doCreateEdge) {
        e = parentDBVertex.addEdge("EdgeLabel", dbVertex,
                "EdgeProperty1", eProperty1, "EdgeProperty2", eProperty2);
    }
    e = null;
    edgeIt = null;
}

...

if ((processedVertexCount.get() % 500 == 0)
        || processedVertexCount.get() == verticesToProcess.get()) {
    graph.tx().commit();
}
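As an aside, the commit condition in the snippet above can be isolated into a plain predicate (a toy sketch; `shouldCommit` is a hypothetical helper, not part of the original code) to confirm that the final, partial batch is also committed:

```java
public class CommitCadence {
    // True when a commit should fire: on every batchSize-th processed vertex,
    // or when the final vertex has been processed (so a trailing partial
    // batch is not left uncommitted).
    static boolean shouldCommit(long processed, long total, int batchSize) {
        return processed % batchSize == 0 || processed == total;
    }

    public static void main(String[] args) {
        long total = 1201;
        int commits = 0;
        for (long i = 1; i <= total; i++) {
            if (shouldCommit(i, total, 500)) commits++;
        }
        System.out.println(commits); // commits at 500, 1000, and 1201 -> prints 3
    }
}
```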

Create function:

public static Vertex createVertex(Graph graph, Vertex clientVertex) {
    Vertex newVertex = null;
    switch (clientVertex.label()) {
    case "Label 1":
        newVertex = graph.addVertex(T.label, clientVertex.label(), "Value",
                clientVertex.value("Value"),
                "Property1-1", clientVertex.value("Property1-1"),
                "Property1-2", clientVertex.value("Property1-2"));
        break;

    case "Label 2":
        newVertex = graph.addVertex(T.label, clientVertex.label(), "Value",
                clientVertex.value("Value"),
                "Property2-1", clientVertex.value("Property2-1"),
                "Property2-2", clientVertex.value("Property2-2"));
        break;

    default:
        newVertex = graph.addVertex(T.label, clientVertex.label(), "Value",
                clientVertex.value("Value"));
        break;
    }
    return newVertex;
}

How to restart the titan server?

Sorry, I feel like a real idiot asking this, but when connected to the server through SSH, what is the command to restart the Titan server (particularly the Gremlin Server) so it picks up any config changes? The Titan/Gremlin documentation is so scattered that I'm having trouble finding this simple thing.

Running aws lambda with TitanDb and DynamoDB as a backend

This is more of a question than an issue: I just wanted to know whether it is a requirement to run a Gremlin Server in order to use DynamoDB as a backend. After playing around with things, it seems you only need to configure your TitanGraph instance to reference an endpoint and the DynamoDB backend. This suggests that I can update and query my graph backend directly from each Lambda invocation by simply instantiating a graph instance.

My thought was that I could create my backend graph structure once and then let my Lambdas populate and query the graph database. Is that a reasonable use case for TitanDB with DynamoDB as a backend? Running a fleet of Gremlin Servers to handle querying and updating the graph database seems like a lot of overhead.
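For reference, the graph can indeed be opened in-process with no Gremlin Server between the client and DynamoDB. A minimal properties sketch (the endpoint value is only an example; the `storage.backend` class name matches the one seen in stack traces elsewhere in this tracker):

```properties
# Hypothetical minimal configuration for opening TitanGraph directly,
# e.g. via TitanFactory.open("dynamodb.properties") inside a Lambda handler.
gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
storage.dynamodb.client.endpoint=https://dynamodb.us-east-1.amazonaws.com
```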

Default configuration is super high IOPS cost

We got dinged because the default configuration provisions very high DynamoDB IOPS, which are charged per hour whether you use them or not. Maybe the default should be the lowest possible IOPS, with a prominent warning/instruction on how to raise it?
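A sketch of how the per-table provisioning could be dialed down in the properties file (option names follow this backend's `storage.dynamodb.stores.*` family; verify the exact keys and the full table list against the README before relying on them):

```properties
# Illustrative values only; repeat for the other backend tables.
storage.dynamodb.stores.edgestore.initial-capacity-read=1
storage.dynamodb.stores.edgestore.initial-capacity-write=1
storage.dynamodb.stores.graphindex.initial-capacity-read=1
storage.dynamodb.stores.graphindex.initial-capacity-write=1
```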

A metric named d_multiple.v100_multiple_executor-queue-size already exists

Hi,

When instantiating several connections at the same time from multiple threads on my Java code, I come across this error

Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
    at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
    at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)
    at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:407)
    at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1320)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:74)
    at com.mohataher.tinkerpop.importer.base.TitanGraphUtils.load(TitanGraphUtils.java:42)
    ... 15 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
    ... 21 more
Caused by: com.thinkaurelius.titan.diskstorage.PermanentBackendException: Bad configuration used: com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration@e1300db
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:92)
    ... 26 more
Caused by: java.lang.IllegalArgumentException: A metric named d_multiple.v100_multiple_executor-queue-size already exists
    at com.codahale.metrics.MetricRegistry.register(MetricRegistry.java:91)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.<init>(DynamoDBDelegate.java:165)
    at com.amazon.titan.diskstorage.dynamodb.Client.<init>(Client.java:145)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:90)
    ... 26 more

When I retry the connection, it starts smoothly with no issue. What causes this error?
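One hedged workaround sketch, assuming the collision comes from two graph instances registering metrics under the same prefix: give each concurrently opened graph its own prefix (the option name below is from memory; verify it against the backend's configuration reference):

```properties
# In each graph's properties file, use a distinct value.
storage.dynamodb.metrics-prefix=graph1
```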

Does this backend support parallelized bulk loading?

Hi all,

Congrats on the launch & awesome work on this!

I've successfully built my application (which bulk-loads a graph) against this backend and run it against the local, in-memory DynamoDB. Now I'd like to switch to DynamoDB in the cloud and perform the bulk load from a single machine running Titan (in fact, an EC2 instance).

Here's what the code looks like:

      // Each node file represents a distinct range of node IDs, and their properties
      for (String nodeFile : nodeFiles) {
          threads.add(new Thread(
              new NodeSplitLoader(g, config, nodeFile)));
      }
      for (Thread thread : threads) {
          thread.start();
      }

      // ....

      // Each thread creates its own BatchGraph over a single TitanGraph, and
      // handles one node file; inside each thread's run():
      BatchGraph bg = new BatchGraph(g, VertexIDType.NUMBER, 500000);
      for (each line) {          // pseudocode: iterate over the file's lines
          // Do parsing to get nodeId and properties
          bg.addVertex(TitanId.toVertexId(nodeId), properties);
      }
      bg.commit();

The particular code is on Github here. Here's an example of what the node files look like:

// node file 1
0 prop1 prop2 ...
1 prop1 prop2 ...
...
99 prop1 prop2 ...

// node file 2
100 prop1 prop2 ...
...

Does the DynamoDB backend support such parallelized bulk loading? Also, can I apply the same parallelization to edges, after all the nodes are loaded? The Titan version I'm using is 0.5.4.
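Independent of the storage backend, the fan-out shape of the loader above can be sketched with plain threads (all names here are hypothetical stand-ins; the real threads would each wrap the shared TitanGraph in their own BatchGraph). One detail worth making explicit is joining all vertex-loader threads before starting the edge phase:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FanOutSketch {
    // Toy stand-in for the per-file loaders; each real thread would load
    // one disjoint range of node IDs through its own BatchGraph.
    public static List<String> loadAll(List<String> nodeFiles) {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        for (String nodeFile : nodeFiles) {
            threads.add(new Thread(() -> log.add("loaded " + nodeFile)));
        }
        for (Thread t : threads) t.start();
        // Join before moving on: the edge phase must not start until every
        // vertex range has been committed.
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(loadAll(List.of("nodes-0", "nodes-1")).size()); // prints 2
    }
}
```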

Unable to execute scripts with bound parameters: Gremlin Server hangs

Greetings. Using Titan 1.0.0 with DynamoDB Local, I'm unable to execute scripts with bound parameters sent to Gremlin Server via WebSocket; scripts with no parameters work fine. I'm using the default gremlin-server-local.yaml file (the default setup from http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.TitanDB.DownloadingAndRunning.html#Tools.TitanDB.DownloadingAndRunning.title), and everything works fine from the Gremlin console. Java version is "1.8.0_45".

The following query works: Gremlin Server returns intermediate and termination messages, and the vertex gets added to the graph:

{
  "requestId": "7091d5a0-ae15-11e5-b942-d57806399243",
  "processor": "",
  "op": "eval",
  "args": {
    "gremlin": "g.addV(label, 'user', 'firstname', 'Alice')",
    "accept": "application/json",
    "language": "gremlin-groovy"
  }
}

However, the same query with bound parameters fails to execute. Gremlin server doesn't stream any message back to the client, and no vertex is added to the graph:

{
  "requestId": "2160a1d0-ae17-11e5-a9b1-fd3c55aea4b5",
  "processor": "",
  "op": "eval",
  "args": {
    "gremlin": {
      "script": "g.addV(label, 'user', 'firstname', p1)",
      "params": {
        "p1": "Alice"
      }
    },
    "accept": "application/json",
    "language": "gremlin-groovy"
  }
}

Gremlin Server throws, here's the complete stacktrace:

340896 [gremlin-server-worker-1] WARN  io.netty.channel.DefaultChannelPipeline  - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to java.lang.String
  at org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor.evalOpInternal(AbstractEvalOpProcessor.java:145)
  at org.apache.tinkerpop.gremlin.server.op.standard.StandardOpProcessor.evalOp(StandardOpProcessor.java:69)
  at org.apache.tinkerpop.gremlin.server.op.standard.StandardOpProcessor$$Lambda$112/798544109.accept(Unknown Source)
  at org.apache.tinkerpop.gremlin.server.handler.OpExecutorHandler.channelRead0(OpExecutorHandler.java:66)
  at org.apache.tinkerpop.gremlin.server.handler.OpExecutorHandler.channelRead0(OpExecutorHandler.java:41)
  at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler$1.channelRead(WebSocketServerProtocolHandler.java:146)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:182)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
  at java.lang.Thread.run(Thread.java:745)

I'm using https://github.com/jbmusso/gremlin-javascript and can dig into the internals if needed. Client tests work fine against TinkerPop 3.0.1, including scripts executed with bound parameters.

ElasticSearch strange behavior

Hello,

I'm using Titan 1.0.0 on AWS with DynamoDB as the datastore.

I've defined some indexes for text and string search as follows:

description=mgmt.makePropertyKey('description').dataType(String.class).make();
mgmt.buildIndex('showByDescription',Vertex.class).addKey(description,Mapping.TEXTSTRING.asParameter()).indexOnly(show).buildMixedIndex("search");

In gremlin, when I launch this kind of query:

g.V().hasLabel('show').has('description', textContains('SOME DESCRIPTION'));

I can see that I get some results.

On the other hand, I cannot perform an exact phrase match. The following queries retrieve nothing, even though I know the data is present.

g.V().hasLabel('show').has('description', 'SOME DESCRIPTION');
g.V().hasLabel('show').has('description', eq('SOME DESCRIPTION'));

I checked directly in Elasticsearch, and I can see that in the Titan index I have 2 types for the 'description' property: 'description' and 'description__STRING'.

I suppose that 'description__STRING' is used for exact phrase match.

Why do the last 2 Gremlin queries retrieve nothing? Are the queries wrong? Is there another syntax for exact phrase match?

Regards,

Toufic Zayed

GraphFactory could not instantiate this Graph

When my instance boots, Titan logs an error that it can't instantiate the graph, which causes graph and g to be unavailable to my client (gremlin-javascript). If I stop/start Titan, the service starts normally (no errors) and gremlin-javascript can once again use graph and g.

I created my instance via the CloudFormation template, and this is the only instance I'm using. How can I resolve this startup issue? Here is the error log:

0    [main] INFO  org.apache.tinkerpop.gremlin.server.GremlinServer  - 
         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----

266  [main] INFO  org.apache.tinkerpop.gremlin.server.GremlinServer  - Configuring Gremlin Server from /usr/local/packages/dynamodb-titan100-storage-backend-1.0.0-hadoop1/conf/gremlin-server/gremlin-server.yaml
2220 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_system_properties
3075 [main] INFO  com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader  - Loaded and initialized config classes: 12 OK out of 12 attempts in PT0.098S
4151 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Closing table:v100_system_properties
4155 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Closing table:v100_system_properties
4158 [main] INFO  com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration  - Generated unique-instance-id=ac1f316d2400-ip-172-31-49-1091
4197 [main] INFO  com.thinkaurelius.titan.diskstorage.Backend  - Configuring index [search]
4400 [main] INFO  org.elasticsearch.node  - [Atom Bob] version[1.5.1], pid[2400], build[5e38401/2015-04-09T13:41:35Z]
4400 [main] INFO  org.elasticsearch.node  - [Atom Bob] initializing ...
4416 [main] INFO  org.elasticsearch.plugins  - [Atom Bob] loaded [], sites []
7331 [main] INFO  org.elasticsearch.node  - [Atom Bob] initialized
7331 [main] INFO  org.elasticsearch.node  - [Atom Bob] starting ...
7336 [main] INFO  org.elasticsearch.transport  - [Atom Bob] bound_address {local[1]}, publish_address {local[1]}
7346 [main] INFO  org.elasticsearch.discovery  - [Atom Bob] elasticsearch/9K4-3iseRpSYqVExWE4J9A
7348 [elasticsearch[Atom Bob][clusterService#updateTask][T#1]] INFO  org.elasticsearch.cluster.service  - [Atom Bob] master {new [Atom Bob][9K4-3iseRpSYqVExWE4J9A][ip-172-31-49-109][local[1]]{local=true}}, removed {[Atom Bob][E0dnS2NRS7C3Jz-vb_dirA][ip-172-31-49-109][local[1]]{local=true},}, reason: local-disco-initial_connect(master)
7489 [main] INFO  org.elasticsearch.http  - [Atom Bob] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.31.49.109:9200]}
7490 [main] INFO  org.elasticsearch.node  - [Atom Bob] started
8045 [elasticsearch[Atom Bob][clusterService#updateTask][T#1]] INFO  org.elasticsearch.gateway  - [Atom Bob] recovered [1] indices into cluster_state
38049 [main] INFO  com.thinkaurelius.titan.diskstorage.Backend  - Initiated backend operations thread pool of size 2
38049 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_titan_ids
38156 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_edgestore
38169 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_graphindex
38187 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_txlog
38220 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_systemlog
38246 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore table:v100_system_properties
38307 [main] WARN  org.apache.tinkerpop.gremlin.server.GremlinServer  - Graph [graph] configured at [conf/gremlin-server/dynamodb.properties] could not be instantiated and will not be available in Gremlin Server.  GraphFactory message: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
    at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:82)
    at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:70)
    at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
    at org.apache.tinkerpop.gremlin.server.GraphManager.lambda$new$27(GraphManager.java:50)
    at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:663)
    at org.apache.tinkerpop.gremlin.server.GraphManager.<init>(GraphManager.java:48)
    at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:94)
    at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:88)
    at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:290)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:78)
    ... 8 more
Caused by: com.thinkaurelius.titan.core.TitanException: A Titan graph with the same instance id [ac1f316d2400-ip-172-31-49-1091] is already open. Might required forced shutdown.
    at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:146)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:74)
    ... 13 more
38312 [main] INFO  org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor  - Initialized Gremlin thread pool.  Threads in pool named with pattern gremlin-*
38908 [main] INFO  org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines  - Loaded nashorn ScriptEngine
39606 [main] INFO  org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines  - Loaded gremlin-groovy ScriptEngine
40778 [main] WARN  org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor  - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
40778 [main] INFO  org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor  - Initialized GremlinExecutor and configured ScriptEngines.
40878 [main] WARN  org.apache.tinkerpop.gremlin.server.AbstractChannelizer  - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
40880 [main] INFO  org.apache.tinkerpop.gremlin.server.AbstractChannelizer  - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
40923 [main] WARN  org.apache.tinkerpop.gremlin.server.AbstractChannelizer  - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
40924 [main] WARN  org.apache.tinkerpop.gremlin.server.AbstractChannelizer  - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
41026 [gremlin-server-boss-1] INFO  org.apache.tinkerpop.gremlin.server.GremlinServer  - Gremlin Server configured with worker thread pool of 1, gremlin pool of 8 and boss thread pool of 1.
41026 [gremlin-server-boss-1] INFO  org.apache.tinkerpop.gremlin.server.GremlinServer  - Channel started at port 8182.

Slow bulk load speed

I'm experimenting with this backend to load a large graph, O(10 million) nodes, O(1 billion) edges.

Here's the configuration I am using: gist. Settings that are not listed have their default values (apart from things like storage.batch-loading, which is set to true). As mentioned in the other issue, here's the code I'm using, if it helps!

The bulk load speed is very slow: a couple of hours for 100k vertices (each vertex has tens of properties, totaling less than 1 KB). Monitoring shows the instance itself is not the bottleneck; DynamoDB's write throughput appears to be the limiting factor.

Is there something obvious I'm missing in the config? Could anyone give some tuning advice to speed up the loading?
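For context, the tuning knobs this backend exposes for write-heavy loads are the per-store provisioned capacity and client-side rate limits. A minimal sketch of raising them (values are illustrative, not recommendations; the edgestore lines would be repeated for the other backing stores):

```properties
# Illustrative bulk-load tuning for the DynamoDB backend.
# Repeat the edgestore entries for the other stores
# (graphindex, systemlog, txlog, system_properties, ...).
storage.batch-loading=true
storage.dynamodb.stores.edgestore.capacity-read=750
storage.dynamodb.stores.edgestore.capacity-write=750
storage.dynamodb.stores.edgestore.read-rate=750
storage.dynamodb.stores.edgestore.write-rate=750
```

Note that the `*-rate` properties throttle the client even when the table has spare provisioned capacity, so raising capacity alone is not enough.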

Running my test suite takes 5 times longer with DynamoDBLocal vs Berkeley. Expected?

Hi,

I have a test suite with some 1,100 test cases that do various basic writes and reads on very small graphs (fewer than 50 vertices). Each test builds its own graph and then runs some queries.

The entire suite takes 2min 25sec to run with Berkeley as backend, while it takes 13min 16sec to run with DynamoDBLocal, using the properties file from your repo (https://github.com/awslabs/dynamodb-titan-storage-backend/blob/1.0.0/src/test/resources/dynamodb-local.properties). Is this performance expected?

I have tried tweaking a few properties to improve performance. In particular, I raised all storage.dynamodb.stores.*.capacity-* values to 1000 and all storage.dynamodb.stores.*.*-rate values to 1000. With those changes the suite ran in 6 min 45 sec (still about 3 times slower than Berkeley).

Do you have any tips on how I can improve performance further, or is this really as fast as it gets?

Thanks!

What is max-concurrent-operations?

The docs contain no description or explanation of storage.dynamodb.client.executor.max-concurrent-operations or how to use it.

It would be good to add it to the configuration table in the README.

Thank you.

Error: Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager

I am trying to follow the tutorial here. On step 5, when I run g = TitanFactory.open(conf) in the Gremlin console, it fails with Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.
This is after a successful mvn install and after successfully starting DynamoDB Local with the mvn test -Pstart-dynamodb-local command.

Here is the stack trace.

gremlin> conf = new BaseConfiguration()
==>org.apache.commons.configuration.BaseConfiguration@1922e6d
gremlin> conf.setProperty("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager")
==>null
gremlin> conf.setProperty("storage.dynamodb.client.endpoint", "http://localhost:4567")
==>null
gremlin> conf.setProperty("index.search.backend", "elasticsearch")
==>null
gremlin> conf.setProperty("index.search.directory", "/tmp/searchindex")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.client-only", "false")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.local-mode", "true")
==>null
gremlin> conf.setProperty("index.search.elasticsearch.interface", "NODE")
==>null
gremlin> g = TitanFactory.open(conf)
Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
Display stack trace? [yN] y
java.lang.IllegalArgumentException: Could not instantiate implementation: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
    at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
    at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
    at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
    at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
    at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
    at com.thinkaurelius.titan.core.TitanFactory$open.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
    at groovysh_evaluate.run(groovysh_evaluate:84)
    at groovysh_evaluate$run.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
    at groovysh_evaluate$run.call(Unknown Source)
    at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:67)
    at org.codehaus.groovy.tools.shell.Interpreter$evaluate.call(Unknown Source)
    at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:152)
    at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:114)
    at org.codehaus.groovy.tools.shell.Shell$leftShift$0.call(Unknown Source)
    at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:88)
    at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
    at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
    at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1079)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:128)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:148)
    at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:100)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:272)
    at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:52)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:137)
    at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:57)
    at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
    at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
    at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1079)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:128)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:148)
    at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:66)
    at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.<init>(Console.java:78)
    at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.<init>(Console.java:91)
    at com.thinkaurelius.titan.hadoop.tinkerpop.gremlin.Console.main(Console.java:95)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
    ... 52 more
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.enable([Lcom/fasterxml/jackson/core/JsonParser$Feature;)Lcom/fasterxml/jackson/databind/ObjectMapper;
    at com.amazonaws.internal.config.InternalConfig.<clinit>(InternalConfig.java:43)
    at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:304)
    at com.amazonaws.util.VersionInfoUtils.userAgent(VersionInfoUtils.java:139)
    at com.amazonaws.util.VersionInfoUtils.initializeUserAgent(VersionInfoUtils.java:134)
    at com.amazonaws.util.VersionInfoUtils.getUserAgent(VersionInfoUtils.java:95)
    at com.amazonaws.ClientConfiguration.<clinit>(ClientConfiguration.java:53)
    at com.amazon.titan.diskstorage.dynamodb.Constants.<clinit>(Constants.java:114)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.getPort(DynamoDBStoreManager.java:66)
    at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.<init>(DynamoDBStoreManager.java:83)
    ... 57 more

I hope this helps. Thank you.
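A Caused by: java.lang.NoSuchMethodError on ObjectMapper.enable, as in the trace above, usually means an older jackson-databind jar on the classpath is shadowing the version the AWS SDK expects. One quick way to spot duplicates (LIB_DIR is an assumption; point it at your server's lib directory):

```shell
# List jackson-databind jars on the classpath; more than one version
# (or a single old one) suggests a dependency conflict.
# LIB_DIR is illustrative -- set it to your server's lib directory.
LIB_DIR="${LIB_DIR:-.}"
find "$LIB_DIR" -name 'jackson-databind-*.jar' 2>/dev/null | sort -u
```

If two versions show up, removing or shading the older jar typically resolves the NoSuchMethodError.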

Rexster fails to run against DynamoDB Local

Rexster fails to run against DynamoDB Local. (I have the dynamodb-local server running.)

$ server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/bin/rexster.sh --start -c ~dynamodb-titan-storage-backend/src/test/resources/rexster-local.xml

0    [main] INFO  com.tinkerpop.rexster.Application  - .:Welcome to Rexster:.
381  [main] INFO  com.tinkerpop.rexster.server.RexsterProperties  - Using [~dynamodb-titan-storage-backend/src/test/resources/rexster-local.xml] as configuration source.
410  [main] INFO  com.tinkerpop.rexster.Application  - Rexster is watching [~dynamodb-titan-storage-backend/src/test/resources/rexster-local.xml] for change.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:~dynamodb-titan-storage-backend/server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:~dynamodb-titan-storage-backend/server/dynamodb-titan054-storage-backend-1.0.0-hadoop2/ext/dynamodb-titan054-storage-backend/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2455 [main] INFO  com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore  - Entering ensureStore name:system_properties
Exception in thread "main" java.lang.NoSuchFieldError: INSTANCE
  at com.amazonaws.http.conn.SdkConnectionKeepAliveStrategy.getKeepAliveDuration(SdkConnectionKeepAliveStrategy.java:48)
  at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:535)
  at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
  at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1776)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1075)
  at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.describeTable(DynamoDBDelegate.java:635)
  at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.describeTable(DynamoDBDelegate.java:627)
  at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTableAndWaitForActive(DynamoDBDelegate.java:829)
  at com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore.ensureStore(AbstractDynamoDBStore.java:62)
  at com.amazon.titan.diskstorage.dynamodb.MetricStore.ensureStore(MetricStore.java:47)
  at com.amazon.titan.diskstorage.dynamodb.TableNameDynamoDBStoreFactory.create(TableNameDynamoDBStoreFactory.java:52)
  at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:196)
  at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:53)
  at com.thinkaurelius.titan.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:387)
  at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1277)
  at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
  at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
  at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:33)
  at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:124)
  at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
  at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
  at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
  at com.tinkerpop.rexster.Application.<init>(Application.java:97)
  at com.tinkerpop.rexster.Application.main(Application.java:189)
