
incubator-hugegraph-tools's Introduction

HugeGraph-Tools

HugeGraph-Tools is a customizable command line utility for deploying, managing and backing up/restoring graphs from HugeGraph database.

Main Functions

  • Deploy and clear HugeGraph-Server and HugeGraph-Studio automatically.
  • Manage graphs and run Gremlin queries against multiple HugeGraph databases easily.
  • Back up and restore graph schema and graph data from/to HugeGraph databases conveniently, with support for periodic backups.
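The periodic backup mentioned above is usually driven by an external scheduler such as cron. A minimal sketch (the install path, URL, and backup directory below are illustrative assumptions, not defaults shipped with the tool):

```
# Illustrative crontab entry: back up all schema and data of graph 'hugegraph'
# every day at 01:00, appending tool output to a log file.
0 1 * * * /opt/hugegraph-tools/bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph backup -t all --directory /data/hugegraph-backup >> /var/log/hugegraph-backup.log 2>&1
```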

Learn More

The project homepage contains more information about HugeGraph-Tools.

License

HugeGraph-Tools is licensed under Apache 2.0 License.

incubator-hugegraph-tools's People

Contributors

coderzc, dependabot[bot], imbajin, javeme, linary, shzcore, xuliguov5, zhoney


incubator-hugegraph-tools's Issues

Errors when using hugegraph-tools on a different machine

HugeGraphServer and hugegraph-tools are deployed on different machines. Calling the API from the machine where tools resides, for example:

./hugegraph --graph my_graph_name --url http://my_ip:8080 task-list --status NEW
Exception in thread "main" java.lang.RuntimeException: Construct manager failed for class 'class com.baidu.hugegraph.manager.TasksManager'
	at com.baidu.hugegraph.cmd.HugeGraphCommand.manager(HugeGraphCommand.java:272)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:214)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:310)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.manager(HugeGraphCommand.java:270)
	... 2 more
Caused by: java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
	at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:119)
	at org.glassfish.jersey.client.JerseyWebTarget.<init>(JerseyWebTarget.java:71)
	at org.glassfish.jersey.client.JerseyClient.target(JerseyClient.java:290)
	at org.glassfish.jersey.client.JerseyClient.target(JerseyClient.java:76)
	at com.baidu.hugegraph.rest.RestClient.<init>(RestClient.java:85)
	at com.baidu.hugegraph.rest.RestClient.<init>(RestClient.java:62)
	at com.baidu.hugegraph.client.RestClient.<init>(RestClient.java:46)
	at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:65)
	at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:58)
	at com.baidu.hugegraph.base.ToolClient.<init>(ToolClient.java:40)
	at com.baidu.hugegraph.base.ToolManager.<init>(ToolManager.java:36)
	at com.baidu.hugegraph.manager.TasksManager.<init>(TasksManager.java:48)
	... 7 more

However, curl on this machine can reach the server's API without problems. Running the same command on the same machine without the --url parameter (i.e. using the default http://localhost:8080) also works fine.
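The AbstractMethodError on javax.ws.rs.core.UriBuilder.uri typically means an old JSR-311 (JAX-RS 1.x) jar is shadowing the JAX-RS 2.0 API on that machine's classpath. A quick way to look for such candidates is to list suspicious jars in the tools lib directory; the sketch below simulates the directory with temporary files (the jar names are illustrative assumptions):

```shell
# Simulate a hugegraph-tools/lib directory containing a conflicting JSR-311 jar
# next to the JAX-RS 2.0 API jar; in practice, point lib_dir at the real lib/.
lib_dir=$(mktemp -d)
touch "$lib_dir/jsr311-api-1.1.1.jar" "$lib_dir/javax.ws.rs-api-2.0.1.jar"

# List candidate JAX-RS API jars; more than one usually signals a conflict.
ls "$lib_dir" | grep -E 'jsr311|ws\.rs' | sort
```

If two different JAX-RS API jars show up, removing or excluding the 1.x one is a common fix; treat this as a diagnostic sketch rather than the confirmed root cause.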

graph-clear fails to clear data (backend=hbase)

With backend=hbase, running graph-clear fails and the data cannot be cleared.
The specific error is shown below:
[root@cluster-node-2 hugegraph-tools-1.4.0]# bin/hugegraph graph-clear -c "I'm sure to delete all data"
Exception in thread "main" class com.baidu.hugegraph.backend.BackendException: Failed to truncate table 's_li' for 's'
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:100)
at com.baidu.hugegraph.rest.RestClient.delete(RestClient.java:232)
at com.baidu.hugegraph.api.graphs.GraphsAPI.clear(GraphsAPI.java:82)
at com.baidu.hugegraph.driver.GraphsManager.clear(GraphsManager.java:54)
at com.baidu.hugegraph.manager.GraphsManager.clear(GraphsManager.java:44)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:222)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:379)
Caused by: java.io.InterruptedIOException: Interrupt while waiting on Operation: DISABLE, Table Name: tinkerpop:s_li

Notes:
1. The url, graph, and timeout parameters are already configured, as follows:
export HUGEGRAPH_URL=http://10.19.151.142:8080
export HUGEGRAPH_GRAPH=tinkerpop
#export HUGEGRAPH_USERNAME=
#export HUGEGRAPH_PASSWORD=
export HUGEGRAPH_TIMEOUT=360000

2. With backend=mysql, the same clear command works normally.

Error "Could not create the Java Virtual Machine" when running init-store (default conf, RocksDB)

user@test:~/hugegraph/hugegraph-0.8.0$ ./bin/init-store.sh
Initing HugeGraph Store...
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

user@test:~/hugegraph/hugegraph-0.8.0$ java -version
openjdk version "9-internal"
OpenJDK Runtime Environment (build 9-internal+0-2016-04-14-195246.buildd.src)
OpenJDK 64-Bit Server VM (build 9-internal+0-2016-04-14-195246.buildd.src, mixed mode)
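The openjdk version "9-internal" above is a likely culprit: HugeGraph 0.8's scripts pass JVM options that Java 9 rejects, which produces exactly this "Could not create the Java Virtual Machine" error. A hedged sketch of a pre-flight check (the version string here is a sample; in practice it would come from `java -version 2>&1`):

```shell
# Extract the major Java version from a `java -version` style line.
# Handles both old-style "1.8.0_292" and new-style "9-internal" strings.
ver_line='openjdk version "9-internal"'   # sample; really: java -version 2>&1 | head -n 1
major=$(echo "$ver_line" | sed -E 's/.*"(1\.)?([0-9]+).*/\2/')

if [ "$major" -ne 8 ]; then
    echo "HugeGraph 0.8 expects Java 8, found major version $major"
fi
```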

[Bug] Why does data backed up with hugegraph-tools contain fewer records than a direct query?

Versions: hugegraph-tools-1.5.0, hugegraph-0.11.2

Query command:

sh bin/hugegraph  --url http://127.0.0.1:8180 --graph hugegraph3 gremlin-execute --script "g.V().hasLabel('post').count()"
Run gremlin script
1237554

Backup command:

sh bin/hugegraph  --url http://127.0.0.1:8180 --graph hugegraph3 backup --format text  --label post --compress false -d backup10 -t vertex --properties imagefile,creationdate,content,locationip
Graph 'hugegraph3' start backup!
Vertices backup started
Vertices has been backup: 1237542
Vertices backup finished: 1237542
===============================================
backup summary: {
        property key number: 0,
        vertex label number: 0,
        edge label number: 0,
        index label number: 0,
        vertex number: 1237542,
        edge number: 0,
}
cost time(s): 58

Asynchronous count() via gremlin-schedule leaves Studio unable to connect to the server, and batch-write API calls fail

Versions:
Server 0.10.4
Studio 0.10.0
tools 1.4.0

Description:
gremlin-schedule executes the following statement asynchronously:
./hugegraph gremlin-schedule -s "g.E().hasLabel('relyibai').count()"

The data volume, according to the returned task result, is as follows:
Task info: {task_name=g.E().hasLabel('relyibai').count(), task_progress=0, task_create=1601349393637, task_status=success, task_update=1601350548763, task_result=[1000000], task_retries=0, id=20, task_type=gremlin, task_callable=com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob, task_input={"gremlin":"g.E().hasLabel('relyibai').count()","aliases":{"hugegraph":"graph"},"bindings":{},"language":"gremlin-groovy"}}

While the asynchronous task was running, the following problems appeared:
1. Data was being loaded concurrently through the batch API; the loads failed and the server responses timed out.
2. To check whether the server was still healthy, a simple Gremlin query, g.V().limit(10), was run in Studio; it failed as well.
After the asynchronous task finished, Studio queries worked again. The batch-load job was restarted without running gremlin-schedule in the meantime, and it completed normally.

Logs:
1. Server log:
Sep 29, 2020 11:26:12 AM org.glassfish.jersey.server.ServerRuntime$Responder writeResponse
SEVERE: An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Connection is closed
at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:92)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130)
at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:711)
at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:593)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:573)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection is closed
at org.glassfish.grizzly.nio.NIOConnection.assertOpen(NIOConnection.java:445)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:677)
at org.glassfish.grizzly.http.server.NIOOutputStreamImpl.write(NIOOutputStreamImpl.java:83)
at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:229)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2033)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment2(UTF8JsonGenerator.java:1348)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment(UTF8JsonGenerator.java:1295)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:457)
at com.fasterxml.jackson.databind.ser.impl.IndexedStringListSerializer.serializeContents(IndexedStringListSerializer.java:121)
at com.fasterxml.jackson.databind.ser.impl.IndexedStringListSerializer.serialize(IndexedStringListSerializer.java:79)
at com.fasterxml.jackson.databind.ser.impl.IndexedStringListSerializer.serialize(IndexedStringListSerializer.java:21)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:416)
at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1416)
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:940)
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:618)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86)
... 19 more
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeCompositeBuffer(TCPNIOUtils.java:86)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:129)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:106)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:260)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:169)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:71)
at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:126)
at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:191)
at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:890)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:858)
at org.glassfish.grizzly.http.io.OutputBuffer.flushBuffer(OutputBuffer.java:1059)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:709)
... 39 more

2020-09-29 11:26:14 2710681 [task-worker-2] [INFO ] org.apache.hadoop.hbase.client.AsyncRequestFutureImpl [] - #3, waiting for 883093 actions to finish on table: hugegraph:g_oe

2. Batch data loading error log:

2020-09-29 11:21:26.499[Thread-20] ERROR cn.encdata.cloud.graph.job.LoadDataJobHandler[49] Failed to load data.
com.baidu.hugegraph.rest.ClientException: Failed to do request
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:128)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:151)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:138)
at com.baidu.hugegraph.api.graph.VertexAPI.create(VertexAPI.java:56)
at com.baidu.hugegraph.driver.GraphManager.addVertices(GraphManager.java:83)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Read timed out
at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:496)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:229)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:445)
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:351)
at com.baidu.hugegraph.rest.RestClient.lambda$post$1(RestClient.java:153)
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:126)
... 13 common frames omitted
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:450)
... 24 common frames omitted

@javeme

Configuration file names do not match

In hugegraph-tools, the conf directory of the downloaded hugegraph-studio contains hugegraph-studio.properties by default, but the deploy/start-all process complains that the hugestudio.conf file cannot be found.

Bug in the get_ip function of util.sh

Hi, while using hugegraph-tools to deploy HugeGraph server and studio, I found that the host IP field in rest-server.yml and hugegraph-studio.properties is sometimes filled in correctly and sometimes left empty.

function get_ip() {
    local os=`uname`
    local loopback="127.0.0.1"
    local ip=""
    case $os in
        Linux) ip=`ifconfig | grep 'inet addr:'| grep -v "$loopback" | cut -d: -f2 | awk '{ print $1}'`;;
        FreeBSD|OpenBSD|Darwin) ip=`ifconfig  | grep -E 'inet.[0-9]' | grep -v "$loopback" | awk '{ print $2}'`;;
        SunOS) ip=`ifconfig -a | grep inet | grep -v "$loopback" | awk '{ print $2} '`;;
        *) ip=$loopback;;
    esac
    echo $ip
}
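A possible hardening of get_ip, sketched under the assumption that the empty values come from ifconfig output variations (multiple or zero matching addresses): keep only the first non-loopback address and fall back to 127.0.0.1 when nothing is found. `hostname -I` is Linux-specific, hence the guard:

```shell
# Sketch of a more defensive get_ip: prefer `hostname -I` where available,
# keep only the first address, and fall back to loopback when nothing matches.
get_ip() {
    local ip=""
    if command -v hostname >/dev/null 2>&1; then
        ip=$(hostname -I 2>/dev/null)
        ip=${ip%% *}                 # keep the first address only
    fi
    [ -n "$ip" ] || ip="127.0.0.1"   # never emit an empty value
    echo "$ip"
}
```

Unlike the original, this always prints exactly one non-empty value, so the properties files never end up with an empty host field.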

Stream data through the migrate command when upgrading HugeGraph

migrate is currently backup + restore, and the restore only starts after the backup completes, so the data has to land in the local filesystem or HDFS in between. This could be optimized so the data never lands: restore the data backed up from the source graph directly into the target graph.

migrate fails in RESTORING mode

hugegraph-tools version: 1.5.0
hugegraph-server version: 0.10.4

Command: ./bin/hugegraph --url http://172.30.5.166:8183 --graph hugegraph migrate --target-url http://172.30.5.166:8183 --target-graph hugegraph1
Error: "Exception in thread "main" class java.lang.IllegalArgumentException: Must provide schema id if in RESTORING mode"

The log is as follows:

Migrate graph 'hugegraph' from 'http://172.30.5.166:8183' to 'http://172.30.5.166:8183' as 'hugegraph1'
Property key backup started
Property key backup finished: 20
Vertex label backup started
Vertex label backup finished: 15
Edge label backup started
Edge label backup finished: 22
Index label backup started
Index label backup finished: 26
Vertices backup started
Vertices has been backup: 2261
Vertices backup finished: 2261
Edges backup started
Edges has been backup: 5390
Edges backup finished: 5390

backup summary: {
property key number: 20,
vertex label number: 15,
edge label number: 22,
index label number: 26,
vertex number: 2261,
edge number: 5390,
}
cost time(s): 0
Graph 'hugegraph1' start restore in mode 'RESTORING'!
Property key restore started
Property key restore finished: 20
Vertex label restore started
Vertex label restore finished: 15
Edge label restore started
Edge label restore finished: 22
Index label restore started
Exception in thread "main" class java.lang.IllegalArgumentException: Must provide schema id if in RESTORING mode
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:47)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:93)
at com.baidu.hugegraph.rest.AbstractRestClient.post(AbstractRestClient.java:198)
at com.baidu.hugegraph.rest.AbstractRestClient.post(AbstractRestClient.java:172)
at com.baidu.hugegraph.api.schema.IndexLabelAPI.create(IndexLabelAPI.java:49)
at com.baidu.hugegraph.driver.SchemaManager.addIndexLabel(SchemaManager.java:187)
at com.baidu.hugegraph.driver.SchemaManager.addIndexLabel(SchemaManager.java:182)
at com.baidu.hugegraph.manager.RestoreManager.lambda$restoreIndexLabels$9(RestoreManager.java:237)
at com.baidu.hugegraph.manager.BackupRestoreBaseManager.read(BackupRestoreBaseManager.java:186)
at com.baidu.hugegraph.manager.RestoreManager.restore(RestoreManager.java:249)
at com.baidu.hugegraph.manager.RestoreManager.restoreIndexLabels(RestoreManager.java:242)
at com.baidu.hugegraph.manager.RestoreManager.restore(RestoreManager.java:83)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:233)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:427)

graph-clear fails — how should the spaces in "I'm sure to delete all data" be entered on the command line?

graph-clear removes all schema and data of a graph.
--confirm-message or -c is required: a deletion confirmation message that must be typed manually as a second confirmation against accidental deletion, "I'm sure to delete all data", including the double quotes.

How should this command be written?
./hugegraph --graph rocksdbgraph4 --url http://5.28.43.113:8080 graph-clear --graph-name rocksdbgraph4 -c "I'm sure to delete all data"

./hugegraph: line 91: [: too many arguments
Was passed main parameter 'sure' but no main parameter was defined in your arg class
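The "Was passed main parameter 'sure'" message suggests the quoted message is being re-split somewhere in the wrapper script (for example an unquoted $* or an unquoted variable inside the [ ] test at line 91, which also explains "[: too many arguments"). A small sketch of the difference; the show_args helper is hypothetical, not part of hugegraph-tools:

```shell
# Hypothetical helper: print each argument it receives on its own line.
show_args() { for a in "$@"; do echo "arg: $a"; done; }

set -- -c "I'm sure to delete all data"

# Quoted "$@" keeps the confirm message as one argument; an unquoted $* or
# an unquoted $msg in a [ ] test would split it into six separate words.
show_args "$@"
```

With "$@" the output is two arguments (-c and the whole message); with unquoted $* it would be seven.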

Exception in thread "main" class java.lang.IllegalStateException: Can't find task scheduler for graph 'hugegraph[hugegraph]'

All of the following commands fail with the same task-scheduler error, yet I have never set up any task-related jobs. How can this be resolved? Thanks!!! @javeme

$ ./bin/hugegraph graph-clear -c "I'm sure to delete all data"

Exception in thread "main" class java.lang.IllegalStateException: Can't find task scheduler for graph 'hugegraph[hugegraph]'
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:100)
at com.baidu.hugegraph.rest.RestClient.delete(RestClient.java:232)
at com.baidu.hugegraph.api.graphs.GraphsAPI.clear(GraphsAPI.java:82)
at com.baidu.hugegraph.driver.GraphsManager.clear(GraphsManager.java:54)
at com.baidu.hugegraph.manager.GraphsManager.clear(GraphsManager.java:44)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:222)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:379)

$ ./bin/hugegraph task-list
Exception in thread "main" class java.lang.IllegalStateException: Can't find task scheduler for graph 'hugegraph[hugegraph]'
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:100)
at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:212)
at com.baidu.hugegraph.api.task.TaskAPI.list(TaskAPI.java:66)
at com.baidu.hugegraph.driver.TaskManager.list(TaskManager.java:54)
at com.baidu.hugegraph.manager.TasksManager.list(TasksManager.java:52)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:261)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:379)

$ ./bin/hugegraph task-clear --force
Exception in thread "main" class java.lang.IllegalStateException: Can't find task scheduler for graph 'hugegraph[hugegraph]'
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:100)
at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:212)
at com.baidu.hugegraph.api.task.TaskAPI.list(TaskAPI.java:66)
at com.baidu.hugegraph.driver.TaskManager.list(TaskManager.java:54)
at com.baidu.hugegraph.manager.TasksManager.list(TasksManager.java:52)
at com.baidu.hugegraph.manager.TasksManager.clear(TasksManager.java:72)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:288)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:379)

Problem clearing data with the HBase backend

Running ./hugegraph --url "xxx" graph-clear
reports: The following option is required: [--confirm-message | -c]
Running ./hugegraph --url "xxx" graph-clear -c
reports: Expected a value after parameter -c

What is the detailed usage of graph-clear, and what argument should follow -c?

Error; there appears to be no default hostname

hugeGraph ./hugegraph-tools-1.2.0/bin/hugegraph deploy -v 0.7 -p .
sed: can't read hugegraph-studio-0.7.0/conf/hugestudio.properties: No such file or directory
Initing HugeGraph Store...
2018-11-26 11:08:58 1216  [main] [INFO ] com.baidu.hugegraph.cmd.InitStore [] - Init graph with config file: conf/hugegraph.properties
2018-11-26 11:08:58 1414  [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store 'rocksdb' for graph 'hugegraph'
2018-11-26 11:08:58 1491  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: rocksdb-data/schema
2018-11-26 11:08:59 2120  [main] [INFO ] com.baidu.hugegraph.cmd.InitStore [] - Skip init-store due to the backend store of 'hugegraph' had been initialized
2018-11-26 11:08:59 2135  [pool-5-thread-1] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: rocksdb-data/system
/root/hugeGraph/hugegraph-tools-1.2.0/hugegraph-0.7.4/bin/util.sh: line 60: lsof: command not found
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://:8080/graphs)................The operation timed out when attempting to connect to http://:8080/graphs
See /root/hugeGraph/hugegraph-tools-1.2.0/hugegraph-0.7.4/logs/hugegraph-server.log for HugeGraphServer log output.
Failed to start HugeGraphServer, please check the logs under '/root/hugeGraph/hugegraph-tools-1.2.0/hugegraph-0.7.4/logs' for details

[Question] hugegraph-tools-1.4.0数据迁移问题

Problem Type

configs (configuration / documentation related)

Before submit

  • I have confirmed that there is no identical or duplicate question in the existing Issues and FAQ

Environment

  • Server Version: v0.10.4
  • Backend: RocksDB

Your Question

bin/hugegraph --url <my-ip:port> --graph <my-graph> migrate --target-url <my-ip:port> --target-graph <my-graph>
Result:
Migrate graph 'hugegraph' from 'http://127.0.0.1:8080' to '<my-ip:port>' as '<my-graph>'

Why does the command still go to the default location even though I specified my own ip:port and graph?
I also changed the global --url and --graph variables in bin/hugegraph to my own values, but that did not take effect either.

Exception in thread "main" java.lang.RuntimeException: Construct manager failed for class 'class com.baidu.hugegraph.manager.BackupManager'
at com.baidu.hugegraph.cmd.HugeGraphCommand.manager(HugeGraphCommand.java:318)
at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:172)
at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:379)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.baidu.hugegraph.cmd.HugeGraphCommand.manager(HugeGraphCommand.java:316)
Caused by: null: HTTP Status 404 – Not Found

Type: Status Report
Message: /versions
Description: The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.

Apache Tomcat/9.0.11


at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:100)
at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:193)
at com.baidu.hugegraph.api.version.VersionAPI.get(VersionAPI.java:41)
at com.baidu.hugegraph.driver.VersionManager.getApiVersion(VersionManager.java:45)
at com.baidu.hugegraph.driver.HugeClient.checkServerApiVersion(HugeClient.java:122)
at com.baidu.hugegraph.driver.HugeClient.initManagers(HugeClient.java:108)
at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:72)
at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:60)
at com.baidu.hugegraph.base.ToolClient.<init>(ToolClient.java:40)
at com.baidu.hugegraph.base.ToolManager.<init>(ToolManager.java:36)
at com.baidu.hugegraph.base.RetryManager.<init>(RetryManager.java:43)
at com.baidu.hugegraph.manager.BackupRestoreBaseManager.<init>(BackupRestoreBaseManager.java:74)
at com.baidu.hugegraph.manager.BackupManager.<init>(BackupManager.java:74)
... 7 more

Vertex/Edge example (sample vertex / edge data)

No response

Schema [VertexLabel, EdgeLabel, IndexLabel]

No response

Exception during graph restore; only part of the data is restored

hugegraph-tools: 1.4.0
hugegraph: 0.10.4
Backend: hbase

The backup command and summary are as follows; split-size used the default, and the data was backed up directly to the local filesystem:

./bin/hugegraph --url http://hugegraph:8181 --graph home backup -t all --directory ./backup-home

Graph 'home' start backup!
Property key backup started
Property key backup finished: 21
Vertex label backup started
Vertex label backup finished: 15
Edge label backup started
Edge label backup finished: 22
Index label backup started
Index label backup finished: 29
Vertices backup started
Vertices has been backup: 168115
Vertices backup finished: 168115
Edges backup started
Edges has been backup: 526578
Edges backup finished: 526578

backup summary: {
property key number: 21,
vertex label number: 15,
edge label number: 22,
index label number: 29,
vertex number: 168115,
edge number: 526578,
}
cost time(s): 19

Set graph mode:
./bin/hugegraph --url http://hugegraph1:8183 --graph homeBak graph-mode-set -m RESTORING
Restore command:
./bin/hugegraph --url http://hugegraph1:8183 --graph homeBak restore -t all --directory ./backup-home

Graph 'homeBak' start restore in mode 'RESTORING'!
Property key restore started
Property key restore finished: 21
Vertex label restore started
Vertex label restore finished: 15
Edge label restore started
Edge label restore finished: 22
Index label restore started
Index label restore finished: 29
Vertices restore started
Restoring VERTEX ...
files: [
vertices1.zip,
vertices2.zip,
vertices3.zip,
vertices0.zip,
]
Vertices has been restored: When restoring vertices in file 'vertices0.zip' occurs exception 'com.baidu.hugegraph.exception.ToolsException: Exception occurred while restoring vertices(after 3 retries)'
When restoring vertices in file 'vertices2.zip' occurs exception 'com.baidu.hugegraph.exception.ToolsException: Exception occurred while restoring vertices(after 3 retries)'
85197
Vertices restore finished: 85197
Edges restore started
Restoring EDGE ...
files: [
edges0.zip,
edges1.zip,
edges2.zip,
edges3.zip,
]
Edges has been restored: When restoring edges in file 'edges1.zip' occurs exception 'com.baidu.hugegraph.exception.ToolsException: Exception occurred while restoring edges(after 3 retries)'
When restoring edges in file 'edges0.zip' occurs exception 'com.baidu.hugegraph.exception.ToolsException: Exception occurred while restoring edges(after 3 retries)'
159881
Edges restore finished: 159881

restore summary: {
property key number: 21,
vertex label number: 15,
edge label number: 22,
index label number: 29,
vertex number: 85197,
edge number: 159881,
}
cost time(s): 57

What causes these exceptions, and how can they be resolved?

dump graph failed

When running the dump command, the following error occurs:

Graph 'graph_name' start dump!
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAddCookies).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" com.baidu.hugegraph.exception.ToolsException: Exception occurred while querying shards of vertices(after 3 retries)
	at com.baidu.hugegraph.base.RetryManager.retry(RetryManager.java:56)
	at com.baidu.hugegraph.manager.BackupManager.backupVertices(BackupManager.java:126)
	at com.baidu.hugegraph.manager.DumpGraphManager.dump(DumpGraphManager.java:67)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:158)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:310)
Caused by: class java.lang.IllegalArgumentException: The split-size must be >= 1048576 bytes, but got 0
	at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
	at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:69)
	at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:192)
	at com.baidu.hugegraph.api.traverser.VerticesAPI.shards(VerticesAPI.java:65)
	at com.baidu.hugegraph.driver.TraverserManager.vertexShards(TraverserManager.java:227)
	at com.baidu.hugegraph.manager.BackupManager.lambda$backupVertices$0(BackupManager.java:127)
	at com.baidu.hugegraph.base.RetryManager.retry(RetryManager.java:51)
	... 4 more

The test graph is the default graph provided under the loader's examples directory.
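The root cause is server-side validation of the shard split size: values below 1 MiB (1048576 bytes) are rejected, and here the value sent by the tools was 0. A minimal sketch of the check described by the error message (the function name is hypothetical):

```shell
# Hedged sketch of the server-side check behind the error message:
# shard split sizes below 1 MiB are rejected.
MIN_SPLIT_SIZE=1048576

check_split_size() {
    local size=$1
    if [ "$size" -lt "$MIN_SPLIT_SIZE" ]; then
        echo "The split-size must be >= $MIN_SPLIT_SIZE bytes, but got $size" >&2
        return 1
    fi
}
```

If the tools version supports it, passing an explicit split size of at least 1048576 bytes to the dump/backup command is one assumed workaround; otherwise check that the hugegraph-tools and server versions match, since a mismatch can send a zero default.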

compatibility problem with new version of hugegraph studio

Hi,

I see the previous PR fixed the config file name mismatch in the start_hugegraph_studio() function of start-all.sh, but deploy.sh also modifies the studio config file:

function config_hugegraph_studio() {
    local studio_server_conf="$STUDIO_DIR/conf/hugestudio.properties"

    write_property $studio_server_conf "server\.httpBindAddress" $IP
}

As shown above, the property is written to hugestudio.properties by default.

Also, in the new version of hugegraph-studio, the property names for the server ip and server port are no longer server.httpBindAddress and server.httpPort, but studio.server.host and studio.server.port.
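Given that the new studio properties are studio.server.host and studio.server.port, deploy.sh could be adapted as below. Both the new config file name hugegraph-studio.properties and the write_property helper shown here are assumptions, not verified against the studio release:

```shell
# Hedged sketch: write the new-style studio properties.
# write_property replaces "key=..." lines in place (assumed helper).
write_property() {
    local file=$1 key=$2 value=$3
    sed -i "s/^\(${key}\)=.*/\1=${value}/" "$file"
}

function config_hugegraph_studio() {
    # Assumption: the new config file is named hugegraph-studio.properties
    local studio_server_conf="$STUDIO_DIR/conf/hugegraph-studio.properties"

    write_property "$studio_server_conf" "studio\.server\.host" "$IP"
    write_property "$studio_server_conf" "studio\.server\.port" "$PORT"
}
```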

error when using graph-clear

I want to use ./hugegraph graph-clear to delete schema and data, but got errors like this:

$ ./hugegraph graph-clear -c "I'm sure to delete all data"
./hugegraph: line 90: [: too many arguments
Was passed main parameter 'sure' but no main parameter was defined in your arg class

$ ./hugegraph graph-clear -c hugegraph "I'm sure to delete all data"
./hugegraph: line 90: [: too many arguments
Was passed main parameter 'I'm' but no main parameter was defined in your arg class

$ ./hugegraph --graph hugegraph graph-clear -c
Expected a value after parameter -c

$ ./hugegraph --graph hugegraph graph-clear -c "I'm sure to delete all data"
./hugegraph: line 90: [: too many arguments
Was passed main parameter 'sure' but no main parameter was defined in your arg class
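Both symptoms point at quoting in the wrapper script: `[: too many arguments` is what bash prints when an unquoted variable containing spaces is expanded inside `[ ... ]`, and "Was passed main parameter 'sure'" suggests the confirm message is re-split into words before it reaches JCommander. A sketch of the two fixes (hypothetical variable and function names, not the actual script):

```shell
# 1) Quote variables inside test brackets so the message stays one word:
confirm_message="I'm sure to delete all data"
if [ "$confirm_message" = "I'm sure to delete all data" ]; then
    echo "confirmed"
fi

# 2) Forward CLI arguments with "$@" so quoted arguments survive intact:
count_args() { echo "$#"; }
count_args graph-clear -c "I'm sure to delete all data"   # prints 3, not 8
```

With an unquoted `$confirm_message` or a bare `$@`/`$*`, the message would be split into eight words and JCommander would see `sure` as a stray main parameter, exactly as in the transcripts above.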

[Question] Upgrading graph data from 0.9/0.10 to 0.11 fails (vertex/edge data inconsistent)

Expected behavior

Graph data from version 0.9/0.10 is upgraded to 0.11 via tools migrate.

Actual behavior

Vertex data is converted correctly on import (e.g. numeric types),
but edge ids are not converted accordingly, so vertex and edge ids no longer match; see the example below:

Steps to reproduce the problem

Run migrate with the tools. After migration, query the same vertex and edge with Gremlin: the vertex id has changed but the edge id has not, so the vertex becomes isolated.

  • In primary-key mode: old vertex id 1:37449855, new vertex id 1:5Es1~

Vertex/Edge example

  1. Vertex query on 0.9: g.V().hasLabel('file', 'network').has('uid', within(37449855,436342406))
    (screenshot)

  2. The same Gremlin query after migrating to 0.11: g.V().hasLabel('file', 'network').has('uid', within(37449855,436342406))
    (screenshot)

Specifications of environment

  • hugegraph server version: 0.11
  • hugegraph tool version: 1.5.0

@zhoney @javeme

add command migrate

Support migrating data from one graph to another, including cross-version scenarios during upgrades.

Error on index_label when restoring backed-up data

Note:

  1. Please search the existing Issues and FAQ first to confirm there is no same / related issue; do not submit duplicates.
  2. We need information as detailed as possible to analyze the problem; the more detail (including logs / screenshots / config), the faster the response.
  3. Please follow up on your issue; issues lacking information or without a reply for a long time (> 14 days) may be closed (and can be reopened when needed).

Environment (required)

hugegraph-0.9.2
hugegraph-tools-1.5.0

  • Server Version: v0.9.2 (refer here)
  • Backend: Cassandra 3.x, x nodes, HDD or SSD
  • OS: xx CPUs, xx G RAM, Centos 7.x
  • Data Size: xx vertices, xx edges (e.g. 10M vertices, 90M edges)

Your Question

1. Back up the existing graph hugegraphdoctor: --graph hugegraphdoctor backup -t 'property_key,vertex_label,edge_label,index_label,vertex,edge' --directory ./backupJson

2. Create a new graph hugegraphdoctor_backup, switch its mode with --graph hugegraphdoctor_backup graph-mode-set -m RESTORING, and clear its data with --graph hugegraphdoctor_backup graph-clear -c "I'm sure to delete all data"

3. Restore the data into hugegraphdoctor_backup: --graph hugegraphdoctor_backup restore -t 'property_key,vertex_label,edge_label,index_label,vertex,edge' --directory ./backupJson

4. The following error occurs:

 Graph 'hugegraphdoctor_backup' start restore in mode 'RESTORING'!
Property key restore started
Property key restore finished: 57
Vertex label restore started
Vertex label restore finished: 10
Edge label restore started
Edge label restore finished: 9
Index label restore started
Exception in thread "main" class java.lang.IllegalArgumentException: Must provide schema id if in RESTORING mode
	at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:47)
	at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:93)
	at com.baidu.hugegraph.rest.AbstractRestClient.post(AbstractRestClient.java:198)
	at com.baidu.hugegraph.rest.AbstractRestClient.post(AbstractRestClient.java:172)
	at com.baidu.hugegraph.api.schema.IndexLabelAPI.create(IndexLabelAPI.java:49)
	at com.baidu.hugegraph.driver.SchemaManager.addIndexLabel(SchemaManager.java:187)
	at com.baidu.hugegraph.driver.SchemaManager.addIndexLabel(SchemaManager.java:182)
	at com.baidu.hugegraph.manager.RestoreManager.lambda$restoreIndexLabels$9(RestoreManager.java:237)
	at com.baidu.hugegraph.manager.BackupRestoreBaseManager.read(BackupRestoreBaseManager.java:186)
	at com.baidu.hugegraph.manager.RestoreManager.restore(RestoreManager.java:249)
	at com.baidu.hugegraph.manager.RestoreManager.restoreIndexLabels(RestoreManager.java:242)
	at com.baidu.hugegraph.manager.RestoreManager.restore(RestoreManager.java:83)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.execute(HugeGraphCommand.java:191)
	at com.baidu.hugegraph.cmd.HugeGraphCommand.main(HugeGraphCommand.java:427)
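The error means that in RESTORING mode the server requires each schema element to be created with its original id, so every entry in the backed-up index_label file must carry an `id` field. A hedged example of what one such entry should look like (name, id, and fields are hypothetical):

```json
{
  "id": 1,
  "name": "personByAge",
  "base_type": "VERTEX_LABEL",
  "base_value": "person",
  "index_type": "RANGE",
  "fields": ["age"]
}
```

If the entries in ./backupJson lack the `id` field, that usually indicates a version mismatch between the tools that produced the backup and the server doing the restore; re-running the backup with a matching hugegraph-tools version is one assumed way to get ids into the files.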

Note: for graph usage / configuration questions, please first refer to the REST-API docs and the Server configuration docs.

Related information:

Provide related "Data & Schema" info

Vertex/Edge example

// JSON of Vertex / Edge ⬇

Schema [VertexLabel, EdgeLabel, IndexLabel]

// JSON of GraphSchema ⬇

