Comments (21)
@ewencp Same problem here: no exception, just a 500 timeout when updating or deleting a connector. And once this happens, all subsequent PUT/POST requests stop working.
[2017-05-23 11:14:12,700] INFO 10.10.4.1 - - [23/May/2017:03:12:05 +0000] "DELETE /connectors/mongo_cron_source_slave HTTP/1.1" 500 92 127182 (org.apache.kafka.connect.runtime.rest.RestServer)
I am also facing a timeout error when posting the source connector for DB2. The POST API waits for almost 90 seconds and then times out with the error below:
[2019-06-02 00:17:17,906] INFO 192.168.1.2 - - [01/Jun/2019:18:45:47 +0000] "POST /connectors HTTP/1.1" 500 48 90004 (org.apache.kafka.connect.runtime.rest.RestServer:60)
I can also see the following warning in the Kafka Connect log just before the timeout:
This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1011)
Is there any configuration to increase the API timeout? I have also noticed interesting behavior: when I run Kafka Connect in standalone mode it works perfectly, and I can see the DB2 table data in the Kafka topic.
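A note on the warning above: whether this is fixable from the worker side depends on which consumer is actually stalling, but Kafka Connect does let you pass overrides to the consumers a worker creates by prefixing consumer properties with consumer. in the worker config. A sketch of raising the poll interval that way, assuming a distributed worker properties file -- an experiment to try, not a confirmed fix for this DB2 timeout:

# connect-distributed.properties (excerpt)
# Worker-level override applied to the consumers this worker creates;
# 600000 ms doubles the 5-minute default for max.poll.interval.ms.
consumer.max.poll.interval.ms=600000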
Closing this as the original issue has been resolved. Follow-up commentary pertains to other connectors.
That was my stupid error.
https://docs.confluent.io/current/installation/docker/docs/configuration.html#confluent-kafka-cp-kafka
By default the replication factor is 3. I fixed my problem by setting KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 in my docker-compose.yml.
So keep in mind: if Kafka has not started successfully, Kafka Connect will respond with a 500 error code, because it stores its data in Kafka topics.
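A minimal sketch of the relevant part of such a docker-compose.yml for a single-broker development setup (the service name and image tag here are illustrative):

kafka:
  image: confluentinc/cp-kafka:latest
  environment:
    # With only one broker, the internal __consumer_offsets topic cannot
    # be created at the default replication factor of 3, and Connect's
    # group coordination and startup writes then fail with opaque 500s.
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1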
I'm not experiencing @danielfnfaria's precise issues, but something I've noticed is that some erroneous requests to Kafka Connect distributed workers cause their internal REST endpoints to die, with no error returned to the caller or even logged at all.
The first example that I experienced was because I assumed that the Confluent Platform's uber-RPM included the S3 connector, when it turns out that it doesn't. Before I realized this, any attempt I made at registering an S3 sink timed out with a 500 error; and not just that, after such a timeout, all requests to the worker's REST interface would time out thereafter until the worker was restarted. Once I realized that the S3 connector jars were not actually there and installed that RPM separately, the registration request succeeded.
So basically, whatever problem @danielfnfaria is experiencing here, the bigger problem is that distributed workers swallow exceptions and die when you send them a "killer request."
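One cheap sanity check for the missing-jars case: a worker's REST API exposes GET /connector-plugins, which lists the connector classes that worker can actually load, so you can verify a plugin is installed before trying to register it. A sketch, assuming the worker listens on localhost:8083:

# If your connector class (e.g. io.confluent.connect.s3.S3SinkConnector)
# is absent from this list, registering it cannot succeed.
curl -s http://localhost:8083/connector-plugins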
Any update on this issue? We are running into the same problem (GET /connectors times out).
@danielfnfaria Is there any more information than this? What is the actual output -- it looks like it wrote 48 bytes?
Can you check the connect logs to see if there are any relevant messages?
I am getting the same error for the GET and POST /connectors APIs. I am using the confluent-3.3.0 package.
2017-08-08 10:42:02 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:40:32 +0000] "GET /connectors HTTP/1.1" 500 48 90007
2017-08-08 10:42:34 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:41:03 +0000] "POST /connectors HTTP/1.1" 500 48 90124
Please help me resolve this error.
By downgrading Confluent to version 3.2.0, I am able to access the /connectors API.
Same problem
Same problem, any updates?
I solved this problem by setting rest.advertised.host.name (with an IP address) and rest.advertised.port. Each Connect worker process needs a unique host or port, and these hosts and ports must be reachable from every node of the cluster. If you start a cluster in which some workers share the same host name and port, connectors will hang after receiving an update/delete request.
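For reference, a sketch of those settings in each worker's properties file (the values are placeholders; the point is that every worker advertises a host/port pair that is unique and reachable from its peers, since write requests to the REST API get forwarded to the leader worker over these advertised addresses):

# connect-distributed.properties (excerpt) -- per-worker values
rest.advertised.host.name=192.168.2.60   # must be reachable by the other workers
rest.advertised.port=18083               # must differ per worker on a shared host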
I too am having this issue on v3.3.0 of kafka-connect. The /connectors endpoint appears to be broken in this version.
#116 recently enhanced the connector to use exponential backoff. That was merged into the 3.3.x, 3.4.x, and master branches but has not yet been released. Feel free to build it to see if this fixes the issue -- would love to hear feedback.
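If you want to try it before a release, building from one of those branches looks roughly like this, assuming a standard Maven toolchain for this repo:

git clone https://github.com/confluentinc/kafka-connect-elasticsearch.git
cd kafka-connect-elasticsearch
git checkout 3.3.x                # or 3.4.x / master, per the comment above
mvn clean package -DskipTests     # the connector jars land under target/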
I am getting the same problem. Were you able to solve it? It seems some small config is missing :-(
Getting the same issue.
I've noticed one very interesting thing: the following docker-compose file works like a charm.
---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
      - zoo1:/data
    networks:
      - esnet2
    ports:
      - "22181:22181"
      - "22888:22888"
      - "23888:23888"
  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;0.0.0.0:32888:33888;192.168.2.60:42888:43888
    volumes:
      - zoo2:/data
    networks:
      - esnet2
    ports:
      - "32181:32181"
      - "32888:32888"
      - "33888:33888"
  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 42181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;192.168.2.60:32888:33888;0.0.0.0:42888:43888
    volumes:
      - zoo3:/data
    networks:
      - esnet2
    ports:
      - "42181:42181"
      - "42888:42888"
      - "43888:43888"
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
      - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
      - "19092:19092"
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:29092
    volumes:
      - kafka2:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
      - "29092:29092"
  kafka-3:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:39092
    volumes:
      - kafka3:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
      - "39092:39092"
  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 3
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: /usr/share/java
  connect-2:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 28083:28083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 28083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 3
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: /usr/share/java
  connect-3:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 38083:38083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 38083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 3
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: /usr/share/java
volumes:
  zoo1:
    driver: local
  zoo2:
    driver: local
  zoo3:
    driver: local
  kafka1:
    driver: local
  kafka2:
    driver: local
  kafka3:
    driver: local
networks:
  esnet2:
    driver: bridge
But when I start this one:
---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      #ZOOKEEPER_INIT_LIMIT: 5
      #ZOOKEEPER_SYNC_LIMIT: 2
      #ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
      - zoo1:/data
    networks:
      - esnet2
    ports:
      - "22181:22181"
      - "22888:22888"
      - "23888:23888"
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 1
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
      - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
      - "19092:19092"
  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_REST_ADVERTISED_PORT: 18083
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=INFO
      CONNECT_PLUGIN_PATH: /usr/share/java
volumes:
  esdata1:
    driver: local
  zoo1:
    driver: local
  kafka1:
    driver: local
networks:
  esnet2:
    driver: bridge
I can't even request the list of connectors with GET /connectors; I'm getting a 500 request timeout. I don't know why the cluster-mode setup works but this one doesn't.
Looks like this issue has been happening for a while in certain situations. I am using Confluent version 4.0.1 in distributed mode and can reproduce it. In my case, I have one JdbcSourceConnector and one RedshiftSinkConnector. The first deploy or deletion REST call works for either connector, but all subsequent REST calls hang. I went through this thread http://mail-archives.apache.org/mod_mbox/kafka-users/201612.mbox/%[email protected]%3E, as well as confluentinc/kafka-connect-jdbc#302, but neither helped in my situation. Does anyone have a suggestion?
I got my issue solved. In my case, the problem was that we were using "timestamp+incrementing" mode against a huge source table with no index on the timestamp column. After the source connector was created, it started querying the DB and waited for the result until it timed out, then ran the query again and again. During the query, the REST API reported "500: timeout" for any new connector deployment (I don't know how the connector handles that logic internally). When I switched to another table that had an index, it worked. Not sure if there is connector monitoring that could detect this corner case, but a query timeout definitely should not bring down the REST API.
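To make that concrete: timestamp+incrementing mode repeatedly issues range queries against the timestamp and incrementing columns, so on a large table those columns need an index. A hypothetical example (table and column names are made up):

-- Without an index like this, every poll can turn into a full table scan
-- on a huge table, which is what appeared to keep the connector busy
-- until the REST timeout fired.
CREATE INDEX idx_src_ts_inc ON my_big_table (updated_ts, id);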
Hi, do you have any solution for the DB2 source connector POST timeout described earlier in this thread? I'm facing the exact same issue when loading a source connector in distributed mode. Please reply if anybody has found a fix.
Facing the same issue here with the kafka-connect-sftp source connector :(
I got a similar issue. I posted my solution at https://stackoverflow.com/questions/71520181/got-500-request-timed-out-for-kafka-connect-rest-api-post-put-delete
For me, simply restarting Kafka Connect made the issue go away:
kubectl rollout restart deployment my-kafka-connect --namespace=my-kafka
So far, the timeout issue hasn't shown up again.