Comments (21)

sailxjx commented on July 18, 2024 (+8)

@ewencp Same problem here: no exception, just a 500 timeout when updating or deleting a connector. And once this happens, all subsequent PUT/POST requests stop working.
[2017-05-23 11:14:12,700] INFO 10.10.4.1 - - [23/May/2017:03:12:05 +0000] "DELETE /connectors/mongo_cron_source_slave HTTP/1.1" 500 92 127182 (org.apache.kafka.connect.runtime.rest.RestServer)

rupeshpatel02 commented on July 18, 2024 (+7)

I am also facing a timeout error while posting the source connector for DB2. The POST API waits for almost 90 seconds and then times out with the error below:

[2019-06-02 00:17:17,906] INFO 192.168.1.2 - - [01/Jun/2019:18:45:47 +0000] "POST /connectors HTTP/1.1" 500 48 90004 (org.apache.kafka.connect.runtime.rest.RestServer:60)

I can also see the warning below in the Kafka Connect log just before the timeout error:

This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1011)

Is there any configuration to increase the API timeout? I have also noticed interesting behavior: when I run Kafka Connect in standalone mode it works perfectly, and I can see the DB2 table data in the Kafka topic.
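For reference, the 90-second cutoff in the log above matches the Connect worker's default REST request timeout. The poll warning itself concerns the consumers the worker creates, whose settings can be overridden from the worker properties with a "consumer." prefix. A minimal sketch, with illustrative values only:

  # connect-distributed.properties (illustrative values, not a recommendation)
  # "consumer."-prefixed settings are passed through to the worker's consumers
  consumer.max.poll.interval.ms=600000
  consumer.max.poll.records=100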

levzem commented on July 18, 2024 (+6)

Closing this as the original issue has been resolved. Follow-up commentary pertains to other connectors.

affair commented on July 18, 2024 (+5)

That was my stupid error.
https://docs.confluent.io/current/installation/docker/docs/configuration.html#confluent-kafka-cp-kafka

By default, the replication factor is 3. I fixed my problem by setting KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 in my docker-compose.yml.
So keep in mind: if Kafka has not started successfully, Kafka Connect will respond with a 500 error code, because it stores its data in Kafka topics.
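For reference, a minimal sketch of that override in a docker-compose.yml using the Confluent images (service name and the rest of the broker configuration are omitted here):

  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      # single-broker setup: internal topics cannot be replicated 3 ways,
      # so the offsets topic must be created with replication factor 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1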

ldcasillas-progreso commented on July 18, 2024 (+4)

I'm not experiencing @danielfnfaria's precise issue, but I've noticed that some erroneous requests to Kafka Connect distributed workers cause their REST endpoints to die, with no error returned to the caller or even logged at all.

The first example I ran into was because I assumed that the Confluent Platform's uber-RPM included the S3 connector, when it turns out it doesn't. Before I realized this, any attempt to register an S3 sink timed out with a 500 error; and not just that, after such a timeout, all requests to the worker's REST interface would time out until the worker was restarted. Once I realized that the S3 connector jars were not actually there and installed that RPM separately, the registration request succeeded.

So basically, whatever problem @danielfnfaria is experiencing here, the bigger problem is that distributed workers swallow exceptions and die when you send them a "killer request."
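One way to rule out the missing-plugin case before registering a connector is to ask the worker which plugins it actually loaded. A sketch, assuming the worker's REST interface is reachable on localhost:8083:

  # list the connector classes on this worker's plugin path
  curl -s http://localhost:8083/connector-plugins
  # an installed S3 sink should appear as "io.confluent.connect.s3.S3SinkConnector"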

dex80526 commented on July 18, 2024 (+2)

Any update on this issue? We are running into the same issue (GET /connectors timeout).

ewencp commented on July 18, 2024

@danielfnfaria Is there any more information than this? What is the actual output? It looks like it wrote 48 bytes.

Can you check the connect logs to see if there are any relevant messages?

yogeshsangvikar commented on July 18, 2024

I am getting the same error for the GET and POST /connectors APIs. I am using the confluent-3.3.0 package.

2017-08-08 10:42:02 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:40:32 +0000] "GET /connectors HTTP/1.1" 500 48 90007
2017-08-08 10:42:34 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:41:03 +0000] "POST /connectors HTTP/1.1" 500 48 90124

Please help resolve this error.

yogeshsangvikar commented on July 18, 2024

By downgrading Confluent to version 3.2.0, I am able to access the /connectors API.

hleb-albau commented on July 18, 2024

Same problem

hakamairi commented on July 18, 2024

Same problem, any updates?

sailxjx commented on July 18, 2024

I solved this problem by setting rest.advertised.host.name (to an IP address) and rest.advertised.port. Each Connect worker process needs a unique host or port, and these hosts and ports must be reachable from every node of the cluster.
If you start a cluster in which some nodes share the same host name and port, connectors will block after receiving an update/delete request.
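For reference, a minimal sketch of the relevant worker settings (addresses are illustrative). The key point is that create/update/delete requests are forwarded to the leader worker at its advertised address, so if several workers advertise the same host and port, the forwarded request can land on the wrong worker and hang:

  # worker 1 (connect-distributed.properties)
  rest.advertised.host.name=10.10.4.11
  rest.advertised.port=8083

  # worker 2
  rest.advertised.host.name=10.10.4.12
  rest.advertised.port=8083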

vultron81 commented on July 18, 2024

I too am having this issue on v3.3.0 of kafka-connect. The /connectors endpoint appears to be broken in this version.

rhauch commented on July 18, 2024

#116 recently enhanced the connector to use exponential backoff. That was merged into the 3.3.x, 3.4.x, and master branches but has not yet been released. Feel free to build it to see if this fixes the issue -- would love to hear feedback.
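If you want to try the unreleased fix, a rough sketch of building the connector from one of those branches (standard Maven layout assumed; use whichever branch matches your deployment):

  git clone https://github.com/confluentinc/kafka-connect-elasticsearch.git
  cd kafka-connect-elasticsearch
  git checkout 3.3.x
  mvn clean package -DskipTests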

neeraj2k6 commented on July 18, 2024

I am getting the same problem. Were you able to solve it? It seems some small config is missing :-(

affair commented on July 18, 2024

Getting the same issue.
I've noticed one very interesting thing: the following docker-compose file works like a charm.

---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo1:/data
    networks:
      - esnet2
    ports:
       - "22181:22181"
       - "22888:22888"
       - "23888:23888"

  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;0.0.0.0:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo2:/data
    networks:
      - esnet2
    ports:
       - "32181:32181"
       - "32888:32888"
       - "33888:33888"

  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 42181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;192.168.2.60:32888:33888;0.0.0.0:42888:43888
    volumes:
       - zoo3:/data
    networks:
      - esnet2
    ports:
       - "42181:42181"
       - "42888:42888"
       - "43888:43888"

  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
       - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "19092:19092"

  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:29092
    volumes:
       - kafka2:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "29092:29092"

  kafka-3:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:39092
    volumes:
       - kafka3:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "39092:39092"

  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

  connect-2:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 28083:28083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 28083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

  connect-3:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 38083:38083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 38083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

volumes:
  zoo1:
    driver: local
  zoo2:
    driver: local
  zoo3:
    driver: local
  kafka1:
    driver: local
  kafka2:
    driver: local
  kafka3:
    driver: local

networks:
  esnet2:
    driver: bridge

But when I start this one:

---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      #ZOOKEEPER_INIT_LIMIT: 5
      #ZOOKEEPER_SYNC_LIMIT: 2
      #ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo1:/data
    networks:
      - esnet2
    ports:
       - "22181:22181"
       - "22888:22888"
       - "23888:23888"

  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 1
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
       - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "19092:19092"

  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 1

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_REST_ADVERTISED_PORT: 18083

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=INFO

      CONNECT_PLUGIN_PATH: /usr/share/java

volumes:
  esdata1:
    driver: local
  zoo1:
    driver: local
  kafka1:
    driver: local

networks:
  esnet2:
    driver: bridge

I can't even request the list of connectors with GET /connectors; I get a 500 request timed out.
I don't know why it works for the cluster-mode Connect setup but not for this one.

gaisanshi commented on July 18, 2024

It looks like this issue has been happening for a while in various situations. I am using Confluent 4.0.1 in distributed mode and can reproduce it. In my case, I have one JdbcSourceConnector and one RedshiftSinkConnector. The first deploy or delete REST call works for either connector, but all following REST calls hang. I went through this thread http://mail-archives.apache.org/mod_mbox/kafka-users/201612.mbox/%[email protected]%3E and confluentinc/kafka-connect-jdbc#302, but they don't help in my situation. Does anyone have a suggestion?

gaisanshi commented on July 18, 2024

I got my issue solved. In my case, the problem was that we were using "timestamp+incrementing" mode, but the source was a huge table with no index on the timestamp column. After the source connector was created, it started querying the DB and waiting for the result until it timed out, then ran the query again and again. While the query was running, the REST API reported "500: timeout" for any new connector deployment (I don't know how the connector handles that logic internally). When I switched to another table that has an index, it worked. I am not sure whether there is any connector monitoring that could detect this corner case, but a query timeout definitely should not bring down the REST API.
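For context, a sketch of the kind of JDBC source configuration involved (the connection URL and column names are illustrative). In "timestamp+incrementing" mode the connector filters every poll by the timestamp column, so that column needs an index for the query to return before the worker gives up:

  {
    "name": "jdbc-source-example",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:db2://db-host:50000/mydb",
      "mode": "timestamp+incrementing",
      "timestamp.column.name": "updated_at",
      "incrementing.column.name": "id",
      "topic.prefix": "jdbc-"
    }
  }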

Bharath1796 commented on July 18, 2024

(Quoting @rupeshpatel02's comment above: the DB2 source connector POST times out after ~90 seconds with a 500, preceded by the consumer poll timeout warning.)

Hi, do you have any solution for this? I'm also facing the exact same issue when loading a source connector in distributed mode. Please reply if anybody has found a solution.

M3lkior commented on July 18, 2024

Facing the same issue here with the kafka-connect-sftp source connector :(

hongbo-miao commented on July 18, 2024

I ran into a similar issue. I posted my solution at https://stackoverflow.com/questions/71520181/got-500-request-timed-out-for-kafka-connect-rest-api-post-put-delete

For me, simply restarting Kafka Connect made the issue go away:

kubectl rollout restart deployment my-kafka-connect --namespace=my-kafka

So far, the timeout issue hasn't shown up again.
