
Terraform-nomad-trino


This module contains a Nomad job (./conf/nomad/trino.hcl) running the Trino SQL query engine.

Contents

  1. Prerequisites
  2. Compatibility
  3. Requirements
    1. Required modules
    2. Required software
  4. Usage
    1. Connect to the services (proxies)
    2. Verifying setup
    3. Providers
    4. Intentions
  5. Example usage
  6. Inputs
  7. Outputs
  8. Secrets & credentials
  9. Contributors
  10. License
  11. References

Prerequisites

Please follow this section in the original template.

Compatibility

| Software | OSS Version | Enterprise Version |
|-----------|-------------------|--------------------|
| Terraform | 0.13.1 or newer | |
| Consul | 1.8.3 or newer | 1.8.3 or newer |
| Vault | 1.5.2.1 or newer | 1.5.2.1 or newer |
| Nomad | 0.12.3 or newer | 0.12.3 or newer |

Requirements

Required modules

| Module | Version |
|--------------------------|-----------------|
| terraform-nomad-hive | 0.3.0 or newer |
| terraform-nomad-minio | 0.3.0 or newer |
| terraform-nomad-postgres | 0.3.0 or newer |

Required software

All software is provided and run with Docker. See the Makefile for inspiration.

If you are using another system such as macOS, you may need to install some of the tools locally in certain sections, such as the Consul binary and the Trino CLI (see the notes in the sections below).

Usage

The following command will run the example in example/trino_cluster:

make up

To run the example in example/trino_standalone instead, use:

make up-standalone

For more information, check out the documentation in the trino_cluster README.

Connect to the services (proxies)

Since the services in this module use the sidecar_service stanza, you need to connect to them using a Consul Connect proxy. The proxy connections are pre-made and defined in the Makefile:

make proxy-hive     # to hivemetastore
make proxy-minio    # to minio
make proxy-postgres # to postgres
make proxy-trino    # to trino

You can now connect to Trino using the Trino CLI with the following command:

make trino-cli # connect to Trino CLI

⚠️ Note

If you are on a Mac, the proxies and make trino-cli may not work. Instead, you can install the Consul binary and run the commands from the Makefile manually (without docker run ..). Further, you need to install the Trino CLI on your local machine or inside the box. See also the required software section.
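For illustration, the manual commands might look like the sketch below. The service name trino-local, the upstream trino:8080 and the -token flag are assumptions modelled on the dockerized proxy commands, not verified contents of the Makefile:

# start a local Consul Connect proxy to the Trino service (names/ports assumed)
consul connect proxy -token master -service trino-local -upstream trino:8080

# in another terminal, connect with a locally installed Trino CLI
trino --server localhost:8080 --catalog hive --schema default --user trino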

Verifying setup

Option 1 [Hive-metastore and Nomad]

  1. Go to http://localhost:4646/ui/exec/hive-metastore
  2. Choose metastoreserver -> metastoreserver and press enter.
  3. Connect using the beeline CLI:
# from metastore (loopback)
beeline -u jdbc:hive2://
  4. You can now query existing tables with the beeline CLI:
SHOW DATABASES;
SHOW TABLES IN <database-name>;
DROP DATABASE <database-name>;
SELECT * FROM <table_name>;

# examples
SHOW TABLES;
SELECT * FROM iris;
SELECT * FROM tweets;

Option 2 [Trino and Nomad]

⚠️ Only works with the trino_standalone example.

  1. Go to http://localhost:4646/ui/exec/trino
  2. Choose standalone -> server and press enter.
  3. Connect using the Trino CLI:
trino
  4. You can now query existing tables with the Trino CLI:
SHOW CATALOGS [ LIKE pattern ]
SHOW SCHEMAS [ FROM catalog ] [ LIKE pattern ]
SHOW TABLES [ FROM schema ] [ LIKE pattern ]

# examples
SHOW CATALOGS;
SHOW SCHEMAS IN hive;
SHOW TABLES IN hive.default;
SELECT * FROM hive.default.iris;

Option 3 [local Trino-cli]

ℹ️ Check required software section first.

The command below runs two Docker containers with the --network=host flag, which only works natively on Linux. An important note is that on macOS, Docker runs in a virtual machine; in that case, you need to use the local consul binary to run the proxy and, in another terminal, the local trino CLI to connect.

In a terminal run a proxy and Trino-cli session:

make trino-cli

You can now query tables (3 tables should be available):

show tables;
select * from <table>;

To debug or continue developing, you can use the Trino CLI locally. Some useful commands:

# manual table creation for different file types
trino --server localhost:8080 --catalog hive --schema default --user trino --file ./example/resources/query/csv_create_table.sql
trino --server localhost:8080 --catalog hive --schema default --user trino --file ./example/resources/query/json_create_table.sql
trino --server localhost:8080 --catalog hive --schema default --user trino --file ./example/resources/query/flattenedjson_json.sql
trino --server localhost:8080 --catalog hive --schema default --user trino --file ./example/resources/query/avro_tweets_create_table.sql
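For reference, such a create-table file typically contains Hive-connector DDL. The sketch below is illustrative only; the table name, columns and bucket path are assumptions, not the actual contents of csv_create_table.sql (note that the Trino Hive connector requires all columns of a CSV table to be VARCHAR):

-- hypothetical CSV-backed external table in the hive catalog
CREATE TABLE IF NOT EXISTS hive.default.iris (
    sepal_length VARCHAR,
    sepal_width  VARCHAR,
    petal_length VARCHAR,
    petal_width  VARCHAR,
    species      VARCHAR
)
WITH (
    format            = 'CSV',
    external_location = 's3a://hive/data/csv/'
);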

Providers

This module uses the following providers:

Intentions

The following intentions are required. In the examples, intentions are created in the Ansible playbook 01_create_intetion.yml:

| Intention between | Type |
|----------------------------------------|-------|
| trino-local => trino | allow |
| minio-local => minio | allow |
| trino => hive-metastore | allow |
| trino-sidecar-proxy => hive-metastore | allow |
| trino-sidecar-proxy => minio | allow |

⚠️ Note that these intentions need to be created if you are using this module from another module.
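If you manage intentions outside the provided playbook, they can also be created with the Consul CLI. A minimal sketch, assuming your ACL token permits it:

# create allow intentions matching the table above
consul intention create trino hive-metastore
consul intention create trino-sidecar-proxy minio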

Example usage

The following code is an example of the Trino module in cluster mode, adapted from example/trino_standalone. For detailed information, check the example/trino_cluster or example/trino_standalone directory.
Note: the Postgres service used in this example is shared between Hive and Trino.

module "trino" {
  source = "github.com/fredrikhgrelland/terraform-nomad-trino.git?ref=0.3.0"

  depends_on = [
    module.postgres,
    module.minio,
    module.hive
  ]

  # nomad
  nomad_job_name    = "trino"
  nomad_datacenters = ["dc1"]
  nomad_namespace   = "default"

  # Vault provided credentials
  vault_secret = {
    use_vault_provider         = true
    vault_kv_policy_name       = "kv-secret"
    vault_kv_path              = "secret/data/dev/trino"
    vault_kv_field_secret_name = "cluster_shared_secret"
  }

  service_name     = "trino"
  mode             = "cluster"
  workers          = 1
  consul_http_addr = "http://10.0.3.10:8500"
  debug            = true
  use_canary       = true
  hive_config_properties = [
    "hive.allow-drop-table=true",
    "hive.allow-rename-table=true",
    "hive.allow-add-column=true",
    "hive.allow-drop-column=true",
    "hive.allow-rename-column=true",
    "hive.compression-codec=ZSTD"
  ]

  # other
  hivemetastore_service = {
    service_name = module.hive.service_name
    port         = module.hive.port
  }

  minio_service = {
    service_name = module.minio.minio_service_name
    port         = module.minio.minio_port
    access_key   = ""
    secret_key   = ""
  }

  # Vault provided credentials
  minio_vault_secret = {
    use_vault_provider         = true
    vault_kv_policy_name       = "kv-secret"
    vault_kv_path              = "secret/data/dev/minio"
    vault_kv_field_access_name = "access_key"
    vault_kv_field_secret_name = "secret_key"
  }

  postgres_service = {
    service_name  = module.postgres.service_name
    port          = module.postgres.port
    username      = module.postgres.username
    password      = module.postgres.password
    database_name = module.postgres.database_name
  }

  postgres_vault_secret = {
    use_vault_provider      = false
    vault_kv_policy_name    = ""
    vault_kv_path           = ""
    vault_kv_field_username = ""
    vault_kv_field_password = ""
  }
}

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| nomad_provider_address | Nomad provider address | string | "http://127.0.0.1:4646" | yes |
| nomad_datacenters | Nomad datacenters | list(string) | ["dc1"] | yes |
| nomad_namespace | [Enterprise] Nomad namespace | string | "default" | yes |
| nomad_job_name | Nomad job name | string | "trino" | yes |
| mode | Switch for Nomad jobs to use cluster or standalone deployment | string | "standalone" | no |
| shared_secret_user | Shared secret provided by the user (length must be >= 12) | string | "asdasdsadafdsa" | no |
| vault_secret | Set of properties for fetching the shared cluster secret from Vault | object(bool, string, string, string) | { use_vault_provider = true, vault_kv_policy_name = "kv-secret", vault_kv_path = "secret/data/dev/trino", vault_kv_field_secret_name = "cluster_shared_secret" } | no |
| service_name | Trino service name | string | "trino" | yes |
| resource | Resource allocation for Trino nodes (CPU & memory) | object(number, number) | { cpu = 500, memory = 1024 } | no |
| resource_proxy | Resource allocation for the proxy (CPU & memory) | object(number, number) | { cpu = 200, memory = 128 } | no |
| port | Trino HTTP port | number | 8080 | yes |
| docker_image | Trino docker image | string | "trinodb/trino:354" | yes |
| local_docker_image | Switch for Nomad jobs to use artifact for image lookup | bool | false | no |
| container_environment_variables | Trino environment variables | list(string) | [""] | no |
| hive_config_properties | Custom Hive configuration properties | list(string) | [""] | no |
| workers | cluster: number of Nomad worker nodes | number | 1 | no |
| coordinator | Include a coordinator in addition to the workers; set this to false when extending an existing cluster | bool | true | no |
| use_canary | Use canary deployment for Trino | bool | false | no |
| consul_connect_plugin | Deploy the Consul connect plugin for Trino | bool | true | no |
| consul_connect_plugin_version | Version of the Consul connect plugin for Trino (on Maven central); source: https://github.com/gugalnikov/trino-consul-connect | string | "2.2.0" | no |
| consul_connect_plugin_artifact_source | Artifact URI source | string | "https://oss.sonatype.org/service/local/repositories/releases/content/io/github/gugalnikov/trino-consul-connect" | no |
| debug | Turn on debug logging in Trino nodes | bool | false | no |
| hivemetastore.service_name | Hive metastore service name | string | "hive-metastore" | yes |
| hivemetastore.port | Hive metastore port | number | 9083 | yes |
| minio_service | Minio data object containing service_name, port, access_key and secret_key | object(string, number, string, string) | - | no |
| minio_vault_secret | Minio data object with Vault-related information for fetching credentials | object(bool, string, string, string, string) | { use_vault_provider = false, vault_kv_policy_name = "kv-secret", vault_kv_path = "secret/data/dev/trino", vault_kv_field_access_name = "access_key", vault_kv_field_secret_name = "secret_key" } | no |
| postgres_service | Postgres data object containing service_name, port, username, password and database_name | object(string, number, string, string, string) | - | no |
| postgres_vault_secret | Set of properties for fetching Postgres secrets from Vault | object(bool, string, string, string, string) | { use_vault_provider = false, vault_kv_policy_name = "kv-secret", vault_kv_path = "secret/data/dev/trino", vault_kv_field_username = "username", vault_kv_field_password = "username" } | no |

Outputs

| Name | Description | Type |
|---------------------|---------------------|--------|
| trino_service_name | Trino service name | string |
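A consuming root module can reference this output in the usual Terraform way; a minimal sketch:

# re-export the Trino service name from a root module (illustrative)
output "trino_service_name" {
  value = module.trino.trino_service_name
}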

Secrets & credentials

When using mode = "cluster", you can set your secrets in two ways: either manually, or by uploading them to Vault.

Set credentials manually

To set the credentials manually, you first need to tell the module not to fetch credentials from Vault. To do that, set vault_secret.use_vault_provider to false (see the example below). The module will then use the variable shared_secret_user to set the Trino credentials, which defaults to defaulttrinosecret if not set by the user. Below is an example of how to disable the use of Vault credentials and set your own credentials.

module "trino" {
...
  vault_secret = {
    use_vault_provider         = false
    vault_kv_policy_name       = ""
    vault_kv_path              = ""
    vault_kv_field_secret_name = ""
  }
  shared_secret_user = "my-secret-key" # default 'defaulttrinosecret'
}

Set credentials using Vault secrets

By default, use_vault_provider is set to true. However, when testing using the box (e.g. make dev), the Trino secret is randomly generated and put in secret/dev/trino inside Vault by the 01_generate_secrets_vault.yml playbook. This is an independent process and runs regardless of whether vault_secret.use_vault_provider is true or false.
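To inspect the generated secret, you can read it back with the Vault CLI (assuming you are authenticated against the box's Vault and the KV v2 engine is mounted at secret/):

# read the generated Trino secret from the KV store
vault kv get secret/dev/trino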

If you want to use the automatically generated credentials in the box, you can do so by changing the vault_secret object as seen below:

module "trino" {
...
  vault_secret = {
    use_vault_provider         = true
    vault_kv_policy_name       = "kv-secret"
    vault_kv_path              = "secret/data/dev/trino"
    vault_kv_field_secret_name = "cluster_shared_secret"
  }
}

If you want to change the secret's path and keys/values in Vault to your own configuration, you need to change the variables in the vault_secret object. Say you have put your secret in secret/services/trino/users and changed the key to my_trino_secret_name. You must have a Vault policy named kv-users-secret with at least read access to the path secret/services/trino/users. Then you need the following configuration:

module "trino" {
...
  vault_secret = {
    use_vault_provider         = true
    vault_kv_policy_name       = "kv-users-secret"
    vault_kv_path              = "secret/data/services/trino/users"
    vault_kv_field_secret_name = "my_trino_secret_name"
  }
}

Contributors

claesgill, dangernil, fredrikhgrelland, hannemariavister, lmjelstad, neha-sinha2305, pdmthorsrud, zhenik

License

This work is licensed under the Apache 2 License. See LICENSE for full details.


References


terraform-nomad-trino's Issues

Consul-connect enabled presto cluster.

In order to form a functioning cluster of presto nodes in a consul-connect service mesh, we need presto to resolve inside of the cluster. We cannot rely on service discovery inside the cluster, as presto announces and resolves its workers with the discovery server built into airlift.io.
In order to connect-enable presto, the entire URI must match "inside and outside" of the containers connected by consul connect. For this to work in nomad, and to resolve hive metastore and minio through normal sidecars, we will use a combination of consul connect native designation, a certificate-handler sidecar, and updating /etc/hosts by noop templating of the service catalog.
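As a sketch, the /etc/hosts templating could be a Nomad template stanza rendering the Consul service catalog with change_mode = "noop"; the service name and destination below are illustrative assumptions:

template {
  destination = "local/hosts"
  change_mode = "noop"
  data        = <<EOF
{{- range service "trino" }}
{{ .Address }} trino
{{- end }}
EOF
}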

There will be a draft PR shortly for all of this. We keep the option of a standalone container without all the trickery, as well as a fully fledged cluster job.

Add support for vault-provided credentials (and example)

What is the issue?

Add support for fetching credentials for dependent modules from Vault and rendering them directly into the Nomad job.

Suggestion(s)/solution(s) [Optional]

Follow hive pr Skatteetaten/terraform-nomad-hive#53

Optional

Creds with vault

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

enable pki backend fails, certificate error

Current behaviour

Error message:

TASK [service_bootstrap : vault - post/pki - enable PKI backend] ***************
fatal: [default]: FAILED! => {
    "changed": false
}

MSG:

Failed to initialize Terraform modules:

Error: Failed to install provider

Error while installing hashicorp/vault v2.15.0: could not query provider
registry for registry.terraform.io/hashicorp/vault: failed to retrieve
authentication checksums for provider: the request failed after 2 attempts,
please try again later: Get
"https://releases.hashicorp.com/terraform-provider-vault/2.15.0/terraform-provider-vault_2.15.0_SHA256SUMS":
x509: certificate signed by unknown authority



PLAY RECAP *********************************************************************
default                    : ok=23   changed=5    unreachable=0    failed=1    skipped=6    rescued=0    ignored=0

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
make: *** [up] Error 1

Expected behaviour

Successful run

How to reproduce?

make up from root

Suggestion(s)/solution(s) [Optional]

No idea just yet

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

`make up-standalone` does not work

Current behavior

Fails to resolve variables

TASK [Terraform presto standalone] *********************************************
fatal: [default]: FAILED! => {
    "changed": false
}

MSG:

Failed to validate Terraform configuration files:

Error: Unsupported argument

  on main.tf line 37, in module "presto":
  37:   shared_secret_provider = local.presto.shared_secret_provider

An argument named "shared_secret_provider" is not expected here.


Error: Unsupported argument

  on main.tf line 39, in module "presto":
  39:   shared_secret_vault = {

An argument named "shared_secret_vault" is not expected here.



PLAY RECAP *********************************************************************
default                    : ok=9    changed=1    unreachable=0    failed=1    skipped=12   rescued=0    ignored=0   

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
make: *** [up-standalone] Error 1

Expected behaviour

Up and running a standalone example

How to reproduce?

make clean
make up-standalone

Suggestion(s)/solution(s) [Optional]

Set up proper variables

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Add CPU to resources

Feature description

Add CPU as a variable you can set in the module

Why is it needed?

More flexibility and control for the user

Suggestion(s)/solution(s) [Optional]

add to variables.tf
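If this lands as part of the resource object that the Inputs table above describes, the variables.tf entry might look like the following sketch (names and defaults taken from that table):

variable "resource" {
  description = "Resource allocation for Trino nodes (CPU & memory)"
  type = object({
    cpu    = number
    memory = number
  })
  default = {
    cpu    = 500
    memory = 1024
  }
}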

Definition of done

User can set CPU from input variable to module

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Add auto scaling to nomad

Using the nomad autoscaler, we could implement autoscaling.
Remember, we cannot kill nodes, only add them.

The APM could be prometheus with a JMX plugin scraping the presto JMX emitter.

Remove locals block/make examples more readable

Feature description

The locals block in our examples is a little messy, and there is quite a bit of excess code.

Why is it needed?

Better readability

Suggestion(s)/solution(s) [Optional]

Remove the locals block and write the variables directly.

Definition of done

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

A slash too many in a template path

Current behaviour

We have a slash at the start of a path for a template in presto_standalone.hcl and presto.hcl
https://github.com/fredrikhgrelland/terraform-nomad-presto/blob/11ef41ef3ca2472d7f32226a2c7425f350554d71/conf/nomad/presto_standalone.hcl#L153
https://github.com/fredrikhgrelland/terraform-nomad-presto/blob/11ef41ef3ca2472d7f32226a2c7425f350554d71/conf/nomad/presto.hcl#L204

Expected behaviour

Shouldn't be there, and will, according to Fredrik, cause an error in Foundation 2.

How to reproduce?

Suggestion(s)/solution(s) [Optional]

Remove the first slash

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

`make clean` does not delete state directories from presto-cluster folder

Current behaviour

make clean does not delete what is under /examples/presto-cluster. I had to delete it manually by running rm -rf .terraform/ terraform.tfstate

Expected behaviour

.terraform/ and terraform.tfstate under /examples/presto-cluster should be deleted when running make clean

How to reproduce?

Suggestion(s)/solution(s) [Optional]

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Warnings running `make up`

Current behaviour

Throws warnings when running targets from the Makefile

~/projects/terraform-nomad-presto(master) » make clean                                                                                                      m88614@SKE-DC6KF-MD6T
Makefile:77: warning: overriding commands for target `status'
Makefile:60: warning: ignoring old commands for target `status'


Expected behaviour

No warnings

How to reproduce?

Run make clean or make up

Suggestion(s)/solution(s) [Optional]

Will look at why it happens tomorrow. Making this now so that I don't forget

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Additional information in README.md

What is the issue?

Information is missing about how to create a local proxy to the presto instance in order to make http://localhost:8080 available (the presto GUI and running queries from IntelliJ).

Suggestion(s)/solution(s) [Optional]

I suggest including information about how to create a local proxy to the presto instance, which is: make proxy-presto. The command already exists in the Makefile.

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Add canary-switch

Canaries will not work with the limited resources on the vagrant box. Create a switch to turn them off and use it in the example.

Add more flattened view examples for JSON data

Feature description

Originally posted zhenik#1

Why is it needed?

Request from @k86021, for a further workshop

Suggestion(s)/solution(s) [Optional]

Definition of done

Automated SQL request to create VIEW

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Add intentions documentation

What is the issue?

No documentation about intentions.

Suggestion(s)/solution(s) [Optional]

Add intentions documentation in README.md

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Turn `ON` all optional features in example

What is the issue?

Lacking optional features in tests:

  • vault-provided credentials

Suggestion(s)/solution(s) [Optional]

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Increase memory allocation for consul sidecar proxy

Feature description

Increase memory allocation for presto sidecar proxy

Why is it needed?

The proxy crashes due to being out of memory

Suggestion(s)/solution(s) [Optional]

Increase memory allocation

Definition of done

Sidecar proxy services are running with more memory

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Update documentation, distributed (cluster) mode deployment

What is the issue?

Ref: https://github.com/fredrikhgrelland/terraform-nomad-presto#option-2-presto-and-nomad

Verifying setup -> Option 2 does not work when deploying presto in distributed mode.
It fails at the step where the user needs to execute the command

show catalogs;

after running the presto command.

Suggestion(s)/solution(s) [Optional]

Check the configuration of coordinator.

  • It might be the wrong port 8080, due to internal proxy communication
  • http disabled, only https
node.id=3b1c9ca0-2096-9018-22c1-017d58008f1c
node.environment=presto
node.internal-address=presto


coordinator=true
node-scheduler.include-coordinator=false
discovery-server.enabled=true
discovery.uri=https://127.0.0.1:25056

dynamic.http-client.https.hostname-verification=false
failure-detector.http-client.https.hostname-verification=false
memoryManager.http-client.https.hostname-verification=false
scheduler.http-client.https.hostname-verification=false
workerInfo.http-client.https.hostname-verification=false

discovery.http-client.https.hostname-verification=false
node-manager.http-client.https.hostname-verification=false
exchange.http-client.https.hostname-verification=false


http-server.http.enabled=false
http-server.authentication.type=CERTIFICATE
# Work behind proxy
http-server.authentication.allow-insecure-over-http=true
http-server.process-forwarded=true
http-server.https.enabled=true
http-server.https.port=25056
http-server.https.keystore.path=/local/presto.pem
http-server.https.truststore.path=/local/roots.pem

# This is the same jks, but it will not do the consul connect authorization in intra cluster communication
internal-communication.https.required=true

internal-communication.shared-secret= "asdasdsadafdsa"

internal-communication.https.keystore.path=/local/presto.pem
internal-communication.https.truststore.path=/local/roots.pem

query.client.timeout=5m
query.min-expire-age=30m
query.max-memory=76MB

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Remove certificate_handler as it is not needed.

After digging into the source code of presto, I realized that presto can handle concatenated PEM-formatted files as well as JKS. Moving the template stanzas into the server task to simplify.

Move the generation of vault credentials from ansible to terraform

Feature description

I suggest we move the whole generation of secrets used in the module into the module itself, and remove it from the ansible scripts. The ansible scripts are not part of the module, meaning anyone using our module would need to create secrets in their vault before using it. We could still keep all the functionality we have now, of being able to use user-provided secrets as well as setting a custom path to the vault secrets, but also bundle creation and usage of secrets in vault with the module itself.

Why is it needed?

User experience

Suggestion(s)/solution(s) [Optional]

Take this part, and convert it to terraform code
https://github.com/fredrikhgrelland/terraform-nomad-presto/blob/6fbb7ae2e50cd6c06fab59526ef6a103872aaaae/dev/ansible/00_generate_secrets_vault.yml#L1-L10

Using the vault provider
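A sketch of what that could look like in Terraform, using the hashicorp/random and hashicorp/vault providers (resource names and the KV path are illustrative assumptions):

# generate a random shared secret and store it in Vault's KV store
resource "random_password" "trino_shared_secret" {
  length  = 32
  special = false
}

resource "vault_generic_secret" "trino" {
  path = "secret/dev/trino"
  data_json = jsonencode({
    cluster_shared_secret = random_password.trino_shared_secret.result
  })
}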

Definition of done

Ansible code to generate secrets is moved into the terraform code in the module itself
All existing features are kept

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Adding healthcheck to presto_cluster example

Feature description

A better healthcheck is requested for Presto in the presto_cluster example, similar to the one in presto_standalone.hcl#L44-L52.

Why is it needed?

For better coverage, and to make sure presto is healthy 🧑‍⚕️

  • hive-availability
  • minio-availability

Suggestion(s)/solution(s) [Optional]

Need to do something similar to the healthcheck in presto_standalone.hcl#L44-L52.

Think we need to create a proxy service that continuously checks the Presto service for us.
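Such a check could be sketched as a Nomad service check like the one below; the HTTP path is an assumption based on the Presto/Trino REST API, not the actual contents of presto_standalone.hcl:

# illustrative health check inside the job's service stanza
check {
  name     = "presto-http"
  type     = "http"
  path     = "/v1/info"
  interval = "10s"
  timeout  = "2s"
}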

Definition of done

When we have a similar feature to the one in presto_standalone.hcl#L44-L52.

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

make-command in option #3 in verifying setup gives an error

Current behaviour

I'm following the setup here: https://github.com/fredrikhgrelland/terraform-nomad-presto#option-3-local-presto-cli

Eivinds-MacBook-Pro:terraform-nomad-presto eivindberg$ sudo make presto-cli
Password:
Makefile:78: warning: overriding commands for target `status'
Makefile:61: warning: ignoring old commands for target `status'
make: *** No rule to make target `y', needed by `presto-cli'.  Stop.

Removing the :y flag in the Makefile gives the Presto CLI; however, I have no connection to Presto.

Eivinds-MacBook-Pro:terraform-nomad-presto eivindberg$ sudo make presto-cli
Makefile:78: warning: overriding commands for target `status'
Makefile:61: warning: ignoring old commands for target `status'
CID=$(docker run --rm -d --network host consul:1.8 connect proxy -token master -service presto-local -upstream presto:8080)
docker run --rm -it --network host prestosql/presto:341 presto --server localhost:8080 --http-proxy localhost:8080 --catalog hive --schema default --user presto --debug
presto:default> show catalogs;
Error running command: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:8080
java.io.UncheckedIOException: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:8080

Having no connection could be related to issue #46 Update documentation, distributed mode deployment.

The make presto-cli failure is a separate issue.

Expected behaviour

To enter the CLI and be able to run a query successfully by running make presto-cli and then a query (like SHOW CATALOGS;).

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Remove use of locals/find new way to define examples

Feature description

The locals block in our examples is a little messy, and there is quite a bit of excess code.

Why is it needed?

Better readability

Suggestion(s)/solution(s) [Optional]

Remove the locals block and write the variables directly.

Definition of done

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Presto reports warning if hive is down/not present

Current behaviour

(screenshot: Presto health checks report a warning while Hive is down)

Expected behavior

Presto should fail.

How to reproduce?

  1. make test in mode=standalone
  2. Go to nomad, stop the hive job manually
  3. Check the healthchecks in consul

Suggestion(s)/solution(s) [Optional]

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

Add `standalone` example

What is the issue?

Add standalone example for simplification

Suggestion(s)/solution(s) [Optional]

add new directory example/standalone or example/presto_standalone

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone

`make presto-cli` does not work

Current behaviour

show catalogs; throws an error

Expected behaviour

presto:default> show catalogs;
 Catalog 
---------
 hive    
 jmx     
 memory  
 system  
 tpcds   
 tpch    
(6 rows)

Query 20201026_132630_00069_hhwdp, FINISHED, 1 node
http://localhost:8080/ui/query.html?20201026_132630_00069_hhwdp
Splits: 19 total, 19 done (100.00%)
CPU Time: 0.0s total,     0 rows/s,     0B/s, 5% active
Per Node: 0.0 parallelism,     0 rows/s,     0B/s
Parallelism: 0.0
Peak Memory: 0B
0.47 [0 rows, 0B] [0 rows/s, 0B/s]

How to reproduce?

  1. make up
  2. make presto-cli
  3. show catalogs;

Log

 ~/src/github.com/zhenik/terraform-nomad-presto │ master *1 !1  make presto-cli                                                                                                                                     ✔ │ 11s │ 14:27:29 
Makefile:77: warning: overriding recipe for target 'status'
Makefile:60: warning: ignoring old recipe for target 'status'
CID=$(docker run --rm -d --network host consul:1.8 connect proxy -token master -service presto-local -upstream presto:8080)
docker run --rm -it --network host prestosql/presto:341 presto --server localhost:8080 --http-proxy localhost:8080 --catalog hive --schema default --user presto --debug
docker rm -f $CID
presto:default> show catalogs;
Error running command: java.net.SocketException: Connection reset
java.io.UncheckedIOException: java.net.SocketException: Connection reset
        at io.prestosql.client.JsonResponse.execute(JsonResponse.java:154)
        at io.prestosql.client.StatementClientV1.<init>(StatementClientV1.java:134)
        at io.prestosql.client.StatementClientFactory.newStatementClient(StatementClientFactory.java:24)
        at io.prestosql.cli.QueryRunner.startInternalQuery(QueryRunner.java:146)
        at io.prestosql.cli.QueryRunner.startQuery(QueryRunner.java:132)
        at io.prestosql.cli.Console.process(Console.java:347)
        at io.prestosql.cli.Console.runConsole(Console.java:273)
        at io.prestosql.cli.Console.run(Console.java:172)
        at io.prestosql.cli.Console.call(Console.java:101)
        at io.prestosql.cli.Console.call(Console.java:74)
        at picocli.CommandLine.executeUserObject(CommandLine.java:1933)
        at picocli.CommandLine.access$1100(CommandLine.java:145)
        at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2332)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2326)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2291)
        at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2159)
        at picocli.CommandLine.execute(CommandLine.java:2058)
        at io.prestosql.cli.Presto.main(Presto.java:32)
Caused by: java.net.SocketException: Connection reset
        at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
        at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
        at okio.Okio$2.read(Okio.java:139)
        at okio.AsyncTimeout$2.read(AsyncTimeout.java:237)
        at okio.RealBufferedSource.indexOf(RealBufferedSource.java:345)
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:217)
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:211)
        at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:187)
        at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at io.prestosql.client.OkHttpUtil.lambda$interceptRequest$3(OkHttpUtil.java:106)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
        at okhttp3.RealCall.execute(RealCall.java:77)
        at io.prestosql.client.JsonResponse.execute(JsonResponse.java:131)
        ... 17 more
presto:default> 

Suggestion(s)/solution(s) [Optional]

Fix proxy and cli commands

Checklist (after created issue)

  • Added label(s)
  • Added to project
  • Added to milestone
