
Create inlets servers on the top cloud platforms

Home Page: https://docs.inlets.dev/

License: MIT License

Go 94.49% Makefile 1.33% Shell 4.18%
inlets inlets-pro inlets-operator digitalocean scaleway packet hacktoberfest

inletsctl's Introduction

inletsctl - create inlets servers on the top cloud platforms

License: MIT Documentation Downloads Arm CI sponsored by Actuated

inletsctl automates the task of creating an exit-server (tunnel server) on public cloud infrastructure. The create command provisions a cheap cloud VM with a public IP and pre-installs inlets for you. You'll then get a connection string that you can use with the inlets client.

Conceptual diagram

Webhook example with Inlets OSS

Case study on receiving webhooks: https://blog.alexellis.io/webhooks-are-great-when-you-can-get-them/

Use-cases:

  • Set up L4 TCP and HTTPS tunnels for your local services using inlets-pro via inletsctl create
  • Create tunnels for use with Kubernetes clusters; create the tunnel once and use it whenever you need it
  • Port-forward services from your local Kubernetes cluster using inletsctl kfwd


Video demo

In the demo we:

  • Create a cloud host on DigitalOcean with a single command
  • Run a local Python HTTP server
  • Connect our inlets-pro client
  • Access the Python HTTP server via the DigitalOcean Public IP
  • Use the CLI to delete the host

asciicast

inletsctl is the quickest and easiest way to automate tunnels, whilst retaining complete control of your tunnel and data.

Features

  • Provision hosts quickly using cloud-init with inlets pre-installed - inletsctl create
  • Delete hosts by ID or IP address - inletsctl delete
  • Automate port-forwarding from Kubernetes clusters with inletsctl kfwd

How much will this cost?

The inletsctl create command will provision a cloud host with the provider and region of your choice and then start running inlets server. The host is configured with the standard VM image for Ubuntu or Debian Linux and inlets is installed via userdata/cloud-init.

The provision package contains defaults for OS images to use and for cloud host plans and sizing. You'll find all available options via inletsctl create --help

The cost for cloud hosts varies depending on a number of factors such as the region and the bandwidth used. A rough estimate is around 5 USD / month to host a VM on DigitalOcean, Civo, or Scaleway. The VM is required to provide your public IP. Some hosting providers, such as GCE and AWS, offer credits and a free tier.

See the pricing grid on the inlets-operator for a detailed breakdown.

inletsctl does not automatically delete your exit nodes (i.e. cloud hosts), so you'll need to do that from your provider's dashboard or via inletsctl delete when you are done.

Install inletsctl

# Install to local directory (and for Windows users)
curl -sLSf https://inletsctl.inlets.dev | sh

# Install directly to /usr/local/bin/
curl -sLSf https://inletsctl.inlets.dev | sudo sh

Windows users are encouraged to use git bash to install the inletsctl binary.

Looking for documentation?

To learn about the various features of inletsctl and how to configure each cloud provisioner, head over to the docs at https://docs.inlets.dev/

Contributing & getting help

Before seeking support, make sure you have read the instructions correctly, and try to run through them a second or third time to see if you have missed anything.

Then, try the troubleshooting guide in the official docs (link above).

Why do we need this tool?

Why is inletsctl a separate binary? This tool is shipped separately so that the core tunnel binary does not become bloated. The cloud SDKs for Go, such as the AWS EC2 SDK, are very heavyweight and result in a binary of over 30 MB, versus the small and nimble inlets-pro binaries.

Provisioners

inletsctl can provision exit-servers to the following providers: DigitalOcean, Scaleway, Civo.com, Google Cloud, Equinix Metal, AWS EC2, Azure, Linode, Hetzner and Vultr.

An open-source Go package named provision can be extended for each new provider. This code can be used outside of inletsctl by other projects wishing to create hosts and to run some scripts upon start-up via userdata.

type Provisioner interface {
	Provision(BasicHost) (*ProvisionedHost, error)
	Status(id string) (*ProvisionedHost, error)
	Delete(HostDeleteRequest) error
}
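For illustration, a new backend would only need to satisfy that interface. Below is a minimal sketch of a stub provider; ExampleProvisioner is hypothetical, and the field names on ProvisionedHost and HostDeleteRequest are assumed for the example rather than taken from the real package.

package provision

import "fmt"

// ExampleProvisioner is a hypothetical backend used only to show how the
// Provisioner interface is satisfied; it is not a real provider.
type ExampleProvisioner struct {
	accessToken string
}

func NewExampleProvisioner(accessToken string) (*ExampleProvisioner, error) {
	if accessToken == "" {
		return nil, fmt.Errorf("an access token is required")
	}
	return &ExampleProvisioner{accessToken: accessToken}, nil
}

func (p *ExampleProvisioner) Provision(host BasicHost) (*ProvisionedHost, error) {
	// Call the provider's API here, passing host.UserData so that inlets is
	// installed on first boot via cloud-init or a start-up script.
	return &ProvisionedHost{ID: "example-id", Status: "creating"}, nil
}

func (p *ExampleProvisioner) Status(id string) (*ProvisionedHost, error) {
	// Poll the provider's API until the host reports an active status.
	return &ProvisionedHost{ID: id, Status: "active"}, nil
}

func (p *ExampleProvisioner) Delete(req HostDeleteRequest) error {
	// Remove the host by ID (or by IP, if the request carries one).
	return nil
}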

License

inletsctl is distributed under the MIT license. inlets-pro, which inletsctl uses, is licensed under the inlets-pro End User License Agreement (EULA).

A valid static inlets license or a Gumroad subscription is required to create tunnel servers with inletsctl.

inletsctl's People

Contributors

adamjohnson01, alexandrevilain, alexellis, cpanato, dependabot[bot], johannestegner, jsiebens, kadern0, mbacchi, pcaderno, rgee0, starbops, thesurlydev, utsavanand2, waterdrips, welteki, wingkwong, zechen0


inletsctl's Issues

Fix issue with the opening of TCP ports for inlets-pro on GCE

Expected Behaviour

The GCE operator should open up ports for inlets-pro
For users switching between inlets OSS and inlets-pro, the firewall rules should be
updated automatically.

Current Behaviour

If a firewall rule named inlets for inlets OSS already exists, it doesn't open up ports for inlets-pro

Possible Solution

PR #45

Steps to Reproduce (for bugs)

  1. Run inletsctl with inlets OSS
  2. Run inletsctl again with inlets-pro

Add a --rm flag to "inletsctl create" for a temporary tunnel

We should add a --rm flag to "inletsctl create" for when users need a temporary tunnel.

The use-case would be that I want to share my blog i.e. 127.0.0.1:4000 with a friend or colleague, but I don't want to be billed for the DigitalOcean VM beyond my 1-2 hours of uptime.

I'd run a command similar to docker run --rm which removes the tunnel immediately after I hit control + c or exit.

We'd have something like this, but it'd need to work with inlets-pro too:

OSS:

inletsctl create --rm --upstream http://127.0.0.1:4000

Your IP is: X
Starting "inlets client" now, hit control+c to delete the tunnel.

Pro:

inletsctl create --rm --license $LICENSE --remote-tcp=http://127.0.0.1
Your IP is: X
Starting "inlets-pro client" now, hit control+c to delete the tunnel.

creating exit server with letsencrypt fails to create & enable systemd service

Expected Behaviour

The exit server creates a systemd service & enables it for inlets-pro with the LetsEncrypt configuration

Current Behaviour

No systemd service is created and inlets-pro has to be started manually

root@exciting-montalcini6:/var/log# sudo systemctl status inlets-pro
Unit inlets-pro.service could not be found.
root@exciting-montalcini6:/var/log# sudo journalctl -u inlets-pro
-- Logs begin at Mon 2021-07-26 11:23:08 UTC, end at Mon 2021-07-26 12:17:05 UTC. --
-- No entries --

Possible Solution

@alexellis suggested it was an issue with the https service not being included:

func MakeHTTPSUserdata(authToken, version, letsEncryptEmail, letsEncryptIssuer string, domains []string) string {
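For orientation, here is a simplified sketch of what a userdata generator like this needs to produce; it is not the real template. The point of this issue is that the script must both write the inlets-pro systemd unit and enable/start it, otherwise systemctl cannot find the service on the exit server. The download URL pattern and ExecStart flags are abbreviated assumptions (the Let's Encrypt email/issuer flags are omitted), and fmt is assumed to be imported.

func exampleHTTPSUserdata(authToken, version, letsEncryptDomain string) string {
	return fmt.Sprintf(`#!/bin/bash
curl -sSLf https://github.com/inlets/inlets-pro/releases/download/%s/inlets-pro \
  -o /usr/local/bin/inlets-pro && chmod +x /usr/local/bin/inlets-pro

echo "AUTHTOKEN=%s" > /etc/default/inlets-pro

cat > /etc/systemd/system/inlets-pro.service <<'EOF'
[Unit]
Description=inlets-pro HTTPS Server
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=2
EnvironmentFile=/etc/default/inlets-pro
ExecStart=/usr/local/bin/inlets-pro http server --token="${AUTHTOKEN}" --letsencrypt-domain=%s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable inlets-pro --now
`, version, authToken, letsEncryptDomain)
}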

Context

Following this guide: https://docs.inlets.dev/#/tools/inletsctl?id=create-a-https-tunnel-with-a-custom-domain

Your Environment

inlets-pro: Version: 0.8.6 - 18d8e239f940d8694532e332867c25f823439cc6
inletsctl: Version: 0.8.10
Git Commit: ed0bcc767eb9357bef75062dee746684b97eb0bf
  • Cloud provider and region being used: DigitalOcean and Vultr

  • Operating System and version (e.g. Linux, Windows, MacOS): uname -a

Darwin bens-macbook 20.4.0 Darwin Kernel Version 20.4.0: Fri Mar  5 01:14:14 PST 2021; root:xnu-7195.101.1~3/RELEASE_X86_64 x86_64
  • Link to your project or a code example to reproduce issue:
inletsctl create --provider digitalocean \
--access-token **** \
--region ewr \
--letsencrypt-domain dev.bpmct.net \
--letsencrypt-email **** \
--letsencrypt-issuer prod

Enable Packet CLI flags

Expected Behaviour

We should be able to create/delete with the Packet.com provisioner

Current Behaviour

The code is present, but it's not hooked up to any CLI flags yet

Possible Solution

Copy the approach for DO / Scaleway. Have fun

Update these two methods:

func createHost(provider, name, region, userData string) (*provision.BasicHost, error) {

func getProvisioner(provider, accessToken, secretKey, organisationID, region string) (provision.Provisioner, error) {

Then check the CLI flags are all being brought in as needed:

func init() {

Perform the above for create.go and delete.go

The vendored package "provision" already contains code for Packet - https://github.com/inlets/inletsctl/blob/3b53f1dca10c1597d4c14a50bd305b6306757826/vendor/github.com/inlets/inlets-operator/pkg/provision/packet.go

Broken firewall rules for GCE

I think there may be a conflict between the naming for firewall rules for inlets vs inlets-pro, which could lead to issues for users.

First of all I set up a host with OSS, and a firewallRule was created, i.e. for port 8080 I assume

alex@alexx:~/dev/sponsors-functions$ inletsctl create --provider gce --zone us-central1-a --project-id $PROJECTID --access-token-file key.json
Using provider: gce
Requesting host: priceless-blackburn2 in us-central1-a, from gce
2020/02/09 09:20:52 inlets firewallRule does not exist
2020/02/09 09:20:52 Creating inlets firewallRule opening port: 8080
Host: priceless-blackburn2|us-central1-a|alexellis, status: active
[1/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 
[2/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 
[3/500] Host: priceless-blackburn2|us-central1-a|alexellis, status: 

Then I set up a node for inlets-pro, which said the rule already existed:

alex@alexx:~/dev/sponsors-functions$ inletsctl create --provider gce --zone us-central1-a --project-id $PROJECTID --access-token-file key.json --remote-tcp localhost
Using provider: gce
Requesting host: objective-khorana7 in us-central1-a, from gce
2020/02/09 09:21:20 inlets firewallRule exists
Host: objective-khorana7|us-central1-a|alexellis, status: active
[1/500] Host: objective-khorana7|us-central1-a|alexellis, status: 
[2/500] Host: objective-khorana7|us-central1-a|alexellis, status: active

The problem here is that inlets-pro and inlets OSS use different ports, so they need separate firewall rules, perhaps with different names too.

Pro: 8123 (auto-TLS control port), plus any port the client can open

OSS: 8080 (control port), 80 (data port)

This probably affects inlets-operator too.


I actually have no idea how this works for OSS, since port 80 should be open and is not.

I forwarded out OpenFaaS and couldn't access it on the public IP, but port 80 was open.

AWS Security Groups not cleaned up in the case of failure

Expected Behaviour

If there's a failure creating the ec2 instance, then all associated resources (in this case SG) should be cleaned up.

Current Behaviour

The ec2 instance creation fails and leaves the SG created in place.

Possible Solution

see expected behavior

Steps to Reproduce (for bugs)

./inletsctl create -f ~/Downloads/access-key --secret-key-file ~/Downloads/secret-access-key -r us-west-2 -z us-west-2c -p ec2

Context

Your Environment

  • inlets version inlets --version

  • Docker/Kubernetes version docker version / kubectl version:

  • Operating System and version (e.g. Linux, Windows, MacOS):

  • Link to your project or a code example to reproduce issue:

Create command results in 400 if default VPC is unavailable for ec2 provider

A failure occurs when attempting to create an exit node via inletsctl create command for ec2 provider.

Expected Behaviour

An ec2 instance and security group are successfully created.

Current Behaviour

A 400 status is returned with the following message when a default VPC is unavailable:

InvalidParameter: The AssociatePublicIpAddress parameter is only supported for VPC launches.

Possible Solution

Explicitly require vpc-id and subnet-id when provider is ec2. This fixes the above issue and allows users to have full control of where the exit node will be placed. I will be following up with proposed fix via PR shortly.
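The proposed check could look something like this (a sketch only; the flag and variable names are assumed, and fmt is expected to be imported):

func validateEC2Networking(provider, vpcID, subnetID string) error {
	// Without a default VPC, AWS rejects AssociatePublicIpAddress, so require
	// the caller to state where the exit node should be placed.
	if provider == "ec2" && (vpcID == "" || subnetID == "") {
		return fmt.Errorf("--vpc-id and --subnet-id are required when --provider is ec2")
	}
	return nil
}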

Steps to Reproduce (for bugs)

Execute the following command for an AWS account/region with no default VPC:

./inletsctl create -f ~/Downloads/access-key --secret-key-file ~/Downloads/secret-access-key -p ec2 -r us-west-2

Context

Your Environment

  • inlets version inlets --version: 0.5.6 and master branch as of Aug 10.

  • Docker/Kubernetes version docker version / kubectl version:

  • Operating System and version (e.g. Linux, Windows, MacOS): Ubuntu 20.04

  • Link to your project or a code example to reproduce issue:

Default --remote-tcp to localhost

We can set localhost as the default for the --remote-tcp flag. This would help inlets-pro users who are forwarding traffic to the same host that is running the inlets-pro client.

Expected Behaviour

If the flag --remote-tcp is not set, we could default to localhost, which would forward traffic to the host running the inlets-pro client command.

Current Behaviour

You have to set it every time, even if you want localhost.

Possible Solution

If we do this, we need to think about how we enable the inlets-pro server initialisation rather than standard inlets; at the moment it checks whether --remote-tcp is set.
Possibly a --pro flag, which could be used if you want localhost? If you set --remote-tcp and didn't use --pro, it should still enable pro.
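In code, the behaviour being proposed might boil down to something like this (names are illustrative, not the actual implementation):

// resolveProMode enables inlets-pro when either --pro is passed or
// --remote-tcp is set, and defaults the TCP target to localhost if empty.
func resolveProMode(remoteTCPFlag string, proFlag bool) (remoteTCP string, pro bool) {
	pro = proFlag || remoteTCPFlag != ""
	remoteTCP = remoteTCPFlag
	if pro && remoteTCP == "" {
		remoteTCP = "localhost"
	}
	return remoteTCP, pro
}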

Write a state file with the machine provisioned, its ID and auth token

This feature will write a state file with the machine provisioned, its ID and auth token

Expected Behaviour

A file (probably YAML) should be written to disk after provisioning a host. The user can then use it to delete or reconnect to the host at a later date; it's especially useful if the auth token, ID or IP was lost.

Current Behaviour

In this scenario, I have to delete / re-create, or ssh into the exit server after doing a password reset.

Possible Solution

Write a file such as host-name-inletsctl.yaml with:

host:
  metadata:
    created_at: A
    provider: B
    region: C
  ip: Y
  id: Z
  auth_token: A
  pro: true/false

Initially, this will not be consumable by the CLI itself, but will act like a state file or reminder for the user.
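A sketch of how the state file could be written, assuming the gopkg.in/yaml.v2 package and struct fields that mirror the layout above (none of this exists in the codebase today):

package cmd

import (
	"fmt"
	"io/ioutil"
	"time"

	"gopkg.in/yaml.v2"
)

type hostState struct {
	Metadata struct {
		CreatedAt time.Time `yaml:"created_at"`
		Provider  string    `yaml:"provider"`
		Region    string    `yaml:"region"`
	} `yaml:"metadata"`
	IP        string `yaml:"ip"`
	ID        string `yaml:"id"`
	AuthToken string `yaml:"auth_token"`
	Pro       bool   `yaml:"pro"`
}

// writeStateFile dumps the provisioned host's details to <name>-inletsctl.yaml
// so the user can find the IP, ID and auth token again later.
func writeStateFile(name string, state hostState) error {
	out, err := yaml.Marshal(map[string]hostState{"host": state})
	if err != nil {
		return err
	}
	return ioutil.WriteFile(fmt.Sprintf("%s-inletsctl.yaml", name), out, 0600)
}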

inletsctl create not handling --tcp properly

The inlets client throws an error when provisioning an inlets server on Azure, because the --tcp flag (which replaced the earlier --pro flag) is not being handled correctly by inletsctl create. See the related piece of code: https://github.com/inlets/inletsctl/blob/master/cmd/create.go#L195-L197

Error: unable to download CA from remote inlets server for auto-tls: Get "https://40.76.8.88:8123/.well-known/ca.crt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

inletsctl create

➜  inletsctl create --provider=azure --subscription-id=ffffffff-ffff-ffff-ffff-fffffffffff \
    --region=eastus --access-token-file=./client_credentials.json \
    --tcp
...
...
[11/500] Host: inlets-zen-wilson6|deployment-e8ee78a4-eb51-45f2-93af-e5a7b520e67a, status: Running
[12/500] Host: inlets-zen-wilson6|deployment-e8ee78a4-eb51-45f2-93af-e5a7b520e67a, status: active
inlets PRO (0.8.1) exit-server summary:
  IP: 40.76.8.88
  Auth-token: tvsLYh0ZwxaO9xri18QlUBGmIgbUw2U3YrUDD30oSl5FZXJDq5BXPukyWDIsEeCU

inlets inlets-pro client

➜  inlets inlets-pro client --url "wss://40.76.8.88:8123/connect" \
  --token "tvsLYh0ZwxaO9xri18QlUBGmIgbUw2U3YrUDD30oSl5FZXJDq5BXPukyWDIsEeCU" \
  --license-file "$LICENSE" \
  --upstream $UPSTREAM \
  --ports $PORTS

2021/02/17 17:33:42 Starting TCP client. Version 0.8.0-dirty - 7d2137f283e67490d64ea68903f7d49b9c9463c3
2021/02/17 17:33:42 Licensed to: *******, expires: ******
2021/02/17 17:33:42 Upstream server: 192.168.101.100, for ports: 9090
Error: unable to download CA from remote inlets server for auto-tls: Get "https://40.76.8.88:8123/.well-known/ca.crt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Expected Behaviour

inlets client should connect to server successfully.

Current Behaviour

Throw error:

Error: unable to download CA from remote inlets server for auto-tls: Get "https://40.76.8.88:8123/.well-known/ca.crt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Possible Solution

This code should be updated: https://github.com/inlets/inletsctl/blob/master/cmd/create.go#L195-L197

Steps to Reproduce (for bugs)

  1. inletsctl create --provider=azure --subscription-id=ffffffff-ffff-ffff-ffff-fffffffffff --region=eastus --access-token-file=./client_credentials.json --tcp
  2. inlets inlets-pro client --url "wss://40.76.8.88:8123/connect" --token "tvsLYh0ZwxaO9xri18QlUBGmIgbUw2U3YrUDD30oSl5FZXJDq5BXPukyWDIsEeCU" --license-file "$LICENSE" --upstream $UPSTREAM --ports $PORTS

Context

Your Environment

  • inletsctl master branch

  • macOS

Generated output command for Delete results in error for gce provider

Expected Behaviour

Copy and paste the command from the generated output should work

Current Behaviour

Running the command as it is output results in the following error:

$ inletsctl delete --provider gce --id "hungry-shaw4|us-central1-a|burtonr"
Using provider: gce
failed to get 'inlets-token' value.: flag accessed but not defined: inlets-token

This was the output generated by the create command:

Inlets OSS exit-node summary:
  IP: XXX.XXX.XXX.XXX
  Auth-token: someReallyLongTokenString

Command:
  export UPSTREAM=http://127.0.0.1:8000
  inlets client --remote "ws://XXX.XXX.XXX.XXX:8080" \
	--token "someReallyLongTokenString" \
	--upstream $UPSTREAM

To Delete:
  inletsctl delete --provider gce --id "hungry-shaw4|us-central1-a|burtonr"

Possible Solution

Update the generated command, or update the process to provide the proper functionality.
I did do a repo search in GitHub for the text "inlets-token" and was only able to find it referenced in the create.go file. Not sure why or how it's ending up being referenced in the delete command.

Steps to Reproduce (for bugs)

  1. Run inletsctl create...
  2. Copy the command under the text "To Delete:"
  3. Paste the command into the terminal and execute

Context

Created an exit node on Google Cloud to test with. Now wanting to delete it, I cannot use the command provided, but instead need to log in to the GCP console

Your Environment

  • inlets version inlets --version

FYI: This should be updated to say inletsctl version

Version: 0.4.0
Git Commit: 26ec251

  • Docker/Kubernetes version docker version / kubectl version:
    N/A
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux
  • Link to your project or a code example to reproduce issue:
    N/A

[Feature] inletsctl should be able to install inlets or inlets-pro

inletsctl should be able to install inlets or inlets-pro

Expected Behaviour

Using the approach to fetch kubeseal in faas-cli, inletsctl should be able to grab inlets and inlets-pro in order to run the client in commands like create and kfwd

Current Behaviour

Done manually as per https://github.com/inlets/inlets/blob/master/docs/kubernetes.md#try-inlets-with-kind-kubernetes-in-docker

Possible Solution

Copy solution from faas-cli

openfaas/faas-cli#730

inlets-pro: command not found

This is doubly strange: the provisioning was done without the pro flag, yet the output says to use the pro client, which is not installed.

Expected Behaviour

Current Behaviour

Possible Solution

Steps to Reproduce (for bugs)

  1. curl -sLSf https://inletsctl.inlets.dev | sudo sh

Context

Your Environment

  • inlets version inlets --version
    Version: 3.0.1

  • Docker/Kubernetes version docker version / kubectl version:

  • Operating System and version (e.g. Linux, Windows, MacOS):

  • Link to your project or a code example to reproduce issue:

[Feature] Look for an INLETS_ACCESS_TOKEN_FILE env var if --access-token-file is not set

It would be nice to be able to set an environment variable for the location of the access-token file. At the moment we need to set it for every command, and it's a lot to type.

Expected Behaviour

If you omit --access-token-file and the INLETS_ACCESS_TOKEN_FILE environment variable is set, then we could try to use the file it points to (if it exists).

Current Behaviour

You have to set it every time.

Possible Solution

Check if the flag is set; if so, use the flag value. If it's not, check whether the env variable is set, and if it is, use its value.
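A small sketch of that fallback (assuming os and fmt are imported; not the actual implementation):

// resolveAccessTokenFile prefers the flag, then falls back to the
// INLETS_ACCESS_TOKEN_FILE environment variable when it is set.
func resolveAccessTokenFile(flagValue string) (string, error) {
	if flagValue != "" {
		return flagValue, nil
	}
	if fromEnv := os.Getenv("INLETS_ACCESS_TOKEN_FILE"); fromEnv != "" {
		if _, err := os.Stat(fromEnv); err != nil {
			return "", fmt.Errorf("INLETS_ACCESS_TOKEN_FILE points to %q, but it could not be read: %w", fromEnv, err)
		}
		return fromEnv, nil
	}
	return "", fmt.Errorf("give --access-token-file or set INLETS_ACCESS_TOKEN_FILE")
}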

Proposed provisioner: AWS Fargate

This one's a bit different, but I thought it would be worth discussing. Fargate allows for public IP addresses and seems like it could meet all the requirements laid out for a new provisioner. What do you think? My main motivation is that I imagine it starts up quicker than other options.

Convert CI to GitHub Actions

Task

Convert CI to GitHub Actions

Context

Travis is removing its free tier for OSS projects, so this should be built with GitHub Actions instead, which is not going to add additional cost.

I would expect this to be one of the easier projects to convert given that Docker isn't used in the build.

See also:

https://github.com/mvdan/github-actions-golang
https://github.com/actions/create-release
https://github.com/actions/upload-release-asset

alexellis/arkade#263
alexellis/k3sup#279

Enable inlets-pro and TCP with inletsctl kfwd

Enable inlets-pro and TCP with inletsctl kfwd

Expected Behaviour

inlets-pro users should be able to do raw TCP forwarding with inletsctl kfwd

Current Behaviour

OSS inlets users can forward HTTP/s ports

Possible Solution

Error with downloading SHASUM

Error with downloading SHASUM

Expected Behaviour

Download script would work without error and check the SHASUM of the tgz

Current Behaviour

A user reported that it wasn't working.

alex@kosmos github.com % curl -SLs https://inletsctl.inlets.dev|sudo bash
Downloading package https://github.com/inlets/inletsctl/releases/download/0.7.0/inletsctl-darwin.tgz as /tmp/inletsctl-darwin.tgz
Download Complete, extracting /tmp/inletsctl-darwin.tgz to /tmp/ ...
OK
SHA256 fetched from release: Not
shasum: standard input: no properly formatted SHA checksum lines found
SHA mismatch! This means there must be a problem with the download

It seems that we are now missing the SHAs on the releases page from 0.7.0 - https://github.com/inlets/inletsctl/releases/tag/0.7.0

If you need to download inletsctl, just head over to the releases page for the time being.

Proposed provisioner: Linode

Linode provides fairly cheap bandwidth in a handful of locations. Would you be interested in a provisioner for them?

Compress each binary release

We should compress each binary so that the download is quicker. Tar/gzip seems the most portable option.

The get.sh script must also be updated.

This is needed after adding the EC2 SDK, which took us from 8 MB to 38 MB 🤯

Update Ubuntu images to 18.04

Expected Behaviour

Ubuntu 16.04 is now EOL, and only paid "ESM" support / updates are available.

Update all provisioners to use Ubuntu 18.04 images and then test each.

Otherwise, images may be removed without warning, such as in: inlets/inlets-operator#123, resulting in users being unable to use inletsctl/inlets-operator.

Openstack (STACKIT) Support

Openstack (STACKIT) Support

inlets is really a cool project, and we (here at the Schwarz Group) would like to use it on our own cloud provider, STACKIT (https://stackit.de/en/). OpenStack is the engine of STACKIT.

In addition, Kubernetes is a big building block for the future, and I think inlets will help developers in many areas of their work.

To use inlets, it would be sweet to support OpenStack/STACKIT too.

I have already created the provider, and I'm just waiting for your approval before I create a pull request for it.

I would be more than happy to get an approval from you.

Looking forward.

Civo IP mismatch

It appears that Civo are using some kind of IP mapping, where the inlets PRO installation tries to get the public IP by reaching out to an IP finding service, but the IP that came back was incorrect.

inlets-pro client --url "wss://91.211.153.165:8123/connect" --token "$TOKEN"  --license-file ~/LICENSE --upstream http://192.168.0.26 --ports 80,443,8080
2020/09/23 10:08:47 Welcome to inlets-pro! Client version 0.7.0
2020/09/23 10:08:47 Licensed to: Alex Ellis <[email protected]>, expires: 48 day(s)
2020/09/23 10:08:47 Upstream server: http://192.168.0.26, for ports: 80, 443, 8080
Error: unable to download CA from remote inlets server for auto-tls: Get https://91.211.153.165:8123/.well-known/ca.crt: x509: certificate is valid for 185.136.235.146, not 91.211.153.165
$ civo instance list
+--------------------------------------+---------------+----------+-----------+------+----------+--------+----------------+------------+--------+
| ID                                   | Hostname      | Size     | Cpu Cores | Ram  | SSD disk | Region | Public IP      | Private IP | Status |
+--------------------------------------+---------------+----------+-----------+------+----------+--------+----------------+------------+--------+
| 0eb29dc5-b2ea-43c6-8bbe-2fd78010be92 | crazy-clarke6 | g2.small |         1 | 2048 |       25 | lon1   | 91.211.153.165 | 10.0.0.20  | ACTIVE |
+--------------------------------------+---------------+----------+-----------+------+----------+--------+----------------+------------+--------+

SSH:

civo@crazy-clarke6:~$ cat /etc/default/inlets-pro
AUTHTOKEN=TOKENVALUE
IP=185.136.235.146

That IP gets there via export IP=$(curl -sfSL https://checkip.amazonaws.com)

Running it later on, interactively gives:

civo@crazy-clarke6:~$ curl -sfSL https://checkip.amazonaws.com
91.211.153.165
civo@crazy-clarke6:~$ 

Azure provisioner to use auth contents as parameter instead of auth file path

Problem

Currently Azure provisioner factory function takes auth file as the 2nd parameter.

func NewAzureProvisioner(subscriptionId, authFile string)

This makes it hard to use in contexts/environments where the file system is not shared between the components involved at different stages. An example is using this in inlets-operator, where the file system of the latter is inside the Kubernetes cluster.

Solution

The solution is to make the provisioner factory function accept the auth contents as the 2nd parameter.

func NewAzureProvisioner(subscriptionId, authFileContents string)

Notes

It won't change how the end-user uses the inletsctl command line, because we are not changing any command line here. This is more for SDK users of inletsctl, such as inlets-operator, which consume the provisioner factory function directly.
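For CLI callers, the --access-token-file flag would stay; inletsctl would simply read the file before calling the factory. A sketch, assuming the signature proposed above, that the returned type satisfies the Provisioner interface, and that io/ioutil and the provision package are imported:

// newAzureProvisionerFromFile reads the auth file on behalf of the user and
// hands the contents over to the factory that now takes contents, not a path.
func newAzureProvisionerFromFile(subscriptionID, authFilePath string) (provision.Provisioner, error) {
	contents, err := ioutil.ReadFile(authFilePath)
	if err != nil {
		return nil, err
	}
	p, err := provision.NewAzureProvisioner(subscriptionID, string(contents))
	if err != nil {
		return nil, err
	}
	return p, nil
}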

Enable delete via IP address

Expected Behaviour

After provisioning and coming back later, an inlets exit-node will be known mainly by its IP rather than the internal ID used by the cloud provider.

We could do something like this for a friendlier experience:

inlets delete --ip 209.97.131.180

Current Behaviour

We delete via an ID

inlets delete --id 164857028

Possible Solution

If --ip is given, iterate and page through the instances the cloud provider has (filtered by tag?), match on IP, then use that ID to run the deletion.

This is a follow-on feature from #1
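Since the Provisioner interface shown earlier has no listing method, the lookup would need something like the hypothetical lister sketched below (field names on ProvisionedHost are assumed; fmt and the provision package are assumed imported):

// hostLister is a hypothetical extension the provider packages could expose.
type hostLister interface {
	// List returns provisioned hosts, optionally filtered by a tag or label.
	List(filter string) ([]provision.ProvisionedHost, error)
}

// deleteByIP finds the host whose public IP matches and deletes it by ID.
func deleteByIP(p provision.Provisioner, l hostLister, ip string) error {
	hosts, err := l.List("inlets")
	if err != nil {
		return err
	}
	for _, host := range hosts {
		if host.IP == ip {
			return p.Delete(provision.HostDeleteRequest{ID: host.ID})
		}
	}
	return fmt.Errorf("no host found with IP %s", ip)
}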

AWS EC2 Provisioner does not support temporary credentials with session token.

Many corporate or high security environments vend temporary AWS credentials to access an AWS account. These credentials have three factors: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. Without all three the authentication will fail.

Currently the AWS EC2 provisioner supports specifying the access key, and secret key, but does not support specifying a session token. As a result inletsctl will attempt to make AWS API calls with only the first two factors, and auth will be rejected.

Expected Behaviour

I should be able to optionally specify a session token when calling inletsctl create, or inletsctl should make use of the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, as this is the recommended way to set temporary credentials for a specific shell session.

Current Behaviour

inletsctl does not support session tokens.

Context

By default AWS credentials last forever, until revoked. Many orgs instead use temporary AWS credentials which have a session token and expire. Without support for this form of auth it is hard to use inletsctl in orgs that require the use of temporary credentials.

More docs on temporary AWS credentials here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
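A sketch of what that could look like with the aws-sdk-go packages (assuming the v1 SDK is the one in use; passing an empty session token keeps today's behaviour for plain static keys):

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newAWSSession builds a session from all three credential factors; the
// sessionToken may be empty for long-lived access keys.
func newAWSSession(region, accessKey, secretKey, sessionToken string) (*session.Session, error) {
	return session.NewSession(&aws.Config{
		Region:      aws.String(region),
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, sessionToken),
	})
}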

Your Environment

Version: 0.8.6
Git Commit: f0b3d64

Example command inlets pro doesn't use a env_var

Expected Behaviour

I expect that, when setting up an inlets-pro exit node, inletsctl prints a command that makes use of the two env vars it suggests:

  export TCP_PORTS="8000"
  export LICENSE=""

Current Behaviour

It suggests setting these vars, then uses a hard-coded 8000 for the TCP port:

Command:
  export TCP_PORTS="8000"
  export LICENSE=""
  inlets-pro client --connect "wss://178.62.88.252:8123/connect" \
        --token "zL" \
        --license "$LICENSE" \
        --tcp-ports 8000

Possible Solution

Set the last line of the output above to --tcp-ports $TCP_PORTS.

Add provisioner for Civo

Expected Behaviour

As a user of Civo Cloud, I'd like to provision exit nodes there.

export CIVO_API="" # Get from https://api.civo.com

inletsctl create --provider civo  --access-token $CIVO_API 

Current Behaviour

Not available at present

Possible Solution

For a proof of concept, implement a provisioner in the /pkg/ folder. Once validated, this code can be moved into the inlets-operator and vendored back into this project.

Use the smallest/cheapest instance which comes with an IPv4. This looks to be "extra small" and comes in at 5 USD / mo. An image for Ubuntu 16.04 is available

Civo has no supported Golang SDK, so you'll need to make HTTP calls directly to their API. Keep things lightweight; only three operations are needed:

See the API documentation:

Note: Civo does not support cloud-init, but have recently added a start-up script which is sufficient.

[Suggestion] Use separate commands for each provider

Rather than using an ambiguous string to define a provider, have each provider as a sub-command of the create command.

Expected Behaviour

Have better type safety, better discovery, and easier usage for users

Current Behaviour

Run the create command and add flags as they are needed; these differ per provider.

Possible Solution

Create separate commands for each provider. The files are already separated; this would just mean adding the &cobra.Command{} to the top of each file and registering the commands with the create command.

The added bonus here is that there can be provider-specific help information displayed in the terminal. An excellent use of the help text would be to display what is being created, which today is only known by reading the code or the inlets-operator README:

$ inletsctl create gcp --help
Create an exit node on Google Cloud Platform.

Note: The account associated with the access token 
    will need compute.firewalls.create and serviceaccount user permissions

Usage: 
    inletsctl create gcp \
        --access-token-file $HOME/gcp-token.json \
        --zone "us-central1-a" \
        --project-id "awesomness"

Flags:
  ...
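A sketch of the corresponding cobra wiring (inletsctl already builds its CLI with cobra, per the &cobra.Command{} reference above; flag names, defaults and the registration call are illustrative, and github.com/spf13/cobra is assumed imported):

func makeCreateGCP() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "gcp",
		Short: "Create an exit node on Google Cloud Platform",
		RunE: func(cmd *cobra.Command, args []string) error {
			// GCE-specific provisioning would run here.
			return nil
		},
	}
	cmd.Flags().String("access-token-file", "", "Path to a GCP service account key file")
	cmd.Flags().String("zone", "us-central1-a", "GCP zone for the exit node")
	cmd.Flags().String("project-id", "", "GCP project ID")
	return cmd
}

// Registered from the create command's setup, e.g. createCmd.AddCommand(makeCreateGCP())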

This would also allow the flags available to be specific to the provider being used rather than having to negotiate a long list where some are relevant, and others are not.

It also provides an outlet for provider-specific information. In the above example, this includes the access permissions that inletsctl needs to be successful.

It would also remove 19 if checks each with a string value defined each time (maintenance overhead).

Steps to Reproduce (for bugs)

N/A

Context

I read through the code to know what each flag was being used for in the specific provider I was planning on using. Some of the flags are called out in the help, but not all.
"Do the commands with no provider called out mean it's required for all or none?"

eg: --region

"What happens if I leave one of the flags empty?"

eg: --remote-tcp

Also, there is no specific documentation for any one provider. In the demo GIF it's DigitalOcean; in the README there is an example for Scaleway; but again, nothing comprehensive.

Your Environment

  • inlets version inlets --version
    Version: 0.3.9
  • Docker/Kubernetes version docker version / kubectl version:
    N/A
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Ubuntu Linux
  • Link to your project or a code example to reproduce issue:
    N/A

[Feature] Provision exit-servers with HTTPS using new Let's Encrypt feature

When inlets-pro got its own Let's Encrypt features, it meant that we could set up a VM with an HTTPS certificate in a very short period of time. That process to set up the VM is still manual, as documented here.

Expected Behaviour

Accept the flags from the inlets-pro HTTP server and make them available for an HTTPS server created with inletsctl:

inletsctl create \
  --provider digitalocean \
  --access-token-file ~/do-token.txt \
  --letsencrypt-domain openfaas.example.com \
  --letsencrypt-email [email protected] \
  --letsencrypt-issuer staging

Current Behaviour

Only creates TCP servers today.

Possible Solution

Consider updating the cloud-provision package and inletsctl itself. A new systemd unit file may also be required, to inject the new variables into the unit file.

Uploading too many assets

We appear to be uploading too many assets

Expected Behaviour

Just the tgz and SHA files should be added to the releases page

Current Behaviour

Everything is being added including the uncompressed versions.

Possible Solution

Filter the files in the staging directory (bin) or update the publish job to filter the uploads

Steps to Reproduce (for bugs)

  1. Fork the repo
  2. Do a release build

Add delete command

Expected Behaviour

The delete command could take an ID or an IP address plus a provider, then iterate your instances and delete anything that matched.

inletsctl rm --ip 157.245.42.195 --provider digitalocean --access-key-file ~/do-access

Current Behaviour

Simply use doctl compute droplet ls/rm or the equivalent

Possible Solution

Copy the approach of create.go and call it delete.go

The provisioning libraries already support delete as an operation.

[Suggestion] Move defaults into provision package

@alexellis how would you feel about moving the provisioner defaults into the provision package and removing the additional section? The only config currently in there is for projectID, port and zone. If we make port a constant and add it to provision as well then we only have to add projectID and zone to BasicHost which can be populated if set. This would also require some minor changes to a couple of the provisioners.

Something like this in provision.go

func NewBasicHost(provider, name, region, projectID, zone, userData string) (*BasicHost, error) {
	if _, ok := defaults[provider]; !ok {
		return nil, fmt.Errorf("no provisioner for provider: %q", provider)
	}
	host := &BasicHost{
		Name:      name,
		OS:        defaults[provider].os,
		Plan:      defaults[provider].plan,
		UserData:  userData,
		ProjectID: projectID,
		Zone:      zone,
	}
	if region == "" && len(defaults[provider].region) != 0 {
		host.Region = defaults[provider].region
	} else {
		host.Region = region
	}
	return host, nil
}

type provisionerDefaults struct {
	os     string
	plan   string
	region string
}

const ControlPort = 8080

var defaults = map[string]provisionerDefaults{
	"digitalocean": {
		os:   "ubuntu-16-04-x64",
		plan: "512mb",
	},
	"packet": {
		os:     "ubuntu_16_04",
		plan:   "t1.small.x86",
		region: "ams1",
	},
	"scaleway": {
		os:     "ubuntu-bionic",
		plan:   "DEV1-S",
		region: "fr-par-1",
	},
	"civo": {
		os:   "811a8dfb-8202-49ad-b1ef-1e6320b20497",
		plan: "g2.small",
	},
	"gce": {
		os:   "projects/debian-cloud/global/images/debian-9-stretch-v20191121",
		plan: "f1-micro",
	},
	"ec2": {
		os:     "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20191114",
		plan:   "t3.nano",
		region: "eu-west-1",
	},
}
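With that in place, a caller would only supply what differs from the defaults; for example (a fragment with illustrative values):

host, err := NewBasicHost("ec2", "tunnel-1", "", "", "", userData)
if err != nil {
	return err
}
// host.Region falls back to "eu-west-1", host.Plan to "t3.nano",
// and host.OS to the default Ubuntu AMI from the table above.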

I am happy to do the work for this if you think it is a good idea. No worries if not.

error="websocket: bad handshake"

Expected Behaviour

Should be able to connect to the ec2 server provisioned by inletsctl, even without the pro version.

Current Behaviour

ERRO[2021/06/01 11:17:37] Failed to connect to proxy. Response status: 400 - 400 Bad Request. Response body: Client sent an HTTP request to an HTTPS server. error="websocket: bad handshake"
ERRO[2021/06/01 11:17:37] Remotedialer proxy error error="websocket: bad handshake"

Possible Solution

Steps to Reproduce (for bugs)

Context

Your Environment

  • inlets version inlets --version
    Version: 3.0.1

  • Docker/Kubernetes version docker version / kubectl version:

  • Operating System and version (e.g. Linux, Windows, MacOS):

  • Link to your project or a code example to reproduce issue:

Packet provider: project-id required

In the Packet provider, if you omit the --project-id flag, you get
a 404 error when you attempt to create the machine instead of
a working machine.

#5 (comment)

I think but am not 100% sure that this is also true for GCE.

Expected Behaviour

If the --project-id flag is omitted, a message should appear
stating that it's required.

Current Behaviour

Command to create a server is accepted without the --project-id flag,
and the response looks like this:

ed@iyengar:~$ inletsctl create --provider packet --access-token VBGBq525BcpXXXXXXXXXXX
Using provider: packet
Requesting host: charming-brown5 in ams1, from packet
POST https://api.packet.net/projects//devices: 404 Not found

Possible Solution

At https://github.com/inlets/inletsctl/blob/master/cmd/create.go#L156 where --project-id is parsed, introduce a check to see that it's of non-zero length if it's required for the provider.
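The check itself would be small; a sketch (the set of providers that need a project ID is assumed rather than exhaustive, and fmt is expected to be imported):

func validateProjectID(provider, projectID string) error {
	// Providers that address resources under a project need the flag up front.
	requiresProjectID := map[string]bool{"packet": true, "gce": true}
	if requiresProjectID[provider] && len(projectID) == 0 {
		return fmt.Errorf("--project-id is required when --provider is %s", provider)
	}
	return nil
}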

Context

This came up in the course of a first-time test to get inletsctl running and was easy to remedy.

Your Environment

  • inletsctl version 0.4.4
  • Windows on Arm + WSL Ubuntu for arm64

The delete command does not work for GCE

The delete command printed out after creating an exit node on GCE does not delete the instance until a --project-id flag is provided

Expected Behaviour

Unless specified explicitly, the project-id should be inferred from the GCE exit node's customID.

Current Behaviour

Screenshot 2020-01-30 at 3 34 41 PM

Possible Solution

Pull Request for the fix: #40
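The inference could be as simple as splitting the printed host ID, assuming it keeps the "instance|zone|project" layout seen in the generated delete command (sketch only; strings and fmt assumed imported):

// gceFieldsFromID recovers the instance name, zone and project from the
// pipe-separated ID that inletsctl prints for GCE hosts.
func gceFieldsFromID(id string) (instance, zone, project string, err error) {
	parts := strings.Split(id, "|")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected GCE host ID format: %q", id)
	}
	return parts[0], parts[1], parts[2], nil
}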

Steps to Reproduce (for bugs)

  1. create an instance with gce as the provider.
  2. make note of the printed delete command after successful execution of the above step
  3. try deleting the instance with an extra flag for the --access-token-file
  4. The command would fail with an error:
    got HTTP response code 404 with body: Not Found

Your Environment

  • inletsctl version
    Version: 0.4.1
    Git Commit: 3f1c896

  • Operating System and version (e.g. Linux, Windows, MacOS):
    MacOS

Download command failed on MacOS

Expected Behaviour

inletsctl download --pro should get me a macOS binary when using a Mac, but a Linux binary was downloaded instead.

Possible Solution

Add unit tests and fix bug.
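The fix presumably comes down to selecting the release artifact by runtime platform rather than hard-coding it. A sketch (only the "-darwin" suffix is grounded in the release asset names seen elsewhere on this page; the other suffixes are assumptions, and runtime is assumed imported):

// clientSuffix picks the release artifact suffix for the current platform.
func clientSuffix() string {
	switch runtime.GOOS {
	case "darwin":
		return "-darwin"
	case "windows":
		return ".exe"
	default:
		if runtime.GOARCH == "arm64" {
			return "-arm64"
		}
		if runtime.GOARCH == "arm" {
			return "-armhf"
		}
		return "" // plain Linux amd64 binary
	}
}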

Steps to Reproduce (for bugs)

  1. Use a Mac
  2. Run inletsctl download --pro
  3. See the wrong binary; it downloads the Linux one

cc @Waterdrips

Proposed provisioner: Hetzner

Hey @alexellis

Hetzner is a cheap German hosting provider with data centers in Germany and Finland.
You can get a cheap VM in their cloud environment for something like 3 EUR/month.

I would like to have this provider in inletsctl, and I'd also work on it, if that is okay with you?

Regards
Moritz

[Feature] install client with systemd

Expected Behaviour

Given a remote URL, upstream and auth token, users should be able to install a systemd unit file so that their tunnel persists.

Current Behaviour

Needs to be done manually or ad-hoc

Possible Solution

Copy approach from faasd - https://github.com/alexellis/faasd/blob/master/cmd/install.go

Steps to Reproduce (for bugs)

Workflow:

  1. inletsctl create (observe the output)
  2. inletsctl install --remote x --upstream y --token z
  3. systemctl status inlets

Context

For users who need a permanent tunnel from a Linux laptop, VM, bare-metal on-prem host or RPi, this is going to make running the client much easier for them.

Also related #12

curl/sudo sh - Install fails on Ubuntu 20.10

SHA comparison failed because the binary got installed in /tmp instead of into /tmp/bin/

Expected Behaviour

Current Behaviour

Possible Solution

My workaround was to create /tmp/bin and move the inletsctl from /tmp/ to /tmp/bin/

Steps to Reproduce (for bugs)

Ubuntu 20.10

  1. curl -sLSf https://inletsctl.inlets.dev | sudo sh

Context

Your Environment

  • inletsctl version
 _       _      _            _   _ 
(_)_ __ | | ___| |_ ___  ___| |_| |
| | '_ \| |/ _ \ __/ __|/ __| __| |
| | | | | |  __/ |_\__ \ (__| |_| |
|_|_| |_|_|\___|\__|___/\___|\__|_|

Version: 0.8.2
Git Commit: 1ba0bc1ce0e18757422c4de3cb8ca386d0a9d289

suspected issue with downloading inlets-pro on GCP

Expected Behaviour

Expect the created server to run inlets(-pro) and the output commands to result in a connected tunnel.

Current Behaviour

Client shows:

$ ./inlets-pro client --connect "wss://$EXIT_IP:8123/connect" --token "$TOKEN" --license "$PRO_LICENSE" --tcp-ports $TCP_PORTS
2020/02/08 23:45:17 Welcome to inlets-pro!
2020/02/08 23:45:17 Starting client - version 0.5.3
2020/02/08 23:45:17 Licensed to: [email protected], expires: 102 day(s)
2020/02/08 23:45:17 TCP Ports: [3306]
Error: unable to download CA from remote inlets server for auto-tls: Get https://104.155.178.133:8123/.well-known/ca.crt: dial tcp 104.155.178.133:8123: connect: connection refused

Logged in to the created server. This is what I found:

rheutan7@sharp-nightingale4:~$ cat /etc/default/inlets-pro 
AUTHTOKEN=tjqMrBbreSA9141nI7KMOVOhJxvJTbXISdHGen6yWEZxBhXe7L4nMyDX4U4hNghw
REMOTETCP=192.168.0.40
IP=104.155.178.133
rheutan7@sharp-nightingale4:~$ cat /etc/systemd/system/inlets-pro.service 
[Unit]
Description=inlets-pro Server Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=2
StartLimitInterval=0
EnvironmentFile=/etc/default/inlets-pro
ExecStart=/usr/local/bin/inlets-pro server --auto-tls --common-name="${IP}"  --remote-tcp="${REMOTETCP}" --token="${AUTHTOKEN}"

[Install]
WantedBy=multi-user.target
rheutan7@sharp-nightingale4:~$ sudo systemctl status inlets-pro
โ— inlets-pro.service - inlets-pro Server Service
   Loaded: loaded (/etc/systemd/system/inlets-pro.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Sun 2020-02-09 05:48:55 UTC; 1s ago
  Process: 1264 ExecStart=/usr/local/bin/inlets-pro server --auto-tls --common-name=${IP} --remote-tcp=${REMOTETCP} --token=${AUTHTOKEN} (code=exited, status=203/EXEC)
 Main PID: 1264 (code=exited, status=203/EXEC)

Feb 09 05:48:55 sharp-nightingale4 systemd[1]: inlets-pro.service: Main process exited, code=exited, status=203/EXEC
Feb 09 05:48:55 sharp-nightingale4 systemd[1]: inlets-pro.service: Unit entered failed state.
Feb 09 05:48:55 sharp-nightingale4 systemd[1]: inlets-pro.service: Failed with result 'exit-code'.
rheutan7@sharp-nightingale4:~$ which inlets-pro
rheutan7@sharp-nightingale4:~$ inlets-pro
-bash: inlets-pro: command not found
rheutan7@sharp-nightingale4:~$ ls /usr/local/bin/
rheutan7@sharp-nightingale4:~$ 

Possible Solution

Perhaps a verification script that ensures the binary is installed and that the inlets-pro service is running before provisioning completes, so the user knows whether there was an error.
The process should also clean up after itself by deleting everything that was created if there is a failure.

Steps to Reproduce (for bugs)

  1. Run inletsctl create --provider gce and all the flags necessary
  2. Run the output commands to connect with a client
  3. See the failed connection
  4. Log in to the created server and view the inlets-pro service logs: sudo systemctl status inlets-pro

Context

Unable to get inlets-pro to work with Google Cloud where most of my applications run

Your Environment

  • inlets version inlets --version
Version: 0.4.6
Git Commit: a03f5e2b9f7a1968795739ec39eaab99c0680447
  • Docker/Kubernetes version docker version / kubectl version:
    N/A
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux
  • Link to your project or a code example to reproduce issue:
    N/A
