
jumppad-labs / jumppad

Modern cloud native development environments

Home Page: https://jumppad.dev

License: Mozilla Public License 2.0

Go 98.13% HCL 1.21% Shell 0.04% Makefile 0.21% Dockerfile 0.01% Gherkin 0.41%
Topics: developer-tools, docker, go, kubernetes

jumppad's People

Contributors

anubhavmishra, apollo13, dependabot[bot], eveld, gregoryhunt, ishan27g, ksatirli, mocofound, nicholasjackson, nimatel, rebrendov, renevo, suzuki-shunsuke, tamsky, tdensmore, zortaniac


jumppad's Issues

UX menu around apply

Figure out UX for apply

Option 1 - Apply with a URL

Automatically clones the blueprint and copies it into $HOME/.shipyard/blueprints before running

yard apply https://github.com/blueprints/consul-routing

Option 2 - Apply a local folder; does not add to the blueprints folder, just applies

yard apply ./local/folder

Option 3 - No arguments; shows a menu of the blueprints in $HOME/.shipyard/blueprints

yard apply

Which blueprint would you like to run

[1] Consul Service Mesh (blueprint tag)
[2] Vault (#vault)
[3] Consul HTTP Routing example (#consulhttp)

[r] Remote repo
[l] Local repo

Option 4 - Run a specific blueprint using its tag

yard apply #consulhttp

Option 5 - Menu of blueprints from an archive

yard apply https://github.com/shipyard-run/blueprints

Which blueprint would you like to run

[1] Consul Routing
[2] Vault Kubernetes

K8s Multi-Node Cluster Support

Hi,
I wanted to check in regarding any plans on the roadmap to support multi-node k8s clusters using Shipyard. I am trying to test a few things using Consul with k8s and would love to see how Shipyard can help enable that setup.
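For illustration only, a multi-node cluster might be expressed with a node-count attribute on the cluster resource. The shape below is a sketch following the other config examples in this issue tracker; in particular the nodes attribute is hypothetical and not part of the current schema:

k8s_cluster "k3s" {
  driver = "k3s"

  # hypothetical attribute: number of worker nodes to create
  nodes = 3

  network {
    name = "network.cloud"
  }
}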

Color customisations in .vscode/settings.json

Hello! I use VS Code with a custom dark theme, and when I open the shipyard project for the first time it changes the Activity bar and Status bar colors. I think the color settings for VS Code can be safely removed from .vscode/settings.json.

Container pull UX

When creating a container, Shipyard should only pull from the remote Docker registry if the tagged version of the image does not exist in the local registry. We need to think of a way to override this behavior.
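One possible override, purely as a sketch (the always_pull attribute is hypothetical and does not exist today), would be a flag on the image block:

container "consul" {
  image {
    name = "consul:1.7.0"

    # hypothetical: force a pull even when consul:1.7.0 already exists locally
    always_pull = true
  }
}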

Image management, prune pulled images

When Shipyard pulls an image it should add it to a local image repo.
We can then run a command, shipyard prune, which would clean up any images pulled by Shipyard.

Man Page

No CLI should be without an applicable man page. I'd want to check there first rather than having to go online to Shipyard's documentation.

Blueprint Validation à la Terraform

I would like to see a blueprint validate command so errors can be caught before a run. Validate could also be used in CI as I upload blueprints to GitHub.

shipyard validate
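In CI this could be run against a blueprint folder before pushing; the path argument is an assumption about how the command might accept input:

shipyard validate ./my-blueprint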

Terraform Provider

We are on the fence about whether Shipyard should be a Terraform provider; after much deliberation we felt the workflow was not quite right.

There were things like wanting to be able to do shipyard ssh, or having the capability to connect to remote stacks. This did not feel like it fit with the Terraform workflow, or follow the Unix principle of a single tool for a single job.

That said, the architecture of the code was set out so that you could import Shipyard as a library and then create a provider from it.

At the moment our feeling is that you turn the Terraform provider on its head a little. We could see shipyard having a Terraform provider which allows you to execute Terraform config as part of the run workflow.

However, we are totally open to this. I think the key thing is hitting the sweet spot in the workflow; we would be lying if we said we were not heavily influenced by Vagrant and Terraform. We already have the need for a DAG and state, and the provider model should perhaps follow Terraform's practice of separate binaries.

Nomad Ingress for Network Namespaces

Nomad allows a network namespace to be defined for a task group; this namespace does not allow ingress traffic from any other namespace. Traffic within the group can flow freely.

In order to enable ingress when network namespaces are used, we need a way to route from the host level to a particular application running in a network namespace.

Ideas:

  • Proxy running at host IP which has full access to all network namespaces
  • Proxy exposes an API which allows a port to be opened from the host to the remote namespace

Getter implementation is not initialized

Hello again! I tried shipyard run github.com/shipyard-run/blueprints//consul-docker and got the following panic.

Running configuration from:  github.com/shipyard-run/blueprints//consul-docker

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19cd8d3]

goroutine 1 [running]:
github.com/shipyard-run/shipyard/pkg/clients.(*GetterImpl).Get(0xc00015a430, 0x7ffd524691a6, 0x31, 0xc0006cc0c0, 0x51, 0x1, 0x1)
        /home/runner/work/shipyard/shipyard/pkg/clients/getter.go:80 +0xc3
github.com/shipyard-run/shipyard/cmd.newRunCmdFunc.func1(0xc0001cf180, 0xc00015a4f0, 0x1, 0x1, 0x0, 0x0)
        /home/runner/work/shipyard/shipyard/cmd/run.go:93 +0x23ab
github.com/spf13/cobra.(*Command).execute(0xc0001cf180, 0xc00015a4c0, 0x1, 0x1, 0xc0001cf180, 0xc00015a4c0)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0x3806f60, 0x0, 0x1dd4f20, 0xc0000b4058)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
github.com/shipyard-run/shipyard/cmd.Execute(...)
        /home/runner/work/shipyard/shipyard/cmd/root.go:92
main.main()
        /home/runner/work/shipyard/shipyard/main.go:10 +0x52
shipyard version
Shipyard version: 0.0.33

Jupyter Notebooks

Integrate Shipyard with Jupyter Notebooks; it would be nice to integrate this as a full resource.

jupyter "mynotebook" {
  target = "container.mine"
}

Ref:
https://jupyter.org/

Display subcommand help by default

If I supply a command in which arguments are required, I'd expect help to be displayed on stdout rather than the current behaviour of doing nothing and exiting.

Shipyard status command to get the current state of a stack

The status command currently returns JSON which is difficult to read and is not ordered. This is actually the raw working structure.

status should have two modes:

  • default: human readable
  • json flag: JSON output

{
  "blueprint": null,
  "resources": [
    {
      "command": [
        "consul",
        "agent",
        "-config-file=/config/dc1.hcl"
      ],
      "depends_on": [
        "network.dc1",
        "network.wan"
      ],
      "image": {
        "Name": "consul:1.7.0",
        "Password": "",
        "Username": ""
      },
      "name": "consul_dc1",
      "networks": [
        {
          "name": "network.dc1"
        },
        {
          "name": "network.wan"
        }
      ],
      "status": "applied",
      "type": "container",
      "volumes": [
        {
          "destination": "/config/dc1.hcl",
          "source": "/home/nicj/go/src/github.com/shipyard-run/blueprints/consul-gateways/consul_config/dc1.hcl"
        }
      ]
    },
    {
      "command": [
        "consul",
        "connect",
        "envoy",
        "-mesh-gateway",
        "-register",
        "-address",
        "10.15.0.202:443",
        "-wan-address",
        "192.168.0.202:443",
        "--",
        "-l",
        "debug"
      ],

Example new output:

Resource: Test
Type: Cluster
...options
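The two modes might then be invoked as follows; the --json flag name is only a suggestion, not a decided interface:

shipyard status
shipyard status --json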

Reduce Update Text to a Notification vs a large banner

Hey @nicholasjackson, while the update message is useful for letting users know a new version of Shipyard is available, it takes a lot of real estate on stdout, and it looks jumbled if my terminal does not support colors.

Can you reduce this a bit?

A single line IMHO would suffice in letting me know a new version is available for me to download.

https://github.com/shipyard-run/shipyard/blob/553c1ec8d478f0b13fafdc3ce71b63c973eca008/pkg/utils/utils.go#L241-L250

Nomad Ingress

Define an ingress resource which allows external access to a task running in Nomad

nomad_ingress "nomad-products" {
  target  = "nomad_cluster.dev"
  job = "product-api"
  group = "product-api"
  task = "product-api-service"

  port {
    local  = "http"
    remote = "4646"
    host   = "14646"
  }
}

Process:

  • Interrogate the Nomad API to find the task; if there are multiple allocations, choose one at random
  • Determine the port, either from an absolute value or by looking up the dynamically assigned port by name
  • Create an ingress container running socat, pointing at the Nomad cluster and service port (see the sketch below)
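As a rough sketch of that last step, the ingress container could run something like the following, where 10.5.0.2 and 23456 are placeholders for the allocation's host address and the dynamically assigned port:

# listen on the ingress port and forward to the Nomad allocation
socat TCP-LISTEN:4646,fork,reuseaddr TCP:10.5.0.2:23456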

Limitations:

  • All jobs need to use host networking (the default); this should be overcome in a later PR

Refactor Ingress Resources

Ingress resources should be specific to the target, i.e.:

k8s_ingress "k8s-dashboard" {
  target = "k3s"
  service = "kubernetes-dashboard"
  namespace = "kubernetes-dashboard"

  port {
    local = 8443
    remote = 8443
    host = 18443
  }
}

container_ingress "nomad-http" {
  target  = "nomad"

  port {
    local  = 4646
    remote = 4646
    host   = 14646
  }
}

nomad_ingress "nomad-products" {
  target  = "dev"
  job = "product-api"
  group = "product-api"
  task = "product-api-service"

  port {
    local  = 4646
    remote = 4646
    host   = 14646
  }
}

remote_ingress "product-api" {
  target = "consul-remote" // remote network
  service = "product-api"

  port {
    local  = 4646
    remote = 4646
    host   = 14646
  }
}

Remote Connections

Problem:
When working on a microservice, developers often require the local version of the service to be connected to a full environment. While Shipyard provides some level of capability here, spinning up large and complex applications is beyond the hardware capabilities of most development machines.

Generally the approach taken is that the service is deployed to the shared environment and tested externally.

While remote debuggers allow a connection from a local IDE like VSCode to a service running in a remote cluster, there are several problems:

  1. The application code running in the cluster must have debug symbols enabled
  2. The application code running in the cluster must match the source on the local machine
  3. After every change the service must be re-deployed to the remote cluster

Ideally it should be possible to route traffic to a locally running instance of a service from a remote cluster.

It should also be possible to route traffic based on HTTP metadata, etc specifically to the local service without impacting any other element of the shared environment.

Enabling testing
One core problem with testing is generally the inputs for the test cases. Test cases are often defined by known issues and functionality, but bugs often exist in application code due to missing functionality which was never defined; since test cases are tied directly to the defined functionality, these errors slip through. To aid in the discoverability of bugs, a developer should be able to run the latest code with production inputs. This feature would enable the shadowing of traffic from a production environment to a local dev or test environment.

Proposed solution:
A new remote network resource which allows traffic to be routed to remote destinations
A new local service resource which allows local applications to be part of the shipyard stack

Functional overview
A central component (controller) within the remote network would enable tunnelling of connections between the remote dev machine and service traffic in the cluster.

The controller is API driven and runs as a single monolithic instance. When a dev environment connects to the controller it specifies which service it would like to masquerade as.

Specific to a Consul Service Mesh
For example we wish to send traffic destined for the remote products-api service to the local dev instance.

The controller will register a "fake" service instance of the products-api service with Consul, the advertise address for the service would be the address of the controller.

Since the Consul cluster would regard the "fake" instance as another service instance it will be able to take part in the service catalog and L7 routing.

The controller would automatically configure L7 routing for the service such as enabling traffic splitting or HTTP meta data based routing. This ensures that the fake service can be isolated from the normal network traffic.

The local client makes an outbound connection to the controller through the Consul Gateway using a valid mTLS certificate. Upon successful connection a persistent TCP connection is opened; this circumvents the need for the local machine to be remotely accessible (public IP, etc.).

When traffic is sent to the controller, the controller will proxy it through the tunnel back to the dev environment.

To connect to the controller the dev environment will use a valid mesh mTLS certificate and use the remote gateway as an ingress.

Example Config

Below is an example of what the configuration might look like. It is assumed that the remote controller would be deployed separately.

// A remote_network defines a network which does not exist within the current machine
// this could be a service mesh such as Consul running on a remote cluster.
remote_network "consul-remote" {
  endpoint = "192.179.231.1"
}

 // make remote services accessible locally
 // the service would be accessible using the FQDN `api-products-db.consul-remote.shipyard:5432`
 // and the port
remote_ingress "product-api" {
  target = "consul-remote" // remote network
  service = "product-api"

  port {
    local  = 4646
    remote = 4646
    host   = 14646
  }
}

// A local service defines an application component which is running on the local machine
// this can be used to proxy requests from the dev stack to local debug code
// if the network target is a remote_network then traffic from a remote cluster will be sent 
// to the local instance 
local_service "dev-products-http" {
  name "api-products" //service name is the registered service name
  network = "consul-remote"

  local_port = 8080

  // routing rules are optional, however all elements have a combinational effect,
  // i.e. they are combined.
  routing {
    http_header {
      key = "DEBUG"
      value = "nicstest"
    }
    
    // send 10 percent of remote traffic to the local instance
    traffic_split = 10
    
    // a copy of the traffic is sent to the local machine, the original request
    // also arrives at the normal destination, think TEE
    shadow_traffic = true
  }
}

Short Install

Ability to install Shipyard from a curl command

curl http://shipyard.run/install | bash 

All in one install and apply

curl http://shipyard.run/apply | bash -s http://github.com/blueprints/something 

Nomad Jobs

Need to define a stanza to allow the running of Nomad jobs

nomad_job "something" {
  cluster = "nomad.test"
}
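The stanza would presumably also need to reference the job files to register; the paths attribute below is an assumption about what that might look like, not a confirmed schema:

nomad_job "something" {
  cluster = "nomad.test"

  # hypothetical attribute: job specification files to register with the cluster
  paths = ["./jobs/example.nomad"]
}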

Resource names allow dot notation

Defining a resource named foo.example.com results in an error. I would expect to be able to use dot notation in my resource names. You could sanitize this during blueprint parsing as you read in the resource names.
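For example, a definition like the following currently fails to parse (the image and network here are just placeholders):

container "foo.example.com" {
  image {
    name = "nginx:latest"
  }

  network = "network.local"
}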

This was a major request for my team FYI.

Testing

Improve Unit test coverage by testing unhappy paths.

Make tests more generic and re-usable.

Refactor config testing.

This is currently underway; I am refactoring the Docker SDK usage into a more task-oriented API. For example, creating a container requires 4 calls to the Docker API, and testing the providers which create containers is incredibly verbose because of this.

By creating a Task Driven API, the providers will depend on it rather than the Docker client API. When testing, the Task Driven API can be mocked, meaning much cleaner tests. There will be a separate, full test suite for the Task Driven API.

The Task driven API will also abstract the implementation of Docker from the provider.

This way we can replace Docker with ContainerD or PodMan, etc.

Blueprint Namespaces

Currently, there is not a way for multiple blueprints to coexist sanely. Blueprints should be namespaced, or ordinal numbers added to their resource names, in order to segregate and list them.

Tools

The bash version of Shipyard used to have a tools command which ran an interactive terminal containing common tools such as the HashiCorp command line tools and kubectl.

We need to bring this feature to Shipyard. The tools container should be run as follows.

shipyard tools

If an existing tools container is running, then a shell will be attached to this container.

If the tools container is not running then it will be started.

In addition we need to improve the user experience for the tools container.

At present you define the tools container as follows:

container "tools" {
  image   {
    name = "shipyardrun/tools:latest"
  }

  command = ["tail", "-f", "/dev/null"]

  # Nomad files
  volume {
    source      = "./nomad_config"
    destination = "/files/nomad"
  }

  network = "network.cloud"
  
  env {
    key = "NOMAD_ADDR"
    value = "http://nomad-http.cloud.shipyard:4646"
  }
}

Tools should be rolled into a custom resource called tools, and many of the following options should be automatically set (a possible shape is sketched after this list):

  • Kube config files for running Kubernetes clusters should automatically be mounted
  • Environment variables for Nomad and Kubernetes clusters should be automatically set
  • Should be possible to specify a custom tools container
  • Networks are automatically attached
  • As a stretch goal there should be a feature where the networks/variables on a tools container are automatically updated as the state changes.
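A possible shape for the proposed resource, purely as a sketch (none of these attributes are confirmed):

tools "dev" {
  # hypothetical: override the default tools image
  image = "shipyardrun/tools:latest"

  # hypothetical: extra files to mount; kube configs, cluster environment
  # variables, and networks would be attached automatically
  volume {
    source      = "./nomad_config"
    destination = "/files/nomad"
  }
}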

Binary Distribution

It should be possible to distribute shipyard using Apt, Chocolatey, Brew, Snap, Windows Installer.

[bug] debug image loading logs line formatted poorly

When using the nomad_cluster.image configuration resource, debug log lines are badly formatted, with blank lines and stray characters. It would be nice to clean this up to improve the UX.

2020-05-26T16:12:38.564+0200 [DEBUG] oaded image: consul:1.7.2
2020-05-26T16:12:39.446+0200 [DEBUG]
2020-05-26T16:12:39.839+0200 [DEBUG] 7Loaded image: gcr.io/google_containers/pause-amd64:3.0
2020-05-26T16:12:40.457+0200 [DEBUG]
2020-05-26T16:12:41.161+0200 [DEBUG] 2Loaded image: nicholasjackson/fake-service:v0.9.0
2020-05-26T16:12:41.469+0200 [DEBUG]
2020-05-26T16:12:46.624+0200 [DEBUG] :Loaded image: nicholasjackson/consul-envoy:v1.7.2-v0.14.1
2020-05-26T16:12:47.550+0200 [DEBUG]
2020-05-26T16:12:57.494+0200 [DEBUG] %Loaded image: prom/prometheus:latest
2020-05-26T16:12:57.608+0200 [DEBUG]
2020-05-26T16:12:58.437+0200 [DEBUG] *Loaded image: prom/statsd-exporter:latest

Shipyard needs to return appropriate exit codes

Let's try to trap errors and return the appropriate exit code.

For instance, on error I should not see exit code zero; I would expect 1 or another code depending on the exact issue that was encountered.
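For illustration, checking the exit status after a failed run might look like this; the blueprint path is a placeholder:

shipyard run ./broken-blueprint
echo $?   # currently prints 0, a non-zero code is expected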

Downloading Blueprints

At present, when downloading a blueprint from GitHub, only the last directory is used in the blueprint cache at $HOME/blueprints.

├── consul-gateways
│   ├── consul_config
│   │   ├── api.hcl
│   │   ├── dc1.hcl
│   │   ├── dc2.hcl
│   │   └── web.hcl
│   ├── dc1.hcl
│   ├── dc2.hcl
│   ├── gateways.yard
│   └── network.hcl
├── examples
│   └── container
│       └── minimal
│           └── config.hcl
└── minimal
    └── config.hcl

The full path of the GitHub repo should be stored, similar to how GOPATH is used; this is to avoid accidentally overwriting a blueprint.
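For example, the consul-gateways blueprint above might then be cached under its full repository path, something like:

└── github.com
    └── shipyard-run
        └── blueprints
            └── consul-gateways
                ├── consul_config
                ├── dc1.hcl
                ├── dc2.hcl
                ├── gateways.yard
                └── network.hcl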

Currently the code is implemented in:
https://github.com/shipyard-run/shipyard/blob/a3ef03f5b2a9ce77912483ad16050babba41e2b5/cmd/get.go#L59

However, this function should be moved to the engine, as it will need to be used when modules are implemented.

Modules

Shipyard should be able to reference other blueprints and configuration sources. This would allow a modular approach to building environments.

When shipyard run is executed, Shipyard would include the referenced module in the list of files it is currently parsing. It would then create those resources as normal. If the reference is a remote repository, then Shipyard would download it to the user's local cache before execution.

Example config:

Local folder

module "consul" {
  source = "./subfolder"
} 

Remote Blueprint

module "consul" {
  source = "github.com/shipyard-run/blueprints//consul-nomad"
} 

Go 1.14.0

Currently it is not possible to build this application using Go 1.14.0. This is due to an upstream problem with the Helm package v3.1.1.

The application will build; however, when running a blueprint with a Helm chart, the run will fail.

E.g.

2020-03-01T10:57:43.033Z [ERROR] Unable to apply blueprint: error="Error running chart: template: consul/templates/client-daemonset.yaml:121:23: executing "consul/templates/client-daemonset.yaml" at <(.Values.client.join) and (gt (len .Values.client.join) 0)>: can't give argument to non-function .Values.client.join"

Go 1.13.8 does not have this problem.

Windows Support

Ensure all file paths are generalised across all operating systems to ensure that Windows (non WSL/2) and Unix based systems are supported.

For clarification, this relates to Shipyard running in the Windows cmd prompt with Docker Desktop. Shipyard works well with WSL2.

Namespace Networks

Modify the behaviour of Shipyard so that networks are namespaced [network].shipyard.

This resolves an interesting failure where a blueprint could not be applied because the user had an existing network with the same name.

Exec command

Shipyard should have an exec command which allows an interactive shell to a container, pod, or Nomad job.

The UX around the shell should have the following options:

  1. All in one
shipyard exec [target] [job | pod] [task | container] -- [command: default shell]

# container in a Nomad job, run `ls`
shipyard exec cluster.nomad product-api api-service  -- ls

# container in Kubernetes pod, default shell
shipyard exec cluster.nomad product-api api-service

# docker container, default shell
shipyard exec container.consul

Notes:

  • Should the target pod, etc be slash separated, flag based, or space delimited?
  • Shipyard should determine if bash or zsh is available and default to that if possible rather
    than sh
  2. Interactive Mode
$ shipyard exec

# Which target would you like to use?
[1] Consul Container
[2] Kubernetes Cluster
[3] Nomad (cluster)

$ 3

# Would you like to access the node or a job?
[1] Node
[2] Job (default)

$ 2

# What job in the Nomad cluster would you like to use?
[1] Products API
[2] Front end service

$ 1

# Which container would you like to attach to?
[1] Products Service
[2] Envoy Sidecar

$ 1

# What command would you like to execute (press return for default shell)?

$ ls -lha

Notes:

  • Interactive mode should be able to pick up from a partial exec; for example, if only the cluster and job were specified in the command, interactive mode should prompt for the container and command.
  • If there is only one option then the question should be skipped.
  3. Bash Autocomplete
$ shipyard exec con[tab]

Consul Cluster  Consul Container  Confluence Container

Allow log view per resource and aggregate

While it's easy enough to view the logs of the individual Docker containers that are provisioned using Shipyard, I find myself wishing for a log view similar to docker-compose.

shipyard logs - should show all logs for the stack like docker-compose up
shipyard logs <container_resource> - should show logs for that resource.

"Error parsing README.md front: unknown delim"

I am following the guide at https://learn.hashicorp.com/consul/gs-consul-service-mesh/understand-consul-service-mesh#setup-a-kubernetes-environment-with-shipyard and am getting the following error. :(

curl https://shipyard.run/apply | \
 bash -s github.com/shipyard-run/blueprints//learn-consul-service-mesh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   315  100   315    0     0     77      0  0:00:04  0:00:04 --:--:--    77
Running configuration from:  github.com/shipyard-run/blueprints//learn-consul-service-mesh

Error parsing README.md front: unknown delim
2020-06-05T12:52:27.182+0200 [DEBUG] Statefile does not exist

DAG and State

Currently state and dependencies in Shipyard are very rudimentary. Ideally we need to provide capability for incremental application of resources.

Deprecate .yard file

The main blueprint file should be deprecated in favour of a README.md markdown file which contains front-matter. Browser windows currently defined in the blueprint should be moved to the individual resources as defined in #43

Running blueprints does not open defined browsers

If a Blueprint file specifies one or more web browsers to be opened, the current logic is that on Darwin open is used. On Linux xdg-open is used.

// After apply runs the following browser windows are opened (does not open them if run with --headless)
browser_windows = [
  "http://localhost:18200",
  "http://localhost:18080",
  "http://localhost:18443",
]

Need to implement more robust logic around checking for these programs; also need to implement a vanilla Windows version.

When using WSL 2 on Windows, xdg-open needs to be installed and a browser such as Chrome needs to be aliased as a bash command.

Ingress

Is the current process for Ingress the best approach?

We have thought about differing approaches for ingress, such as moving this into a single container; however, finding a balance between UX and something which can deliver the requirements is not easy. While the current process is not hugely optimised, it satisfies the requirements, allows a simple UX, and consumes few resources.

Requirements:

  • Any ingress should be accessible using DNS convention [container].[network].shipyard
  • Ingress should be accessible using an IP address
  • Ingress should be able to have a statically assigned IP address or dynamic
  • Ingress should be able to expose a port to the host and bind to any port
  • Ingress should be able to proxy to containers, K8s pods/services, and Nomad tasks

Embed `browser_windows` in `hcl` config file

Current Behavior

Currently it is possible to configure the blueprint by adding a browser_windows section in the .yard file.

browser_windows = [
  "http://localhost:18443",
  "http://localhost:18500",
]

This works fine in cases where the blueprint is created for a complete scenario (aka all applications that need to be exposed are started with the first shipyard run ...).

Extended behavior
I'd like a way to configure the browser_windows behavior inside the ingress object:

  • Example syntax 1
ingress "consul-ui" {
  target = "k8s_cluster.k8s"
  service  = "svc/hashicorp-consul-ui"
  
  # Defines whether to honor the `browser_window` param or not
  headless = true
  # Defines URL to open in browser after ingress is created
  browser_window = "http://localhost:18500"
    
  network  {
    name = "network.local"
  }

  port {
    local  = 80
    remote = 80
    host   = 18500
  }
}
  • Example syntax 2
ingress "consul-ui" {
  target = "k8s_cluster.k8s"
  service  = "svc/hashicorp-consul-ui"
  
  # can be honored globally by `shipyard` config
  # local_env could potentially be expanded for new feature that leverage desktop integration
  local_env  {
    # Defines whether to honor the `browser_window` param or not
    # headless = true # having this too here might be an overkill

    # Defines URL to open in browser after ingress is created
    browser_window = "http://localhost:18500"
  }
  
    
  network  {
    name = "network.local"
  }

  port {
    local  = 80
    remote = 80
    host   = 18500
  }
}

Shipyard List: show currently running blueprints

shipyard list: should show blueprints that are running

This would be a nice command to show all currently running blueprints. This should be backlogged, however, because at this time multiple running blueprints do not coexist well together.
