Hippo Client

hippo is an experimental client for the Hippo PaaS.

The hippo tool talks directly to the Hippo API; its primary purpose is to exercise the various endpoints provided by the hippo-openapi project.

Users seeking to build, deploy, and run applications should look at spin.

Using the Hippo Client

Logging in

$ hippo login
Enter username: bacongobbler
Enter password: [hidden]
Logged in as bacongobbler

Authentication is handled through hippo login. The Hippo server URL is specified with the --url flag. Hippo requires authentication: if --username or --password are not provided, the CLI will prompt for them.
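
For example, to log in non-interactively (a sketch; the URL and credentials are placeholders for your own Hippo instance):

$ hippo login --url https://localhost:5309 --username bacongobbler --password 'Passw0rd!'
Logged in as bacongobbler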

Logging out is performed with hippo logout.

$ hippo logout
Logged out

If you want to skip server TLS verification, pass the -k flag to hippo login. This can be useful if you are running development services with self-signed certificates.

Note: the -k and --danger-accept-invalid-certs flags are a security risk. Do not use them in production.
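
For example, against a local development instance that uses a self-signed certificate (a sketch reusing the placeholder URL above):

$ hippo login --url https://localhost:5309 -k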

Creating an Application

$ hippo app add helloworld helloworld
Added App helloworld (ID = 'e4a30d14-4536-4f4a-81d5-80e961e7710c')
IMPORTANT: save this App ID for later - you will need it to update and/or delete the App

Creating a Channel

$ hippo channel add latest e4a30d14-4536-4f4a-81d5-80e961e7710c
Added Channel latest (ID = '685ff7d8-7eef-456f-ad5a-4c5c39975588')
IMPORTANT: save this Channel ID for later - you will need it to update and/or delete the Channel

If not otherwise specified, Hippo deploys the latest revision to the channel. This can be changed by providing a different --range-rule, or by specifying a --revision-id.
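
For example, to pin a new channel to a specific revision (a sketch reusing the App ID from above; the channel name is arbitrary):

$ hippo channel add stable e4a30d14-4536-4f4a-81d5-80e961e7710c --revision-id 1.0.0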

By default, Hippo will bind the channel to a domain with the address <channel_name>.<app_name>.<platform_domain>. In this case, latest.helloworld.hippofactory.local. If you want to change this domain, use the --domain flag.
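
For example (a sketch; hello.example.com is a placeholder domain):

$ hippo channel add latest e4a30d14-4536-4f4a-81d5-80e961e7710c --domain hello.example.com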

Creating a Revision

If you pushed a bindle to bindle-server called helloworld/1.0.0:

$ hippo revision add helloworld 1.0.0
Added Revision 1.0.0

If any application uses that storage ID, all of its channels are re-evaluated to determine whether the new revision needs to be re-scheduled on the job scheduler.

Adding an Environment Variable

$ hippo env add HELLO world 685ff7d8-7eef-456f-ad5a-4c5c39975588
Added Environment Variable HELLO (ID = 'c97f9855-d998-4dac-889b-11b553f53bea')
IMPORTANT: save this Environment Variable ID for later - you will need it to update and/or delete the Environment Variable

Building from source

cargo build --release
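
Cargo places the optimized binary under target/release, so you can run it directly from there:

$ ./target/release/hippo --help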

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Contributors

bacongobbler, dicej, itowlson, kate-goldenring, michellen, technosophos, vdice

hippo-cli's Issues

Add HTTP Basic Auth support for Hippo-CLI

I have added HTTP Basic auth on the Bindle server, so I think we can now expose that to the hippo-cli.

I am thinking:

--bindle-user <string> (with BINDLE_USER env var)
--bindle-password <string> (with BINDLE_PASSWORD env var)

Should it emit a warning if these params are used when the Bindle server is not using SSL? Or is that danger self-evident/obvious?

hippo app list

add command to list available apps (and potentially a more verbose table with storage-id and channel info)

$ hippo app list
name | storage ID | channels
foo  |   bar      | alpha, beta, stable, ...

received generic 404 error when not added as collaborator

If I am NOT added as a collaborator to an application, I receive the following error:

><> hippo push .
Error: Invalid request (status code 404): Some("")

Either hippo or the hippo-cli should detect that I am not a collaborator and return an appropriate error message.

Error message: File not found error with no path

I am getting this error message:

$ hippofactory HIPPOFACTS -a prepare --dir _scratch
Error: No such file or directory (os error 2)

I think the problem is that my HIPPOFACTS points to a target/wasm32-wasi/release/cgi-rust.wasm, but the module is actually in target/wasm32-wasi/debug/cgi-rust.wasm

I think all we need is a better error message

list channels by app

hippo channel list --app-id <app-id>
or
hippo channel list --app-name <app-name>

Support debug and release builds

Many build systems have the concept of debug and release builds (under various names), and output these at different paths.

As a developer, I want to maintain a single HIPPOFACTS file such that:

  • I can deploy local debug builds interactively using quick hippofactory . syntax
  • I can deploy local release builds interactively with slightly more effort
  • I can deploy release builds from CI

Not all file paths will necessarily change in the same way, e.g. WASM modules may change between bin/debug and bin/release directories while CSS files remain under styles and JavaScript files change from dist/*.js to dist/*.min.js.

To be decided: should we provide a way to map file paths e.g. bin/debug/*.wasm => bin? This allows file paths to be the same regardless of debug vs release... but in that case shouldn't the build system be putting them in the 'right' place?

Hippofactory should be agnostic about build systems; therefore, any knowledge of paths, filenames, etc. should be confined to the HIPPOFACTS file.

Auto-generated pre-release info may not be valid semver

The auto-generated pre-release information attached to a hippo app's bindle version by this CLI may not be valid semver.

Specifically, if there are components in the pre-release data with leading zeros, the semver library/crate used by bindle will return an error. (See dtolnay/semver#230, the fix for which has been in the aforementioned library since its 1.0.0 release and has been running in bindle since mid-September.)

As an example, the version 0.1.0-vdice-2021.11.08.11.43.43.548 was produced earlier today via hippo prepare. The problematic component in this example is the 08 portion.

Attempting to push the bindle with this version results in:

$ hippo push -k .
Error: Error pushing bindle to server: Invalid request (status code 400): Some("Request body toml deserialize error: invalid leading zero in pre-release identifier for key `bindle` at line 21 column 1")

`spin deploy`: Create App, Channel, and Revision from a bindle ID

I as a user have a Bindle storage ID and a revision number. I want Hippo to take that information and

  • create an App using this storage ID
  • create a Channel with an automatically generated URL, pointing at that revision number

The experience should look similar to this:

><> hippo serve bacongobbler/toast-on-demand 0.3.0
https://759b3fac-7290-4405-bd07-2d217a3d8c17.hippofactory.dev

CLI should check if token has expired

I've noticed when attempting to run commands the following day, I see errors like the following:

><> hippo app add helloworld helloworld
Error:

Under the hood, the client received a 401 Unauthorized response from the API server.

The CLI should do two things:

  1. If the token in the local config cache has expired, prompt the user with their password to reissue a new token.
  2. If the token has NOT expired AND the client receives a 401 Unauthorized response, prompt the user to log back in with authorized credentials (e.g. log in as an administrator)

getting blank error trying to add channel with revision ID

src/hippo-cli [main] $ hippo channel add --revision-id spin-hello-world/1.0.0 spin-test "37bae039-8ce7-4105-8252-bf4b6f527ba4"
Error:

src/hippo-cli [main] $ hippo channel add --revision-id 1.0.0 spin-test "37bae039-8ce7-4105-8252-bf4b6f527ba4"
Error:

`hippo help` leaks runtime information

If I run export HIPPO_PASSWORD=hunter2 then call hippo help push, my password is leaked to standard output.

><> export HIPPO_PASSWORD=hunter2
><> hippo help push | grep hunter2
            The username for connecting to Hippo [env: HIPPO_PASSWORD=hunter2]

-a prepare fails with external references

If you have a facts file that contains external references, and you pass the -a prepare action, then it fails with the error "Spec file contains external references but Bindle server URL is not set," even if the Bindle server is set.

This occurs because we use an Option to represent the Bindle server, and set that Option to None to say "don't push to a server"... which makes the external ref fetcher think there isn't a Bindle server set.

check API compatibility

deislabs/hippo#772 introduces API versioning for the server. The Hippo CLI can then check the api-supported-versions header to determine whether the API it targets is supported. Alternatively, we can send the Api-Version header to force Hippo to serve a specific version of the API. If it cannot serve that version, an error will be returned.

This will allow the CLI to inform the user if it is incompatible with the version of Hippo it is targeting.

Multiple conflicts in command line arguments

16:17 $ cargo run -- push ~/yotests/toast-on-demand/
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/hippo push /home/ivan/yotests/toast-on-demand/`
thread 'main' panicked at '
ArgSettings::TakesValue is required when ArgSettings::HideEnvValues is set.
', /home/ivan/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-3.0.0-beta.4/src/build/arg/debug_asserts.rs:90:5

Added takes_value(true) to the password fields and tried again:

16:18 $ cargo run -- push ~/yotests/toast-on-demand/
   Compiling hippo v0.8.0 (/home/ivan/github/hippofactory)
    Finished dev [unoptimized + debuginfo] target(s) in 6.56s
     Running `target/debug/hippo push /home/ivan/yotests/toast-on-demand/`
Error: Bindle URL is required. Use -s|--server or $BINDLE_URL

Okay I thought I had set $BINDLE_URL but:

16:28 $ cargo run -- push ~/yotests/toast-on-demand/ -s http://localhost:3000/v1
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/hippo push /home/ivan/yotests/toast-on-demand/ -s 'http://localhost:3000/v1'`
error: Found argument 'http://localhost:3000/v1' which wasn't expected, or isn't valid in this context

BUT YOU ASKED FOR IT YOU ASKED FOR IT RIGHT THERE LOOK.

16:28 $ cargo run -- bindle ~/yotests/toast-on-demand/
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/hippo bindle /home/ivan/yotests/toast-on-demand/`
thread 'main' panicked at 'Argument or group 'hippo_password' specified in 'requires*' for 'bindle_username' does not exist', /home/ivan/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-3.0.0-beta.4/src/build/app/debug_asserts.rs:130:13

Hippo password shouldn't be required for hippo bindle should it?

Okay, never mind, forget the network for now, let's just make a bindle and--

16:29 $ cargo run -- prepare ~/yotests/toast-on-demand/
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/hippo prepare /home/ivan/yotests/toast-on-demand/`
thread 'main' panicked at 'Argument or group 'hippo_password' specified in 'requires*' for 'bindle_username' does not exist', /home/ivan/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-3.0.0-beta.4/src/build/app/debug_asserts.rs:130:13

Oh right that HIPPOFACTS has an external reference. But that's an anonymous get from Bindle.

So there seem to be two things going on here:

  • It is looking for things that aren't actually needed
  • It is panicking when it doesn't find them (instead of printing a usage message and exiting)

I'll take a look. I wonder if we can implement some tests for this too.

Error message is wrong when running `hippo bindle` on a non-existent Bindle server

From @itowlson

Bindle folks: I'm a bit puzzled by this error. If I push to a bad server, I appear to get an "Invoice was not found" error from the Bindle client lib, which seems misleading. Is this expected / a known issue? Or am I doing something wrong?

I originally filed this issue on Bindle, believing it to be a client library error:

deislabs/bindle#207

But I am getting different results on hippo, so I think the bug may have been introduced by a recent change to hippo.

-v should accept "development" and "prod" as valid input

Right now the accepted values for the --invoice-version flag are "dev" and "production". We should be consistent with our naming: either we accept the short versions ("dev" and "prod"), the full versions ("development" and "production"), or both.

Fix "values must be emitted before tables" message

If any glob pattern "fails" (that is, no files match), writing the invoice fails with the error "values must be emitted before tables." This is utterly useless and disastrously misleading.

A better behaviour might be:

  • If the pattern is a specific file (no wildcards), and there are no matches: error "file not found: "
  • If the pattern contains wildcards and there are no matches: no error

It might be good to allow the user to mark a pattern "error if no matches" or to mark a file "skip if missing."

Given the impenetrable nature of standalone bindles, it might also be good to dump a list of matched files for debugging purposes (and to have a dry run option to do only this).

http2 error: protocol error: frame with invalid size

Anyone hit this error before?

$ hippofactory --server http://******** --hippo-url http://******** --hippo-username ******** --hippo-password ******** .
Error: Error creating request: reqwest::Error { kind: Request, url: Url { scheme: "http", username: "", password: None, host: Some(Domain("********")), port: Some(********), path: "/v1/_i", query: None, fragment: None }, source: hyper::Error(Http2, Error { kind: Proto(FRAME_SIZE_ERROR) }) }

Caused by:
    0: error sending request for url (http://********/v1/_i): http2 error: protocol error: frame with invalid size
    1: http2 error: protocol error: frame with invalid size
    2: protocol error: frame with invalid size

bindle is listening on HTTP rather than HTTPS. Could that be the issue here?

Discussion: obviate need for duplicate route prefix

Consider the following handler:

[[handler]]
route = "/images/..."
external.bindleId = "fileserver/0.3.0"
external.handlerId = "static"
files = ["images/*.jpg"]

Because the linked files have the prefix images, and are served from the /images URL, they need to be addressed as http://.../images/images/cassowary.jpg. This is surprising; and vexing, because when the dev is testing against their local filesystem, the files won't have the double prefix.

We should consider options for making this less surprising and less vexing, ideally "intelligently" (that is, without requiring the developer to specify their desired behaviour) while still behaving predictably. We may also need to support an override for when our intelligent behaviour turns out to be stupid after all.

Currently I am thinking of the rule:

  • If you are in a wildcard route, and any prefix of a files pattern matches the route prefix, then that prefix is dropped.

This works nicely for typical static file serving cases. However, I would like to talk more about possible other cases where this might break down, or is surprising. For example, what would you expect to happen here?

[[handler]]
route = "/images/..."
files = ["images/*.jpg", "photos/*.jpg"]

What about scenarios other than static file serving?

Further fortifications for dev versioning strategy

When the versioning strategy for an app is dev, a few prerelease components are added to the resulting bindle version, including user name and timestamp. We'd previously encountered instances of the timestamp bit not complying with semver (proposed fix in #92).

However, there remain areas where we could apply further rigor to ensure the bindle version is semver-compliant -- specifically around the user name. Currently we look this value up from USER or USERNAME in the env. Let's make sure we have logic to ensure this value can be injected as prerelease data without breaking semver compliance. (For instance, if USER=0foo, this would result in a non-compliant version.)
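
A minimal sketch of the kind of sanitization that could be applied (this is not the project's actual code; the function name and the conservative leading-zero rule are illustrative only):

fn sanitize_prerelease_identifier(raw: &str) -> String {
    // Keep only characters that are legal in a semver pre-release identifier.
    let cleaned: String = raw
        .chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == '-')
        .collect();

    // Numeric identifiers with leading zeros are rejected by the semver crate,
    // and an empty identifier is invalid; conservatively prefixing a letter in
    // either case keeps the result a valid alphanumeric identifier.
    if cleaned.is_empty() || cleaned.starts_with('0') {
        format!("u{}", cleaned)
    } else {
        cleaned
    }
}

fn main() {
    // "0foo" is the hypothetical value from the issue text above.
    println!("{}", sanitize_prerelease_identifier("0foo"));   // u0foo
    println!("{}", sanitize_prerelease_identifier("v dice")); // vdice
}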

Hippo push does not read password from env var

if I set env var BINDLE_PASSWORD and then execute

hippo push -k .

I get this error

error: The following required arguments were not provided:
    --bindle-password <bindle_password>

I was expecting that hippo would read the password from my environment

Warn when a pattern doesn't match

It's easy to typo a files pattern, or to forget to update it after you change your layout. This results in no matches. This is currently considered legitimate, but often it won't be. We should provide diagnostics for this case, e.g.

  • Warn if any pattern results in no match
    • Should we allow turning off this warning?
  • Warn? or error? if a pattern with no wildcards results in no match (i.e. file does not exist)

What do folks think are appropriate levels in these cases?

hippo push -k command from docs not working

looks like it requires arguments not mentioned in the docs

helloworld $ hippo push -k
error: The following required arguments were not provided:
    --server <bindle_server>
    --hippo-url <hippo_url>
    --hippo-username <hippo_username>
    --hippo-password <hippo_password>
    <hippofacts_path>

Even when I provided the necessary args, I got a message I was not able to decode. Would appreciate any help with this!

Error: Error creating request: reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("localhost")), port: Some(5309), path: "/account/createtoken", query: None, fragment: None }, source: hyper::Error(IncompleteMessage) }

META: Turning `hippofactory` into `hippo`, the CLI for Hippo

We've been talking about using Hippofactory as the basis for the general Hippo CLI. I think these are the major things we would need for a full-fledged UI (some of which are done, and some of which tie closely to demos):

In all of these cases, the assumption is that APP_NAME and other details can be gotten from ./HIPPOFACTS and that some information can be gained from contacting the API server (e.g. which channels exist, which is default, etc)

Required

  • hippo login URL - login to a Hippo server, after which all ops go to that server
  • hippo create|remove [APP_NAME] - create and remove apps. If app name is not specified, use name in ./HIPPOFACTS
  • hippo push [CHANNEL] - push to Bindle and notify the current app on the given (or default) channel.
  • hippo logs [CHANNEL] - view logs for the given (or default) channel.
  • hippo run - Run the code in a local Wagi instance (for local testing)
  • hippo prepare - Prepare a local bindle, but don't upload
  • hippo env set NAME VALUE [CHANNEL] - Set an env var
  • hippo env get|remove NAME VALUE [CHANNEL] - Get or remove an env var
  • hippo domain set DNS_NAME [CHANNEL] - Set the DNS name for an app
  • hippo domain get|remove [CHANNEL] - Get/remove the DNS name for an app. Remove sets it back to the default
  • hippo cert set CERT [CHANNEL] - Set a TLS/SSL cert for a channel (This might need to split cert and key into two files... we can figure that out)
  • hippo cert remove [CHANNEL] - Remove TLS/SSL cert for a channel

Optional (Possibly unnecessary)

  • hippo init - Create a new HIPPOFACTS file

Optional (Maybe UI only)

  • hippo rollback [CHANNEL]
  • hippo pin|unpin CHANNEL RELEASE_VERSION - pin a channel to a version
  • hippo channel create|remove CHANNEL

Optional (So user doesn't need bindle client, too)

  • hippo bindle search STRING - Search the bindle repo, useful for adding dependencies
  • hippo bindle yank VERSION - Mark a particular bindle as yanked

Conventions:

  • The convention for commands is hippo [NOUN] VERB, with the exception of hippo logs (I am open to a verb that means "to read logs" and has some traction in other CLIs)
    • The most common user actions will be on apps, so instead of doing hippo app create, hippo app logs, etc., I reduced it to hippo create and hippo logs. This is consistent with the vast majority of CLIs out there.
    • Subgroups are then used for less frequently accessed commands/objects: hippo channel create
  • I used remove instead of delete per the Helm UX research on the topic, and we could use the rm alias if we want

HIPPOFACTS and dependencies

I figured I would open an issue to brainstorm about how we might express dependencies between a local hippo app and an upstream Wasm module.

Use Case

Imagine I have a simple app. The HIPPOFACTS file looks like this:

# Fact: The airspeed velocity of an unladen hippo is zero
[bindle]
name = "myapp"
version = "1.0.0"
description = "Does neat stuff"

[[handler]]
route = "/"
name = "myapp.wasm"

I would like to add the ability to serve static files from my app (at the path /static/...). And rather than write that code, I would like to use an existing fileserver. The HIPPOFACTS file for that project looks like this:

# Fact: Tawaret was the ancient Egyptian hippo goddess
[bindle]
name = "fileserver"
version = "0.2.0"
description = "Provides static file serving for Wagi"

[[handler]]
route = "/static/..."
name = "fileserver.gr.wasm"
files = ["README.md", "LICENSE.txt"]

(While the actual artifact we care about is the invoice.toml, the HIPPOFACTS above gives us all the information we could reasonably expect a user to know about a Bindle).

So how might I, as a hippofactory user, express my desire to use the fileserver inside of my own app?

Option 1: Out-of-band Handling

It is perfectly reasonable to say that the solution to this is that the user figures out how to get a copy of fileserver.gr.wasm on their own, downloads it locally, and includes it directly:

# Fact: The airspeed velocity of an unladen hippo is zero
[bindle]
name = "myapp"
version = "1.0.0"
description = "Does neat stuff"

[[handler]]
route = "/"
name = "myapp.wasm"

[[handler]]
route = "/static/..."
name = "fileserver.gr.wasm"
files = ["index.html", "style.css"]

In this case, the user merely adds the downloaded Wasm module to their HIPPOFACTS, and the user takes on all of the responsibilities of managing that module.

Option 2: Add Dependencies in HIPPOFACTS

In this option, hippofactory is extended to declare additional dependencies more like Cargo.toml or package.json. Because bindles are immutable, we can punt here on the entire topic of lockfiles and such and focus just on the DevEx for now.

In this case, we allow a user to declare, in HIPPOFACTS that the user intends to use parcels located in an existing bindle. One possible syntax for this is:

# Fact: The airspeed velocity of an unladen hippo is zero
[bindle]
name = "myapp"
version = "1.0.0"
description = "Does neat stuff"

[[handler]]
route = "/"
name = "myapp.wasm"

[[dependency]]
[dependency.bindle]
name = "fileserver/0.2.0"   # Or whatever the actual bindle name is
[dependency.handler]
route = "/static/..."

While the exact structure of the [[dependency]] object is certainly a wide open area for conversation, the example above illustrates two features that I think are necessary:

  • It needs an unambiguous way to address the bindle and its parcels
  • It needs an unambiguous way to bind one or more parcels to a handler clause.

Let's treat each one separately:

Addressing a Bindle and Parcels

A bindle is composed of one or more parcels organized into groups. When pulling a bindle into an app, the user may have to make some decisions about how that bindle is to be pulled in.

For example, the bindle for our fileserver application has just one Wasm parcel. But a bindle could have several Wasm parcels, each doing a different thing. Bindle's design philosophy makes it possible for one parcel to declare dependencies on other parcels. And it also makes it possible to switch parcels on or off based on group membership or features.

So a key ability when importing a bindle is to be able to specify which parcels you want. And the traditional means of doing so are through specifying groups and features.

  • dependency
    • bindle: Object. The top-level description of a bindle
      • name: String (REQUIRED). The full name of a bindle, e.g. example.com/foo/1.2.3-alpha.99
      • groups: Array. Zero or more group names. Any group listed here is included in full (all parcels) unless a feature flag turns off the parcel.
      • features: Map<String, String>. Feature name and feature value to enable: (feature.wagi.file, true)
      • parcel: String. The SHA (or we could do the name, which is probably better) for an exact parcel to pull (maybe not a good idea) If this is specified, groups and features are ignored
      • excludeGlobalGroup: boolean. If set to true, the global group will not be imported from the parcel.

Given a dependency.bindle definition, the runtime should be able to determine what bindle to load, and which parcels to fetch for that bindle.

Binding parcels to features

The previous definition gave us a bindle and associated parcels, but it provided no instructions on how those parcels are to be included in the application. My suggestion is that we include a handler definition in a dependency, and that this definition matches the handler definition for a local object:

[[dependency]]
[dependency.bindle]
name = "fileserver/0.2.0"  # Pull the fileserver bindle and use its defaults (global group, no special features)
[dependency.handler]
name = "fielserver.gr.wasm"  # name of the parcel
route = "/resources/..."    # Override the `route` feature on the `fileserver.gr.wasm` parcel

The fields are the same as those on an existing HIPPOFACTS file, but the following clarifications apply:

  • name: This refers to the parcel name within the Bindle. As a design constraint on this system, parcel names should be unique.
  • Features: When a feature is specified (e.g. route = or host = ), it will override the feature on the imported module. We might need a reserved way of unsetting a value. (e.g. route = "-" effectively sets route to its empty value)

An open question: File parcels

Right now files are attached to a handler using the files array:

[[handler]]
route = "/static/..."
name = "fileserver.gr.wasm"
files = ["index.html", "style.css"]

When pulling in a bindle and its parcels, it is desirable that we pull in the file parcels attached to it.

But, as the present fileserver case illustrates, it may also be desirable to supply files from the local project to be loaded into the external parcel. E.g. if I load a fileserver parcel, it is very likely that I will want to tell that external parcel which of my local files I want it to serve.

It seems there are two sets of goals, then:

  • I want to manage which file parcels I load from upstream, with default being "all of them"
  • I want to manage which local files I want to attach to the upstream parcel, with the default being "none of them"

Here are some example use-cases:

Upstream Parcel | Local Files | Desired Outcome
index.html      | index.html  | local index.html
-               | my.js       | local my.js
style.css       | -           | upstream style.css
README.md       | -           | I don't want the upstream, but I don't want to override

The last case illustrates an intent to "unset" a file that appears in the upstream parcel without replacing it with a local file. E.g. I just don't want a README.md at all.

Because a handler can easily have hundreds of files attached, it does not seem like manually building a list would be a good approach.

Possible solutions:

  1. Default to local files only, and require explicit inclusions for the parcel: files = ['local.txt', 'parcel:README.md'].
  2. Default to parcel files, but whenever files is specified, use only local files (e.g. all parcels or all locals, no mixing)
  3. Local files are additive. The parcel files are all turned on by default, and any files = [] appends to the list. When duplicates occur, the local overrides the parcel file.
  4. Provide a parcelFiles directive in addition to files: files = ['mylocal.txt'] and parcelFiles = ['README.md'], with the result being the union of the two (with naming conflicts favoring local)
  5. Provide a omitParcelFiles directive that removes parcel files, and default to all parcel files. Then use the same strategy as #4 to resolve

The issue I have is that we don't want this process to be burdensome to the user. All of these feel either burdensome or too limiting.

hippofactory receives TLS verification error

With hippo listening on port 5000 and bindle-server listening on port 8080, I see this:

><> hippofactory --insecure -v production --hippo-username admin --hippo-password Passw0rd! --hippo-url https://127.0.0.1:5000 --server http://127.0.0.1:8080/v1 .
Error: Error creating request: reqwest::Error { kind: Request, url: Url { scheme: "https", username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(5000), path: "/account/createtoken", query: None, fragment: None }, source: hyper::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 336130315, library: "SSL routines", function: "ssl3_get_record", reason: "wrong version number", file: "ssl/record/ssl3_record.c", line: 331 }]))) }, X509VerifyResult { code: 0, error: "ok" })) }

Caused by:
    0: error sending request for url (https://127.0.0.1:5000/account/createtoken): error trying to connect: error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:331:
    1: error trying to connect: error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:331:
    2: error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:331:
    3: error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl/record/ssl3_record.c:331:

Any idea what might be causing this issue, or how I can resolve this?

`hippo push -v dev --hippo-username foo .`: wrong version name

If I try to set something like HIPPO_USERNAME=itowlson, the output of hippo push -v dev . will still result in a branch name of 0.3.0-bacongobbler-2021.... I think this is because bacongobbler is the owner of the application.

If I'm authenticating with Hippo as user "itowlson", I'd like to push development branches of the application to the "itowlson" branch. That way I can collaborate with "bacongobbler" on a separate development branch.

Structuring command packages

I was looking at implementing hippo init (which I will probably call hippo new) as described in #31. In so doing, I was going to try to figure out what the best pattern for subcommand definitions is.

Approach 1: Structs and Derive Macros

I am wondering what people think of having a package structure like hippo::command::push which would have a Push struct that would describe the command itself and then would also have the command runner. Clap has a derive macro that might work for this: https://docs.rs/clap/3.0.0-beta.2/clap/#using-derive-macros
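
A rough sketch of what that might look like (hypothetical struct and field names; recent stable clap releases spell the derive Parser rather than the beta-era Clap, and the -s/--server/$BINDLE_URL names are taken from the existing push flags):

use clap::Parser;
use std::path::PathBuf;

/// Push the application described by HIPPOFACTS to Bindle and Hippo.
#[derive(Parser)]
pub struct Push {
    /// Path to the HIPPOFACTS file or the directory containing it
    pub hippofacts_path: PathBuf,

    /// Bindle server URL
    #[clap(short = 's', long = "server", env = "BINDLE_URL")]
    pub bindle_url: String,
}

impl Push {
    pub fn execute(&self) -> anyhow::Result<()> {
        // The logic that currently lives in the push command runner would go here.
        println!("pushing {}", self.hippofacts_path.display());
        Ok(())
    }
}

fn main() -> anyhow::Result<()> {
    Push::parse().execute()
}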

Approach 2: Trait for returning definition and executor

If we're not keen on the struct method, another method would be to define a trait that looked something like this:

trait HippoCommand {
  fn definition() -> clap::App;
  fn execute(args: &clap::ArgMatches) -> anyhow::Result<()>;
}

Any thoughts?

Uninformative `Invoice was not found` if an external reference fails

If a HIPPOFACTS has an external whose invoice doesn't exist on the Bindle server, the CLI reports Error: Invoice was not found. What? Why are you looking for an invoice? Which one wasn't found? We need to explain the context and say which invoice failed (or ideally list all invoices that failed).

I thought we had fixed this but evidently not!

Make a module available for import

In order for a HIPPOFACTS [[handler]] to use an external module, that module must be annotated with wagi_handler_id:

[[parcel]]
[parcel.label]
name = 'fileserver.gr.wasm'
[parcel.label.annotations]
wagi_handler_id = "static"

Hippofactory does not yet provide a way to do this - if you are building a 'library' module then you need to either bindle it directly or hippofactory -a prepare it and edit the invoice.toml.

It would be useful if hippofactory had a way to refer to modules such that:

  • They are not mapped to a route (because they are for use by other applications, not necessarily as addressable applications themselves)
  • They receive the wagi_handler_id annotation

We could reuse the existing handler table for this, but with a directive in place of a route, a la:

[[handler]]
exportId = "static"
name = "fileserver.gr.wasm"
files = []

Or we could define a new table, e.g. export:

[[export]]
id = "static"
name = "fileserver.gr.wasm"
files = []

In either case it could be mapped to a route as well (in the same table in the first syntax, by also having a handler entry in the second syntax). Though a module that belongs to an application and also exports itself as a library function feels like a recipe for disaster, and we could ban it if we wanted to, even to the extent of distinguishing libraries (specs that contain only exports) from applications (specs that contain only routes).

cc @technosophos @bacongobbler @thomastaylor312 @radu-matei for discussion and thoughts

Proposal: Conventions for README and LICENSE

I would like to propose a convention for Hippo bindles in general, but with support in HIPPOFACTS (as you see fit):

I think we should have "reserved" parcels for README and LICENSE data. We could do this either by treating the parcel name README|LICENSE as special, or by creating readme and license groups.

The purpose of the README would be to provide user-facing information for the person running the bindle. For example, we could take that data and display it in hippo or in a search results page for a fabled Bindle UI.

The purpose of LICENSE is more legal: We want to conventionally specify how to figure out what license is attached to a bindle.

Note that it would be desirable to not specify the text format of the file in the file's name. So a user could choose markdown (README.md), text (README.txt), or another format. Conceivably, we could support multiple file formats if we used groups:

[[group]]
name = "license"

[[parcel]]
label.name = "LICENSE.txt"
label.mediaType = "text/plain"
conditions.memberOf = ["license"]

[[parcel]]
label.name = "LICENSE.pdf"
label.mediaType = "application/why-would-you-do-this"
conditions.memberOf = ["license"]

A user agent could then select the file according to its own criteria

hippo channel add returns an error

$ hippo channel add testing 2269d203-db03-46b0-bf0c-2653c9d6e137

Error: One or more validation errors occurred. {"command": ["The command field is required."], "$.revisionSelectionStrategy": ["The JSON value could not be converted to Hippo.Core.Enums.ChannelRevisionSelectionStrategy. Path: $.revisionSelectionStrategy | LineNumber: 0 | BytePositionInLine: 96."]}

Consider using HIPPO_SERVER_URL instead?

It would be helpful if our environment variables for bindle and hippo followed the same naming convention. Right now we have BINDLE_SERVER_URL and HIPPO_SERVICE_URL, which makes it super easy for a user to get them confused. If they both used SERVER or SERVICE, it would be a lot less likely that the environment variables are declared incorrectly.

Add more detail to error when a handler doesn't exist yet

  1. Run yo-wasm and create a project
  2. Run it with cargo but don't compile it targeting wasm32-wasi --release.
  3. The HIPPOFACTS handler for your default project references a handler that hasn't been built yet: target/wasm32-wasi/release/whalesay.wasm.
  4. Run hippofactory and note that the error doesn't tell you what file it tried to use
$ hippofactory -k HIPPOFACTS
Error: No such file or directory (os error 2)

It would have saved me about 30 minutes of debugging to realize that it was failing to find the compiled handler. 😂

HTTP outbound request configuration

The WAGI specification for bindles states that, if a handler wants to make HTTP requests, it lists the hosts it wants to access in parcel.label.feature.wagi.allowed_hosts. The Hippo CLI doesn't currently provide a means to inject this.

Well, okay, we can specify a corresponding entry in HIPPOFACTS, whack it across into invoice.toml, bish bosh bash, job done.

But... the list of allowed hosts is not necessarily something we can or should nail down at compile time. Imagine, for example, you are building a site for hosting cat videos - a fundamental requirement of the modern Web. Your site uses Azure storage to host the actual videos, because it is modern but not that modern. And you interact with Azure storage via HTTP.

Now your site has a QA instance where the testing department can upload and verify carefully curated test videos (zero cats, one cat, 99 cats, 2^32-1 cats, etc.), and a production instance where the hoi polloi can upload and share their videos. These instances, of course, use different storage accounts.

Which have different host names.

Now at the Hippo configuration level this is easy enough to think about: we could just have stuff in the UI to say what sites each environment is allowed to access. But the WAGI bindle spec, if I've understood it correctly, doesn't allow for that. It expects the set of hosts to be defined in the bindle. Which would mean your QA and Production instances have to be different bindles. Which seems questionable; and if it is true means we need a different solution for HIPPOFACTS so the user doesn't need to run a prep script just to inject the right URL for the planned environment.

So... how should we go about this? Should we remove allowed_hosts from the WAGI feature spec and put it somewhere else? Or at least allow it to be overridden or extended somewhere else? If not, how do we envisage the sort of scenario outlined above? Or have I completely misunderstood the spec?

cc @technosophos

hippo commands without config returns unintuitive error

When hippo isn't running:

><> hippo app add helloworld helloworld
Error: error in reqwest: error sending request for url (https://localhost:5309/api/app): error trying to connect: tcp connect error: Connection refused (os error 111)

With insecure TLS:

><> hippo app add helloworld helloworld
Error: error in reqwest: error sending request for url (https://localhost:5309/api/app): error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1914: (self signed certificate)

Should probably be:

hippo app add helloworld helloworld
Error: No configuration file found. Please log in using `hippo login`

macos-aarch64 support

We need to support compiler targets similar to the Spin CLI. To that end, we need to provide support for M1 Mac users.

`export HIPPO_URL=https://localhost:5309/` returns error

$ export HIPPO_URL=https://localhost:5309/
$ hippo auth register --username admin --password 'Passw0rd!'
Error:

The client expected the environment variable to be the following:

$ export HIPPO_URL=https://localhost:5309

We should sanitize input, removing extra path segments from the base URL.
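
A minimal sketch of that normalization (illustrative only; the function name is hypothetical):

fn normalize_base_url(raw: &str) -> String {
    // Drop any trailing slash so later path joins don't produce "//" segments.
    raw.trim_end_matches('/').to_string()
}

fn main() {
    assert_eq!(
        normalize_base_url("https://localhost:5309/"),
        "https://localhost:5309"
    );
}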
