
coffer's Introduction

Coffer

coffer-logo

Coffer is a multi-backend password store with multiple frontends. It reaches out to Vault, pass, and others to store your most precious secrets.

Usage

CLI

The CLI is built from a number of commands; a quick list can be found below:

Available commands:
  view                     View entries under the specified path, optionally
                           returning only the specified field for each entry
  create                   Create a new entry at the specified path
  set-field                Set a field on the entry at the specified path
  delete-field             Delete a field from the entry at the specified path
  find                     Find and list entries, optionally filtering
  rename                   Rename an entry or directory
  copy                     Copy an entry or directory
  delete                   Delete an entry or directory
  tag                      Add or remove tags from an entry

Configuration

First you must configure coffer and tell it how to reach your preferred backend(s). How you do this depends on which frontend you want to use; please refer to frontends for a list. As an example, here is a config.toml consumed by the CLI frontend:

main_backend = "vault-local"

[[backend]]
type = "vault-kv"
name = "vault-local"
address = "localhost:8200"
mount = "secret"
token = "<vault token>"

The configuration consists of a single main_backend key and any number of [[backend]] sections. Each section must have at least the type and name keys; the remaining keys depend on the backend. For a comprehensive list, refer to backends.

The coffer CLI will look for its configuration file in the following order:

  1. A file passed to the tool using the -c/--config argument
  2. A file passed to the tool using the COFFER_CONFIG environment variable
  3. A config.toml file in the current working directory.
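
For example, both of the following would point the CLI at a config file stored elsewhere (the paths and the exact placement of the flag are illustrative):

$ coffer --config /path/to/my-config.toml view /
$ COFFER_CONFIG=/path/to/my-config.toml coffer view /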

Web API

Coffer also has a Web API. Its documentation can be found here.

The port can be specified via the command line option --port or the environment variable COFFER_SERVER_PORT.

$ stack run coffer-server -- --port=8081

Development

Nix

This is the preferred way. Install Nix according to the Nix download page, then execute:

$ nix develop --extra-experimental-features 'nix-command flakes'

You're now ready to develop coffer using the exact same versions (down to the last bit) of GHC, Cabal, and all the other required tooling. To perform a build, run:

$ cabal build

To run coffer while developing:

$ cabal run coffer -- <args>

A language server is also available. To use it, either start your IDE from a nix shell or set up direnv for your IDE. To install direnv itself, take a look here.

System Packages

If you can't or don't want to install Nix, you can try installing the required packages from your distro's package manager. This is not recommended, supported, or tested. A complete, but possibly incorrect, list (which may need translation to your package manager's naming conventions) can be found below:

ghc
cabal
haskell-language-server (optional)
stack
stylish-haskell
zlib

Testing

The package contains 4 test suites:

  1. A unit test suite for the coffer library, coffer:test:test.
  2. An integration test suite for the Web API, coffer:test:server-integration.
  3. A set of golden tests written in bats for the coffer executable.
  4. A Haskell doctest suite, coffer:test:doctests.

Use make test to run all test suites:

$ make test

$ make test \
    TEST_ARGUMENTS='--pattern "test name"' \
    BATSFILTER='test name' \
    DOCTEST_ARGUMENTS='--verbose'

You can also run an individual test suite:

$ make test-unit TEST_ARGUMENTS='--pattern "test name"'

$ make server-integration TEST_ARGUMENTS='--pattern "test name"'

$ make bats BATSFILTER='test name'

$ make doctest DOCTEST_ARGUMENTS='--verbose'

coffer's Issues

Return proper status codes when an error occurs

Clarification and motivation

When a command fails, the Web API returns a 200 status code with an error message in the body. For example, creating the same entry twice results in:

$ curl -s -D /dev/stderr \
  -H 'token: root' \
  'localhost:8081/api/v1/content/create?path=/ex1' \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '[{"a": "b"}, {"c": "d"}]' | jq

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Fri, 29 Apr 2022 10:38:15 GMT
Server: Warp/3.3.18
Content-Type: application/json;charset=utf-8

{
  "contents": {
    "unEntryPath": [
      "ex1"
    ]
  },
  "tag": "CREntryAlreadyExists"
}

We should return an appropriate status code depending on the error (in this specific case, it should be a 409 Conflict) with an error message in the body:

{
  "error": "An entry already exists at <path>",
  "code": 123
}

Each type of error should also be associated with a numeric code.
This will allow the frontend to know exactly what went wrong and act appropriately.
For example, when a copy operation fails, it may return something like:

409 Conflict
{
  "error": "An entry already exists at <path>",
  "code": 123
}

Or:

409 Conflict
{
  "error": "<path> is a directory",
  "code": 456
}

And the frontend will be able to match on the numeric code and decide what to do next.

We should create a markdown table in /docs with all the error codes and what they represent.
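
A rough sketch of how the mapping could look on the servant side (the CofferServerError type and the specific codes below are illustrative, not the actual implementation):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as A
import Data.Aeson ((.=))
import Servant (ServerError (..), err409)

-- Hypothetical wire format: a human-readable message plus a numeric code
-- the frontend can match on.
data CofferServerError = CofferServerError
  { cseError :: String
  , cseCode  :: Int
  }

instance A.ToJSON CofferServerError where
  toJSON e = A.object ["error" .= cseError e, "code" .= cseCode e]

-- Map a domain-level failure (CREntryAlreadyExists in the example above)
-- to a 409 Conflict with a JSON body; the code 123 mirrors the
-- illustrative payload in this issue.
entryAlreadyExists :: String -> ServerError
entryAlreadyExists path = err409
  { errBody = A.encode (CofferServerError ("An entry already exists at " <> path) 123) }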

Acceptance criteria

  • Endpoints return an appropriate status code when they fail
  • The response body contains an error message and a numeric code
  • We have a markdown table explaining what the numeric error codes represent

/tag endpoint: split into POST and DELETE

Clarification and motivation

Right now, we have this endpoint:

POST /tag?delete

We should split it into two endpoints:

POST /tag
DELETE /tag

Acceptance criteria

  • We have two separate endpoints, one for creating and one for deleting tags
  • There is no delete query parameter

Make path's charset rules backend-specific

Clarification and motivation

Currently coffer restricts all entry paths to ['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9'] ++ "-_", while Unix/Linux allows many more characters. This may cause issues with file-based backends like pass. Rather than checking globally, a backend that has limitations should enforce them at its own boundary.
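
A rough sketch of what "enforce at the boundary" could look like (the PathPolicy type and the per-backend policies below are made up for illustration):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T

-- Hypothetical: each backend declares which characters it can store in a
-- path segment, instead of coffer enforcing one global character set.
newtype PathPolicy = PathPolicy { allowedChar :: Char -> Bool }

vaultPolicy :: PathPolicy
vaultPolicy = PathPolicy $ \c ->
  c `elem` (['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9'] ++ "-_")

-- pass stores entries as files, so almost anything except '/' and NUL works.
passPolicy :: PathPolicy
passPolicy = PathPolicy $ \c -> c /= '/' && c /= '\0'

validateSegment :: PathPolicy -> T.Text -> Either T.Text T.Text
validateSegment policy segment
  | T.all (allowedChar policy) segment = Right segment
  | otherwise = Left ("this backend cannot store the path segment " <> T.pack (show segment))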

Acceptance criteria

Either allow more chars or close as wontfix.

Unify names

Clarification and motivation

The design doc uses "field name" and "field contents". The CLI module followed this naming convention. Some of the core modules were developed in parallel and use "field key" and "field value" respectively. So there's a disconnect there.

We should rename the FieldKey and FieldValue types (and all related local variables and other identifiers, error messages, etc) to be consistent with the design doc.

Acceptance criteria

The terms "field name" and "field contents" are used consistently across the codebase.

Support multiple backends from the CLI

This issue depends on #15.

Right now, the CLI assumes there's only 1 backend. It will need to be refactored to allow the user to specify which backend they want to interact with.

@MagicRB has suggested the following syntax:

coffer view mybackend#some/path

Optimize deletion of entries in `renameCmd`

Clarification and motivation

In the implementation of renameCmd, we use deleteCmd to delete any old entries.

However, reusing deleteCmd means we have to make an otherwise unnecessary call to getEntryOrDirThrow.

It would be more efficient to directly call deleteSecret instead of deleteCmd.
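
A rough sketch of the intended shape (the signature is guessed, assuming a polysemy-style BackendEffect; the function names follow the ones mentioned above):

-- Hypothetical: after the new entries have been written, delete the old ones
-- directly. We already know exactly which entries existed (we just read them
-- in order to copy them), so deleteCmd's extra getEntryOrDirThrow lookup is
-- unnecessary.
deleteOldEntries :: Member BackendEffect r => [EntryPath] -> Sem r ()
deleteOldEntries = mapM_ deleteSecret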

Acceptance criteria

renameCmd does not use deleteCmd. Instead, it calls deleteSecret on every entry that needs to be deleted.

[BUG] The output of `set-field` is malformed when fieldcontents is multi-line

Description

When you use set-field to create a new field/update an existing field with a fieldcontents that contains multiple lines, the command's output looks "messy" (see example below). The new fieldcontents is split across multiple lines and mixed together with the success message.

As part of this issue, let's also take the time to make sure every single command behaves correctly in the presence of multi-line fieldcontents, and write the appropriate tests.

To Reproduce

Steps to reproduce the behavior:

$ coffer create /path
$ coffer set-field /path user "$(echo -e "first\n\nsecond")"
[SUCCESS] Set field 'user' to 'first

second' (public) at '/path'.

Expected behavior

Expected the new fieldcontents to be clearly separated from the success message. Maybe something like this:

[SUCCESS] Set field 'user' (public) at '/path' to:
first

second


Move config to a config file

@MagicRB has written a Config module with the relevant data types & decoders.

In the diogo/cli branch, we are still hardcoding the configuration in Main.

We should refactor the CLI so that the config is read from a file rather than being hardcoded.

It should also be noted that the Config data types support multiple backends, but the CLI assumes there's only 1 backend. The CLI will have to be changed to support multiple backends, but this can be done in a separate issue (#16). For now, let's just assume the user intends to use the first backend from the config file.

/create endpoint: accept an "entry object" instead of a 2-element array

This issue is blocked by #37.

Clarification and motivation

Right now the /create endpoint takes a 2-element json array. The 1st element is a json object containing public fields, and the 2nd element is another json object containing private fields.

The rest of the entry's info is in the query string.

Example:

curl -s -D /dev/stderr \
  -H 'token: root' \
  'localhost:8081/api/v1/content/create?path=/ex1&force&tag=tag1&tag=tag2' \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '[{"some-public-field": "a"}, {"some-private-field": "b"}]' | jq

Instead, the endpoint should accept a json object representing the entire "Entry" entity, e.g.

{
  "path": "/ex1",
  "fields": {
    "some-public-field": {
      "contents": "a",
      "visibility": "public"
    },
    "some-private-field": {
      "contents": "b",
      "visibility": "private"
    }
  },
  "tags": [
    "tag1",
    "tag2"
  ]
}

Acceptance criteria

  • The /create endpoint accepts a json object representing the "Entry" entity

Run requests in parallel

Most commands need to retrieve all entries descending from either the root directory or some other directory.
When we have hundreds of entries stored in vault, this will mean doing hundreds of requests (1 per directory + 1 per entry).

Unfortunately, Vault's API does not support batch requests.

At the moment, these requests are done sequentially. We'd see a massive speedup if they were run concurrently.

Commands where this could be applied:

  • view / find: read directories/entries concurrently
  • rename / copy: read / write entries
  • rename / delete --recursive: delete entries
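
As a rough illustration of the concurrent read path (using mapConcurrently from the async package; how this integrates with coffer's effect stack is left open):

import Control.Concurrent.Async (mapConcurrently)

-- Sequential: one HTTP round-trip at a time.
fetchAll :: (path -> IO entry) -> [path] -> IO [entry]
fetchAll readEntry = mapM readEntry

-- Concurrent: all round-trips in flight at once. Since Vault has no batch
-- API, client-side concurrency is the only way to amortize the latency of
-- hundreds of small requests.
fetchAllConcurrently :: (path -> IO entry) -> [path] -> IO [entry]
fetchAllConcurrently readEntry = mapConcurrently readEntry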

Make mocha tests independent from each other

Clarification and motivation

Right now, the mocha tests must be run in a specific order.
For example, the test for the /view endpoint assumes some entries were created in previous tests.

We should rewrite these tests such that they can be run independently from each other, like we did with the bats tests.

Acceptance criteria

  • The mocha tests can be run independently from each other

Qualified paths in `coffer` messages

Clarification and motivation

In #18 I added support for qualified paths (we can specify which backend a given path belongs to), but both success and error messages currently show only the path, without the backend:

coffer copy 'b1#/a' 'b2#/a'  
[SUCCESS] Copied '/a/b' to '/a/b'.

Instead, it should look like this:

coffer copy 'b1#/a' 'b2#/a'  
[SUCCESS] Copied 'b1#/a/b' to 'b2#/a/b'.

This issue is blocked by #18 and #21.

Acceptance criteria

All coffer messages show qualified paths.

Allow specifying the config file via an environment variable

Clarification and motivation

In #19, we're adding a --config option to allow the user to specify a different path for the config file.

We should also allow the user to do this via an environment variable, e.g. COFFER_CONFIG.
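
A minimal sketch of the resolution logic (illustrative only):

import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)
import System.Environment (lookupEnv)

-- Hypothetical resolution order: the -c/--config option wins, then
-- COFFER_CONFIG, then the default ./config.toml.
resolveConfigPath :: Maybe FilePath -> IO FilePath
resolveConfigPath cliOption = do
  envOption <- lookupEnv "COFFER_CONFIG"
  pure (fromMaybe "config.toml" (cliOption <|> envOption))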

When this is done, we should be able to do this:

  1. Replace all the tests/golden/**-cmd/config.toml files with a single tests/golden/default-config.toml
  2. Set the COFFER_CONFIG env var in the setup function in tests/golden/helpers.bash to make all tests use tests/golden/default-config.toml by default

Notes:

  • The CLI option should have precedence over the env var.

This issue is blocked by #21 and #41

Acceptance criteria

  • The config file can be specified via an env var
  • The CLI option has precedence over the env var
  • All the config files in tests/golden have been replaced by a single config file.

Merge `--filter` and `--filter-field` into one option in core, CLI and web API

Clarification and motivation

@dcastro: I think the --filter and --filter-field options can be merged into a single --filter option.

e.g.:

--filter name~vault               <- filter entries by entry name
--filter url:value~google.com     <- filter entries by field value

Multiple --filter options in the CLI should probably all apply at once, and the same goes for filter[]=... in the web API.

Acceptance criteria

  • core has them merged
  • CLI has them merged
  • web API has them merged

Remote Backend

Clarification and motivation

A special backend which exposes a remote coffer instance's backend (over HTTP) as a local backend. The use case is having a remote coffer instance which accesses some internal service, for example Vault, and using it from the CLI. It would essentially be equivalent to the current web frontend, but in the terminal.

Acceptance criteria

New functional remote backend and associated tests.

Prettify JSON instances

This issue is blocked by #37. It would probably be a good idea to do #83 first.

Clarification and motivation

The JSON representation of the web API leaks a lot of internal details about our data structures. For example, the /view endpoint leaks a lot of Haskell field names, such as dEntries and unEntryPath, and Haskell constructor names, such as VRDirectory:

{
  "contents": {
    "dEntries": [],
    "dSubdirs": {
      "dir": {
        "dEntries": [
          {
            "ePath": {
              "unEntryPath": [
                "dir",
                "entry3"
              ]
            },
            "eFields": {
              "user": {
                "fValue": "diogo",
                "fVisibility": "public",
                "fDateModified": "2022-04-28T17:21:56.774631254Z"
              }
            },
            "eDateModified": "2022-04-28T17:21:56.774631254Z",
            "eMasterField": null,
            "eTags": []
          }
        ],
        "dSubdirs": {}
      }
    }
  },
  "tag": "VRDirectory"
}

In some cases, it's just a matter of using the right functions for renaming fields (e.g. with the aeson-casing package):

deriveJSON (aesonPrefix camelCase) ''TypeName

In other cases, the instances should be handcrafted. E.g., the entry path should be rendered as "/dir/entry3" rather than:

"unEntryPath": [
  "dir",
  "entry3"
]
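
For instance, a handcrafted instance for the path type could render it as a single slash-separated string (a sketch only; the real EntryPath may look different):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as A
import qualified Data.Text as T

-- Assuming the path is a list of segments, as the JSON above suggests.
newtype EntryPath = EntryPath [T.Text]

instance A.ToJSON EntryPath where
  -- Renders as "/dir/entry3" instead of {"unEntryPath": ["dir", "entry3"]}.
  toJSON (EntryPath segments) = A.String ("/" <> T.intercalate "/" segments)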

Acceptance criteria

  • The ToJSON instances don't print any Haskell-specific details.
  • I'm not sure if the FromJSON instances are needed or used at this point. If they are needed, ToJSON and FromJSON should roundtrip.
  • For handcrafted instances, we should have roundtripping tests.

Add support for specifying "default fields"

In #1, we'll implement a way of retrieving a single field from a single entry.

It would be convenient to not be forced to specify which field we want to retrieve all the time.
In most cases, most entries will have a "default field" that we want to copy (e.g. password or token).

We should let the user specify which field they want to be the default field (in coffer create, via CLI options, via the "editor mode", or via the web API).

Questions to be addressed:

  • What happens when the default field is deleted?
  • How would the user change which field is the default field for a given entry?

Use `nyan-interpolation`

Clarification and motivation

In Main.hs, we construct user-facing messages such as:

            printSuccess $
              "Set field '" +| fieldName |+
              "' to '" +| (field ^. E.value) |+
              "' (" +| (field ^. E.visibility) |+
              ") at '" +| entry ^. E.path |+ "'."

This would look less convoluted if we used some kind of interpolation.
Interpolation would also make the code less prone to errors such as forgetting to open/close a single quote '.

We recently released nyan-interpolation, let's use it here.

I haven't checked if these quasiquotes produced by nyan-interpolation could be embedded in Builder expressions (I suspect they can). If so, we should also investigate whether we could use it to simplify CLI.PrettyPrint.

Acceptance criteria

We've used interpolation where possible.

Allow specifying a different config file

We should support a "global option" (an option that applies to all CLI commands), like -c/--config, that allows users to specify a different config file than the default (./config.toml).
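
A minimal optparse-applicative sketch of such an option (illustrative; not the actual CLI.Parser code):

import Options.Applicative

-- Hypothetical global option, to be combined with the existing command parser.
configOption :: Parser FilePath
configOption = strOption
  (  long "config"
  <> short 'c'
  <> metavar "FILE"
  <> value "config.toml"
  <> showDefault
  <> help "Path to the configuration file"
  )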

Read backend configs from a file

This issue is blocked by #37.

Clarification and motivation

Right now the Web API takes a vault token via a header. This is a problem on two levels:

  • It can only work with vault backends (and not with, e.g., pass backends)
  • We shouldn't expect the client to tell the server which backend it needs to connect to. The server should read its own config from some config.toml.

The location of the config file should be configurable via a command line option or an env variable (just like the CLI)

The Web API should take qualified paths. The client can tell the API which backend to connect to by using a qualified path.

Acceptance criteria

  • The Web API reads its config from a file
  • The config filepath is configurable
  • The Web API takes qualified paths

Improve error handling

  1. The exception handler in Backend.Vault.Kv swallows a lot of info about thrown exceptions.
          \case FailureResponse _request response ->
                    case statusCode $ responseStatusCode response of
                      404 -> Nothing
                      e -> Just $ OtherError (T.pack $ show e)
                DecodeFailure text response -> Just MarshallingFailed
                UnsupportedContentType mediaType response -> Just MarshallingFailed
                InvalidContentTypeHeader response -> Just MarshallingFailed
                ConnectionError exception -> Just ConnectError
  2. In a few places we're also throwing MarshallingFailed without specifying why it failed (what we expected to receive, what we actually received).
        case response ^. I.ddata . at ("version" :: T.Text) of
          Just (A.Number i) -> maybe (throw MarshallingFailed) pure (S.toBoundedInteger i)
          _ -> throw MarshallingFailed
        maybeThrow = maybe (throw MarshallingFailed) pure
  3. We also throw OtherError "404" without specifying which request failed.
throw $ OtherError "404"
  4. We use eitherToMaybe, which swallows error messages.

This makes exceptions extremely hard to troubleshoot.

We should make sure as much info as possible is preserved and displayed to the user.

Additionally, we should write a Buildable instance for CofferError and change the CLI to pretty-print error messages.
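
A rough sketch of that Buildable instance (the constructors shown here are an illustrative subset, not the real definition):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import Fmt (Buildable (..), (+|), (|+))

data CofferError
  = ConnectError
  | MarshallingFailed T.Text   -- what we expected vs. what we actually received
  | OtherError T.Text

instance Buildable CofferError where
  build ConnectError = "Could not connect to the backend."
  build (MarshallingFailed details) =
    "Failed to decode the backend's response: " +| details |+ ""
  build (OtherError msg) = "" +| msg |+ ""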

Acceptance Criteria

  • Exception details are preserved
  • The CLI pretty-prints exceptions

Show strings escaped in error messages

Clarification and motivation

Error messages about invalid characters in paths/field names/tags should show the string escaped.

Imagine the user copy-pastes some text that happens to contain the "zero width space" Unicode character (\xE2\x80\x8B in UTF-8 hex notation), and uses that text in the path in coffer create.

This character is completely invisible, but coffer will (correctly) reject it.

At the moment, because we don't escape it, this is what the user will see:

$ coffer create "$(echo "hello\xE2\x80\x8Bthere")"
Invalid entry path: 'hellothere'.
Path segments can only contain the following characters: 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_'


Usage: coffer create ENTRYPATH [-e|--edit] [-f|--force] [--tag TAG] 
                     [--field NAME=CONTENT] [--privatefield NAME=CONTENT]
  Create a new entry at the specified path

This will just leave the user scratching their head - we're telling them the path has invalid characters, but all the characters shown on the screen are valid!

To avoid this, we should display invalid paths/field names/tags with all characters escaped.
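
One cheap way to do that (a sketch; the real implementation may want finer control over which characters get escaped) is to reuse show, which escapes non-printable and non-ASCII characters:

import qualified Data.Text as T

-- show "hello\8203there" produces "\"hello\\8203there\"", making the
-- zero-width space from the example above visible.
escapeForErrorMessage :: T.Text -> T.Text
escapeForErrorMessage = T.pack . show . T.unpack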

Acceptance criteria

  • Invalid paths/field names/tags are displayed with all characters escaped.

Create backend-specific error types

Clarification and motivation

As of #60, we're using the following definition for CofferError:

data CofferError
  = ServantError ClientError
  | BackendNotFound BackendName
  | OtherError Builder

There are 2 issues here:

  1. In the Backend.Vault.Kv module, we're constructing OtherError with user-facing error messages as strings. We should follow the same pattern we're using everywhere else and construct a proper ADT + a Buildable instance.
  2. CofferError is meant to be backend-agnostic, but it assumes that all backends will (possibly) throw a ServantError. However, this assumption is not true - the pass backend, for example, does not use servant.

The proposed solution is to refactor the CofferError such that it can wrap any backend-specific error:

data CofferError where
  ...
  BackendError :: Buildable err => err -> CofferError

And then create backend-specific error types, with proper constructors, and move the ServantError constructor:

data VaultError
  = ServantError ClientError
  | FieldMetadataNotFound EntryPath FieldName
  | ...

Acceptance criteria

  1. We're using ADTs + Buildable to model errors, and not just building ad-hoc strings with error messages.
  2. CofferError is backend-agnostic, and the haddocks make this explicit.
  3. We have backend-specific error types (at the moment, it's just VaultError)

Come up with a description for the --help command

Clarification and motivation

Come up with a short description (and, optionally, an extended description) for the --help command.

Right now, it looks like this:

$ coffer --help

TODO: coffer description goes here

Usage: coffer COMMAND
  TODO: coffer description goes here

Available options:
  -h,--help                Show this help text

Available commands:
  view                     View entries under the specified path, optionally
                           returning only the specified field for each entry
  create                   Create a new entry at the specified path
  set-field                Set a field on the entry at the specified path
  delete-field             Delete a field from the entry at the specified path
  find                     Find and list entries, optionally filtering
  rename                   Rename an entry or directory
  copy                     Copy an entry or directory
  delete                   Delete an entry or directory
  tag                      Add or remove tags from an entry

Acceptance criteria

The --help command prints a short description of coffer and, optionally, an extended description.

Add a `bats` test suite

We should set up a bats test suite for end-to-end tests for the CLI.


Once we do that, we should also delete the doctests in Backend.Vault.Kv.Internal.Routes.

-- $setup
-- >>> import Network.HTTP.Client.TLS (tlsManagerSettings)
-- >>> import Network.HTTP.Client     (newManager)
-- >>> import Servant.Client         (BaseUrl (..), Scheme (..), mkClientEnv)
--
-- >>> let vaultToken   = "TOKEN"
-- >>> let vaultAddress = "vault.example.org"
--
-- >>>     manager     <- newManager tlsManagerSettings
-- >>> let clientEnv   = mkClientEnv manager (BaseUrl Https vaultAddress 8200 "")

Doctests should not depend on external services, we should be able to run them in isolation without any additional setup. So bats is a much better tool for this purpose than doctest.

Make the port number configurable

Clarification and motivation

The port that servant will listen on should be configurable, via a command-line option or an env variable.

The command-line option should take precedence over the env variable.

Acceptance criteria

  • The port number is configurable

Implement Web API

Clarification and motivation

We should expose the same functionality as the CLI via a Web API.

We can use servant for this.

Acceptance criteria

We can spin up a web server exposing the same functionality as the coffer CLI.

Avoid creating many connection managers

In runVaultIO, we're creating a new connection manager on every BackendEffect action:

    env <-
      case url of
        (BaseUrl Http _ _ _) -> do
          manager <- embed $ newManager defaultManagerSettings
          pure $ mkClientEnv manager url
        (BaseUrl Https _ _ _) -> do
          manager <- embed $ newManager tlsManagerSettings
          pure $ mkClientEnv manager url

This can be observed by adding a trace statement:

          manager <- embed $ do
            traceM "Creating manager"
            newManager defaultManagerSettings

And then running coffer view / (after creating a few entries):

$ cabal run exe:coffer -- view /

"Creating manager"
"Creating manager"
"Creating manager"
"Creating manager"
"Creating manager"
"Creating manager"
"Creating manager"
"Creating manager"
/
  test/
    ...

We should refactor this so only 1 connection manager is created.
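
A sketch of the intended shape (assuming the ClientEnv can be built once, up front, and then threaded through runVaultIO):

import Network.HTTP.Client (defaultManagerSettings, newManager)
import Network.HTTP.Client.TLS (tlsManagerSettings)
import Servant.Client (BaseUrl (..), ClientEnv, Scheme (..), mkClientEnv)

-- Hypothetical: create the manager exactly once, instead of on every
-- BackendEffect action.
makeClientEnv :: BaseUrl -> IO ClientEnv
makeClientEnv url = do
  manager <- case baseUrlScheme url of
    Http  -> newManager defaultManagerSettings
    Https -> newManager tlsManagerSettings
  pure (mkClientEnv manager url)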

Setup project

We need the following:

  • a buildkite pipeline
    • reuse checks
    • trailing-whitespace-check
    • check cabal files match the stack files
    • weeder
    • run bats tests
    • xrefcheck
    • run doctests
  • code_style.md document
  • CONTRIBUTING.md
  • makefiles
  • support for cabal & stack
  • tasty test suite
  • github issue/pull request templates
  • license files
  • stylish-haskell
  • hlint
  • .editorconfig

See templates here: https://github.com/serokell/metatemplates

Let's wait until #7 is merged before we add code-formatting tools (stylish-haskell and hlint)

Automate spinning up/killing vault instances

Clarification and motivation

Right now, in order to run the bats tests, developers have to manually start vault (with -dev and -dev-root-token-id "root") and then run make bats.

When we write tests for #16, some tests will require having two instances of vault running, so the process will require even more steps.

I think we should modify the make bats target so that it does all the required setup/teardown automatically (and the logic should probably be moved to a bash script).

The script should do something like this:

  • Update the git submodules
  • Start two instances of vault in dev mode
    • with -dev-root-token-id "root"
    • listening on non-default ports to avoid clashing with the developer's own instance (e.g. 8210 and 8220).
  • Run bats
  • Kill both vault instances regardless of whether the tests succeeded or failed

Acceptance criteria

  • The make bats target automatically spins up and kills the necessary vault instances.
  • If a user already has an instance running on the default port 8200, make bats does not interfere with that.

/delete endpoint: Use the `DELETE` verb

Clarification and motivation

The /delete endpoint uses the POST verb even though it has DELETE semantics. We should use the DELETE verb instead.

Acceptance criteria

  • The /delete endpoint uses the DELETE verb instead of POST

Add Readme and project description

Clarification and motivation

We should fill in the project's README.
This should include:

  • A short description of what coffer does + will do in the future.
  • Which problems with the existing state of the art we're trying to address.
  • How to install, configure, and use the tool
  • How to set up a local dev environment
    • This includes how to set up a local vault instance
    • How to run unit & golden tests

We should also:

  • write a short description in the description field of package.yaml.
  • add a description to the CLI's --help command (see TODOs in CLI.Parser.parserInfo)

Acceptance criteria

The following have been updated: README.md, package.yaml and CLI.Parser.parserInfo.

Rename effects in the BackendEffect from `Secret` to `Entry`

readSecret and so on don't precisely reflect the terminology we have agreed on. Changing them now would be a rather large change which would affect the whole codebase. Therefore we should wait until the "big merge" before we tackle this.

Allow retrieving a specific field from a specific entry

As per the MVP spec, coffer view displays a tree of 1 or more entries found under a specified path.

However, we'd like to have a way of outputting a single field from a single entry, so that it can be more conveniently copied to the clipboard (e.g. by piping to wl-copy).

We should discuss what the command/options should look like in order to support this feature.

Reuse existing megaparsec parsers

Clarification and motivation

The Web API reimplements some parsing logic to parse things like Filter (see the instance FromHttpApiData Filter).

We should reuse the same parsers we used in the CLI modules.

Additionally, the logic to parse a QualifiedPath is duplicated in FromHttpApiData (QualifiedPath path) and in CLI.Parser.readQualifiedEntryPath. We should try to have a single mkQualifiedPath function that is reused in the FromHttpApiData instance and in the CLI parser.
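
A sketch of the shared shape (the QualifiedPath stand-in and the naive '#' split below are purely illustrative; the real parser is megaparsec-based):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T

-- Stand-in for the real QualifiedPath type.
data QualifiedPath = QualifiedPath
  { qpBackend :: Maybe T.Text
  , qpPath    :: [T.Text]
  }

-- The single entry point both the FromHttpApiData instance and the CLI's
-- readQualifiedEntryPath would delegate to.
mkQualifiedPath :: T.Text -> Either T.Text QualifiedPath
mkQualifiedPath input =
  case T.splitOn "#" input of
    [path]          -> Right (QualifiedPath Nothing (segments path))
    [backend, path] -> Right (QualifiedPath (Just backend) (segments path))
    _               -> Left "expected at most one '#' in a qualified path"
  where
    segments = filter (not . T.null) . T.splitOn "/"

With that in place, parseUrlPiece and the optparse-applicative reader become thin wrappers around mkQualifiedPath.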

Acceptance criteria

  • The parsing logic is not duplicated.

Add makefile target to run the Web API tests

Clarification and motivation

Add a makefile target so devs can easily run the web API tests, without having to do any manual setup/cleanup.

The target should take care of installing any necessary npm packages (using npm install), spinning up a servant server, killing it afterwards, etc.

Document the fact that the tests expect npm to be installed on the dev's machine.

Acceptance criteria

  • Devs can use make mocha to run the Web API tests

Add a "dry run" mode

Clarification and motivation

The copy/rename/delete commands are capable of performing multiple operations at once. Those operations can also be destructive.

I think it would be useful to add a --dry-run/-d mode for these operations, so the user can see what exactly will be created/deleted before they decide whether they want to actually go ahead with the operation.

Acceptance criteria

  • The copy/rename/delete commands have a --dry-run/-d mode that lists all the operations that the command entails.

[BUG] `run-bats-tests.sh` doesn't start second vault instance properly

Description

Sometimes when you run the bats tests with make bats, the second instance of vault doesn't start, and because of this some tests fail.

To Reproduce

Steps to reproduce the behavior:

  1. vault server -dev -dev-root-token-id="root" -dev-listen-address="localhost:8209"
  2. make bats

Expected behavior

The second instance of vault starts properly in each make bats run


[BUG] `Bats` test are failing.

Description

After resolving #15, the bats tests are failing because there is no config.toml in the tests directory.

To Reproduce

Steps to reproduce the behavior:

  1. stack install
  2. make bats

Expected behavior

bats tests are passing.


Pass/GPG Backend

Clarification and motivation

Implement a new backend, like the already existing one for Vault.Kv, that uses either pass or gpg directly. The question is whether to use pass or gpg: they're essentially equivalent, but if we went for gpg we'd avoid a needless dependency on pass. I'm not saying that we shouldn't maintain pass compatibility, but we can work around it if we want.

Acceptance criteria

New functional backend on top of pass/gpg, associated tests, and compatibility with pass.

UPDATE: having now looked at the source code of pass, I refuse to touch GPG with a 20-foot pole. I personally don't mind the extra dependency, so I'll just start wrapping pass.

`xrefcheck` should ignore `reuse` links in `CONTRIBUTING.md`

Clarification and motivation

At the moment the xrefcheck task in the pipeline is failing because it can't access some reuse links in CONTRIBUTING.md (it fails with a response timeout), even though we have no problems accessing these links ourselves. I think we should temporarily tell xrefcheck not to check those links until this problem is solved.

Acceptance criteria

reuse links in CONTRIBUTING.md are ignored by xrefcheck and this task in the pipeline passes.

Run mocha tests in the pipeline

Clarification and motivation

We should set up the pipeline to automatically run the mocha tests.

Acceptance criteria

  • The pipeline has a step for running the mocha tests

Reset ANSI control sequences after printing

Clarification and motivation

The contents of a field may contain any and all unicode characters.

This means they can also include ANSI control sequences to, for example, turn the background red (\x1b[41;1m):

$ coffer create /dir/entry1 --field user="$(echo "\x1b[41;1mdiogo")"
$ coffer view /

(screenshot: the coffer view output is rendered on a red background from the field's contents onwards)

This means that one field's control sequences can potentially affect the rest of coffer's output.

$ coffer create /dir/entry1 --field user="$(echo "\x1b[41;1mdiogo")"
$ coffer create /dir/entry2 --field user="leonid"                   
$ coffer view /    

(screenshot: the red background also bleeds into the output for entry2)

To prevent this from happening, we should reset all ANSI control sequences (with \x1b[0m) after printing a field's contents.

We could use the ansi-terminal package to do this.

Because we're printing the field's contents in multiple different places, I think it would make sense to wrap the Text in a newtype, and write a Buildable instance that automatically appends the reset sequence.
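
A sketch of that newtype (names are illustrative; the reset sequence could also come from the ansi-terminal package instead of being hardcoded):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import Fmt (Buildable (..))

-- Whenever a field's contents are rendered, append the "reset all attributes"
-- sequence so stray control codes cannot leak into the rest of the output.
newtype FieldContents = FieldContents T.Text

instance Buildable FieldContents where
  build (FieldContents t) = build t <> build ("\x1b[0m" :: T.Text)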

Acceptance criteria

  • ANSI control sequences are reset after a field's contents is printed on the terminal.

Make the `view` command the default command

There's a good chance view will be the command users use the most.

It would be convenient to not have to specify the command name, and just default to using view when no command name is given.

In other words, these two should behave the same:

$ coffer view path/to/entry
$ coffer path/to/entry
