
A simple, server/less, single-api, multi-backend, ghostly secret-store/key-store for your passwords, ssh-keys and cloud credentials. Ghost isn't real, it's just in your head.

License: Apache License 2.0


ghost (shhhhhh)


ghost aims to provide a secret-store with a single, simple-to-use API supporting multiple storage backends without requiring a server to run.

To that end, ghost supports file-based backends like TinyDB and SQLite. Using other backends means, of course, that they need to be available to ghost, while ghost itself remains stateless.

Currently, ghost supports authenticating only via a passphrase. Authenticating via KMS, GitHub and the likes, might be supported in the future.

Note that beginning with v0.6.1, Python 2.6 is no longer supported.

Alternatives

  • While Vault is truly spectacular and I've been using it for quite a while now, it requires a running server.
  • Credstash is only AWS KMS + DDB based.
  • Keywhiz, like Vault, also requires a server... and let's face it, I ain't gonna run a JVM on my laptop just for that, thank you.
  • Unicreds is based on credstash and, again, only supports KMS + DDB.
  • Sops is complicated to use and is also KMS + DDB based.
  • There's a new project called sstash, but it only supports file-based encryption and isn't intuitive enough, as I see it.
  • Google developed something called Keyczar, but it doesn't seem to be under active development.
  • Pinterest has a seemingly interesting project called Knox. Knox requires a running server and doesn't support multiple backends. It also seems more developer-oriented than anything else.
  • Lyft has a really nice solution called Confidant, which also has a nice UI to go along with it. It authenticates via KMS, stores keys in DDB, and requires a running server.

Installation

Ghost supports Linux, Windows and OSX on Python 2.7 and 3.3+.

pip install ghost

For dev:

pip install https://github.com/nir0s/ghost/archive/master.tar.gz

Usage

CLI

$ ghost
Usage: ghost [OPTIONS] COMMAND [ARGS]...

  Ghost generates a secret-store in which you can keep your secrets
  encrypted. Ghost isn't real. It's just in your head.

Options:
  -h, --help  Show this message and exit.

Commands:
  delete   Delete a key
  export   Export all keys to a file
  get      Retrieve a key
  init     Initialize a stash
  list     List keys
  load     Load keys from backup
  lock     Lock a key to protect it
  migrate  Migrate keys from source to destination stash
  purge    Purge all keys
  put      Insert a new key
  ssh      Use a key to SSH-connect to a machine
  unlock   Unlock a key


# Initializing a stash
$ ghost init
Initializing stash...
Initialized stash at: /home/nir0s/.ghost/stash.json
Your passphrase can be found under the `passphrase.ghost` file in the current directory
Make sure you save your passphrase somewhere safe. If lost, any access to your stash will be impossible.
...

$ export GHOST_PASSPHRASE=$(cat passphrase.ghost)

$ ghost list
Listing all keys in /home/nir0s/.ghost/stash.json...
The stash is empty. Go on, put some keys in there...

# Putting keys in the stash
$ ghost put aws secret=my_secret access=my_access
Stashing key...
$ ghost put gcp token=my_token --description "GCP Token" --meta Owner=Me --meta Exp=15.06.17
...

# Retrieving a key (alternatively, redirect to a file: `ghost get aws > file`)
$ ghost get aws
Retrieving key...

Description:   None
Uid:           08ee6102-5668-440f-b583-97a1c7a17e5a
Created_At:    2016-09-15 15:10:01
Metadata:      None
Modified_At:   2016-09-15 15:10:01
Value:         access=my_access;secret=my_secret;
Name:          aws

# Retrieving a single value from the key
$ ghost get aws secret
my_secret

# Retrieving a key in machine readable json
$ ghost get gcp -j
{
    "description": "My GCP Token", 
    "uid": "b8552219-8761-4179-b20d-0a1544dd91a3", 
    "created_at": "2016-09-15 15:22:53", 
    "metadata": {
        "Owner": "Me", 
        "ExpirationDate": "15.06.17"
    }, 
    "modified_at": "2016-09-15 15:23:46", 
    "value": {
        "token": "my_token"
    }, 
    "name": "gcp"
}

# Modifying an existing key
# `--add` can be used to add values to a key, while `--modify` overwrites it.
$ ghost put gcp token=my_modified_token --modify
Stashing key...

$ ghost get gcp
Retrieving key...

Description:   My GCP Token
Uid:           789a3705-044c-4e34-b720-4bc43bfbae90
Created_At:    2016-09-15 15:56:04
Metadata:      Owner=Me;ExpirationDate=15.06.17;
Modified_At:   2016-09-15 15:57:05
Value:         token=my_modified_token;
Name:          gcp

# Listing the existing keys
$ ghost list
Listing all keys in /home/nir0s/.ghost/stash.json...
Available Keys:
  - aws
  - gcp

# Deleting a key
$ ghost delete aws
Deleting key...
...

# Deleting all keys
$ ghost purge -f
Purging stash /home/nir0s/.ghost/stash.json...

$ ghost list
Listing all keys in /home/nir0s/.ghost/stash.json...
The stash is empty. Go on, put some keys in there...
...

NOTE: The default backend for the CLI is TinyDB. If you want to use the SQLAlchemy backend, you must either provide the --stash and --backend flags with every command or set the GHOST_STASH_PATH and GHOST_BACKEND env vars after having initialized the stash. Not providing the stash path and the backend will result in ghost failing miserably.

Directly from Python

import ghost

# Initialize a new stash
storage = ghost.TinyDBStorage(
    db_path='/home/nir0s/.ghost/stash.json',
    stash_name='ghost')
# Can also generate a passphrase via `ghost.generate_passphrase(size=20)`
stash = ghost.Stash(storage, passphrase='P!3pimp5i31')
stash.init()

# Insert a key
stash.put(name='aws', value={'secret': 'my_secret', 'access': 'my_access'})
# Get the key
key = stash.get(key_name='aws')
print(key)
...

# List all keys in a stash
stash.list()

# Delete a key
stash.delete('aws')

Working with multiple stashes

By default, ghost generates a default stash named "ghost", regardless of the storage backend you're using. Each backend supports working with multiple stashes (or otherwise, "tenants"). This allows users to distinguish between environments, for example.

To initialize a named stash:

$ ghost init http://internal-es:9200[stash-name] --backend elasticsearch

You can initialize as many stashes as you want, as long as each stash has a unique name within its storage backend's endpoint.

You can then initialize another:

$ ghost init http://internal-es:9200[another-stash] --backend elasticsearch

Locking and Unlocking keys

Sometimes, you might want to lock a key to make sure it isn't deleted or modified accidentally.

NOTE: Purging a stash will also delete locked keys.

To that end, ghost allows you to lock a key:

$ ghost lock my_key
Locking key...
$ ghost delete my_key
Deleting key...
Key `my_key` is locked and therefore cannot be deleted. Please unlock the key and try again
...

$ ghost unlock my_key
...
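The lock semantics can be illustrated with a small, self-contained stand-in (a toy sketch, not ghost's actual implementation; `ToyStash` and `LockError` are hypothetical names):

```python
class LockError(Exception):
    pass


class ToyStash:
    """A toy stand-in demonstrating lock semantics (not ghost's code)."""

    def __init__(self):
        self._keys = {}
        self._locked = set()

    def put(self, name, value):
        self._keys[name] = value

    def lock(self, name):
        self._locked.add(name)

    def unlock(self, name):
        self._locked.discard(name)

    def delete(self, name):
        if name in self._locked:
            raise LockError(
                'Key `{0}` is locked and therefore cannot be deleted'.format(name))
        del self._keys[name]

    def purge(self, force=False):
        if not force:
            raise RuntimeError('purge requires the force flag')
        # Note: purging deletes locked keys too, as documented above
        self._keys.clear()
        self._locked.clear()
```

A locked key survives `delete` until it is unlocked, but not `purge`.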

Listing containing matches or closest match

We can also list keys which contain a certain string or some close matches to that string.

For example, let's assume we have four keys: aws, aws-2, abws-2 and gcp:

$ ghost list
Listing all keys...
Available Keys:
  - aws
  - aws-2
  - abws-2
  - gcp

$ ghost list aws
Listing all keys...
Available Keys:
  - aws
  - aws-2

$ ghost list ~aws
Listing all keys...
Available Keys:
  - aws
  - aws-2
  - abws-2

  • Providing a KEY_NAME argument to ghost list will look for any keys containing KEY_NAME.
  • Providing a tilde in front of KEY_NAME looks for closest matches instead. The cutoff weight can be passed using the --cutoff flag (or the cutoff argument in Python).
  • Note that this does not mean you can't provide key names starting with a tilde: ~aws will always be a close match of aws unless the cutoff is high enough, at which point it stops being reasonable to search for closest matches (around 0.8 or so).
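Both lookup modes can be approximated with Python's standard library - plain substring filtering for KEY_NAME, and difflib.get_close_matches for the tilde form (a sketch; ghost's actual matcher and defaults may differ):

```python
import difflib


def list_keys(keys, key_name=None, cutoff=0.6):
    """Filter keys by substring, or by close matches for a ~prefixed name."""
    if key_name is None:
        return list(keys)
    if key_name.startswith('~'):
        # Fuzzy lookup; cutoff mirrors the --cutoff flag
        return difflib.get_close_matches(key_name.lstrip('~'), keys, cutoff=cutoff)
    return [k for k in keys if key_name in k]


keys = ['aws', 'aws-2', 'abws-2', 'gcp']
print(list_keys(keys, 'aws'))   # substring matches
print(list_keys(keys, '~aws'))  # close matches, best first
```

Raising `cutoff` toward 0.8 trims looser matches like `abws-2` from the result.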

ssh-ing to a machine

Ghost allows you to store a key of type ssh and then use ghost ssh to connect to the machine.

This allows you to store secret information on your most used machines (you probably won't do that for 4000 application servers, unless you're crazy) and connect to them easily.

$ ghost put my-machine --type ssh conn=user@host key_file_path=~/.ssh/key.pem

$ ghost ssh my-machine
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-64-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

17 packages can be updated.
0 updates are security updates.


*** System restart required ***
Last login: Wed Mar 22 21:07:21 2017 from 46.120.240.223
ubuntu@host:~$
...

An added nicety is that you don't actually have to keep key files on your file system, as ghost (unlike the ssh executable) knows how to handle keys stored as strings. So instead of providing ssh_key_path, you can provide ssh_key=...SSH_STRING... and ghost will use that automatically.

Note that ghost will force you to provide the conn and one of ssh_key or ssh_key_path values when using the --type=ssh key type.

SSH Proxying

You can also use a ProxyCommand based ssh method to connect to a machine through a proxy:

$ ghost put machine-through-proxy --type ssh conn=user@host key_file_path=~/.ssh/key.pem proxy=proxy_user@proxy_host proxy_key_path=~/.ssh/my_proxy_key

$ ghost ssh machine-through-proxy
...

You can also use proxy_key to provide the proxy key as a string instead of proxy_key_path.

Additionally, any string put under the extend value in the key will be concatenated to the resulting ssh command.

SSH Tunneling

Using the tunnel key, you can create an ssh tunnel to a server (through a proxy, or not):

$ ghost put machine-through-proxy --type ssh conn=user@host key_file_path=~/.ssh/key.pem tunnel='LOCAL_PORT:localhost:REMOTE_PORT'

$ ghost ssh machine-through-proxy >/dev/null 2>&1 &
...
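Values like conn, proxy and tunnel ultimately translate into a plain ssh invocation. The following is a hypothetical reconstruction of how such a command could be assembled - the parameter names mirror the examples above, but ghost's real implementation may differ:

```python
def build_ssh_command(conn, key_path=None, proxy=None, proxy_key_path=None,
                      tunnel=None, extend=None):
    """Assemble an ssh argv list from ghost-style key values (illustrative)."""
    cmd = ['ssh']
    if key_path:
        cmd += ['-i', key_path]
    if proxy:
        # Hop through the proxy using ssh's stdio forwarding (-W)
        proxy_command = 'ssh -W %h:%p {0}'.format(proxy)
        if proxy_key_path:
            proxy_command = 'ssh -i {0} -W %h:%p {1}'.format(proxy_key_path, proxy)
        cmd += ['-o', 'ProxyCommand={0}'.format(proxy_command)]
    if tunnel:
        # e.g. 'LOCAL_PORT:localhost:REMOTE_PORT'; -N skips the remote command
        cmd += ['-L', tunnel, '-N']
    if extend:
        # Any string under the `extend` value is concatenated to the command
        cmd += extend.split()
    cmd.append(conn)
    return cmd


print(build_ssh_command('user@host', key_path='~/.ssh/key.pem',
                        tunnel='8080:localhost:80'))
```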

Purging a stash

To allow for extreme measures when necessary, ghost provides the purge API (and command), which quickly deletes all keys from a stash. To purge a stash, you must provide a mandatory force flag as a precautionary measure.

Passphrase file generation and discovery

When initializing a stash, ghost generates a passphrase file containing either the passphrase you explicitly provide or an auto-generated one. The file is saved under cwd/passphrase.ghost. Once it has been generated, you can read the file into an environment variable like so:

$ export GHOST_PASSPHRASE=$(cat passphrase.ghost)

To simplify UX when using the CLI, ghost discovers the passphrase.ghost file generated when initializing a stash and uses it unless told otherwise.

Unless the --passphrase flag or the GHOST_PASSPHRASE env var is set, ghost will search for the passphrase.ghost file under:

  1. cwd/passphrase.ghost
  2. ~/.ghost/passphrase.ghost
  3. (Only non-Windows) /etc/ghost/passphrase.ghost

The Python API requires passing the passphrase explicitly to the Stash class when instantiating it.

It is important to note that if you regularly use two storage backends, you might not want to use the auto-discovery mechanism at all, so as not to accidentally use one key with a mismatching stash.

Backends

NOTE: ghost includes dependencies for TinyDB only, as its installation should be lightweight by default. You can install extras for each specific backend. See below.

NOTE: While true for the API, the CLI does not currently expose any advanced configuration for the backends such as setting certs, credentials or paths.

Until the API documentation is complete, please take a look at the Storage APIs to see how to use each storage.

TinyDB

The TinyDB backend provides an easy-to-read, portable, JSON-file-based stash. It is the default backend when using the CLI, as it is the simplest for new users to digest.

SQLAlchemy

(Initially tested on v1.0.15)

NOTE: To use PostgreSQL, MySQL and the likes, you must have the relevant driver package installed for SQLAlchemy to work. For instance, providing postgresql://scott:tiger@localhost/mydatabase as the path to the backend requires installing psycopg2. Failing to install the relevant package will result in SQLAlchemy raising an error stating what's missing.

To enable, run pip install ghost[sqlalchemy]

The SQLAlchemy backend provides a way to use all well-known SQL databases as backends, including a local SQLite file. Functionally, the SQLite-based SQLAlchemy backend resembles the TinyDB backend, but is not human-readable.

All SQLAlchemy connection strings are allowed, so PostgreSQL, MySQL, MSSQL and the likes are easily accessible.

Elasticsearch

(Initially tested on v2.4.1 using elasticsearch-py 2.4.0)

To enable, run pip install ghost[elasticsearch]

The Elasticsearch backend resembles the TinyDB backend in that it simply stores JSON documents. An Index called ghost is created in the cluster (unless another index name is provided via the API) and used to store the keys.

Consul

(Initially tested on v0.7.0)

To enable, run pip install ghost[consul]

NOTE: As per Consul's documentation, you cannot provide values larger than 512KB.

The Consul backend lets you use Consul's distributed nature to distribute keys between servers. Consul's kv-store (v1) is used to store the keys. You must configure your Consul cluster before using it with ghost, as ghost performs practically zero configuration on your cluster. As long as the kv-store's REST API is accessible to ghost, you're good. You may, of course, use a single Consul server as a stash, but to prevent data loss, that is of course not recommended.

Vault

(Initially tested on v0.6.1 using hvac 0.2.16)

To enable, run pip install ghost[vault]

NOTE: You MUST provide your Vault token either via the API or via the VAULT_TOKEN env var to use the Vault backend.

Ironically, maybe, you can use Vault as your stash. Since Vault itself encrypts and decrypts keys and requires a token, it may seem weird to use ghost as a front-end for it. I don't recommend using ghost with Vault unless you need to do cross-backend work - that is, use multiple backends at once, or preserve a single API where Vault isn't always accessible. The main reason for using ghost rather than Vault is its no-server nature; if you already have Vault running, you may as well use its CLI/API directly instead of adding an unnecessary abstraction layer.

As such, much like with Consul, note that ghost does not provide any complicated configuration options for Vault, via the CLI or otherwise. You need to have your Vault cluster preconfigured, after which ghost will store all keys under the secrets path (this can be overridden). You may provide a key named aws/account_1, for instance, in which case ghost will just pass the path along to Vault.

S3

To enable, run pip install ghost[s3].

The S3 backend saves keys as JSON encoded objects inside the provided bucket.

Requirements

  • A stash path must be provided - this is the name of the bucket to use. It is also necessary to provide a bucket location. If you're using the CLI, you can use:
    export GHOST_BUCKET_LOCATION="BUCKET_NAME"

  • AWS credentials and a region name must also be provided. If you're using the CLI, you can use:
    export AWS_DEFAULT_REGION="***"
    export AWS_ACCESS_KEY_ID="***"
    export AWS_SECRET_ACCESS_KEY="***"

    and, optionally:
    export AWS_SESSION_TOKEN="***"
    export AWS_PROFILE="***"


Encryption & Decryption

Encryption is done using cryptography. Only values are encrypted, and they are stored hex-encoded; key names are left in plain text.

Values are encrypted once provided and decrypted only upon request, meaning that they're only available in memory for a very short period of time.

See cryptography's documentation for additional information.
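As a rough illustration of the scheme - a value encrypted with a passphrase-derived Fernet key and stored hex-encoded, while the name stays in plain text - consider this sketch. The key derivation shown is an assumption for the example; ghost's actual derivation may differ:

```python
import base64
import hashlib

from cryptography.fernet import Fernet

# Assumed key derivation for the sketch; ghost's real derivation may differ
key = base64.urlsafe_b64encode(hashlib.sha256(b'my-passphrase').digest())
fernet = Fernet(key)

# Only the value is encrypted, then stored hex-encoded; the name stays plain
record = {'name': 'aws', 'value': fernet.encrypt(b'secret=my_secret;').hex()}

# Decryption happens only upon request, from the stored hex string
plaintext = fernet.decrypt(bytes.fromhex(record['value']))
```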

Audit log

NOTE: This is WIP. The audit log is currently kept on the machine where ghost is run. As such, it is hardly useful for auditing purposes when using a remote backend. As ghost evolves, it will offer remote auditing.

An audit log is saved under ~/.ghost/audit.log containing a log of all primary actions (put, get, delete, purge, list) done on any stash. The path can be set using the GHOST_AUDIT_LOG env var.

The log file itself is not machine readable. Whether it will be remains to be seen.

The log should look somewhat like this:

2016-10-25 15:23:24,441 - [/home/nir0s/.ghost/stash.json] [LIST]
2016-10-25 15:23:31,350 - [/home/nir0s/.ghost/stash.json] [PUT] - {"key_name": "aws", "metadata": "null", "description": null, "value": "HIDDEN", "uid": "19fde800-89b9-4c25-a0af-b790e118bab7"}
2016-10-25 15:23:34,954 - [/home/nir0s/.ghost/stash.json] [LIST]
2016-10-25 15:24:33,322 - [/home/nir0s/.ghost/stash.json] [GET] - {"key_name": "aws"}
2016-10-25 15:24:33,323 - [/home/nir0s/.ghost/stash.json] [DELETE] - {"key_name": "aws"}
2016-10-25 15:24:33,323 - [/home/nir0s/.ghost/stash.json] [DELETE] - {"key_name": "aws"}
2016-10-25 15:24:49,890 - [/home/nir0s/.ghost/stash.json] [PUT] - {"key_name": "aws", "metadata": "null", "description": null, "value": "HIDDEN", "uid": "ffa4fb66-e3c0-445c-bafc-a60f480dc45a"}
2016-10-25 15:24:52,230 - [/home/nir0s/.ghost/stash.json] [PUT] - {"key_name": "gcp", "metadata": "null", "description": null, "value": "HIDDEN", "uid": "567f891a-d097-4575-a472-4409dc459a9a"}
2016-10-25 15:24:55,625 - [/home/nir0s/.ghost/stash.json] [PUT] - {"key_name": "gfa", "metadata": "null", "description": null, "value": "HIDDEN", "uid": "434b197b-c82e-41b1-a4d2-eaeb7cd6cf72"}
2016-10-25 15:25:00,553 - [/home/nir0s/.ghost/stash.json] [LIST]
2016-10-25 15:25:08,413 - [/home/nir0s/.ghost/stash.json] [GET] - {"key_name": "aws"}
2016-10-25 15:25:08,414 - [/home/nir0s/.ghost/stash.json] [DELETE] - {"key_name": "aws"}
2016-10-25 15:25:08,414 - [/home/nir0s/.ghost/stash.json] [DELETE] - {"key_name": "aws"}
2016-10-25 15:25:16,416 - [/home/nir0s/.ghost/stash.json] [PURGE] - all keys

Exporting and Importing

You can export and import all keys in a stash using the ghost export and ghost load commands (same methods in the Python API).

The export command allows you to generate a json file containing all keys (encrypted, of course) while the load command can then load that file into another stash using the same, or a different storage backend.

So, for instance, if you have a local implementation using SQLite, you could export all keys, create a new stash using the SQLAlchemy storage for PostgreSQL, and load all keys into that storage for your server's implementation.

The migrate command allows you to easily migrate all of your keys from one backend to another, like so:

ghost migrate my_stash.json postgresql://localhost/ghost \
  --source-passphrase 123 \
  --destination-passphrase 321 \
  --source-backend tinydb \
  --destination-backend sqlalchemy

Note that using the migrate command (or API) will result in keys being decrypted and re-encrypted on the destination stash.
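Conceptually, migrating boils down to decrypting each value with the source stash's passphrase and re-encrypting it for the destination. A minimal sketch, with toy stand-ins for the real Fernet operations:

```python
def migrate(source, destination, source_decrypt, destination_encrypt):
    """Copy every key between dict-like stores, re-encrypting each value.

    The two callables stand in for each stash's real decrypt/encrypt
    operations (which in ghost are Fernet-based).
    """
    for name, encrypted in source.items():
        plaintext = source_decrypt(encrypted)               # source passphrase
        destination[name] = destination_encrypt(plaintext)  # destination passphrase


# Toy reversible "ciphers", for illustration only
source_store = {'aws': 'terces_ym'}  # 'my_secret' "encrypted" by reversal
destination_store = {}
migrate(source_store, destination_store,
        source_decrypt=lambda v: v[::-1],
        destination_encrypt=lambda v: v.upper())
```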

Secret key delegation

Since ghost doesn't run as a distributed server, it doesn't provide a formal method for delegating keys to a server without explicitly passing them over in plain text post-decryption. You can work around that by retrieving a key without decrypting it (via the --no-decrypt flag in the CLI or the decrypt argument in the Python API) and sending it to the other server where the same passphrase is held and decrypting it there.

This can be done somewhat like this:

...
encrypted_value = stash.get('my_key', decrypt=False)['value']
save_to_file(encrypted_value)

# and on the server
...
stash = Stash(storage, passphrase='SAME_PASSPHRASE')
decrypted_value = stash._decrypt(encrypted_value_from_file)

Note that if you're using Consul as a backend, the distributed nature of its kv-store makes it easy to delegate keys.

Testing

git clone git@github.com:nir0s/ghost.git
cd ghost
pip install tox
tox

Contributions

See CONTRIBUTIONS on how to contribute additional backends.

Pull requests are always welcome.

ghost's People

Contributors: jcollado, nir0s, tehasdf

ghost's Issues

Allow to get a single value from the value's dict

Right now, retrieving a key will retrieve all of its values assuming it has more than one.

For instance:

ghost put key a=b b=c
...

ghost get key -j
...
value: a=value1,b=value2
...

It would be great if we could cleanly retrieve a single value for automation purposes:

ghost get key a
...

value1

Allow to set a passphrase file location per stash

When a stash is initialized, a passphrase.ghost file is generated for it. A user can explicitly pass a passphrase in the API and the CLI or set an environment variable else the file is searched for in different locations.

If a user works with multiple stashes, they have to explicitly pass the passphrase as only the passphrase.ghost file is looked up.

Providing an API for setting a location in the storage for where to find the passphrase file would allow users to use multiple passphrase files easily.

The API could be implemented somewhat like so:

stash.init()

stash.set_passphrase('/etc/ghost/my-stash.passphrase.ghost')

Every time a user uses a specific stash, the passphrase file will be looked up in the set position.

Log every transaction to file

It would be helpful if every transaction was logged properly to a file in both a machine readable and human readable format for proper auditing.

Wrong parsing of path in sqlalchemy storage

Providing a relative path for sqlalchemy results in an error

Initializing stash...
Traceback (most recent call last):
  File "/home/nir0s/.virtualenvs/ghost/bin/ghost", line 11, in <module>
    load_entry_point('ghost', 'console_scripts', 'ghost')()
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 700, in init_stash
    passphrase = stash.init()
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 145, in init
    self._storage.init()
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 372, in init
    os.makedirs(os.path.dirname(path))
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/x'

The problem is that we're parsing the path wrongly in the storage.

Use a hierarchy for reading the passphrase from files in different locations

To simplify UX, we should allow users to place the passphrase.ghost file generated when initializing a stash in different directories where it will be automatically read if it exists.

A reasonable hierarchy for retrieving a passphrase for a stash might be:

  • The --passphrase flag.
  • The GHOST_PASSPHRASE env var
  • A passphrase.ghost file found under the cwd
  • A passphrase.ghost file found under ~/.ghost
  • A passphrase.ghost file found under /etc/ghost

This idea assumes you only have one active stash at any given moment, and that if you're using multiple stashes, one of them is a default which uses the file, while the others require you to provide the env var/flag. We might be able to provide a mechanism for attaching a file to a stash somehow, but I wouldn't do that right now.

Allow to manage multiple stashes in a single backend

As it is now, ghost can only manage a single stash per storage backend. This is because it doesn't allow declaring a "path" for the stash. This also means that only a single passphrase can be applied per backend.

By allowing multiple "paths" (i.e. a path in Vault, an index in ES, a table in tinydb/sqlalchemy, etc.), multiple stashes will be accessible.

Allow to get a key by uid

Currently, we only allow to retrieve by name, but a name will not necessarily be unique in the long run.

Make the TinyDB stash default so that `GHOST_STASH_PATH` isn't mandatory

Currently, after initializing a stash, users must set the GHOST_PASSPHRASE and GHOST_STASH_PATH env vars or use the --passphrase and --stash flags respectively.

The UX should be simplified by only requiring the passphrase. The default backend is already TinyDB, and there's no reason why the default path shouldn't be the TinyDB backend's default path.

Allow to read/write from/to files

It would be nice to be able to directly get a value from a file and put something directly into a file like so:

ghost put my_ssh_key --from-file ~/.ssh/my_ssh_key

ghost get my_ssh_key --to-file ~/.ssh/my_ssh_key

Currently, the only way to put from a file is to do something like this:

ghost put my_ssh_key ssh_key="$(cat ~/.ssh/my_ssh_key)"

and getting to a file is quite annoying unless you use ghost get ... -j, pipe to jq and then to a file somehow.

Putting from a file will result in a stash key having a single entry in its value dict, while getting to a file will verify that the value dict has only a single entry, and otherwise notify and fail.

Allow to lock a key

It would be nice if a user could "lock" a certain key to prevent it from being modified or deleted.

An API like this could be provided:

stash.lock('key')
...

stash.delete('key')
# The key is locked and therefore cannot be deleted
stash.is_locked('key')
# True

stash.unlock('key')
...

Allow to version keys

Currently, a key is overridden if a put --modify command is executed on it. Allowing the creation of a new version of the key, which becomes the default, will let users look back at previous versions. This might prevent situations in which users unknowingly override keys they still need.

Versioning should be the default, but we can also allow the user to override.

As of now, the identifier for the version should be the key's name, which means that multiple keys with the same name but a different version will be possible.

If a user creates a key and the key's name wasn't found, it should get version 1.
If a key with the same name was found, the new key should get version 2, and so on.
Retrieving a key should always get the latest version of it.
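The proposed scheme can be sketched with a toy in-memory store (illustrative only; not ghost's code):

```python
class VersionedStore:
    """A toy illustration of the proposed versioning scheme."""

    def __init__(self):
        self._versions = {}  # name -> list of values; index + 1 is the version

    def put(self, name, value):
        versions = self._versions.setdefault(name, [])
        versions.append(value)   # the first put gets version 1, the next 2...
        return len(versions)     # the new version number

    def get(self, name, version=None):
        versions = self._versions[name]
        if version is None:      # retrieval always yields the latest version
            return versions[-1]
        return versions[version - 1]
```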

Provide a per-storage init command

Both the Consul and Vault (and later Elasticsearch, etc..) storage backends can receive additional configuration options currently not exposed via the CLI. The API comfortably exposes those options.

Creating a command which will allow to initialize each specific storage will make configuring the storage easier:

ghost init consul --stash http://localhost:8500 --directory 'my_dir' --verify --client-cert 'my_cert' --auth user:password

An alternative could be to allow the user to pass any set of kwargs to the init command like so:

ghost init http://localhost:8500 --backend consul --storage-args directory=mydir;verify=true;client_cert=my_cert;auth=user:password

Exporting and loading a stash does not allow to change the passphrase

When exporting a stash, the values stay encrypted. Loading them into a new stash then means that that stash's passphrase must be the same as the one exported to be able to read the values.

A possible solution could be to create an encrypted version of the entire source stash's data based on the destination stash's passphrase and then passing that passphrase to load, where it will be decrypted and each value will be encrypted according to the destination stash's passphrase.

We should probably take a look at the migrate function to see how to implement this nicely.

Test returned structures from storage backends generically

Right now, each storage backend's tests (for put, get, list, delete, etc.) are written separately. This means that there could be inconsistencies in how the data structures returned by each backend's methods are built.

We should have generic storage backend tests which are run on the returned value of each base method to validate that they are all the same.

Failure when providing a local path for a stash

running ghost init stash.json results in an error:

$ ghost init stash.json
Initializing stash...
Traceback (most recent call last):
  File "/home/nir0s/.virtualenvs/ghost/bin/ghost", line 11, in <module>
    load_entry_point('ghost', 'console_scripts', 'ghost')()
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 700, in init_stash
    passphrase = stash.init()
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 145, in init
    self._storage.init()
  File "/home/nir0s/repos/nir0s/ghost/ghost.py", line 318, in init
    os.makedirs(os.path.dirname(self.db_path))
  File "/home/nir0s/.virtualenvs/ghost/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 2] No such file or directory: ''

This happens because we're trying to create a directory for the stash path's base, which doesn't exist.

Add KMS auth method

It should be possible to generate a passphrase using KMS for people who already use KMS for their systems.

Setting a stash path after init to another path illogically works

Currently, if we init a stash in a certain path and then set the GHOST_STASH_PATH env var to another path, everything will work on the other path. The reason for having an init phase in the first place is to make a stash a reserved ghost database. Ghost stores a passphrase object within the DB to identify the stash with but doesn't actually perform the identification process at any point. This should be changed so that a stash path that wasn't formally initialized could not be used.

Make init idempotent in all storage backends

In the ES storage, you can run stash.init() as many times as you want. In The TinyDBStorage and (maybe) SQLAlchemyStorage, it'll tell you that the file/db is already initialized and raise an error.

All init functions should be idempotent.

Allow MultiFernet key usage

This will allow us to provide multiple keys for encryption and decryption if the user chooses it and can look somewhat like this (in the CLI):

ghost init --passphrase 'ASL*mla8fsLA*' --passphrase '@#IM$LIQSlll' --passphrase ...

export GHOST_PASSPHRASE='ASL*mla8fsLA*;@#IM$LIQSlll;...'
...

We can use the API like so:

>>> from cryptography.fernet import Fernet, MultiFernet
>>> key1 = Fernet(Fernet.generate_key())
>>> key2 = Fernet(Fernet.generate_key())
>>> f = MultiFernet([key1, key2])
>>> token = f.encrypt(b"Secret message!")
>>> token
b'...'
>>> f.decrypt(token)
b'Secret message!'
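For completeness, a runnable variant of the snippet above, demonstrating the property that makes multiple keys useful: a token encrypted under any of the listed keys still decrypts:

```python
from cryptography.fernet import Fernet, MultiFernet

# Two independent keys, e.g. parsed from GHOST_PASSPHRASE split on ';'.
key1 = Fernet(Fernet.generate_key())
key2 = Fernet(Fernet.generate_key())
f = MultiFernet([key1, key2])

# MultiFernet always encrypts with the first key...
token = f.encrypt(b'Secret message!')
assert f.decrypt(token) == b'Secret message!'

# ...but decrypts tokens produced under *any* of its keys, which is
# what makes gradual key rotation possible.
old_token = key2.encrypt(b'Old secret')
assert f.decrypt(old_token) == b'Old secret'
```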

Allow using other storage backends from the CLI

Currently, only the TinyDB storage backend is supported in the CLI.

To solve this, we can collect all storage class objects and their names. The names can be fed to click via type=click.Choice(storage_names); when the name the user provides matches one of the available implementations, the matching class can be instantiated with globals()[class_name](stash_path).

clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass) can return the desired class objects.
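A sketch of that lookup; the two storage classes here are trimmed stand-ins for ghost's real ones, and the `namespace` parameter just makes the helper testable outside the ghost module:

```python
import inspect

class TinyDBStorage(object):
    """Trimmed stand-in for ghost's real TinyDB backend."""
    def __init__(self, db_path):
        self.db_path = db_path

class SQLAlchemyStorage(object):
    """Trimmed stand-in for ghost's real SQLAlchemy backend."""
    def __init__(self, db_path):
        self.db_path = db_path

def storage_classes(namespace):
    """Map a CLI-friendly name to every *Storage class in `namespace`.

    In ghost itself, the pairs would come from
    inspect.getmembers(sys.modules[__name__], inspect.isclass).
    """
    return dict((name[:-len('Storage')].lower(), obj)
                for name, obj in namespace.items()
                if inspect.isclass(obj) and name.endswith('Storage'))

def get_storage(name, stash_path, namespace):
    # The keys of storage_classes() would feed click.Choice(...) in the CLI.
    return storage_classes(namespace)[name](stash_path)
```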

Vault stash fails to list when empty

When running ghost list on an empty Vault stash, the listing fails. The reason is that it tries to access dictionary items that are not there when no keys have been inserted into the stash.
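A defensive sketch of the fix, assuming Vault's usual LIST response shape of {'data': {'keys': [...]}}:

```python
def list_keys(response):
    """Safely extract key names from a Vault LIST response.

    An empty stash yields a response without the nested 'data'/'keys'
    entries, so fall back to an empty list instead of raising KeyError.
    """
    return ((response or {}).get('data') or {}).get('keys', [])
```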

Generate a jinja2 template automatically from the values of a retrieved key

Assume a template boto.ini:

[Credentials]
aws_access_key_id = {{ access }}
aws_secret_access_key = {{ secret }}

and a key:

Description:   None
Uid:           08ee6102-5668-440f-b583-97a1c7a17e5a
Created_At:    2016-09-15 15:10:01
Metadata:      None
Modified_At:   2016-09-15 15:10:01
Value:         access=my_access;secret=my_secret;
Name:          aws

It would be nice to generate a file automatically using the values of that key like so:

ghost get aws --generate boto.ini

and get

[Credentials]
aws_access_key_id = my_access
aws_secret_access_key = my_secret

This would allow a user to never keep files containing credentials, only templates of those files.
Of course, you can always provide the file's data as the value to encrypt, but if the file is large or changes frequently, that's less convenient.
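A hedged sketch of the rendering step using Jinja2; the helper name and the value-parsing convention are assumptions based on the example above:

```python
from jinja2 import Template

def render_from_key_value(template_text, value):
    """Render a template from a ghost key value like 'access=x;secret=y;'.

    The semicolon-separated 'name=value' format mirrors the key shown
    above; each pair becomes a template variable.
    """
    pairs = [p.split('=', 1) for p in value.split(';') if p]
    return Template(template_text).render(dict(pairs))

boto_ini = (
    '[Credentials]\n'
    'aws_access_key_id = {{ access }}\n'
    'aws_secret_access_key = {{ secret }}\n'
)
rendered = render_from_key_value(boto_ini, 'access=my_access;secret=my_secret;')
```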

load does not decrypt from export

flow:

1. ghost export
2. ghost init (on the new machine)
3. mv passphrase.ghost to /home/centos/.ghost/passphrases/haviv
4. ghost load new-stash -p new-stash

error:
ghost get -s /home/centos/.ghost/cloudify.json[haviv] -p $(cat /home/centos/.ghost/passphrases/haviv) aws
STDOUT: Stash: tinydb at /home/centos/.ghost/cloudify.json[haviv]

STDERR: Traceback (most recent call last):
  File "/usr/bin/ghost", line 11, in <module>
    load_entry_point('ghost==0.5.0', 'console_scripts', 'ghost')()
  File "/usr/lib/python2.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ghost.py", line 1237, in get_key
    record = stash.get(key_name=key_name, decrypt=not no_decrypt)
  File "/usr/lib/python2.7/site-packages/ghost.py", line 262, in get
    key['value'] = self._decrypt(key['value'])
  File "/usr/lib/python2.7/site-packages/ghost.py", line 449, in _decrypt
    encrypted_value).decode('ascii')
  File "/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 103, in decrypt
    raise InvalidToken
cryptography.fernet.InvalidToken

User can work on a stash before it is initialized

A user can put keys in a stash before initializing it. This results in a stash that was created but never initialized, and which also can't be used because its passphrase is unknown.

We should verify that a stash is initialized before allowing any action to be performed on it.
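A minimal sketch of such a guard, a decorator enforcing initialization before any stash action; the `_initialized` flag is a stand-in for ghost's real stored-passphrase check:

```python
import functools

def requires_init(method):
    """Reject any stash action until `init` has run."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if not getattr(self, '_initialized', False):
            raise RuntimeError('Stash is not initialized; run init first.')
        return method(self, *args, **kwargs)
    return wrapper

class Stash(object):
    def init(self):
        self._initialized = True

    @requires_init
    def put(self, key_name, value):
        return (key_name, value)
```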

Add a `ghost migrate` API to migrate keys between stashes

Allowing to run ghost migrate, which would effectively perform stash.export() followed by stash.load(), would go a long way toward easy migration between stashes.

The CLI could expose it like so:

ghost migrate ~/.my_stash.json http://127.0.0.1:8200 --source-passphrase xxx --destination-passphrase yyy --source-backend tinydb --destination-backend vault

In Python it could look pretty much the same.
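A sketch of what the Python side could look like, assuming stash objects that expose ghost-like export() and load() methods:

```python
def migrate(source_stash, destination_stash):
    """Copy all keys from one stash to another.

    A sketch only: `export()` is assumed to return a list of key records
    and `load()` to accept that same list, mirroring ghost's export/load.
    """
    keys = source_stash.export()
    destination_stash.load(keys)
    return len(keys)
```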
