parallaxsecond / parsec
Platform AbstRaction for SECurity service
Home Page: https://parsec.community/
License: Apache License 2.0
@pricec would be nice if someone else on the team was familiar with how to do this. Either handle it, or help @jamesonhyde-docker (creating documentation as we go).
It would be nice to push resulting docker apps to a private registry for us to easily trial/run/share.
Currently we have one generic ProviderConfig structure for all of our providers. It might be better to have one configuration structure per provider type, as most of them do not share the same fields.
This issue tracks the work of adding a new provider in PARSEC for PKCS 11 support. This new provider will implement the Provide trait to respond to clients' requests. PKCS 11 will allow us to use a variety of HSMs and other crypto backends.
The chosen implementation for this is to use the rust-pkcs11 crate that offers a wrapper around the PKCS 11 APIs. The configuration will contain information about the PKCS 11 library to load, the slot to use and the PIN needed to login (if needed).
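As a sketch, the configuration described above could map onto a structure like the following; the field names are illustrative assumptions, not the final design:

```rust
/// Hypothetical shape for the PKCS 11 provider configuration;
/// field names are illustrative only.
pub struct Pkcs11ProviderConfig {
    /// Path of the PKCS 11 library to load (e.g. a SoftHSM or vendor library).
    pub library_path: String,
    /// Slot to use on the token.
    pub slot_number: usize,
    /// PIN needed to log in, if the token requires one.
    pub user_pin: Option<String>,
}
```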
cc @parallaxsecond/maintainers
Configuration for providers is currently being passed using a common ProviderConfig. Ideally, we would replace this with a configuration structure that contains some common parts and then an enum to cover the provider-specific options. This would make it easier to specify, directly in the Config definition, which options are Optional and which ones aren't.
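A minimal sketch of that shape, with assumed field and variant names (not the agreed design):

```rust
/// Sketch of the proposed split between common and provider-specific options.
pub struct ProviderConfiguration {
    /// Options common to every provider.
    pub key_id_manager: String,
    pub converter: String,
    /// Provider-specific options, one variant per provider type.
    pub specific: ProviderSpecificConfig,
}

/// One variant per provider type; fields here are illustrative.
pub enum ProviderSpecificConfig {
    MbedCrypto { persist_keys: bool },
    Pkcs11 { library_path: String, user_pin: Option<String> },
    Tpm { tcti: String },
}
```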
As noted in the soon-to-be-merged protobuf contracts, some (combinations of) key attributes that Parsec supports are considered deprecated and should only be used by clients with a clear understanding of the security implications.
Thomas Fossati's suggestion is that the deprecated primitives are rejected by default, but can be enabled through the runtime configuration by admins.
There is still a question about where these checks should be enforced. My suggestions:
- in parsec-interface-rs, along with means of checking whether a request contains any such primitives
- behind a setting in parsec-interface-rs that allows deprecated algorithms and which is not enabled by default; otherwise, an error is returned when one is encountered
- in parsec, enforcing the check through some other means, somewhere along the request pipeline (most likely in the BackEndHandler, before sending the request to the provider)

In all of our providers we check the result of the KIM trait methods and convert the String to a ResponseStatus, most of the time to ResponseStatus::KeyIDManagerError.
To avoid this repetition of code, this could be done on the trait itself and re-used everywhere.
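A sketch of the idea, with ResponseStatus reduced to the single variant needed here (the real enum lives in the interface crate) and a dummy manager for illustration:

```rust
/// Reduced stand-in for the real ResponseStatus enum.
#[derive(Debug, PartialEq)]
pub enum ResponseStatus {
    KeyIDManagerError,
}

pub trait ManageKeyIDs {
    /// Raw method implemented by each key ID manager.
    fn get(&self, key_name: &str) -> Result<Vec<u8>, String>;

    /// Provided method: converts the String error once, for all providers.
    fn get_or_status(&self, key_name: &str) -> Result<Vec<u8>, ResponseStatus> {
        self.get(key_name).map_err(|e| {
            eprintln!("Key ID Manager error: {}", e);
            ResponseStatus::KeyIDManagerError
        })
    }
}

/// Dummy manager used only to illustrate the provided method.
pub struct AlwaysFails;

impl ManageKeyIDs for AlwaysFails {
    fn get(&self, _key_name: &str) -> Result<Vec<u8>, String> {
        Err("mapping file not found".to_string())
    }
}
```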
Before #62 is done, we should consider what the unsafe blocks that we currently have are doing and what should be contained in them.
Service/protocol versions are used in request/response headers and in the ping operation.
At the moment these values are stored as two separate integers. It would be helpful if they were wrapped together in a single Version object.

As mentioned here, Mbed Crypto is not yet thread safe when it comes to allocating or releasing key slots.
This came out of our own stress tests as well, with the service failing to handle requests or tripping over in a segmentation fault. The underlying problem was that all the threads processing requests were getting the same key slot allocated (number 32) and were modifying each other's keys.
This issue should be addressed by hiding the calls to create, open or close a key behind a mutex.
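A sketch of the fix, with open_key_in_mbed standing in for the real unsafe FFI call into Mbed Crypto:

```rust
use std::sync::Mutex;

/// Sketch: serialise key slot management behind a mutex so that two threads
/// can never be allocated the same slot concurrently.
pub struct MbedProvider {
    /// Guards every call that creates, opens or closes a key slot.
    key_handle_mutex: Mutex<()>,
}

impl MbedProvider {
    pub fn new() -> Self {
        MbedProvider {
            key_handle_mutex: Mutex::new(()),
        }
    }

    pub fn open_key(&self, key_id: u32) -> u32 {
        // Only one thread at a time may ask Mbed Crypto for a slot.
        let _guard = self.key_handle_mutex.lock().unwrap();
        open_key_in_mbed(key_id)
    }
}

/// Placeholder for the real FFI call into Mbed Crypto.
fn open_key_in_mbed(key_id: u32) -> u32 {
    key_id
}
```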
Fuzzing tests could be added with the AFL crate.
should closely map to PSA APIs or something agreed upon in PASL discussions
Start with sign and verify primitives and a crypto.signer implementation as described in https://github.com/docker/pasl/blob/master/docs/client-examples/pasl_client_sign.txt
After having updated our docs and diagrams we need to threat model Parsec.
Implement a configurator module whose task is to spin up all the system components, given a TOML configuration.
Proposing the following changes: calling build() on the configurator results in a fully configured module.

Example configuration file:
# PARSEC Service Configuration File
title = "Parsec Config"
[listener]
socket_path = "/tmp/security-daemon-socket"
socket_timeout = 100
[[key_id_manager]]
type = "ondisk"
name = "on-disk-manager"
path = "~/.parsec/key_ids/"
[[provider]]
type = "mbed-userspace"
persist_keys = true
key_id_manager = "on-disk-manager"
converter = "protobuf"
[[key_id_manager]]
type = "mongo"
name = "mongo-manager"
address = "127.0.0.1:27017"
[[provider]]
type = "pkcs11"
key_id_manager = "mongo-manager"
converter = "protobuf"
[[authenticator]]
type = "simple"
default_app_name = "root"
[[authenticator]]
type = "jwt"
Outstanding questions:
We use three different versions in Parsec. The first is the wire protocol version: it describes the version of the wire protocol itself, that is, the different fields of requests and responses. It is needed to parse request and response headers.
The wire protocol might change in the future.
The Ping operation returns the highest wire protocol version supported by the service.
Clients need to perform a bootstrapping sequence if they want to switch to a higher wire protocol version:
1. Ping the service using wire protocol version 1.0 to learn the highest version x.y it supports.
2. Use wire protocol version x.y for further requests.

Responses are made by the service using the same wire protocol version as the request they answer, so that clients can parse them.
The first four fields of the wire protocol (magic number, header size, version major and version minor) will never change for requests and responses.
If a client sends a request with a wire protocol version strictly higher than the highest one supported by the service, a VersionTooBig response status will be sent back as an error.
The service should parse these first four fields, check that the wire protocol version is supported (returning VersionTooBig otherwise) and pass the rest to the appropriate wire protocol version deserializer. We should add the reverse logic on the return journey to serialize the response using the same wire protocol version as the request.

This version is returned by the ListProviders operation in the ProviderInfoProto structure of the Core Provider. It represents the implementation version of the whole Parsec service. Every time something is changed in the Parsec service, this number should be modified following the semantic versioning rules as closely as possible. It is a way for clients to check whether the Parsec version they are using contains the bug and security vulnerability fixes or the new features that they need.
To make it easy, this version should be the same as the version field in the Cargo.toml file of the parsec crate.

Each provider (other than the Core provider) also returns a version as part of the ListProviders operation. This version represents the implementation version of the provider. It is needed if the provider is implemented as a co-server, for example, as there would then be no way to link the Parsec service version to the provider version.
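As a sketch of the Cargo.toml approach: cargo exposes the crate version to the build, so the service version constant can be derived at compile time and can never drift from Cargo.toml. Inside the parsec crate, env!("CARGO_PKG_VERSION") would do; option_env! is used below only so the snippet also builds outside of cargo. The major_minor helper is illustrative, not existing Parsec code.

```rust
/// Service implementation version, taken from Cargo.toml at compile time
/// when building under cargo ("0.0.0" fallback keeps the snippet standalone).
pub const SERVICE_VERSION: &str = match option_env!("CARGO_PKG_VERSION") {
    Some(v) => v,
    None => "0.0.0",
};

/// Illustrative helper splitting "major.minor.patch" into (major, minor).
pub fn major_minor(version: &str) -> Option<(u8, u8)> {
    let mut parts = version.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    Some((major, minor))
}
```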
The current implementation of the FAPI layer (and the interface it exposes) is quite promising in terms of ease of use, compared with the ESAPI layer. One example would be the complete lack of authentication sessions, which are, presumably, taken care of by the TSS.
We should investigate this a bit and, if it is as beneficial as it looks, try and upgrade our use of the TPM. Saying this mostly so that we're consistent with our view of the world where the system should default to safe settings when you don't need specific ones, and this very much applies to our use case here.
Our code should not compile if there is any warning.
Only this one should be ignored, or #60 fixed.
The response codes returned by the Provide methods should be reviewed and documented in our book. We are currently not consistent among all of our providers, and the response codes should conform to what PSA describes.
We previously created the fuzz testing base framework, we can now expand this to do fuzz testing of individual components e.g. by pumping operations directly into specific providers.
Some more details: #54 (comment)
In the interface, we are currently checking if the header size in the request is different from the header size expected in the 1.0 wire protocol format (22). If it is, we return InvalidHeader.
We should not make that check there.
We should read "header size" bytes and ignore (not parse) any bytes beyond the end of the request fields defined by the wire protocol version in use. If the header size given is smaller than the size of the request, then we should return an error to the client.
This would be useful if clients send padding at the end of the header.
This should be specified in the documentation.
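The proposed behaviour could look roughly like this, assuming a known 1.0 header size of 22 bytes (read_header and its error handling are illustrative, not the real interface code):

```rust
use std::io::Read;

/// Number of header bytes this wire protocol version knows how to parse.
const KNOWN_HEADER_SIZE: usize = 22;

/// Read exactly `header_size` bytes, keep only the fields known to this wire
/// protocol version, and deliberately ignore any trailing padding.
pub fn read_header(
    stream: &mut impl Read,
    header_size: usize,
) -> Result<Vec<u8>, String> {
    if header_size < KNOWN_HEADER_SIZE {
        // Smaller than the fields we must parse: report an error to the client.
        return Err("header size smaller than expected request fields".to_string());
    }
    let mut raw = vec![0u8; header_size];
    stream.read_exact(&mut raw).map_err(|e| e.to_string())?;
    // Anything past the known fields is padding and is not parsed.
    raw.truncate(KNOWN_HEADER_SIZE);
    Ok(raw)
}
```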
Activating these lints should improve the quality of the code but would necessitate lots of changes among our crates. See #91
We're currently using Mbed Crypto 1.1.0, which exposes an old version of the PSA Crypto API.
After the interface work is done, we should update the version of mbed we pull to 2.0.0 and modify the way we create and handle keys (where necessary).
Changes that will probably be needed:
- permit_copy as a flag for key attributes

If providers are forced to only support one key lifetime, a new issue shall be raised.
During compilation we observe the following warning:
warning: couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
warning: set the LLVM_CONFIG_PATH environment variable to a valid `llvm-config` executable
This comes from the bindgen crate and its dependency, clang-sys. clang-sys searches for the libclang library on the system using different means, one of which is executing llvm-config --prefix.
Even if the libclang library is found on the system, the warning is still emitted.
Currently every request spawns a new thread; this can turn out to be very inefficient if a large number of requests hits the server at the same time, constantly. A better option would be to implement a thread pool and use it to allocate requests.
An idiomatic implementation is suggested in The Rust Book.
This change will be tested as part of the stress tests, when those will be implemented.
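A condensed version of the pool suggested in The Rust Book: a fixed number of workers receive boxed jobs over a shared channel, so the per-request thread spawn disappears. Dropping the pool closes the channel and joins the workers.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

pub struct ThreadPool {
    workers: Vec<thread::JoinHandle<()>>,
    sender: Option<mpsc::Sender<Job>>,
}

impl ThreadPool {
    pub fn new(size: usize) -> ThreadPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..size)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // The lock is released at the end of this statement, so
                    // other workers can pick up jobs while this one runs.
                    let message = receiver.lock().unwrap().recv();
                    match message {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed: shut down
                    }
                })
            })
            .collect();
        ThreadPool { workers, sender: Some(sender) }
    }

    pub fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.sender.as_ref().unwrap().send(Box::new(f)).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        // Closing the channel makes every worker's recv() fail, ending its loop.
        drop(self.sender.take());
        for worker in self.workers.drain(..) {
            worker.join().unwrap();
        }
    }
}
```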
Because of #69 we need a way of launching Parsec with a configuration file other than the one in the directory where the command is executed.
A command line argument should be added to select the configuration path, defaulting to the one in the current directory if not present.
At the moment the Mbed Crypto provider uses fixed-size (1024 bytes) buffers for operations where data is returned from Mbed Crypto, without checking if the operation failed for lack of space.
This could be improved by either computing a sufficient size before the operation and/or by checking the result value in case the failure had to do with the buffer size. I would lean towards just checking the result, although allocating too much memory might become costly in a busy environment (do we care for the Mbed provider?). So, a fix would be to allocate 1024 bytes, check the result and if it was not enough allocate more and try again (up to a configurable threshold).
The two operations that currently export any data are "sign" and "export public key". When we get to encrypting data we'll definitely have to pay more attention to buffer sizes.
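The retry idea above could be sketched like this; the "buffer too small" status value and the closure stand in for the real Mbed Crypto status code and call:

```rust
const INITIAL_BUFFER_SIZE: usize = 1024;
const MAX_BUFFER_SIZE: usize = 16384; // configurable threshold
const BUFFER_TOO_SMALL: i32 = -138; // stand-in for the "buffer too small" status

/// Run `operation` with a growing output buffer: start at 1024 bytes and
/// double on a "buffer too small" failure, up to a threshold. The closure
/// returns the number of bytes actually written on success.
pub fn with_output_buffer<F>(mut operation: F) -> Result<Vec<u8>, i32>
where
    F: FnMut(&mut [u8]) -> Result<usize, i32>,
{
    let mut size = INITIAL_BUFFER_SIZE;
    loop {
        let mut buffer = vec![0u8; size];
        match operation(&mut buffer) {
            Ok(written) => {
                buffer.truncate(written);
                return Ok(buffer);
            }
            // Too small and still under the threshold: double and retry.
            Err(BUFFER_TOO_SMALL) if size < MAX_BUFFER_SIZE => size *= 2,
            // Any other failure (or threshold reached) is returned as-is.
            Err(status) => return Err(status),
        }
    }
}
```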
Keys are currently stored where the service is run, but there should be a configuration parameter to choose where.
This is only possible after #38.
The top level modules (TLM) of the service should reflect the importance of said components. At the moment, each type of component is placed in a separate TLM. It would be useful, for the sake of limiting the number of TLMs, to redistribute them.
An option for the TLM set would be:
- authenticators
- providers
- core, containing the current front and back modules
- utils, containing the current key_id_managers and, in the future, the configuration logic and other bits that do not fit in other boxes

The change should be reflected in the documentation.
It might be useful to use tests/ci.sh for manual testing. But, to be able to do that we can make some improvements. For example:
The Owner Hierarchy is usually (and really should be) protected through a password/authentication value. At the moment we do not make this configurable and rely on the fact that the simulator (against which we run all tests) does not set up a password by default.
We need to change both the tests and the provider to accommodate this.
We should remove the mental strain of remembering to "release" key slots whenever we finish an operation by using a finally closure to do it for us whenever the key handle goes out of scope.
It would probably work something like:
let key_handle = open_key(...);
let _finally = finally( || release_key(key_handle, &self.key_handle_mutex) );
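An alternative to a finally helper is an RAII guard whose Drop impl releases the key when it goes out of scope, even on early returns or panics. This sketch records releases in a Vec for illustration; release_key stands in for the real call.

```rust
use std::sync::Mutex;

/// Guard that releases its key handle when dropped.
pub struct KeyHandleGuard<'a> {
    pub handle: u32,
    /// Records released handles, standing in for the real bookkeeping.
    pub released: &'a Mutex<Vec<u32>>,
}

impl Drop for KeyHandleGuard<'_> {
    fn drop(&mut self) {
        // Runs automatically at end of scope: no "remember to release" burden.
        release_key(self.handle, self.released);
    }
}

/// Stand-in for the real release call.
fn release_key(handle: u32, released: &Mutex<Vec<u32>>) {
    released.lock().unwrap().push(handle);
}
```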
We should investigate our use of unsafe blocks and be assured of their safety. Ideally, we should do the same for our dependencies. Tools like Rust Safety Dance could help us do that.
This issue tracks the work of adding a new provider in PARSEC for TPM support. This new provider will implement the Provide trait to respond to clients' requests.
Multiple options are possible for the implementation:
cc @parallaxsecond/maintainers
I think it could be tested by changing nothing but having a different configuration file which points to the SoftHSM library. There is an available Dockerfile that installs the library that we can use.
I think we also have to think about how we are going to scale testing of multiple providers on the CI.
I was thinking of having single-provider tests (our current tests) and multiple-provider tests that exercise the configuration rather than a specific provider implementation. For that purpose we should have multiple configuration files for all of those cases: one per provider and another one including them all.
This issue only happens once in a while, and only on the Travis CI Arm64 tests.
Logs of an example of failed test.
The configuration is: DOCKER_IMAGE_NAME=pkcs11-provider DOCKER_IMAGE_PATH=tests/per_provider/provider_cfg/pkcs11 SCRIPT="tests/ci.sh pkcs11"
Relevant part of the logs:
Start Parsec for stress tests
Finished dev [unoptimized + debuginfo] target(s) in 0.18s
Running `target/debug/parsec --config tests/per_provider/provider_cfg/pkcs11/config.toml`
Execute stress tests
Finished test [unoptimized + debuginfo] target(s) in 0.18s
Running target/debug/deps/parsec_service-9799ccc9d5a72007
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 12 filtered out
Running target/debug/deps/parsec-8c1e7cba57c8e590
Running target/debug/deps/mod-c4e7c83f5440ad54
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
running 1 test
thread '<unnamed>' panicked at 'Failed to import key: ConnectionError', src/libcore/result.rs:1188:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:84
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:61
4: core::fmt::write
at src/libcore/fmt/mod.rs:1025
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:65
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:50
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:193
9: std::panicking::default_hook
at src/libstd/panicking.rs:210
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:471
11: rust_begin_unwind
at src/libstd/panicking.rs:375
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:84
13: core::result::unwrap_failed
at src/libcore/result.rs:1188
14: core::result::Result<T,E>::expect
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/result.rs:983
15: parsec_client_test::stress_test_client::StressTestWorker::execute_request
at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:196
16: parsec_client_test::stress_test_client::StressTestWorker::run_test
at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:134
17: parsec_client_test::stress_test_client::StressTestClient::execute::{{closure}}
at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:82
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[ERROR parsec_service::front::front_end] Failed to send response; error: generic input/output error
test per_provider::stress_test::stress_test ... test per_provider::stress_test::stress_test has been running for over 60 seconds
test per_provider::stress_test::stress_test ... FAILED
failures:
---- per_provider::stress_test::stress_test stdout ----
thread 'per_provider::stress_test::stress_test' panicked at 'Test thread panicked: Any', src/libcore/result.rs:1188:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:84
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:61
4: core::fmt::write
at src/libcore/fmt/mod.rs:1025
5: std::io::Write::write_fmt
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/io/mod.rs:1426
6: std::io::impls::<impl std::io::Write for alloc::boxed::Box<W>>::write_fmt
at src/libstd/io/impls.rs:156
7: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:65
8: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:50
9: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:193
10: std::panicking::default_hook
at src/libstd/panicking.rs:207
11: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:471
12: rust_begin_unwind
at src/libstd/panicking.rs:375
13: core::panicking::panic_fmt
at src/libcore/panicking.rs:84
14: core::result::unwrap_failed
at src/libcore/result.rs:1188
15: core::result::Result<T,E>::expect
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/result.rs:983
16: parsec_client_test::stress_test_client::StressTestClient::execute
at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:93
17: mod::per_provider::stress_test::stress_test
at tests/per_provider/stress_test.rs:30
18: mod::per_provider::stress_test::stress_test::{{closure}}
at tests/per_provider/stress_test.rs:19
19: core::ops::function::FnOnce::call_once
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/ops/function.rs:232
20: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
21: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:78
22: std::panicking::try
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:270
23: std::panic::catch_unwind
at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:394
24: test::run_test_in_process
at src/libtest/lib.rs:567
25: test::run_test::run_test_inner::{{closure}}
at src/libtest/lib.rs:474
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
failures:
per_provider::stress_test::stress_test
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 34 filtered out
error: test failed, to rerun pass '--test mod'
Service components should be constructed exclusively through builders which do the whole work of initialising the component.
While most builders already perform the required steps, some need to be expanded to include initialisation of the component. Others need to be written altogether.
The builders that need updates are those where init should be called as part of build.
The builders should then be leveraged in the main script on service startup.
We should review all the documentation comments in all of our crates and modify them if they are not correct.
We should add examples wherever it is necessary.
Maybe it would be a good idea to put parsec on crates.io as well so that it can be installed with cargo install.
In our Cargo.toml we are currently using a fork of the serde_asn1_der crate to have big integer support. The picky-asn1-der crate, which was just released, should handle that as well and would be less hacky. Let's switch to that!
The following command:
cargo clippy --all-targets --all-features -- -D clippy::all -D clippy::pedantic -D clippy::cargo
will enable all Clippy lints on all of Parsec's code.
We should investigate which of those should be fixed.
Currently we only allow the owner hierarchy to be protected with a password (i.e. a String auth value), however this can be set to a binary value.
A suggestion would be to support, in the owner_hierarchy_auth field, values of the form str:... and hex:..., defaulting to considering the value a string if no prefix is specified. Presumably the work in the service should just be around how this value is passed to the TSS crate, which should do the parsing behind the scenes. The issue tracking the TSS crate changes is here: parallaxsecond/rust-tss-esapi#37
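A rough sketch of how such a value could be parsed (a hypothetical parse_auth_value helper; this is not existing Parsec or TSS crate code):

```rust
/// Parse an auth value of the form "hex:..." or "str:...", defaulting to
/// treating the value as a string when no prefix is given.
pub fn parse_auth_value(value: &str) -> Result<Vec<u8>, String> {
    if let Some(hex) = value.strip_prefix("hex:") {
        if hex.len() % 2 != 0 {
            return Err("hex value must have an even number of digits".to_string());
        }
        // Decode pairs of hex digits into bytes.
        (0..hex.len())
            .step_by(2)
            .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).map_err(|e| e.to_string()))
            .collect()
    } else if let Some(s) = value.strip_prefix("str:") {
        Ok(s.as_bytes().to_vec())
    } else {
        // No prefix: default to considering the value a string.
        Ok(value.as_bytes().to_vec())
    }
}
```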
It would be nice to have PARSEC installed and run inside a Docker container as a systemd daemon from a local checkout of this repository.
The container could share the Unix socket with the host to communicate with the daemon.
This could be put in the CI to test the systemd installation.
Currently our log messages specify the error that occurred, however it would be useful to also include the location in code.
Example:
Currently: "Failed to read request; status: {}"
Enhanced: "FrontEndHandler handle_request: Failed to read request; status: {}"
Alternatively, when we implement actual logging, this might be handled for us by the logging framework.
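If we keep hand-rolled messages for now, Rust's built-in module_path! and line! macros can prepend the location automatically through a small wrapper macro; this is a sketch, not a proposal for the final logging setup:

```rust
/// Prepend the calling module and line to a formatted message. The built-in
/// macros expand at the call site, so the location is always accurate.
macro_rules! log_error {
    ($($arg:tt)*) => {
        format!("[{}:{}] {}", module_path!(), line!(), format!($($arg)*))
    };
}

/// Illustrative call site mirroring the "Failed to read request" example.
pub fn format_read_error(status: u32) -> String {
    log_error!("Failed to read request; status: {}", status)
}
```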
Currently all tests in our CI execute on x86.
We need to make sure that compiling the service is as easy as cargo build --target aarch64-unknown-linux-gnu. It currently is, except for the compilation of Mbed Crypto, which needs extra logic in build.rs scripts.
We also need to investigate CI testing on this target and what our options are.
We can look at cross and at native testing using Travis.
PARSEC should run as a daemon, fully as a background process. Its logs should go to a convenient place and not to the standard output.
It should be possible to gracefully terminate it and wait for all of its threads to finish.
We panic in the code in many places with expect or unwrap. Ideally we want to return an error instead of panicking, unless we are absolutely sure it is the right thing to do. We should maybe write guidelines somewhere about when "this is the right thing to do".
A strong benefit of the configuration is that you can change at run-time the providers that you want to use in the configuration file and send a SIGHUP signal to Parsec to reload it.
This is only possible if all of the providers can be loaded dynamically. It is currently the case for the PKCS 11 one, but we need to provide the same behaviour for the TPM and Mbed Crypto ones.
It will also allow Parsec to be compiled even on systems that do not have those libraries available (it will only fail at run-time).
Go thoroughly through the Parsec book and update what is needed including diagrams to make sure everything is up to date.
This is needed before #89
Our Mbed Provider is closely tied to the FFI layer to Mbed Crypto, which has led to questions being raised about the safety of the code, especially around unsafe blocks and how such blocks should be handled in the larger context.
A sensible solution is to split out the code directly interfacing with Mbed Crypto into its own module/crate, keeping the unsafe calls contained.