parsec's Issues

Add a PKCS 11 Provider

This issue tracks the work of adding a new provider in PARSEC for PKCS 11 support. This new provider will implement the Provide trait to respond to client requests. PKCS 11 will allow us to use a variety of HSMs and other crypto backends.

The chosen implementation is to use the rust-pkcs11 crate, which offers a wrapper around the PKCS 11 APIs. The configuration will contain information about the PKCS 11 library to load, the slot to use and the PIN needed to log in (if any).

cc @parallaxsecond/maintainers

Split out ProviderConfig

Configuration for providers is currently passed around using a common ProviderConfig. Ideally, we would replace this with a configuration structure that contains the common parts plus an enum covering the provider-specific options. This would make it easier to specify, directly in the Config definition, which options are Optional and which are not.

Implement configurable exclusion of deprecated primitives

As noted in the soon-to-be-merged protobuf contracts, some (combinations of) key attributes that Parsec supports are considered deprecated and should only be used by clients with a clear understanding of the security implications.

Thomas Fossati's suggestion is that the deprecated primitives are rejected by default, but can be enabled through the runtime configuration by admins.

There is still a question about where these checks should be enforced. My suggestions:

  • Create and maintain a list of deprecated primitives in parsec-interface-rs, along with means of checking whether a request contains any such primitives
  • Implement a feature in parsec-interface-rs that, when enabled, allows deprecated algorithms; with the feature disabled (the default), an error is returned when one is encountered
  • Enable the feature in parsec, but enforce the check through some other means, somewhere along the request pipeline (most likely in the BackEndHandler, before sending the request to the provider)
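The feature-gated check could be sketched as follows (the list contents, feature name and function names here are all hypothetical; the real list would live in parsec-interface-rs):

```rust
// Hypothetical list of deprecated primitives; in practice this would live
// in parsec-interface-rs alongside the operation definitions.
const DEPRECATED_HASHES: &[&str] = &["MD5", "SHA-1"];

fn is_deprecated(hash: &str) -> bool {
    DEPRECATED_HASHES.contains(&hash)
}

// With the (hypothetical) `deprecated-primitives` feature disabled, requests
// using such primitives are rejected by default.
fn check_request(hash: &str) -> Result<(), String> {
    if cfg!(feature = "deprecated-primitives") || !is_deprecated(hash) {
        Ok(())
    } else {
        Err(format!("{} is deprecated", hash))
    }
}

fn main() {
    println!("{:?}", check_request("SHA-256"));
    println!("{:?}", check_request("MD5"));
}
```

The same check could instead be driven by a runtime configuration flag rather than a compile-time feature, which matches the suggestion that admins enable deprecated primitives through configuration.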

Add version structures for better handling of versions

Service/protocol versions are used in request/response headers and in the ping operation.

At the moment these values are stored as two separate integers. It would be helpful if:

  • They were represented by a Version object
  • We had an explicit agreement on how to handle a difference between the version of a backend handler and the version specified in a request header handled by it.
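A minimal sketch of such a Version type, using derived ordering for comparisons (field widths are illustrative):

```rust
// A Version object replacing the two separate integers currently used in
// request/response headers and the ping operation.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Version {
    major: u8,
    minor: u8,
}

impl Version {
    fn new(major: u8, minor: u8) -> Self {
        Version { major, minor }
    }
}

fn main() {
    let supported = Version::new(1, 0);
    let requested = Version::new(1, 2);
    // Deriving Ord gives lexicographic comparison: major first, then minor.
    println!("{}", requested > supported);
}
```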

Key handle manipulation is not thread-safe in Mbed Crypto

As mentioned here, Mbed Crypto is not yet thread safe when it comes to allocating or releasing key slots.
This came out of our own stress tests as well, with the service failing to handle requests or crashing with a segmentation fault. The underlying problem was that all the threads processing requests were being allocated the same key slot (number 32) and were modifying each other's keys.

This issue should be addressed by hiding the calls to create, open or close a key behind a mutex.
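A minimal sketch of the mutex approach, with hypothetical stand-ins for the Mbed Crypto FFI calls:

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the unsafe Mbed Crypto FFI calls.
fn psa_open_key(id: u32) -> u32 { id + 100 }
fn psa_close_key(_handle: u32) {}

struct KeyHandleGuard {
    // A single mutex serialises all slot allocation/release calls, since
    // Mbed Crypto's slot management is not thread-safe.
    key_handle_mutex: Mutex<()>,
}

impl KeyHandleGuard {
    fn open(&self, id: u32) -> u32 {
        let _lock = self.key_handle_mutex.lock().unwrap();
        psa_open_key(id)
    }

    fn close(&self, handle: u32) {
        let _lock = self.key_handle_mutex.lock().unwrap();
        psa_close_key(handle);
    }
}

fn main() {
    let guard = KeyHandleGuard { key_handle_mutex: Mutex::new(()) };
    let h = guard.open(1);
    guard.close(h);
    println!("handle: {}", h);
}
```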

Implement configuration

Implement a configurator module whose task is to spin up all the system components, given a TOML configuration.

Proposing the following changes:

  • Implement and expose a configuration structure for each component that needs configuring
  • Add said structure to the Builder of the component so that build() results in a fully configured module
  • Implement the configurator to pick up a TOML tree structure, parse it into its components and initialise everything in the right order
  • Move system spin-up from main to the configurator, which returns the assembled service ready to be run

Example configuration file:


  # PARSEC Service Configuration File

  title = "Parsec Config"

  [listener]
  socket_path = "/tmp/security-daemon-socket"
  socket_timeout = 100

  [[key_id_manager]]
  type = "ondisk"
  name = "on-disk-manager"
  path = "~/.parsec/key_ids/"

  [[provider]]
  type = "mbed-userspace"
  persist_keys = true
  key_id_manager = "on-disk-manager"
  converter = "protobuf"

  [[key_id_manager]]
  type = "mongo"
  name = "mongo-manager"
  address = "127.0.0.1:27017"

  [[provider]]
  type = "pkcs11"
  key_id_manager = "mongo-manager"
  converter = "protobuf"

  [[authenticator]]
  type = "simple"
  default_app_name = "root"

  [[authenticator]]
  type = "jwt"

Outstanding questions:

  • What should be the final structure of the config file?
  • Should we move to generating provider IDs dynamically?
  • Do we offer the option of setting the provider ID in the configuration?
  • What happens if we have duplicate authenticators? (e.g. two "simple" authenticators with different default app names)
  • Do we allow defaulting of parameters? (e.g. the converter for providers would always be "protobuf", at least for a while)
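As a sketch, the structures the configurator could deserialize the TOML above into might look like this (only a subset of the fields; in practice they would carry serde Deserialize derives and be fed to a TOML parser):

```rust
// Sketch of the configuration structures the configurator could parse the
// example TOML file into.
struct ListenerConfig {
    socket_path: String,
    socket_timeout: u64,
}

struct KeyIdManagerConfig {
    manager_type: String, // `type` in the TOML; renamed, as it is a Rust keyword
    name: String,
}

struct ServiceConfig {
    listener: ListenerConfig,
    // Each `[[key_id_manager]]` table maps to one element of this Vec.
    key_id_manager: Vec<KeyIdManagerConfig>,
}

fn main() {
    let config = ServiceConfig {
        listener: ListenerConfig {
            socket_path: "/tmp/security-daemon-socket".into(),
            socket_timeout: 100,
        },
        key_id_manager: vec![KeyIdManagerConfig {
            manager_type: "ondisk".into(),
            name: "on-disk-manager".into(),
        }],
    };
    println!("{}", config.key_id_manager[0].name);
}
```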

Rectify and document our versioning system

We use three different versions in Parsec:

  1. The wire protocol version
  2. The Parsec service version
  3. The provider version

The wire protocol version

It describes the version of the wire protocol itself: which fields make up requests and responses. It is needed to parse request and response headers.
The wire protocol might change in the future.
The Ping operation returns the highest wire protocol version supported by the service.
Clients need to perform a bootstrapping sequence if they want to switch to a higher wire protocol version:

  1. The client sends Ping using wire protocol version 1.0
  2. The service answers with the highest wire protocol version available, x.y
  3. The client now knows that it can use wire protocol version x.y for further requests.

Responses are made by the service using the same wire protocol version as the request they answer, so that clients can parse them.

The first four fields of the wire protocol (magic number, header size, version major and version minor) will never change for requests and responses.

If a client sends a request with a wire protocol version strictly higher than the highest one supported by the service, a response status VersionTooBig will be sent back as an error.

Actions to take

  • Make sure that the wire protocol version system is documented properly and is consistent across all of our documentation
  • Add in the client library guide explanation about how the bootstrapping sequence should be performed.
  • Modify the front-end handler to first parse the first four fields, check that the version sent is not higher than the highest supported (send back VersionTooBig otherwise) and pass the rest to the appropriate wire protocol version deserializer. We should add the reverse logic on the return journey, serializing the response using the same wire protocol version as the request.
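The version check in the front-end handler could be sketched as follows (the highest supported version used here is illustrative):

```rust
// Sketch of the front-end handler check: after parsing the four fixed
// fields, reject any request whose wire protocol version is strictly
// higher than the highest one the service supports.
const HIGHEST_SUPPORTED: (u8, u8) = (1, 0); // illustrative value

fn check_wire_version(major: u8, minor: u8) -> Result<(), &'static str> {
    // Tuple comparison is lexicographic: major first, then minor.
    if (major, minor) > HIGHEST_SUPPORTED {
        // Serialised as the VersionTooBig response status in practice.
        Err("VersionTooBig")
    } else {
        Ok(())
    }
}

fn main() {
    println!("{:?}", check_wire_version(1, 0));
    println!("{:?}", check_wire_version(2, 0));
}
```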

The Parsec service version

This version is returned by the ListProviders operation in the ProviderInfoProto structure of the Core Provider. It represents the implementation version of the whole Parsec service. Every time something changes in the Parsec service, this number should be updated following semantic versioning rules as closely as possible. It is a way for clients to check whether the Parsec version they are using contains the bug and security fixes, or the new features, that they need.
To make it easy, this version should be the same as the version field in the Cargo.toml file of the parsec crate.

Actions to take

  • Document this in the ListProviders section.
  • Should probably add a CHANGELOG page in the documentation describing the changes that each version brings so clients are aware.
  • Add the logic in the Core Provider to fetch its version directly from the Cargo.toml file.
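Fetching the version from Cargo.toml needs no file parsing at run-time: cargo exports the crate's version field as a compile-time environment variable. A sketch:

```rust
// Cargo exposes the `version` field of Cargo.toml as the
// CARGO_PKG_VERSION environment variable at compile time, so the Core
// Provider can embed it directly in the binary.
fn service_version() -> &'static str {
    // `env!("CARGO_PKG_VERSION")` would fail to compile outside a cargo
    // build; `option_env!` keeps this sketch self-contained.
    option_env!("CARGO_PKG_VERSION").unwrap_or("0.0.0")
}

fn main() {
    println!("Parsec service version: {}", service_version());
}
```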

The Provider version

Each provider (other than the Core provider) also returns a version as part of the ListProviders operation. This version represents the implementation version of the provider. It is needed if the provider is implemented as a co-server, for example, as there would otherwise be no way to link the Parsec service version to the provider version.

Actions to take

  • Document this in the ListProviders section.
  • I propose that for the providers that are statically linked with the Parsec service, their version should be the same as the Parsec service version, that is, the same as the Core Provider's. This makes sense as they share the same dependencies as the Parsec service.

Investigate the possibility of using the FAPI TSS layer

The current implementation of the FAPI layer (and the interface it exposes) is quite promising in terms of ease of use, compared with the ESAPI layer. One example would be the complete lack of authentication sessions, which are, presumably, taken care of by the TSS.

We should investigate this a bit and, if it is as beneficial as it looks, try and upgrade our use of the TPM. Saying this mostly so that we're consistent with our view of the world where the system should default to safe settings when you don't need specific ones, and this very much applies to our use case here.

Review response codes returned by providers

The response codes returned by the Provide methods should be reviewed and documented in our book. We are currently not consistent among all of our providers, and the response codes should conform to what PSA describes.

Rectify header size handling

In the interface, we are currently checking whether the header size in the request differs from the header size expected in the 1.0 wire protocol format (22). If it does, we return InvalidHeader.
We should not make that check there.
Instead, we should read "header size" bytes and ignore (not parse) any bytes beyond the fields defined by the wire protocol version in use. If the header size given is smaller than the size of the header we need to parse, we should return an error to the client.
This would be useful if clients send padding at the end of the header.

This should be spelled out in the documentation.
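The proposed handling could be sketched as follows (the 22-byte figure is the 1.0 header size mentioned above; everything else is illustrative):

```rust
use std::io::{self, Read};

// Sketch: read exactly `header_size` bytes, parse only the fields defined
// by the wire protocol version in use, and ignore any trailing padding.
// KNOWN_FIELDS_LEN stands in for the parsed portion of the 1.0 header.
const KNOWN_FIELDS_LEN: usize = 22;

fn read_header(stream: &mut impl Read, header_size: usize) -> io::Result<Vec<u8>> {
    if header_size < KNOWN_FIELDS_LEN {
        // Header too small to contain the fields we must parse: reject.
        return Err(io::Error::new(io::ErrorKind::InvalidData, "header too small"));
    }
    let mut buf = vec![0u8; header_size];
    stream.read_exact(&mut buf)?; // consumes any padding bytes too
    buf.truncate(KNOWN_FIELDS_LEN); // only the known fields get parsed
    Ok(buf)
}

fn main() {
    let data = vec![0u8; 30];
    let header = read_header(&mut &data[..], 30).unwrap();
    println!("parsed {} bytes", header.len());
}
```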

Update to Mbed Crypto v2.0.0

We're currently using Mbed Crypto 1.1.0, which exposes an old version of the PSA Crypto API.

After the interface work is done, we should update the version of mbed we pull to 2.0.0 and modify the way we create and handle keys (where necessary).

Changes that will probably be needed:

  • Updating the way Mbed Provider handles key creation (as per the new PSA spec)
  • Updating the way we deal with key lifetimes (deciding how this will be handled is WIP)
  • Adding permit_copy as a flag for key attributes

If providers will be forced to only support one key lifetime, a new issue shall be raised.

Warning during compilation about `llvm-config --prefix`

During compilation we observe the following warning:

warning: couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
warning: set the LLVM_CONFIG_PATH environment variable to a valid `llvm-config` executable

This is coming from the bindgen crate and its dependency, clang-sys.
clang-sys searches for the libclang library on the system using different means, one of which is executing llvm-config --prefix.
Even if the libclang library is found on the system, the warning is still emitted.

Implement a thread pool

Currently every request spawns a new thread. This can turn out to be very inefficient if a large number of requests hits the server at the same time, constantly. A better option would be to implement a thread pool and use it to allocate requests.

An idiomatic implementation is suggested in The Rust Book.

This change will be tested as part of the stress tests, when those will be implemented.
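A condensed sketch of the channel-based pool from The Rust Book: a fixed number of workers pull boxed jobs off a shared channel instead of one thread being spawned per request.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    sender: mpsc::Sender<Job>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl ThreadPool {
    fn new(size: usize) -> ThreadPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..size)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Taking the lock in its own statement releases it
                    // before the job runs, so jobs execute in parallel.
                    let message = receiver.lock().unwrap().recv();
                    match message {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed: shut down
                    }
                })
            })
            .collect();
        ThreadPool { sender, workers }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.sender.send(Box::new(f)).unwrap();
    }
}

fn main() {
    let pool = ThreadPool::new(4);
    let (tx, rx) = mpsc::channel();
    for i in 0..8 {
        let tx = tx.clone();
        pool.execute(move || tx.send(i).unwrap());
    }
    drop(tx);
    let sum: i32 = rx.iter().sum();
    println!("sum = {}", sum);
    drop(pool.sender); // closing the channel lets workers exit
    for worker in pool.workers {
        worker.join().unwrap();
    }
}
```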

Pass config.toml path as command-line argument

Because of #69 we need a way of launching Parsec with a configuration file other than the one in the directory where the command is executed.
A command-line argument should be added to select the configuration path, defaulting to the one in the current directory if not present.
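A sketch of the argument handling, using the --config flag name that already appears in our CI invocations and defaulting to config.toml in the current directory:

```rust
use std::env;

// Look for `--config <path>` among the command-line arguments; fall back
// to ./config.toml when the flag is absent.
fn config_path(args: &[String]) -> String {
    args.windows(2)
        .find(|pair| pair[0] == "--config")
        .map(|pair| pair[1].clone())
        .unwrap_or_else(|| String::from("config.toml"))
}

fn main() {
    let args: Vec<String> = env::args().collect();
    println!("loading configuration from {}", config_path(&args));
}
```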

Use dynamically-sized buffers in Mbed provider

At the moment the Mbed Crypto provider uses fixed-size (1024 bytes) buffers for operations where data is returned from Mbed Crypto, without checking if the operation failed for lack of space.

This could be improved by computing a sufficient size before the operation and/or by checking the result value in case the failure had to do with the buffer size. I would lean towards just checking the result, although allocating too much memory might become costly in a busy environment (do we care for the Mbed provider?). So, a fix would be to allocate 1024 bytes, check the result and, if that was not enough, allocate more and try again (up to a configurable threshold).

The two operations that currently export any data are "sign" and "export public key". When we get to encrypting data we'll definitely have to pay more attention to buffer sizes.
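The retry strategy could be sketched as follows, with a hypothetical operation that reports the size it needed when the buffer was too small:

```rust
// Sketch of the retry strategy: start with a 1024-byte buffer and, if the
// operation reports it was too small, grow up to a configurable cap.
fn with_growing_buffer<F>(mut op: F) -> Result<Vec<u8>, &'static str>
where
    // The closure stands in for a provider operation: Ok(written bytes)
    // on success, Err(needed bytes) when the buffer was too small.
    F: FnMut(&mut [u8]) -> Result<usize, usize>,
{
    const MAX: usize = 16 * 1024; // configurable threshold
    let mut size = 1024;
    loop {
        let mut buf = vec![0u8; size];
        match op(&mut buf) {
            Ok(written) => {
                buf.truncate(written);
                return Ok(buf);
            }
            Err(needed) if needed <= MAX => size = needed, // retry, larger
            Err(_) => return Err("output too large"),
        }
    }
}

fn main() {
    // A fake "sign" operation that needs 2000 bytes of output.
    let out = with_growing_buffer(|buf| {
        if buf.len() < 2000 { Err(2000) } else { Ok(2000) }
    })
    .unwrap();
    println!("{}", out.len());
}
```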

Rearrange modules for a more structured feel

The top level modules (TLM) of the service should reflect the importance of said components. At the moment, each type of component is placed in a separate TLM. It would be useful, for the sake of limiting the number of TLMs, to redistribute them.

An option of TLM set would be:

  • authenticators
  • providers
  • core - containing the current front and back modules
  • utils - containing the current key_id_managers and, in the future, the configuration logic and other bits that do not fit in other boxes

The change should be reflected in the documentation.

Improvements for tests/ci.sh

It might be useful to use tests/ci.sh for manual testing, but to be able to do that we should make some improvements. For example:

  1. The script can't be run more than once in some cases because:
    • For the TPM and "all" providers tests, tpm_server is started but not stopped
    • For the PKCS11 and "all" providers tests, find_slot_number.sh is run. This script adds a line to config.toml without checking whether the line already exists, and parsec fails to start.
  2. Parsec's status is not checked after "cargo run", and "cargo test" is run even if parsec failed to start.
  3. If possible we should replace all the hard-coded timeouts with waiting for an event.
  4. Maybe make "cargo clean" optional

TPM provider must support Owner Hierarchy authentication

The Owner Hierarchy is usually (i.e. should really be) protected by a password/authentication value. At the moment we do not make this configurable and rely on the fact that the simulator (against which we run all tests) does not set up a password by default.

We need to change both the tests and the provider to accommodate this.

Drop key handles implicitly

We should remove the mental strain of remembering to "release" key slots whenever we finish an operation, by using a finally closure that does it for us whenever the key handle goes out of scope.

It would probably work something like:

let key_handle = open_key(...);
let _finally = finally( || release_key(key_handle, &self.key_handle_mutex) );
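An alternative sketch using a Drop guard, which achieves the same effect without a finally helper: the handle is released when the guard goes out of scope, even on early return or panic. All names here are hypothetical stand-ins.

```rust
use std::sync::Mutex;

// Stand-ins for the real open/release calls.
fn open_key(id: u32) -> u32 { id + 100 }
fn release_key(_handle: u32) {}

struct KeyGuard<'a> {
    handle: u32,
    mutex: &'a Mutex<()>,
}

impl Drop for KeyGuard<'_> {
    fn drop(&mut self) {
        // Release under the key-handle mutex (see the thread-safety issue).
        let _lock = self.mutex.lock().unwrap();
        release_key(self.handle);
    }
}

fn sign_with_key(id: u32, mutex: &Mutex<()>) -> u32 {
    let guard = KeyGuard { handle: open_key(id), mutex };
    // ... perform the operation with guard.handle ...
    guard.handle
    // `guard` is dropped here: the key handle is released automatically.
}

fn main() {
    let mutex = Mutex::new(());
    println!("{}", sign_with_key(1, &mutex));
}
```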

Add a Trusted Platform Module Provider

This issue tracks the work of adding a new provider in PARSEC for TPM support. This new provider will implement the Provide trait to respond to client requests.

Multiple options are possible for the implementation:

  • produce an FFI binding to one of the TPM2 Software Stack API layers and dynamically link to a library implementing it (path provided in the configuration). Ideally we want to use the highest abstraction level (the Feature API), but it seems to be a work in progress. The highest-level API we can use today is the Enhanced System API.
  • investigate the use of the tss-sapi crate, to see if it would fit our needs
  • investigate the use of a PKCS 11 to TPM 2 bridge. At first glance, though, the README file warns that it is not production ready.

cc @parallaxsecond/maintainers

Test strategy for our providers on the CI

I think it could be tested by changing nothing but the configuration file, pointing it at the SoftHSM library. There is an available Dockerfile that installs the library that we can use.

I think we also have to consider how we are going to scale testing of multiple providers on the CI.

I was thinking of having single-provider tests (our current tests) and multiple-provider tests that exercise the configuration rather than a provider implementation. For that purpose we should have multiple configuration files covering all of those cases: one per provider and another one including them all.

PKCS 11 provider stress tests sometimes fail

This issue only happens once in a while, and only on the Travis CI (Arm64 tests).
Logs of an example of a failed test.
The configuration is: DOCKER_IMAGE_NAME=pkcs11-provider DOCKER_IMAGE_PATH=tests/per_provider/provider_cfg/pkcs11 SCRIPT="tests/ci.sh pkcs11"

Relevant part of the logs:

Start Parsec for stress tests
    Finished dev [unoptimized + debuginfo] target(s) in 0.18s
     Running `target/debug/parsec --config tests/per_provider/provider_cfg/pkcs11/config.toml`
Execute stress tests
    Finished test [unoptimized + debuginfo] target(s) in 0.18s
     Running target/debug/deps/parsec_service-9799ccc9d5a72007

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 12 filtered out

     Running target/debug/deps/parsec-8c1e7cba57c8e590
     Running target/debug/deps/mod-c4e7c83f5440ad54

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out


running 1 test
thread '<unnamed>' panicked at 'Failed to import key: ConnectionError', src/libcore/result.rs:1188:5
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:84
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1025
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:193
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:210
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:471
  11: rust_begin_unwind
             at src/libstd/panicking.rs:375
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
  13: core::result::unwrap_failed
             at src/libcore/result.rs:1188
  14: core::result::Result<T,E>::expect
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/result.rs:983
  15: parsec_client_test::stress_test_client::StressTestWorker::execute_request
             at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:196
  16: parsec_client_test::stress_test_client::StressTestWorker::run_test
             at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:134
  17: parsec_client_test::stress_test_client::StressTestClient::execute::{{closure}}
             at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:82
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[ERROR parsec_service::front::front_end] Failed to send response; error: generic input/output error
test per_provider::stress_test::stress_test ... test per_provider::stress_test::stress_test has been running for over 60 seconds
test per_provider::stress_test::stress_test ... FAILED

failures:

---- per_provider::stress_test::stress_test stdout ----
thread 'per_provider::stress_test::stress_test' panicked at 'Test thread panicked: Any', src/libcore/result.rs:1188:5
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:84
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1025
   5: std::io::Write::write_fmt
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/io/mod.rs:1426
   6: std::io::impls::<impl std::io::Write for alloc::boxed::Box<W>>::write_fmt
             at src/libstd/io/impls.rs:156
   7: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   8: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   9: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:193
  10: std::panicking::default_hook
             at src/libstd/panicking.rs:207
  11: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:471
  12: rust_begin_unwind
             at src/libstd/panicking.rs:375
  13: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
  14: core::result::unwrap_failed
             at src/libcore/result.rs:1188
  15: core::result::Result<T,E>::expect
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/result.rs:983
  16: parsec_client_test::stress_test_client::StressTestClient::execute
             at /root/.cargo/git/checkouts/parsec-client-test-9c23f39b40cb6dae/8b637f9/src/stress_test_client.rs:93
  17: mod::per_provider::stress_test::stress_test
             at tests/per_provider/stress_test.rs:30
  18: mod::per_provider::stress_test::stress_test::{{closure}}
             at tests/per_provider/stress_test.rs:19
  19: core::ops::function::FnOnce::call_once
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/ops/function.rs:232
  20: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
  21: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:78
  22: std::panicking::try
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:270
  23: std::panic::catch_unwind
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:394
  24: test::run_test_in_process
             at src/libtest/lib.rs:567
  25: test::run_test::run_test_inner::{{closure}}
             at src/libtest/lib.rs:474
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


failures:
    per_provider::stress_test::stress_test

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 34 filtered out

error: test failed, to rerun pass '--test mod'

Improve builders for service components

Service components should be constructed exclusively through builders, which do all the work of initialising the component.

While most builders already perform the required steps, some need to be expanded to include initialisation of the component. Others need to be written altogether.

The builders that need updates are:

  • DomainSocket - init should be called in build
  • OnDiskKeyIdManager - new builder
  • MbedProvider - new builder

The builders should then be leveraged in the main script on service startup.

Review all the documentation comments

We should review all the documentation comments in all of our crates and modify them if they are not correct.
We should add examples wherever it is necessary.
Maybe it would be a good idea to publish parsec on crates.io as well so that:

  • it can be installed directly with cargo install
  • Parsec service developers can have easy access to the documentation

Check for more Clippy lints

The following command:

cargo clippy --all-targets --all-features -- -D clippy::all -D clippy::pedantic -D clippy::cargo

will enable all Clippy lints on all of Parsec's code.
We should investigate which of those should be fixed.

Allow TPM owner hierarchy auth to be non-string

Currently we only allow the owner hierarchy to be protected with a password (i.e. a String auth value); however, this can be set to a binary value.

A suggestion would be to support, in the owner_hierarchy_auth field, values of the form str:... and hex:..., defaulting to treating the value as a string if no prefix is specified. Presumably the work in the service should just be around how this value is passed to the TSS crate, which should do the parsing behind the scenes. The issue tracking the TSS crate changes is here: parallaxsecond/rust-tss-esapi#37
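The prefix handling could be sketched as follows (a hypothetical helper, not the TSS crate's actual API):

```rust
// Parse an `owner_hierarchy_auth` value with optional `str:` and `hex:`
// prefixes, defaulting to a plain string when no prefix is present.
fn parse_auth(value: &str) -> Result<Vec<u8>, String> {
    if let Some(hex) = value.strip_prefix("hex:") {
        if hex.len() % 2 != 0 {
            return Err("odd-length hex value".into());
        }
        // Decode each pair of hex digits into one byte.
        (0..hex.len())
            .step_by(2)
            .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).map_err(|e| e.to_string()))
            .collect()
    } else {
        let s = value.strip_prefix("str:").unwrap_or(value);
        Ok(s.as_bytes().to_vec())
    }
}

fn main() {
    println!("{:?}", parse_auth("hex:0a0b"));
    println!("{:?}", parse_auth("str:secret"));
    println!("{:?}", parse_auth("secret"));
}
```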

Test Parsec installation as a systemd daemon

It would be nice to have PARSEC installed and running inside a Docker container as a systemd daemon, from a local checkout of this repository.
The container could share with the host the Unix socket to communicate with the daemon.
This could be put in the CI to test systemd installation.

Improve logging message structure

Currently our log messages specify the error that occurred, however it would be useful to also include the location in code.
Example:
Currently: "Failed to read request; status: {}"
Enhanced: "FrontEndHandler handle_request: Failed to read request; status: {}"

Alternatively, when implementing actual logging, this might be sorted out for us by the logging framework (?)

Make PARSEC a daemon

PARSEC should run as a daemon, fully as a background process. Its logs should go to a convenient place and not to standard output.
It should be possible to gracefully terminate it and wait for all of its threads to finish.

Audit our use of panicking

We panic in the code in many places with expect or unwrap. Ideally we want to return an error instead of panicking, unless we are absolutely sure panicking is the right thing to do. We should perhaps write guidelines somewhere about when "this is the right thing to do".
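For illustration, the same parsing logic written with a panic versus with error propagation via the ? operator, so the caller can map the failure to a response status (function names are hypothetical):

```rust
use std::num::ParseIntError;

// Panics on malformed input: the whole worker thread dies.
fn parse_key_id_panicking(input: &str) -> u32 {
    input.parse().expect("invalid key ID")
}

// Returns the error to the caller with `?`, so the service can map it to a
// proper response status instead of crashing.
fn parse_key_id(input: &str) -> Result<u32, ParseIntError> {
    let id = input.parse()?;
    Ok(id)
}

fn main() {
    println!("{:?}", parse_key_id("42"));
    println!("{}", parse_key_id("not-a-number").is_err());
    let _ = parse_key_id_panicking("42");
}
```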

Have all the providers dynamically loadable

A strong benefit of the configuration is that you can change, at run-time, the providers you want to use in the configuration file and send a SIGHUP signal to Parsec to reload it.

This is only possible if all of the providers can be loaded dynamically. That is currently the case for the PKCS 11 one, but we need to provide the same behaviour for the TPM and Mbed Crypto ones.

It will also allow Parsec to be compiled even on systems that do not have those libraries available (it will only fail at run-time).

Review and update documentation

Go thoroughly through the Parsec book and update what is needed, including diagrams, to make sure everything is up to date.

This is needed before #89

Create a PSA Crypto Rust wrapper crate

Our Mbed Provider is closely tied to the FFI layer over Mbed Crypto, which has led to questions about the safety of the code, especially around unsafe blocks and how such blocks should be handled in the larger context.

A sensible solution is to split out the code directly interfacing with Mbed Crypto into its own module/crate, keeping the unsafe calls contained.
