While building the crypto back-end driver we wrote several helper functions for handling DescriptorChains; these could probably live in the DescriptorChain implementation itself.
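To make the idea concrete, here is a minimal sketch of the kind of helper that could move into the DescriptorChain implementation. The types below are a hypothetical model for illustration, not Firecracker's actual `DescriptorChain`:

```rust
// Hypothetical minimal model of a descriptor chain, for illustration only.

/// One entry of a chain: its length and whether it is device-writable.
struct Descriptor {
    len: u32,
    write_only: bool,
}

struct DescriptorChain {
    descs: Vec<Descriptor>,
}

impl DescriptorChain {
    /// Total length of the device-readable (request) part of the chain.
    fn readable_len(&self) -> u32 {
        self.descs.iter().filter(|d| !d.write_only).map(|d| d.len).sum()
    }

    /// Total length of the device-writable (response) part of the chain.
    fn writable_len(&self) -> u32 {
        self.descs.iter().filter(|d| d.write_only).map(|d| d.len).sum()
    }
}

fn main() {
    let chain = DescriptorChain {
        descs: vec![
            Descriptor { len: 16, write_only: false },
            Descriptor { len: 64, write_only: true },
        ],
    };
    assert_eq!(chain.readable_len(), 16);
    assert_eq!(chain.writable_len(), 64);
    println!("ok");
}
```

Helpers like these are generic over the chain itself rather than the crypto device, which is why they arguably belong upstream.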
When the back-end virtio-crypto driver receives a request, we copy the guest buffers into Firecracker memory and then pass that memory to the host cryptodev driver.
We should investigate whether we can pass the guest memory directly to the host driver and avoid the intermediate copy. We must be careful here: in some cases the front-end passes us vlf buffers split across multiple segments, and in those cases the copy cannot be avoided.
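The single-segment vs. multi-segment distinction can be sketched as follows. This is a hypothetical gather helper, not Firecracker's API; it only illustrates why a contiguous view is free for one segment but forces a copy for several:

```rust
// Hedged sketch: a guest buffer described by a descriptor chain may span
// several memory segments; a host driver that expects one contiguous slice
// forces us to gather (copy) the segments.

use std::borrow::Cow;

/// A single guest-memory segment (assumed already validated/translated).
struct Segment<'a> {
    data: &'a [u8],
}

/// One segment: hand the slice over directly (zero-copy).
/// Multiple segments: gather into one contiguous intermediate buffer.
fn contiguous_view<'a>(segments: &'a [Segment<'a>]) -> Cow<'a, [u8]> {
    match segments {
        [only] => Cow::Borrowed(only.data),
        many => {
            let mut buf = Vec::with_capacity(many.iter().map(|s| s.data.len()).sum());
            for s in many {
                buf.extend_from_slice(s.data);
            }
            Cow::Owned(buf)
        }
    }
}

fn main() {
    let a = [1u8, 2, 3];
    let b = [4u8, 5];
    let single = [Segment { data: &a }];
    let split = [Segment { data: &a }, Segment { data: &b }];
    // One segment: borrowed, no copy happens.
    assert!(matches!(contiguous_view(&single), Cow::Borrowed(_)));
    // Multiple segments: an owned, freshly copied buffer.
    assert_eq!(&*contiguous_view(&split), &[1, 2, 3, 4, 5]);
    println!("ok");
}
```

So the zero-copy path would only apply when the front-end hands us a single-segment buffer; the multi-segment case keeps the intermediate copy.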
Currently, we automatically build a Firecracker binary for every pull request, but only for x86_64. We want to do the same for aarch64.
Firecracker is already built inside Docker containers. Upstream Firecracker ships Dockerfiles for both architectures, which we have modified to include vAccelRT as a dependency.
For x86_64 we have created our own image and changed the script used to build Firecracker.
What we need to do:
- Automate building the Docker container images for both x86_64 and aarch64. The workflow should trigger only on changes to the Dockerfiles.
- Automate building and publishing artifacts for every pull request against our supported versions (currently vaccel-v0.23) for both architectures. At the moment we only do this for x86_64.
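Assuming we use GitHub Actions (suggested by the mention of "actions" above), a trigger restricted to Dockerfile changes could look like the sketch below. The workflow name, file paths, and use of QEMU emulation for the aarch64 image are all assumptions, not our actual setup:

```yaml
# Hedged sketch: rebuild the build-container images only when a Dockerfile
# changes. Names and paths are placeholders.
name: devctr-images
on:
  push:
    paths:
      - 'Dockerfile.x86_64'
      - 'Dockerfile.aarch64'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3    # emulation for the aarch64 build
      - uses: docker/setup-buildx-action@v3
      - run: docker build -f Dockerfile.x86_64 .
      - run: docker buildx build --platform linux/arm64 -f Dockerfile.aarch64 .
```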
At the moment, we implement the back-end virtio-crypto driver as described in the virtio specification.
Ultimately, we want to try out the vaccel-virtio implementation. Since vaccel-virtio is based on virtio-crypto, our back-end driver should at least be able to keep the same virtqueue structure.
What changes is the actual requests we will receive from the front-end, and how these will be parsed and forwarded to the vAccel runtime.
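The parsing layer is the part that would be swapped out. As a sketch, here is how the fixed operation header at the start of a virtio-crypto data request could be decoded before dispatch; the field layout follows our reading of `struct virtio_crypto_op_header` in the virtio spec and should be double-checked, and the function name is ours:

```rust
// Hedged sketch: decode the 24-byte virtio-crypto operation header
// (opcode, algo, session_id, flags, padding, all little-endian) from a
// request buffer. This is the layer that changes for vaccel-virtio.

use std::convert::TryInto;

#[derive(Debug, PartialEq)]
struct OpHeader {
    opcode: u32,
    algo: u32,
    session_id: u64,
    flags: u32,
}

fn parse_op_header(buf: &[u8]) -> Option<OpHeader> {
    if buf.len() < 24 {
        return None;
    }
    let le32 = |off: usize| u32::from_le_bytes(buf[off..off + 4].try_into().unwrap());
    let le64 = |off: usize| u64::from_le_bytes(buf[off..off + 8].try_into().unwrap());
    Some(OpHeader {
        opcode: le32(0),
        algo: le32(4),
        session_id: le64(8),
        flags: le32(16),
        // bytes 20..24 are padding
    })
}

fn main() {
    let mut buf = [0u8; 24];
    buf[0..4].copy_from_slice(&2u32.to_le_bytes()); // arbitrary opcode value
    buf[8..16].copy_from_slice(&7u64.to_le_bytes()); // arbitrary session id
    let hdr = parse_op_header(&buf).unwrap();
    assert_eq!(hdr.opcode, 2);
    assert_eq!(hdr.session_id, 7);
    // Truncated buffers are rejected rather than parsed.
    assert!(parse_op_header(&buf[..10]).is_none());
    println!("ok");
}
```

Keeping the virtqueue handling untouched and routing on the decoded opcode is what would let the same back-end serve either plain virtio-crypto requests or vaccel-virtio ones.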