containerd / runwasi

Facilitates running Wasm / WASI workloads managed by containerd

License: Apache License 2.0

containerd kubernetes rust wasi wasm webassembly

runwasi's Introduction


containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.

containerd is a member of CNCF with 'graduated' status.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.

(architecture diagram)

Announcements

Now Recruiting

We are a large, inclusive OSS project that is welcoming help of any kind, shape, or form:

  • Documentation help is needed to make the product easier to consume and extend.
  • We need OSS community outreach/organizing help to get the word out; manage and create messaging and educational content; and help with social media, community forums/groups, and google groups.
  • We are actively inviting new security advisors to join the team.
  • New subprojects are being created, core and non-core that could use additional development help.
  • Each of the containerd projects has a list of issues currently being worked on or that need help resolving.
    • If the issue has not already been assigned to someone or has not made recent progress, and you are interested, please inquire.
    • If you are interested in starting with a smaller/beginner-level issue, look for issues with an exp/beginner tag, for example containerd/containerd beginner issues.

Getting Started

See our documentation on containerd.io.

To get started contributing to containerd, see CONTRIBUTING.

If you are interested in trying out containerd, see our example at Getting Started.

Nightly builds

There are nightly builds available for download here. Binaries are generated from the main branch every night for Linux and Windows.

Please be aware: nightly builds might have critical bugs; they are not recommended for production use, and no support is provided.

Kubernetes (k8s) CI Dashboard Group

The k8s CI dashboard group for containerd contains test results regarding the health of Kubernetes when run against main and a number of containerd release branches.

Runtime Requirements

Runtime requirements for containerd are very minimal. Most interactions with the Linux and Windows container feature sets are handled via runc and/or OS-specific libraries (e.g. hcsshim for Microsoft). The current required version of runc is described in RUNC.md.

There are specific features used by containerd core code and snapshotters that will require a minimum kernel version on Linux. With the understood caveat of distro kernel versioning, a reasonable starting point for Linux is a minimum 4.x kernel version.

The overlay filesystem snapshotter, used by default, uses features that were finalized in the 4.x kernel series. If you choose to use btrfs, there may be more flexibility in kernel version (minimum recommended is 3.18), but it will require the btrfs kernel module and btrfs tools to be installed on your Linux distribution.

To use Linux checkpoint and restore features, you will need criu installed on your system. See more details in Checkpoint and Restore.

Build requirements for developers are listed in BUILDING.

Supported Registries

Any registry which is compliant with the OCI Distribution Specification is supported by containerd.

For configuring registries, see registry host configuration documentation

Features

Client

containerd offers a full client package to help you integrate containerd into your platform.

import (
  "context"
  "log"

  containerd "github.com/containerd/containerd/v2/client"
  "github.com/containerd/containerd/v2/pkg/cio"
  "github.com/containerd/containerd/v2/pkg/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}

Namespaces

Namespaces allow multiple consumers to use the same containerd without conflicting with each other. They provide the benefit of sharing content while maintaining separation between containers and images.

To set a namespace for requests to the API:

ctx := context.Background()
// create a context for docker
docker := namespaces.WithNamespace(ctx, "docker")

container, err := client.NewContainer(docker, "id")

To set a default namespace on the client:

client, err := containerd.New(address, containerd.WithDefaultNamespace("docker"))

Distribution

// pull an image
image, err := client.Pull(context, "docker.io/library/redis:latest")

// push an image
err := client.Push(context, "docker.io/library/redis:latest", image.Target())

Containers

In containerd, a container is a metadata object. Resources such as an OCI runtime specification, image, root filesystem, and other metadata can be attached to a container.

redis, err := client.NewContainer(context, "redis-master")
defer redis.Delete(context)

OCI Runtime Specification

containerd fully supports the OCI runtime specification for running containers. We have built-in functions to help you generate runtime specifications based on images as well as custom parameters.

When creating a container, you can specify options for how to modify the specification.

redis, err := client.NewContainer(context, "redis-master", containerd.WithNewSpec(oci.WithImageConfig(image)))

Root Filesystems

containerd allows you to use overlay or snapshot filesystems with your containers. It comes with built-in support for overlayfs and btrfs.

// pull an image and unpack it into the configured snapshotter
image, err := client.Pull(context, "docker.io/library/redis:latest", containerd.WithPullUnpack)

// allocate a new RW root filesystem for a container based on the image
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)

// use a readonly filesystem with multiple containers
for i := 0; i < 10; i++ {
	id := fmt.Sprintf("id-%d", i)
	container, err := client.NewContainer(ctx, id,
		containerd.WithNewSnapshotView(id, image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
}

Tasks

Taking a container object and turning it into a runnable process on a system is done by creating a new Task from the container. A task represents the runnable object within containerd.

// create a new task
task, err := redis.NewTask(context, cio.NewCreator(cio.WithStdio))
defer task.Delete(context)

// the task is now running and has a pid that can be used to set up networking
// or other runtime settings outside of containerd
pid := task.Pid()

// start the redis-server process inside the container
err := task.Start(context)

// wait for the task to exit and get the exit status
status, err := task.Wait(context)

Checkpoint and Restore

If you have criu installed on your machine you can checkpoint and restore containers and their tasks. This allows you to clone and/or live migrate containers to other machines.

// checkpoint the task then push it to a registry
checkpoint, err := task.Checkpoint(context)

err := client.Push(context, "myregistry/checkpoints/redis:master", checkpoint)

// on a new machine pull the checkpoint and restore the redis container
checkpoint, err := client.Pull(context, "myregistry/checkpoints/redis:master")

redis, err = client.NewContainer(context, "redis-master", containerd.WithNewSnapshot("redis-rootfs", checkpoint))
defer redis.Delete(context)

task, err = redis.NewTask(context, cio.NewCreator(cio.WithStdio), containerd.WithTaskCheckpoint(checkpoint))
defer task.Delete(context)

err := task.Start(context)

Snapshot Plugins

In addition to the built-in Snapshot plugins in containerd, additional external plugins can be configured using GRPC. An external plugin is made available using the configured name and appears as a plugin alongside the built-in ones.

To add an external snapshot plugin, add the plugin to containerd's config file (by default at /etc/containerd/config.toml). The string following proxy_plugins. will be used as the name of the snapshotter, and the address should refer to a socket with a GRPC listener serving containerd's Snapshot GRPC API. Remember to restart containerd for any configuration changes to take effect.

[proxy_plugins]
  [proxy_plugins.customsnapshot]
    type = "snapshot"
    address =  "/var/run/mysnapshotter.sock"

See PLUGINS.md for how to create plugins

Releases and API Stability

Please see RELEASES.md for details on versioning and stability of containerd components.

Downloadable 64-bit Intel/AMD binaries of all official releases are available on our releases page.

For other architectures and distribution support, you will find that many Linux distributions package their own containerd and provide it across several architectures, such as Canonical's Ubuntu packaging.

Enabling command auto-completion

Starting with containerd 1.4, the urfave client feature for auto-creation of bash and zsh autocompletion data is enabled. To use the autocomplete feature in a bash shell for example, source the autocomplete/ctr file in your .bashrc, or manually like:

$ source ./contrib/autocomplete/ctr

Distribution of ctr autocomplete for bash and zsh

For bash, copy the contrib/autocomplete/ctr script into /etc/bash_completion.d/ and rename it to ctr. The zsh_autocomplete file is also available and can be used similarly for zsh users.

Provide documentation to users to source this file into their shell if you don't place the autocomplete file in a location where it is automatically loaded for the user's shell environment.

CRI

cri is a containerd plugin implementation of the Kubernetes container runtime interface (CRI). With it, you are able to use containerd as the container runtime for a Kubernetes cluster.


CRI Status

cri is a native plugin of containerd. Since containerd 1.1, the cri plugin is built into the release binaries and enabled by default.

The cri plugin has reached GA status.

See results on the containerd k8s test dashboard

Validating Your cri Setup

A Kubernetes incubator project, cri-tools, includes programs for exercising CRI implementations. More importantly, cri-tools includes the program critest which is used for running CRI Validation Testing.

CRI Guides

Communication

For async communication and long-running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.

For sync communication catch us in the #containerd and #containerd-dev Slack channels on Cloud Native Computing Foundation's (CNCF) Slack - cloud-native.slack.com. Everyone is welcome to join and chat. Get Invite to CNCF Slack.

Security audit

Security audits for the containerd project are hosted on our website. Please see the security page at containerd.io for more information.

Reporting security issues

Please follow the instructions at containerd/project

Licenses

The containerd codebase is released under the Apache 2.0 license. The README.md file and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

Project details

containerd is the primary open source project within the broader containerd GitHub organization. However, all projects within the repo have common maintainership, governance, and contributing guidelines which are stored in a project repository commonly for all containerd projects.

Please find all of these core project documents in our containerd/project repository.

Adoption

Interested in seeing who is using containerd? Are you using containerd in a project? Please add yourself via pull request to our ADOPTERS.md file.

runwasi's People

Contributors

0xe282b0, bokuweb, brendanburns, captainvincent, cpuguy83, danbugs, defims, denis2glez, dependabot[bot], devigned, dierbei, iceber, ipuustin, jprendes, jsturtevant, kate-goldenring, lengrongfu, mossaka, rumpl, sachaos, utam0k, vyta, yihuaf


runwasi's Issues

CI might be broken after #187

Seems like CI might be broken after #187:

   Dirty wasmedge-sys v0.15.0: the file `/usr/lib/llvm-14/lib/clang/14.0.0/include/stdbool.h` is missing
   Compiling wasmedge-sys v0.15.0
     Running `/home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build`
error: failed to run custom build command for `wasmedge-sys v0.15.0`
note: To improve backtraces for build dependencies, set the CARGO_PROFILE_DEV_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.

Caused by:
  process didn't exit successfully: `/home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build` (exit status: 1)
  --- stderr
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
warning: build failed, waiting for other jobs to finish...

Originally posted by @jsturtevant in #192 (comment)

wasmedge shim fails with some entry points

The following commands fail:

docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 secondstate/rust-example-hello:latest
docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint hello.wasm secondstate/rust-example-hello:latest

while the following commands succeed:

docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint /hello.wasm secondstate/rust-example-hello:latest
docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint ./hello.wasm secondstate/rust-example-hello:latest

The secondstate/rust-example-hello:latest image specifies its entrypoint as ["hello.wasm"].

The same 4 commands succeed with the io.containerd.wasmtime.v1 runtime.

Distribute using OCI artifacts / allow shim to pull artifact

This is a feature request to allow distribution of apps targeting runwasi using OCI artifacts, and not as container images.

Currently, all applications targeting runwasi need to be distributed as a container image, with the structure carefully constructed. This is not an ideal long-term solution for a number of reasons, deduplication of Wasm files and static assets being one of them, particularly in the context of bytecodealliance/registry#87 and https://hackmd.io/50rfwV6BTJWN8VZBhdAN_g.

Ideally, pulling the artifact could be done by the actual shim implementation — and implementations could continue to default to wrapping the Wasm app in a container image; however, some shim implementations would benefit greatly from using their existing mechanism of distributing apps using OCI artifacts (see https://developer.fermyon.com/spin/distributing-apps.md#publishing-a-spin-application-to-a-registry).

Thoughts?

cc @rumpl, @squillace, @Mossaka, @devigned.

WasmEdge can run only 254 instances?

If I add a unit test which creates, runs, waits on, and deletes a WasmEdge instance 300 times, the test fails on the 255th run:

...
...
...
Running test iteration 253
Running test iteration 254
Running test iteration 255
Error: Any(failed to create container

Caused by:
    0: failed to wait for init ready
    1: failed to wait for init ready
    2: channel connection broken)


failures:
    instance::wasitest::test_wasi_300

It feels to me that we are leaking some resource (such as file descriptors), but I don't know if the problem is in WasmEdge, libcontainer or runwasi (or in the test setup :-). This is the test:

diff --git a/crates/containerd-shim-wasmedge/src/instance.rs b/crates/containerd-shim-wasmedge/src/instance.rs
index 87d0148..31a8e67 100644
--- a/crates/containerd-shim-wasmedge/src/instance.rs
+++ b/crates/containerd-shim-wasmedge/src/instance.rs
@@ -472,6 +472,32 @@ mod wasitest {
         Ok(())
     }

+    #[test]
+    #[serial]
+    fn test_wasi_300() -> Result<(), Error> {
+        if !has_cap_sys_admin() {
+            println!("running test with sudo: {}", function!());
+            return run_test_with_sudo(function!());
+        }
+
+        for i in 1..300 {
+            println!("Running test iteration {}", i);
+
+            let wasmbytes = wat2wasm(WASI_HELLO_WAT).unwrap();
+            let dir = tempdir()?;
+            let path = dir.path();
+            let res = run_wasi_test(&dir, wasmbytes)?;
+
+            assert_eq!(res.0, 0);
+
+            let output = read_to_string(path.join("stdout"))?;
+            assert_eq!(output, "hello world\n");
+
+            reset_stdio();
+        }
+        Ok(())
+    }
+
     #[test]
     #[serial]
     fn test_wasi_error() -> Result<(), Error> {

No runtime for "wasm" is configured

I have configured containerd and set the shim binary at the correct path, but when I do kubectl apply -f test/k8s/deploy.yaml I get:

wasi-demo-75d6bb666c-479km   0/1     ContainerCreating   0          9s

deploy.yaml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmedge"
handler: "wasmedge"

and when I describe it:

➜  runwasi git:(main) ✗ kubectl describe pod wasi-demo-75d6bb666c-479km         
Name:                wasi-demo-75d6bb666c-479km
Namespace:           default
Priority:            0
Runtime Class Name:  wasmedge
Service Account:     default
Node:                minikube/192.168.49.2
Start Time:          Tue, 11 Jul 2023 19:55:50 +0800
Labels:              app=wasi-demo
                     pod-template-hash=75d6bb666c
Annotations:         <none>
Status:              Pending
IP:                  
IPs:                 <none>
Controlled By:       ReplicaSet/wasi-demo-75d6bb666c
Containers:
  demo:
    Container ID:   
    Image:          ghcr.io/containerd/runwasi/wasi-demo-app:latest
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fb764 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-fb764:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age               From               Message
  ----     ------                  ----              ----               -------
  Normal   Scheduled               18s               default-scheduler  Successfully assigned default/wasi-demo-75d6bb666c-479km to minikube
  Warning  FailedCreatePodSandBox  4s (x2 over 18s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "wasmedge" is configured

Here is my /etc/containerd/config.toml:

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
          runtime_type = "io.containerd.wasmedge.v1"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge.options]
            BinaryName = "/usr/bin/containerd-shim-wasmedge-v1"

I have copied the shim binary to /bin, /usr/bin, and /usr/local/bin, but it still does not work.

My cluster was started by minikube; here is the node description:

➜  runwasi git:(main) ✗ kubectl describe node minikube | grep Container
  Container Runtime Version:  containerd://1.6.20

So, which step went wrong? I am confused.

Document full set up with k3s

There is a large amount of assumed knowledge and setup in the current instructions, so it would be useful to have documentation of a full run-through of setup and usage with k3s.

I'm working on getting this running in my lab using k3s. If I get it all working, I can write up the commands I used.

Move wasi impl to separate crate

The repo has a few binaries and a wasi implementation that is fairly tied to wasmtime.
#15 makes the core library runtime agnostic, meaning it does not depend on wasmtime.

In order to completely remove wasmtime as a dependency from the core library, it may be useful to move the binaries along with the Wasi instance implementation into a separate crate (of course both crates can be in this repo).
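
A minimal sketch of what that split could look like (all names and signatures here are hypothetical illustrations, not runwasi's actual API): the core crate defines only a runtime-agnostic instance trait, and a separate crate carries the wasmtime-backed implementation, so only that crate and its shim binary depend on wasmtime.

// Hypothetical core crate: no wasmtime dependency, just the trait.
pub trait Instance {
    /// Start the instance and return its pid.
    fn start(&self) -> anyhow::Result<u32>;
    /// Send a signal to the instance.
    fn kill(&self, signal: u32) -> anyhow::Result<()>;
}

// Hypothetical separate crate: the wasmtime-specific implementation.
pub struct WasmtimeInstance {
    pub module_path: std::path::PathBuf,
}

impl Instance for WasmtimeInstance {
    fn start(&self) -> anyhow::Result<u32> {
        // load and run self.module_path with wasmtime here
        todo!()
    }
    fn kill(&self, _signal: u32) -> anyhow::Result<()> {
        todo!()
    }
}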

Shim cannot connect to runtime daemon?

Hi, I'm playing with runwasi in kind by adapting the integration test Dockerfile. I see that the wasmtime shim works for running the docker.io/wasmedge/example-wasi:latest test image, but I cannot run the same workload when using a node image that configures daemon mode. Is there something else that I need to do to get daemon mode working?

Here's the error I see (both wasmedge and wasmtime fail in the same way):

Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               15s   default-scheduler  Successfully assigned default/wasi-job-demo-wm4cj to kind-worker
  Warning  FailedCreatePodSandBox  14s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to start shim: start failed: containerd-shim-wasmedged-v1: Ttrpc(RpcStatus(Status { code: NOT_FOUND, message: "/runwasi.services.sandbox.v1.Manager/Connect is not supported", details: [], special_fields: SpecialFields { unknown_fields: UnknownFields { fields: None }, cached_size: CachedSize { size: 0 } } }))
: exit status 1: unknown

I configured the daemon as a part of the containerd systemd service and do see that it is running, and the unix socket is present as well:

root@kind-worker:/# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 20:22 ?        00:00:00 /sbin/init
root          79       1  0 20:22 ?        00:00:00 /lib/systemd/systemd-journald
message+      90       1  0 20:22 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root         113       1  0 20:22 ?        00:00:00 /usr/local/bin/containerd-wasmedged
root         117       1  1 20:22 ?        00:00:05 /usr/local/bin/containerd
root         201       1  1 20:23 ?        00:00:06 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd
root         254       1  0 20:23 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a63b62567b06b0cd4d17f8c3ba7b870bb9f98d86df803216f26a9df57c88a327 -address /run/containerd/containerd.sock
root         255       1  0 20:23 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10ae9a17d1bbe7a0098adb1e27fc296cfe0eaafacf26ba83fc71472aad92cef0 -address /run/containerd/containerd.sock
65535        295     255  0 20:23 ?        00:00:00 /pause
65535        297     254  0 20:23 ?        00:00:00 /pause
root         362     255  0 20:23 ?        00:00:00 /bin/kindnetd
root         387     254  0 20:23 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-worker
root@kind-worker:/# ls -l /var/run/io.containerd.wasmwasi.v1 
total 0
srwxr-xr-x 1 root root 0 Jul  4 20:22 manager.sock

journalctl -u wasmedged.service shows nothing interesting.

containerd config:

root@kind-worker:/# more /etc/containerd/config.toml 
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.7"
    tolerate_missing_hugepages_controller = true
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      discard_unpacked_layers = true
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = "/etc/containerd/cri-base.json"
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
          base_runtime_spec = "/etc/containerd/cri-base.json"
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler.options]
            SystemdCgroup = true
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasm]
          runtime_type = "io.containerd.wasmedged.v1"
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"

[proxy_plugins]
  [proxy_plugins.fuse-overlayfs]
    address = "/run/containerd-fuse-overlayfs.sock"
    type = "snapshot"

k3s kubectl logs returns empty logs

Running directly with ctr gets the right stdout logs:

sudo k3s ctr image import --all-platforms target/wasm32-wasi/debug/img.tar # img.tar is built from: cd crates/wasi-demo-app && cargo build && cargo build --features oci-v1-tar && cd ../../
sudo k3s ctr run --rm --runtime=io.containerd.wasmtime.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest wasi-demo-app # needs containerd-shim-wasmtime-v1, available by running make && make install

(screenshot of the expected stdout omitted)

but when I run it with kubectl, I get empty pod logs:

sudo k3s kubectl apply -f wasm.yml # requires containerd configured with the config file below
sudo k3s kubectl get pods
sudo k3s kubectl logs wasi-demo-xxx

(screenshot of the empty logs omitted)

the wasm.yml is:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasi-demo
  labels:
    app: wasi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasi-demo
  template:
    metadata:
      labels:
        app: wasi-demo
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: demo
        image: ghcr.io/containerd/runwasi/wasi-demo-app:latest
        imagePullPolicy: Never

/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl is:

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"


[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

containerd shims are in /usr/local/bin

Use youki libcontainer crate for all shims

The work in #78 enabled youki's libcontainer for the WasmEdge shim.
Issue #110 intends to repeat the same work for the wasmtime shim.
During #78, @rumpl's comment mentioned that we could enable youki's libcontainer at a lower level in runwasi, so that all shims can benefit from it.
Any thoughts?

tar.gz files included in releases are empty

It looks like the tar.gz assets included in releases are empty. I tried to look at why this might be, but I'm not familiar with the GitHub Actions syntax and it's hard to test locally. I resorted to building from source, but I assume these were meant to have the pre-built binaries in them?

Enable container + Wasm workloads running within the same pod

Currently, runwasi only knows how to run Wasm workloads. K8s users often want to run sidecars for service meshes and other traditional container injections within the same pod.

It would empower a more idiomatic K8s experience if runwasi was able to run both Wasm workloads and traditional container workloads within the same pod.

Failed to build when compiling wasmedge-sys v0.12.2

Description

When I run make in the root directory, it fails at the step compiling wasmedge-sys v0.12.2 with the following error:

The following warnings were emitted during compilation:

warning: [wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

error: failed to run custom build command for `wasmedge-sys v0.12.2`

Caused by:
  process didn't exit successfully: `/home/vagrant/runwasi/target/debug/build/wasmedge-sys-1a654059db2210f1/build-script-build` (exit status: 101)
  --- stdout
  cargo:warning=[wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

  --- stderr
  thread 'main' panicked at '[wasmedge-sys] Failed to locate the required header and/or library file. Please reference the link: https://wasmedge.org/book/en/embed/rust.html', /home/vagrant/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmedge-sys-0.12.2/build.rs:30:25
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:19: build] Error 101

Did I miss something?

Expected

Build successfully.

Environment

Ubuntu 20.04.6 LTS

Benchmarking

This issue serves as a place for discussing various ways and ideas to benchmark runwasi and the Wasm shims, as proposed by @ipuustin.

  • One idea is that we can write a simple Wasm program (Fibonacci) and execute it in runwasi, alongside a native program executing in runc (see the sketch after this list). This provides a base benchmark comparing the performance of a WASI program vs. a native runc process. It is not meant to benchmark the performance of WASI in general.
  • Having the base benchmark set, we can observe the performance difference for each version increment. For example, we can observe how much speed increases or decreases between version 0.2 and 0.3.
  • Another benchmarking idea is testing how "densely" Wasm pods can be packed on a node. It is often advertised that Wasm modules can increase CPU utilization and thus increase the density of running pods per node. We can verify this point by pushing the containerd runtime to the extreme, running thousands of pods at the same time.
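
As a concrete starting point for the first idea, here is a minimal sketch (an illustration, not an agreed-on benchmark) of a naive Fibonacci program that can be compiled both natively for the runc baseline and to wasm32-wasi for the runwasi run:

// Naive recursive Fibonacci: deliberately CPU-bound so the comparison
// measures execution overhead rather than I/O.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    // build with `cargo build --release` for the native runc baseline and
    // `cargo build --release --target wasm32-wasi` for the runwasi run
    let n: u64 = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(30);
    println!("fib({}) = {}", n, fib(n));
}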

Feel free to add ideas and thoughts on this topic! Any suggestion is welcome 🙏

Windows support

Currently we cannot use this project on Windows. This is an umbrella issue to track Windows support. There may be more items added as the work progresses.

The project originally didn't build on Windows (failing output below); it builds since #238 🎉

 cargo build
   Compiling zstd-safe v5.0.2+zstd.1.5.2
   Compiling zstd-sys v2.0.4+zstd.1.5.2
   Compiling ittapi-sys v0.3.2
   Compiling wasmtime-runtime v2.0.2
   Compiling cranelift-wasm v0.89.2
   Compiling psm v0.1.21
   Compiling uapi v0.2.10
   Compiling uapi-proc v0.0.5
   Compiling num-traits v0.2.15
   Compiling ttrpc v0.6.1
   Compiling ittapi v0.3.2
   Compiling cap-fs-ext v0.26.1
error[E0425]: cannot find function `geteuid` in crate `libc`
   --> C:\Users\jstur\.cargo\registry\src\github.com-1ecc6299db9ec823\uapi-proc-0.0.5\src\lib.rs:16:34
    |
16  |             root: unsafe { libc::geteuid() == 0 },
    |                                  ^^^^^^^ help: a function with a similar name exists: `getpid`

Some errors have detailed explanations: E0412, E0422, E0425, E0432, E0433.
For more information about an error, try `rustc --explain E0412`.
error: could not compile `ttrpc` due to 58 previous errors
error: failed to run custom build command for `uapi v0.2.10`

At a minimum the following tasks need to be completed:

Update rust-extensions project for the next release

PR #81 upgrades TTRPC to use protobuf 3.x to unblock some work. This is a tracking issue so we can update once we get a new release of the https://github.com/containerd/rust-extensions project.

          > This is taking much longer than anticipated, things are moving along but slowly. I am finding it difficult to maintain this patch and work on the Windows implementation with the various changes going in for the youki work.

Any thoughts on pinning to a rev vs a release so we can start bumping the other dependencies and moving forward on protobuf 3?

I'm good with pinning to a rev as long as we have an issue to track the follow up work. What do others think?

Originally posted by @devigned in #81 (comment)

Cargo test fails with test_cgroup

It does not fail on every machine.

I have machines where it succeeds and machines where it fails, but I am not sure what kind of information I could provide. I could help test in my environment if you have any ideas.

So far I know that if I sudo mkdir under /sys/fs/cgroup/memory, the generated folder contents differ between the succeeding and failing machines.

Run

cargo test --all test_cgroup --verbose

And I get these error logs:

---- sandbox::cgroups::tests::test_cgroup stdout ----
Error: Others("failed to apply cgroup: could not open cgroup file /sys/fs/cgroup/relative/nested/containerd-wasm-shim-test_cgroup/memory.max: No such file or directory (os error 2)")

---- sandbox::cgroups::tests::test_cgroup stdout ----
running test with sudo: sandbox::cgroups::tests::test_cgroup
Error: Stdio(Kind(Other))

make build encountered an error

I tried to run the make build command and got the following error:

(screenshot of the error omitted)

I think this error may have occurred when installing wasmedge.

I tried the official command, curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- -e all -v 0.12.1, and then make build executed successfully.

Now it looks like make bin/wasmedge doesn't install the required dependencies.

docker run with wasmedge shim stops working #81

Issue Highlight

  • The command below fails due to c2c2d0d98237e297e72cbcf9c695a4a215a39bad (#81):
docker run --rm --runtime=io.containerd.wasmedge.v1 --platform wasi/wasm jorgeprendes420/wasmtest echo 'hello'

The failing commit needs to run with WasmEdge 0.12.1.

  • The commands below fail due to ca5260fd34635ca56d81db3f59ee8672fc7fde68 (#78):
git clone https://github.com/WasmEdge/wasmedge_hyper_demo.git
cd wasmedge_hyper_demo
docker compose up client

The failing commit needs to run with the older WasmEdge 0.11.2.
[Update]
The shim now resolves the entrypoint path using normal POSIX executable resolution. The new usage restriction is that the ENTRYPOINT must start with /; adding a leading / to the path resolves the second failure.
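
The described behavior can be illustrated with a sketch of POSIX-style resolution (an illustration only, not runwasi's actual code): a command containing a / is resolved as a path inside the rootfs, while a bare name falls back to a $PATH-style search, which an image entrypoint of ["hello.wasm"] typically does not satisfy.

use std::path::{Path, PathBuf};

// Illustrative sketch of POSIX-style entrypoint resolution.
fn resolve_entrypoint(cmd: &str, rootfs: &Path) -> Option<PathBuf> {
    if cmd.contains('/') {
        // "/hello.wasm" and "./hello.wasm" are resolved as paths in the rootfs
        let p = rootfs.join(cmd.trim_start_matches('/'));
        return p.exists().then_some(p);
    }
    // a bare "hello.wasm" requires a $PATH-style lookup instead
    std::env::var("PATH")
        .ok()?
        .split(':')
        .map(|dir| Path::new(dir).join(cmd))
        .find(|p| p.exists())
}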

Motivation

Due to the plan of deprecating the fork under Second State, I need to conduct an evaluation of the current functionality of containerd/runwasi before proceeding. I have discovered that using the wasmedge runtime with Docker has been broken for quite some time (while all testing with ctr works fine). Here, I am providing a document that includes the codebase and process I have been using to test docker run.

Instance.wait() call semantics are weird

The Instance trait has a wait function which is used to wait for the instance to exit.
Due to issues with threading and lifetimes, the call currently takes a channel sender.

Ideally any sort of async behavior should be handled by the caller (e.g. wrap it in a thread and handle the channels at the call site rather than expecting the implementation to).
Lifetimes of the type parameters on Instance make this a little more problematic and I am currently hesitant to use a 'static lifetime (required by thread::spawn).
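
To make the current semantics concrete, here is a minimal runnable sketch (the signatures are simplified assumptions, not the real trait): the implementation is handed a channel sender and reports the exit status through it, which is the plumbing the issue argues should live at the call site instead.

use std::sync::mpsc::{channel, Sender};

// Simplified stand-in for the trait: wait() takes a sender and delivers
// (exit_code, exit_timestamp) when the instance exits.
trait Instance {
    fn wait(&self, sender: Sender<(u32, u64)>);
}

struct Noop;

impl Instance for Noop {
    fn wait(&self, sender: Sender<(u32, u64)>) {
        // a real shim would block or poll on the child process here
        let _ = sender.send((0, 0));
    }
}

fn main() {
    // the caller creates the channel and decides how to handle async behavior
    let (tx, rx) = channel();
    let instance = Noop;
    instance.wait(tx);
    let (code, _ts) = rx.recv().unwrap();
    println!("exited with status {}", code);
}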

Update wasmtime deps to latest version (6.x)

I tried to update the wasmtime dep to the latest and found that runwasmedge is blocking the upgrade due to a dep on wasmtime-fiber:

[ERROR rust_analyzer::lsp_utils] rust-analyzer failed to load workspace: Failed to read Cargo metadata from Cargo.toml file /home/jstur/projects/runwasi/Cargo.toml, Some(Version { major: 1, minor: 67, patch: 1 }): Failed to run `"cargo" "metadata" "--format-version" "1" "--features" "generate_bindings" "--manifest-path" "/home/jstur/projects/runwasi/Cargo.toml" "--filter-platform" "x86_64-unknown-linux-gnu"`: `cargo metadata` exited with an error:     Blocking waiting for file lock on package cache
    Updating crates.io index
error: failed to select a version for `wasmtime-fiber`.
    ... required by package `runwasmedge v0.1.0 (/home/jstur/projects/runwasi/crates/wasmedge)`
versions that meet the requirements `^2.0` are: 2.0.2, 2.0.1, 2.0.0

the package `wasmtime-fiber` links to the native library `wasmtime-fiber-shims`, but it conflicts with a previous package which links to `wasmtime-fiber-shims` as well:
package `wasmtime-fiber v6.0.0`
    ... which satisfies dependency `wasmtime-fiber = "=6.0.0"` of package `wasmtime v6.0.0`
    ... which satisfies dependency `wasmtime = "^6.0"` of package `runwasmtime v0.1.0 (/home/jstur/projects/runwasi/crates/wasmtime)`
Only one package in the dependency graph may specify the same links value. This helps ensure that only one copy of a native library is linked in the final binary. Try to adjust your dependencies so that only one package uses the links ='wasmtime-fiber' value. For more information, see https://doc.rust-lang.org/cargo/reference/resolver.html#links.

failed to select a version for `wasmtime-fiber` which could resolve this conflict

It seems that since each of these shims is a separate binary, we should be able to have different dependencies, but the way it is currently set up doesn't allow for this.

I think we either need wasmedge to update its dep at https://github.com/WasmEdge/WasmEdge/blob/e27198bf674c18989111c2075758c2ee147556fe/bindings/rust/wasmedge-sys/Cargo.toml#L24 or we need to re-organize the packages to allow the binaries to link files separately (not 100% sure how to do this).

How to debug the wasm container

ctr run works well:

sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 docker.io/library/wasmtest:latest testwasm /wasm echo 'hello'

but kubectl apply with the YAML below fails. I don't know how to debug the wasm container (built from scratch), for example how to view the container logs.

Config added to containerd-template.toml:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
      runtime_type = "io.containerd.wasmtime.v1"

k8s.yml:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wasmtest
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wasmtest
            port:
              number: 3000
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
kind: Service
apiVersion: v1
metadata:
  name: wasmtest
  labels:
    name: wasmtest
spec:
  ports:
  - name: wasmtest3000
    protocol: TCP
    port: 3000
  selector:
    app: wasmtest
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wasmtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasmtest
  template:
    metadata:
      labels:
        app: wasmtest
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: wasmtest
        image: docker.io/library/wasmtest:latest
        imagePullPolicy: Never 
        ports:
        - containerPort: 3000
microk8s kubectl apply -f k8s.yml

License?

I don't see a license file. Will this get an open-source license?

Add troubleshooting guide

Currently there is no troubleshooting guide. People might find it hard to follow the README to produce a hello world example.

Known issues include, but are not limited to:

  1. containerd currently only supports Linux, so in order to build runwasi you either need a Linux machine or need to run it in WSL on Windows
  2. docker buildx is a dependency
  3. make load is broken

Others("Device or resource busy (os error 16)"): unknown

After building and running the demo example, I got the following error:

sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 docker.io/library/wasmtest:latest testwasm
ctr: Others("Device or resource busy (os error 16)"): unknown

investigation

The task is marked as CREATED:

sudo ctr task ls
TASK          PID    STATUS
testwasm13    0      CREATED

but get the following error when trying to delete it:

sudo ctr task rm testwasm13
ERRO[0000] unable to delete testwasm13                   error="task must be stopped before deletion: created: failed precondition"
ctr: task must be stopped before deletion: created: failed precondition

Also get a slightly different error when trying to "stop" it:

sudo ctr task kill -s SIGKILL testwasm13
ctr: cannot kill non-running container, current state: Exited(TaskState { s: PhantomData }): failed precondition

other info

There seem to be two issues:

  • the shim and containerd are out of sync on the state of the task; this leads to not being able to clean up the task/container
  • There is an unhandled exception in
    let res = unsafe { exec::fork(Some(cg.as_ref())) }?;
    This causes the Device or resource busy (os error 16)

versions

containerd version: containerd containerd.io 1.6.7 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
shim version: built from main (e266bbb)
linux version:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy

It does work on my WSL instance:

containerd containerd.io 1.6.6 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1    

 lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

What paths to expose to the runtime

          Note that here https://github.com/ipuustin/runwasi/commit/7e72c3ca10a0454d4e220aa35db26f710fb03a17#diff-36f92d3bdc22f6005e8d13ff459d9d135364b34748de944da89785ecaa8d9e0aR58 is how we could enable rootfs file access for the WasmEdge shim too. We need to have a discussion about what we want to do about it -- should we only expose /dev and /proc files to the runtime, or to the container too?

Originally posted by @ipuustin in #142 (comment)

MacOS support

I get error[E0425]: cannot find function `prctl` in crate `libc`, which suggests Linux-only.

CI is broken

It looks like the CI is broken and emits the following error message. See this:

 --> crates/thirdparty/src/lib.rs:2:5
  |
2 | use oci_spec::runtime::Mount;
  |     ^^^^^^^^

wasmedge echo hangs on second execution

To repro run the following:

sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'
hello
exiting
sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'
hello
exiting
# hangs here

runwasi logo idea

I don't think the project has a logo, so I'm proposing the following.

I'm excited about the project, but I'm not a developer, so this is me trying to contribute. The idea is running WASI :-D

(proposed logo image)

I won't lose any sleep if nobody likes it or is too busy to care right now.

Full Linux OCI runtime spec support

Right now we have only partial support for the OCI runtime spec.
While some things in the spec may not make sense for running wasm code itself, the spec is useful for sandboxing the wasm runtime and/or the execution of the wasm for defense in depth, as well as ensuring fewer surprises for users expecting their settings to actually apply.

Some things missing:

build pipeline is failing

build pipeline: https://github.com/containerd/runwasi/actions/runs/5590373071/jobs/10220012269?pr=182

Error message:

/home/runner/.cargo/bin/cargo build --all --verbose
    Updating crates.io index
    Updating git repository `https://github.com/containerd/rust-extensions`
 Downloading crates ...
error: failed to download from `https://crates.io/api/v1/crates/cap-fs-ext/1.0.15/download`

Caused by:
  failed to get successful HTTP response from `https://crates.io/api/v1/crates/cap-fs-ext/1.0.15/download` (108.138.64.48), got 421
  debug headers:
  x-amz-cf-pop: IAD12-P1
  x-cache: Error from cloudfront
  x-amz-cf-pop: IAD12-P1
  x-amz-cf-id: JdJaQWKSmms_fHEe1k7N9PQila718nePI_C1gpVEr8lKk-wFNLXVPw==
  body:
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
  <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
  <TITLE>ERROR: The request could not be satisfied</TITLE>
  </HEAD><BODY>
  <H1>421 ERROR</H1>
  <H2>The request could not be satisfied.</H2>
  <HR noshade size="1px">
  The distribution does not match the certificate for which the HTTPS connection was established with.
  We can't connect to the server for this app or website at this …

Prevent wasm functions from being able to access their source.

TL;DR: Wasm can currently inspect itself, and it probably shouldn't be able to. A way to prevent this, either by format/tooling or by config, may be a good idea.


Opening here despite the possibility of a better repo; I also considered opening this on crun. Feel free to punt me to a better place.

It seems the ENTRYPOINT is a marker to identify the %.wasm file which is possibly amongst other files in rootFS layers. Code like below shows the guest must be inside the rootFS. In other words the rootFS is mounted, and the same source includes the wasm.

let mod_path = oci::get_root(spec).join(cmd);

I think this is convenient as it allows re-use of tools, but it would be surprising from a black-box perspective or compared to how normal wasm runtimes work. Normally the wasm source is specified independently of any filesystem mounts, and it would be surprising or a mistake for someone to mount their wasm in a place where functions can accidentally or otherwise inspect it.

In other words, if I had to guess, someone thought about using an existing wasm layer type or a custom one (remember wasm is a single file, so it has no benefit of layers), but that would require changes to the Dockerfile or its successors, and said, nah. Maybe? I really don't know why these choices were made, but it seems reasonable if the goal was to get building with the existing ecosystem.

This said, I think there are a lot of things that will take time to correct. I think a way to not leak the source wasm is worth asking for, either as a runtime-specific feature (here) or in some spec (no idea where).

Copying some people who may have thoughts and would act differently perhaps based on outcome,

  • @assambar - VMware runtimes, a builder of the only OCI container I can find published with multiple layers python-wasm
  • @knqyf263 - Trivy, which uses OCI for wasm extensions, but doesn't do it with rootFS layers (rather a wasm one). However, it is a CLI not a service so maybe less concern about this.
  • @giuseppe - crun basically who my colleague @evacchi seems to ping on any low-level container nuance ;)

I intentionally spammed only 3, so yeah feedback welcome regardless of from whom. I think we should have a clear rationale, even if reverse engineered, on this one.

thiserror and anyhow

This project uses two libraries for error handling; maybe we can choose only one and remove the other? I'm not sure why both are needed. If I had to choose, I would keep anyhow; I like its context function. WDYT?
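
For context, the usual division of labor between the two (a generic illustration, not code from this repo) is thiserror for typed errors that library callers can match on, and anyhow for adding context while propagating errors in binaries:

use anyhow::{Context, Result};
use thiserror::Error;

// library-style error: callers can match on the variants
#[derive(Debug, Error)]
enum ShimError {
    #[error("invalid module path: {0}")]
    InvalidPath(String),
}

fn validate(path: &str) -> std::result::Result<(), ShimError> {
    if path.is_empty() {
        return Err(ShimError::InvalidPath(path.to_string()));
    }
    Ok(())
}

// binary-style handling: anyhow wraps any error type and adds context
fn load_module(path: &str) -> Result<Vec<u8>> {
    validate(path)?;
    std::fs::read(path).with_context(|| format!("failed to read wasm module at {}", path))
}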

failed to start shim: start failed: terminate called after throwing an instance of 'std::out_of_range'

run the following:
ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'

And get errors:
ctr: failed to start shim: start failed: terminate called after throwing an instance of 'std::out_of_range' what(): bitset::reset: __position (which is 1) >= _Nb (which is 1) : signal: aborted (core dumped): unknown

OS:
root@VM-0-10-ubuntu:~# cat /etc/issue
Ubuntu 20.04.6 LTS \n \l

Use argument --tcplisten

How do I pass arguments to wasmtime through the shim? I want to use --tcplisten to listen for TCP connections.

I'm trying this command.

ubuntu@wasi:~$ sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 docker.io/martinlinkhorst/wasi:latest wasi10
info: Microsoft.Hosting.Lifetime
      Now listening on: http://localhost:5000
Fatal: TCP accept failed with errno 8. This may mean the host isn't listening for connections. Be sure to pass the --tcplisten parameter.

Cloning the repository and using `make` fails

Doing a fresh clone of the repository and running make fails with:

 make
cargo build
    Updating crates.io index
   Compiling containerd-shim-wasm v0.1.0 (/home/jstur/projects/runwasi/crates/containerd-shim-wasm)
   Compiling lock_api v0.4.9
   Compiling parking_lot_core v0.9.6
   Compiling wasmedge-sys v0.12.2
   Compiling wasmedge-types v0.3.1
   Compiling oci-tar-builder v0.1.0 (/home/jstur/projects/runwasi/crates/oci-tar-builder)
The following warnings were emitted during compilation:

warning: [wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

error: failed to run custom build command for `wasmedge-sys v0.12.2`

Caused by:
  process didn't exit successfully: `/home/jstur/projects/runwasi/target/debug/build/wasmedge-sys-380b7131222322c7/build-script-build` (exit status: 101)
  --- stdout
  cargo:warning=[wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

  --- stderr
  thread 'main' panicked at '[wasmedge-sys] Failed to locate the required header and/or library file. Please reference the link: https://wasmedge.org/book/en/embed/rust.html', /home/jstur/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmedge-sys-0.12.2/build.rs:30:25
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:19: build] Error 101

Should wasmedge be behind a feature flag in the crate or should we add something to the makefile to install the correct dependencies?

child process gets killed by SIGKILL after using cgroup v1 API

Containerd Log

time="2022-12-14T09:30:34.243727563Z" level=info msg="CreateContainer within sandbox \"f450952a49060dbe6756fb3638705b7c66404b38b0877741ac319e4edcb825f9\" for container &ContainerMetadata{Name:traefik,Attempt:0,}"
time="2022-12-14T09:30:34.280628605Z" level=info msg="CreateContainer within sandbox \"f450952a49060dbe6756fb3638705b7c66404b38b0877741ac319e4edcb825f9\" for &ContainerMetadata{Name:traefik,Attempt:0,} returns container id \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\""
time="2022-12-14T09:30:34.281158713Z" level=info msg="StartContainer for \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\""
time="2022-12-14T09:30:34.350862636Z" level=info msg="StartContainer for \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\" returns successfully"
time="2022-12-14T09:30:41.365630407Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for container &ContainerMetadata{Name:testwasm,Attempt:2,}"
time="2022-12-14T09:30:41.417023160Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for &ContainerMetadata{Name:testwasm,Attempt:2,} returns container id \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\""
time="2022-12-14T09:30:41.417626869Z" level=info msg="StartContainer for \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\""
[INFO] starting instance
[INFO] preparing module
[INFO] opening rootfs
[INFO] setting up wasi
[INFO] opening stdin
[INFO] opening stdout
[INFO] opening stderr
[INFO] building wasi context
[INFO] wasi context ready
[INFO] loading module from file
[INFO] instantiating instnace
[INFO] getting start function
[INFO] starting wasi instance
[INFO] started wasi instance with tid 1794
time="2022-12-14T09:30:41.559211243Z" level=info msg="StartContainer for \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\" returns successfully"
[INFO] child 1794 killed by signal SIGKILL, dumped: false
[INFO] wasi instance exited with status 137
time="2022-12-14T09:30:43.108591141Z" level=info msg="shim disconnected" id=6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e
time="2022-12-14T09:30:43.108722243Z" level=warning msg="cleaning up after shim disconnected" id=6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e namespace=k8s.io
time="2022-12-14T09:30:43.108732343Z" level=info msg="cleaning up dead shim"
time="2022-12-14T09:30:44.500146327Z" level=info msg="RemoveContainer for \"82de028e9dba19dfe45615e0efaa1e73cf35d05734b09aade8489485c5f48a84\""
time="2022-12-14T09:30:44.517400480Z" level=info msg="RemoveContainer for \"82de028e9dba19dfe45615e0efaa1e73cf35d05734b09aade8489485c5f48a84\" returns successfully"
time="2022-12-14T09:31:12.364643823Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for container &ContainerMetadata{Name:testwasm,Attempt:3,}"
time="2022-12-14T09:31:12.398472900Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for &ContainerMetadata{Name:testwasm,Attempt:3,} returns container id \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\""
time="2022-12-14T09:31:12.398916606Z" level=info msg="StartContainer for \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\""
[INFO] starting instance
[INFO] preparing module
[INFO] opening rootfs
[INFO] setting up wasi
[INFO] opening stdin
[INFO] opening stdout
[INFO] opening stderr
[INFO] building wasi context
[INFO] wasi context ready
[INFO] loading module from file
[INFO] instantiating instance
[INFO] getting start function
[INFO] starting wasi instance
[INFO] started wasi instance with tid 1862
time="2022-12-14T09:31:12.528460632Z" level=info msg="StartContainer for \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\" returns successfully"
[ERROR] error waiting for pid 1862: ECHILD: No child processes

Notice the log message "[INFO] child 1794 killed by signal SIGKILL, dumped: false"; the reported exit status 137 is 128 + 9, i.e. the process was killed by SIGKILL.
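
A minimal sketch of how the child might be reaped and that status decoded with the nix crate (illustrative, not the shim's actual code):

use nix::sys::wait::{waitpid, WaitStatus};
use nix::unistd::Pid;

// Reap the wasi child and map a signal death to the 128+signo convention
// that produces the 137 seen in the log above.
fn wait_for_exit(pid: i32) -> nix::Result<i32> {
    match waitpid(Pid::from_raw(pid), None)? {
        WaitStatus::Exited(_, code) => Ok(code),
        // SIGKILL is signal 9, so a killed child reports 128 + 9 = 137.
        WaitStatus::Signaled(_, sig, _core_dumped) => Ok(128 + sig as i32),
        // With no wait flags set, other statuses should not occur here.
        _ => Err(nix::Error::EINVAL),
    }
}

The later "[ERROR] error waiting for pid 1862: ECHILD: No child processes" suggests the second child was already reaped by some other path before this wait ran, though that is an assumption worth verifying.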

How to reproduce?

Set up a k3d cluster image following the steps in https://github.com/deislabs/containerd-wasm-shims/tree/main/deployments/k3d, replacing the spin & slight shims with the wasmtime shim in "config.toml.tmpl":

[plugins.cri.containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

Once the k3d cluster image is created, we can create a k3d cluster by running:
k3d cluster create k3s-default --image k3swithshim --api-port 6550 -p "8081:80@loadbalancer" --agents 1

Then apply the following workloads:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm
  template:
    metadata:
      labels:
        app: wasm
    spec:
      runtimeClassName: wasmtime
      containers:
        - name: testwasm
          image: docker.io/mossaka/wasmtest:2

Instance::new should return a Result type

This was discussed before: https://github.com/containerd/runwasi/pull/54#issuecomment-1403269766

IMHO we should aim to remove as many unwrap() calls as possible from the shim's main thread, because a library should not panic easily.

Originally posted by @ipuustin in #142 (comment)
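
A minimal sketch of the proposed change (the trait shape, InstanceConfig, and the error type below are placeholders, not the crate's exact API):

// Placeholder types standing in for the crate's real ones.
pub struct InstanceConfig<E>(std::marker::PhantomData<E>);
pub type Error = Box<dyn std::error::Error + Send + Sync>;

pub trait Instance {
    type Engine;

    // A fallible constructor: setup failures flow back to the caller as Err
    // instead of a panic from unwrap() on the shim's main thread.
    fn new(id: String, cfg: Option<&InstanceConfig<Self::Engine>>) -> Result<Self, Error>
    where
        Self: Sized;
}

With this shape, the shim can translate an Err into a proper task-service error for containerd rather than crashing the whole shim process.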

network communication between the k3s pod and container fails

I created a test repository, defims/wasmedge-hyper-server, to reproduce this problem:

environment

wasm pod failed

# run:
sudo kubectl apply -f wasm.yml
sudo curl localhost:30001

# got:
curl: (7) Failed to connect to localhost port 30001 after 1 ms: Connection refused

the img.tar and wasmedge-hyper-server.wasm:
img-and-wasmedge-hyper-server.zip

# unzip img-and-wasmedge-hyper-server.zip and import the image
sudo ctr image import --all-platforms img.tar

/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl:

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

the wasm.yml file:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasmedge-deployment
  labels:
    app: wasmedge-hyper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasmedge-hyper-server
  template:
    metadata:
      labels:
        app: wasmedge-hyper-server
    spec:
      runtimeClassName: wasmedge
      containers:
      - name: wasmedge-hyper-server
        image: ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8089
---
apiVersion: v1
kind: Service
metadata:
  name: wasmedge-service
  labels:
    app: wasmedge-hyper-server
spec:
  type: NodePort
  selector:
    app: wasmedge-hyper-server
  ports:
    - name: http
      protocol: TCP
      port: 8089
      targetPort: 8089
      nodePort: 30001

nginx works

# run:
sudo kubectl apply -f nginx.yml
sudo curl localhost:30000

# got:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

and the nginx.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000

single pod with hostNetwork failed:

# run:
sudo kubectl apply -f pod.yml
sudo curl localhost:8089

# got:
curl: (7) Failed to connect to localhost port 8089 after 0 ms: Connection refused

pod.yml file:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
apiVersion: v1
kind: Pod
metadata:
  name: wasmedge-hyper-pod
spec:
  hostNetwork: true
  runtimeClassName: wasmedge
  containers:
  - name: wasmedge-hyper-server
    image: ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8089

wasm container works:

I'm sure everything other than the network works:

# run:
sudo ctr run --rm --net-host --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest wasmedge-hyper-server
sudo curl localhost:8089

# got:
Try POSTing data to /echo such as: `curl localhost:8089/echo -XPOST -d 'hello world'`
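
Since the server is reachable under ctr with --net-host but not through the pod's Service or hostNetwork, one plausible explanation is that the shim starts the wasi instance in the host network namespace instead of the pod's CNI-created one, so the listener never appears where kube-proxy expects it. A minimal sketch of joining the pod netns before any sockets are opened (the approach and path handling are assumptions about a possible fix, not the shim's current behaviour; nix 0.26 signature):

use std::fs::File;
use std::os::unix::io::AsRawFd;

use nix::sched::{setns, CloneFlags};

// Enter the network namespace created for the pod sandbox before the wasi
// instance binds any sockets, so they live inside the pod's netns.
fn join_pod_netns(netns_path: &str) -> Result<(), Box<dyn std::error::Error>> {
    // e.g. a path taken from the OCI spec's linux.namespaces entries
    let ns = File::open(netns_path)?;
    setns(ns.as_raw_fd(), CloneFlags::CLONE_NEWNET)?;
    Ok(())
}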
