
SingularityCE is the Community Edition of Singularity, an open source container platform designed to be simple, fast, and secure.

Home Page: https://sylabs.io/docs/

License: Other


singularity's Introduction

SingularityCE


What is SingularityCE?

SingularityCE is the Community Edition of Singularity, an open source container platform designed to be simple, fast, and secure. Many container platforms are available, but SingularityCE is designed for ease-of-use on shared systems and in high performance computing (HPC) environments. It features:

  • An immutable single-file container image format, supporting cryptographic signatures and encryption.
  • Integration over isolation by default. Easily make use of GPUs, high speed networks, parallel filesystems on a cluster or server.
  • Mobility of compute. The single file SIF container format is easy to transport and share.
  • A simple, effective security model. You are the same user inside a container as outside, and cannot gain additional privilege on the host system by default.

SingularityCE is open source software, distributed under the BSD License.

Getting Started with SingularityCE

To install SingularityCE from source, see the installation instructions. For other installation options, see our guide.

System administrators can learn how to configure SingularityCE, and get an overview of its architecture and security features in the administrator guide.

For users, see the user guide for details on how to run and build containers with SingularityCE.
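For orientation, the from-source build described in the installation instructions follows a configure-and-make flow along these lines (run from a checkout of the repository, after installing Go and the build dependencies listed in the guide):

```shell
# Configure the build; mconfig creates the ./builddir directory.
./mconfig
# Compile SingularityCE.
make -C ./builddir
# Install (default prefix is /usr/local).
sudo make -C ./builddir install
```

See the installation instructions for the full dependency list and configuration options.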

Contributing to SingularityCE

Community contributions are always greatly appreciated. To start developing SingularityCE, check out the guidelines for contributing.

Please note that we have a code of conduct; please follow it in all your interactions with project members and users.

Our roadmap, other documents, and user/developer meeting information can be found in GitHub Discussions.

We also welcome contributions to our user guide and admin guide.

Support

To get help with SingularityCE, check out the community spaces detailed at our Community Portal.

See also our Support Guidelines for further information about the best place, and how, to raise different kinds of issues and questions.

For additional support, contact Sylabs to receive more information.

Community Calls & Roadmap

We maintain our roadmap on GitHub Discussions, so that it's easy to collect ideas for new features, and discuss which should be prioritized for the next release.

Regular community calls are held for the project, on the first Thursday of each month, via Zoom. The agenda for each call includes a demonstration of new features, or a project / workflow related to SingularityCE. This is followed by development updates and discussion, and then open questions. Meeting details are posted in GitHub Discussions, and recordings are made available on the Sylabs YouTube Channel.

If you work on a project related to Singularity, or use Singularity in an interesting workflow, let us know if you'd like to present to the community!

Go Version Compatibility

SingularityCE aims to maintain support for the two most recent stable versions of Go. This corresponds to the Go Release Maintenance Policy and Security Policy, ensuring critical bug fixes and security patches are available for all supported language versions.

Citing Singularity

The SingularityCE software may be cited using our Zenodo DOI 10.5281/zenodo.5570766:

SingularityCE Developers (2021) SingularityCE. 10.5281/zenodo.5570766 https://doi.org/10.5281/zenodo.5570766

This is an 'all versions' DOI for referencing SingularityCE in a manner that is not version-specific. You may wish to reference the particular version of SingularityCE used in your work. Zenodo creates a unique DOI for each release, and these can be found in the 'Versions' sidebar on the Zenodo record page.

Please also consider citing the original publication describing Singularity:

Kurtzer GM, Sochat V, Bauer MW (2017) Singularity: Scientific containers for mobility of compute. PLoS ONE 12(5): e0177459. https://doi.org/10.1371/journal.pone.0177459

License

Unless otherwise noted, this project is licensed under a 3-clause BSD license found in the license file.

singularity's People

Contributors

aduffy19, arangogutierrez, bauerm97, bbockelm, cclerget, ctmadison, dependabot-preview[bot], dependabot[bot], drdaved, dtrudg, emmeff, flx42, gmkurtzer, godloved, gvallee, ikaneshiro, ilmagico, jmstover, jscook2345, mem, mikegray, phphavok, pisarukv, sashayakovtseva, satra, tri-adam, truatpasteurdotfr, vsoch, yarikoptic, yhcote


singularity's Issues

Remove obsolete alpine and debian packaging specs in dist/

Type of issue
technical debt

Description of issue
dist/alpinelinux and dist/debian contain heavily outdated packaging specs that cannot build a current version. Both Alpine Linux and Debian maintain their own packaging, so we should remove these misleading files from our repo, to avoid giving the impression that up-to-date packages can be built directly from them.

Build From Dockerfile (native mode non-OCI-SIF)

Is your feature request related to a problem? Please describe.
Some projects don't have ready-built or up-to-date containers, but do have a Dockerfile. At present you cannot build these containers with Singularity without translating the Dockerfile into a Singularity definition file.

Describe the solution you'd like
What about Singularity being able to build from a Dockerfile (with a limited subset of functionality for anything in a Singularity recipe that doesn’t map to a Docker directive) so the user doesn’t need to re-write recipes?

Additional context
Proposed on roadmap doc by @vsoch
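A minimal sketch of the kind of directive mapping such a feature implies (the mapping table and `translate` helper are hypothetical illustrations, not SingularityCE code):

```python
# Hypothetical sketch: translate a small subset of Dockerfile directives
# into a Singularity definition file. Illustration only.

# How some common Dockerfile directives map onto definition-file sections.
SECTION_MAP = {
    "RUN": "%post",
    "ENV": "%environment",
    "LABEL": "%labels",
}

def translate(dockerfile_text):
    header = []
    sections = {"%post": [], "%environment": [], "%labels": []}
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        directive, _, rest = line.partition(" ")
        if directive == "FROM":
            header = ["Bootstrap: docker", f"From: {rest}"]
        elif directive in SECTION_MAP:
            body = rest
            if directive == "ENV":
                body = "export " + rest  # shell export in %environment
            sections[SECTION_MAP[directive]].append("    " + body)
        # CMD/ENTRYPOINT would map to %runscript; USER has no direct analogue.
    out = header[:]
    for name, lines in sections.items():
        if lines:
            out += ["", name] + lines
    return "\n".join(out)
```

Directives like USER and EXPOSE have no direct equivalent, which is where the "limited subset of functionality" caveat comes in.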

Overhaul key command

Type of issue
technical debt

Description of issue
The implementation of the key command has some significant technical debt, with portions of code at a different level than they should be. In addition, management of keys is not trivial for those unclear about the public/private nature, fingerprints, etc. of GPG. More expressive CLI output and a re-examination of the verbs and flags would be useful.

Long form --mount to support special chars in bind mount paths etc.

Is your feature request related to a problem? Please describe.
A path containing a : cannot be bind mounted.

Describe the solution you'd like
One approach is an escaping pattern as in apptainer/singularity#6008 - but the changes introduce further deep nested logic in the bind mount parsing.

It may be useful to consider how docker does this with a verbose --mount flag that uses type=,source=,destination= to avoid escaping problems.

moby/moby#8604

See commentary on the hpcng issue: apptainer/singularity#5923
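The appeal of the Docker-style long form is that key=value fields split on commas, so colons in paths need no escaping. A rough sketch of such parsing (the field names follow Docker's --mount convention; the parser itself is hypothetical):

```python
def parse_mount(spec):
    """Parse a Docker-style --mount string, e.g.
    'type=bind,source=/my:dir,destination=/mnt,ro'.
    Colons in paths need no escaping because fields split on commas.
    (Commas inside paths would still need CSV-style quoting, as Docker
    supports.) Hypothetical sketch, not SingularityCE code."""
    opts = {}
    for field in spec.split(","):
        key, sep, value = field.partition("=")
        # Bare flags like 'ro' carry no value.
        opts[key] = value if sep else True
    return opts
```

Compare this with the short bind form `-B /my:dir:/mnt`, where the colon in the source path is ambiguous with the source/destination separator.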

Error running a container on multi-node

I'm running a sandbox version of a lammps container on 4 nodes in an HPC cluster. I'm getting the errors below. Any ideas, please?

Version of Singularity:

What version of Singularity are you using? Run:

$ singularity version

singularity version 3.7.1

Expected behavior

What did you expect to see when you do...?

run the container without errors on multi-node

Actual behavior

00007ffd72d2e8e0: <0000000000000000  00007ffd72d2e928 FATAL:   container creation failed: mount tmpfs->${path}/singularity/3.7.1/var/singularity/mnt/session error: while mounting tmpfs: can't mount tmpfs filesystem to ${path}/singularity/3.7.1/var/singularity/mnt/session: write unix @->@: write: broken pipe

00007ffd72d2e8f0:  000055ce447f233a <runtime.mmap.func1+90>  0000000000000000
00007ffd72d2e900:  0000000000210808  00007ffd72d2e950
00007ffd72d2e910:  00007ffd72d2e960  0000000000000040
00007ffd72d2e920:  0000000000000040  0000000000000001
00007ffd72d2e930:  000000006e43a318  000055ce44de0f6c
00007ffd72d2e940:  0000000000000000  000055ce447fa69e <runtime.callCgoMmap+62>
00007ffd72d2e950:  00007ffd72d2e950  0000000000000000
00007ffd72d2e960:  fffffffe7fffffff  ffffffffffffffff
00007ffd72d2e970:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e980:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e990:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e9a0:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e9b0:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e9c0:  ffffffffffffffff  ffffffffffffffff
00007ffd72d2e9d0:  ffffffffffffffff  ffffffffffffffff

goroutine 1 [chan receive, locked to thread]:
runtime.gopark(0x55ce45260dc8, 0xc000054058, 0x170e, 0x2)
        runtime/proc.go:304 +0xe6
runtime.chanrecv(0xc000054000, 0x0, 0xc000000101, 0x55ce447a0101)
        runtime/chan.go:535 +0x2f9
runtime.chanrecv1(0xc000054000, 0x0)
        runtime/chan.go:412 +0x2b
runtime.gcenable()
        runtime/mgc.go:217 +0xae
runtime.main()
        runtime/proc.go:166 +0x11d
runtime.goexit()
        runtime/asm_amd64.s:1373 +0x1

rax    0x0
rbx    0x6
rcx    0x1477e97627ff
rdx    0x0
rdi    0x2
rsi    0x7ffd72d2e8e0
rbp    0x55ce44eee9b9
rsp    0x7ffd72d2e8e0
r8     0x0
r9     0x7ffd72d2e8e0
r10    0x8
r11    0x246
r12    0x55ce46006580
r13    0x0
r14    0x55ce44ed0fb4
r15    0x0
rip    0x1477e97627ff
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
FATAL:   container creation failed: mount tmpfs->${path}/singularity/3.7.1/var/singularity/mnt/session error: while mounting tmpfs: can't mount tmpfs filesystem to ${path}/singularity/3.7.1/var/singularity/mnt/session: write unix @->@: write: broken pipe
FATAL:   container creation failed: mount tmpfs->${path}/singularity/3.7.1/var/singularity/mnt/session error: while mounting tmpfs: can't mount tmpfs filesystem to ${path}/singularity/3.7.1/var/singularity/mnt/session: read unix @->@: read: connection reset by peer
.. .. .. .. .. /opt/intel/psxe_runtime_2020.4.17/linux/mpi/intel64/bin/mpirun

Steps to reproduce this behavior

How can others reproduce this issue/problem?
$ mpiexec.hydra -hostfile hostfile -ppn $ppn -np $np singularity run <sandboxContainer>

$ cat /etc/os-release
### What OS/distro are you running
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"

How did you install Singularity

Write here how you installed Singularity. Eg. RPM, source.
Not sure

Escaped strings being expanded in environment variables causing $LIB LD_PRELOAD issue

Version of Singularity 3.7.3

Describe the bug
Unable to set LD_PRELOAD=/foo/bar/$LIB/baz.so in an arbitrary container.

$LIB is not another environment variable in this context, but is intended to be resolved by the linker, ld.so. When set using export SINGULARITYENV_LD_PRELOAD='/foo/bar/$LIB/baz.so', $LIB is treated as an environment variable (which is unlikely to be set), and the variable set in the container is LD_PRELOAD=/foo/bar//baz.so rather than the hoped-for result. As a result the library is not loaded and an error is output: ERROR: ld.so: object '/foo/bar//baz.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
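The effect can be illustrated with a few lines of naive shell-style expansion (a sketch of the behaviour described, not Singularity's actual implementation):

```python
import re

def expand(value, env):
    """Naive shell-style expansion: every $NAME or ${NAME} is replaced
    by its value in env, or by the empty string when unset -- which is
    exactly what mangles the ld.so token $LIB."""
    return re.sub(r"\$\{?(\w+)\}?", lambda m: env.get(m.group(1), ""), value)

# $LIB is meant for ld.so, not the shell, but naive expansion eats it:
print(expand("/foo/bar/$LIB/baz.so", {}))  # -> /foo/bar//baz.so
```

An escape mechanism (or a $LIB/$PLATFORM exception for LD_PRELOAD) would need to bypass this substitution step.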

To Reproduce
The underlying cause can be reproduced with any environment variable whose escaped string contains $, though in many cases the behaviour could well be desired/by design. I have attached some example code that builds an LD_PRELOAD library that outputs a string when loaded, using the $LIB variable.

preload_test.tar.gz

It requires a singularity image to test - ubuntu.simg in the same directory.
If you run make singularity_test then it should build the library and will run a hello world program using SINGULARITYENV_LD_PRELOAD, which will error before outputting Hello world!. It will then re-run the binary using a script in the container that sets LD_PRELOAD that works as expected and outputs Loaded library successfully before the output of the Hello World program.

Expected behavior
There should be a way to escape $ characters in environment variables so that they can be set in the container. Alternatively there could be an exception made for $LIB or ${LIB} in LD_PRELOAD.

OS / Linux Distribution Ubuntu 20.04.2 LTS

Installation Method Source (from release 3.7.3 on github)

Additional context
This used to work in v3.5.3 and I believe the behaviour was changed in v3.6. I mentioned this problem on the Slack channel and @dtrudg asked me to create an issue.

singularity run library://... does not honor active remote

Version of Singularity
What version of Singularity are you using? Run:

$ singularity version
singularity-ce version 3.8.0-rc.2

Describe the bug
When using singularity run library://entity/collection/container the container is always looked up in the sylabs cloud, even if a different remote is active in remote.yaml

To Reproduce
Steps to reproduce the behavior:

  1. set up a custom remote
  2. singularity remote use custom
  3. singularity run library://entity/collection/container (where the container is not present in the sylabs cloud)

Expected behavior
Container should be fetched from the active remote. "pull", "search", etc. all use the active remote; only "run" does not.

Installation Method
compiled from source with the 3.8.0-rc.2 tag

Fix
It seems the attached patch can fix this.
singularity-remote.patch.txt

Feature: NVIDIA_VISIBLE_DEVICES support

Is your feature request related to a problem? Please describe.
Singularity does not support GPU masking etc. that users running Docker may be familiar with.

Describe the solution you'd like
Masking / limiting use of NVIDIA GPUs in containers is done via the NVIDIA_VISIBLE_DEVICES variable in the OCI world. Singularity binds a complete /dev tree by default, and all GPUs within a --contain-ed /dev tree, so GPU limitation is handled by application-specific config and/or CUDA_VISIBLE_DEVICES, which has limitations and is not respected by all software. Work is needed to respect the NVIDIA_VISIBLE_DEVICES environment configuration when creating a --contain-ed /dev tree, so that GPU limitations and MIG are supported in a manner consistent with OCI runtimes. Investigate, and implement if possible, device masking in uncontained mode run with --nv.

Having done a bunch of investigation and playing around on this now I think the only sensible approach is to call out to nvidia-container-cli for our binds and cgroups device limitation stuff, as long as it doesn't disrupt anything else Singularity is doing. Manually handling the setup in Singularity alone would involve a lot of looking at nvidia-smi output, and/or other info from /sys to understand what we need to do, in terms of how the env var ID/UUIDs map to device IDs for access restriction.

The main concern, though, is the transition from /proc/driver/nvidia/capabilities based access control to a /dev based approach. This transition is ongoing and driver versions in use will straddle the two approaches:

https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html#device-nodes

The current CUDA 11/R450 GA (Linux driver 450.51.06) supports both mechanisms, but going forward the /dev based interface is the preferred method and the /proc based interface is deprecated. For now, users can choose the desired interface by using the nv_cap_enable_devfs parameter on the nvidia.ko kernel module

It's not feasible for us to write code for both of these models, and maintain / test it well. CUDA support is a quicker moving target these days, so adopting the upstream tooling seems to make sense.
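For reference, NVIDIA_VISIBLE_DEVICES takes values like `all`, `none`, or a comma-separated list of device indices/UUIDs; a hypothetical parser for the variable (illustration only, not SingularityCE code) might look like:

```python
def visible_devices(value):
    """Interpret an NVIDIA_VISIBLE_DEVICES-style value.
    Returns 'all', an empty list (no devices), or a list of
    index/UUID tokens. Hypothetical helper for illustration only."""
    if value is None or value in ("", "none"):
        return []
    if value == "all":
        return "all"
    return [tok.strip() for tok in value.split(",") if tok.strip()]
```

The hard part, as noted above, is not parsing the variable but mapping these IDs/UUIDs onto device nodes for access restriction, which is why delegating to nvidia-container-cli is attractive.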

Describe alternatives you've considered
See above.

Additional context
A POC was established prior to the SingularityCE fork at: apptainer/singularity#5829

Replace Deprecated Golint Linter

As of golangci-lint v1.40.1, the golint linter has been deprecated and replaced by Revive:

WARN [runner] The linter 'golint' is deprecated (since v1.41.0) due to: The repository of the linter has been archived by the owner.  Replaced by revive.

We should replace golint with revive in the linter configuration.

Hang / freeze when failing to push to Harbor oras registry

Version of Singularity

$ singularity version
3.8.0

Describe the bug
Singularity 3.8.0 (both CE and forks) fails to catch and handle the error case where a .sif fails to push to Harbor.

SIF images continue to be pushed using ORAS (see the ticket below).

Relates to apptainer/singularity#5691

To Reproduce
Steps to reproduce the behavior:

Have Harbor 2.2+ with OIDC (not username+password auth)

rm -rf $HOME/.singularity

singularity login -u <user> -p <harbor CLI secret> oras://your.harbor.instance

singularity -d push $HOME/alpine_latest.sif oras://your.harbor.instance/test/alpine:latest
DEBUG   [U=1000,P=18264]   persistentPreRun()            Singularity version: 3.8.0
DEBUG   [U=1000,P=18264]   persistentPreRun()            Parsing configuration file /usr/local/etc/singularity/singularity.conf
DEBUG   [U=1000,P=18264]   handleConfDir()               /home/dsouthwi/.singularity already exists. Not creating.
DEBUG   [U=1000,P=18264]   handleRemoteConf()            Ensuring file permission of 0600 on /home/dsouthwi/.singularity/remote.yaml
DEBUG   [U=1000,P=18264]   Init()                        Image format detection
DEBUG   [U=1000,P=18264]   Init()                        Check for sandbox image format
DEBUG   [U=1000,P=18264]   Init()                        sandbox format initializer returned: not a directory image
DEBUG   [U=1000,P=18264]   Init()                        Check for sif image format
DEBUG   [U=1000,P=18264]   Init()                        sif image format detected
DEBUG   [U=1000,P=18264]   UploadImage()                 ORAS push not accepted, retrying without config for registry compatibility

<wait 1-1000 minutes>
 
^CDEBUG   [U=1000,P=18264]   func2()                       User requested cancellation with interrupt
^C^C^C^C^C^C^C^C^C^C^C^C

The program is frozen/locked and unresponsive to interrupts; you must restart your session or machine.

Expected behavior
Ideally, pushing to the ORAS Harbor registry succeeds.
If the push does fail, at least time out or report an error; anything but locking up the session.

OS / Linux Distribution
Which Linux distribution are you using?
fails on centos/ubuntu/rhel in the same way.

Installation Method
bug present for both source build and RPM (EPEL, fedora, etc)

Support for Dockerfile USER (--oci mode)

Is your feature request related to a problem? Please describe.
Some Docker containers were built with a Dockerfile containing a USER directive. The application was installed under the assumption that at runtime the specified user is active in the container. This is not the case with Singularity, so these containers may fail with permissions or application configuration errors.

Describe the solution you'd like
SingularityCE has a ‘fakeroot engine’ that is able to configure a container run so that subuid/subgid configuration is used. This type of functionality opens the possibility of carrying through USER specifications from Docker containers, so that their payload can run as the expected username.

Support for Dockerfile USER should be enabled by the ability to execute images through runc as a low-level engine, through 3.11 and 4.0.

SR-IOV Networking Support

Describe the solution you'd like
Common server Ethernet and IB cards support SR-IOV, where they can present multiple ‘PCIe virtual functions’ that act as independent network devices but share the same hardware. E.g. my Mellanox ConnectX3-PRO can be configured so that it presents as 16 network devices per port. This is often used to share a card between multiple VMs. Containers may also benefit from networking being shared at this layer, for general performance reasons and container-specific native IB support. See https://github.com/Mellanox/docker-sriov-plugin and subsequent CNI direction.

Additional context
This is a low priority "wish-list" item.

Schedule Community Meetings

Seeking input on schedule (day in month / time) for a monthly (or other frequency) community meeting.

We want to consider alternating times in order to accommodate people in different time-zones.

Non-root / Default Security Profiles

Describe the solution you'd like
SingularityCE can apply security restrictions, such as SELinux rules and seccomp filters, via a --security flag. However, this only works for root. Since SingularityCE focuses on non-root execution, it would be useful for optional or mandatory profiles to be applied to container runs by non-root users. This would allow security restrictions beyond the usual POSIX permissions to be mandated for container execution. Consider:

  • SElinux
  • Apparmor
  • Seccomp
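As one concrete illustration, a seccomp profile in the Docker/OCI JSON style that a default non-root policy might apply could look like the fragment below (an illustrative example, not a profile shipped with SingularityCE):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["kexec_load", "open_by_handle_at", "init_module"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

An administrator-mandated version of such a profile would need to be applied regardless of whether the user passes --security.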

Go Module Conformance

Type of issue
technical debt

Description of issue
A major version offers an opportunity to revise the versioning approach, so that SingularityCE pkg/ code can be called from other projects as expected of a go module.

Problems with binding user mounts

I am having a technical issue

I have a collection of docker images built into sandbox directories
I use this as a source and for my instance creation.
I then bind some arbitrary empty host folder to the container image. The container path has pre-existing files/folders created as a part of the image during build.
The host folder overwrites the container path, essentially mounting an empty folder unlike the expected union the docker runtime performs, causing the runscript to fail as scripts in these destination directories are used.

Not sure if I am doing something wrong here or am missing something.

Also can't seem to find documentation on what to do when volumes need to be shared across instances.

Any help is appreciated :)

documentation link in repo?

This is a pretty simple suggestion, but I think a direct link to the current docs (on readthedocs) along with the description of the repository would be hugely useful - I always have to google search and do many clicks to find it.

could not attach image file to loop device: no loop devices available

I recently encountered the following error with Singularity 3.7.3 on Arch linux. It used to work for me but now fails:

$ singularity pull --name hello.simg shub://vsoch/hello-world
$ singularity run hello.simg
FATAL:   container creation failed: mount /proc/self/fd/3->/var/singularity/mnt/session/rootfs error: while mounting image /proc/self/fd/3: failed to find loop device: could not attach image file to loop device: no loop devices available

There are certainly enough loop devices available:

$ ls /dev/loop*
/dev/loop0    /dev/loop121  /dev/loop145  /dev/loop169  /dev/loop192  /dev/loop215
(and more...)
$ mount | grep loop
(empty output)

The config contains max loop devices = 256.
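"No loop devices available" here means no free device could be attached, not that /dev/loop* nodes are missing; conceptually the check amounts to the device nodes present minus those already attached (a simplified sketch of that bookkeeping, not Singularity's code, which attaches via the kernel's loop-control interface):

```python
def free_loops(dev_listing, losetup_output):
    """Determine free loop devices from an `ls /dev/loop*`-style listing
    and `losetup -a`-style output (lines like '/dev/loop0: ... (file)').
    Simplified illustration only."""
    present = set(dev_listing)
    attached = {line.split(":", 1)[0]
                for line in losetup_output.splitlines() if line.strip()}
    return sorted(present - attached)
```

In the report above, `mount | grep loop` shows nothing attached yet attaching still fails, which points at permissions or kernel/config limits rather than exhaustion.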

Here is the debug output for the same command:

$ singularity -d run hello.simg
DEBUG   [U=1000,P=681364]  persistentPreRun()            Singularity version: 3.7.3
DEBUG   [U=1000,P=681364]  persistentPreRun()            Parsing configuration file /etc/singularity/singularity.conf
DEBUG   [U=1000,P=681364]  handleConfDir()               /home/alu/.singularity already exists. Not creating.
DEBUG   [U=1000,P=681364]  execStarter()                 Saving umask 0022 for propagation into container
DEBUG   [U=1000,P=681364]  execStarter()                 Checking for encrypted system partition
DEBUG   [U=1000,P=681364]  Init()                        Image format detection
DEBUG   [U=1000,P=681364]  Init()                        Check for sandbox image format
DEBUG   [U=1000,P=681364]  Init()                        sandbox format initializer returned: not a directory image
DEBUG   [U=1000,P=681364]  Init()                        Check for sif image format
DEBUG   [U=1000,P=681364]  Init()                        sif format initializer returned: SIF magic not found
DEBUG   [U=1000,P=681364]  Init()                        Check for squashfs image format
DEBUG   [U=1000,P=681364]  Init()                        squashfs image format detected
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding SHELL environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding SESSION_MANAGER environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding COLORTERM environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LESS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_SESSION_PATH environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_MENU_PREFIX environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_EXE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding _CE_M environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding HISTSIZE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding SSH_AUTH_SOCK environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XMODIFIERS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DESKTOP_SESSION environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_MONETARY environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding EDITOR environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DBUS_STARTER_BUS_TYPE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding PWD environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LOGNAME environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_SESSION_DESKTOP environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding QT_QPA_PLATFORMTHEME environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_SESSION_TYPE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_PREFIX environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MAMBA_ROOT_PREFIX environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MANPATH environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding SYSTEMD_EXEC_PID environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XAUTHORITY environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_GREETER_DATA_DIR environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MOTD_SHOWN environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding GDM_LANG environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_PAPER environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LANG environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LS_COLORS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_CURRENT_DESKTOP environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DARWIN environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding VTE_VERSION environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_PROMPT_MODIFIER environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_SEAT_PATH environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MPW_FULLNAME environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding INVOCATION_ID environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MANAGERPID environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding INFOPATH environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding NVM_DIR environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_SESSION_CLASS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding TERM environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding _CE_CONDA environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding USER environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LIBRARY_PATH environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MICRO_MYOTIS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_SHLVL environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DISPLAY environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding SHLVL environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding NVM_CD_FLAGS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding PBGAPS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding PAGER environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding QT_IM_MODULE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_MEASUREMENT environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DBUS_STARTER_ADDRESS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding OOO_FORCE_DESKTOP environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding COOKIE_VERIFICATION_SECRET environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DENTIST environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_PYTHON_EXE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding TILIX_ID environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding XDG_RUNTIME_DIR environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding CONDA_DEFAULT_ENV environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_TIME environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding JOURNAL_STREAM environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_COLLATE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding GTK3_MODULES environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding GDMSESSION environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding HISTFILESIZE environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding DBUS_SESSION_BUS_ADDRESS environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding HG environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding NVM_BIN environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding MAIL environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding LC_NUMERIC environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding OLDPWD environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding _ environment variable
DEBUG   [U=1000,P=681364]  SetContainerEnv()             Forwarding USER_PATH environment variable
VERBOSE [U=1000,P=681364]  SetContainerEnv()             Setting HOME=/home/alu
VERBOSE [U=1000,P=681364]  SetContainerEnv()             Setting PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBUG   [U=1000,P=681364]  init()                        Use starter binary /usr/lib/singularity/bin/starter-suid
VERBOSE [U=0,P=681364]     print()                       Set messagelevel to: 5
VERBOSE [U=0,P=681364]     init()                        Starter initialization
DEBUG   [U=0,P=681364]     load_overlay_module()         Trying to load overlay kernel module
DEBUG   [U=0,P=681364]     load_overlay_module()         Overlay seems supported by the kernel
VERBOSE [U=0,P=681364]     is_suid()                     Check if we are running as setuid
VERBOSE [U=0,P=681364]     priv_drop()                   Drop root privileges
DEBUG   [U=1000,P=681364]  read_engine_config()          Read engine configuration
DEBUG   [U=1000,P=681364]  init()                        Wait completion of stage1
VERBOSE [U=1000,P=681373]  priv_drop()                   Drop root privileges permanently
DEBUG   [U=1000,P=681373]  set_parent_death_signal()     Set parent death signal to 9
VERBOSE [U=1000,P=681373]  init()                        Spawn stage 1
DEBUG   [U=1000,P=681373]  startup()                     singularity runtime engine selected
VERBOSE [U=1000,P=681373]  startup()                     Execute stage 1
DEBUG   [U=1000,P=681373]  StageOne()                    Entering stage 1
DEBUG   [U=1000,P=681373]  prepareAutofs()               Found "/proc/sys/fs/binfmt_misc" as autofs mount point
DEBUG   [U=1000,P=681373]  prepareAutofs()               Could not keep file descriptor for bind path /etc/localtime: no mount point
DEBUG   [U=1000,P=681373]  prepareAutofs()               Could not keep file descriptor for bind path /etc/hosts: no mount point
DEBUG   [U=1000,P=681373]  prepareAutofs()               Could not keep file descriptor for home directory /home/alu: no mount point
DEBUG   [U=1000,P=681373]  prepareAutofs()               Could not keep file descriptor for current working directory /home/alu/projects/pb-gaps/src/dentist-example: no mount point
DEBUG   [U=1000,P=681373]  Init()                        Image format detection
DEBUG   [U=1000,P=681373]  Init()                        Check for sandbox image format
DEBUG   [U=1000,P=681373]  Init()                        sandbox format initializer returned: not a directory image
DEBUG   [U=1000,P=681373]  Init()                        Check for sif image format
DEBUG   [U=1000,P=681373]  Init()                        sif format initializer returned: SIF magic not found
DEBUG   [U=1000,P=681373]  Init()                        Check for squashfs image format
DEBUG   [U=1000,P=681373]  Init()                        squashfs image format detected
DEBUG   [U=1000,P=681373]  setSessionLayer()             Overlay seems supported and allowed by kernel
DEBUG   [U=1000,P=681373]  setSessionLayer()             Attempting to use overlayfs (enable overlay = try)
VERBOSE [U=1000,P=681364]  wait_child()                  stage 1 exited with status 0
DEBUG   [U=1000,P=681364]  cleanup_fd()                  Close file descriptor 4
DEBUG   [U=1000,P=681364]  cleanup_fd()                  Close file descriptor 5
DEBUG   [U=1000,P=681364]  cleanup_fd()                  Close file descriptor 6
DEBUG   [U=1000,P=681364]  init()                        Set child signal mask
DEBUG   [U=1000,P=681364]  init()                        Create socketpair for master communication channel
DEBUG   [U=1000,P=681364]  init()                        Create RPC socketpair for communication between stage 2 and RPC server
VERBOSE [U=1000,P=681364]  priv_escalate()               Get root privileges
VERBOSE [U=0,P=681364]     priv_escalate()               Change filesystem uid to 1000
VERBOSE [U=0,P=681364]     init()                        Spawn master process
DEBUG   [U=0,P=681379]     set_parent_death_signal()     Set parent death signal to 9
VERBOSE [U=0,P=681379]     create_namespace()            Create mount namespace
VERBOSE [U=0,P=681364]     enter_namespace()             Entering in mount namespace
DEBUG   [U=0,P=681364]     enter_namespace()             Opening namespace file ns/mnt
VERBOSE [U=0,P=681379]     create_namespace()            Create mount namespace
DEBUG   [U=0,P=681364]     set_master_privileges()       Set master privileges
DEBUG   [U=0,P=681364]     apply_privileges()            Effective capabilities:   0x00000000000000c0
DEBUG   [U=0,P=681364]     apply_privileges()            Permitted capabilities:   0x000001ffffffffff
DEBUG   [U=0,P=681364]     apply_privileges()            Bounding capabilities:    0x000001ffffffffff
DEBUG   [U=0,P=681364]     apply_privileges()            Inheritable capabilities: 0x000001ffffffffff
DEBUG   [U=0,P=681364]     apply_privileges()            Ambient capabilities:     0x0000000000000000
DEBUG   [U=0,P=681364]     apply_privileges()            Set user ID to 1000
DEBUG   [U=0,P=681380]     set_rpc_privileges()          Set RPC privileges
DEBUG   [U=0,P=681380]     apply_privileges()            Effective capabilities:   0x0000000000200000
DEBUG   [U=0,P=681380]     apply_privileges()            Permitted capabilities:   0x000001ffffffffff
DEBUG   [U=0,P=681380]     apply_privileges()            Bounding capabilities:    0x0000000008204000
DEBUG   [U=0,P=681380]     apply_privileges()            Inheritable capabilities: 0x0000000000000000
DEBUG   [U=0,P=681380]     apply_privileges()            Ambient capabilities:     0x0000000000000000
DEBUG   [U=0,P=681380]     apply_privileges()            Set user ID to 1000
DEBUG   [U=1000,P=681380]  set_parent_death_signal()     Set parent death signal to 9
VERBOSE [U=1000,P=681380]  init()                        Spawn RPC server
DEBUG   [U=1000,P=681364]  startup()                     singularity runtime engine selected
VERBOSE [U=1000,P=681364]  startup()                     Execute master process
DEBUG   [U=1000,P=681380]  startup()                     singularity runtime engine selected
VERBOSE [U=1000,P=681380]  startup()                     Serve RPC requests
DEBUG   [U=1000,P=681364]  setupSessionLayout()          Using Layer system: overlay
DEBUG   [U=1000,P=681364]  setupOverlayLayout()          Creating overlay SESSIONDIR layout
DEBUG   [U=1000,P=681364]  addRootfsMount()              Mount rootfs in read-only mode
DEBUG   [U=1000,P=681364]  addRootfsMount()              Image type is 4096
DEBUG   [U=1000,P=681364]  addRootfsMount()              Mounting block [squashfs] image: /home/alu/projects/pb-gaps/src/dentist-example/hello.simg
DEBUG   [U=1000,P=681364]  addKernelMount()              Checking configuration file for 'mount proc'
DEBUG   [U=1000,P=681364]  addKernelMount()              Adding proc to mount list
VERBOSE [U=1000,P=681364]  addKernelMount()              Default mount: /proc:/proc
DEBUG   [U=1000,P=681364]  addKernelMount()              Checking configuration file for 'mount sys'
DEBUG   [U=1000,P=681364]  addKernelMount()              Adding sysfs to mount list
VERBOSE [U=1000,P=681364]  addKernelMount()              Default mount: /sys:/sys
DEBUG   [U=1000,P=681364]  addDevMount()                 Checking configuration file for 'mount dev'
DEBUG   [U=1000,P=681364]  addDevMount()                 Adding dev to mount list
VERBOSE [U=1000,P=681364]  addDevMount()                 Default mount: /dev:/dev
DEBUG   [U=1000,P=681364]  addHostMount()                Not mounting host file systems per configuration
VERBOSE [U=1000,P=681364]  addBindsMount()               Found 'bind path' = /etc/localtime, /etc/localtime
VERBOSE [U=1000,P=681364]  addBindsMount()               Found 'bind path' = /etc/hosts, /etc/hosts
DEBUG   [U=1000,P=681364]  addHomeStagingDir()           Staging home directory (/home/alu) at /var/singularity/mnt/session/home/alu
DEBUG   [U=1000,P=681364]  addHomeMount()                Adding home directory mount [/var/singularity/mnt/session/home/alu:/home/alu] to list using layer: overlay
DEBUG   [U=1000,P=681364]  addTmpMount()                 Checking for 'mount tmp' in configuration file
VERBOSE [U=1000,P=681364]  addTmpMount()                 Default mount: /tmp:/tmp
VERBOSE [U=1000,P=681364]  addTmpMount()                 Default mount: /var/tmp:/var/tmp
DEBUG   [U=1000,P=681364]  addScratchMount()             Not mounting scratch directory: Not requested
DEBUG   [U=1000,P=681364]  addLibsMount()                Checking for 'user bind control' in configuration file
DEBUG   [U=1000,P=681364]  addFilesMount()               Checking for 'user bind control' in configuration file
DEBUG   [U=1000,P=681364]  addResolvConfMount()          Adding /etc/resolv.conf to mount list
VERBOSE [U=1000,P=681364]  addResolvConfMount()          Default mount: /etc/resolv.conf:/etc/resolv.conf
DEBUG   [U=1000,P=681364]  addHostnameMount()            Skipping hostname mount, not virtualizing UTS namespace on user request
DEBUG   [U=1000,P=681364]  create()                      Mount all
DEBUG   [U=1000,P=681364]  mountGeneric()                Mounting tmpfs to /var/singularity/mnt/session
FATAL   [U=1000,P=681364]  Master()                      container creation failed: mount /proc/self/fd/3->/var/singularity/mnt/session/rootfs error: while mounting image /proc/self/fd/3: failed to find loop device: could not attach image file to loop device: no loop devices available

@dtrudg What do you mean by mountinfo? Here is the output of `mount` before launching the command:

proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=3914504k,nr_inodes=978626,mode=755,inode64)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
/dev/mapper/AntergosVG-AntergosRoot on / type ext4 (rw,noatime,discard)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11543)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /boot type ext4 (rw,noatime,discard,stripe=4)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,nr_inodes=409600,inode64)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=785652k,nr_inodes=196413,mode=700,uid=1000,gid=100,inode64)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=100)
/dev/mmcblk0p1 on /run/media/alu/0BF9-003D type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)

Originally posted by @a-ludi in #65 (comment)

Why doesn't singularity bind the `/run/user/<ID>` and `/etc/machine-id` paths by default?

Version of Singularity

What version of Singularity are you using? Run:

$ singularity version
3.8.0

Describe the bug

I noticed that to use several GUIs (e.g. Visual Studio Code, rviz, gedit, etc.) I need to bind both the /run/user/<ID> folder and the /etc/machine-id file. I was wondering why, by design, these paths are not bound when running a singularity container (see the docs for the system paths that are currently bound by default)?

Further, I'm also unsure why the SESSION_MANAGER variable is forwarded from the host into singularity containers, as for several of the GUIs I have to unset this variable to make warnings disappear.

I understand that binding these paths and unsetting the SESSION_MANAGER variable might be specific to my use case. I was, however, wondering why the Sylabs team made this design choice. Although this issue is labelled as a bug, it might actually be a question about the default bind and environment forwarding behaviour of the singularity program.

To Reproduce

Steps to reproduce the behaviour:

  1. Create a ubuntu singularity container using sudo singularity build --sandbox ubuntu docker://ubuntu:18.04.
  2. Run the new container sudo singularity run --writable ubuntu.
  3. Install gedit into the container apt install gedit.
  4. Exit the container.
  5. Run the container using the --nv flag.
  6. Try to start gedit.
  7. See it works but throws the following repeated error:
(gedit:27671): dconf-CRITICAL **: 12:47:35.667: unable to create directory '/run/user/1000/dconf': Read-only file system.  dconf will not work properly.

This can be solved by manually binding the /run/user/<ID> folder. Further, if people try to open other GUI programs like rviz they receive the following error:

QStandardPaths: XDG_RUNTIME_DIR points to non-existing path '/run/user/1000', please create it with 0700 permissions.
dbus[30637]: D-Bus library appears to be incorrectly set up: see the manual page for dbus-uuidgen to correct this issue. (Failed to open "/var/lib/dbus/machine-id": No such file or directory; UUID file '/etc/machine-id' should contain a hex string of length 32, not length 0, with no other text)
  D-Bus not built with -rdynamic so unable to print a backtrace
Aborted (core dumped)

This can be solved by manually binding both the /run/user/<ID> and /etc/machine-id paths. After doing that the only warning that is left is the following:

Qt: Session management error: None of the authentication protocols specified are supported

This can be solved by unsetting the SESSION_MANAGER environment variable, which by default is set to local/ricks-HP-ZBook-Studio-x360-G5:@/tmp/.ICE-unix/1817,unix/ricks-HP-ZBook-Studio-x360-G5:/tmp/.ICE-unix/1817.
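
Taken together, the workarounds described above can be combined into a single invocation (a sketch; `ubuntu` is the sandbox image from the reproduction steps):

```shell
# Bind the user runtime dir and machine-id, and drop SESSION_MANAGER so
# ICE session management is not attempted inside the container.
unset SESSION_MANAGER
singularity run --nv \
  --bind /run/user/$(id -u) \
  --bind /etc/machine-id \
  ubuntu
```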

Expected behaviour

I expected singularity to mount both the /run/user/<ID> folder and the /etc/machine-id file by default.

OS / Linux Distribution

Which Linux distribution are you using?

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Installation Method

I installed singularity from source using the Sylabs 3.8 documentation.

Additional context

Related to:

Accurate confirmation messages including target of push operations

Is your feature request related to a problem? Please describe.
When pushing a key or an image it is often useful to have some feedback indicating you've pushed it to the correct place.

Describe the solution you'd like
We have had messages providing push URIs in confirmations in the past, but they were rolled back, as the original code wasn't well adapted to handle changes to the complex mechanism of determining destination services via remote endpoints and/or flags. See e.g. #4

We should aim to restore these messages in a way that ensures they are always correct. This will require considering at which point(s) in the code, from the CLI down to the implementation of the operations, the URI is known or should be known.

Removal of Code Supporting Legacy Distros (3.10 tasks)

Type of issue
technical debt

Description of issue
SingularityCE contains various workarounds for RHEL6 / 2.6 kernels, old versions of invoked external programs, etc. Special cases supporting these distributions can be removed gradually, with 4.0 as a good target to have removed them all, simplifying maintenance.

This relates to the host on which Singularity is installed, not the age of the containerized OS.

  • Remove workaround for squashfs-tools < 4.3
  • Remove remaining EL6 test conditionals
  • No need to handle _LINUX_CAPABILITY_VERSION_2

--writable-tmpfs for %test in singularity build

Hi,

Some of the software I want to embed in my image systematically writes to /var/log/$dedicated_directory on startup, even when only displaying its version as I do in the %test section of my recipe. This is not a problem when using the image itself with the --writable flag or similar, but I can't find a way to verify at build time that my software is properly installed, as I always get errors about attempts to write to a read-only file system.

The ability to use --writable-tmpfs in singularity build for the test section as available for singularity shell or singularity exec would solve my problem and allow more flexibility in image testing.

I also considered using --bind, but the problematic software writes to a subdirectory of /var/log which does not exist at the moment the binding is done, so my only option is to bind the whole of /var/log, which feels inadequate.
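
Until %test can run with a writable tmpfs at build time, one workaround (a sketch; flag support may vary by version, and the file names are hypothetical) is to skip tests during the build and run them afterwards:

```shell
# Build without running %test, then execute the tests with an ephemeral
# writable overlay so the software can write under /var/log.
sudo singularity build --notest myimage.sif recipe.def
singularity test --writable-tmpfs myimage.sif
```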

I am using singularity 3.8.0 with a docker debian:buster bootstrap.

Best regards,
Sylvain

Installation issue in Macbook Pro M1 MacOS BigSur

Hi,

I am trying to install Singularity on a MacBook Pro M1 running macOS Big Sur 11.4, but every time I run the script below I get an error.
"export VM=sylabs/singularity-3.5-ubuntu-bionic64 &&
vagrant init $VM &&
vagrant up &&
vagrant ssh"

The error message is: There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "d3126691-9ee2-4e61-a838-c0d784eba7c0", "--type", "headless"]

Stderr: VBoxManage: error: The virtual machine 'vm-singularity_default_1622454996768_88432' has terminated unexpectedly during startup with exit code 1 (0x1)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine

I found a lot of suggestions to uninstall and reinstall, which I have already tried, but it is not working. The "Allow" message that is supposed to pop up in System Preferences is not showing either.

Could you please help me? Thank you!

Best regards,
Joyce

Infrequent GC related netpoll fd error after descriptors closed by starter

Version of Singularity:

What version of Singularity are you using? Run:

3.7.x

Actual behavior

It has been reported that a netpoll error panic can occur when running containers that have a large environment, on systems with a high load average. We've also had a report of what appears to be the same issue, triggered very infrequently (on the scale of 1 in a million executions) with a slightly different backtrace (which likely varies with the version of Go used to build Singularity).

Go is a language with a runtime that includes a parallelized garbage collector. The netpoll failure on the fd is related to a GC memory profiling operation.

Late in the container setup process, Singularity must close any file descriptors that are not associated with the container or with the setup process itself. The container environment is sourced by a Go embedded shell interpreter. While the Go runtime could trigger a GC / memory profiling cycle at various times, a large environment makes a GC cycle / GC memory profiling operation more likely during this step.

There appears to be a race between Singularity identifying and closing file descriptors, and Go runtime GC operations that may occur concurrently. The Go GC cycle isn't completely 'stop the world', which perhaps explains why a high load and a large environment are required to trigger the problem: they may delay completion of the starter execution path while some GC work happens in parallel.

The following change has been observed to work around this, by disabling Go runtime GC for the short-lived starter process that closes fds and performs other final container setup:

First, allow the starter to keep environment variables that let us tweak Go runtime GC behavior for the starter process:

diff --git a/cmd/starter/c/starter.c b/cmd/starter/c/starter.c
index a99de873b..678a32e7f 100644
--- a/cmd/starter/c/starter.c
+++ b/cmd/starter/c/starter.c
@@ -975,6 +975,7 @@ static void cleanup_fd(fdlist_t *master, struct starter *starter) {
             if ( starter->fds[i] == fd ) {
                 found = true;
                 /* set force close on exec */
+                debugf("Setting FD_CLOEXEC on starter fd %d\n", starter->fds[i]);
                 if ( fcntl(starter->fds[i], F_SETFD, FD_CLOEXEC) < 0 ) {
                     debugf("Can't set FD_CLOEXEC on file descriptor %d: %s\n", starter->fds[i], strerror(errno));
                 }
@@ -1128,9 +1129,17 @@ static void cleanenv(void) {
     /*
      * keep only SINGULARITY_MESSAGELEVEL for GO runtime, set others to empty
      * string and not NULL (see issue #3703 for why)
+     *
+     * DCT - also keep any GOGC and GODEBUG vars for go runtime
+     * debugging purposes.
      */
     for (e = environ; *e != NULL; e++) {
-        if ( strncmp(MSGLVL_ENV "=", *e, sizeof(MSGLVL_ENV)) != 0 ) {
+        if ( strncmp(MSGLVL_ENV "=", *e, sizeof(MSGLVL_ENV)) == 0 ||
+             strncmp("GOGC" "=", *e, sizeof("GOGC")) == 0 ||
+             strncmp("GODEBUG" "=", *e, sizeof("GODEBUG")) == 0 ) {
+            debugf("Keeping env var %s\n", *e);
+        } else {
+            debugf("Clearing env var %s\n", *e);
             *e = "";
         }
     }

Disable garbage collection altogether for the starter by setting GOGC=off (this is the fix/workaround).

Note that the GODEBUG= line allows finer-grained debugging. In this example it turns off GC memory profiling and instructs Go to print a trace of any GC operations. If GOGC=off is present this shouldn't do anything.

diff --git a/internal/pkg/util/starter/starter.go b/internal/pkg/util/starter/starter.go
index e87abead3..92cca08cf 100644
--- a/internal/pkg/util/starter/starter.go
+++ b/internal/pkg/util/starter/starter.go
@@ -89,6 +89,10 @@ func Exec(name string, config *config.Common, ops ...CommandOp) error {
 	if err := c.init(config, ops...); err != nil {
 		return fmt.Errorf("while initializing starter command: %s", err)
 	}
+	sylog.Debugf("Setting GOGC=off for starter")
+	c.env = append(c.env, "GOGC=off")
+	sylog.Debugf("Setting GODEBUG=memprofilerate=0,gctrace=1 for starter")
+	c.env = append(c.env, "GODEBUG=memprofilerate=0,gctrace=1")
 	err := unix.Exec(c.path, []string{name}, c.env)
 	return fmt.Errorf("while executing %s: %s", c.path, err)
 }

Archives removed from older releases

Version of Singularity
3.2.1

Describe the bug
singularity-3.2.1.tar.gz file has been removed from sylabs/singularity repository.

To Reproduce

SINGULARITY_VERSION=3.2.1
wget -qO- https://github.com/sylabs/singularity/releases/download/v${SINGULARITY_VERSION}/singularity-${SINGULARITY_VERSION}.tar.gz

Expected behavior
I expect the tar.gz file to be downloaded.

OS / Linux Distribution
Ubuntu 20.04

Installation Method
Irrelevant

Additional context
I've been downloading this archive as part of a CI script, but it seems the archives were removed from older releases.

Versioned directory in tarballs

Is your feature request related to a problem? Please describe.
When the Singularity tarball from make dist is extracted, it results in a directory named singularity. This can clash if you extract multiple versions to the same place.

Describe the solution you'd like
The extracted directory name would ideally include the version of singularity.

Describe alternatives you've considered
N/A

Additional context
N/A

Review and correct internal/ vs pkg/ code location

Type of issue
technical debt

Description of issue
Various portions of code in public pkg/ areas are unlikely to remain stable over time at present. This should be worked out so that we can move toward an expectation that 3rd parties can use pkg/ functions with some stability, as a stepping stone toward Go module semantic versioning conformance.

Reworking the remote Command

Is your feature request related to a problem? Please describe.
The remote command configures access to Sylabs cloud services, alternative keyservers, and OCI registries. It is complex, as there is overlap between these targets, plus a concept of priorities, global/exclusive keyservers, etc. This is likely a good area for a comprehensive rework.

Describe the solution you'd like
A comprehensive review of the verbs and command structure. Does it make sense for OCI and Sylabs Cloud/Enterprise configs to be mixed, or does it lead to more confusion?

Describe alternatives you've considered
Documentation improvements may lessen confusion and allow this to be deferred.

Reference to non-existent GH issue during sandbox build

Type of issue
Housekeeping on some build warnings.

Description of issue
When building a container image with --sandbox the following warning is displayed.

WARNING: Permission handling has changed in Singularity 3.5 for improved OCI compatibility
WARNING: The sandbox will contain files/dirs that cannot be removed until permissions are modified
WARNING: Use 'chmod -R u+rwX' to set permissions that allow removal
WARNING: Use the '--fix-perms' option to 'singularity build' to modify permissions at build time
WARNING: You can provide feedback about this change at https://github.com/sylabs/singularity/issues/4671

The last line WARNING: You can provide feedback about this change at https://github.com/sylabs/singularity/issues/4671 references a github issue that is non-existent.

Mounted `sif` containers not unmounted after running.

Version of Singularity

$ singularity --version
singularity version 3.7.3-1.el7

Describe the bug

Singularity does not unmount the sif containers after running. After a while, all loop devices are "used up" and I need to unmount them manually to be able to run again.

To Reproduce

$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
$ singularity run shub://GodloveD/lolcow
INFO:    Use cached image
 ___________________
< Are you a turtle? >
 -------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
$ singularity run shub://GodloveD/lolcow
INFO:    Use cached image
 ________________________________________
/ You will be advanced socially, without \
\ any special effort on your part.       /
 ----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
$ mount
...
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=796564k,mode=700,uid=1003,gid=1003)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/root/.singularity/cache/shub/a59d8de3121579fe9c95ab8af0297c2e3aefd827 on /run/media/root/disk1 type squashfs (ro,nosuid,nodev,relatime,seclabel,uhelper=udisks2)

Expected behavior

The mounted loop devices should be unmounted after the container exits.

OS / Linux Distribution

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="http://cern.ch/linux/"
BUG_REPORT_URL="http://cern.ch/linux/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Installation Method

Installed via yum.

Additional context

Running sandboxes does not seem to cause any problems; only sif container files do.

pushing a new key to a custom remote outputs no keystore url

Version of Singularity:

What version of Singularity are you using? Run:

$ singularity version
3.7.2

Expected behavior

When creating and pushing a new key for a custom remote (keys.domain.tld), the output doesn't report a URL.

singularity key newpair

After filling in all the prompts and pushing to the custom remote the last output line should read.

Key successfully pushed to: keys.domain.tld

Actual behavior

Key successfully pushed to: 

EL8 ppc64le FAIL: TestCgroups

Version of Singularity:

What version of Singularity are you using? Run:

3.7.x

Actual behavior

=== RUN   TestCgroups
        cgroups_linux_test.go:64: open /sys/fs/cgroup/hugetlb/singularity/154890/hugetlb.16MB.limit_in_bytes: permission denied
    --- FAIL: TestCgroups (0.42s)

This 16MB hugetlb was specifically set for RHEL7 ppc64le in apptainer/singularity#5918

However, on EL 8.3 we see the otherwise standard 2MB...

[centos@centos8-ppc64le ~]$ ls /sys/fs/cgroup/hugetlb/singularity/154890/
cgroup.clone_children           hugetlb.2MB.failcnt
cgroup.procs                    hugetlb.2MB.limit_in_bytes
hugetlb.1GB.failcnt             hugetlb.2MB.max_usage_in_bytes
hugetlb.1GB.limit_in_bytes      hugetlb.2MB.usage_in_bytes
hugetlb.1GB.max_usage_in_bytes  notify_on_release
hugetlb.1GB.usage_in_bytes      tasks

FATAL error: repository name must be lowercase

I have installed singularity 3.7.0 and SIF successfully. But when I tried to build a container from a singularity definition file by running "sudo singularity build gedisingularity.sif makesingularity.def", I got a fatal error as below:
FATAL: While performing build: conveyor failed to get: invalid image source: invalid reference format: repository name must be lowercase

The container definition file is written as:
makesingularity.txt
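
This error comes from the Docker image reference in the definition file: Docker repository names must be all lowercase. A minimal definition header that avoids it (the image reference here is illustrative) looks like:

```
# A "From:" value such as "Ubuntu:20.04" or "myrepo/MyImage" would trigger
# "repository name must be lowercase".
Bootstrap: docker
From: ubuntu:20.04
```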

Password to run "shell" command

Hi there,
I was wondering if anyone can help. I have created a singularity image with custom scripts inside that I would like to protect from being read.

Is there a way I can exec the singularity image without any password, BUT require a decryption password just for accessing the image's contents through the "shell" command?

Many thanks!

Bump containerd via oras migration to oras-go

Type of issue
tech debt / dependencies

Description of issue

As discussed on Slack...

David Trudgian (Sylabs) 2:20 PM
Hey all - Singularity is currently stuck on a 1.5.0-beta.4 beta version of github.com/containerd/containerd because of the web of dependencies. The critical dep in order to get back to non-beta containerd is oras. Unfortunately oras is in the middle of splitting out library code from CLI code... I'm going to PR, but just noting here Singularity will have an oras dep that states the following for a bit:
This project is currently under active development. The API may and will change incompatibly from one commit to another.
https://github.com/oras-project/oras-go (edited)

'Docker-like' mode

Is your feature request related to a problem? Please describe.
Users may run docker:// containers, be surprised when the lack of $HOME isolation etc. impacts behavior, and not understand which flags are needed to resolve issues.

Describe the solution you'd like
By default, SingularityCE runs containers far less isolated from the host than Docker does, relying instead on system restrictions on the user. This is very convenient for traditional HPC-like jobs, but some Docker containers can conflict with files and other content that enters the container from the host. We have a number of flags, such as --contain, to work around this, but it’s often unclear which are needed. A shortcut applying the most ‘Docker-like’ yet practical configuration would be useful.

Describe alternatives you've considered
Documentation improvements could help by providing better guidance, but a single flag for Docker-like behavior is a more accessible solution.
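Pending a dedicated flag, the closest approximation today combines several existing flags; the exact set is debatable, which is itself the argument for a shortcut. A sketch:

```sh
# Approximate 'docker-like' behavior with existing flags:
# --containall avoids binding $HOME and /tmp and passing host environment,
# --writable-tmpfs allows writes to the container filesystem.
singularity run --containall --writable-tmpfs docker://python:3.9
```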

-g option to specify go compiler

Is your feature request related to a problem? Please describe.

Allow the Go compiler to be specified as an mconfig argument, for instance via a -g option.

Describe the solution you'd like

We have a module environment with many compilers and versions. The latest Go is go-11 from GCC 11.1.0. It would be nice if I could pass -g go-11 as an argument to mconfig.

Describe alternatives you've considered
I tried, for instance, creating an alias go='go-11', but that didn't work. I also tried an absolute path.

Additional context

torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ module load gcc/11.1.0
torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ gcc-11 -v
Using built-in specs.
COLLECT_GCC=gcc-11
COLLECT_LTO_WRAPPER=/cm/shared/apps/gcc/11.1.0/usr/lib/gcc/x86_64-linux-gnu/11/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
Target: x86_64-linux-gnu
Configured with: ../configure --prefix=/cm/shared/apps/gcc/11.1.0 --enable-languages=c,ada,c++,go,brig,fortran,objc,obj-c++ --with-gmp=/cm/shared/apps/gmp/gcc/6.2.1 --with-mpc=/cm/shared/apps/mpc/gcc/1.2.1 --with-mpfr=/cm/shared/apps/mpfr/gcc/4.1.0 --with-gcc-major-version-only --program-suffix=-11 --enable-shared --enable-linker-build-id --libexecdir=/cm/shared/apps/gcc/11.1.0/usr/lib --without-included-gettext --enable-threads=posix --libdir=/cm/shared/apps/gcc/11.1.0/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 11.1.0 (GCC)
torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ g++-11 -v
Using built-in specs.
COLLECT_GCC=g++-11
COLLECT_LTO_WRAPPER=/cm/shared/apps/gcc/11.1.0/usr/lib/gcc/x86_64-linux-gnu/11/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
Target: x86_64-linux-gnu
Configured with: ../configure --prefix=/cm/shared/apps/gcc/11.1.0 --enable-languages=c,ada,c++,go,brig,fortran,objc,obj-c++ --with-gmp=/cm/shared/apps/gmp/gcc/6.2.1 --with-mpc=/cm/shared/apps/mpc/gcc/1.2.1 --with-mpfr=/cm/shared/apps/mpfr/gcc/4.1.0 --with-gcc-major-version-only --program-suffix=-11 --enable-shared --enable-linker-build-id --libexecdir=/cm/shared/apps/gcc/11.1.0/usr/lib --without-included-gettext --enable-threads=posix --libdir=/cm/shared/apps/gcc/11.1.0/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 11.1.0 (GCC)

torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ go-11 version
go version go1.16.3 gccgo (GCC) 11.1.0 linux/amd64

torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ ./mconfig -c gcc-11 -x g++-11 -b ./build-x86_64 -p /cm/shared/apps/singularity-ce/3.8.0
Configuring for project `singularity-ce' with languages: C, Golang
=> running pre-basechecks project specific checks ...
=> running base system checks ...
checking: host C compiler... gcc-11
checking: host C++ compiler... g++-11
checking: host Go compiler (at least version 1.13)... not found!
mconfig: could not complete configuration
torel@srl-login1:~/workspace/Singularity/singularity-ce-3.8.0$ go-11 version
go version go1.16.3 gccgo (GCC) 11.1.0 linux/amd64
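A note on the alias attempt above: aliases are only expanded by interactive shells, so mconfig's compiler probe never sees them; it appears to look for a binary named go on $PATH. Until a -g option exists, one possible workaround is a shim directory (a sketch; paths hypothetical, and gccgo's go front end may still fail mconfig's version check for other reasons):

```sh
mkdir -p "$HOME/bin-shim"
ln -sf "$(command -v go-11)" "$HOME/bin-shim/go"
export PATH="$HOME/bin-shim:$PATH"
./mconfig -c gcc-11 -x g++-11 -b ./build-x86_64
```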

Perform build argument sanity checking

Version of Singularity:

What version of Singularity are you using?

3.7.2

Expected behavior

What did you expect to see when you do...?

Error out early if the IMAGE PATH given to singularity build --remote contains an invalid scheme in its URI. For example, libary://... (instead of library://...).

Actual behavior

What actually happened? Why was it incorrect?

The (remote) build was performed, and only once it completed was an error displayed saying that IMAGE PATH is invalid.

Steps to reproduce this behavior

singularity build -r libary://entity/collection/container:tag

Note the error in the URI scheme.
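An early sanity check could reject unknown schemes client-side, before any remote build is submitted. A minimal Go sketch (the scheme list and function names are illustrative, not the actual SingularityCE code):

```go
package main

import (
	"fmt"
	"strings"
)

// knownSchemes lists URI schemes the builder understands; illustrative only.
var knownSchemes = []string{"library", "docker", "oras", "shub"}

// validateDest returns an error when dest carries a URI scheme that is
// not in knownSchemes. Plain file paths (no "://") pass unchecked.
func validateDest(dest string) error {
	i := strings.Index(dest, "://")
	if i < 0 {
		return nil // plain file path, nothing to validate
	}
	scheme := dest[:i]
	for _, s := range knownSchemes {
		if s == scheme {
			return nil
		}
	}
	return fmt.Errorf("unknown URI scheme %q in %q", scheme, dest)
}

func main() {
	fmt.Println(validateDest("libary://entity/collection/container:tag"))
	fmt.Println(validateDest("library://entity/collection/container:tag"))
}
```

With this in place, the libary:// typo above would be rejected immediately instead of after the build completes.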

ERROR : No more available loop devices, try increasing 'max loop devices' in singularity.conf

Hi,
I have the problem described in the title when I try to run an image. I tried to find a solution via Google, Bing, and Baidu, and failed. Have you ever encountered the same problem? If so, how did you solve it?

  • singularity version: 2.6.1-dist
  • OS version: Ubuntu 20.04.2 LTS in Windows 10 WSL
  • pull image: singularity pull --name hello.simg shub://vsoch/hello-world
  • run image: singularity shell hello.simg

Thanks so much.
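For reference, the limit named in the error is the max loop devices directive in singularity.conf; raising it is a one-line change (256 shown as an illustrative value):

```
# singularity.conf
max loop devices = 256
```

Note that if this is WSL 1, the kernel may not provide loop devices at all, in which case no setting will help.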

Remove Darwin Support

The SingularityCE code base currently has partial support for Darwin (macOS), aimed at supporting Singularity Desktop (Singularity on macOS via virtualization). Although Singularity Desktop is no longer maintained, we continue to build, test, and maintain Darwin code in the SingularityCE code base.

It is, of course, possible to run Singularity on Darwin via Vagrant, as documented in the SingularityCE Admin Guide. A notable exception is Macs built on Apple Silicon (#66), but since Singularity Desktop was never supported on that platform either, this is not a loss of support but a continued absence of it.

Given the above, I believe we should consider removing Darwin support from the code base.

cgroups v2 support

Is your feature request related to a problem? Please describe.

SingularityCE will not work completely out of the box (OCI mode / cgroups settings) on distributions that are using the cgroups v2 hierarchy.

Fedora is using cgroups v2, as is Debian bullseye (the next release, currently in testing). We can reasonably expect the next Ubuntu LTS and the next EL release to do so as well.

Describe the solution you'd like

SingularityCE should support the cgroups v2 hierarchy.

Linter Version Check is Broken

Describe the bug

When running make check using golangci/golangci-lint:v1.40.1, a version difference is reported even though the correct version is installed.

To Reproduce

Run make check inside a golangci/golangci-lint:v1.40.1 container (example):

$ make -C ./builddir check
...
 CHECK golangci-lint
cd /root/project && \
	scripts/run-linter run --verbose --build-tags "containers_image_openpgp sylog oci_engine singularity_engine fakeroot_engine apparmor selinux seccomp" ./...
W: 
W: ********************************************************************************************
W: Singularity's CI checks will use version 1.40.1 of golangci-lint.
W: Your installed version differs. Please use 1.40.1 if you experience issues.
I: 
I: The output of "golangci-lint version" is:
I: 
I:     golangci-lint has version v1.40.1 built from 625445b1 on 2021-05-14T11:59:47Z
I: 
I: It was found in the following location:
I: 
I:     /usr/bin/golangci-lint
I: ********************************************************************************************
I: 
INFO [config_reader] Config search paths: [./ /root/project /root /]
...

Expected behavior

make check runs without warnings when the correct version of golangci-lint is installed.

Installation Method

Source

Additional context

The bug does not occur when golangci-lint is installed locally using the install script.
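One plausible cause, given the output above, is a strict string comparison: this build of golangci-lint reports its version as "v1.40.1" (with a leading v and commit details), while the script expects "1.40.1". Normalizing before comparing would cover that case; a Go sketch of the idea (names illustrative; the actual run-linter script is shell):

```go
package main

import (
	"fmt"
	"strings"
)

// sameVersion compares two version strings, ignoring an optional
// leading "v", so "v1.40.1" and "1.40.1" are treated as equal.
func sameVersion(a, b string) bool {
	return strings.TrimPrefix(a, "v") == strings.TrimPrefix(b, "v")
}

func main() {
	fmt.Println(sameVersion("v1.40.1", "1.40.1")) // true
}
```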

HTTP Support for Library

SingularityCE currently requires that library remotes use TLS termination. While this is a good default, it makes it more difficult for users to stand up their own library service, such as Hinkskalle. In fact, that project actually documents a process to patch the source code to allow HTTP. We should add an option letting users opt into a non-TLS remote, as is possible via --nohttps with docker:// URIs.
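The opt-in could mirror the docker:// handling: default to TLS, and switch scheme only when the user explicitly asks. A trivial Go sketch of the idea (names and hostname illustrative):

```go
package main

import "fmt"

// libraryBaseURL builds the library service URL, defaulting to TLS and
// only using plain HTTP when the caller explicitly opts in, mirroring
// the spirit of --nohttps for docker:// URIs.
func libraryBaseURL(host string, insecure bool) string {
	scheme := "https"
	if insecure {
		scheme = "http"
	}
	return fmt.Sprintf("%s://%s", scheme, host)
}

func main() {
	fmt.Println(libraryBaseURL("hinkskalle.example.org", true))
}
```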

Mellanox IB/OFED Library Discovery & Binding

Is your feature request related to a problem? Please describe.
When running a multi-node application that uses InfiniBand networking, the user is currently responsible for ensuring that the required libraries are present in the container, or bound in from the host. This requires knowledge of libraries below the application level that is not commonly needed outside of a container, as an HPC admin will have made the libraries available by default.

Describe the solution you'd like
We should be able to discover the required libraries on the host, for automatic bind-in when the container distribution is compatible.

Describe alternatives you've considered
Documentation improvements could assist, but library paths vary between systems, so it is difficult to give a simple example that will almost always work.
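Discovery could start from the host's linker cache: ldconfig -p prints every known library with its resolved path. A Go sketch that filters such output for candidate IB/OFED sonames (the prefix list is illustrative, and this covers discovery only, not the bind logic):

```go
package main

import (
	"fmt"
	"strings"
)

// findLibraries scans `ldconfig -p`-style output and returns the paths
// of entries whose soname starts with one of the given prefixes.
// Lines look like:
//   "\tlibmlx5.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libmlx5.so.1"
func findLibraries(ldconfigOut string, prefixes []string) []string {
	var paths []string
	for _, line := range strings.Split(ldconfigOut, "\n") {
		parts := strings.SplitN(line, " => ", 2)
		if len(parts) != 2 {
			continue
		}
		name := strings.TrimSpace(strings.SplitN(parts[0], " ", 2)[0])
		for _, p := range prefixes {
			if strings.HasPrefix(name, p) {
				paths = append(paths, strings.TrimSpace(parts[1]))
			}
		}
	}
	return paths
}

func main() {
	sample := "\tlibmlx5.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libmlx5.so.1\n" +
		"\tlibc.so.6 (libc6,x86-64) => /lib/x86_64-linux-gnu/libc.so.6"
	fmt.Println(findLibraries(sample, []string{"libmlx5", "libibverbs"}))
}
```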

Replace Deprecated UUID Package

The github.com/satori/go.uuid module used by this project does not appear to be actively maintained (ref).

We should consider switching to the github.com/gofrs/uuid package, or some other suitable alternative.
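github.com/gofrs/uuid is an actively maintained fork of satori/go.uuid. For illustration of what such a switch has to preserve, a random version-4 UUID is just 16 random bytes with the version and variant bits set; a stdlib-only Go sketch (not a substitute for the library):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newV4 generates an RFC 4122 version-4 UUID from 16 random bytes,
// setting the version (4) and variant (10xx) bits as the spec requires.
func newV4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // variant RFC 4122
	return fmt.Sprintf("%x-%x-%x-%x-%x",
		b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

func main() {
	u, _ := newV4()
	fmt.Println(u)
}
```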
