
ansible-role-libvirt-vm's Introduction

Libvirt VM

This role configures and creates (or destroys) VMs on a KVM hypervisor.

Requirements

The host should have Virtualization Technology (VT) enabled and should be preconfigured with libvirt/KVM.

Role Variables

  • libvirt_vm_default_console_log_dir: The default directory in which to store VM console logs, if a VM-specific log file path is not given. Default is "/var/log/libvirt-consoles".

  • libvirt_vm_default_uuid_deterministic: Whether the UUID should be calculated by hashing the VM name. If not, the UUID is randomly generated by libvirt when the VM is defined. Default is False (see the sketch below).

  • libvirt_vm_image_cache_path: The directory in which to cache downloaded images. Default is "/tmp/".

  • libvirt_volume_default_images_path: Directory in which instance images are stored. Default is '/var/lib/libvirt/images'.

  • libvirt_volume_default_type: The default backing volume type used by instances. Default is volume. Options include block, file, network and volume.

  • libvirt_volume_default_format: Format for volumes created by the role. Default is qcow2. Options include raw, qcow2, vmdk. See man virsh for the full range.

  • libvirt_volume_default_device: Control how the device appears in the guest OS. Defaults to disk. Options include cdrom and disk.

  • libvirt_vm_engine: Virtualisation engine. If not set, the role will attempt to auto-detect the optimal engine to use.

  • libvirt_vm_emulator: Path to the emulator binary. If not set, the role will attempt to auto-detect the correct emulator to use.

  • libvirt_cpu_mode_default: The default CPU mode if libvirt_cpu_mode or vm.cpu_mode is undefined.

  • libvirt_vm_arch: CPU architecture, default is x86_64.

  • libvirt_vm_uri: Override the libvirt connection URI. See the libvirt docs for more details.

  • libvirt_vm_virsh_default_env: Variables contained within this dictionary are added to the environment used when executing virsh commands.

  • libvirt_vm_clock_offset: If defined, the instance's clock offset is set to the provided value. When undefined, sync is set to localtime.

  • libvirt_vm_trust_guest_rx_filters: Whether to trust guest receive filters. This gets mapped to the trustGuestRxFilters attribute of VM interfaces. Default is false.

  • libvirt_vms: list of VMs to be created/destroyed. Each one may have the following attributes:

    • state: set to present to create or absent to destroy the VM. Defaults to present.

    • name: the name to assign to the VM.

    • uuid: the UUID to manually assign to the VM. If specified, neither uuid_deterministic nor libvirt_vm_default_uuid_deterministic are used.

    • uuid_deterministic: Overrides the default set in libvirt_vm_default_uuid_deterministic.

    • memory_mb: the memory to assign to the VM, in megabytes.

    • vcpus: the number of VCPU cores to assign to the VM.

    • machine: Virtual machine type. Default is None if libvirt_vm_engine is kvm, otherwise pc-1.0.

    • cpu_mode: Virtual machine CPU mode. Default is host-passthrough if libvirt_vm_engine is kvm, otherwise host-model. Can be set to none to not configure a cpu mode.

    • clock_offset: Overrides the default set in libvirt_vm_clock_offset.

    • enable_vnc: If true, enables VNC listening on localhost for use with VirtManager and similar tools.

    • enable_spice: If true, enables SPICE listening for use with Virtual Machine Manager and similar tools.

    • enable_guest_virtio: If true, enables the guest virtio device for use with the QEMU guest agent.

    • volumes: a list of volumes to attach to the VM. Each volume is defined with the following dict:

      • type: The backing volume type for this volume. All options for libvirt_volume_default_type are valid here. Default is libvirt_volume_default_type.
      • pool: Name or UUID of the storage pool from which the volume should be allocated. Required when type is volume.
      • name: Name to associate with the volume being created; for file type volumes, include the extension if you would like volumes created with one.
      • file_path: Where the image of file type volumes should be placed; defaults to libvirt_volume_default_images_path
      • device: Control how the device appears in the guest OS. All options for libvirt_volume_default_device are valid here. Default is libvirt_volume_default_device.
      • capacity: Volume capacity; can be suffixed with k, M, G, T, P or E when type is network, or MB, GB, TB, etc. when type is disk (required when type is disk or network).
      • auth: Authentication details should they be required. If auth is required, username, type, and uuid or usage will need to be supplied. uuid and usage should not both be supplied.
      • source: Where the remote volume comes from when type is network. protocol, name and hosts_list should be supplied. port is optional.
      • format: Format of the volume. All options for libvirt_volume_default_format are valid here. Default is libvirt_volume_default_format.
      • image: (optional) a URL to an image with which the volume is initialised (full copy).
      • checksum: (optional) checksum of the image to avoid download when it's not necessary.
      • backing_image: (optional) name of the backing volume, which is assumed to already be in the same pool (copy-on-write).
      • image and backing_image are mutually exclusive options.
      • target: (optional) Manually influence type and order of volumes
      • dev: (optional) Block device path when type is block.
      • remote_src: (optional) When type is file or block, specifies whether image points to a remote file (true) or a file local to the host that launched the playbook (false). Defaults to true.
    • usb_devices: a list of usb devices to present to the vm from the host.

      Each usb device is defined with the following dict:

      • vendor: The vendor id of the USB device.
      • product: The product id of the USB device.

      Note - Libvirt will error if the VM is provisioned and the USB device is not attached.

      To obtain the vendor id and product id of the USB device, run lsusb -v as root (or via sudo) on the host with the device plugged in. Example below with an attached SanDisk USB memory stick with vendor id 0x0781 and product id 0x5567:

      lsusb -v | grep -A4 -i sandisk
      
        idVendor           0x0781 SanDisk Corp.
        idProduct          0x5567 Cruzer Blade
        bcdDevice            1.00
        iManufacturer           1 
        iProduct                2 
      
    • interfaces: a list of network interfaces to attach to the VM. Each network interface is defined with the following dict:

      • type: The type of the interface. Possible values:

        • network: Attaches the interface to a named Libvirt virtual network. This is the default value.
        • direct: Directly attaches the interface to one of the host's physical interfaces, using the macvtap driver.
      • network: Name of the network to which an interface should be attached. Must be specified if and only if the interface type is network.

      • mac: "Hardware" address of the virtual instance; if absent, one is created.

      • source: A dict defining the host interface to which this VM interface should be attached. Must be specified if and only if the interface type is direct. Includes the following attributes:

        • dev: The name of the host interface to which this VM interface should be attached.
        • mode: options include vepa, bridge, private and passthrough. See man virsh for more details. Default is vepa.
      • trust_guest_rx_filters: Whether to trust guest receive filters. This gets mapped to the trustGuestRxFilters attribute of VM interfaces. Default is libvirt_vm_trust_guest_rx_filters.

      • model: The name of the interface model, e.g. e1000 or ne2k_pci. If undefined, it defaults to virtio.

      • alias: An optional interface alias. This can be used to tie specific network configuration to persistent network devices via name. The user-defined alias is always prefixed with ua- to be compliant (aliases without ua- are ignored by libvirt). If undefined, it defaults to the libvirt-managed vnetX.

    • console_log_enabled: if true, log console output to a file at the path specified by console_log_path, instead of to a PTY. If false, direct terminal output to a PTY at serial port 0. Default is false.

    • console_log_path: Path to console log file. Default is {{ libvirt_vm_default_console_log_dir }}/{{ name }}-console.log.

    • start: Whether to immediately start the VM after defining it. Default is true.

    • autostart: Whether to start the VM when the host starts up. Default is true.

    • boot_firmware: Can be one of bios or efi. Defaults to bios.

    • xml_file: Optionally supply a modified XML template. Base customisation off the default vm.xml.j2 template so as to include the expected jinja expressions the role uses.

N.B. the following variables are deprecated: libvirt_vm_state, libvirt_vm_name, libvirt_vm_memory_mb, libvirt_vm_vcpus, libvirt_vm_engine, libvirt_vm_machine, libvirt_vm_cpu_mode, libvirt_vm_volumes, libvirt_vm_interfaces and libvirt_vm_console_log_path. If the variable libvirt_vms is left unset, its default value will be a singleton list containing a VM specification using these deprecated variables.
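A minimal sketch combining some of the variables above (the VM name, pool name and values are illustrative, not defaults):

---
libvirt_vm_default_console_log_dir: "/var/log/libvirt-consoles"
libvirt_vm_default_uuid_deterministic: True
libvirt_vms:
  - name: 'example'
    memory_mb: 512
    vcpus: 1
    console_log_enabled: true
    volumes:
      - name: 'example-root'
        pool: 'default'
        capacity: '10GB'
    interfaces:
      - network: 'default'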

Dependencies

If using qcow2 format drives, qemu-img (in the qemu-utils package) is required.
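For example, on Debian/Ubuntu hosts (the package name differs on other distributions):

    apt-get install qemu-utils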

Example Playbook

---
- name: Create VMs
  hosts: hypervisor
  roles:
    - role: stackhpc.libvirt-vm
      libvirt_vms:
        - state: present
          name: 'vm1'
          memory_mb: 512
          vcpus: 2
          volumes:
            - name: 'data1'
              device: 'disk'
              format: 'qcow2'
              capacity: '400GB'
              pool: 'my-pool'
            - name: 'debian-10.2.0-amd64-netinst.iso'
              type: 'file'
              device: 'cdrom'
              format: 'raw'
              target: 'hda'  # first device on ide bus
            - name: 'networkfs'
              type: 'network'
              format: 'raw'
              capacity: '50G'
              auth:
                username: 'admin'
                type: 'ceph'
                usage: 'rbd-pool'
              source:
                protocol: 'rbd'
                name: 'rbd/volume'
                hosts_list:
                  - 'mon1.example.org'
                  - 'mon2.example.org'
                  - 'mon3.example.org'
            - type: 'block'
              format: 'raw'
              dev: '/dev/sda'

          interfaces:
            - network: 'br-datacentre'
          
          usb_devices:
            - vendor: '0x0781'
              product: '0x5567'

        - state: present
          name: 'vm2'
          memory_mb: 1024
          vcpus: 1
          volumes:
            - name: 'data2'
              device: 'disk'
              format: 'qcow2'
              capacity: '200GB'
              pool: 'my-pool'
            - name: 'filestore'
              type: 'file'
              file_path: '/srv/cloud/images'
              capacity: '900GB'
          interfaces:
            - type: 'direct'
              source:
                dev: 'eth123'
                mode: 'private'
            - type: 'bridge'
              source:
                dev: 'br-datacentre'

Author Information

ansible-role-libvirt-vm's People

Contributors

brtkwr, gavinwill, goetzk, gprocunier, jlsm-se, jovial, jpvriel, jpvriel-sb, laddp, markgoddard, michaeltchapman, mtb-xt, nickbroon, oneswig, priteau, raspbeguy, roumano, solacelost, stackhpc-ci, t2d, thomasschlien, thomwiggers, w-miller


ansible-role-libvirt-vm's Issues

Permissions of the Vm volume file

Hi,

When I tried to create a VM from a qcow image with the "image:" directive, the step "Ensure the VM is running and started at boot" failed with this error:

2019-03-31T09:39:26.332744Z qemu-system-x86_64: -drive file=/way_of_the_qcow/Test,format=qcow2,if=none,id=drive-virtio-disk0: Could not open '/way_of_the_qcow/Test': Permission denied

I solved it by adding this section to the task file vm.yml:

  - name: Permissions VM
    file:
      path: /PATH_TO_THE_VM/{{ volumes | selectattr('name', 'defined') | map(attribute='name') | list | join('\n') }}
      owner: libvirt-qemu
      group: libvirt-qemu
      mode: 0644

Regards,

VM created but it won't boot

Hi,
I am having a problem using this role correctly: when I create a VM instance, I am unable to actually get it to boot. I checked the XML, and the role seems to create storage in a completely different way, which might be OK, but I noticed three things:

  1. It won't boot: permission error on storage, created as pool instead of source file=. If manually fixed, I then hit the next problem.
  2. No media is created by the role, and it won't boot. If I manually fix the XML, then:
  3. the vnc part is set to /usr/bin/kvm and not to /usr/bin/kvm-spice.

Anyway, deleting the whole instance while keeping the storage created by the role, I am able to manually create an instance and boot with no problems. I am using just the basic example provided in the README to create a VM.

Can't get volumes to work

    - role: libvirt_host
      libvirt_host_pools:
        - name: "default"
          type: "dir"
          path: "/var/lib/libvirt/pools/default"

    - role: libvirt_vms
      libvirt_vms:
        - state: "present"
          name: "storage1"
          vcpus: 4
          memory_mb: 24576
          volumes:
            - type: "file"
              pool: "default"
              name: "storage_root"
              capacity: "16GB"

            - name: "debian-11.6.0-amd64-netinst.iso"
              device: "cdrom"
              type: "file"
              source: "/var/lib/libvirt/images/debian-11.6.0-amd64-netinst.iso"
              format: "raw"
              target: "hda"
          interfaces:
            - type: "bridge"
              source:
                dev: "eno1"
    This produces the following domain XML:

    <disk type='dir' device='disk'>
      <driver name='qemu' type='qcow2' />
      <source pool='default' volume='storage_root'/>
      <target dev='vda' />
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' />
      <source file='/var/lib/libvirt/images/debian-11.6.0-amd64-netinst'/>
      <target dev='hda' bus='sata'/>
    </disk>

Notice

  • volume disk type "dir" results in qcow2, pool, volume.
  • volume disk type "file" results in the image name being cut off: debian-11.6.0-amd64-netinst.iso -> debian-11.6.0-amd64-netinst

I'm not sure what to do here.

How is the hostname / machine-id expected to be set using this role?

I'm in the middle of moving to ansible for all my machine provisioning. After successfully integrating your libvirt-host role, I got to this one.

However, after inspecting the code, I don't see how this role takes care of making the newly provisioned VMs unique in terms of hostname and machine-id. I don't know if that's all I need, but at least these two are all but mandatory in order for a machine to even work.

Should I include another role that takes care of this or does this one do it in a way I can't see?

Domain xml template doesn't include guest agent channel

Suggestion: Include guest channel when appropriate?

Noticed from some testing that vm.xml.j2 differs from a virt-install --os-variant rhel7 installation in that the template does not include the guest agent channel, whereas virt-install creates a domain definition which effectively has:

<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
</channel>

In addition, some virtio serial controller and channel options are not set.

Full example for virt-install command:

virt-install --name test --memory 2048 --vcpus 4 --disk size=512 --network none --accelerate --location /var/lib/libvirt/boot/rhel7.6.iso --os-variant rhel7 --nographics --extra-args "ks=cdrom:/ks.cfg console=tty0 console=ttyS0,115200n8"

Tested on Red Hat 7.6.
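For what it's worth, the role's documented enable_guest_virtio flag (see Role Variables above) is aimed at this; a minimal sketch, assuming it renders a guest agent channel like the one shown above:

    libvirt_vms:
      - name: 'test'
        memory_mb: 2048
        vcpus: 4
        enable_guest_virtio: true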

Failed to use this role with LVM

I've failed to use this role with LVM (at least on Debian 10).

Error

If the data to create the VM is:

  volumes:
    - name: 'ns2'
      type: 'volume'
      capacity: '10GB'
      pool: 'lvm_pool'
      format: 'raw'

The role crashes with this error:

    "msg": "internal error: qemu unexpectedly closed the monitor: 2020-09-11T12:51:05.508705Z qemu-system-x86_64: -drive file=/dev/lvm_pool/ns3,format=raw,if=none,id=drive-virtio-disk0: Could not open '/dev/lvm_pool/ns3': Permission denied"

Debugging

  • Debugging: virsh dumpxml <nodename> gives me:
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='lvm_pool' volume='ns3'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  • Update the node via virsh edit <node>
    • replace <disk type='volume' device='disk'> with <disk type='block' device='disk'>
    • replace <source pool='lvm_pool' volume='ns3'/> with <source dev='/dev/lvm_pool/ns3'/>

It can now boot.

Change needed

To permit these two line changes (done via virsh edit) through Ansible and your role:

  • Need to add this to the template vm.xml.j2:
      {% elif volume.pool is defined %}
      <source dev='/dev/{{ volume.pool }}/{{ volume.name }}'/>
  • Need to change type from volume to block, so my variables become:
  volumes:
    - name: 'ns2'
      type: 'block'
      capacity: '10GB'
      pool: 'lvm_pool'
      format: 'raw'
  • As it's now a block volume, the task Ensure the VM volumes exist would be skipped (due to its when condition), so the volumes.yml condition (line 30) needs updating

from: when: item.type | default(libvirt_volume_default_type) == 'volume'
to: when: item.type | default(libvirt_volume_default_type) == 'volume' or ( item.type | default(libvirt_volume_default_type) == 'block' and item.pool is defined )

  • It would also be good to update the README with an LVM example (see the sketch below).
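Such an LVM example might look like the following (a sketch based on the change proposed above; the pool name lvm_pool is illustrative):

    libvirt_vms:
      - state: present
        name: 'ns2'
        memory_mb: 1024
        vcpus: 1
        volumes:
          - name: 'ns2'
            type: 'block'
            capacity: '10GB'
            pool: 'lvm_pool'
            format: 'raw'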

Changing definition of VM does not seem to update VM

I've changed the definition of my VM, e.g. the amount of RAM, but the "Ensure the VM is defined" task doesn't do anything.

I'm running Debian 10 with KVM. libvirt-python is version 5.0. I've checked using virsh define updated.xml. updated.xml was created via virsh dumpxml > updated.xml and then changing the number of CPU cores. If I then do virsh dumpxml --inactive | grep vcpu I see the new value.

But if I try to make the same change via this Ansible module, nothing happens.
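For reference, the manual workflow described above, as commands (the domain name mydomain is illustrative):

    virsh dumpxml mydomain > updated.xml
    # edit the vcpu count in updated.xml, then:
    virsh define updated.xml
    virsh dumpxml --inactive mydomain | grep vcpu   # shows the new value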

When creating a volume with format == raw, the resize step will fail.

Can reproduce with:

(venv) [verne@kef1c-cde-ucd0001 vm]$ sudo virsh vol-create-as --pool libvirt-storage --name test --capacity 1GiB --format raw
Vol test created

(venv) [verne@kef1c-cde-ucd0001 vm]$ sudo virsh vol-resize --pool libvirt-storage --vol test --capacity 1GiB
error: Failed to change size of volume 'test' to 1GiB
error: invalid argument: can't shrink capacity below existing allocation

(venv) [verne@kef1c-cde-ucd0001 vm]$ sudo virsh vol-resize --pool libvirt-storage --vol test --capacity 2GiB
Size of volume 'test' successfully changed to 2GiB

`state: absent` should first destroy VMs before attempting to remove volumes

While using an LVM based storage pool, I noticed Failed to delete volume errors. libvirt seems to tolerate removing qcow2 storage before destroying the VM, but LVM logical volumes cannot be removed before the VM has been stopped (destroyed).

A simple fix is to switch the order of included tasks in tasks/main.yml.

(I hope to submit a PR with some enhancements in due course)
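A sketch of the suggested reordering in tasks/main.yml (file names taken from those mentioned elsewhere on this page; the real layout may differ):

    # Destroy the domain before touching its volumes, so LVM logical
    # volumes are only removed once the VM has been stopped:
    - include_tasks: vm.yml
    - include_tasks: volumes.yml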

XML_template should be XML_file in README

Hi,

The README mentions that the custom xml template can be specified using xml_template as the variable. However, in defaults/main.yml the template is defined by xml_file, so it should be accessible via vm.xml_file for a particular VM, which is how it is referenced in tasks/vm.xml.

Thanks

CDROM devices backed by volumes broken in v1.8.0

PR #37 added support for booting from a CDROM. It also made volume creation dependent upon the device being disk. However, this breaks CDROM devices which use a volume, e.g.

    <disk type='volume' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source pool='default' volume='seed-configdrive'/>
      <target dev='vdc' bus='virtio'/>
      <readonly/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>

howto pass "extra-args" to a VM

Sorry for opening an "issue"; it's more of a question (I searched for quite a while but couldn't figure it out):

Is there any way to create a VM with this role and pass "extra-args" for kickstart to it? Like you would with

virt-install -x "ks=URL ksdevice=eth0 ip=IP netmask=IP dns=IP gateway=IP"

Thanks for any answer!

Torsten

Install VM with virt-builder

Hello,

I open this issue because I wish to implement a way to install VMs with virt-builder (from the libguestfs project).

I want to list here everything we need to think about, along with useful information.

  • virt-builder can set up some basic settings on the VM OS, like the hostname and the root password. It can also copy a file to the desired location on the VM volume.
  • virt-builder downloads (from a libguestfs-maintained repository or a custom one) a template of a given OS (containing partitions) and deploys it to a file (raw or qcow) or a block device, with the ability to automatically expand the partitions to fit the given device.
  • Sadly, virt-builder can't yet use qemu storage backends to access target storage, so you have to map it to a block device in order to deploy on it using virt-builder. For instance, for RBD, you have to map it using RADOS kernel module or RBD-NBD implementation.

Do you think this is a good idea?
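For reference, a typical virt-builder invocation of the kind proposed here (the OS template name, password and paths are illustrative):

    virt-builder debian-10 \
      --hostname vm1.example.org \
      --root-password password:changeme \
      --format qcow2 \
      --output /var/lib/libvirt/images/vm1.qcow2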

Network interface driver should be tunable

There are legitimate reasons not to use virtio as the default driver for network interfaces, e.g.:

  • Windows guests (native boot).
  • Linux guests used in a network simulator which need mii-mon (e1000 supports it).

Modifying this seems trivial:

https://github.com/stackhpc/ansible-role-libvirt-vm/blob/master/templates/vm.xml.j2#L82

to be

{% if interface.driver is defined %}
      <model type='{{ interface.driver }}'/>
{% else %}
      <model type='virtio'/>
{% endif %}
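Note that the role's documented model attribute for interfaces (see Role Variables above) covers this use case; a minimal sketch:

    interfaces:
      - network: 'default'
        model: 'e1000'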

Using libvirt modules

More of a question than an issue: why aren't you using only libvirt's Ansible modules (especially for volumes)?
My first role, before migrating to this one, used only those and did not require all that bash glue.
It would make the entire thing more standard and easier to maintain.

Changing role variables does nothing to existing domains

Hello,

I wish there were a way to edit a domain property just by editing the role variables. For now, editing the variables does not affect domains that are already defined.

For instance, for a domain that has 1024 MB of memory, if I change memory_mb to 2048 and then execute the role, nothing happens. Same for adding a disk.

Would it be reasonable to fix this behavior?

Deleting VM fails if volume-type is "disk"

You get

The error was: 'dict object' has no attribute 'pool'
The error appears to be in '/home/thom/.ansible/roles/stackhpc.libvirt-vm/tasks/destroy-volumes.yml': line 2, column 3

graphics spice

Personally, I'm using qemu/kvm with Virtual Machine Manager 2.0.0.
I'm connecting to the qemu/kvm server with qemu+ssh://root@server/system.

To see the console via these tools, I need to enable SPICE graphics on the VM.
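The role's documented enable_spice flag (see Role Variables above) addresses this; a minimal sketch:

    libvirt_vms:
      - name: 'vm1'
        memory_mb: 1024
        vcpus: 1
        enable_spice: true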

image vs backing_image -> how can I use image that's already in the pool, but not as backing?

My VM host already has a template disk file in its storage pool.
I would like to use that file as a full-copy template, not as a COW backing image.

However, specifying the 'image' parameter for a storage volume expects the image file to be on the controller machine.
OTOH, specifying 'backing_image' will use that image as a COW backing image, which I'm also not too fond of.

I'm looking at volumes.yml and I don't see how I could do what I want. Is it possible?

Fails resizing qcow from 15G to 50G

Hi,
This feels like a variation of #23, but I'm reporting it just in case.
I'm trying to build instances by copying an existing qcow image; the process fails due to a resizing error.

/var/lib/libvirt/images# virsh vol-create-as --pool default --name tv --capacity 50GB --format qcow2
Vol tv created

/var/lib/libvirt/images# virsh vol-upload --pool default --vol tv --file /var/lib/libvirt/staging/W10-V1.vdi.qcow 

/var/lib/libvirt/images# virsh vol-resize --pool default --vol tv --capacity 50GB
error: Failed to change size of volume 'tv' to 50GB
error: invalid argument: Can't shrink capacity below current capacity unless shrink flag explicitly specified

/var/lib/libvirt/images# ls -lh /var/lib/libvirt/staging/W10-V1.vdi.qcow
-rwxr-xr-x 1 root root 16G May 14 20:20 /var/lib/libvirt/staging/W10-V1.vdi.qcow

I haven't investigated beyond this yet, but I will try to update if I find a cause for the issue.
