canonical / lxd-ui

Easy and accessible container and virtual machine management. A browser interface for LXD.

License: GNU General Public License v3.0

JavaScript 21.04% HTML 0.03% SCSS 3.24% Shell 0.09% TypeScript 75.35% Makefile 0.18% Dockerfile 0.06%
containers lxc lxd virtual-machine webapp hacktoberfest web

lxd-ui's People

Contributors

aaryanporwal, alandsleman, edlerd, jacobcoffee, lorumic, mas-who, nottrobin, renovate[bot], rubinaga, samhotep, xlmnxp, yurii-vasyliev


lxd-ui's Issues

Add Operations

I want to add an operations list (in the bottom bar or on a separate page),
but I don't know if there is a design for it in Figma (I don't have access).
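
For reference, the operations data is already exposed by the API the UI talks to; a minimal way to inspect it from a trusted client:

# List background operations via the CLI or the raw endpoint
lxc operation list
lxc query /1.0/operations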

Cannot edit any configuration option with unsupported network config

Hi Team,

I have LXD VMs created using the MAAS KVM Pod functionality. It has 3 network interfaces, bridged.

When I try to edit the instance configuration in LXD-UI and attempt to change the CPU core count, it fails to save with the error:

Instance update failed
Invalid devices: Device validation failed for "eth0": Failed loading device "eth0": Unsupported device type

This happens despite my not having edited that panel. It seems the UI tries to persist all config options even if they were not edited.

If I switch to the Networks panel, the network names (eth0/eth1/eth2) show, and each "network device" dropdown lists all the interfaces on my system (including vsw0/vsw1, which are just Linux bridges), but nothing is selected, hence the error; the text shows "Select option", despite vsw0 being in the list.

Here is the network config from "lxc config show". In case it matters for reproduction, this VM is in a non-default 'maas' project.
I guess this bug is in two parts:

  • Firstly, ideally it would not try to persist unchanged configuration, particularly if it may use some unsupported feature (there are many ways to configure networks that don't seem to be supported in the UI yet); a PATCH-based sketch follows the device config below.
  • Secondly, this basic bridged network configuration should work.

Versions:

LXD snap:
tracking:     latest/stable/ubuntu-22.04
installed:          5.15-002fa0f             (25112) 181MB -

MAAS: 3.3.4
devices:
  eth0:
    boot.priority: "1"
    name: eth0
    nictype: bridged
    parent: vsw0
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: vsw1
    type: nic
  eth2:
    name: eth2
    nictype: bridged
    parent: vsw1
    type: nic
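
A possible direction for the first part, sketched here with the CLI (instance name is a placeholder): send only the changed keys with an HTTP PATCH instead of writing the whole config back with PUT, so untouched devices such as eth0 are never revalidated. This is a sketch of the API behaviour, not how the UI currently works.

# Update only the CPU count; the bridged NIC devices above stay untouched
lxc query -X PATCH -d '{"config": {"limits.cpu": "4"}}' /1.0/instances/my-maas-vm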

Allow Easy Creation of VM from an ISO

Currently "Create an instance" only allows base images that are in the LXD image repo.

I am sure you have plans for users to be able to add their own image repos and select from them etc. But for new users and those just starting to play with LXD it would be good to be able to create a VM straight from an ISO. So have something like a radio select on "Create an instance" that allows "Base image" or "Install from .iso". This would auto-mount the ISO and add it to the VM before starting it. Installing from an .iso has some advantages, especially where custom installs are wanted (and the user hasn't wrapped their head around cloud-init yet, if it is even supported by the OS). A CLI workaround is sketched below.
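
For reference, a rough CLI equivalent of the requested flow, assuming a pool named "default" and placeholder file/instance names:

# Import the ISO as a custom volume, attach it as a bootable disk, then install interactively
lxc storage volume import default ./ubuntu-22.04.iso iso-vol --type=iso
lxc init my-vm --empty --vm
lxc config device add my-vm iso-vol disk pool=default source=iso-vol boot.priority=10
lxc start my-vm
lxc console my-vm --type=vga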

Kind of related to this: from a user's point of view it makes sense to have the Container/VM selection as the first thing on the "Create an instance" page, rather than selecting it later when the image is selected (I found this initially confusing). Selecting Container/VM first would then show only compatible images when installing from an image.

Show volumes in storage pools

Currently the UI shows only Instances, Profiles, Images and Snapshots on the "storage pool" screen. It should also show the volumes of the storage pool.
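
The data is already available from the API; for a pool named "default":

# List custom volumes of a pool via the CLI or the raw endpoint
lxc storage volume list default
lxc query /1.0/storage-pools/default/volumes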

Address already in use

After installing MAAS, lxdbr0 does not come up:

Jul 28 07:52:04 supermicro systemd-networkd[1005]: lxdbr0: Link DOWN
Jul 28 07:52:05 supermicro lxd.daemon[1592]: time="2023-07-28T04:52:05Z" level=error msg="Failed initializing network" err="Failed starting: The DNS and DHCP service exited prematurely: Process exited with non-zero value 2 (\"dnsmasq: failed to create listening socket for 10.10.10.1: Address already in use\")" network=lxdbr0 project=default
Jul 28 07:53:05 supermicro systemd-networkd[1005]: lxdbr0: Link UP
Jul 28 07:53:05 supermicro audit[5747]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/snap/lxd/common/lxd>" pid=5747 comm="apparmor_parser"
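
The dnsmasq error means something else is already bound on the bridge address. A way to diagnose and work around it, assuming the 10.10.10.1 address from the log:

# Find what is already listening on the bridge address
sudo ss -ulpn | grep 10.10.10.1
# Move lxdbr0 to an unused subnet so dnsmasq can bind
lxc network set lxdbr0 ipv4.address 10.20.30.1/24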

Extract LXD TypeScript client

I'm not sure how the source code of this software is organized, but there should be some kind of TypeScript wrapper around the LXD API. Could it be extracted and published as a separate npm package, please?

dotrun fails with "haproxy not found"

I'm playing around with LXD/LXC right now and getting used to it. I've tried installing the UI and ran into a problem.

After running dotrun I'm presented with the following error:

webpack 5.75.0 compiled successfully in 13487 ms
$ sass static/sass/styles.scss build/ui/styles.css --load-path=node_modules --style=compressed && postcss --use autoprefixer --replace 'build/**/*.css' --no-map
$ watch -p 'static/sass/**/*.scss' -p 'static/js/**/*' -c 'yarn run build'
$ ./entrypoint
booting on https://localhost:9443
./entrypoint: line 27: haproxy: command not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I've already installed haproxy via sudo apt install haproxy since there was no snap version, but it still fails with the above error.

I'm currently using Ubuntu 22.04.1 LTS in a VM with the latest upgrades installed to test all of this out.
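
One thing worth checking, assuming the apt package was used: apt installs haproxy to /usr/sbin, which may not be on the PATH of the shell that dotrun spawns.

# Confirm where haproxy landed and make sure /usr/sbin is on the PATH
command -v haproxy || ls -l /usr/sbin/haproxy
export PATH="$PATH:/usr/sbin"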

Missing CGroup Controllers / Network Priority and Blkio Weight

A warning stating "Couldn't find the CGroup network priority controller." appears under "Warnings" in the LXD UI.

In an attempt to resolve this warning, I added the parameter systemd.unified_cgroup_hierarchy=0 to the kernel command line in the GRUB configuration file, as mentioned in https://discuss.linuxcontainers.org/t/getting-below-erro-while-starting-lxd/10005

This resolved the network priority controller warning but introduced a new one: "Couldn't find the CGroup blkio.weight."

By removing the systemd.unified_cgroup_hierarchy=0 parameter from the GRUB configuration, the original warning about the missing network priority controller reappeared.

snap lxd Version:git-1573b79 Rev:24886 Tracking:latest/edge
Ubuntu 23.04 kernel:6.2.0-20-generic
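
For context: net_prio and blkio are cgroup v1 controllers, while recent Ubuntu releases boot with the unified cgroup v2 hierarchy by default, so one of the two warnings is expected either way. To see what the kernel currently exposes:

# Controllers available on the unified (v2) hierarchy
cat /sys/fs/cgroup/cgroup.controllers
# Confirm whether the v1 fallback parameter is active
cat /proc/cmdline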

Chrome Browser on Windows Certificate Issues

The instructions on how to set up the certificates on the Windows Chrome browser do not seem to be correct or work for me.

Using Chrome Version 113.0.5672.63

The /ui/certificates/generate web page states:

Paste into the address bar:
chrome://settings/security 
Scroll all the way down and click the Manage Certificates button.
Click the Import button and select the lxd-ui.pfx file you just downloaded. Confirm an empty password.

My Chrome does not have a Manage Certificates button.

This seems to work for me to import the key pair:

Paste into the address bar: chrome://settings/security
Scroll down to the Advanced settings at the bottom of the screen.
Click on "Manage device certificates" - make sure to click on the text and not on the link icon.
This opens a certificates management dialog - Click Import...
Click Next.
Hit Browse, select Downloads, select the Personal Information Exchange file type and select the lxd-ui.pfx file. Then click Next.
Leave the password empty and click Next.
Select "Automatically select the certificate store..." and click Next.
Click Finish

This seems to have imported the lxd-ui.pfx file correctly into Chrome.
LXD CERT

I have also imported the crt into LXD:

$ lxc config trust list
+--------+------------+-------------+--------------+-----------------------------+------------------------------+
|  TYPE  |    NAME    | COMMON NAME | FINGERPRINT  |         ISSUE DATE          |         EXPIRY DATE          |
+--------+------------+-------------+--------------+-----------------------------+------------------------------+
| client | lxd-ui.crt |             | 2203b9ac2948 | May 6, 2023 at 2:09pm (UTC) | Jan 30, 2026 at 2:09pm (UTC) |
+--------+------------+-------------+--------------+-----------------------------+------------------------------+

But this does not seem to be working, as the LXD web host keeps redirecting me to the /ui/certificates/generate page and telling me to generate certificates.
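
A quick check, assuming lxd-ui.crt is the certificate that was imported into the browser: compare its fingerprint with the one LXD trusts, since a mismatch would explain the redirect.

# The first characters of the SHA-256 fingerprint should match the trust list entry
openssl x509 -in lxd-ui.crt -noout -fingerprint -sha256
lxc config trust list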

Breadcrumb is Missing for Storages Page

On other pages such as instances and profiles, as you drill down a breadcrumb is shown top-left (e.g. Profiles > default) that allows you to click the previous page name to traverse back up the chain.

This is missing from the Storages page.

If you click on default in Storages, it should show "Storages > default" top-left, with the "Storages" part being a link to take you back to the default Storages page.

IP Addresses Missing from Instances Page

Given the following:

testuser@R520-03:~$ lxc list
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| NAME |  STATE  |          IPV4           |                      IPV6                       |      TYPE       | SNAPSHOTS |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| c1   | RUNNING | 10.216.170.7 (eth0)     | fd42:2590:3f4d:11dc:216:3eff:fedc:d9cf (eth0)   | CONTAINER       | 0         |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v1   | RUNNING | 10.216.170.154 (enp5s0) | fd42:2590:3f4d:11dc:216:3eff:febf:2bd6 (enp5s0) | VIRTUAL-MACHINE | 0         |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+
| v2   | RUNNING | 10.216.170.64 (enp5s0)  | fd42:2590:3f4d:11dc:216:3eff:fe82:a46b (enp5s0) | VIRTUAL-MACHINE | 0         |
+------+---------+-------------------------+-------------------------------------------------+-----------------+-----------+

I would expect the instances page to show the IP addresses for the running VMs, but it only shows IPs for the container.

lxd ui screenshot 2
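
For reference, the addresses of a VM are available from the instance state endpoint (v1 is one of the VMs from the listing above; piping to jq assumes jq is installed):

# Network state, including IPv4/IPv6 addresses, for a running VM
lxc query /1.0/instances/v1/state | jq '.network'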

Communication Error

I've deployed my LXD server and everything works fine on the Linux command line. But I want to manage all operations over the HTTP REST APIs described in the official documentation. I have a certificate and key. Now how can I call https://lxd-host.secret.com:8443/1.0/containers from Postman, JavaScript, PHP, or any other language?
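
A minimal example with curl, assuming the certificate and key are the ones LXD already trusts (note that /1.0/instances supersedes the older /1.0/containers endpoint):

# -k skips server cert verification; --cert/--key present the trusted client pair
curl -k --cert lxd-ui.crt --key lxd-ui.key https://lxd-host.secret.com:8443/1.0/instances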

Hardcoded port 9443 is occupied by the snap microcloud

Trying to deploy the UI on a machine where the microcloud snap is installed.
When running "dotrun", the build process fails with:

docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/cab6cc4470e2ae694c4c4cbc41b80eab42f1662d19e8fa4fee3e934fd69b7829/start: Internal Server Error ("driver failed programming external connectivity on endpoint dotrun-lxd-ui-1677710801 (092d2c89c715d8cec781c06f6c15db977903ad7c32463d2fa8257847eb689063): Error starting userland proxy: listen tcp4 0.0.0.0:9443: bind: address already in use")

A quick scan shows that port 9443 is occupied by the microcloud snap:

$ sudo netstat -antpl | grep 9443
tcp 0 0 10.113.213.121:9443 0.0.0.0:* LISTEN 2558/microcloudd

The port should be configurable and the default value should be changed.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/pr.yaml
  • actions/checkout v3
  • actions/checkout v3
  • actions/checkout v3
npm
package.json
  • @canonical/react-components 0.37.6
  • formik 2.2.9
  • react 18.2.0
  • react-dom 18.2.0
  • react-router-dom 6.4.2
  • react-scripts 5.0.1
  • serve 14.0.1
  • vanilla-framework 3.8.0
  • xterm-addon-fit 0.6.0
  • xterm-for-react 1.0.4
  • yup 0.32.11
  • @babel/core 7.19.6
  • @babel/eslint-parser 7.19.1
  • @babel/preset-env 7.19.4
  • @babel/preset-react 7.18.6
  • @babel/preset-typescript 7.18.6
  • @types/react 18.0.21
  • @types/react-dom 18.0.6
  • @types/react-router-dom 5.3.3
  • @typescript-eslint/eslint-plugin 5.40.1
  • @typescript-eslint/parser 5.40.1
  • autoprefixer 10.4.12
  • babel-loader 8.2.5
  • babel-plugin-transform-es2015-modules-commonjs 6.26.2
  • concurrently 7.4.0
  • copy-webpack-plugin 11.0.0
  • css-loader 6.7.1
  • eslint 8.25.0
  • eslint-config-prettier 8.5.0
  • eslint-plugin-prettier 4.2.1
  • eslint-plugin-react 7.31.10
  • postcss 8.4.18
  • postcss-cli 10.0.0
  • prettier 2.7.1
  • sass 1.55.0
  • sass-loader 13.1.0
  • style-loader 3.3.1
  • stylelint 14.14.0
  • stylelint-config-prettier 9.0.3
  • stylelint-config-standard-scss 5.0.0
  • stylelint-order 5.0.0
  • stylelint-prettier 2.0.0
  • stylelint-scss 4.3.0
  • ts-loader 9.4.1
  • typescript 4.8.4
  • watch-cli 0.2.3
  • webpack 5.74.0
  • webpack-cli 4.10.0

  • Check this box to trigger a request for Renovate to run again on this repository

Allow additional simplestream remotes via the ui

Unfortunately, in its current state the web UI only exposes the Ubuntu and LinuxContainers simplestreams. It would be nice if there was some way to allow additional remotes to be used. This could also expand to a remote dropdown in the UI when selecting an image.

I had a few ideas. The first would be based on a user.* config setting on the project the user is attempting to create the instance in. The config setting could be something like user.remote.0: https://mysimplestream.remote.com, so that any number of additional remotes can be defined. You could also define a remote's name via user.remote.0.name: mysimplestream (see the sketch below).

The other idea was to expose the global remotes' settings via the LXD API* so the additional remotes could be accessed from the web UI.

  • * - would have to be implemented in the core LXD API
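
A sketch of the proposed per-project scheme; these user.* keys are the hypothetical ones from this issue, not something LXD currently interprets:

# Hypothetical config keys proposed above (the user.* namespace is free-form in LXD)
lxc project set default user.remote.0=https://mysimplestream.remote.com
lxc project set default user.remote.0.name=mysimplestream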

Creating a new VM with custom ISO via UI states VM started, but remains stopped

Required information

  • Distribution: Ubuntu Server
  • Distribution version: 22.04
  • The output of "lxc info" or if that fails:
config:
  core.https_address: '[::]:8443'
  core.metrics_address: :8444
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: removed
auth_user_method: unix
environment:
  addresses:
  removed
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    REMOVED
    -----END CERTIFICATE-----
  certificate_fingerprint: 
  driver: lxc | qemu
  driver_version: 5.0.3 | 8.0.4
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-86-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: turlough
  server_pid: 14171
  server_version: "5.18"
  storage: zfs | btrfs
  storage_version: 2.1.5-1ubuntu6~22.04.1 | 5.16.2
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.45.0
    remote: false
  - name: zfs
    version: 2.1.5-1ubuntu6~22.04.1
    remote: false
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: ceph
    version: 17.2.6
    remote: true
  - name: cephfs
    version: 17.2.6
    remote: true
  - name: cephobject
    version: 17.2.6
    remote: true

Issue description

When creating a new VM using the LXD UI and a custom ISO, once the creation is complete an info box reports success and says the instance has been launched; however, the instance is not launched and remains in a stopped state.

A minor issue, but it would be great if it either just reported success and left the VM stopped, or really launched the VM when it says launched.

Steps to reproduce

  1. Start creating a new instance via the LXD UI.
  2. Rather than browsing for an available image, upload a custom ISO.
  3. Complete the setup and click Create and start

The VM is created successfully, but not launched.
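
A workaround in the meantime, with a placeholder instance name:

# Confirm the reported state, then start the VM manually
lxc list my-iso-vm
lxc start my-iso-vm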

Information to attach

Before launching the VM there are no logs, so nothing to attach.

Screenshot from 2023-10-12 14-20-10

Incorrect instance count during instance creation

Steps to reproduce:

  • Go to instance creation, select image, input a name, click “Create”
  • Check that the “temporary” item appears in the table - the instance count should be correct at this stage
  • Move the focus outside of the window/tab where the LXD-UI is running, e.g. to another window/tab of the browser
  • Move the focus back to the LXD-UI window/tab
  • Check the instance count at the bottom of the table - it shows an incorrect value

Screen capture:

Peek 2023-10-10 09-25

Web Page Shows Blank Screen for Storages.

To replicate the issue.

From the main web page
Click Storages on the left hand menu
Click on the default storage to show details of that.
Click Storages on the left hand menu - now a blank screen is shown.

Console shows the following error:

index-c8e2af3d.js:38  Uncaught TypeError: r.map is not a function
    at Q (StorageList-88a31094.js:2:865)
    at am (index-c8e2af3d.js:38:19520)
    at Sp (index-c8e2af3d.js:40:3139)
    at sS (index-c8e2af3d.js:40:44557)
    at oS (index-c8e2af3d.js:40:39763)
    at fT (index-c8e2af3d.js:40:39691)
    at ss (index-c8e2af3d.js:40:39545)
    at Ap (index-c8e2af3d.js:40:35913)
    at l0 (index-c8e2af3d.js:40:36716)
    at Jn (index-c8e2af3d.js:38:3274)

Also the Storages menu item and page title should probably be Storage (as the plural of Storage is Storage).

Reverse Proxy

Hello, total noob here!

I am looking to make the lxd-ui available on my domain (lxd.domain.com) using NGINX reverse proxy mananger.
I already have a bunch of my servers applications running through this reverse proxy, but I cannot get it to work for LXD-UI. I am guessing it has something to do with the certificates, but not too sure. I know too little about that stuff unfortunately.

I appreciate any help and love the project!
Keep it up!

Require selecting a storage pool, or leave the selected one

I used the UI to create an instance and wanted to increase the storage pool size, so I configured that before launching the instance:
Screenshot from 2023-05-12 14-00-13

The result was:
Failed creating instance record: Failed initialising instance: Invalid devices: Device validation failed for \"root\": Root disk entry must have a \"pool\" property set

Before I selected "Root storage pool" to configure the size, the default pool was selected there, so I didn't even look at the dropdown (the only thing I wanted to change was the size).
But as soon as I selected "Root storage pool", the dropdown selection changed to "Select option", and it let me launch the instance without selecting anything, which caused the error.

It makes sense in hindsight, but I think we could improve the usability here - either by requiring a pool to be selected, or by keeping the dropdown on default unless something else is selected.

No metrics displayed if user is restricted to see just one project

If I use lxc config trust to restrict a user to a certain restricted project (the project was auto-created by the multi-user setup), the UI doesn't show Usage and just displays "Loading metrics ..." indefinitely.

If I again set the user to see all projects (with lxc config trust edit xxxxyyyyzzzz, setting restricted to false), the GUI shows everything, including metrics for an instance in the user-1000 project.

Instance Terminal Error: Connection closed abnormally in LXD UI (Server 5.17)

Description:
I'm experiencing an issue with the LXD UI on a bare metal installation where the Instance Terminal doesn't successfully load the web-based terminal. Instead, an error message is displayed.

Error
The connection was closed abnormally, e.g., without sending or receiving a Close control frame

System Details:
OS: Ubuntu 22.04.3 LTS x86_64
Host: NUC11PAHi5 M15533-306
Kernel: 5.15.0-83-generic
Shell: bash 5.1.16

LXD Version:
Name Version Rev Tracking Publisher Notes
lxd 5.17-e5ead86 25505 latest/stable canonical✓ -

Additional VM Info:
Network Interface: Bridge
Image Info: Ubuntu jammy cloud all ubuntu/22.04/cloud

Logs:
VM Logs: All appear clean

disk.mount.log
qemu.log
qemu.early.log
LXD Logs: No evident issues

dnsmasq.lxdbr0.log
lxd.log.1

Note: Continuous monitoring of sudo snap logs lxd.daemon -f showed no abnormalities.

Any help diagnosing and resolving this issue would be greatly appreciated!

InstanceTerminalError

How to deploy for a cluster?

How should I think about the lxd-ui capability when deploying an LXD cluster?

  1. Should each LXD host have its own lxd-ui running and then I enable some kind of load balancer in front of them to consolidate to a single IP/domain?
  2. Should only one LXD host have the lxd-ui running? but then what if that host goes offline for patching or other...
  3. Should the lxd-ui be run on some other HA solution (e.g. a separate web cluster) and "pointed at" all three lxd-hosts?

Just trying to figure out the best architecture to provide the web-ui in an HA way. Thx!

How to contribute on LXD-UI development?

Hello,
I'm a JavaScript + TypeScript developer with knowledge of LXD, how it works, and its REST APIs.
I saw this project and I want to help develop it, but I didn't find any roadmap or project tickets/cards to pick up.

Is there a way to contribute?

How to install on production LXD?

I have LXD installed from the Arch Linux package (https://archlinux.org/packages/extra/x86_64/lxd/), not the snap. How can I install LXD-UI on it? I tried building lxd-ui with yarn build and moving the build/ directory to /var/lib/lxd, but this didn't help; there is still no UI available (even after restarting LXD). Where should I place the compiled files to make LXD serve them? ARCHITECTURE.md does not contain this information.
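
One thing to try, as an assumption based on how the snap wires the UI into the daemon: LXD looks for the UI build in the directory named by the LXD_UI environment variable, so the daemon has to be started with it set.

# Point the daemon at the built UI before starting it (path is a placeholder)
export LXD_UI=/path/to/lxd-ui/build/ui
lxd --group lxd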

LXD UI with Traefik

Hello,

I am hoping to get some guidance on using Traefik to proxy requests to the LXD UI. I have Traefik successfully configured for other applications, but I am encountering challenges integrating it with LXD.

If anyone has a working example of Traefik routing to the LXD UI (lxd.example.com) and logging in with the cert, I would greatly appreciate it if you could share it.

I tried this, but I'm not sure what I'm missing:

http:
  routers:
    lxd-ui:
      rule: "Host(`lxd.example.com`)"
      entryPoints:
        - websecure
      tls:
        certResolver: leresolver
        options: lxd-ui@file
      service: lxd-ui
      middlewares:
        - lxd-ui-passtlsclientcert

  services:
    lxd-ui:
      loadBalancer:
        servers:
          - url: https://xxx.xxx.xxx.xxx:8443

  middlewares:
    lxd-ui-passtlsclientcert:
      passTLSClientCert:
        pem: true

tls:
  options:
    lxd-ui:
      clientAuth:
        clientAuthType: RequireAnyClientCert

Thank you for your time and assistance.
-Jeffery

Add network topology

Hi! As a heavy user of LXD, what I found most complicated was representing my OVN topology across projects and networks.

It would be great to have an OpenStack Horizon-like topology view to visualize and configure networks across one or more clusters.

image

I'm not an expert and I don't know if this would be really helpful for everyone, but I'll help if the idea is kept.

Let me know whether the idea is good and if I should provide more information.

Support client tokens for authentication

The current TLS support is nice, but it has the downside of pushing users towards generating one client certificate per LXD server.
This then gets them into the annoying authentication flow of having to figure out which certificate to select in their browser.

Instead it may be interesting to support LXD's token based authentication where the user would basically:

  • Hit up /ui
  • UI detects that no client certificate is used (not sure if possible), then offers to generate a PFX for the user
  • UI instructs user to do lxc config trust add --name some-name on the LXD server
  • UI takes in the token string provided by the user and adds the browser's certificate to the target LXD's trust store

In this scenario, the user never needs to pass a .crt to the server and they can also rely on a single certificate per browser instead of per server.

Clearly distinguish LXD instances in page title

If you have multiple LXD UIs open in, e.g. Firefox tabs, then it's very hard to tell them apart. The tabs are all named "LXD UI", and the pages are all visually the same with the name "Canonical LXD" at the top left.

image

Compare this with e.g. Syncthing, where each instance has a name, that name is in the name of the tab, and it is clearly displayed at the top of the page.

image

Please consider adding something similar to the LXD UI, ideally settable from the Settings page.

It would be nice if the defaults for this setting at least included the hostname.

Thanks!

lxd-ui.pfx does not work on a Mac because of empty password.

When using any browser on a Mac, it uses the OS for certificates. macOS, at least Ventura, doesn't allow blank passwords for PFX files. The easy fix would be for the generator in the LXD UI to prompt for a password when generating the certs; that way the user can type it in when adding the cert to the macOS keychain.

I did figure out a workaround...
If I run this in the terminal on my Mac, it will add a password to a new file:
openssl pkcs12 -in lxd-ui.pfx -password pass:"" -passin pass:"" -passout pass:"InsertPasswordHere" -out "lxd-ui-mac.pfx"

Then all I had to do was double-click the lxd-ui-mac.pfx file and enter the password I set above. Now I have access!

Add Instances CRUD + terminal

  • Prototype exists with instance list similar to CLI output
  • Terminal exists
  • Need design and UX iteration on the instance list view and instance creation form

Page Not Found 404 Page/Error Not Shown Consistently.

If I select a non-existent page such as https://myserver:8443/ui/default/instancesx then a nice "404 Page not found" error is shown.

But on other pages such as https://myserver:8443/ui then JSON is shown on the screen -

{"type":"error","status":"","status_code":0,"operation":"","error_code":404,"error":"not found","metadata":null}

Possibly as this path exists but no page is specified?

This is inconsistent, the JSON does not mean much to the user, and there's no way back to a valid web page from here.

When selecting base image, Ubuntu images should show 22.04 version number, not just jammy

When creating a VM and you click "Select base image", the offered Ubuntu releases only show the short name (e.g. focal, jammy, lunar). Ideally it would also display the release version (e.g. 22.04 for jammy). It would be even better if it indicated which is an LTS release.

In various products or places we commonly interchange whether 22.04 or Jammy is used; however, we have recently used the version number in some new places. For example, juju 3.x uses Ubuntu 'bases' such as ubuntu@22.04, deprecating the older method of specifying 'series: jammy'. Ideally we'd have at a minimum consistency, but realistically we should just display both in as many scenarios as possible.

While I work on Ubuntu every day and mostly have the version-to-release-name mapping memorised, I struggle with this in other projects, for example OpenStack. I am sure the same happens for people using Ubuntu.

Screenshot 2023-08-09 at 1 29 03 pm

Allow creating empty instances

Currently it is required to choose a base image when creating an instance, so it is impossible to create an empty instance (for example, to install from ISO or PXE). Please change that.
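
For reference, the CLI already supports this; the UI could mirror the same flag:

# Create an empty VM with no base image (name is a placeholder)
lxc init my-empty-vm --empty --vm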

Foreign language keyboards not working fully in console

Hello. I've been trying the Spanish keyboard in Ubuntu and openSUSE VMs and there are keys that do not correspond to the layout, e.g.:
Ñ - 'º'
< - '-'
The same happens in both VMs after selecting the Spanish (Spain) keyboard and rebooting (just in case).

Regards.
PS: The Alt Gr key is not working in Chrome; it says "no map for 225", so no special characters (like @ or # on my Spanish keyboard) can be used.

Sort volumes by size

Currently volumes can be sorted by all fields (name, date, used by) but not by size. I find this really odd, because size is the most obvious parameter to sort them by.

Snapshot functionality seems incomplete, not working yet?

Fantastic work so far on this UI... absolutely gorgeous and well thought out. As I was poking through all the features and options in my LXD cluster, I realized that snapshot creation isn't working (yet?). Snapshots do work from the CLI as expected, and those snapshots show up in the UI in real time, but I can't create snapshots from the UI, only restore/delete existing ones.

I'll keep muddling around with it and see if I've missed something, but that stood out in the first hour of test-driving it.

Add snapshots

  • Add gear icon next to the snapshots number in instance list
  • opens a modal with a table with columns: name, created at, expires at, stateful form api
  • add api file for snapshots
  • add types for snapshots
  • add actions to the modal to remove / restore to snapshots
  • add action to create a snapshot (in the modal?)
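
The relevant API surface for this modal, with a placeholder instance name:

# List snapshots, then create one (matches the columns above: name, expiry, stateful)
lxc query /1.0/instances/my-instance/snapshots
lxc query -X POST -d '{"name": "snap0", "stateful": false}' /1.0/instances/my-instance/snapshots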
