
intel / tdx-tools


Cloud Stack and Solutions for Intel TDX (Trust Domain Extension)

License: Apache License 2.0

Shell 14.27% Python 81.26% Makefile 0.05% Rust 4.42%
tdx virtualization kvm confidential-computing build qemu uefi

tdx-tools's Introduction

PROJECT NOT UNDER ACTIVE MANAGEMENT

This project will no longer be maintained by Intel.

Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.

Intel no longer accepts patches to this project.

If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

Contact: [email protected]

Intel® TDX (Trust Domain Extensions)


NOTE: This project has reached end of life. See Software Availability for more information on the TDX Early Preview.

For Intel TDX Full Disk Encryption solutions and for building a trusted chain for cloud native workloads in a confidential computing environment, please see Unified API for Trusted Execution Environment.

1. Overview

1.1 Intel® Trust Domain Extensions (TDX)

Intel® Trust Domain Extensions (TDX) refers to an Intel technology that extends Virtual Machine Extensions (VMX) and Multi-Key Total Memory Encryption (MK-TME) with a new kind of virtual machine guest called a Trust Domain (TD). A TD runs in a CPU mode that protects the confidentiality of its memory contents and its CPU state from any other software, including the hosting Virtual Machine Monitor (VMM). Please see the details here.

1.2 Hardware Availability

1.3 Software Availability

NOTE: The project "Linux TDX SW Stack" has reached end of life. This branch only sustains tools for the TDX early preview.

Please refer to the Red Hat blog Enabling hardware-backed confidential computing with a CentOS SIG, the Ubuntu blog Intel® TDX 1.0 technology preview available on Ubuntu 23.10, or the SUSE blog Intel® TDX Support Coming to SUSE Linux Enterprise Server for more information on the TDX Early Preview.

The corresponding git repositories are listed below.

2. How to launch a TD

Use the script start-qemu.sh to start a TD via QEMU.

A simple invocation of the script to launch a TD:

./start-qemu.sh -i <guest image file> -k <guest kernel file>

Or, to boot with the guest's GRUB bootloader:

./start-qemu.sh -i <guest image file> -b grub

For more advanced configurations, please check the help menu:

./start-qemu.sh -h

Once the TD guest VM is launched, you can verify that it is genuinely a TD VM by querying cpuinfo; it should report the tdx_guest flag.

cat /proc/cpuinfo | grep tdx_guest
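That check can be scripted; below is a minimal sketch (the cpuinfo path is a parameter so the logic can be exercised against a sample file; on a real guest, call it with no argument):

```shell
#!/bin/sh
# Return success when the given cpuinfo file (default: /proc/cpuinfo)
# advertises the tdx_guest CPU flag, i.e. we are inside a TD guest.
is_td_guest() {
    cpuinfo="${1:-/proc/cpuinfo}"
    grep -qw tdx_guest "$cpuinfo"
}

if is_td_guest; then
    echo "running inside a TDX guest"
else
    echo "not a TDX guest"
fi
```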

tdx-tools's People

Contributors

bfuhry, clsulliv, dongx1x, hairongchen, haokunx-intel, hector-cao, intelzhongjie, jialeif, jurobystricky, kenplusplus, kepingwa, leizhou-97, matti, qhongye, rdower, ruomengh, ruoyu-y, tenderbulk, vli11, zhlsunshine


tdx-tools's Issues

Not even SGX supported 6421N has TDX

The Intel(R) Xeon(R) Gold 6421N on an X13SEI-TF motherboard has TDX disabled, even though SGX is listed as supported with "Intel SPS".

It is starting to look like no retail-available motherboard/CPU supports TDX.

Error installing shim in guest_repo

Hi, I ran the following commands in order and got an error when installing shim in guest_repo:

apt install --no-install-recommends --yes build-essential fakeroot devscripts wget git equivs liblz4-tool sudo python-is-python3 pkg-config unzip
cd tdx-tools/build/ubuntu-22.04
./build-repo.sh
cd host_repo
sudo apt -y --allow-downgrades install ./*.deb
cd ../guest_repo
sudo apt -y --allow-downgrades install ./*.deb

Setting up shim (15.4-mvp3) ...
/var/lib/dpkg/info/shim.postinst: 99: update-secureboot-policy: not found
dpkg: error processing package shim (--configure):
installed shim package post-installation script subprocess returned error exit status 127
Errors were encountered while processing:
shim
needrestart is being skipped since dpkg has failed
E: Sub-process /usr/bin/dpkg returned an error code (1)

Could you please give me some advice? By the way, I do not get the 'Download is performed unsandboxed as root as file ... couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)' warning when installing the *.deb packages in the host_repo directory; is that normal?

Thank you!
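For what it's worth, `update-secureboot-policy` is normally shipped by Ubuntu's shim-signed package, so `sudo apt install shim-signed` followed by `sudo dpkg --configure -a` is the first thing to try. When Secure Boot policy handling is genuinely irrelevant for the guest image being built, a no-op stub also lets the postinst finish; this is an assumption-laden workaround, not an official fix:

```shell
#!/bin/sh
# Workaround sketch: install a no-op update-secureboot-policy stub so
# shim's postinst script can complete. Only do this when Secure Boot
# policy handling is not needed in the image being built.
install_stub() {
    dir="$1"                     # a directory on PATH, e.g. /usr/local/sbin
    mkdir -p "$dir"
    printf '#!/bin/sh\nexit 0\n' > "$dir/update-secureboot-policy"
    chmod +x "$dir/update-secureboot-policy"
}
```

After the stub is in place, `sudo dpkg --configure -a` should let the pending shim configuration finish.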

W5-2445 support

If I purchase a $3000 USD HP Z4 G5 workstation with the entry-level W5-2445 processor, will the TDX tooling work on it?

E.g., can I run secure VMs on it? Are all Sapphire Rapids CPUs okay for this?

Timezone settings are hardcoded

The build tool has the timezone hardcoded.
Add a mechanism to automatically detect the host's locale/geo and set the timezone appropriately during the build process.
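A sketch of what such detection could look like, assuming the host keeps /etc/localtime as the conventional symlink into the zoneinfo database (on systemd hosts, `timedatectl show -p Timezone --value` is an alternative):

```shell
#!/bin/sh
# Derive an Olson timezone name (e.g. "Europe/Berlin") from a localtime
# symlink, falling back to UTC when it cannot be determined.
detect_tz() {
    link="${1:-/etc/localtime}"
    target=$(readlink "$link" 2>/dev/null) || true
    case "$target" in
        */zoneinfo/*) echo "${target##*/zoneinfo/}" ;;
        *)            echo "UTC" ;;
    esac
}
```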

Error when running start-qemu.sh on the TDX host machine

Hi All,

I am getting the error 'qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such device' when I execute './start-qemu.sh -i td-guest-ubuntu-22.04.qcow2 -b grub'.

I am using ubuntu-22.04 and the mvp-tdx-5.15 branch and followed the instructions for ubuntu 22.04 (https://github.com/intel/tdx-tools/tree/mvp-tdx-5.15/build/ubuntu-22.04).

I have also enabled virtualization on the machine through the BIOS. It seems like the issue is with running QEMU inside another virtualization layer. Could you please help me with this issue?

./start-qemu.sh -i td-guest-ubuntu-22.04.qcow2 -b grub

WARN: Using HVC console for grub, could not accept key input in grub menu
=========================================
Guest Image       : td-guest-ubuntu-22.04.qcow2
Kernel binary     : 
OVMF_CODE         : /usr/share/qemu/OVMF_CODE.fd
OVMF_VARS         : /home/hasini/Desktop/tdx-tools/OVMF_VARS.fd
VM Type           : td
CPUS              : 1
Boot type         : grub
Monitor port      : 9001
Enable vsock      : false
Enable debug      : false
Console           : HVC
=========================================
Remapping CTRL-C to CTRL-]
Launch VM:
/usr/bin/qemu-system-x86_64 -accel kvm -name process=tdxvm,debug-threads=on -m 2G -vga none -monitor pty -no-hpet -nodefaults -drive file=/home/hasini/Desktop/tdx-tools/td-guest-ubuntu-22.04.qcow2,if=virtio,format=qcow2 -monitor telnet:127.0.0.1:9001,server,nowait -device loader,file=/usr/share/qemu/OVMF_CODE.fd,id=fd0,config-firmware-volume=/home/hasini/Desktop/tdx-tools/OVMF_VARS.fd -object tdx-guest,id=tdx -cpu host,-kvm-steal-time,pmu=off -machine q35,pic=no,kernel_irqchip=split,kvm-type=tdx,confidential-guest-support=tdx -device virtio-net-pci,netdev=mynet0 -smp 1 -netdev user,id=mynet0,hostfwd=tcp::10026-:22 -chardev stdio,id=mux,mux=on,logfile=/home/hasini/Desktop/tdx-tools/vm_log_2023-01-25T1323.log -device virtio-serial,romfile= -device virtconsole,chardev=mux -monitor chardev:mux -serial chardev:mux -nographic
char device redirected to /dev/pts/1 (label compat_monitor0)
ioctl(KVM_CREATE_VM) failed: 19 No such device
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such device

When I check whether TDX is enabled on the host machine, I get this output:

sudo dmesg | grep -i TDX

[    0.000000] Linux version 5.15.0-mvp16v11+0-generic  (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #mvp16v11+tdx SMP Wed Jan 11 16:24:50 EST 2023
[    0.125235] tdx-tests: bp_hit: 19
[    0.125235] tdx-tests: ovf_hit: 21
[    0.125235] tdx-tests: PASS: single step #VE emulated instructions
[    0.125235] tdx-tests: PASS: single step TDX module emulated CPUID 0
[    0.125235] tdx-tests: PASS: single step TDX module emulated RDMSR 0x1a0

Does this mean TDX is enabled on the host machine?
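Note that the `tdx-tests:` lines above come from a self-test module, so by themselves they do not prove the host stack is ready; `KVM_CREATE_VM ... No such device` typically means KVM cannot create the requested (TDX) VM type, e.g. because kvm_intel did not initialize TDX or /dev/kvm is absent (as under unsupported nested virtualization). A rough checklist sketch; the exact device path and log wording vary across MVP kernel versions, so treat both as assumptions:

```shell
#!/bin/sh
# Heuristic host-side TDX readiness checks.
kvm_available() {
    # QEMU's "-accel kvm" requires /dev/kvm to exist
    [ -e "${1:-/dev/kvm}" ]
}
tdx_init_logged() {
    # count TDX-module initialization messages in a kernel log dump
    # (assumed wording; compare with your kernel's actual output)
    grep -ciE 'tdx.*initialized' "$1"
}
```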

Build fails for Ubuntu TD

Build is failing for Ubuntu TD with the latest tdx-tools repo commit 3d7809d

W: grub-efi-amd64-bin: debian-news-entry-has-strange-distribution UNRELEASED
W: grub-efi-amd64-bin: duplicate-override-context statically-linked-binary (lines 2 7) [usr/share/lintian/overrides/grub-efi-amd64-bin:7]
W: grub-efi-amd64-bin: duplicate-override-context statically-linked-binary (lines 3 8) [usr/share/lintian/overrides/grub-efi-amd64-bin:8]
W: grub-efi-amd64-bin: duplicate-override-context unstripped-binary-or-object (lines 1 6) [usr/share/lintian/overrides/grub-efi-amd64-bin:6]
W: grub-efi-amd64-bin: duplicate-override-context unstripped-binary-or-object (lines 4 9) [usr/share/lintian/overrides/grub-efi-amd64-bin:9]
W: grub-efi-amd64-dbg: duplicate-override-context statically-linked-binary (lines 2 5) [usr/share/lintian/overrides/grub-efi-amd64-dbg:5]
W: grub-efi-amd64-dbg: duplicate-override-context statically-linked-binary (lines 3 6) [usr/share/lintian/overrides/grub-efi-amd64-dbg:6]
W: grub-efi-amd64-dbg: duplicate-override-context unstripped-binary-or-object (lines 1 4) [usr/share/lintian/overrides/grub-efi-amd64-dbg:4]
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 11 "badly formatted trailer line"
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 13 "found start of entry where expected more change data or trailer"
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 17 "badly formatted trailer line"
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 19 "found start of entry where expected more change data or trailer"
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 5 "badly formatted trailer line"
W: grub-efi-amd64-bin: syntax-error-in-debian-changelog line 7 "found start of entry where expected more change data or trailer"
W: grub-efi-amd64-bin: unknown-field Efi-Vendor
W: grub-efi-amd64: using-question-in-extended-description-in-templates grub-efi/install_devices_failed
N: 266 hints overridden (266 errors); 5 unused overrides
Finished running lintian.

  • cp grub-efi-amd64_2.06-mvp3_amd64.deb grub-efi-amd64-bin_2.06-mvp3_amd64.deb ../guest_repo/
  • cd ..
  • sudo apt remove grub2-build-deps-depends grub2-unsigned-build-deps-depends -y
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    E: Unable to locate package grub2-build-deps-depends
  • true
  • build_kernel
  • cd intel-mvp-spr-kernel
    ./build-repo.sh: line 36: cd: intel-mvp-spr-kernel: No such file or directory

Cannot build successfully with the 2023WW27 SW stack: build-intel-mvp-vtpm-td fails

/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td
++ pushd deps/td-shim
/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim /home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td
++ cargo xbuild -p td-shim --target x86_64-unknown-none --release --features=main,tdx --no-default-features
warning: user-defined alias xbuild is shadowing an external subcommand found at: /root/.cargo/bin/cargo-xbuild
This was previously accepted but is being phased out; it will become a hard error in a future release.
For more information, see issue #10049 rust-lang/cargo#10049.
Updating crates.io index
Compiling compiler_builtins v0.1.82
Compiling core v0.0.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core)
Compiling proc-macro2 v1.0.63
Compiling unicode-ident v1.0.10
Compiling quote v1.0.29
Compiling syn v1.0.109
Compiling autocfg v1.1.0
Compiling rustversion v1.0.13
Compiling version_check v0.9.4
Compiling typenum v1.16.0
Compiling log v0.4.19
Compiling scopeguard v1.1.0
Compiling libc v0.2.147
Compiling bit_field v0.10.2
Compiling anyhow v1.0.71
Compiling spin v0.5.2
Compiling r-efi v3.2.0
Compiling bitflags v1.3.2
Compiling volatile v0.4.6
Compiling x86 v0.47.0
Compiling either v1.8.1
Compiling cc v1.0.79
Compiling lazy_static v1.4.0
Compiling generic-array v0.14.7
Compiling lock_api v0.4.10
Compiling spin v0.9.8
Compiling x86_64 v0.14.10
Compiling which v4.4.0
Compiling scroll_derive v0.10.5
Compiling zerocopy-derive v0.3.2
Compiling scroll v0.10.2
Compiling td-uefi-pi v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-uefi-pi)
Compiling tdx-tdcall v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/tdx-tdcall)
Compiling td-layout v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-layout)
Compiling td-shim v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-shim)
Compiling rustc-std-workspace-core v1.99.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/rustc-std-workspace-core)
Compiling alloc v0.0.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc)
Compiling cpufeatures v0.2.9
Compiling cfg-if v1.0.0
Compiling byteorder v1.4.3
Compiling bitfield v0.13.2
Compiling raw-cpuid v10.7.0
Compiling zerocopy v0.6.1
Compiling td-loader v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-loader)
Compiling spinning_top v0.2.5
Compiling linked_list_allocator v0.10.5
Compiling block-buffer v0.10.4
Compiling crypto-common v0.1.6
Compiling digest v0.10.7
Compiling td-logger v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-logger)
Compiling td-exception v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-exception)
Compiling sha2 v0.10.7
Compiling cc-measurement v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/cc-measurement)
Compiling td-paging v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-paging)
Finished release [optimized] target(s) in 12.38s
++ popd
/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td
++ cargo xbuild --target x86_64-unknown-none --features=td-logger/tdx,sha256,sha384 -p vtpmtd --release
warning: user-defined alias xbuild is shadowing an external subcommand found at: /root/.cargo/bin/cargo-xbuild
This was previously accepted but is being phased out; it will become a hard error in a future release.
For more information, see issue #10049 rust-lang/cargo#10049.
warning: /home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/src/vtpmtd/Cargo.toml: unused manifest key: dependencies.td-shim.default-featuers
Compiling compiler_builtins v0.1.82
Compiling core v0.0.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core)
Compiling proc-macro2 v1.0.63
Compiling quote v1.0.29
Compiling unicode-ident v1.0.9
Compiling syn v1.0.109
Compiling autocfg v1.1.0
Compiling version_check v0.9.4
Compiling rustversion v1.0.12
Compiling cc v1.0.79
Compiling typenum v1.16.0
Compiling scopeguard v1.1.0
Compiling log v0.4.19
Compiling libc v0.2.147
Compiling serde v1.0.164
Compiling bit_field v0.10.2
Compiling spin v0.5.2
Compiling bitflags v1.3.2
Compiling r-efi v3.2.0
Compiling volatile v0.4.6
Compiling anyhow v1.0.71
Compiling serde_json v1.0.99
Compiling either v1.8.1
Compiling unicode-xid v0.2.4
Compiling ryu v1.0.13
Compiling x86 v0.47.0
Compiling itoa v1.0.6
Compiling paste v1.0.12
Compiling tpm v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/src/tpm)
Compiling lazy_static v1.4.0
Compiling generic-array v0.14.7
Compiling proc-macro-error-attr v1.0.4
Compiling proc-macro-error v1.0.4
Compiling lock_api v0.4.10
Compiling ring v0.16.20 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-spdm/external/ring)
warning: unnecessary parentheses around match arm expression
--> deps/rust-spdm/external/ring/build.rs:628:21
|
628 | "x86_64" => ("win64"),
| ^ ^
|
= note: #[warn(unused_parens)] on by default
help: remove these parentheses
|
628 - "x86_64" => ("win64"),
628 + "x86_64" => "win64",
|

warning: unnecessary parentheses around match arm expression
--> deps/rust-spdm/external/ring/build.rs:629:18
|
629 | "x86" => ("win32"),
| ^ ^
|
help: remove these parentheses
|
629 - "x86" => ("win32"),
629 + "x86" => "win32",
|

Compiling spin v0.9.8
Compiling x86_64 v0.14.10
Compiling which v4.4.0
Compiling syn v2.0.22
warning: ring (build script) generated 2 warnings
Compiling synstructure v0.12.6
Compiling scroll_derive v0.10.5
Compiling zerocopy-derive v0.3.2
Compiling der_derive v0.5.0
Compiling der_derive v0.4.1
Compiling scroll v0.10.2
Compiling td-uefi-pi v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-uefi-pi)
Compiling tdx-tdcall v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/tdx-tdcall)
Compiling serde_derive v1.0.164
Compiling zeroize_derive v1.4.2
Compiling td-layout v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-layout)
Compiling td-shim v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-shim)
Compiling rustc-std-workspace-core v1.99.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/rustc-std-workspace-core)
Compiling spdmlib v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-spdm/spdmlib)
Compiling alloc v0.0.0 (/root/.rustup/toolchains/nightly-2022-11-15-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc)
Compiling cfg-if v1.0.0
Compiling untrusted v0.7.1
Compiling byteorder v1.4.3
Compiling zeroize v1.6.0
Compiling cpufeatures v0.2.8
Compiling time-core v0.1.1
Compiling conquer-util v0.3.0
Compiling bitfield v0.13.2
Compiling der v0.4.5
Compiling bytes v1.4.0
Compiling codec v0.2.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-spdm/codec)
Compiling const-oid v0.7.1
Compiling bit_field v0.9.0
Compiling getrandom v0.2.10
Compiling raw-cpuid v10.7.0
Compiling time v0.3.22
Compiling conquer-once v0.3.2
Compiling bitmap-allocator v0.1.0 (https://github.com/rcore-os/bitmap-allocator?rev=03bd9909#03bd9909)
Compiling der v0.5.1
Compiling rust-tpm-20-ref v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-tpm-20-ref)
Compiling zerocopy v0.6.1
Compiling td-loader v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-loader)
Compiling spinning_top v0.2.5
warning: unused doc comment
--> deps/rust-spdm/external/ring/src/aead/chacha.rs:110:9
|
110 | / /// XXX: Although this takes an Iv, this actually uses it like a
111 | | /// Counter.
| |____________^
112 | / extern "C" {
113 | | fn GFp_ChaCha20_ctr32(
114 | | out: *mut u8,
115 | | in_: *const u8,
... |
119 | | );
120 | | }
| |
- rustdoc does not generate documentation for extern blocks
|
= help: use // for a plain comment
= note: #[warn(unused_doc_comments)] on by default

Compiling linked_list_allocator v0.10.5
Compiling global v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/src/global)
Compiling protocol v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/src/protocol)
warning: crate-level attribute should be in the root module
--> src/tpm/src/std_lib.rs:5:24
|
5 | #![cfg_attr(not(test), no_std)]
| ^^^^^^
|
= note: #[warn(unused_attributes)] on by default

warning: crate-level attribute should be in the root module
--> src/tpm/src/std_lib.rs:7:1
|
7 | #![feature(alloc_error_handler)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: crate-level attribute should be in the root module
--> src/tpm/src/std_lib.rs:8:1
|
8 | #![feature(naked_functions)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Compiling sys_time v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-spdm/sys_time)
warning: field cpu_features is never read
--> deps/rust-spdm/external/ring/src/digest.rs:49:5
|
38 | pub(crate) struct BlockContext {
| ------------ field in this struct
...
49 | cpu_features: cpu::Features,
| ^^^^^^^^^^^^
|
= note: BlockContext has a derived impl for the trait Clone, but this is intentionally ignored during dead code analysis
= note: #[warn(dead_code)] on by default

Compiling td-exception v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-exception)
Compiling td-logger v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/td-logger)
Compiling webpki v0.22.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/rust-spdm/external/webpki)
Compiling crypto-common v0.1.6
Compiling block-buffer v0.10.4
Compiling digest v0.10.7
Compiling sha2 v0.10.7
warning: ring (lib) generated 2 warnings
warning: extern block uses type u128, which is not FFI-safe
--> src/tpm/src/tpm2_sys.rs:7488:10
|
7488 | ) -> u128;
| ^^^^ not FFI-safe
|
= note: 128-bit integers don't currently have a known stable ABI
= note: #[warn(improper_ctypes)] on by default

Compiling cc-measurement v0.1.0 (/home/tdx-tools-2023ww27/build/rhel-8/intel-mvp-vtpm-td/rpmbuild/BUILD/vtpm-td/deps/td-shim/cc-measurement)
error: could not find native static library smallc, perhaps an -L flag is missing?

warning: tpm (lib) generated 4 warnings
error: could not compile tpm due to previous error; 4 warnings emitted
warning: build failed, waiting for other jobs to finish...
error: Bad exit status from /var/tmp/rpm-tmp.Z4688Y (%build)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.Z4688Y (%build)

Where to get the TDX SEAM Module intel-mvp-tdx-module-spr

README says:

NOTE: Please get separated RPM for signed build TDX SEAM Module and install via sudo dnf install intel-mvp-tdx-module-spr."

Where do I get this module?

I've already built everything and hacked distribution through dockerhub (while waiting for #83)

docker create --name intel-tdx-tools-repo mattipaksula/intel-tdx-tools-repo true
docker cp intel-tdx-tools-repo:/host /srv/tdx-host
docker cp intel-tdx-tools-repo:/guest /srv/tdx-guest

echo """[tdx-host-local]
name=tdx-host-local
baseurl=file:///srv/tdx-host
enabled=1
gpgcheck=0
module_hotfixes=true""" > /etc/yum.repos.d/tdx-host-local.repo

sudo dnf install intel-mvp-spr-kernel intel-mvp-tdx-tdvf intel-mvp-spr-qemu-kvm intel-mvp-tdx-libvirt

# separate signed module whatever it means
# sudo dnf install intel-mvp-tdx-module-spr ??

dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2

Hi, I built it on Ubuntu 22.04. I ran "./build-repo.sh" and, after a long time, it failed; the last few log lines are below.
By the way, the build needs at least 45 GB of disk space and takes a long time; I think this should be mentioned in the documentation.
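A pre-flight check along these lines could save a wasted run; the 45 GB figure is taken from the report above and should be treated as a rough floor (GNU df assumed):

```shell
#!/bin/sh
# Warn before a long build when free disk space looks insufficient.
avail_gb() {
    # free gigabytes on the filesystem containing the given path
    df -BG --output=avail "${1:-.}" | tail -n 1 | tr -dc '0-9'
}

NEED_GB=45
if [ "$(avail_gb .)" -lt "$NEED_GB" ]; then
    echo "WARNING: less than ${NEED_GB} GB free; the build may fail" >&2
fi
```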

dpkg-buildpackage: info: host architecture amd64
dpkg-source: info: using options from mvp-tdx-qemu-7.0.50/debian/source/options: --extend-diff-ignore=^capstone/|^dtc/|^slirp/|^meson/|^roms/./|^.git-submodule-status$
fakeroot debian/rules clean
echo '# autogenerated file, please edit debian/control-in' > debian/control.tmp
sed -e 's/^:ubuntu://' -e '/^:[a-z]*:/D' debian/control-in >> debian/control.tmp
mv -f debian/control.tmp debian/control
chmod -w debian/control
dh_testdir
rm -rf b
find scripts/ -name '*.pyc' -delete || :
find: ‘scripts/’: No such file or directory
rm -f debian/qemu-user.1
rm -f configs/devices/x86_64-softmmu/microvm.mak
dh_clean
debian/rules build
make: *** No rule to make target 'configure', needed by 'b/qemu/configured'. Stop.
dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
debuild: fatal error at line 1182:
dpkg-buildpackage -us -uc -ui -b failed

My machine info:
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
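A likely culprit for `No rule to make target 'configure'` in a QEMU package build is missing source content, most often unfetched git submodules; running `git submodule update --init --recursive` in the QEMU tree before rebuilding is a plausible first step (an assumption, not a confirmed fix for this exact failure). This sketch lists the submodule paths a tree declares so empty ones stand out:

```shell
#!/bin/sh
# Print the submodule paths declared in a .gitmodules file; an empty
# directory at one of these paths means the submodule was never fetched.
submodule_paths() {
    awk -F' *= *' '$1 ~ /path$/ { print $2 }' "$1"
}
```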

Encountered an error after executing ./run -s all under RHEL 8.7 with TDX 1.0 2023WW22

========================================= ERRORS ==========================================
_________________________ ERROR at setup of test_tdvm_acpi_reboot _________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
>       assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
----------------------------------- Captured log setup ------------------------------------
2023-07-08 12:39:57 ERROR Found the duplicate key in yaml file /home/tdx1.0/tdx-tools-2023ww22.rdc/tests/tests/../artifacts.yaml
Traceback (most recent call last):
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/utils/pycloudstack/pycloudstack/artifacts.py", line 291, in load
self._dict = yaml.load(fobj, Loader=yaml.FullLoader)
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/venv/lib/python3.6/site-packages/yaml/__init__.py", line 114, in load
return loader.get_single_data()
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/venv/lib/python3.6/site-packages/yaml/constructor.py", line 51, in get_single_data
return self.construct_document(node)
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/venv/lib/python3.6/site-packages/yaml/constructor.py", line 55, in construct_document
data = self.construct_object(node)
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/venv/lib/python3.6/site-packages/yaml/constructor.py", line 100, in construct_object
data = constructor(self, node)
File "/home/tdx1.0/tdx-tools-2023ww22.rdc/utils/pycloudstack/pycloudstack/artifacts.py", line 332, in _no_duplicates_constructor
f"found duplicate key ({key})", key_node.start_mark)
yaml.constructor.ConstructorError: while constructing a mapping
in "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/tests/../artifacts.yaml", line 11, column 1
found duplicate key (source)
in "/home/tdx1.0/tdx-tools-2023ww22.rdc/tests/tests/../artifacts.yaml", line 13, column 1
_________________________ ERROR at setup of test_efi_acpi_reboot __________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
________________________ ERROR at setup of test_legacy_acpi_reboot ________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
________________________ ERROR at setup of test_tdvm_acpi_shutdown ________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
________________________ ERROR at setup of test_efi_acpi_shutdown _________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
_______________________ ERROR at setup of test_legacy_acpi_shutdown _______________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
___________________________ ERROR at setup of test_td_max_vcpu ____________________________

@pytest.fixture(scope="session")
def artifact_factory():
    """
    The artifact factory from artifacts.yaml
    """
    manifest_file = os.path.join(os.path.dirname(__file__), "../", "artifacts.yaml")
    fobj = artifacts.ArtifactManifest(manifest_file)
  assert fobj.load() is not None

E assert None is not None
E + where None = <bound method ArtifactManifest.load of {}>()
E + where <bound method ArtifactManifest.load of {}> = {}.load

tests/conftest.py:119: AssertionError
(The identical artifact_factory fixture setup error repeats verbatim for every remaining test; see the short test summary below.)

  • generated html file: file:///home/tdx1.0/tdx-tools-2023ww22.rdc/tests/output/all-localhost.localdomain-rhel-root-report-2023-07-08-12-39-53.html -
    ================================= short test summary info =================================
    ERROR tests/test_acpi_reboot.py::test_tdvm_acpi_reboot - assert None is not None
    ERROR tests/test_acpi_reboot.py::test_efi_acpi_reboot - assert None is not None
    ERROR tests/test_acpi_reboot.py::test_legacy_acpi_reboot - assert None is not None
    ERROR tests/test_acpi_shutdown.py::test_tdvm_acpi_shutdown - assert None is not None
    ERROR tests/test_acpi_shutdown.py::test_efi_acpi_shutdown - assert None is not None
    ERROR tests/test_acpi_shutdown.py::test_legacy_acpi_shutdown - assert None is not None
    ERROR tests/test_max_cpu.py::test_td_max_vcpu - assert None is not None
    ERROR tests/test_max_cpu.py::test_efi_max_vcpu - assert None is not None
    ERROR tests/test_max_cpu.py::test_legacy_max_vcpu - assert None is not None
    ERROR tests/test_multiple_tdvms.py::test_tdvms_coexist_create_destroy - assert None is n...
    ERROR tests/test_tdvm_lifecycle.py::test_tdvm_lifecycle_virsh_suspend_resume - assert No...
    ERROR tests/test_tdvm_lifecycle.py::test_tdvm_lifecycle_virsh_start_shutdown - assert No...
    ERROR tests/test_tdvm_network.py::test_tdvm_wget - assert None is not None
    ERROR tests/test_tdvm_network.py::test_tdvm_ssh_forward - assert None is not None
    ERROR tests/test_tdvm_network.py::test_tdvm_bridge_network_ip - assert None is not None
    ERROR tests/test_tdvm_tsc.py::test_tdvm_clocksource_tsc - assert None is not None
    ERROR tests/test_tdvm_tsc.py::test_tdvm_cpuid_tscfreq - assert None is not None
    ERROR tests/test_tdvm_tsc.py::test_tdvm_compare2_host_tscfreq - assert None is not None
    ERROR tests/test_tdx_guest_status.py::test_tdvm_tdx_initialized - assert None is not None
    ERROR tests/test_vm_coexist.py::test_tdguest_with_legacy_base - assert None is not None
    ERROR tests/test_vm_reboot_qga.py::test_tdvm_qga_reboot - assert None is not None
    ERROR tests/test_vm_reboot_qga.py::test_efi_qga_reboot - assert None is not None
    ERROR tests/test_vm_reboot_qga.py::test_legacy_qga_reboot - assert None is not None
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[td-default] - assert None is...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[td-acpi] - assert None is no...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[td-agent] - assert None is n...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[efi-default] - assert None i...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[efi-acpi] - assert None is n...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[efi-agent] - assert None is ...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[legacy-default] - assert Non...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[legacy-acpi] - assert None i...
    ERROR tests/test_vm_shutdown_mode.py::test_vm_shutdown_mode[legacy-agent] - assert None ...
    ERROR tests/test_vm_shutdown_qga.py::test_tdvm_qga_shutdown - assert None is not None
    ERROR tests/test_vm_shutdown_qga.py::test_efi_qga_shutdown - assert None is not None
    ERROR tests/test_vm_shutdown_qga.py::test_legacy_qga_shutdown - assert None is not None
    ERROR tests/test_workload_nginx.py::test_tdvm_nginx - assert None is not None
    ERROR tests/test_workload_redis.py::test_tdvm_redis - assert None is not None
    ============================== 11 passed, 37 errors in 0.35s ==============================

Allow user to specify root partition of guest VM

Currently, when booting a VM, the root partition is taken from pre-defined values on the kernel command line and cannot be customized. In the long term, it would be better to let the user specify the root partition in a configuration file, so that different guest images can be accommodated.
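A minimal sketch of what this could look like. The `guest.cfg` file name and the `ROOT_PARTITION` variable are hypothetical, chosen here for illustration only:

```shell
# Sketch: read the guest root partition from an optional config file
# instead of hard-coding it on the kernel command line.
# "guest.cfg" and ROOT_PARTITION are hypothetical names.
load_root_partition() {
    cfg="${1:-guest.cfg}"
    ROOT_PARTITION="/dev/vda1"      # current pre-defined default
    [ -f "$cfg" ] && . "$cfg"       # config file may override ROOT_PARTITION
    echo "root=${ROOT_PARTITION}"
}

# The result would be spliced into the guest kernel command line, e.g.:
#   -append "$(load_root_partition) console=hvc0"
```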

can guest vm verify tdx

Can a guest VM tell whether it is running inside TDX?

E.g. if Azure claims that the machine is TDX-enabled, how can I be sure?
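One way a guest can check, assuming a TDX-enlightened guest kernel: recent kernels expose a `tdx_guest` CPU flag in `/proc/cpuinfo` (TDX guests also enumerate a distinctive CPUID leaf 0x21). A minimal sketch:

```shell
# Sketch: report whether a cpuinfo dump advertises the tdx_guest flag.
# Assumes a guest kernel new enough to expose the flag.
has_tdx_guest() {
    grep -qw tdx_guest "${1:-/proc/cpuinfo}"
}

if has_tdx_guest; then
    echo "running inside a TD"
else
    echo "no tdx_guest flag (not a TD, or kernel too old to report it)"
fi
```

Note that a CPU flag only shows what the kernel believes about itself; for real assurance against a lying host, the TD would need to obtain a TD quote and have it verified remotely.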

Is there a way to simulate a TDX capable CPU?

When trying to set up tdx-tools I encountered an issue similar to #223:

./start-qemu.sh -i td-guest-ubuntu-22.04.qcow2 -b grub
   ....
qemu-system-x86_64: -accel kvm: vm-type X86_TDX_VM not supported by KVM

According to the response in issue #223, the problem seems to be that I do not have a TDX-capable CPU. As far as I know, no such CPUs have been released to the public yet.

Is there another way to emulate/simulate TDX? I do have a computer with an SGX 1.0-capable CPU.
If there is documentation in this regard, I must have missed it.

Thank you for your help.

Capture build logs to a file

The current build mechanism does not capture build logs for the components it builds; logs have to be captured manually.

Please capture each component's build log to a file, so that build failures can be diagnosed automatically during the build process.
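As an interim workaround (a sketch, not part of the current build scripts), each build step can be wrapped so its combined stdout/stderr is mirrored to a file while still reaching the console:

```shell
# Sketch: run a build command with stdout+stderr mirrored to a log file.
log_build() {
    log="$1"
    shift
    "$@" 2>&1 | tee "$log"
}

# e.g.: log_build build-repo.log ./build-repo.sh
```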

speed up building

Currently, with my parallel-building branch (#81), it takes ~21 minutes to build all packages. Without this optimization the package build times are even longer.

Suggestion: add a Dockerfile for each package that splits build.sh into stages, so that once built, the result is cached by docker build (potentially using BuildKit caches). Calling the build again would then be a no-op thanks to the caches, until a build arg such as the version changes.

tdx hardware

Hot on the heels of #86: it now looks like various YouTubers such as Linus Tech Tips and Serve The Home are receiving samples.

Where do we, GitHubbers, get samples? All stores list prebuilt systems/CPUs as "coming soon".

vm-type X86_TDX_VM not supported by KVM (Ubuntu 22.04)

Hi,

I followed all the instructions in the build/ubuntu-22.04/ README.md and everything built successfully, but I get this error when running ./start-qemu.sh -i td-guest-ubuntu-22.04.qcow2 -b grub:

qemu-system-x86_64: -accel kvm: vm-type X86_TDX_VM not supported by KVM

I tried changing the QEMU binary path to use the one I built (QEMU_EXEC="~/projects/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/b/qemu/qemu-system-x86_64") but the error remains.

Am I somehow not using the right KVM stack? lsmod shows that the kvm and kvm_intel modules are loaded, though.

Thanks in advance for your help.
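A quick host-side check (a sketch, assuming the MVP TDX host kernel; the parameter name may differ across kernel versions): KVM only accepts TD VMs when `kvm_intel` was loaded with TDX support, e.g. via `modprobe kvm_intel tdx=1`.

```shell
# Sketch: check whether kvm_intel was loaded with TDX support.
# The parameter path assumes an MVP/TDX-enabled host kernel.
kvm_tdx_enabled() {
    p="${1:-/sys/module/kvm_intel/parameters/tdx}"
    [ -f "$p" ] && [ "$(cat "$p")" = "Y" ]
}

if kvm_tdx_enabled; then
    echo "kvm_intel has TDX enabled"
else
    echo "kvm_intel lacks TDX (wrong kernel/module, or tdx=1 not set)"
fi
```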

What is TDX supported platform?

Hi,

I use Ubuntu 22.04 on an Intel(R) Xeon(R) CPU E3-1280 v6 @ 3.90GHz. Following the instructions for Ubuntu, I passed all the steps, but the last step, testing the guest image, fails with:

qemu-system-x86_64: -accel kvm: vm-type X86_TDX_VM not supported by KVM

Then I followed the 'dmesg | grep -i TDX' step from verify_tdx_host.md. It shows:

[ 0.000000] Linux version 5.19.0-mvp12v2+2-generic (sean@rdma17) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #mvp12v2+tdx SMP PREEMPT_DYNAMIC Thu Feb 9 07:52:57 UTC 2023
[ 0.292060] KVM-debug: PASS: single step TDX module emulated CPUID 0
[ 0.292060] KVM-debug: PASS: single step TDX module emulated RDMSR 0x1a0
[ 16.735980] tdx: Cannot enable TDX on TDX disabled platform

May I know what the prerequisites are for running this GitHub repo? And where can I find out which platforms have TDX enabled or disabled?

Thank you very much!

centos8 docker build fails on installing docker

On a fresh CentOS 8 Stream install (only using it because of #73), I can't even get to the docker build because of:

$ curl -Lsf get.docker.io | sh
...
+ '[' -n '' ']'
+ sh -c 'yum install -y -q docker-ce'
Error:
 Problem: problem with installed package buildah-1:1.24.2-2.module_el8.7.0+1106+45480ee0.x86_64
  - package buildah-1:1.24.2-2.module_el8.7.0+1106+45480ee0.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.24.0-0.7.module_el8.6.0+944+d413f95e.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1:1.23.1-2.module_el8.6.0+954+963caf36.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.22.3-2.module_el8.6.0+926+8bef8ae7.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.22.3-2.module_el8.5.0+911+f19012f9.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.22.3-1.module_el8.5.0+901+79ce9cba.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
...

dwarves missing on centos8

On a fresh CentOS 8 Stream install (only using it because of #73):

$ dnf install -y git createrepo

$ git clone https://github.com/intel/tdx-tools.git
$ cd tdx-tools/build/centos-stream-8
$ ./build-repo.sh
...
+ echo Build...
Build...
+ sudo -E dnf builddep -y /root/tdx-tools/build/centos-stream-8/intel-mvp-spr-kernel/spr-kernel.spec
Last metadata expiration check: 0:03:55 ago on Sun 24 Apr 2022 12:13:07 PM EEST.
Package bash-4.4.20-4.el8.x86_64 is already installed.
Package bc-1.07.1-5.el8.x86_64 is already installed.
Package binutils-2.30-114.el8.x86_64 is already installed.
Package bzip2-1.0.6-26.el8.x86_64 is already installed.
Package diffutils-3.6-6.el8.x86_64 is already installed.
No matching package to install: 'dwarves'

check-tdx-host.sh install has wrong package in install help

on ubuntu 22.04

root@sage:~/tdx-tools/utils# ./check-tdx-host.sh

    *** TDX Host Check ***

Error: "rdmsr" is not installed.
Please install via apt install msr-tool (Ubuntu) or dnf install msr-tools (RHEL/CentOS)
root@sage:~/tdx-tools/utils# apt install msr-tool
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package msr-tool

--> it's msr-tools

Build Failure for intel-mvp-tdx-qemu-kvm

tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/roms/openbios/packages/init.c
In function ‘ppc64_patch_handlers’,
inlined from ‘cpu_970_init’ at /home/kkbodke/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/roms/openbios/arch/ppc/qemu/init.c:433:5:
/home/kkbodke/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/roms/openbios/arch/ppc/qemu/init.c:400:10: error: array subscript 0 is outside array bounds of ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=array-bounds]
400 | *dsi = 0x48002002;
| ~~~~~^~~~~~~~~~~~
/home/kkbodke/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/roms/openbios/arch/ppc/qemu/init.c:403:10: error: array subscript 0 is outside array bounds of ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=array-bounds]
403 | *isi = 0x48002202;

cc1: all warnings being treated as errors
make[1]: *** [rules.mak:323: target/arch/ppc/qemu/init.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory '/home/kkbodke/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-qemu-kvm/mvp-tdx-qemu-v7.2.0/b/openbios/obj-ppc'
make: *** [debian/rules:466: b/openbios/obj-ppc/.built] Error 2
dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
debuild: fatal error at line 1182:
dpkg-buildpackage -us -uc -ui -b failed

No Sapphire Rapids Xeon supports TDX outside of major hyperscalers

TDX support is coming in Emerald Rapids: https://www.techpowerup.com/304114/intel-xeon-sapphire-rapids-to-be-quickly-joined-by-emerald-rapids-granite-rapids-and-sierra-forest-in-the-next-two-years

The README is misleading with "Please contact Intel sales representative for on-premise bare metal server or processor or please refer What Intel® Xeon Processors Support for Intel® Trust Domain Extensions (Intel® TDX)?" - there is no server or processor that anyone other than a major cloud vendor has access to.

TDX disabled in bios

I know this is not directly related to tdx-tools, but what could be missing? I cannot enable TDX; it's not selectable and looks disabled.

The CPU is w3-2423, motherboard is Asus Pro WS W790E-SAGE SE

(screenshot)

'tdx' not supported by KVM

Hi,

Could you please give me advice on possible reasons for the error 'qemu-system-x86_64: -accel kvm: kvm-type 'tdx' not supported by KVM' when I execute './start-qemu.sh -i td-guest-ubuntu-22.04.qcow2 -b grub'?

I use the ubuntu-22.04 version and the mvp-tdx-5.15 branch, following the instructions for Ubuntu 22.04 (https://github.com/intel/tdx-tools/tree/mvp-tdx-5.15/build/ubuntu-22.04). The detailed error output is as follows:

WARN: Using HVC console for grub, could not accept key input in grub menu

Guest Image : ./build/ubuntu-22.04/guest-image/td-guest-ubuntu-22.04.qcow2
Kernel binary :
OVMF_CODE : /usr/share/qemu/OVMF_CODE.fd
OVMF_VARS : /home/patrick/tdx-tools/OVMF_VARS.fd
VM Type : td
CPUS : 1
Boot type : grub
Monitor port : 9001
Enable vsock : false
Enable debug : false
Console : HVC

Remapping CTRL-C to CTRL-]
Launch VM:
/usr/bin/qemu-system-x86_64 -accel kvm -name process=tdxvm,debug-threads=on -m 2G -vga none -monitor pty -no-hpet -nodefaults -drive file=/home/patrick/tdx-tools/build/ubuntu-22.04/guest-image/td-guest-ubuntu-22.04.qcow2,if=virtio,format=qcow2 -monitor telnet:127.0.0.1:9001,server,nowait -device loader,file=/usr/share/qemu/OVMF_CODE.fd,id=fd0,config-firmware-volume=/home/patrick/tdx-tools/OVMF_VARS.fd -object tdx-guest,id=tdx -cpu host,-kvm-steal-time,pmu=off -machine q35,pic=no,kernel_irqchip=split,kvm-type=tdx,confidential-guest-support=tdx -device virtio-net-pci,netdev=mynet0 -smp 1 -netdev user,id=mynet0,hostfwd=tcp::10026-:22 -chardev stdio,id=mux,mux=on,logfile=/home/patrick/tdx-tools/vm_log_2023-01-05T1922.log -device virtio-serial,romfile= -device virtconsole,chardev=mux -monitor chardev:mux -serial chardev:mux -nographic
char device redirected to /dev/pts/5 (label compat_monitor0)
qemu-system-x86_64: -accel kvm: kvm-type 'tdx' not supported by KVM

Attestation procedure

Hi! First, thanks for your great work. Could you please describe how an application running in your patched QEMU can be attested remotely? Is it possible at all at the TDX level? Should any third-party solutions be used, as with Gramine SGX?

Thanks in advance!

Secret migration in TDX

Hello,
I have a general question about the functionality of TDX, as I could not find the corresponding information in the docs.
In SGX, the enclave author was allowed to move secrets between two of their own enclaves. As I understand, this was mainly to facilitate updates to enclaves.
Does this feature still exist in TDX? For containers, it might still be useful.
I appreciate your help.

hetzner.com Dell DX293 with XEON Gold 6438Y+

https://www.hetzner.com/dedicated-rootserver/dx293

lscpu

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 57 bits virtual
  Byte Order:            Little Endian
CPU(s):                  128
  On-line CPU(s) list:   0-127
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Xeon(R) Gold 6438Y+
    CPU family:          6
    Model:               143
    Thread(s) per core:  2
    Core(s) per socket:  32
    Socket(s):           2
    Stepping:            8
    CPU max MHz:         4000.0000
    CPU min MHz:         800.0000
    BogoMIPS:            4000.00
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   3 MiB (64 instances)
  L1i:                   2 MiB (64 instances)
  L2:                    128 MiB (64 instances)
  L3:                    120 MiB (2 instances)
NUMA:
  NUMA node(s):          2
  NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
  NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerabilities:
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Not affected
  Tsx async abort:       Not affected

and check-tdx-host.sh:

(screenshot: check-tdx-host.sh output)

TDX cannot be enabled in the BIOS even after SGX was enabled.

Encrypting of mounted folders

Hi! When using SGX frameworks (like Scone, Gramine, or Edgeless), it's possible to create so-called protected volumes: folders mounted into the image that are writable from within an enclave, appear encrypted to anyone accessing them from the outside, and whose contents are integrity-protected by the SGX mechanisms against outside interference.

Is it possible to do something similar with TDX? For example, mount a shared folder with QEMU so that it is encrypted and accessible only from the running virtual system?
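For context: TDX protects the TD's memory and CPU state, but a folder shared with the host (e.g. via virtio-9p or virtiofs) is by design visible to the host in plaintext, so there is no direct TDX equivalent of SGX protected volumes. What can work instead is guest-side encryption at rest, with the key kept inside the TD. A minimal dry-run sketch (not part of tdx-tools; the device and mount point are hypothetical, and `run` only echoes the commands rather than executing them):

```shell
#!/bin/sh
# Sketch: guest-side encryption of a dedicated virtio disk with LUKS,
# so the host only ever sees ciphertext at rest. /dev/vdb and the mount
# point are hypothetical; run() echoes instead of executing so the flow
# can be inspected without root.
run() { echo "+ $*"; }

run cryptsetup luksFormat /dev/vdb            # one-time: initialize LUKS on the disk
run cryptsetup open /dev/vdb secretvol        # unlock inside the TD (key stays in the TD)
run mkfs.ext4 /dev/mapper/secretvol           # one-time: create a filesystem
run mount /dev/mapper/secretvol /mnt/secret   # data is encrypted before it leaves the TD
```

Integrity protection comparable to SGX protected file systems would need an additional layer such as dm-integrity or dm-verity; plain dm-crypt provides confidentiality only.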

/usr/lib/shim not found & ./debian/shim/install not executable

Hi, I am running build-repo.sh in tdx-tools/build/ubuntu-22.04.

My machine info:
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy

The error info (part):

install: WARNING: ignoring --strip-program option as -s option was not specified
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -d -m 0755 /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -d -m 0700 /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -d -m 0755 /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/BOOT/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -d -m 0755 /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/ubuntu/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -d -m 0755 /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/usr/lib/shim
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 shimx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/BOOT//BOOTX64.EFI
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 shimx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/ubuntu//
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 shimx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/usr/lib/shim/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 BOOTX64.CSV /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/ubuntu//
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 BOOTX64.CSV /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/usr/lib/shim/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0644 fbx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/BOOT//
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 fbx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/usr/lib/shim/
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0644 mmx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/BOOT//
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0644 mmx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/boot/efi/EFI/ubuntu//
install: WARNING: ignoring --strip-program option as -s option was not specified
install --strip-program=true -m 0700 mmx64.efi /home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/tmp/usr/lib/shim/
install: WARNING: ignoring --strip-program option as -s option was not specified
make: Leaving directory '/home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4'
rm -rf debian/tmp/usr/src
rm -rf debian/tmp/boot/efi/EFI/BOOT/BOOT*.EFI
make[1]: Leaving directory '/home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4'
dh_install
/home/xian/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/shim.install: 1: /usr/lib/shim: not found
dh_install: warning: debian/shim.install is marked executable but does not appear to an executable config.
dh_install: warning:
dh_install: warning: If debian/shim.install is intended to be an executable config file, please ensure it can
dh_install: warning: be run as a stand-alone script/program (e.g. "./debian/shim.install")
dh_install: warning: Otherwise, please remove the executable bit from the file (e.g. chmod -x "debian/shim.install")
dh_install: warning:
dh_install: warning: Please see "Executable debhelper config files" in debhelper(7) for more information.
dh_install: warning:
dh_install: error: debian/shim.install (executable config) returned exit code 127
make: *** [debian/rules:38: binary] Error 25
dpkg-buildpackage: error: fakeroot debian/rules binary subprocess returned exit status 2
debuild: fatal error at line 1182:
dpkg-buildpackage -us -uc -ui -i -I -b failed

I wonder if you could give some advice on this. Thank you.
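For what it's worth, the debhelper warning in the log points at a likely workaround: `debian/shim.install` carries the executable bit, so `dh_install` tries to run it as a config script, and its first line (`/usr/lib/shim`) fails with exit code 127. Clearing the bit before re-running the build may get past this (a sketch; the path is taken from the log above, adjust to your checkout):

```shell
#!/bin/sh
# Sketch: clear the executable bit that makes dh_install treat
# debian/shim.install as a script (as suggested by the debhelper(7)
# warning in the build log).
fix_install_perms() {
    chmod -x "$1"
}

# Path from the log above; adjust to your checkout:
# fix_install_perms ~/tdx-tools/build/ubuntu-22.04/intel-mvp-tdx-guest-shim/mvp-tdx-guest-shim-15.4/debian/shim.install
```

After that, re-running build-repo.sh should let `dh_install` read the file as a plain install list.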

Build does not do an incremental build

If the first build fails for some reason and I fix the error and re-issue the build, it rebuilds all the packages from the beginning, which is time-consuming.
Can the build check for already-built packages, skip rebuilding them, and continue from where it failed?
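As a stopgap until the scripts support resuming, each package build can be wrapped in a guard that skips it when its output artifact already exists. A sketch with a hypothetical artifact-path convention (the real build step is commented out):

```shell
#!/bin/sh
# Sketch: skip a package whose build artifact already exists, so a
# re-run resumes roughly where the previous build failed.
# The artifact path convention is hypothetical.
build_if_missing() {
    pkg="$1"
    artifact="$2"
    if [ -e "$artifact" ]; then
        echo "skip $pkg (found $artifact)"
    else
        echo "build $pkg"
        # ( cd "$pkg" && ./build.sh )   # real build step would go here
    fi
}
```

A loop over the package directories calling `build_if_missing` would then only rebuild packages whose artifacts are missing.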

Need to support custom ip addresses for multiple TD VMs

Hi,
I need to create a k8s cluster with multiple TD VMs in my use case. However, no matter how many TD VMs I launch, the IP and MAC addresses of enp0s1 are always 10.0.2.15 and 52:54:00:12:34:56. Therefore, I cannot successfully deploy a Kubernetes cluster with these TD VMs via start-qemu.sh.
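For context, the identical 10.0.2.15 / 52:54:00:12:34:56 comes from QEMU's default user-mode (slirp) networking, which gives every guest the same NAT'd address. One way around it (a sketch, assuming a host bridge such as `br0` exists and that extra options can be passed through to QEMU) is a tap netdev per VM plus a unique, locally administered MAC derived from a VM index:

```shell
#!/bin/sh
# Sketch: derive a unique, locally administered MAC for each TD VM,
# to be used with a per-VM tap device instead of QEMU's default
# user-mode NAT (which hands every guest 10.0.2.15 / 52:54:00:12:34:56).
vm_mac() {
    # $1 is a small VM index (0-255); 52:54:00 is QEMU's usual OUI.
    printf '52:54:00:00:00:%02x\n' "$1"
}

IDX=1
MAC=$(vm_mac "$IDX")
echo "$MAC"
# The per-VM QEMU options would then look like (illustrative):
#   -netdev tap,id=net0,ifname=tap$IDX,script=no,downscript=no \
#   -device virtio-net-pci,netdev=net0,mac=$MAC
# with each tap$IDX attached to a host bridge (e.g. br0), so every
# guest can DHCP or be given a static address on the same subnet.
```

With distinct MACs and bridged taps, each TD VM gets its own address and the Kubernetes nodes can reach one another.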

SGX is deprecated and not available in the majority of Sapphire Rapids processors

Running utils/check-tdx-host.sh gives me

(screenshot: check-tdx-host.sh output)

Isn't SGX deprecated? I can't find any place to enable it in the latest BIOS of the Asus Pro WS W790E-SAGE SE.

This is all the "SGX" configuration there is:

(screenshot: BIOS SGX settings)

And at https://www.intel.com/content/www/us/en/products/sku/233484/intel-xeon-w32423-processor-15m-cache-2-10-ghz/specifications.html

(screenshot: ark.intel.com specifications page)

The same applies to, for example, https://ark.intel.com/content/www/us/en/ark/products/233481/intel-xeon-w52445-processor-26-25m-cache-3-10-ghz.html

Can I create an independent OS image and Kernel image for the guest

I was wondering if it is possible to create an OS image and a kernel vmlinuz image for the guest without having to run the entire build-repo.sh script. I think I've gotten it to create the operating system image correctly, but when I run the build.sh script in the intel-mvp-tdx-kernel directory, no vmlinuz file is generated as far as I can see.
Thanks in advance,
Darragh.
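On the Ubuntu flow the kernel typically lands inside a `linux-image` .deb rather than as a loose file, which may be why no vmlinuz is visible. One way to get at it (a sketch; the package filename is illustrative) is to unpack the .deb with `dpkg-deb -x` and look under `boot/`:

```shell
#!/bin/sh
# Sketch: pull a loose vmlinuz out of a built linux-image .deb instead
# of installing it. dpkg-deb -x unpacks a package into a directory
# without touching the running system.
find_vmlinuz() {
    # List kernel images under an unpacked package tree.
    find "$1" -name 'vmlinuz-*' -type f
}

# Illustrative usage (package name depends on the kernel build):
# dpkg-deb -x linux-image-*-mvp*.deb ./unpacked
# find_vmlinuz ./unpacked    # prints e.g. ./unpacked/boot/vmlinuz-...
```

The same unpacked tree usually also contains the matching modules under `lib/modules/`, which the guest needs alongside the kernel.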
