
proxmox (pve) post-installation optimizing and helper scripts

Home Page: https://eXtremeSHOK.com

License: Other

Shell 100.00%
proxmox proxmox-cluster proxmox-ve install debian scripts

xshok-proxmox's Introduction

xshok-proxmox :: eXtremeSHOK.com Proxmox (pve)

Scripts for working with and optimizing proxmox

Maintained and provided by https://eXtremeSHOK.com

Please Submit Patches / Pull requests

Optimization / Post Install Script (install-post.sh aka postinstall.sh) run once

Turns a fresh proxmox install into an optimised proxmox host. Not required if the server was set up with installimage-proxmox.sh.

Adds a 'reboot-quick' command which uses kexec to boot the latest kernel; it's a fast method of rebooting, without needing to do a hardware reboot.
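For reference, a minimal sketch of what a kexec-based quick reboot involves (the kexec-tools and systemctl invocations are standard; the exact steps reboot-quick performs are an assumption):

# load the newest installed kernel and jump straight into it, skipping firmware/POST
latest="$(ls -1v /boot/vmlinuz-* | tail -n 1)"
ver="${latest#/boot/vmlinuz-}"
kexec -l "/boot/vmlinuz-${ver}" --initrd="/boot/initrd.img-${ver}" --reuse-cmdline
systemctl kexec   # cleanly stops services, then executes the loaded kernel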

  • Disable the enterprise repo, enable the public repo, add non-free sources (the repo switch is sketched after this list)
  • Fixes known bugs (public key missing, max user watches, etc)
  • Update the system
  • Detect AMD EPYC CPU and Apply Fixes
  • Force APT to use IPv4
  • Update proxmox and install various system utils
  • Customise bashrc
  • add the latest ceph provided by proxmox
  • Disable portmapper / rpcbind (security)
  • Ensure entropy pools are populated, which prevents slowdowns whilst waiting for entropy
  • Protect the web interface with fail2ban
  • Detect if running in a virtual machine and install the relevant guest agent
  • Install ifupdown2 for a virtual internal network, allowing rebootless networking changes (not compatible with openvswitch-switch)
  • Limit the size and optimise journald
  • Install kernel source headers
  • Install kexec, allows for quick reboots into the latest updated kernel set as primary in the boot-loader.
  • Ensure ksmtuned (ksm-control-daemon) is enabled and optimise according to ram size
  • Set language; if changed, this will disable XS_NOAPTLANG
  • Increase max user watches, FD limit, FD ulimit, max key limit, ulimits
  • Optimise logrotate
  • Lynis security scan tool by Cisofy
  • Increase Max FS open files
  • Optimise Memory
  • Pretty MOTD BANNER
  • Enable Network optimising
  • Save bandwidth and skip downloading additional languages, requires XS_LANG="en_US.UTF-8"
  • Disable enterprise proxmox repo
  • Remove subscription banner
  • Install openvswitch for a virtual internal network
  • Detect if this is an OVH server and install OVH Real Time Monitoring
  • Set pigz to replace gzip, 2x faster gzip compression
  • Bugfix: high swap usage with low memory usage
  • Enable TCP BBR congestion control
  • Enable TCP fastopen
  • Enable testing proxmox repo
  • Automatically Synchronize the time
  • Set Timezone, empty = set automatically by IP
  • Install common system utilities
  • Increase vzdump backup speed
  • Optimise ZFS ARC size according to memory size
  • Install zfs-auto-snapshot
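As referenced in the first bullet, switching off the enterprise repo usually amounts to something like the following (a hedged sketch assuming PVE 7 on Debian bullseye; adjust the release codename, and note the script's exact commands may differ):

# comment out the enterprise repo and enable pve-no-subscription
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update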

https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/install-post.sh

The script returns 0 on success.

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/install-post.sh -c -O install-post.sh && bash install-post.sh && rm install-post.sh

TO SET AND USE YOUR OWN OPTIONS (using xs-install-post.env)

User-defined options for the (install-post.sh) post-installation script for Proxmox are set in xs-install-post.env; see the sample: xs-install-post.env.sample

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/xs-install-post.env.sample -c -O xs-install-post.env
wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/install-post.sh -c -O install-post.sh
nano xs-install-post.env
bash install-post.sh

TO SET AND USE YOUR OWN OPTIONS (using ENV)

Example to disable the MOTD banner

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/install-post.sh -c -O install-post.sh
export XS_MOTD="no"
bash install-post.sh

Install Proxmox Recommendations

Recommended partitioning scheme:

  • Raid 1 (mirror) 40 000MB ext4 /
  • Raid 1 (mirror) 30 000MB ext4 /xshok/zfs-cache (create only if using an SSD and there are 1+ unused HDDs, which will be made into a zfspool)
  • Raid 1 (mirror) 5 000MB ext4 /xshok/zfs-slog (create only if using an SSD and there are 1+ unused HDDs, which will be made into a zfspool)
  • SWAP (the size rule is sketched after this list)
    • HDD less than 130GB = 16GB swap
    • HDD more than 130GB and RAM less than 64GB = 32GB swap
    • HDD more than 130GB and RAM more than 64GB = 64GB swap
  • Remaining space for LV xfs /var/lib/vz (LVM)
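The swap rule above can be expressed directly in shell (a sketch; HDD_GB and RAM_GB are illustrative variable names, sizes in GB):

# pick swap size from disk and RAM size, per the rule above
if [ "$HDD_GB" -lt 130 ]; then
  SWAP_GB=16
elif [ "$RAM_GB" -lt 64 ]; then
  SWAP_GB=32
else
  SWAP_GB=64
fi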

Hetzner Proxmox Installation Guide

see hetzner folder

OVH Proxmox Installation Guide

see ovh folder

------- SCRIPTS ------

Convert from Debian 11 to Proxmox 7 (debian11-2-proxmox7.sh) optional

Assumptions: Debian 11 installed with a valid FQDN hostname set

  • Tested on KVM, VirtualBox and Dedicated Server
  • Will automatically detect cloud-init and disable it.
  • Will automatically generate a correct /etc/hosts
  • Note: will automatically run the install-post.sh script
curl -O https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/debian-2-proxmox/debian11-2-proxmox7.sh && chmod +x debian11-2-proxmox7.sh
./debian11-2-proxmox7.sh

Convert from Debian 10 to Proxmox 6 (debian10-2-proxmox6.sh) optional

curl -O https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/debian-2-proxmox/debian10-2-proxmox6.sh && chmod +x debian10-2-proxmox6.sh
./debian10-2-proxmox6.sh

Convert from Debian 9 to Proxmox 5 (debian9-2-proxmox5.sh) optional

curl -O https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/debian-2-proxmox/debian9-2-proxmox5.sh && chmod +x debian9-2-proxmox5.sh
./debian9-2-proxmox5.sh

Enable Docker support for an LXC container (pve-enable-lxc-docker.sh) optional

There can be security implications, as the LXC container runs in a higher-privileged mode.

curl https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/helpers/pve-enable-lxc-docker.sh --output /usr/sbin/pve-enable-lxc-docker && chmod +x /usr/sbin/pve-enable-lxc-docker
pve-enable-lxc-docker container_id
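For context, enabling Docker inside an LXC container on Proxmox typically means granting the container extra features in its config (the "features" key and pct commands are real Proxmox syntax; whether the helper sets exactly these flags is an assumption):

# hypothetical container id; appends keyctl/nesting support, then restarts the container
CTID=101
echo "features: keyctl=1,nesting=1" >> "/etc/pve/lxc/${CTID}.conf"
pct stop "$CTID" && pct start "$CTID"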

Convert from LVM to ZFS (lvm-2-zfs.sh) run once

Converts an MDADM-based LVM volume into a ZFS pool (raid level auto-detected; see below)

  • Defaults to mount point: /var/lib/vz
  • Optional: specify the LVM_MOUNT_POINT ( ./lvm-2-zfs.sh LVM_MOUNT_POINT )
  • Creates the following storage/rpools:
    • zfsbackup (rpool/backup)
    • zfsvmdata (rpool/vmdata)
    • /var/lib/vz/tmp_backup (rpool/tmp_backup)
  • Will automatically detect the required raid level and optimise:
    • 1 drive = zfs
    • 2 drives = mirror
    • 3-5 drives = raidz-1
    • 6-11 drives = raidz-2
    • 11+ drives = raidz-3

NOTE: WILL DESTROY ALL DATA ON LVM_MOUNT_POINT

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/zfs/lvm-2-zfs.sh -c -O lvm-2-zfs.sh && chmod +x lvm-2-zfs.sh
./lvm-2-zfs.sh

Create ZFS from devices (createzfs.sh) optional

Creates a zfs pool from specified devices

  • Will automatically detect the required raid level and optimise (see the sketch after the usage command below):
    • 1 drive = zfs (single)
    • 2 drives = mirror (raid1)
    • 3-5 drives = raidz-1 (raid5)
    • 6-11 drives = raidz-2 (raid6)
    • 11+ drives = raidz-3 (raid7)

NOTE: WILL DESTROY ALL DATA ON SPECIFIED DEVICES

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/zfs/createzfs.sh -c -O createzfs.sh && chmod +x createzfs.sh
./createzfs.sh poolname /dev/device1 /dev/device2
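The drive-count mapping above translates into a vdev selection like this (a sketch, not the script's actual code; note the original list places 11 drives in both the raidz-2 and raidz-3 ranges, so the sketch resolves 11+ to raidz3):

# usage sketch: createzfs_sketch poolname /dev/device1 /dev/device2 ...
pool="$1"; shift
n=$#
if   [ "$n" -eq 1 ]; then vdev=""        # single device
elif [ "$n" -eq 2 ]; then vdev="mirror"
elif [ "$n" -le 5 ]; then vdev="raidz1"
elif [ "$n" -le 10 ]; then vdev="raidz2"
else                      vdev="raidz3"
fi
zpool create "$pool" $vdev "$@"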

Create ZFS cache and slog from the /xshok/zfs-cache and /xshok/zfs-slog partitions and add them to a zpool (xshok_slog_cache-2-zfs.sh) optional

Adds cache and slog devices, built from the partitions above, to an existing zpool

  • Will automatically mirror the slog and stripe the cache if there are multiple drives

NOTE: WILL DESTROY ALL DATA ON SPECIFIED PARTITIONS

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/zfs/xshok_slog_cache-2-zfs.sh -c -O xshok_slog_cache-2-zfs.sh && chmod +x xshok_slog_cache-2-zfs.sh
./xshok_slog_cache-2-zfs.sh poolname
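In zpool terms, the behaviour described above corresponds to something like the following (a sketch; the partition paths are illustrative, not what the script necessarily computes):

# slog partitions are mirrored; cache devices stripe by default
zpool add poolname log mirror /dev/sda1 /dev/sdb1
zpool add poolname cache /dev/sda2 /dev/sdb2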

CREATES A ROUTED vmbr0 AND NAT vmbr1 NETWORK CONFIGURATION FOR PROXMOX (network-configure.sh) run once

Autodetects the correct settings (interface, gateway, netmask, etc.). Supports IPv4 and IPv6. The private network uses 10.10.10.1/24. Also installs and properly configures isc-dhcp-server to provide DHCP on vmbr1 (NAT).

ROUTED (vmbr0): all traffic is routed via the main IP address and uses the MAC address of the physical interface. VMs can have multiple IP addresses and do NOT require a MAC to be set for the IP via the service provider.

NAT (vmbr1): allows a VM to have internet connectivity without requiring its own public IP address. Assigns 10.10.10.100 - 10.10.10.200 via DHCP.

Public IPs can be assigned via DHCP by adding a host definition to the /etc/dhcp/hosts.public file.

Tested on OVH and Hetzner based servers

Also creates a NAT private network as vmbr1.

NOTE: WILL OVERWRITE /etc/network/interfaces. A backup will be created as /etc/network/interfaces.timestamp.

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/networking/network-configure.sh -c -O network-configure.sh && chmod +x network-configure.sh
./network-configure.sh && rm network-configure.sh
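For orientation, a NAT bridge like the vmbr1 this script creates usually looks something like the following in /etc/network/interfaces (a sketch following the stock Proxmox NAT example, using the 10.10.10.1/24 network from the description; the script's generated file will differ in detail):

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE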

Creates default routes to allow extra IP ranges to be used (network-addiprange.sh) optional

If no interface is specified the default gateway interface will be detected and used.

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/networking/network-addiprange.sh -c -O network-addiprange.sh && chmod +x network-addiprange.sh
./network-addiprange.sh ip.xx.xx.xx/cidr interface_optional
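What this boils down to is adding a route for the extra range onto the bridge (a sketch; 192.0.2.0/28 is a documentation range used here purely for illustration):

ip route add 192.0.2.0/28 dev vmbr0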

Create Private mesh vpn/network (tincvpn.sh)

Sets up a tinc private mesh vpn/network which supports multicast; ideal for private cluster communication.

wget https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/networking/tincvpn.sh -c -O tincvpn.sh && chmod +x tincvpn.sh
./tincvpn.sh -h

Example for 3 node Cluster

cat /etc/hosts
# global ips for tinc servers
11.11.11.11 host1
22.22.22.22 host2
33.33.33.33 host3

First Host (hostname: host1)

bash tincvpn.sh -i 1 -c host2

Second Host (hostname: host2)

bash tincvpn.sh -i 2 -c host3

Third Host (hostname: host3)

bash tincvpn.sh -i 3 -c host1

NOTES

Alpine Linux KVM / Qemu Agent Client Fix

Run the following on the Alpine Linux guest:

apk update && apk add qemu-guest-agent acpi
echo 'GA_PATH="/dev/vport2p1"' >> /etc/conf.d/qemu-guest-agent
rc-update add qemu-guest-agent default
rc-update add acpid default
/etc/init.d/qemu-guest-agent restart

Proxmox ACME / Letsencrypt

Run the following on the proxmox server; ensure the server has a valid DNS name which resolves:

pvenode acme account register default [email protected]
pvenode config set --acme domains=example.invalid
pvenode acme cert order

ZFS Snapshot Usage

# list all snapshots
zfs list -t snapshot
# create a pre-rollback snapshot
zfs-auto-snapshot --verbose --label=prerollback -r //
# rollback to a specific snapshot
zfs rollback <snapshotname>

xshok-proxmox's People

Contributors

adrian-hoasted, extremeshok, jbarnaby-medallia, jodumont, mczdsm, mjkl-gh, orfeous, tinof, unstable-deadlock, webaseo, yorch


xshok-proxmox's Issues

OS root on ZFS?

I have a soyoustart dedicated server and your script works like a charm! Is there any way to also move the OS? Maybe in rescue mode?

Hetzner Install

ERROR: You need a /boot partition when using x x software RAID level 0, 5, 6 or 10

The server has 4 identical drives, so it's trying to use raid 10.

cat: /sys/block/sda/queue/rotational: No such file or directory

On Hetzner AX-51-nvme

./install-hetzner.sh HOST

RAID ENABLED
RAID Devices: sda,sdb,sdc,sdd
Set RAID level to 10
Detecting and setting optimal swap partition size
cat: /sys/block/sda/queue/rotational: No such file or directory
SSD Detected, RAID 10 enabled, ignoring slog partition
cat: /sys/block/sda/queue/rotational: No such file or directory
SSD Detected, RAID 10 enabled, ignoring cache partition
ERROR: Drive is too small

So the installation does not even start. This happens after the first boot into rescue mode, as instructed.

I guess NVMe drives are not yet supported?

install-post.sh: line 350: syntax error

pve-kernel-5.15 is already the newest version (7.2-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
install-post.sh: line 350: syntax error near unexpected token `fi'
install-post.sh: line 350: `fi'

install-post.sh : chown /var/lib/ceph/osd/*/block* no such directory

Maybe since this is a fresh install and I'm not intentionally doing anything with ceph, that might explain the following error:

chown: cannot access '/var/lib/ceph/osd/*/block*': No such file or directory

If I understood Ceph better and actually had a cluster of machines set up in the same location, I might try ceph.

5.4 ISO of Proxmox VE downloaded 5/12/2019
Downloaded the install-post.sh script also 5/12/2019

Script failing when using latest Proxmox 6

Thanks for this project, it's awesome!

I've just found an issue when running the script following the instructions on OVH, on a new Proxmox 6: the script silently fails to finish, specifically at this line:

## Install zfs-auto-snapshot
/usr/bin/env DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::='--force-confdef' install zfs-auto-snapshot

because the package cannot be found. The reason is this line:

## Add non-free to sources
sed -i "s/main contrib/main non-free contrib/g" /etc/apt/sources.list

As contrib is not configured by default:

# See http://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.html
# for how to upgrade to newer versions of the distribution.
deb http://deb.debian.org/debian buster main
deb-src http://deb.debian.org/debian buster main

Thanks!
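A more defensive fix than substituting "main contrib" would be to append the missing components to any line ending in plain "main" (an editorial sketch, not the project's own fix):

sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
apt-get update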

Needs some updating

Needs some updating, but this script is INSANELY useful and time-saving... I will look into updating it once I get the hang of what I did to make it all work, lol.

Proxmox + UEFI Installation Issue @Hetzner

Hi

Tried to install using your installimage-proxmox.sh script with UEFI activated on my AX101 server.

It worked perfectly WITHOUT UEFI activated, but with UEFI on, the script simply exits the bash window straight after opening.
The server is inaccessible after rebooting and must go into rescue again.

I read that the approach could be to install with UEFI off and after completion turn it back on.

Any experience with this?

Thanks!

Drive too small error with NVMe drives

Hello, your scripts don't work with NVMe drives, much less with already-initialised software raid. When the installation script comes to check the drives, it of course fails with the "drive too small" error, as those drives aren't sd* ones.
The drives are of type /dev/nvme0n1 and /dev/nvme1n1 on a 2-drive system.

ZFS Fails on kernel upgrades

After installing Proxmox 5.4 with ZFS raid 0 atop my storage, the system boots fine, but after upgrading the kernel something breaks and I end up with an unbootable system.

Reading all physical volumes. This may take a while ...
zfs: '$MY_ZFS_ARC_MIN' invalid for parameter 'zfs_arc_min'
zfs: '$MY_ZFS_ARC_MAX' invalid for parameter 'zfs_arc_max'

Failed to load ZFS modules.
Manually load the modules and exit.

The immediate workaround is to boot from an earlier installed kernel, which seems to work, then edit /etc/modprobe.d/zfs.conf to use direct values for the ZFS caching parameters (min & max).

Well, I have a 16 GB RAM system, so here are the direct values I put into /etc/modprobe.d/zfs.conf:

# eXtremeSHOK.com ZFS tuning
# Use 1/16 RAM for MAX cache, 1/8 RAM for MIN cache, or 1GB
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=2147483648
# use the prefetch method
options zfs l2arc_noprefetch=0
# max write speed to l2arc
# tradeoff between write/read and durability of ssd (?)
# default: 8 * 1024 * 1024
# setting here: 500 * 1024 * 1024
options zfs l2arc_write_max=524288000

Script NETWORKING (vmbr0 vmbr1) failing when using latest Proxmox 6

I tried your script on a Hetzner dedicated server, but the script failed.

./network-configure.sh && rm network-configure.sh
Downloading network-addiprange.sh script
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4892 100 4892 0 0 154k 0 --:--:-- --:--:-- --:--:-- 154k
Auto detecting existing network settings
ERROR: Could not detect all IPv4 varibles
IP: Netmask: 255.255.255.255 Gateway: 5.9.xx.xxx

How can I fix your script?

Same result on my Proxmox in our local network with FritzBox as gateway.

Hetzner Install Issue

Hetzner issue not resolved: when the script pulls from your image location, it requests a Debian image that is no longer available in the rescue system.

problems with special hardware @ hetzner

Hiho from "good old germany",

I've a brand-new AX101 from Hetzner with two NVMe drives, plus an SSD (for boot and root) and an HDD for separate backups.
The automated install works fine; the OS is on the SSD.
But lvm-2-zfs is not working for me, maybe due to the missing software raid.
I would like to add the two NVMe drives as a ZFS pool and switch the mountpoint to /var/lib/vz, which is currently on the default LVM on the SSD.
I know I can do this manually, but it would be nice to get it up and running with a script.

I also tried createzfs.sh; the result:

./createzfs.sh rpool /dev/nvme0n1 /dev/nvme1n1
Clearing partitions: /dev/nvme0n1
/dev/nvme0n1 ->
Clearing partitions: /dev/nvme1n1
/dev/nvme1n1 ->
Enable ZFS to autostart and mount
Ensure ZFS is started
Creating the array
Creating ZFS mirror (raid1)
invalid vdev specification: mirror requires at least 2 devices
ERROR: creating ZFS

I just need a little hint about what to change to fix this.

regards

Rico

install-post.sh bug

Hi!
This script failed with the following error on a freshly installed proxmox:

Executing: /lib/systemd/systemd-sysv-install enable fail2ban
vm.swappiness = 10
vm.swappiness = 10
vm.min_free_kbytes = 524288
install-post.sh: line 334: syntax error: unexpected end of file

After removing the ## Pretty MOTD BANNER part from the script, everything works as intended.

Uninstall Script

Hi,
is there an uninstall script?
There are lots of changes against the original, but I need the original configuration.
Or do I have to install proxmox again?

PVE 8 support?

Any news on modifications for the 8 update? I'm seeing a few changes that seem worth a closer look, especially in /etc/systemd/system.conf and /etc/systemd/user.conf, which have significant changes (albeit all commented out), so I'm just curious about this. Presently I'm just updating to the new version and putting the old edits back in, since they still seem relevant.

tinc routing issue

Hello and thank you for all your hard work.

I'm testing your tinc script for 2 hosts on separate networks.

192.168.10.1 host1
192.168.11.1 host2

I can see the vpn is running by pinging between them on the 10.10.1.x network.

How can I make them talk by using their hostnames/192.168.x.x addresses?
What am I missing? Do I need to run any of your other scripts? Do I need a new routing rule?

Uninstall Xshok

How can I uninstall this? I'm using a Hetzner node.

The reason I'm removing it: I'm getting a weird DHCP error, and I think it's caused by xshok, but I'm not 100% sure. My network seems to be fine.

So how can I uninstall this?

lvm-2-zfs.sh : ERROR: /var/lib/vz not found (on a NVMe)

Says
-----)
STARTING CONVERSION
ERROR: /var/lib/vz not found
(-----
but
cd /var/lib/
ll
gives
-----)
drwxr-xr-x 7 root root 4096 Apr 7 01:54 vz/
(-----
Note : on a Debian GNU/Linux 9.13 (stretch)
-----)
[root@**** lib]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 2M 0 part
├─nvme0n1p2 259:2 0 1G 0 part /boot
├─nvme0n1p3 259:3 0 1G 0 part [SWAP]
└─nvme0n1p4 259:4 0 929.5G 0 part
└─VolGroup00-system_root 253:0 0 929.5G 0 lvm /
(-----
Thanks !

Ceph repo url

The url added in ceph.list is invalid. It should be http:// not https://.
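A one-liner along these lines would correct the scheme (an editorial sketch; the list file path is assumed):

sed -i 's|https://|http://|g' /etc/apt/sources.list.d/ceph.list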

Problem install on dedicated server Hetzener

Hi, when I launch the script I receive this error:

root@rescue ~ # ./hetzner-install-proxmox.sh "proxmox.mydomain"
There is no screen to be resumed matching proxmox-install.

OS: PVE
LVM: TRUE
RAID: 1
BOOT: 1
ROOT: 100
SWAP: 128
ZFS_L2ARC: 0
ZFS_SLOG: 0
Total+1: 230
TARGET_SIZE_GB: 1788
INSTALL_TARGET: nvme0n1,nvme1n1
NVME_COUNT: 2
NVME_TARGET: nvme0n1,nvme1n1
NVME_TARGET_COUNT: 2
SSD_COUNT: 0
SSD_TARGET:
SSD_TARGET_COUNT: 0
HDD_COUNT: 0
HDD_TARGET:
HDD_TARGET_COUNT: 0

--2022-03-29 15:31:13-- https://raw.githubusercontent.com/extremeshok/xshok-proxmox/master/hetzner-install/pve
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8002::154, 2606:50c0:8001::154, 2606:50c0:8000::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8002::154|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-03-29 15:31:14 ERROR 404: Not Found.

Error: postinstall file was not found: /root/pve

OVH network config

The network config script throws an error
"ERROR: Could not detect all IPv4 varibles
IP: Netmask: 255.255.255.0 Gateway: redacted"

It does show the right IPv4 IP though. I set the partitions in the manager and had the manager execute the post-install script, and received no errors. I also ran the zfs script as instructed with no errors and rebooted the server.

ERROR: no devices found for sda3 in /proc/mdstat

Hi,

when I run your script, I get this output:

Found partition, continuing

Found raid, continuing
sda3
Found lv, continuing

ERROR: no devices found for sda3 in /proc/mdstat

My Proxmox is running on top of a HP server with included RAID controller ...

install-post.sh dbus response

After

Processing triggers for dbus (1.10.26-0+deb9u1) ...

this error came up:

W: APT had planned for dpkg to do more than it reported back (325 vs 516).

Proxmox 5.4 iso downloaded 5/12/2019
install-post.sh downloaded 5/12/2019

zfs_arc_max cannot be the same as or less than zfs_arc_min

Dear maintainers,

I've been using bits of your script to maintain my proxmox servers for a couple of years now. First of all, many thanks for this great work. It has helped me a lot.

Recently I noticed my homelab server started running out of RAM. It's an older microserver with only 16GB of RAM, therefore min and max were both set to 1GB

Somehow it was ignoring the zfs_arc_max set in the modprobe directory. After some searching I found this:

apparently something was changed, and min and max can't be set to the same amount any longer. Doing so makes it fall back to the default (half of memory).
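If that behaviour applies, the takeaway is to keep the two values distinct and rebuild the initramfs so the module options take effect (a sketch reusing the 1 GiB / 2 GiB values from the earlier report):

# /etc/modprobe.d/zfs.conf -- max strictly greater than min
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=2147483648
# then: update-initramfs -u -k all && reboot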

no log in auth.log and daemon.log

Hello my people.

I need help with the following: I am using fail2ban, but when I check the proxmox logs auth.log and daemon.log, they contain no records.

How do I fix this? Syslog in the web GUI does work.

Syslog record in the proxmox web GUI:
pvedaemon[1885456]: authentication failure; rhost=::ffff:5.42.199.51 user=root@pam msg=Authentication failure

daemon.log and auth.log: no record.

Thx.

License

Hi,

could you please add a LICENSE file to the repository?

Thanks :)

How to uninstall changes?

Installed your script, but it breaks my network traffic from LXC containers and made my home internal DNS resolution impossible (running pihole as an LXC on the host).

So, how can I reverse the changes made? There is a closed issue with the same question, but no documentation...

best regards,
Tom

mismatch Remove subscription banner AND pve-manager 7.2-xx

A little issue in install-post.sh:

  1. Mismatch between the two sed commands touching /etc/cron.daily/xs-pve-nosub: the one written by
    cat <<'EOF' > /etc/cron.daily/xs-pve-nosub
    and the sed command in
    echo "DPkg::Post-Invoke { \"dpkg -V proxmox-widget-toolkit | grep -q '/proxmoxlib\.js$'; if [ \$? -eq 1 ]; then { echo 'Removing subscription nag from UI...'; sed -i '/data.status/{s/\!//;s/Active/NoMoreNagging/}' /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js; }; fi\"; };" > /etc/apt/apt.conf.d/xs-pve-no-nag && apt --reinstall install proxmox-widget-toolkit
  2. It does not really work on pve-manager 7.2-11/b76d3178 with proxmox-widget-toolkit 3.5.1:
    • applying the 2 sed commands from install-post.sh lines 570-571, the subscription banner disappears, but the modal disappears too when clicking Package versions (Datacenter > pve > Summary > Package versions)
    • try this to apply on pve-manager 7.2-11/b76d3178 with proxmox-widget-toolkit 3.5.1

regard
ngadmini

Install script not installing OVH RMT

Just noticed that the OVH RMT installation part fails with the following error:

--2020-01-30 01:05:23--  ftp://ftp.ovh.net/made-in-ovh/rtm/install_rtm.sh
           => 'install_rtm.sh'
Resolving ftp.ovh.net (ftp.ovh.net)... 213.186.33.9
Connecting to ftp.ovh.net (ftp.ovh.net)|213.186.33.9|:21... failed: Connection refused.

According to this page, it can be installed with:

wget -qO - https://last-public-ovh-infra-yak.snap.mirrors.ovh.net/yak/archives/apply.sh | OVH_PUPPET_MANIFEST=distribyak/catalog/master/puppet/manifests/common/rtmv2.pp bash

pve enable lxc docker ??

Hi again;

On which version of proxmox do you use this script (pve-enable-lxc-docker.sh)?
I don't doubt those manipulations were effective before, but on Proxmox 5.4-6
I simply installed docker without aufs and, bingo, it works out of the box inside a standard unprivileged debian container:

apt-get install -y \
  apt-transport-https \
  ca-certificates curl \
  gnupg2 \
  software-properties-common

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
  tee /etc/apt/sources.list.d/docker.list

apt update

apt-get install -y --no-install-recommends \
  containerd.io \
  cgroupfs-mount \
  docker-ce docker-ce-cli \
  libltdl7 \
  git \
  pigz

OVH error on raid1 -> required varible not found or the server is already converted to zfs

I've followed the readme guide to the letter on OVH, installing their Proxmox 5.4-6.
When I launch lvm-2-zfs.sh I get this error:

ERROR: required varible not found or the server is already converted to zfs

What am I doing wrong?

root@server4:~#  ./lvm-2-zfs.sh && rm lvm-2-zfs.sh
+++++++++++++++++++++++++
WILL DESTROY ALL DATA ON
/var/lib/vz
+++++++++++++++++++++++++
[CTRL]+[C] to exit
+++++++++++++++++++++++++
5..
4..
3..
2..
1..
STARTING CONVERSION
Found partition, continuing
MY_LVM_DEV=/dev/md4
Found raid, continuing
MY_MD_RAID=
Found lv, continuing
MY_LV=
ERROR: required varible not found or the server is already converted to zfs

If you need it I can give you access to the OVH manager :)

Getting error when updating system with apt

By running an apt upgrade I am getting a weird error message, but I really don't know how to interpret it.
At the end of the upgrade process I was asked to select the device on which to install grub. Not knowing which device to choose, I selected them all. Unfortunately, the update could not write to the device partition and I had to choose the cancel option.

At the end I got the following messages (do I need to worry about my server not booting if I reboot?):

File descriptor 3 (pipe:[373447891]) leaked on vgs invocation. Parent PID 4052325: grub-install.real
File descriptor 3 (pipe:[373447891]) leaked on vgs invocation. Parent PID 4052325: grub-install.real
grub-install.real: error: disk 'lvmid/PS4tyV-43fJ-yUhL-QAzj-qNw1-21rB-l7itxc/9577Sg-SHNG-qpUs-9khR-8bht-w6hj-rSYSlB' not found.
Installing for i386-pc platform.

[the same "leaked on vgs invocation" / "disk 'lvmid/...' not found" block repeats for each device grub-install was attempted on]

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.60-1-pve
Found initrd image: /boot/initrd.img-5.15.60-1-pve
/usr/sbin/grub-probe: error: disk 'lvmid/PS4tyV-43fJ-yUhL-QAzj-qNw1-21rB-l7itxc/9577Sg-SHNG-qpUs-9khR-8bht-w6hj-rSYSlB' not found.
Found linux image: /boot/vmlinuz-5.15.39-4-pve
Found initrd image: /boot/initrd.img-5.15.39-4-pve
/usr/sbin/grub-probe: error: disk 'lvmid/...' not found. [repeated]
Found linux image: /boot/vmlinuz-5.15.30-2-pve
Found initrd image: /boot/initrd.img-5.15.30-2-pve
/usr/sbin/grub-probe: error: disk 'lvmid/...' not found. [repeated]
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done

Getting error with network-configure.sh on OVH server

Hi Extremshock,

I'm getting the following error with the network-configure.sh script; basically the script cannot detect the IP. Can you please let me know how to resolve this issue?


/etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# vmbr0: Bridging. Make sure to use only MAC adresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address ###.###.###.###/24
    gateway ###.###.###.254
    bridge_ports enp3s0
    bridge_stp off
    bridge_fd 0


Generating /etc/default/isc-dhcp-server...
Job for isc-dhcp-server.service failed because the control process exited with error code.
See "systemctl status isc-dhcp-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript isc-dhcp-server, action "start" failed.

  • isc-dhcp-server.service - LSB: DHCP server
    Loaded: loaded (/etc/init.d/isc-dhcp-server; generated; vendor preset: enabled)
    Active: failed (Result: exit-code) since Wed 2019-07-17 01:12:32 UTC; 5ms ago
    Docs: man:systemd-sysv-generator(8)
    Process: 11642 ExecStart=/etc/init.d/isc-dhcp-server start (code=exited, status=1/FAILURE)
    CPU: 28ms

Jul 17 01:12:30 host2 systemd[1]: Starting LSB: DHCP server...
Jul 17 01:12:30 host2 isc-dhcp-server[11642]: Launching both IPv4 and IPv6 servers (pl…er).
Jul 17 01:12:30 host2 dhcpd[11668]: Wrote 0 leases to leases file.
Jul 17 01:12:32 host2 isc-dhcp-server[11642]: Starting ISC DHCPv4 server: dhcpdcheck s…led!
Jul 17 01:12:32 host2 isc-dhcp-server[11642]: failed!
Jul 17 01:12:32 host2 systemd[1]: isc-dhcp-server.service: Control process exited, co…tus=1
Jul 17 01:12:32 host2 systemd[1]: Failed to start LSB: DHCP server.
Jul 17 01:12:32 host2 systemd[1]: isc-dhcp-server.service: Unit entered failed state.
Jul 17 01:12:32 host2 systemd[1]: isc-dhcp-server.service: Failed with result 'exit-code'.
Hint: Some lines were ellipsized, use -l to show in full.
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Processing triggers for systemd (232-25+deb9u11) ...
Downloading network-addiprange.sh script
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4892 100 4892 0 0 24828 0 --:--:-- --:--:-- --:--:-- 24832
Creating /etc/sysctl.d/99-networking.conf
Auto detecting existing network settings
ERROR: Could not detect all IPv4 varibles
IP: Netmask: 255.255.255.0 Gateway: 94.23.2.254

Regards,
Riz
