Proxmox VE

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization platform that lets you manage both virtual machines and containers in a unified environment. It runs the KVM hypervisor on bare metal for full hardware virtualization and offers lightweight container-based virtualization through LXC, providing robust and efficient management of diverse workloads.
🌐 Resources
📌 Some of the following commands are based on the Proxmox VE Helper-Scripts - make sure they are updated
❗ Use the Proxmox shell on the main node via the pve web GUI
Updating PVE - Manually
Open the Proxmox shell on the main node (or SSH into PVE -> risky)
```bash
pveupgrade
reboot
# apt update && apt -y dist-upgrade
```

Use this Proxmox VE Helper-Script to:
- Correct Proxmox VE Sources
- Disable `pve-enterprise` repository
- Enable `pve-no-subscription` repository
- Enable Ceph package repositories
- Add (disabled) `pvetest` repository
- Disable subscription nag (delete browser cache)
- Disable high availability
- Update Proxmox VE
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh)"
# It is recommended to answer "yes" (y) to all options presented during the process.
```

Kernel Clean
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/kernel-clean.sh)"
```

Processor Microcode
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/microcode.sh)"
```

Network configuration
```bash
cat /etc/network/interfaces
```

```
auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.5.2/24
    gateway 192.168.5.254
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

iface wlp3s0 inet manual
```

```bash
cat /etc/resolv.conf
```

```
search lan.syselement.com
nameserver 9.9.9.9
nameserver 1.1.1.1
```

```bash
cat /etc/hosts
```

```
127.0.0.1 localhost.localdomain localhost
192.168.5.2 pve.lan.syselement.com pve

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
```

Quick Benchmark
```bash
wget https://cdn.geekbench.com/Geekbench-6.5.0-Linux.tar.gz
tar -xzvf Geekbench-6.5.0-Linux.tar.gz
cd Geekbench-6.5.0-Linux
./geekbench6
```

```bash
curl -sL https://yabs.sh | bash
```

Software on PVE
```bash
apt install -y btop duf eza fio gdu htop ipcalc jq lm-sensors nano net-tools nvme-cli tmux tree ugrep
```

bash Config
Set custom aliases
```bash
nano ~/.bashrc
```

```bash
# Custom aliases
alias ipa='ip -br -c a'
#alias l='exa -lah'
alias l='eza -lah --group-directories-first'
alias la='ls -A'
alias ll='l -T'
alias ls='ls -lh --color=auto'
alias ports='ss -lpntu'
alias updatepve='apt update && apt -y dist-upgrade'
```

```bash
# Load changes:
source ~/.bashrc
```

Netdata observability
```bash
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh --stable-channel --disable-telemetry
```

or use the Proxmox VE Netdata script
Backup Proxmox Config
Backup
Download the script
```bash
cd /root/; wget -qO prox_config_backup.sh https://raw.githubusercontent.com/DerDanilo/proxmox-stuff/master/prox_config_backup.sh
```

Set the permanent backups directory environment variable, or edit the script to set the `$DEFAULT_BACK_DIR` variable to your preferred backup directory

```bash
export BACK_DIR="/path/to/backup/directory"
```

Make the script executable

```bash
chmod +x ./prox_config_backup.sh
```

Shut down ALL VMs + LXC containers if you want to go the safe way (not required) - see the sketch below.
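A bulk-shutdown sketch (assumption: run on the PVE host; guest IDs are parsed from the `qm list`/`pct list` output):

```bash
# Gracefully shut down all local VMs, then all LXC containers
for id in $(qm list | awk 'NR>1 {print $1}'); do qm shutdown "$id"; done
for id in $(pct list | awk 'NR>1 {print $1}'); do pct shutdown "$id"; done
```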
Run the script

```bash
./prox_config_backup.sh
```

Notification
The script supports healthchecks.io notifications, either to the hosted service or to a self-hosted instance. The notification is sent during the final cleanup stage and reports either 0 to tell Healthchecks that the command succeeded, or the exit error code (1-255) to tell Healthchecks that it failed. To enable:
- Set the `$HEALTHCHECK` variable to `1`
- Set the `$HEALTHCHECK_URL` variable to the full ping URL for your check. Do not include anything after the UUID; the status flag will be added by the script.
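A minimal sketch of those two settings (the UUID is a placeholder; either export the variables or edit them inside the script):

```bash
# Placeholder ping URL - keep nothing after the UUID
export HEALTHCHECK=1
export HEALTHCHECK_URL="https://hc-ping.com/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
./prox_config_backup.sh
```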
Proxmox Backup Server (PBS)

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/proxmox-backup-server.sh)"
# PBS Interface <IP>:8007
# Set a root password if using autologin. This will be the PBS password.
# Login to WebGUI and open PBS shell
sudo passwd root
```

PROXMOX - Network > edit eth0 and set the Static IP.
PBS post install
- Disable the Enterprise Repo
- Add/Correct PBS Sources
- Enable the No-Subscription Repo
- Add Test Repo
- Disable Subscription Nag
- Update and reboot Proxmox Backup Server
Run the command below in the Proxmox Backup Server Shell and answer "yes" to all options presented
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pbs-install.sh)"
```

LXC
LXCs - Undo Autologin + Temporary SSH root login
If you don't set a root password first, you will not be able to login to the container again, ever.
Set the root password

```bash
sudo passwd root
```

Remove `--autologin root` from `/etc/systemd/system/container-getty@1.service.d/override.conf`, then reboot.
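A one-liner sketch of that edit (assumption: the override path used by the helper scripts):

```bash
# Strip the autologin flag from the getty override, then reboot
sed -i 's/--autologin root //' /etc/systemd/system/container-getty@1.service.d/override.conf
reboot
```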
```bash
# Login via Console and set password for root user
passwd

# Temp set root login
nano /etc/ssh/sshd_config
# Add line
PermitRootLogin yes

systemctl restart ssh
```

❗ Remember to disable root login with `PermitRootLogin no` when it is no longer necessary.
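A quick sketch to flip it back afterwards:

```bash
# Re-disable root SSH login when finished
sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config && systemctl restart ssh
```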
LXCs - Cleaner
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/clean-lxcs.sh)"
```

LXCs - Updater
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/update-lxcs.sh)"
```

LXC - Filesystem Trim
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/fstrim.sh)"
```

Ubuntu LXC + UniFi Network Server
UniFi Network Server - https://192.168.5.10:8443 on Ubuntu LXC
Ubuntu LXC
First, install the Ubuntu LXC with the specs below (defaults are 1 vCPU, 512 MB RAM, 2 GB disk), which the UniFi Network Server needs - set them via Advanced Settings during the Helper Script launch:

- 2 vCPU
- 2 GB RAM
- 8 GB Disk
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/ubuntu.sh)"
```

```
# Manual Advanced Settings
🧩 Using Advanced Settings on node pve
🖥️ Operating System: ubuntu
🌟 Version: 24.04
📦 Container Type: Unprivileged
🔐 Root Password: ********
🆔 Container ID: 110
🏠 Hostname: unifi
💾 Disk Size: 8 GB
🧠 CPU Cores: 2
🛠️ RAM Size: 2048 MiB
🌉 Bridge: vmbr0
📡 IPv4 Address: 192.168.5.10/24
🌐 Gateway IP Address: 192.168.5.254
📡 IPv6: Disabled
📡 APT-Cacher IP Address: Default
⚙️ Interface MTU Size: Default
🔍 DNS Search Domain: Host
📡 DNS Server IP Address: Host
🏷️ Vlan: Default
📡 Tags: community-script;os;unifi
🔑 Root SSH Access: yes
🗂️ Enable FUSE Support: no
🔍 Verbose Mode: yes
```

PROXMOX - Network > edit eth0 and set the Static IP - if not already done by the Advanced installer.
UniFi Network Server
Open the LXC console or SSH into it and proceed with installing the UniFi Network Server manually via the UniFi Installation/Update Scripts - Ubiquiti Community
For more commands, check my guide here -> UniFi Network Server
```bash
# Quick install commands
sudo sh -c '
rm unifi-latest.sh &> /dev/null; wget https://get.glennr.nl/unifi/install/install_latest/unifi-latest.sh && bash unifi-latest.sh
'
sudo systemctl status unifi
```

Browse the web page - https://192.168.5.10:8443/ - and configure the UniFi Network Server
Adopt your devices
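If a device does not show up for adoption, a common fallback is to SSH into the device and point it at the controller manually (the device IP below is hypothetical; factory SSH credentials are usually `ubnt`/`ubnt`):

```bash
# On the UniFi device, not the server (hypothetical device IP)
ssh [email protected]
# Then, in the device's shell, point it at the controller's inform port (8080 by default):
set-inform http://192.168.5.10:8080/inform
```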
```bash
# Update both OS and UniFi
sudo sh -c '
rm unifi-update.sh &> /dev/null; wget https://get.glennr.nl/unifi/update/unifi-update.sh && bash unifi-update.sh
'
```

Arcane - http://192.168.5.15:3000
Install the Docker LXC with the desired specs - TESTING Default
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/docker.sh)"
# Options to Install Portainer and/or Docker Compose V2
# If the LXC is created Privileged, the script will automatically set up USB passthrough.
# Run Compose V2 by replacing the hyphen (-) with a space: use "docker compose" instead of "docker-compose".
```

PROXMOX - Network > edit eth0 and set the Static IP.
```bash
# Login via Console
# Temp set root login
nano /etc/ssh/sshd_config
# Add line
PermitRootLogin yes

systemctl restart ssh
```

```bash
# SSH into the LXC
mkdir yamls
cd yamls
```

Installed containers list:
```bash
nano arcane-compose.yaml
```

```yaml
services:
  arcane:
    image: ghcr.io/ofkm/arcane:latest
    container_name: arcane
    ports:
      - '3552:3552'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - arcane-data:/app/data
      - /root/yamls:/app/data/projects
    environment:
      - APP_URL=http://localhost:3552
      - PUID=1000
      - PGID=1000
      - ENCRYPTION_KEY=xxxxxxxxxxxxxxx= # Generate: openssl rand -base64 32
      - JWT_SECRET=xxxxxxxxxxxxxxx
    restart: unless-stopped

volumes:
  arcane-data:
    driver: local
```

```bash
docker compose -f arcane-compose.yaml up
```

Delete the `arcane:arcane-admin` user and create your own
```bash
nano upsnap-compose.yaml
```

```yaml
services:
  upsnap:
    container_name: upsnap
    image: ghcr.io/seriousm4x/upsnap:5 # images are also available on docker hub: seriousm4x/upsnap:5
    network_mode: host
    restart: unless-stopped
    volumes:
      - upsnap-data:/app/pb_data
    # # To use a non-root user, create the mountpoint first (mkdir data) so that it has the right permission.
    # user: 1000:1000
    # environment:
    #   - TZ=Europe/Berlin # Set container timezone for cron schedules
    #   - UPSNAP_INTERVAL=*/10 * * * * * # Sets the interval in which the devices are pinged
    #   - UPSNAP_SCAN_RANGE=192.168.1.0/24 # Scan range is used for device discovery on local network
    #   - UPSNAP_SCAN_TIMEOUT=500ms # Scan timeout is nmap's --host-timeout value to wait for devices (https://nmap.org/book/man-performance.html)
    #   - UPSNAP_PING_PRIVILEGED=true # Set to false if you don't have root user permissions
    #   - UPSNAP_WEBSITE_TITLE=Custom name # Custom website title
    # # dns is used for name resolution during network scan
    # dns:
    #   - 192.18.0.1
    #   - 192.18.0.2
    # # you can change the listen ip:port inside the container like this:
    # entrypoint: /bin/sh -c "./upsnap serve --http 0.0.0.0:5000"
    # healthcheck:
    #   test: curl -fs "http://localhost:5000/api/health" || exit 1
    #   interval: 10s
    # # or install custom packages for shutdown
    # entrypoint: /bin/sh -c "apk update && apk add --no-cache <YOUR_PACKAGE> && rm -rf /var/cache/apk/* && ./upsnap serve --http 0.0.0.0:8090"

volumes:
  upsnap-data:
    name: upsnap-data
    driver: local
```

```bash
docker compose -f upsnap-compose.yaml up
```
Log in via web at `http://<IP>:8090/`

- Create account
- Create Devices
- Network scan works if devices are already on - scan the `/24` network
Uptime Kuma - DELETED

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/uptimekuma.sh)"
```

AdGuard Home

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/adguard.sh)"
# Setup interface <IP>:3000
# To Manually Update AdGuard Home, run the command above (or type update) in the AdGuard LXC Console.
```

PROXMOX - Network > edit eth0 and set the Static IP.
Vaultwarden - http://192.168.5.7:8000
Vaultwarden Admin - http://192.168.5.7:8000/admin
Based on Alpine Linux
```bash
bash -c "$(wget -qO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/alpine-vaultwarden.sh)"
# To Update Alpine-Vaultwarden, or Set the Admin Token, run the command above in the Vaultwarden LXC Console.
# or run
apk update && apk upgrade
```

PROXMOX - Network > edit eth0 and set the Static IP.
Set https://vaultwarden.lab.syselement.com as the Domain URL in the General settings of the admin menu (http://192.168.5.7:8000/admin).
Vaultwarden needs to be behind a proxy (e.g. Zoraxy) to obtain HTTPS and to allow clients to connect.
Zoraxy

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/zoraxy.sh)"
# To Manually Update Zoraxy, run the command above (or type update) in the Zoraxy LXC Console.
```

PROXMOX - Network > edit eth0 and set the Static IP.
- Set Proxy Root to `localhost:8080`
- Status - set `Use TLS to serve proxy request` and `Start Service`
- Create Proxy Rules - new proxy rule for Vaultwarden:
  - Proxy Type - `Sub-domain`
  - Subdomain Matching Keyword - `vaultwarden.lab.syselement.com`
  - Target IP - `192.168.5.7:8000` (Vaultwarden LXC IP)
  - `Create Endpoint`
Local HOST/DNS - set vaultwarden.lab.syselement.com to the Zoraxy LXC IP (or forward ports 80 and 443 from your router to your Zoraxy LXC IP).

```
# e.g. C:\Windows\System32\drivers\etc\hosts
192.168.5.6 vaultwarden.lab.syselement.com
192.168.5.6 wiki.lab.syselement.com
```

Check the Technitium DNS configuration too and use the Technitium server IP as DNS server.
Wiki.js

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/wikijs.sh)"
# Wiki.js Interface <IP>:3000
# To Manually Update Wiki.js, run the command above (or type update) in the Wiki.js LXC Console.
```

PROXMOX - Network > edit eth0 and set the Static IP.
Technitium DNS

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/technitiumdns.sh)"
# Technitium DNS Interface <IP>:5380
# To Manually Update Technitium DNS, run the command above (or type update) in the Technitium DNS LXC Console.
```

PROXMOX - Network > edit eth0 and set the Static IP.
Open the webpage and navigate to Zones
- Add Zone - Primary Zone: `lab.syselement.com`
- Enter the `lab.syselement.com` zone
- Add Record - Name: `vaultwarden`, IPv4 Address: `192.168.5.6` - Save it
- Add another record for `wiki` with the same IP
Settings - Blocking

- Enable Blocking
- Allow/Block List URLs - `Quick Add` - e.g. `Steven Black...` - Save Settings
Settings - Proxy & Forwarders

- Forwarders - `Quick Select` - e.g. `Quad9 Secure (DNS-over-HTTPS)` - Save Settings
📌 To use Technitium as a DNS server, set its IP `192.168.5.11` as the DNS server in the client PC network configuration

```
# e.g. Windows
ipconfig /all
   DNS Servers . . . : 192.168.5.11
                       9.9.9.9
   DoH: https://dns.quad9.net/dns-query
```
Pi-hole - OFF

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pihole.sh)"
# Reboot Pi-hole LXC after install
# Pi-hole Interface <IP>/admin
# To set your password:
pihole -a -p
```
Homepage - DELETED

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/homepage.sh)"
# Homepage Interface: IP:3000
# To Manually Update Homepage, run the command above (or type update) in the Homepage LXC Console.
```

PROXMOX - Network > edit eth0 and set the Static IP.

Configuration (bookmarks.yaml, services.yaml, widgets.yaml) path
```bash
cd /opt/homepage/config/
```

Runtipi

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/runtipi.sh)"
```

PROXMOX - Network > edit eth0 and set the Static IP.
Prometheus

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/prometheus.sh)"
```

PROXMOX - Network > edit eth0 and set the Static IP.
Jellyfin

```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/jellyfin.sh)"
```

PROXMOX - Network > edit eth0 and set the Static IP.
Comes already with Privileged/Unprivileged Hardware Acceleration Support
FFmpeg path: `/usr/lib/jellyfin-ffmpeg/ffmpeg`

For NVIDIA graphics cards, you'll need to install the same drivers in the container that you did on the host. In the container, run the driver installation script and add the CLI arg `--no-kernel-module`.
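A sketch of that container-side install; the driver version below is a placeholder and must match the host's driver version exactly:

```bash
# Placeholder version - use the same NVIDIA driver version as on the PVE host
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/550.127.05/NVIDIA-Linux-x86_64-550.127.05.run
chmod +x NVIDIA-Linux-x86_64-550.127.05.run
# --no-kernel-module: the kernel module is already loaded by the host, not the container
./NVIDIA-Linux-x86_64-550.127.05.run --no-kernel-module
```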
Location of config file
```bash
cd /etc/jellyfin/
```

Configure Transcoding (and Hardware Acceleration) in the Jellyfin WebUI
Windows
Installed on Windows via the `exe` at https://repo.jellyfin.org/?path=/server/windows/latest-stable/amd64

Update
- Download the latest version.
- Close or stop Jellyfin (service) if it is running.
- Run the installer.
- If everything completed successfully, the new version is installed.
- Run `services.msc`, open the `Jellyfin Server` service properties, set Log On to `Local System account`, then save and start the service.
BookStack

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/bookstack.sh)"
# BookStack Interface <IP>:80
# BookStack works only with a static IP. If you change the IP of your LXC, you need to edit the .env file: nano /opt/bookstack/.env
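# e.g. a sketch, assuming the LXC IP changed to 192.168.5.20 (hypothetical value):
#   sed -i 's|^APP_URL=.*|APP_URL=http://192.168.5.20|' /opt/bookstack/.env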
# To Manually Update BookStack, run the command above (or type update) in the BookStack LXC Console.
# Default Login Credentials
# Username: [email protected]
# Password: password
```

phpIPAM

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/phpipam.sh)"
# Run the "3 Advanced Setting" option to set a static IP
# phpIPAM Interface <IP>:80
# To Manually Update phpIPAM, run the command above (or type update) in the phpIPAM LXC Console.
# Default Login Credentials
# Username: Admin
# Password: ipamadmin
```

Cosmos

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/cosmos.sh)"
# Cosmos Interface <IP>:80
# To Manually Update Cosmos, run the command above (or type update) in the Cosmos LXC Console.
```

Kavita

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/kavita.sh)"
# Kavita Interface <IP>:5000
# To enable folder adding append your lxc.conf on your host with 'lxc.environment: DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1'
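# e.g. a sketch from the PVE host shell (hypothetical container ID 115):
#   echo "lxc.environment: DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1" >> /etc/pve/lxc/115.conf
#   pct reboot 115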
# To Manually Update Kavita, run the command above (or type update) in the Kavita LXC Console.
```

NetBox

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/netbox.sh)"
```

Checkmk
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/checkmk.sh)"
```

Kasm
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/kasm.sh)"
```

Ubuntu Server VM
🔗 ➡️ My Ubuntu Server - VM additional/updated guide
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/vm/ubuntu2404-vm.sh)"
```

Turn OFF the VM (if ON).
Follow the instructions at Useful Ubuntu 22.04 VM Commands to set up Cloud-Init on the VM (a CLI sketch follows this list):

- User
- Password
- SSH public key for SSH Key login
- Upgrade packages - No
- Static IP (may need DHCP)
- Click `Regenerate Image`
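Roughly the equivalent from the PVE shell, as a sketch (hypothetical VMID 120 and example values):

```bash
# Hypothetical VMID and values - mirrors the Cloud-Init fields above
qm set 120 --ciuser youruser --cipassword 'S3cr3t!' \
  --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
```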
Start the VM.
Open the VM Console using `xterm.js`
Resize disk
PROXMOX - Hardware > Hard Disk (scsi0) > Disk Action > Resize
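The same resize from the PVE shell, as a sketch (hypothetical VMID 120, growing the disk by 20 GiB):

```bash
# Hypothetical VMID - adds 20 GiB to the scsi0 disk
qm resize 120 scsi0 +20G
```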
In the VM Console:
```bash
sudo parted /dev/sda
resizepart 1
# Fix
# Partition number: 1
# Yes
# End? -0
quit
sudo reboot
```

First Config
SSH
```bash
### Settings for SSH with Password
sudo sed -i -e 's/^PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config && sudo systemctl restart sshd

#### Settings for SSH with SSH Key + Disable root login
# Paste your SSH Public Key into ~/.ssh/authorized_keys (if not set by Proxmox Cloud-Init) and set sshd_config accordingly
sudo nano /etc/ssh/sshd_config

# Paste these lines
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
AuthenticationMethods publickey
PubkeyAuthentication yes
PermitEmptyPasswords no

# Save and exit the file
# Restart the sshd service
sudo systemctl restart sshd

# Check sshd config with
sudo sshd -T
```

Timezone and Updates
```bash
# TIMEZONE
sudo timedatectl set-timezone Europe/Rome

# DISABLE AUTOMATIC UPDATES
sudo nano /etc/apt/apt.conf.d/20auto-upgrades
# make sure all the directives are set to "0"
sudo systemctl disable apt-daily-upgrade.timer
sudo systemctl mask apt-daily-upgrade.service
sudo systemctl disable apt-daily.timer
sudo systemctl mask apt-daily.service

# Change "root" user password
sudo passwd root
```

Software
SSH into the VM

```bash
sudo apt update -y && sudo apt -y upgrade
sudo apt install -y btop curl duf eza iftop locate nano ncdu fastfetch net-tools nload npm pipx qemu-guest-agent sysstat ugrep wget zsh
```

```bash
sudo apt-add-repository ppa:zanchey/asciinema
sudo apt update && sudo apt install asciinema
```

Zsh & Oh-My-Zsh
Follow the guide here to set up `ZSH` with `Oh-My-Zsh` - Zsh & Oh-My-Zsh - syselement
Docker
Evaluate Docker LXC
```bash
sudo su

# Docker Engine
sh <(curl -sSL https://get.docker.com)

# Docker Compose
LATEST=$(curl -sL https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name":' | cut -d'"' -f4)
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -sSL https://github.com/docker/compose/releases/download/$LATEST/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
docker compose version

# Add the current user to the "docker" group to let it run Docker
sudo groupadd docker
sudo gpasswd -a "${USER}" docker
```

Dockge

```bash
mkdir -p /opt/{dockge,stacks}
wget -q -O /opt/dockge/compose.yaml https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml
cd /opt/dockge
docker compose up -d
```

Portainer

```bash
# Install Portainer
docker volume create portainer_data
docker run -d \
-p 8000:8000 \
-p 9443:9443 \
--name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
```

Updating Docker Standalone Portainer
Go to Settings > Back up Portainer - Download backup file
Proceed with updating
```bash
# Update Portainer
docker stop portainer
docker rm portainer
docker pull portainer/portainer-ce:2.20.2
docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.20.2
```

MobSF

```bash
sudo mkdir -p ~/docker/mobsf
sudo chown 9901:9901 ~/docker/mobsf
docker run -it --rm --name mobsf -p 8010:8010 -v ~/docker/mobsf:/home/mobsf/.MobSF opensecurity/mobile-security-framework-mobsf:latest
```

WatchYourLAN

```bash
docker run -d --name wyl \
-e "IFACE=eth0" \
-e "TZ=Europe/Rome" \
--network="host" \
-v watchyourlan_data:/data \
aceberg/watchyourlan
```

Login to Tailscale
Open the VM shell and run:
```bash
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Follow the instructions to register the device
```

On the host being connected to, you need to advertise that Tailscale is managing SSH connections which originate from the Tailscale network to this host
```bash
sudo tailscale up --ssh
# This generates a host keypair, shares its public half with the Tailscale control plane for distribution to clients, and configures tailscaled to intercept all traffic from your tailnet that is routed to port 22 on the Tailscale IP address. This SSH initialization only needs to be done once per host.
```

Install microk8s
```bash
sudo snap install microk8s --classic

###
sudo usermod -a -G microk8s $USER
sudo mkdir -p ~/.kube
sudo chmod 0700 ~/.kube
sudo chown -f -R $USER ~/.kube

# Close SSH session and reopen it
microk8s status --wait-ready
```

Some commands
```bash
microk8s stop
microk8s start
microk8s kubectl get nodes
microk8s kubectl get services
microk8s kubectl get pods

microk8s enable dns
microk8s enable hostpath-storage
microk8s enable ingress
microk8s enable core/metrics-server

# Community Add-ons repository
microk8s enable community
microk8s enable portainer
# microk8s disable portainer
```

Set the `.kube/config` file for k9s

```bash
microk8s.kubectl config view --raw > $HOME/.kube/config
# Install k9s
brew install derailed/k9s/k9s
# Run it and check microk8s cluster
k9s
```

BookStack (only on fresh Ubuntu)
🔗 BookStack Admin Documentation - Installation
🔗 docker-bookstack
Install a fresh Ubuntu Server VM
SSH into the Ubuntu VM and run the `BookStack` Ubuntu installation script
❗ A script to install BookStack on a fresh instance of Ubuntu 24.04 is available. This script is ONLY FOR A FRESH OS, it will install Apache, MySQL 8.0 & PHP 8.3 and could OVERWRITE any existing web setup on the machine. It also does not set up mail settings or configure system security so you will have to do those separately. You can use the script as a reference if you’re installing on a non-fresh machine.
```bash
# Download the script
wget https://raw.githubusercontent.com/BookStackApp/devops/main/scripts/installation-ubuntu-22.04.sh
# Make it executable
chmod a+x installation-ubuntu-22.04.sh
# Run the script with admin permissions
sudo ./installation-ubuntu-22.04.sh
# Set the VM IP as domain during the first run of BookStack
```

📌 Default login:
`[email protected]` : `password`
Alpine LXC

```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/alpine.sh)"
# To update
apk update && apk upgrade
```