Lightweight hypervisor

When managing servers, whether in a small shop or at scale, Cockpit is a great solution. It even comes pre-installed on various Fedora-based distros.

Today I will show you how to turn your Linux server into a lightweight hypervisor using native QEMU with virsh, and Cockpit as a management GUI.

Install QEMU and Libvirt

Before installing Cockpit, you need to install QEMU/KVM and Libvirt, which provide the virtualization capabilities. The required packages vary depending on your Linux distribution:

bash
# Debian/Ubuntu
sudo apt update
sudo apt install -y qemu-kvm qemu-system qemu-utils libvirt-daemon-system libvirt-clients virtinst bridge-utils
bash
# Fedora/RHEL
sudo dnf install -y @virtualization

Packages explained:

  • qemu-kvm / qemu-system: QEMU emulator with KVM acceleration
  • libvirt-daemon-system / libvirt: Libvirt daemon for managing virtualization
  • libvirt-clients: Command-line tools like virsh
  • virtinst: Provides virt-install for creating VMs
  • bridge-utils: Utilities for creating network bridges
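
Before going further, it's worth confirming that the CPU actually exposes hardware virtualization and that the KVM device node is present; a quick sanity check:

bash
# Count CPU virtualization flags (Intel VT-x = vmx, AMD-V = svm); 0 means no hardware support
grep -Ec '(vmx|svm)' /proc/cpuinfo

# The KVM device node should exist once the kvm modules are loaded
ls -l /dev/kvm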

After installation, enable and start the libvirt service:

bash
sudo systemctl enable --now libvirtd

Add your user to the libvirt group so you don't need to use sudo each time:

bash
sudo usermod -aG libvirt $USER
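
Group membership only takes effect on your next login. If you want to pick it up in the current shell without logging out, something like this works:

bash
# Start a subshell with the libvirt group active (or simply log out and back in)
newgrp libvirt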

Verify the installation:

bash
sudo virsh version

Install Cockpit

Cockpit is my personal preference for server management: a modern web-based UI that's actively maintained and easy to use.

bash
# Debian/Ubuntu
sudo apt update
sudo apt install cockpit cockpit-machines -y
sudo systemctl enable --now cockpit
bash
# Fedora/RHEL
sudo dnf update
sudo dnf install cockpit cockpit-machines -y
sudo systemctl enable --now cockpit

TIP

Once installed, you can access the UI via:
👉 https://your-server-ip:9090
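
If the host is running a firewall, port 9090 needs to be open before the UI is reachable. A sketch, assuming ufw on Debian/Ubuntu or firewalld on Fedora/RHEL:

bash
# Debian/Ubuntu with ufw
sudo ufw allow 9090/tcp

# Fedora/RHEL with firewalld (which ships a 'cockpit' service definition)
sudo firewall-cmd --permanent --add-service=cockpit
sudo firewall-cmd --reload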

Choosing a Networking Mode

Before proceeding, decide how you want your VM to connect to the network.

1. Bridged Network (br0) - Recommended for Servers

  • The VM gets its own IP address from your local router (e.g., 192.168.1.50).
  • It acts like a completely separate device on your LAN.
  • Best if you want to run services accessible from other computers.

2. Virtual Network (virbr0 / NAT) - Default & Easiest

  • The VM sits behind the host in a private network (e.g., 192.168.122.x).
  • It has internet access via the host, but cannot be seen by other devices on your LAN without port forwarding.
  • Best for testing, development, or isolated environments.
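
If you're not sure what already exists on the host, you can list the libvirt networks and any Linux bridges before deciding:

bash
# Libvirt-managed networks (the NAT network normally shows up as 'default' on virbr0)
sudo virsh net-list --all

# Existing bridges on the host
ip -brief link show type bridge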

Method 1: Bridged Network (LAN IP)

To put a VM directly onto the host's LAN:

  1. Create a bridge (br0) on the host.
  2. Attach the physical interface (enp0s31f6 or whatever the actual NIC is) to the bridge.
  3. Attach the VM to br0.

Step 1: Create a Real Bridge (br0)

Find the physical interface name (e.g., enp0s31f6 or enp1s0f0):

bash
ip link show
bash
# Ubuntu/Debian with netplan
# Create the bridge config (replace 'enp1s0f0' with your interface name)
# Note: uses the systemd-networkd renderer - interfaces will appear as "unmanaged" in Cockpit
cat <<EOF | sudo tee /etc/netplan/51-net-bridge-cfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0f0:
      dhcp4: false
  bridges:
    br0:
      interfaces:
        - enp1s0f0
      dhcp4: yes
      parameters:
        stp: false
        forward-delay: 0
EOF

# Apply the config
sudo chmod 600 /etc/netplan/51-net-bridge-cfg.yaml
sudo netplan apply
bash
# Fedora/RHEL (or any distro using NetworkManager)
# Find the current connection name for your physical interface
nmcli con show

# Create the bridge interface 'br0' with STP disabled
sudo nmcli con add type bridge ifname br0 con-name br0 bridge.stp no
sudo nmcli con modify br0 ipv4.method auto

# Add the physical interface as a slave to the bridge
# Replace 'enp1s0f0' with your actual interface name
# Replace 'Wired connection 1' with your actual connection name from above
sudo nmcli con add type bridge-slave ifname enp1s0f0 master br0

# Bring down the original connection on the physical interface
sudo nmcli con down "Wired connection 1"

# Bring up the bridge
sudo nmcli con up br0

# Optional but recommended: disable autoconnect on the old connection to prevent conflicts
sudo nmcli con modify "Wired connection 1" connection.autoconnect no

Confirm the bridge exists:

bash
ip a show br0

You should see it with an IP on your LAN.
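
You can also confirm that the physical NIC is actually attached to the bridge:

bash
# Shows which bridge each port belongs to (bridge-utils equivalent: brctl show)
bridge link show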


Step 2: Attach the VM to br0

Run virt-install and use the existing br0 bridge:

bash
# Adjust --memory (in MiB) and --vcpus to suit your host
sudo virt-install \
  --name <YOUR-VM-NAME> \
  --memory 10000 \
  --vcpus 4 \
  --disk path=/home/path/to/your/image.qcow2,format=qcow2 \
  --os-variant <YOUR-OS-VARIANT> \
  --import \
  --network bridge=br0,model=virtio \
  --noautoconsole
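
Once the VM is defined, you can double-check that its NIC really is attached to br0:

bash
# Lists the VM's interfaces with their type, source and MAC (should show bridge / br0)
sudo virsh domiflist <YOUR-VM-NAME>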

Method 2: Virtual Network (NAT)

If you chose this method, you don't need to configure br0 or mess with your host's network config. You just use the default virbr0.

Step 1: Ensure the Default Network is Active

Run:

bash
sudo virsh net-list --all

You should see the default network listed as active with autostart enabled:

 Name      State    Autostart    Persistent
--------------------------------------------
 default   active   yes          yes

If it's inactive, start it:

bash
sudo virsh net-start default
sudo virsh net-autostart default

Then, check if DHCP is handing out leases:

bash
sudo cat /var/lib/libvirt/dnsmasq/default.leases
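
On reasonably recent libvirt versions you can also ask virsh for the leases directly, which is a bit cleaner:

bash
sudo virsh net-dhcp-leases default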

Step 2: Attach the VM to default (virbr0)

Use --network network=default instead of bridge=br0:

bash
# Same sizing flags as before; only the --network argument changes
sudo virt-install \
  --name <YOUR-VM-NAME> \
  --memory 10000 \
  --vcpus 4 \
  --disk path=/home/path/to/your/image.qcow2,format=qcow2 \
  --os-variant <YOUR-OS-VARIANT> \
  --import \
  --network network=default,model=virtio \
  --noautoconsole

Step 3: Accessing the VM (Port Forwarding)

Since Libvirt uses NAT for virbr0 (Method 2), the VM is behind a virtual subnet (e.g., 192.168.122.x).

  • From Host: You can access it directly (ssh/ping).
  • From LAN: You generally cannot see it.

Solution (optional): If you need to access the VM from another computer on your LAN, you must forward ports or use SSH tunnels.

1. Find the VM's IP on the Host:

bash
sudo virsh domifaddr <YOUR-VM-NAME>

Example output:

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:6b:3c:21    ipv4         192.168.122.50/24

👉 Take note of the VM’s IP (192.168.122.50).

2. Forward a Port (Example): To SSH into the VM from your laptop, create an SSH tunnel through the host by running this on your workstation (not the server):

bash
ssh -L 2222:192.168.122.50:22 user@your-server-ip

Then ssh -p 2222 user@localhost to connect.
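
Alternatively, if all you need is an interactive shell, OpenSSH's ProxyJump option hops through the host in a single command (using the host's address and the VM IP noted above):

bash
# Jump through the hypervisor host straight into the VM
ssh -J user@your-server-ip user@192.168.122.50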


Step 4: Restart the VM

bash
sudo virsh shutdown <YOUR-VM-NAME>
sudo virsh start <YOUR-VM-NAME>

Check the VM status:

bash
sudo virsh list --all

Once the VM is up and running, we need to configure the network interface inside the VM.

Use virsh console to access the VM

If your VM has a serial console configured, you can access it with:

bash
virsh console <YOUR-VM-NAME>

Press Enter after connecting to see the login prompt. To exit, use Ctrl+].
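
If the console stays blank, the guest may simply not be running a login prompt on its serial port. On a systemd-based guest, enabling one usually looks like this (run inside the VM):

bash
# Enable a getty on the first serial port (ttyS0)
sudo systemctl enable --now serial-getty@ttyS0.service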


Step 5: Configure the networking interface on the virtual machine

NOTE

The following example is for distros using netplan.

Run:

bash
sudo nano /etc/netplan/00-installer-config.yaml

Paste this:

yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true
      dhcp6: false
      optional: true
      nameservers:
        addresses:
          - 1.1.1.1

This does the following:

  • Enables DHCP for IPv4 (dhcp4: true).
  • Marks the interface as optional, so the VM won’t hang on boot if DHCP is slow.
  • Uses Cloudflare DNS.
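
If you would rather pin a static address instead of relying on DHCP, a netplan sketch might look like the following. The address and gateway here are placeholders; 192.168.122.1 is libvirt's default NAT gateway, and a bridged VM would use your LAN's gateway instead:

yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 192.168.122.50/24   # pick a free address in your network's range
      routes:
        - to: default
          via: 192.168.122.1   # default gateway (libvirt NAT gateway in this example)
      nameservers:
        addresses:
          - 1.1.1.1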

Apply the Configuration

Run:

bash
sudo chmod 600 /etc/netplan/00-installer-config.yaml
sudo netplan apply

Troubleshooting

Manually Bring Up the Interface Inside the VM

Try forcing the interface up inside the VM:

bash
sudo ip link set enp1s0 up
sudo dhclient enp1s0

Check again:

bash
ip a

If dhclient successfully gets an IP, Libvirt's DHCP is working.

Some example commands for operating the VM with virsh:

bash
sudo virsh start <vm-name>           # start
sudo virsh shutdown <vm-name>        # graceful shutdown
sudo virsh destroy <vm-name>         # forced shutdown
sudo virsh console <vm-name>         # access the console
sudo virsh undefine <vm-name>        # fully remove VM
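
If the VM should come back up whenever the host reboots, libvirt can handle that as well:

bash
sudo virsh autostart <vm-name>             # start the VM automatically on host boot
sudo virsh autostart --disable <vm-name>   # turn autostart off again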