Virtualization

From Traxel Wiki
Revision as of 19:28, 16 August 2025

Initial Goal: containerized ollama-gpu

Incus

See Also: media/video/hacking/linux/just_me_and_opensource/lxd_lxc

Chopsticks

Incus Install

$ sudo apt install -t bookworm-backports incus
...
Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.
...

Incus Init

$ sudo gpasswd -a bob incus-admin
Adding user bob to group incus-admin
$ su -l bob
$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, lvmcluster, btrfs) [default=btrfs]: dir
Where should this storage pool store its data? [default=/var/lib/incus/storage-pools/default]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
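
The answers above can also be captured non-interactively: `incus admin init --preseed` reads a YAML document like the one the last question offers to print. A sketch matching the choices above (dir pool, incusbr0 with auto addressing); treat the field names as an assumption and compare against the preseed your own init prints:

```yaml
# Hypothetical preseed mirroring the interactive answers above.
# Replay with: incus admin init --preseed < preseed.yaml
config: {}
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
```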

Maybe LXC Install?

sudo apt install lxc # maybe

Kick The Tires

incus version
incus help
incus help storage
incus storage list
incus image list images:debian bookworm
incus image list images:debian/12

Launch Instance

$ incus launch images:debian/12
Launching the instance
Instance name is: amazed-tuna                      
incus list
incus stop amazed-tuna
incus delete amazed-tuna
incus image list
incus launch images:debian/12 incus-test
incus list
incus copy incus-test incus-test2
incus list
incus start incus-test2
incus stop incus-test2
incus move incus-test2 incus-test3
incus list
incus start incus-test3
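
The copy/move dance above also works as point-in-time snapshots, which are cheaper than full copies for "roll back if I break it" workflows. A sketch using the Incus snapshot subcommands (the snapshot name is illustrative; check `incus snapshot --help` on your version):

```shell
# Snapshot before doing anything risky, then roll back if needed.
instance="incus-test3"
snap="before-ollama"                       # illustrative snapshot name
incus snapshot create "$instance" "$snap"  # take the snapshot
incus snapshot list "$instance"            # confirm it exists
# Roll back if the install goes sideways:
incus snapshot restore "$instance" "$snap"
```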

Hip-fire: Ollama

incus exec incus-test3 bash
apt install wget git curl
adduser bob
apt install openssh-client openssh-server
gpasswd -a bob sudo
incus list
ssh bob@10.47.160.211
curl -fsSL https://ollama.com/install.sh | sh
# NOTE: failed to detect a GPU; the NVIDIA libraries were not installed in the container. For ollama-gpu, configure GPU pass-through first.
sudo systemctl enable ollama --now
sudo systemctl status ollama
ollama pull mistral:7b
ollama run mistral:7b
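
Beyond the interactive `ollama run`, the Ollama HTTP API (default port 11434) can be smoke-tested from the host. The prompt and payload below are illustrative; `"stream": false` makes the server return a single JSON object instead of a token stream:

```shell
# Build a request for Ollama's /api/generate endpoint.
# Prompt text is illustrative.
payload='{"model": "mistral:7b", "prompt": "Say hello in one sentence.", "stream": false}'
# Run this against the container's IP (or localhost inside the container):
# curl -s http://localhost:11434/api/generate -d "$payload"
```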

Adding GPU

incus copy incus-test incus-gpu-test
incus config device add incus-gpu-test gpu gpu
incus start incus-gpu-test
incus exec incus-gpu-test bash
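
Before installing anything inside the container, it may be worth confirming the device actually landed. A quick check, assuming the instance name from above; with a plain `gpu` device, the pass-through should surface as `/dev/nvidia*` nodes inside the container:

```shell
# Show the devices attached to the instance; the "gpu" entry added
# above should appear in the output.
instance="incus-gpu-test"
incus config device show "$instance"
# Inside the container, look for the character devices:
incus exec "$instance" -- ls -l /dev/nvidia0 /dev/nvidiactl
```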

Adding NVidia Container Tools

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey > /etc/apt/keyrings/nvidia-docker.key
curl -s -L https://nvidia.github.io/nvidia-docker/debian11/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list
sed -i -e "s/^deb/deb \[signed-by=\/etc\/apt\/keyrings\/nvidia-docker.key\]/g" /etc/apt/sources.list.d/nvidia-docker.list
apt update
apt -y install nvidia-container-toolkit
# systemctl restart docker 
systemctl restart incus
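
With the toolkit installed on the host, the container usually still needs the NVIDIA userspace libraries mapped in before `nvidia-smi` (and ollama's GPU detection) will work. Incus exposes this as the `nvidia.runtime` instance option, which uses `nvidia-container-cli` under the hood; a sketch, assuming the container name from above:

```shell
# Map the host's NVIDIA userspace libraries into the container,
# then verify the GPU is visible from inside it.
instance="incus-gpu-test"
incus config set "$instance" nvidia.runtime=true
incus restart "$instance"
incus exec "$instance" -- nvidia-smi
```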

ARCHIVE: LXD / LXC

Archived because Mark Shuttleworth and Canonical are terrible.

Oh! I almost forgot! Ubuntu is terrible too. Sorry I left you out, ya big knob.

Note: go to work on LXD with a blowtorch and a pair of vice grips.

Chopsticks

Install and Start

sudo apt install lxd lxc
lsb_release -dirc
DO NOT SEEK THE TREASURE: sudo systemctl status l-no-x-no-d-nonononono
sudo systemctl start lxd
getent group lxd
sudo gpasswd -a bob lxd
getent group lxd
su -l $USER

lxd init

$ lxd init
# no clustering needed for this demo
Would you like to use LXD clustering? (yes/no) [default=no]: 
# since this is a new install, there is no existing storage pool, which is where the filesystem gets stored
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
# can be any name, which would probably help with connecting new instances to an existing pool, leaving default for now
Name of the new storage pool [default=default]: 
# btrfs gives you fancy things like snapshots and volume management, but "dir" just uses a local directory for simplicity
Name of the storage backend to use (lvm, btrfs, dir) [default=btrfs]: dir
# MAAS is Canonical's "Metal as a Service" bare-metal provisioner - he didn't use it
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
# yes, create a bridge, which will allow the new envs to access the network
Would you like to create a new local network bridge? (yes/no) [default=yes]:
# Name it anything you like, I'm leaving it default
What should the new bridge be called? [default=lxdbr0]: 
# auto address ranges are fine
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
# It will be available on this machine, but should it be exposed to other machines?
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
# this will automagically update your image under your feet. choose wisely.
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
# spew this all back to me?
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

lxc chopsticks

lxc version
lxc help
lxc help storage
lxc storage list
$ lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
lxc image list

No images listed.

much pain and suffering

linuxcontainers.org no longer supports lxd, because Canonical and Mark Shuttleworth are terrible.

Incus is the community fork of LXD, maintained by the linuxcontainers.org project. It ships in Debian 13 and is backported to Debian 12 (bookworm-backports).

sudo apt remove --autoremove lxd lxd-agent lxd-client
sudo apt install -t bookworm-backports incus

...
incus.service is a disabled or a static unit, not starting it.
incus-user.service is a disabled or a static unit, not starting it.

Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.
...
$ sudo gpasswd -a bob incus-admin
Adding user bob to group incus-admin
$ su -l bob
$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, lvmcluster, btrfs) [default=btrfs]: dir
Where should this storage pool store its data? [default=/var/lib/incus/storage-pools/default]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
$ sudo apt remove lxc lxc-templates 

This is exactly why I want virtualization. So when some knob like Mark Shuttleworth screws over the world, I don't have to have his feces smeared all over my hard drive.