[[Category:Linux]]
Initial Goal: containerized ollama-gpu
= Incus =
See Also: media/video/hacking/linux/just_me_and_opensource/lxd_lxc
== Chopsticks ==
=== Incus Install ===
<pre>
$ sudo apt install -t bookworm-backports incus
...
Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.
...
</pre>
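The install above assumes bookworm-backports is already in the apt sources. If it isn't, something like this enables it first (a sketch, assuming stock Debian 12 sources):
<pre>
echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
</pre>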
=== Incus Init ===
<pre>
$ sudo gpasswd -a bob incus-admin
Adding user bob to group incus-admin
$ su -l bob
$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, lvmcluster, btrfs) [default=btrfs]: dir
Where should this storage pool store its data? [default=/var/lib/incus/storage-pools/default]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
</pre>
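The same answers can be replayed non-interactively on the next machine. A minimal preseed sketch matching the choices above (shape per the Incus preseed docs; the names are just the defaults):
<pre>
cat <<'EOF' | incus admin init --preseed
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
EOF
</pre>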
=== Maybe LXC Install? ===
Probably unnecessary: Incus pulls in its own liblxc, and the standalone lxc package only adds the classic lxc-* command-line tools.
<pre>
sudo apt install lxc # maybe
</pre>
=== Kick The Tires ===
<pre>
incus version
incus help
incus help storage
incus storage list
incus image list images:debian bookworm
incus image list images:debian/12
</pre>
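One more tire worth kicking: inspecting an image (architecture, size, aliases) before launching it, via the standard incus image subcommand:
<pre>
incus image info images:debian/12
</pre>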
=== Launch Instance ===
<pre>
$ incus launch images:debian/12
Launching the instance
Instance name is: amazed-tuna                     
</pre>
<pre>
incus list
incus stop amazed-tuna
incus delete amazed-tuna
</pre>
<pre>
incus image list
</pre>
<pre>
incus launch images:debian/12 incus-test
incus list
incus copy incus-test incus-test2
incus list
incus start incus-test2
incus stop incus-test2
incus move incus-test2 incus-test3
incus list
incus start incus-test3
</pre>
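Snapshots round out the copy/move lifecycle above. A quick sketch using the incus snapshot subcommands (the snapshot name snap0 is arbitrary):
<pre>
incus snapshot create incus-test snap0
incus snapshot list incus-test
incus snapshot restore incus-test snap0
incus snapshot delete incus-test snap0
</pre>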
=== Hip-fire: Ollama ===
Ollama Models: https://ollama.com/library
<pre>
incus exec incus-test3 bash
apt install wget git curl
adduser bob
apt install openssh-client openssh-server
gpasswd -a bob sudo
</pre>
<pre>
incus list
ssh bob@10.47.160.211
curl -fsSL https://ollama.com/install.sh | sh
# NOTE: failed to detect GPU; presumably the GPU libs were not installed. For ollama-gpu, configure GPU pass-through first.
sudo systemctl enable ollama --now
sudo systemctl status ollama
ollama pull mistral:7b
ollama run mistral:7b
</pre>
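Besides the interactive run, the HTTP API is a quick sanity check; Ollama listens on localhost:11434 by default:
<pre>
curl http://localhost:11434/api/generate -d '{"model": "mistral:7b", "prompt": "Why is the sky blue?", "stream": false}'
</pre>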
=== Adding GPU ===
<pre>
incus copy incus-test incus-gpu-test
incus config device add incus-gpu-test gpu gpu
incus start incus-gpu-test
incus exec incus-gpu-test bash
</pre>
<pre>
$ incus stop incus-gpu-test
$ incus config set incus-gpu-test nvidia.runtime=true
Error: Initialize LXC: The NVIDIA container tools couldn't be found
</pre>
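Makes sense in hindsight: nvidia.runtime=true tells Incus to map the host's NVIDIA userspace libraries into the container via the NVIDIA container tools, so those tools have to be installed on the host first.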
==== Adding NVidia Container Tools ====
<pre>
# on the host, as root (mirrors NVIDIA's old nvidia-docker repo setup, pointed at the Debian 11 package list)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey > /etc/apt/keyrings/nvidia-docker.key
curl -s -L https://nvidia.github.io/nvidia-docker/debian11/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list
sed -i -e "s/^deb/deb \[signed-by=\/etc\/apt\/keyrings\/nvidia-docker.key\]/g" /etc/apt/sources.list.d/nvidia-docker.list
apt update
apt -y install nvidia-container-toolkit
# systemctl restart docker   # from the docker-centric instructions; not needed for Incus
systemctl restart incus
incus config set incus-gpu-test nvidia.runtime=true
</pre>
==== Continuing With GPU ====
<pre>
incus start incus-gpu-test
incus exec incus-gpu-test bash
nvidia-smi               # should now show the host GPU from inside the container
adduser bob
usermod -aG sudo bob
apt install openssh-server
ip a                     # grab the container address for ssh below
</pre>
<pre>
ssh bob@10.47.160.15
</pre>
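Throwaway provisioning script, run inside the container as the new user: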
<pre>
#!/bin/bash
set -euo pipefail
# Update and install dependencies
echo "Updating system and installing dependencies..."
sudo apt-get update -qq
sudo apt-get install -y --no-install-recommends curl wget git ca-certificates pciutils
# Install Ollama
echo "Installing Ollama..."
curl -fsSL https://ollama.com/install.sh | sh
# Pull a small model (e.g., phi3:3.8b). The Ollama installer leaves the service
# running, so the pull works even though systemctl enable comes later; the
# Ollama GPU Script below reorders this more defensibly.
echo "Pulling model..."
ollama pull phi3:3.8b
# Quantize the model (optional)
# echo "Quantizing model..."
# cat <<EOF > Modelfile
# FROM phi3:3.8b
# PARAMETER quantize q4_0
# EOF
# ollama create phi3-3.8b-q4_0 -f Modelfile
# Start Ollama service
echo "Starting Ollama service..."
sudo systemctl enable ollama --now
# Test Ollama with the GPU
echo "Testing Ollama..."
# ollama run phi3-3.8b-q4_0 "Why is the sky blue?"
ollama run phi3:3.8b "Why is the sky blue?"
# echo "Ollama is ready! Model: phi3-3.8b-q4_0"
echo "Ollama is ready! Model: phi3:3.8b"
</pre>
=== Adding GPU Take 2 ===
<pre>
incus copy incus-test incus-gpu
incus config device add incus-gpu gpu gpu
incus config set incus-gpu nvidia.runtime=true
incus start incus-gpu
incus exec incus-gpu bash
nvidia-smi
adduser bob
usermod -aG sudo bob
apt install openssh-server
ip a
</pre>
<pre>
ssh bob@10.47.160.15
</pre>
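Quick check that the GPU device and runtime flag actually stuck, using the standard incus config subcommands:
<pre>
incus config device show incus-gpu
incus config get incus-gpu nvidia.runtime
</pre>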
=== Ollama GPU Script ===
<pre>
cat > ollama-gpu.sh   # paste the script below, then Ctrl-D
</pre>
<pre>
#!/bin/bash
set -euo pipefail
# Update and install dependencies
echo "Updating system and installing dependencies..."
sudo apt-get update -qq
sudo apt-get install -y --no-install-recommends curl wget git ca-certificates pciutils
# Install Ollama
echo "Installing Ollama..."
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama service
echo "Starting Ollama service..."
sudo systemctl enable ollama --now
# Pull a small model (e.g., phi3:3.8b)
echo "Pulling model..."
ollama pull phi3:3.8b
# Test Ollama with the GPU
echo "Testing Ollama..."
ollama run phi3:3.8b "Why is the sky blue?"
echo "Ollama is ready! Model: phi3:3.8b"
</pre>
<pre>
bash ollama-gpu.sh
</pre>
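To confirm the model actually landed on the GPU rather than the CPU (ollama ps reports the processor; nvidia-smi shows VRAM while the model is loaded):
<pre>
ollama ps      # PROCESSOR column should read something like "100% GPU"
nvidia-smi     # VRAM usage jumps while the model is resident
</pre>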
= ARCHIVE: LXD / LXC =
Archived because Mark Shuttleworth and Canonical are terrible.
Oh! I almost forgot! Ubuntu is terrible too. Sorry I left you out, ya big knob.


Note: go to work on LXD with a blowtorch and a pair of vice grips.


== Chopsticks ==

=== Install and Start ===

<pre>
sudo apt install lxd lxc
lsb_release -dirc
# DO NOT SEEK THE TREASURE: sudo systemctl status l-no-x-no-d-nonononono
sudo systemctl start lxd
getent group lxd
sudo gpasswd -a bob lxd
getent group lxd
su -l $USER
</pre>

=== lxd init ===

<pre>
$ lxd init
# no clustering needed for this demo
Would you like to use LXD clustering? (yes/no) [default=no]:
# since this is a new install, there is no existing storage pool, which is where the filesystem gets stored
Do you want to configure a new storage pool? (yes/no) [default=yes]:
# can be any name, which would probably help with connecting new instances to an existing pool; leaving default for now
Name of the new storage pool [default=default]:
# btrfs gives you fancy things like snapshots and volume management, but "dir" just uses a local directory for simplicity
Name of the storage backend to use (lvm, btrfs, dir) [default=btrfs]: dir
# MAAS is Canonical's "Metal as a Service"; the tutorial didn't use it
Would you like to connect to a MAAS server? (yes/no) [default=no]:
# yes, create a bridge, which will allow the new envs to access the network
Would you like to create a new local network bridge? (yes/no) [default=yes]:
# name it anything you like; I'm leaving it default
What should the new bridge be called? [default=lxdbr0]:
# auto address ranges are fine
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
# it will be available on this machine, but should it be exposed to other machines?
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
# this will automagically update your image under your feet. choose wisely.
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
# spew this all back to me?
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
</pre>

=== lxc chopsticks ===

<pre>
lxc version
lxc help
lxc help storage
lxc storage list
$ lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
lxc image list
</pre>
No images listed.

=== much pain and suffering ===

linuxcontainers.org no longer supports LXD, because Canonical and Mark Shuttleworth are terrible.

Incus is the community fork of LXD, maintained under linuxcontainers.org. It ships in Debian 13 and is backported to Debian 12 (bookworm-backports).

<pre>
sudo apt remove --autoremove lxd lxd-agent lxd-client
sudo apt install -t bookworm-backports incus
...
incus.service is a disabled or a static unit, not starting it.
incus-user.service is a disabled or a static unit, not starting it.

Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.
...
</pre>
<pre>
$ sudo gpasswd -a bob incus-admin
Adding user bob to group incus-admin
$ su -l bob
$ incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, lvmcluster, btrfs) [default=btrfs]: dir
Where should this storage pool store its data? [default=/var/lib/incus/storage-pools/default]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
$ sudo apt remove lxc lxc-templates
</pre>

This is exactly why I want virtualization: when some knob like Mark Shuttleworth screws over the world, I don't have to have his feces smeared all over my hard drive.