From Bare Metal to AI Sidekick: Building a Personal AI Ecosystem in Proxmox

In my last post, I introduced the "why" behind this blog and the new server I built to dive headfirst into the world of AI. Today, it's time to get our hands dirty. I want to show you how I turned a bare Ubuntu container into a powerful, GPU-accelerated AI engine that's already starting to change how I manage my digital life.
My end goal is ambitious: to build a service that can intelligently alter images, like swapping models in catalog photos or enhancing backgrounds. But first, I needed a rock-solid foundation. My stack of choice? InvokeAI for image generation and Ollama for large language models, both running in a lightweight Proxmox LXC with direct access to my server's NVIDIA RTX 3090.
Here's a breakdown of the journey.
The Foundation: A GPU-Powered LXC
For efficiency, I chose a Proxmox Linux Container (LXC) over a full VM. I spun up a fresh instance using the Ubuntu 22.04 template, and then came the most critical step: GPU Passthrough. Configuring the LXC to have direct access to the host's RTX 3090 is the key that unlocks its true potential.
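The passthrough config deserves a post of its own, but the core idea is exposing the host's /dev/nvidia* device nodes to the container. As a rough sketch (assuming a cgroup2-based Proxmox host; the container ID 101 and the nvidia-uvm major number 508 are placeholders, so check ls -l /dev/nvidia* on your own host):
Bash
# On the Proxmox host: grant the container access to the NVIDIA devices.
# 101 is an example container ID; 195 is the standard NVIDIA major number,
# while nvidia-uvm's major (508 here) is assigned dynamically per host.
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 508:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
EOF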
With the container online and the GPU connected, I was ready for the drivers. This is where I hit my first, and biggest, snag.
The NVIDIA Driver Rabbit Hole (And the Simple Way Out)
My first instinct was the classic, hands-on approach: download the .run file from NVIDIA and compile the driver from source inside the LXC. This immediately led me down a frustrating rabbit hole of mismatched kernel headers and missing packages. It was a dead end.
After a lot of trial and error, the breakthrough came: for a container, you don't need to build the kernel module inside it! The Proxmox host handles that. The LXC just needs the user-space tools (like CUDA) to talk to the driver that's already running.
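One gotcha worth flagging before running the installer: the user-space driver version inside the container must match the kernel module version on the host. A quick check on the host tells you which .run file to grab (the version below is just an example):
Bash
# On the Proxmox host: print the driver version the kernel module is running
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# Inside the LXC, download the matching installer, e.g. for 535.154.05:
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.154.05/NVIDIA-Linux-x86_64-535.154.05.run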
The magic bullet was a single flag on the NVIDIA installer:
Bash
# The key is the --no-kernel-module flag!
sudo ./NVIDIA-Linux-x86_64-*.run --no-kernel-module --no-x-check
An installation that had been a roadblock for hours was suddenly finished in minutes.
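To prove it, a single command inside the container is enough; if the user-space tools can reach the host's kernel module, nvidia-smi reports the GPU:
Bash
# Inside the LXC: should list the RTX 3090 along with the driver version
nvidia-smi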
The Main Event: Installing the AI Toolkit
With the drivers sorted, it was time for the fun part. I installed two main tools in the LXC:
- Ollama: For running large language models locally. It's incredibly versatile and has become the brains of my personal AI ecosystem. I've already hooked it up to my OpenWebUI container for a ChatGPT-like experience and even integrated it with my Nextcloud instance to help with summarizing and tagging documents. (There's a quick install sketch right after this list.)
- InvokeAI: This is the specialized tool for my image generation project. It’s a robust Stable Diffusion platform with a great web interface and, importantly for my long-term goals, a powerful API.
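For completeness, here's how little it takes to get Ollama going, using the official install script and an example model (swap llama3 for whatever fits your VRAM):
Bash
# Install Ollama via the official script
curl -fsSL https://ollama.com/install.sh | sh
# Pull and run an example model to verify everything works
ollama run llama3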
Installing InvokeAI was seamless once the driver issue was solved. The process involves setting up a clean, isolated Python environment.
1. Install Prerequisites:
Bash
sudo apt update
sudo apt install python3-venv python3-pip git -y
2. Create a Python Virtual Environment:
Bash
mkdir invokeai && cd invokeai
python3 -m venv .venv
source .venv/bin/activate
3. Install InvokeAI:
Bash
pip install --upgrade InvokeAI
With the latest versions, the initial setup is handled beautifully right from the web interface, which makes getting started incredibly simple.
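Before wiring it into the system, it's worth a quick manual launch to confirm the install works; by default, the web UI should come up on port 9090:
Bash
# From inside the invokeai directory, with the venv still active
source .venv/bin/activate
invokeai-web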
Making It a Real Service
A tool isn't truly part of a self-hosted setup until it runs on its own. The final step was to turn InvokeAI into a proper systemd service that starts on boot. I created a service file at /etc/systemd/system/invokeai.service:
Ini
[Unit]
Description=InvokeAI Web UI
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/root/invokeai
ExecStart=/root/invokeai/.venv/bin/invokeai-web
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
A few quick commands to enable and start the service, and it was done:
Bash
sudo systemctl daemon-reload
sudo systemctl enable invokeai
sudo systemctl start invokeai
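Two quick checks confirm it's running and let you watch the logs while the first models download:
Bash
# Verify the service state and follow its logs
systemctl status invokeai
journalctl -u invokeai -f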
Conclusion: The Journey Begins
And there it is! In just a few steps (and one major learning experience), I went from a bare container to a powerful AI service that's already becoming a core part of my homelab. The biggest lesson was definitely with the NVIDIA drivers—a great reminder that sometimes the most complex-seeming problems have surprisingly simple solutions.
Of course, this is just the beginning. This setup is brand new, and I'm sure it will evolve as I spend more time with it. I already have a few ideas for linking other self-hosted services to the InvokeAI API to play with automated image generation. We'll see where that goes!
For now, the platform is built, and the real fun can begin. Stay tuned for future posts as I share my experiences and dive into using InvokeAI's API to bring more ideas to life.