
Ollama with Open WebUI on Ubuntu 24.04

Private AI chat assistant with web interface. Choose from 50+ models like Llama and Mistral. Ready in 2 minutes.

Written by Raff Technologies
Updated this week

Version: Latest
OS: Ubuntu 24.04 LTS
Category: AI & Machine Learning

Description

Deploy your own private ChatGPT-like AI assistant in minutes. Complete web interface with access to 50+ open-source AI models, running entirely on your server.

Software Included

Component     Version   License
Ollama        Latest    MIT
Open WebUI    Latest    MIT
Docker        Latest    Apache 2.0
Nginx         Latest    BSD-2

Key Benefits

  • Private AI Chat - Your conversations never leave your server

  • No Usage Fees - Run unlimited AI queries without per-token costs

  • Ready in 2 Minutes - Complete setup with web interface included

  • 50+ AI Models - Choose from Llama, Mistral, CodeLlama, and more

Getting Started

When you create your VM, installation begins automatically and takes about 1-2 minutes.

  1. Navigate to http://your-server-ip

  2. Click "Get Started" on the main page

  3. Create your admin account

  4. Download an AI model (try llama2:7b for general chat); a scripted alternative is sketched after this list

  5. Start chatting with your private AI assistant
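
If you prefer to script step 4 rather than use the web interface, the sketch below pulls a model through Ollama's REST API. It is a minimal example, assuming the Ollama API is reachable on its default port 11434 from wherever you run it; in this stack that usually means running it on the server itself, so adjust the host if your setup differs.

```python
import json
import requests

# Assumes the Ollama API is reachable on its default port 11434.
OLLAMA_URL = "http://localhost:11434"

def pull_model(name: str) -> None:
    """Pull a model (e.g. 'llama2:7b') and print download progress."""
    with requests.post(
        f"{OLLAMA_URL}/api/pull",
        json={"name": name},
        stream=True,
        timeout=None,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            status = json.loads(line)
            # Each streamed line is a JSON object with a "status" field,
            # plus "completed"/"total" byte counts while downloading.
            if "completed" in status and "total" in status:
                pct = 100 * status["completed"] / status["total"]
                print(f"{status['status']}: {pct:.1f}%")
            else:
                print(status.get("status", status))

if __name__ == "__main__":
    pull_model("llama2:7b")
```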

System Requirements

  • Minimum: 4GB RAM, 2 CPU cores, 50GB storage

  • Recommended: 8GB RAM, 4 CPU cores, 100GB storage

  • Model sizes: 7B models use ~4GB RAM, 13B models use ~8GB RAM
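
To relate those numbers to the models you actually have installed, you can ask the Ollama API for the local model list and its on-disk sizes. A minimal sketch, again assuming the API is reachable on port 11434 on the server:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama API port

def list_local_models() -> None:
    """List locally installed models and their on-disk sizes."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    for model in resp.json().get("models", []):
        size_gb = model["size"] / 1024**3
        print(f"{model['name']:<24} {size_gb:6.1f} GB")

if __name__ == "__main__":
    list_local_models()
```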

Common Use Cases

  • Code assistance and debugging (see the chat sketch after this list)

  • Content writing and editing

  • Research and analysis

  • Learning and education support
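
As an illustration of the first use case, the sketch below sends a single coding question to a local model through Ollama's chat endpoint. The model name and the question are placeholders; it assumes you have already pulled llama2:7b and that the API is reachable on port 11434.

```python
import requests

OLLAMA_URL = "http://localhost:11434"

def ask(model: str, question: str) -> str:
    """Send one chat message to a local model and return its reply."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "stream": False,  # return the full answer in one response
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(ask("llama2:7b", "Why does this Python raise IndexError: items[len(items)]?"))
```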

Troubleshooting

Can't access the interface? Wait up to 2 minutes for installation to complete, then retry http://your-server-ip.
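
If the page still does not load after a couple of minutes, a quick way to narrow things down is to check whether the web interface and the Ollama backend are responding at all. A minimal sketch, assuming you run it on the server and that Open WebUI is served on port 80 with Ollama on its default port 11434 (adjust the URLs if your setup differs):

```python
import requests

# Assumed endpoints for this stack: Open WebUI behind Nginx on port 80,
# Ollama on its default port 11434.
CHECKS = {
    "Open WebUI": "http://localhost/",
    "Ollama API": "http://localhost:11434/api/version",
}

def run_checks() -> None:
    for name, url in CHECKS.items():
        try:
            resp = requests.get(url, timeout=5)
            print(f"{name}: HTTP {resp.status_code}")
        except requests.RequestException as exc:
            print(f"{name}: unreachable ({exc})")

if __name__ == "__main__":
    run_checks()
```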

Models slow to download? Large models (4-40GB) can take 10-30 minutes depending on your connection.

Need better performance? Upgrade your server specs or use smaller 7B models instead of 13B+ models.
