## Prerequisites

Before installing Local Assistant, make sure you have:

- **Ollama** installed and running (Download Ollama)
- **Python 3.10 or higher** (Download Python)
- At least one Ollama model downloaded (e.g., `ollama pull llama3.2`)
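Before proceeding, you can quickly confirm the prerequisites from a terminal. This sketch only checks that the commands are on your PATH; it assumes the Python command is `python3` (on Windows it is usually `python`):

```shell
# Check that the prerequisite commands are installed and on PATH.
# Prints "missing" instead of failing, so it is safe to run anywhere.
for cmd in python3 ollama; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found at $(command -v "$cmd")"
  else
    echo "$cmd: missing"
  fi
done
```

If `ollama` is found, running `ollama list` will show which models are already downloaded.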
## Installing Python

Local Assistant requires Python 3.10 or higher. If you don't have Python installed:

### Windows

1. Download Python from python.org
2. Run the installer
3. **Important:** Check "Add Python to PATH" during installation
4. Click "Install Now"
### macOS

Python can be installed via Homebrew:

```
brew install python@3.12
```
### Linux

```
sudo apt update
sudo apt install python3 python3-pip python3-venv
```
## Installation Steps

### Step 1: Download Local Assistant

Download the latest version from our downloads page.
### Step 2: Extract the ZIP File

After downloading, extract (unzip) the file to a folder of your choice. For example:

- **Windows:** `C:\LocalAssistant` or `D:\Apps\LocalAssistant`
- **macOS:** `~/Applications/LocalAssistant` or `~/LocalAssistant`
- **Linux:** `~/LocalAssistant` or `/opt/LocalAssistant`

To extract on Windows: right-click the ZIP file, select "Extract All...", then choose your destination folder.
### Step 3: Run Local Assistant

#### Option A: Double-Click (Easiest)

**Windows:** Navigate to the extracted folder and double-click the `start.bat` file.

**macOS/Linux:** Double-click `start.sh` (you may need to right-click and select "Open" the first time).
#### Option B: Command Line

**Windows:**

1. Open Command Prompt (press Win + R, type `cmd`, press Enter)
2. Navigate to your Local Assistant folder: `cd C:\LocalAssistant`
3. Run the start script: `start.bat`
**macOS/Linux:**

1. Open Terminal
2. Navigate to your Local Assistant folder: `cd ~/LocalAssistant`
3. Make the script executable (first time only): `chmod +x start.sh`
4. Run the start script: `./start.sh`
## What Happens on First Run

When you run `start.bat` (or `start.sh`) for the first time, it will automatically:

1. Create a Python virtual environment in the folder
2. Install all required dependencies (Flask, etc.)
3. Start the Local Assistant server
4. Open your default web browser to `http://localhost:5000`

This initial setup may take 1-2 minutes. Subsequent launches will be much faster.
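For the curious, the first-run behavior described above boils down to a few standard commands. This is only a sketch of the idea, not the actual contents of `start.sh`; the entry-point file name (`app.py` here) is an assumption:

```shell
# Sketch of a typical first-run launcher (not the real start.sh).
# The entry-point name "app.py" is an assumption for illustration.
if [ ! -d venv ]; then
  python3 -m venv venv                         # one-time: create the virtual environment
  ./venv/bin/pip install -r requirements.txt   # one-time: install Flask and friends
fi
./venv/bin/python app.py                       # start the server on port 5000
```

Because the `venv` folder persists, the slow steps are skipped on later launches, which is why they start much faster.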
## Using Local Assistant
Once running, Local Assistant will automatically detect your installed Ollama models. Select a model from the dropdown menu and start chatting!
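Model detection works because Ollama serves a small REST API (on port 11434 by default), and its `/api/tags` endpoint returns the installed models as JSON. You can query it yourself to see what a client like Local Assistant sees:

```shell
# Ask the local Ollama server which models are installed.
# Requires Ollama to be running; the default API port is 11434.
curl -s http://localhost:11434/api/tags
```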
## Stopping Local Assistant

To stop the application:

- Close the Command Prompt/Terminal window, or
- Press Ctrl + C in the terminal
## Troubleshooting

If you encounter issues:

- **Ollama not detected:** Make sure Ollama is running (check for the icon in your system tray)
- **Python not found:** Verify Python is installed and added to PATH. Open a terminal and run `python --version`
- **No models available:** Download at least one model with `ollama pull llama3.2`
- **Port already in use:** Another application may be using port 5000. Close the other application, or check the console output for an alternative port.
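If port 5000 is the problem, you can check whether anything is listening on it before launching. A minimal sketch using bash's `/dev/tcp` redirection (bash-specific; on Windows, `netstat -ano | findstr :5000` shows the owning process ID instead):

```shell
# Probe whether something is already listening on port 5000 (bash only).
# The probe connection is opened and closed inside a subshell.
if (exec 3<>/dev/tcp/127.0.0.1/5000) 2>/dev/null; then
  echo "port 5000 is in use"
else
  echo "port 5000 is free"
fi
```

If the port is in use, either stop the other application or watch Local Assistant's console output for the alternative port it picks.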