Prerequisites

Before installing Local Assistant, make sure you have:

  • Ollama installed and running (download it from the Ollama website)
  • Python 3.10 or higher (download it from python.org)
  • At least one Ollama model downloaded (e.g., ollama pull llama3.2)
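
A quick way to confirm the first and last prerequisites is a short Python check. This is a sketch, not part of Local Assistant: it assumes Ollama's default local API address (http://localhost:11434) and its /api/tags endpoint, which lists the models you have pulled.

```python
import json
import sys
from urllib.request import urlopen
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434/api/tags"  # Ollama's default local API

def python_ok(v=sys.version_info):
    """Local Assistant needs Python 3.10 or higher."""
    return (v[0], v[1]) >= (3, 10)

def ollama_models(url=OLLAMA_URL):
    """Return the names of locally pulled Ollama models,
    or None if the Ollama server is not reachable."""
    try:
        with urlopen(url, timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return None

if __name__ == "__main__":
    print("Python 3.10+:", python_ok())
    models = ollama_models()
    if models is None:
        print("Ollama is not running")
    elif not models:
        print("Ollama is running, but no models are pulled yet")
    else:
        print("Models:", ", ".join(models))
```

If the script reports that no models are pulled, run ollama pull llama3.2 before continuing.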

Installing Python

Local Assistant requires Python 3.10 or higher. If you don't have Python installed:

Windows

  1. Download Python from python.org
  2. Run the installer
  3. Important: Check "Add Python to PATH" during installation
  4. Click "Install Now"

macOS

Python can be installed via Homebrew:

brew install python@3.12

Linux

sudo apt update
sudo apt install python3 python3-pip python3-venv

Installation Steps

Step 1: Download Local Assistant

Download the latest version from our downloads page.

Step 2: Extract the ZIP File

After downloading, extract (unzip) the file to a folder of your choice. For example:

  • Windows: C:\LocalAssistant or D:\Apps\LocalAssistant
  • macOS: ~/Applications/LocalAssistant or ~/LocalAssistant
  • Linux: ~/LocalAssistant or /opt/LocalAssistant

To extract on Windows: Right-click the ZIP file and select "Extract All...", then choose your destination folder.

Step 3: Run Local Assistant

Option A: Double-Click (Easiest)

Windows: Navigate to the extracted folder and double-click the start.bat file.

macOS/Linux: Double-click start.sh (on macOS you may need to right-click and select "Open" the first time). If the file opens in a text editor instead of running, use the command-line option below.

Option B: Command Line

Windows:

  1. Open Command Prompt (press Win + R, type cmd, press Enter)
  2. Navigate to your Local Assistant folder:
    cd C:\LocalAssistant
  3. Run the start script:
    start.bat

macOS/Linux:

  1. Open Terminal
  2. Navigate to your Local Assistant folder:
    cd ~/LocalAssistant
  3. Make the script executable (first time only):
    chmod +x start.sh
  4. Run the start script:
    ./start.sh

What Happens on First Run

When you run start.bat (or start.sh) for the first time, it will automatically:

  1. Create a Python virtual environment in the folder
  2. Install all required dependencies (Flask, etc.)
  3. Start the Local Assistant server
  4. Open your default web browser to http://localhost:5000

This initial setup may take 1-2 minutes. Subsequent launches will be much faster.
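
The first-run steps above can be sketched in Python. This is a rough, hypothetical illustration of what a bootstrap like start.bat / start.sh automates; the folder names ("venv", "requirements.txt") and the python_path helper mentioned in the comment are assumptions, not the script's documented internals.

```python
import os
import subprocess
import venv
import webbrowser

def pip_path(venv_dir, windows=(os.name == "nt")):
    """Path to the venv's pip executable (Scripts\ on Windows, bin/ elsewhere)."""
    sub = "Scripts" if windows else "bin"
    exe = "pip.exe" if windows else "pip"
    return os.path.join(venv_dir, sub, exe)

def bootstrap(app_dir):
    venv_dir = os.path.join(app_dir, "venv")
    if not os.path.isdir(venv_dir):
        venv.create(venv_dir, with_pip=True)          # step 1: create the venv
    subprocess.check_call([pip_path(venv_dir), "install", "-r",
                           os.path.join(app_dir, "requirements.txt")])  # step 2
    # step 3: start the server with the venv's interpreter, e.g.
    # subprocess.Popen([python_path(venv_dir), "app.py"])  (hypothetical)
    webbrowser.open("http://localhost:5000")          # step 4: open the browser
```

On later launches the venv already exists, so steps 1 and 2 are skipped or finish almost instantly, which is why subsequent starts are faster.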

Using Local Assistant

Once running, Local Assistant will automatically detect your installed Ollama models. Select a model from the dropdown menu and start chatting!

Stopping Local Assistant

To stop the application:

  • Press Ctrl + C in the terminal, or
  • Close the Command Prompt/Terminal window

Troubleshooting

If you encounter issues:

  • Ollama not detected: Make sure Ollama is running (check for the icon in your system tray)
  • Python not found: Verify Python is installed and added to PATH. Open a terminal and run python --version (on macOS/Linux, try python3 --version)
  • No models available: Download at least one model with ollama pull llama3.2
  • Port already in use: Another application may be using port 5000. Close it or check the console for an alternative port.
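
For the last issue, a small Python check (a hypothetical helper, not part of Local Assistant) tells you whether port 5000 is actually free before you launch:

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if nothing is currently bound to host:port,
    i.e. the port is available for the Local Assistant server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True          # bind succeeded: port is free
        except OSError:
            return False         # bind failed: something else is using it

if __name__ == "__main__":
    print("port 5000 free:", port_free(5000))
```

If the port is busy, close the other application or check the Local Assistant console output for the port it fell back to.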