## Minimum Hardware Requirements

| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4-core processor (Intel i5/AMD Ryzen 5 or equivalent) | 8+ cores (Intel i7/AMD Ryzen 7 or better) |
| RAM | 8GB | 16GB or more |
| Storage | 10GB free (HDD or SSD) | 50GB+ free on SSD |
| GPU | Not required (CPU-only works) | NVIDIA GPU with 6GB+ VRAM |
| OS | Windows 10, macOS 10.15+, Linux | Windows 11, macOS 13+, Ubuntu 22.04+ |
## RAM Requirements by Model Size

The amount of RAM you need depends on which AI models you want to run:
| Model Size | RAM Required | Example Models |
|---|---|---|
| 1-3B parameters | 8GB | llama3.2:1b, phi3:mini, gemma:2b |
| 7-8B parameters | 16GB | llama3.1:8b, mistral, gemma2:9b |
| 13-14B parameters | 24GB | llama2:13b, codellama:13b |
| 30B+ parameters | 32GB+ | llama2:70b, mixtral |
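As a rough rule of thumb (our assumption, not an Ollama guarantee), a 4-bit quantized model occupies about half a gigabyte of RAM per billion parameters, plus runtime overhead and headroom for the operating system. A minimal sketch of that estimate:

```python
def estimate_ram_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Rough RAM estimate for a 4-bit quantized model.

    ~0.5 bytes per parameter at 4-bit quantization, ~25% extra for the
    KV cache and runtime, plus ~4 GB headroom for the OS. All factors
    are ballpark assumptions; actual usage varies by model and runtime.
    """
    model_gb = params_billion * bytes_per_param
    return model_gb * 1.25 + 4.0

for size in (3, 8, 13, 70):
    print(f"{size}B parameters -> roughly {estimate_ram_gb(size):.0f} GB RAM")
```

The estimates land comfortably inside the table's figures, which include extra headroom for other applications.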
## GPU Support

A GPU is optional but significantly speeds up AI responses:
### NVIDIA GPUs (Best Support)
- Requires CUDA-compatible GPU with 4GB+ VRAM
- 6GB+ VRAM recommended for 7B models
- 12GB+ VRAM recommended for 13B+ models
- GeForce RTX 3060 or better recommended
### AMD GPUs
- ROCm support on Linux
- Limited support on Windows
- RX 6000/7000 series recommended
### Apple Silicon (M1/M2/M3)
- Excellent native support via Metal
- Unified memory means the model can use most of the system's RAM
- M1 Pro/Max/Ultra or M2/M3 recommended for larger models
## Storage Requirements

AI models can be large. Plan your storage accordingly:
| Model Size | Disk Space |
|---|---|
| Small models (1-3B) | 1-3 GB each |
| Medium models (7-8B) | 4-5 GB each |
| Large models (13B+) | 8-40 GB each |
We recommend an SSD for faster model loading times.
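Before pulling a model, you can confirm there is enough free space using Python's standard library. A quick sketch (the 10 GB headroom figure is our assumption, matching the minimum requirement above):

```python
import shutil

def free_disk_gb(path: str = ".") -> float:
    """Return free disk space in GB on the filesystem containing `path`."""
    return shutil.disk_usage(path).free / 1024**3

# Room for an 8B model (~5 GB) plus working headroom (assumed 10 GB)
needed_gb = 5 + 10
if free_disk_gb() < needed_gb:
    print(f"Warning: less than {needed_gb} GB free - consider freeing space")
else:
    print(f"OK: {free_disk_gb():.1f} GB free")
```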
## Required Software
| Software | Version | Download |
|---|---|---|
| Python | 3.10 or higher | python.org |
| Ollama | Latest version | ollama.ai |
| Web Browser | Modern version | Chrome, Firefox, Safari, or Edge |
## Network Requirements
- Initial Setup: Internet required to download Ollama, Python, and AI models
- After Setup: No internet required - everything runs locally
- Optional: Internet needed only for web search feature (if enabled)
## Python Installation

Python is required to run Local Assistant. The `start.bat` file will automatically set up a virtual environment and install dependencies, but Python must be installed first.
### Windows Users

When installing Python on Windows, make sure to check "Add Python to PATH" during installation. This is essential for the start script to work correctly.
### Verify Python Installation

Open a terminal/command prompt and run:

```
python --version
```

You should see something like `Python 3.12.0`. Any version 3.10 or higher will work.
## Performance Tips
- Close other apps: AI models use a lot of RAM - close unnecessary applications
- Use SSD: Model loading is much faster from an SSD
- Start small: Begin with smaller models (3B) and work up based on your hardware
- GPU acceleration: If you have a compatible GPU, Ollama will use it automatically
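The checks above can be gathered in one place. This sketch collects CPU count, free disk space, and (on POSIX systems) total RAM using only the standard library, for comparison against the tables above; `preflight` is a hypothetical helper name, and RAM detection is skipped on Windows:

```python
import os
import shutil

def preflight() -> dict:
    """Collect rough hardware facts to compare against the requirement tables.

    RAM detection uses POSIX sysconf, so it returns None on platforms
    (such as Windows) where those values are unavailable.
    """
    info = {
        "cpu_cores": os.cpu_count(),
        "free_disk_gb": round(shutil.disk_usage(".").free / 1024**3, 1),
    }
    try:
        page = os.sysconf("SC_PAGE_SIZE")
        pages = os.sysconf("SC_PHYS_PAGES")
        info["ram_gb"] = round(page * pages / 1024**3, 1)
    except (AttributeError, ValueError, OSError):
        info["ram_gb"] = None  # not available on this platform
    return info

print(preflight())
```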