🧠 How to Set Up Your Own Private, Uncensored, Local GPT-Like AI Agent

This guide covers how to set up and run the Dolphin-Llama3 model via Ollama, including running it from an external drive, with cross-platform instructions for Windows, Linux, and macOS.

Overview of Dolphin-Llama3 with Ollama

Ollama is a self-hosted runtime that lets you run LLMs like Dolphin-Llama3 locally. It offers a simple CLI, serves a local HTTP API on port 11434, and supports custom model directories, making it ideal for privacy and offline usage.
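For orientation, here is a minimal sketch of the CLI workflow and a test request against that local API; the prompt text is just an illustrative placeholder.

```bash
# Download the model once, then confirm it is available locally.
ollama pull dolphin-llama3
ollama list

# Query the local HTTP API; "stream": false returns one JSON object
# instead of a token-by-token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "dolphin-llama3",
  "prompt": "Explain what a local LLM is in one sentence.",
  "stream": false
}'
```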


๐Ÿ–ฅ๏ธ Windows Setup Instructions

1. Installation and Initial Model Run

  • Download Ollama from: https://ollama.com/

  • Open two terminals:

    • Terminal 1: `ollama serve`

    • Terminal 2: `ollama run dolphin-llama3` (the model is downloaded automatically on first run)
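Once both terminals are running, you can sanity-check the server before moving on; a request to the root endpoint should return "Ollama is running" (curl.exe ships with recent Windows 10/11 builds):

```powershell
# The root endpoint replies with "Ollama is running" when the server is up.
# Using curl.exe avoids PowerShell's curl alias for Invoke-WebRequest.
curl.exe http://localhost:11434/

# List the models downloaded so far.
ollama list
```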

2. Transfer to External Drive

  • Format the USB drive to NTFS (if needed).

  • Copy:

    • C:\Ollama (models; note that by default Ollama stores models under C:\Users\<user>\.ollama\models unless OLLAMA_MODELS points elsewhere)

    • C:\Users\<user>\AppData\Local\Programs\Ollama (binary files)
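One way to do the copy, as a sketch that assumes the drive mounts as H: and the layout used in the next step (H:\ollama\ollama.exe alongside H:\ollama\models); adjust the source paths to match your machine:

```powershell
# Copy the Ollama program files into H:\ollama (/E includes subfolders).
robocopy "$env:LOCALAPPDATA\Programs\Ollama" "H:\ollama" /E

# Copy the downloaded models into H:\ollama\models.
robocopy "$env:USERPROFILE\.ollama\models" "H:\ollama\models" /E
```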

3. Run from External Drive

  • Terminal 1:

```powershell
cd H:\
$env:OLLAMA_MODELS = "H:\ollama\models"
.\ollama\ollama.exe serve
```

  • Terminal 2:

```powershell
cd H:\
.\ollama\ollama.exe run dolphin-llama3
```
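If you would rather not set the variable in every new terminal, you can persist it for your user account; note that setx only affects terminals opened afterwards, and that the drive letter can change between machines, which is exactly what the batch file below works around:

```powershell
# Persist OLLAMA_MODELS for future sessions (takes effect in new terminals).
setx OLLAMA_MODELS "H:\ollama\models"
```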

4. Batch File Automation (start.bat)

Place this in the root of the external drive:

```bat
@echo off
REM Derive the drive letter from this script's own path (%~d0),
REM so the script works no matter which letter Windows assigns.
set DRIVE_LETTER=%~d0
set OLLAMA_MODELS=%DRIVE_LETTER%\ollama\models

echo Starting Ollama...
start "" %DRIVE_LETTER%\ollama\ollama.exe serve

REM Poll until the Ollama server is listening on port 11434.
:waitloop
netstat -an | find "LISTENING" | find ":11434" >nul 2>&1
if errorlevel 1 (
    timeout /t 1 /nobreak >nul
    goto waitloop
)

echo Starting AnythingLLM...
start "" %DRIVE_LETTER%\anythingllm\AnythingLLM.exe
```
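For a clean shutdown before unplugging the drive, a companion stop.bat along these lines can help (a sketch; taskkill force-terminates by image name, so finish any work in AnythingLLM first):

```bat
@echo off
echo Stopping AnythingLLM and Ollama...
taskkill /IM AnythingLLM.exe /F >nul 2>&1
taskkill /IM ollama.exe /F >nul 2>&1
```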

5. Optional Autorun (autorun.inf)

```ini
[Autorun]
label=Dolphin LLM
shellexecute=start.bat
icon=customicon.ico
action=Start local Dolphin LLM
```

Note that Windows 7 and later ignore autorun.inf execution entries on USB drives for security reasons, so you will usually still need to launch start.bat manually; the label and icon entries do still show up in Explorer.

๐Ÿง Linux Instructions

1. Setup

```bash
# Assumes you have downloaded the standalone ollama binary from
# https://ollama.com/download and placed it in ~/ollama/bin.
mkdir -p ~/ollama/bin
chmod +x ~/ollama/bin/ollama
export PATH=$HOME/ollama/bin:$PATH
```

2. Run from Local or External

```bash
export OLLAMA_MODELS="$HOME/ollama/models"
ollama serve
# In a second terminal (ollama serve stays in the foreground):
ollama run dolphin-llama3
```
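To make the models path stick across sessions, you could append the export to your shell profile (assuming bash; adapt for zsh or fish):

```bash
# Persist OLLAMA_MODELS for future shells; single quotes keep $HOME
# unexpanded in the file so it resolves at login.
echo 'export OLLAMA_MODELS="$HOME/ollama/models"' >> ~/.bashrc
source ~/.bashrc
```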

For external drives:

```bash
cd /media/username/drive_name
export OLLAMA_MODELS="/media/username/drive_name/ollama/models"
./ollama/ollama serve
# In a second terminal (repeat the cd and export there):
./ollama/ollama run dolphin-llama3
```

(The binary lives inside the ollama/ folder here, matching the Windows layout; a file and a folder cannot share the name ollama at the drive root.)

3. Startup Script (start.sh)

```bash
#!/bin/bash
# Resolve the directory this script lives in, so paths work from any mount point.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export OLLAMA_MODELS="$SCRIPT_DIR/ollama/models"

# Start the Ollama server in the background.
"$SCRIPT_DIR/ollama/ollama" serve &

# Wait until the API answers on port 11434 before launching the UI.
while ! curl -s http://localhost:11434 >/dev/null 2>&1; do sleep 1; done

"$SCRIPT_DIR/anythingllm/AnythingLLM.AppImage" &
```

Make it executable: `chmod +x start.sh`
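Typical usage from the mounted drive then looks like this (the mount path is an example; yours will differ):

```bash
cd /media/username/drive_name
./start.sh
```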


๐ŸŽ macOS Instructions

1. Setup

```bash
# Assumes the ollama binary is in ~/ollama/bin; if you installed the
# desktop app from https://ollama.com/ the CLI is also bundled with it.
mkdir -p ~/ollama/bin
chmod +x ~/ollama/bin/ollama
export PATH=$HOME/ollama/bin:$PATH
```

2. Run the Model

```bash
export OLLAMA_MODELS="$HOME/ollama/models"
ollama serve
# In a second terminal:
ollama run dolphin-llama3
```

For external:

```bash
cd /Volumes/YourDriveName
export OLLAMA_MODELS="/Volumes/YourDriveName/ollama/models"
./ollama/ollama serve
# In a second terminal (repeat the cd and export there):
./ollama/ollama run dolphin-llama3
```

3. macOS Startup Script

```bash
#!/bin/bash
# Resolve the directory this script lives in, so paths work from any mount point.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export OLLAMA_MODELS="$SCRIPT_DIR/ollama/models"

# Start the Ollama server in the background.
"$SCRIPT_DIR/ollama/ollama" serve &

# Wait for port 11434 to accept connections, then launch the UI.
while ! nc -z localhost 11434; do sleep 1; done

open "$SCRIPT_DIR/anythingllm/AnythingLLM.app"
```

Make executable: `chmod +x start.sh`
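macOS Gatekeeper quarantines executables copied from another machine; if the binary or app refuses to launch, removing the quarantine attribute is a common workaround (paths assume the drive layout above):

```bash
# Remove the quarantine flag Gatekeeper adds to copied executables.
xattr -d com.apple.quarantine ./ollama/ollama
xattr -dr com.apple.quarantine ./anythingllm/AnythingLLM.app
```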


📦 Models and RAM Requirements

| Model | Parameters | Min RAM Needed | Notes |
| --- | --- | --- | --- |
| dolphin-llama3 | 8B | ~6GB | Uncensored, Llama 3-based |
| dolphin-llama3.1 | 8B+ | ~6GB | Uses the Llama 3.1 architecture |
| tinydolphin | 1.1B | ~1-2GB | Lightweight alternative |
| deepseek-r1:14b | 14B | ~12-16GB+ | More powerful |
| deepseek-r1:32b | 32B | ~30GB+ | High-end usage |
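Swapping models is just a different tag on the same commands; for example, on a low-RAM machine:

```bash
# Pull and run the 1.1B model instead of the 8B default.
ollama pull tinydolphin
ollama run tinydolphin
```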

🧾 Final Notes

  • AnythingLLM works as a chat UI layer on top of Ollama, configurable with a .env file (see the sketch after this list).

  • Ensure paths are OS-specific (slashes, drive letters).

  • Use NTFS for large file transfers, since FAT32 cannot store files over 4GB and model files regularly exceed that.

  • Store long-term backups on Blu-ray for durability (20+ years).

  • For lower-end devices, choose lightweight models like tinydolphin.
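As an illustration only, a Docker-based AnythingLLM .env pointing at a local Ollama might contain entries like the ones below; the key names follow AnythingLLM's .env.example and can change between versions, so verify against your copy:

```ini
# Hypothetical AnythingLLM .env excerpt; check .env.example for current keys.
LLM_PROVIDER='ollama'
OLLAMA_BASE_PATH='http://127.0.0.1:11434'
OLLAMA_MODEL_PREF='dolphin-llama3'
OLLAMA_MODEL_TOKEN_LIMIT=4096
```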


© 2025 Clayton Johnson SEO, AI & Automation | Martech Strategist