# Ollama Integration Guide
This project now supports Ollama as an AI backend alongside Google Gemini, with per-bot configuration allowing you to mix providers and personalities across multiple bots.
## Configuration Hierarchy
AI settings are merged in this order:

1. Global defaults in `conf/base.js` → `ai` object
2. Bot-specific overrides in `conf/secrets.js` → `mc.bots.{botName}.plugins.Ai`
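The effective settings a bot runs with behave like the per-bot object layered over the global defaults. Below is a minimal sketch of that idea only; it is not the project's actual merge code, and `config` and `botName` are placeholder names:

```js
// Illustrative sketch: per-bot plugin settings layered over global defaults.
// Assumes `config` is the loaded configuration and `botName` is the bot's key.
const effectiveAi = {
    ...config.ai,                             // global defaults from conf/base.js
    ...config.mc.bots[botName].plugins.Ai,    // per-bot overrides from conf/secrets.js
};
```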
## Configuration

### Global Defaults (Optional)

Edit `conf/base.js` to set defaults for all bots:
"ai":{
// Default provider (can be overridden per-bot)
"provider": "gemini", // or "ollama"
// Gemini API key (used by Gemini provider)
"key": "<configure in conf/secrets.js>",
// Ollama settings (used by Ollama provider)
"baseUrl": "http://localhost:11434",
"model": "llama3.2",
"timeout": 30000,
// Generation settings (applies to both providers)
"temperature": 1,
"topP": 0.95,
"topK": 64,
"maxOutputTokens": 8192,
"interval": 20,
// ... prompts
}
### Per-Bot Configuration

Edit `conf/secrets.js` to configure each bot individually:
**Example 1: Bot using default global settings**

```js
"art": {
    "username": "art@vm42.us",
    "commands": ['fun', 'invite', 'default'],
    "auth": "microsoft",
    "plugins": {
        "Ai": {
            "promptName": "helpful",
            // Uses global provider settings from base.js
        }
    },
},
```
**Example 2: Bot using a specific Ollama instance**

```js
"ayay": {
    "username": "limtisengyes@gmail.com",
    "commands": ['fun', 'invite', 'default'],
    "auth": "microsoft",
    "plugins": {
        "Ai": {
            "promptName": "asshole",
            "provider": "ollama",
            "baseUrl": "http://192.168.1.50:11434", // Remote Ollama
            "model": "llama3.1:8b",
            "interval": 25,
        }
    }
},
```
**Example 3: Bot using Gemini with custom settings**

```js
"nova": {
    "username": "your@email.com",
    "auth": "microsoft",
    "commands": ['default', 'fun'],
    "plugins": {
        "Ai": {
            "promptName": "helpful",
            "provider": "gemini",
            "model": "gemini-2.0-flash-exp",
            "temperature": 0.7,
        }
    }
},
```
## Multiple Bots with Different Providers
You can run multiple bots with different providers simultaneously:
```js
// conf/secrets.js
"bots": {
    "bot1": {
        "plugins": {
            "Ai": {
                "promptName": "helpful",
                "provider": "gemini", // Uses Google Gemini
                "model": "gemini-2.0-flash-exp",
            }
        }
    },
    "bot2": {
        "plugins": {
            "Ai": {
                "promptName": "asshole",
                "provider": "ollama", // Uses local Ollama
                "baseUrl": "http://localhost:11434",
                "model": "llama3.2",
            }
        }
    },
    "bot3": {
        "plugins": {
            "Ai": {
                "promptName": "Ashley",
                "provider": "ollama", // Uses remote Ollama
                "baseUrl": "http://192.168.1.50:11434",
                "model": "mistral",
            }
        }
    }
}
```
## Mixing Personalities and Models
Each bot can have:
- Different provider (Gemini or different Ollama instances)
- Different model (`llama3.2`, `mistral`, `qwen2.5`, etc.)
- Different personality (`helpful`, `asshole`, `Ashley`, custom)
- Different settings (`temperature`, `interval`, etc.)
"helpfulBot": {
"plugins": {
"Ai": {
"promptName": "helpful",
"provider": "ollama",
"baseUrl": "http://server1:11434",
"model": "llama3.2:3b",
"temperature": 0.5,
"interval": 15,
}
}
},
"toxicBot": {
"plugins": {
"Ai": {
"promptName": "asshole",
"provider": "ollama",
"baseUrl": "http://server2:11434",
"model": "llama3.2:70b",
"temperature": 1.2,
"interval": 30,
}
}
},
## Ollama Setup

### Install Ollama
```bash
# Linux/macOS
curl -fsSL https://ollama.com/install.sh | sh

# Or download from https://ollama.com/download
```
### Pull Models
```bash
# Recommended for chat bots:
ollama pull llama3.2
ollama pull mistral
ollama pull qwen2.5

# Specific sizes for performance tuning:
ollama pull llama3.2:3b   # Fast, lightweight
ollama pull llama3.1:8b   # Good balance
ollama pull llama3.1:70b  # Smarter, slower
```
### Start Ollama Server
```bash
# Local only
ollama serve

# Allow remote connections (for multiple servers)
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```
### Configure Remote Ollama
To use Ollama on another machine:
1. On the Ollama server:

   ```bash
   OLLAMA_HOST=0.0.0.0:11434 ollama serve
   ```

2. In the bot config:

   ```js
   "Ai": {
       "provider": "ollama",
       "baseUrl": "http://ollama-server-ip:11434",
       "model": "llama3.2",
   }
   ```
## Ollama Model Recommendations
| Model | Size | Speed | Quality | Best For |
|---|---|---|---|---|
| `llama3.2:3b` | 3B | Very Fast | Good | Bots needing fast responses |
| `llama3.1:8b` | 8B | Fast | Very Good | General purpose |
| `llama3.1:70b` | 70B | Moderate | Excellent | Smart bots, complex prompts |
| `mistral` | 7B | Fast | Good | Balanced option |
| `qwen2.5:7b` | 7B | Fast | Very Good | Good instruction following |
| `gemma2:9b` | 9B | Fast | Good | Lightweight alternative |
## Troubleshooting

### Connection Refused
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Check a specific server
curl http://192.168.1.50:11434/api/tags
```
### Model Not Found
```bash
# Check available models
ollama list

# Pull the missing model
ollama pull llama3.2
```
### JSON Parsing Errors

Most models support JSON mode (a quick way to verify this is shown after this list). If issues occur:

- Switch to `llama3.1`, `qwen2.5`, or `mistral`
- Lower `temperature` (e.g., 0.7)
- Increase `maxOutputTokens` for longer responses
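To confirm that a given model can return valid JSON at all, you can hit Ollama's HTTP API directly. This is a rough standalone check (assumes Node 18+ for `fetch`, a local Ollama on the default port, and `llama3.2` already pulled), not part of the bot code:

```js
// Ask Ollama for JSON-constrained output and verify it parses.
(async () => {
    const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            model: 'llama3.2',
            prompt: 'Reply with a JSON object containing a single "message" field.',
            format: 'json', // tell Ollama to constrain the output to JSON
            stream: false,
        }),
    });
    const data = await res.json();
    console.log(JSON.parse(data.response)); // throws here if the model still returned broken JSON
})();
```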
### Slow Responses

- Use smaller models (`llama3.2:3b` instead of `llama3.1:70b`)
- Increase `interval` in the config
- Reduce `maxOutputTokens`
- Check network latency for remote Ollama instances
### Multiple Bots Overloading Ollama

If running many bots against one Ollama server:

- Use lighter models for less important bots
- Increase `interval` to space out requests (see the sketch after this list)
- Distribute bots across multiple Ollama instances
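One simple way to space out requests is to give each bot a different `interval`. An illustrative `conf/secrets.js` excerpt (the bot names here are placeholders):

```js
// Stagger intervals so one Ollama server doesn't receive
// all bots' requests at the same time.
"chatBot":  { "plugins": { "Ai": { "provider": "ollama", "interval": 20 } } },
"lobbyBot": { "plugins": { "Ai": { "provider": "ollama", "interval": 35 } } },
"questBot": { "plugins": { "Ai": { "provider": "ollama", "interval": 50 } } },
```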
## Available Personality Prompts

| Personality | Description | Best Model |
|---|---|---|
| `helpful` | Shy, helpful Jimmy | `llama3.2`, `mistral` |
| `asshole` | Sarcastic, unfiltered | `llama3.1:70b`, `gemini` |
| `Ashley` | Adult content | `llama3.2`, `gemini` |
| `custom` | Template for custom prompts | Any |
## Comparing Providers
| Feature | Gemini | Ollama |
|---|---|---|
| Cost | API cost | Free (local) |
| Latency | 200-500ms | 50-500ms (local) |
| Privacy | Cloud | 100% local |
| Multiple Servers | No | Yes |
| Model Choice | Limited | Any |
| Hardware | None Required | GPU Recommended |
| Offline | No | Yes |
## Command Reference
```
/msg botname ai <personality>                  # Change personality
/msg botname ai <personality> custom message   # Use a custom prompt
/msg wmantly load botname Ai <personality>     # Reload AI with new config
```