Running Local AI Models with Ollama
Run AI models locally on your computer using Ollama: Llama, Mistral, Gemma, and more.
Intermediate · 20 min
Setup Steps
1. Install Ollama on Linux:
curl -fsSL https://ollama.com/install.sh | sh

2. For macOS:
brew install ollama

3. Start the Ollama service:
ollama serve

4. Download and run a model (e.g., Llama 3.1):
ollama run llama3.1

5. Other popular models:
ollama run mistral
ollama run gemma2
ollama run codellama
ollama run phi3

6. List installed models:
ollama list

7. API usage:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Hello!"
}'

8. Python usage:
pip install ollama

import ollama
response = ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Hello!'}])
print(response['message']['content'])
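The Python example in step 8 sends a single message. For a multi-turn conversation you keep the full message history and append each assistant reply before the next turn. A minimal sketch of that pattern, where `reply_for` is a hypothetical stand-in for `ollama.chat` so the sketch runs without a local server:

```python
# Sketch of multi-turn chat: keep the whole history and append each
# assistant turn so the model sees prior context on the next call.

def reply_for(messages):
    # Hypothetical stand-in for: ollama.chat(model='llama3.1', messages=messages)
    return {"message": {"role": "assistant",
                        "content": f"(reply to: {messages[-1]['content']})"}}

history = []

def ask(prompt):
    history.append({"role": "user", "content": prompt})
    response = reply_for(history)        # real code: ollama.chat(model='llama3.1', messages=history)
    history.append(response["message"])  # keep the assistant turn for context
    return response["message"]["content"]

print(ask("Hello!"))
print(ask("And what did I just say?"))
print(len(history))  # 4: two user turns, two assistant turns
```

Swapping `reply_for` for a real `ollama.chat` call keeps the rest of the loop unchanged, since both return a dict with a `message` entry in the role/content shape shown above.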
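The `/api/generate` call in step 7 streams its answer as newline-delimited JSON objects, each carrying a `response` fragment, with `done` set to true on the final line. A sketch of how a client might reassemble the fragments (the sample lines are illustrative, not captured server output):

```python
import json

# Illustrative NDJSON lines shaped like Ollama's /api/generate stream.
stream_lines = [
    '{"model":"llama3.1","response":"Hel","done":false}',
    '{"model":"llama3.1","response":"lo!","done":false}',
    '{"model":"llama3.1","response":"","done":true}',
]

def collect_response(lines):
    """Concatenate "response" fragments until a line reports done."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

print(collect_response(stream_lines))  # prints Hello!
```

With a real request you would feed the HTTP response line by line into the same loop; passing `"stream": false` in the request body instead returns the whole answer in a single JSON object.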