
Running Local AI Models with Ollama

Run local AI models with Ollama: Llama, Mistral, Gemma, and more.

Intermediate · 20 min

Setup Steps

1. Install Ollama on Linux:

curl -fsSL https://ollama.com/install.sh | sh
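
Once the script finishes, you can confirm the binary is on your PATH:

ollama --version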

2. On macOS, install via Homebrew:

brew install ollama
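
On macOS you can also let Homebrew run Ollama as a background service instead of starting it manually in step 3 (this assumes the formula's service definition is available on your system):

brew services start ollama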

3. Start the Ollama service:

ollama serve
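
This runs the server in the foreground on 127.0.0.1:11434. If you need to reach it from other machines, you can bind to all interfaces with the OLLAMA_HOST environment variable (keep in mind the API has no authentication):

OLLAMA_HOST=0.0.0.0 ollama serve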

4. Download and run a model (e.g., Llama 3.1):

ollama run llama3.1
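
The first run downloads the model weights (several gigabytes), then opens an interactive chat. To fetch a model ahead of time, or to pick a specific size tag, use pull and tagged names:

ollama pull llama3.1
ollama run llama3.1:70b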

5. Other popular models:

ollama run mistral
ollama run gemma2
ollama run codellama
ollama run phi3
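
Each of these opens an interactive session. You can also pass a prompt as an argument to get a single, non-interactive answer (the prompt here is just an example):

ollama run mistral "Explain what a context window is in one sentence."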

6. List installed models:

ollama list
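
Two related housekeeping commands: ollama ps shows models currently loaded in memory, and ollama rm deletes a downloaded model to free disk space:

ollama ps
ollama rm codellama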

7. API usage (the server exposes a REST API on http://localhost:11434):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Hello!"
}'
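
By default /api/generate streams back one JSON object per token. Set "stream": false for a single JSON response, or use the /api/chat endpoint, which takes a list of messages like the Python client below:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'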

8. Python usage. Install the official client library:

pip install ollama

Then, in a script or REPL:

import ollama

# Send one chat message and print the model's reply
response = ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Hello!'}])
print(response['message']['content'])
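
For long answers you can stream tokens as they arrive instead of waiting for the full response; passing stream=True to the client returns an iterator of partial chunks:

import ollama

# Print the reply piece by piece as the model generates it
for chunk in ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Write a haiku about local AI.'}],
    stream=True,
):
    print(chunk['message']['content'], end='', flush=True)
print()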