Ollama Installation Guide

This guide covers the key information and getting-started instructions you need after a successful Ollama installation via DeploySage-CLI. Ollama makes it easy to run large language models (LLMs) locally.

Getting Started

Your Ollama service is running. To start using it, you first need to download a model.
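
If you want to confirm the service is reachable before pulling anything, a quick request to the default port (11434) typically returns a short status message:

# Check that the Ollama server is responding (default port 11434)
curl http://localhost:11434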

# Pull the Llama 2 (7B) model from the library
ollama pull llama2

# Once downloaded, start a conversation
ollama run llama2

# To exit the conversation, type /bye
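
You can also pass a prompt directly on the command line to get a one-off response instead of an interactive session (the prompt text below is only an example):

# Ask a single question without entering an interactive session
ollama run llama2 "Explain what a large language model is in one sentence."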

Model Management

You can manage all your local models from the command line.

# List all models you have downloaded
ollama list

# Pull a different model (e.g., Mistral)
ollama pull mistral

# Remove a model to free up space
ollama rm llama2
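
Depending on your Ollama version, two other commands can be useful here: ollama show prints details about a downloaded model, and ollama ps lists the models currently loaded in memory (llama2 is used below only as an example).

# Show details (parameters, template, license) for a downloaded model
ollama show llama2

# List models currently loaded in memory
ollama ps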

Using the API

Ollama exposes a REST API for programmatic access; the server listens on port 11434 by default.

# Generate a response from the API using curl
curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt": "Why is the sky blue?" }'
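
By default, /api/generate streams the response back as a series of JSON objects. For a multi-turn, chat-style exchange, or to receive the reply as a single JSON object, you can use the /api/chat endpoint and set "stream" to false (the prompt below is just an example):

# Chat endpoint with streaming disabled (returns one JSON object)
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "stream": false
}'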