Deploy Ollama-based GenAI LLMs (AI stack)
First published: Wednesday, May 28, 2025 | Last updated: Wednesday, May 28, 2025

Deploy Ollama-based GenAI LLMs (AI stack) using the SloopStash Docker starter-kit.
Deploy and manage AI stack (Ollama-based GenAI LLMs) environments
Docker
The Linux machine should have at least 4 GB of RAM to run these Ollama-based GenAI LLMs with reasonable performance.
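The Docker Compose commands below read the ENVIRONMENT shell variable to select the env file (compose/${ENVIRONMENT^^}.env) and to name the Compose project. As a minimal sketch, assuming an environment named dev with a matching compose/DEV.env file in the starter-kit:

# Set the target environment (assumed example; the matching env file must exist).
$ export ENVIRONMENT=dev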
# Switch to the SloopStash Docker starter-kit directory.
$ cd /opt/kickstart-docker
# Provision OCI containers using Docker Compose.
$ sudo docker compose -f compose/ai/ollama/main.yml --env-file compose/${ENVIRONMENT^^}.env -p sloopstash-${ENVIRONMENT}-ai-s1 up -d
# Stop OCI containers using Docker Compose.
$ sudo docker compose -f compose/ai/ollama/main.yml --env-file compose/${ENVIRONMENT^^}.env -p sloopstash-${ENVIRONMENT}-ai-s1 down
# Restart OCI containers using Docker Compose.
$ sudo docker compose -f compose/ai/ollama/main.yml --env-file compose/${ENVIRONMENT^^}.env -p sloopstash-${ENVIRONMENT}-ai-s1 restart
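To confirm the stack is up, you can list the containers in the Compose project; this is an optional check using the standard Docker Compose CLI:

# List OCI containers in the Compose project.
$ sudo docker compose -f compose/ai/ollama/main.yml --env-file compose/${ENVIRONMENT^^}.env -p sloopstash-${ENVIRONMENT}-ai-s1 ps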
Ollama
Run DeepSeek R1 (1.5b)
# Pull the DeepSeek R1 (1.5b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-r1-1 supervisorctl start ollama:pull
# Run the DeepSeek R1 (1.5b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-r1-1 ollama run deepseek-r1:1.5b
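Because the pull is started as a Supervisor-managed process, supervisorctl returns immediately while the model downloads in the background. Before running the model, you can verify the download with standard supervisorctl and Ollama commands, for example:

# Check the status of the Supervisor-managed pull process.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-r1-1 supervisorctl status ollama:pull
# List GenAI LLMs available in the local Ollama store.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-r1-1 ollama list

The same checks apply to the other model containers below.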
Run DeepSeek Coder (1.3b)
# Pull the DeepSeek Coder (1.3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-coder-1 supervisorctl start ollama:pull
# Run the DeepSeek Coder (1.3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-deep-seek-coder-1 ollama run deepseek-coder:1.3b
Run Gemma 2 (2b)
# Pull the Gemma 2 (2b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-gemma-2-1 supervisorctl start ollama:pull
# Run the Gemma 2 (2b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-gemma-2-1 ollama run gemma2:2b
Run Llama 3.2 (3b)
# Pull the Llama 3.2 (3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-llama-3.2-1 supervisorctl start ollama:pull
# Run the Llama 3.2 (3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-llama-3.2-1 ollama run llama3.2:3b
Run Qwen 2.5 Coder (3b)
# Pull the Qwen 2.5 Coder (3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-qwen-2.5-coder-1 supervisorctl start ollama:pull
# Run the Qwen 2.5 Coder (3b) GenAI LLM inside the existing OCI container.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-qwen-2.5-coder-1 ollama run qwen2.5-coder:3b
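ollama run also accepts a one-shot prompt as a trailing argument, which is convenient for scripted smoke tests; the prompt below is only an illustration:

# Run a one-shot prompt against Qwen 2.5 Coder (3b) and exit.
$ sudo docker container exec -ti sloopstash-${ENVIRONMENT}-ai-s1-ollama-qwen-2.5-coder-1 ollama run qwen2.5-coder:3b "Write a hello-world program in Go."

In interactive mode, type /bye to exit the Ollama prompt.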