Howto ramalama
install
install on fedora:
sudo dnf install python3-ramalama
install via pypi:
pip install ramalama
install script linux/mac:
curl -fsSL https://ramalama.ai/install.sh | bash
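verify the installation (a quick sketch; assumes ramalama is now on your PATH):
ramalama version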
usage
set environment variables (use docker as the container engine, expose only GPU 0):
RAMALAMA_CONTAINER_ENGINE=docker CUDA_VISIBLE_DEVICES="0"
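the variables can also be set inline for a single invocation (a sketch; adjust the GPU index to your hardware):
RAMALAMA_CONTAINER_ENGINE=docker CUDA_VISIBLE_DEVICES="0" ramalama run granite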
run model ibm granite:
ramalama run granite
pull model openai gpt-oss:
ramalama pull gpt-oss:latest
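confirm the model was pulled (sketch; list shows locally stored models):
ramalama list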
run model as a service:
ramalama serve gpt-oss
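the service exposes an OpenAI-compatible REST API; a minimal request sketch, assuming the default port 8080 and the /v1/chat/completions endpoint (the model field may be ignored by the backend):
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "gpt-oss", "messages": [{"role": "user", "content": "Hello"}]}'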
pull model deepseek-r1:
ramalama pull deepseek
run model as a detached service with the llama-stack API, a custom port, and a service name:
ramalama serve --port 8085 --api llama-stack --name deepseek-service -d deepseek
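check that the detached service is running (sketch; assumes ps lists running model containers):
ramalama ps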
stop model service:
ramalama stop deepseek-service