Howto ramalama
install
- fedora
install podman:
sudo dnf -y install podman podman-compose
install ramalama:
sudo dnf -y install python3-ramalama
install via pypi:
pip install ramalama
- debian/ubuntu
install podman:
apt install podman podman-compose -y
install ramalama:
curl -fsSL https://ramalama.ai/install.sh | bash
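quick check that both tools are installed (optional; prints the installed versions):
ramalama version
podman --version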
usage
set environment variables (choose the container engine and which GPU is visible):
RAMALAMA_CONTAINER_ENGINE=docker CUDA_VISIBLE_DEVICES="0"
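for example, to make these settings persist for the current session (a sketch, assuming a bash shell):
export RAMALAMA_CONTAINER_ENGINE=docker
export CUDA_VISIBLE_DEVICES="0"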
run the IBM Granite model:
ramalama run granite
pull the OpenAI gpt-oss model:
ramalama pull gpt-oss:latest
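list the locally stored models to confirm the pull:
ramalama list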
serve model:
ramalama serve gpt-oss
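the served model exposes an OpenAI-compatible HTTP API via the llama.cpp backend; a minimal request sketch, assuming the default port 8080:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-oss", "messages": [{"role": "user", "content": "hello"}]}'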
serve model with vulkan backend:
ramalama serve --image=quay.io/ramalama/ramalama:latest gemma3:4b
serve model with intel-gpu backend:
ramalama serve --image=quay.io/ramalama/intel-gpu:latest gemma3:4b
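to check which accelerator and container engine ramalama has detected, print its configuration:
ramalama info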
pull the DeepSeek-R1 model:
ramalama pull deepseek
serve a model as a daemon with the llama-stack API, a custom port, and a service name:
ramalama serve --port 8080 --api llama-stack --name deepseek-service -d deepseek
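confirm the daemonized service is running by listing ramalama's containers (ramalama ps is an alias of ramalama containers):
ramalama ps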
run a chat web UI against the llama-stack service:
podman run -it --rm --name ramalamastack-ui -p 8501:8501 -e LLAMA_STACK_ENDPOINT=http://host.containers.internal:8080 quay.io/redhat-et/streamlit_client:latest
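then open http://localhost:8501 in a browser; 8501 is the Streamlit port mapped above, and LLAMA_STACK_ENDPOINT points the UI at the llama-stack service started earlier.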
show the container runtime command that would be run, without executing it:
ramalama --dryrun run deepseek
stop model service:
ramalama stop deepseek-service
convert the specified model to an OCI-formatted AI model image:
ramalama convert ollama://tinyllama:latest oci://quay.io/rhatdan/tiny:latest
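push the resulting image to the registry (one option, using podman directly; assumes you are logged in to quay.io with podman login):
podman push quay.io/rhatdan/tiny:latest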
generate kubernetes yaml from a deployed container (replace containerid with the container's ID):
podman kube generate containerid -f myapp.yaml
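the container ID can be looked up first, for example:
podman ps --format "{{.ID}} {{.Names}}"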
remove all containers, including running ones:
podman rm -af
deploy containers using the generated yaml:
podman kube play myapp.yaml
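verify the pod created from the yaml is running:
podman pod ps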