Howto ramalama

install

  • fedora

install podman:

sudo dnf -y install podman podman-compose

install ramalama:

sudo dnf -y install python3-ramalama

install via pypi:

pip install ramalama

  • debian/ubuntu

install podman:

apt install podman podman-compose -y

install ramalama:

curl -fsSL https://ramalama.ai/install.sh | bash

  • archlinux

install podman:

pacman -Sy podman podman-compose --noconfirm

install yay using chaotic repo:

https://wiki.vidalinux.org/index.php?title=Howto_NVK#enable_chaotic_repo

install ramalama using yay:

yay -S ramalama
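
verify the install by printing the version:

ramalama version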

usage

set variables (e.g. to force the docker container engine and select the first GPU):

RAMALAMA_CONTAINER_ENGINE=docker
CUDA_VISIBLE_DEVICES="0"

pull model openai gpt-oss:

ramalama pull gpt-oss:latest
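
confirm the download by listing local models:

ramalama list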

run model ibm granite:

ramalama run granite

serve model:

ramalama serve gpt-oss
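
serve exposes an OpenAI-compatible REST API, by default on port 8080; a quick smoke test with curl (path and port are the defaults, adjust if changed):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"hello"}]}'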

serve model with vulkan backend:

ramalama serve --image=quay.io/ramalama/ramalama:latest gemma3:4b

serve model with intel-gpu backend:

ramalama serve --image=quay.io/ramalama/intel-gpu:latest gemma3:4b
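
ramalama picks a backend image based on the GPU it detects; to see what was detected:

ramalama info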

pull model deepseek-r1:

ramalama pull deepseek

serve model as daemon with llama-stack and other options:

ramalama serve --port 8080 --api llama-stack --name deepseek-service -d deepseek
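
check that the service container is up (ramalama containers, aliased as ps, lists them):

ramalama ps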

run a chat webui for ramalama:

podman run -it --rm --name ramalamastack-ui -p 8501:8501 -e LLAMA_STACK_ENDPOINT=http://host.containers.internal:8080 quay.io/redhat-et/streamlit_client:latest
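
the webui should then be reachable in a browser at http://localhost:8501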

show the container runtime command without executing it:

ramalama --dryrun run deepseek

stop model service:

ramalama stop deepseek-service
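
to stop all running model services at once:

ramalama stop --all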

convert the specified model to an oci-formatted ai model:

ramalama convert ollama://tinyllama:latest oci://quay.io/rhatdan/tiny:latest
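
the resulting oci model can then be run directly from its image reference:

ramalama run oci://quay.io/rhatdan/tiny:latest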

create kubernetes yaml from a deployed container:

podman kube generate containerid -f myapp.yaml
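
containerid above is a placeholder; get the actual id from the list of running containers:

podman ps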

remove all running containers:

podman rm -af

deploy the containers using the created yaml:

podman kube play myapp.yaml
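
verify the recreated pod:

podman pod ps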

references