Howto ramalama

install

  • fedora

install podman:

sudo dnf -y install podman podman-compose

install ramalama:

sudo dnf -y install python3-ramalama

install via pypi:

pip install ramalama
  • debian/ubuntu

install podman:

apt install podman podman-compose -y

install ramalama:

curl -fsSL https://ramalama.ai/install.sh | bash
  • archlinux

install podman:

pacman -Sy podman podman-compose --noconfirm

install yay using chaotic repo:

https://wiki.vidalinux.org/index.php?title=Howto_NVK#enable_chaotic_repo

install ramalama using yay:

yay -S ramalama
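
after installing with any of the methods above, verify the install by printing the version:

ramalama version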

usage

pull model openai gpt-oss:

ramalama pull gpt-oss:latest
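
list the models pulled so far:

ramalama list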

run model ibm granite:

ramalama run granite

pull model deepseek-r1:

ramalama pull deepseek

serve model:

ramalama serve gpt-oss
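
ramalama serve exposes an openai-compatible http api, by default on port 8080; a minimal check with curl, assuming the default port and backend:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"hello"}]}'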

serve model with vulkan backend:

ramalama serve --image=quay.io/ramalama/ramalama:latest deepseek

serve model with intel-gpu backend:

ramalama serve --image=quay.io/ramalama/intel-gpu:latest deepseek

serve model with nvidia-gpu backend:

ramalama serve --image=quay.io/ramalama/cuda:latest deepseek

serve model as daemon:

ramalama serve --port 8080 --name llamaserver -d deepseek

serve model as daemon with llama-stack and other options:

ramalama serve --port 8080 --api llama-stack --name llamaserver -d deepseek
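
verify the daemon is up by listing the containers ramalama manages:

ramalama containers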

chat webui for ramalama:

podman run -it --rm --name ramalamastack-ui -p 8501:8501 -e LLAMA_STACK_ENDPOINT=http://host.containers.internal:8080 quay.io/redhat-et/streamlit_client:latest
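
once the container is running the ui is reachable on the published port, for example:

xdg-open http://localhost:8501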

show container runtime command output without executing it:

ramalama --dryrun run deepseek

stop model service:

ramalama stop deepseek-service
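
depending on your ramalama version you may also be able to stop every running model service at once (check ramalama stop --help):

ramalama stop --all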

convert the specified model to an oci formatted ai model:

ramalama convert ollama://tinyllama:latest oci://quay.io/rhatdan/tiny:latest
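
the converted model can then be run straight from its oci reference (a sketch, assuming the image is available locally or in the registry):

ramalama run oci://quay.io/rhatdan/tiny:latest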

running as daemon

to run the container as a systemd daemon, create this directory:

mkdir ~/.config/containers/systemd/

create yaml from the deployed container:

CONTAINERID=$(podman ps|grep -v CONTAINER|awk '{print $1}')
podman kube generate ${CONTAINERID} -f ~/.config/containers/systemd/llamaserver.yaml

remove all running containers:

podman rm -af

create systemd quadlet file:

cat > ~/.config/containers/systemd/llamaserver.kube << EOF
[Unit]
Description = Run Kubernetes YAML with podman kube play

[Kube]
Yaml=llamaserver.yaml
EOF

reload systemd:

systemctl --user daemon-reload
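
confirm that quadlet generated a unit from the .kube file (the unit name follows the file name):

systemctl --user cat llamaserver.service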

start service using systemd:

systemctl --user start llamaserver.service
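
check the service status and, if it should keep running after logout, enable lingering for your user:

systemctl --user status llamaserver.service
loginctl enable-linger $USER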

references