When following the SAP tutorial for deploying a custom LLM using Ollama on SAP AI Core, things look straightforward—until you try building the Docker image on Windows.
This blog walks through the real issues I faced, the exact error messages, and the fixes that actually worked. If you’re on Windows + Docker Desktop, this will save you hours.
Background
SAP provides an excellent tutorial for deploying a custom LLM using Ollama with SAP AI Core:
https://developers.sap.com/tutorials/ai-core-custom-llm.html
The tutorial assumes:
- Docker is installed
- You’re building a Linux-based image
- The Ollama installer “just works”
On Windows, that assumption breaks in a few places.
Environment
- OS: Windows 10/11
- Docker: Docker Desktop (buildx enabled by default)
- Target platform: `linux/amd64` (required by SAP AI Core)
- Base image: `ubuntu:22.04`
❌ Issue 1: Docker buildx requires 1 argument
Error
```
ERROR: docker: 'docker buildx build' requires 1 argument
```
Root cause
I forgot the build context (`.`) at the end of the command. Docker uses buildx internally, and without a context it doesn't know what to build.
✅ Fix
Always include the dot:
```shell
docker build --platform=linux/amd64 -t docker.io/<username>/ollama:ai-core .
```
❌ Issue 2: Dockerfile not found (Windows classic)
Error
```
failed to read dockerfile: open Dockerfile: no such file or directory
```
Root cause
On Windows, Notepad silently created `Dockerfile.txt`. Docker requires the file to be named exactly `Dockerfile` (no extension, capital D).
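The pitfall is easy to reproduce in a scratch directory with plain POSIX `sh` (on Windows, Git Bash or WSL). This is an illustrative sketch; `check_dockerfile` is a throwaway helper, not part of Docker:

```shell
# Simulate the Notepad pitfall in a throwaway directory.
check_dockerfile() {
  # Report whether a real "Dockerfile" (no extension) exists in the given directory.
  if [ -f "$1/Dockerfile" ]; then
    echo "Dockerfile found"
  elif [ -f "$1/Dockerfile.txt" ]; then
    echo "Dockerfile.txt found - rename it (no extension, capital D)"
  else
    echo "no Dockerfile at all"
  fi
}

dir=$(mktemp -d)
printf 'FROM ubuntu:22.04\n' > "$dir/Dockerfile.txt"  # what Notepad actually saved
check_dockerfile "$dir"                               # -> Dockerfile.txt found - rename it (no extension, capital D)
mv "$dir/Dockerfile.txt" "$dir/Dockerfile"            # the fix (PowerShell equivalent: ren Dockerfile.txt Dockerfile)
check_dockerfile "$dir"                               # -> Dockerfile found
rm -rf "$dir"
```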
✅ Fix
Rename it from PowerShell:
```shell
ren Dockerfile.txt Dockerfile
```
Verify:
```shell
dir
```
You should see `Dockerfile` in the listing.
❌ Issue 3: Ollama installer fails – zstd not found
Error
```
/bin/sh: 1: zstd: not found
```
or
```
ERROR: This version requires zstd for extraction
```
Why this happens
- The Ollama installer downloads a `.tar.zst` archive
- Ubuntu 22.04 does not include `zstd` by default
- The SAP tutorial doesn't mention this dependency
- On Windows Docker Desktop, this fails consistently
Important: Installing zstd on Windows does nothing — the error is inside the Linux container.
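A quick way to confirm which side is missing the tool: run a POSIX `command -v` probe in the container shell (e.g. via `docker run --rm -it ubuntu:22.04 sh`), not on the Windows host. The `probe` helper below is illustrative, not a real Docker or Ollama utility:

```shell
# Probe for the tools the Ollama installer needs, inside the container shell.
probe() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: present"
  else
    echo "$1: MISSING - add it to the apt-get install line"
  fi
}

probe curl
probe zstd
```

In a stock `ubuntu:22.04` container, `zstd` will report MISSING, which is exactly why the installer fails there.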
✅ The Fix That Actually Works
You must install zstd inside the Docker image before running the Ollama installer.
✅ Working Dockerfile snippet
```dockerfile
# Specify the base image (default dependencies) to use
ARG BASE_IMAGE=ubuntu:22.04
FROM ${BASE_IMAGE}

# Update and install dependencies
# (zstd is required by the Ollama installer; Ubuntu 22.04 does not ship it)
RUN apt-get update && \
    apt-get install -y \
        ca-certificates \
        nginx \
        zstd \
        curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install ollama
RUN curl -fsSL https://ollama.com/install.sh | sh

# Set environment variables for ollama (note: OLLAMA_HOST must be upper case)
ENV OLLAMA_HOST=0.0.0.0
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Configure nginx as a reverse proxy in front of the Ollama API
RUN echo "events { use epoll; worker_connections 128; } \
    http { \
      server { \
        listen 8080; \
        location ^~ /v1/api/ { \
          proxy_pass http://localhost:11434/api/; \
          proxy_set_header Host \$host; \
          proxy_set_header X-Real-IP \$remote_addr; \
          proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \
          proxy_set_header X-Forwarded-Proto \$scheme; \
        } \
        location ^~ /v1/chat/ { \
          proxy_pass http://localhost:11434/v1/chat/; \
          proxy_set_header Host \$host; \
          proxy_set_header X-Real-IP \$remote_addr; \
          proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \
          proxy_set_header X-Forwarded-Proto \$scheme; \
        } \
      } \
    }" > /etc/nginx/nginx.conf && \
    chmod -R 777 /var/log/nginx /var/lib/nginx /run

EXPOSE 8080

# Create a home directory for user "nobody", which the SAP AI Core runtime uses
RUN mkdir -p /nonexistent/.ollama && \
    chown -R nobody:nogroup /nonexistent && \
    chmod -R 770 /nonexistent

# Start nginx and the ollama service
CMD service nginx start && /usr/local/bin/ollama serve
```
🔁 Rebuild (Important)
Always rebuild without cache after changing dependencies:
```shell
docker build --no-cache --platform=linux/amd64 \
  -t docker.io/<username>/ollama:ai-core .
```
❌ Issue 4: Docker Registry Secret – Repository URL Not Clearly Documented
While deploying the custom LLM image to SAP AI Core, I also faced confusion when creating the Docker registry secret.
The SAP template references a registry secret, but does not clearly document the correct repository URL, especially for individual developers using Docker Hub.
This leads to failed deployments even though:
- The image exists
- The Docker build succeeded
- The image is public
🧩 Context: Why a Registry Secret Is Still Required
Even if your Docker image is public on Docker Hub, SAP AI Core still expects a registry secret to be defined and referenced in the ServingTemplate.
This is required because:
- SAP AI Core always pulls images via a registry abstraction
- The secret defines where and how to pull the image
❌ Common Mistake
A common source of confusion when creating the Docker registry secret for SAP AI Core is assuming that the repository's web page is the registry URL. For example:
```
https://hub.docker.com/repository/docker/<User Name>/ollama
```
❌ This is wrong. SAP AI Core cannot use it to pull the image; that link is for humans to browse the image on Docker Hub.
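The confusion stems from three different strings being involved: the repository web page, the image path, and the registry server. A minimal shell sketch of the distinction (the `registry_host` helper and the placeholder values are illustrative):

```shell
# The image *path* and the secret's *registry server* are different strings.
registry_host() {
  echo "${1%%/*}"   # everything before the first "/" is the registry host
}

IMAGE="docker.io/<username>/ollama:ai-core"   # what goes in the ServingTemplate
SECRET_REGISTRY="https://index.docker.io/"    # what goes in the registry secret

registry_host "$IMAGE"      # -> docker.io
echo "$SECRET_REGISTRY"     # -> https://index.docker.io/
```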
✅ Correct Registry URL for Docker Hub (Individual / Public)
For Docker Hub (public or private), the correct registry server URL is:
```
https://index.docker.io/
```
✅ This is the official Docker Hub registry endpoint
✅ This works for individual accounts
✅ This works for public images
✅ This works for SAP AI Core
The resulting secret payload looks like this:
```json
{
  ".dockerconfigjson": "{\"auths\":{\"https://index.docker.io\":{\"username\":\"<User Name>\",\"password\":\"<Personal Access Token>\"}}}"
}
```
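If you want to assemble that `.dockerconfigjson` value by hand rather than through the UI, the structure mirrors what `kubectl create secret docker-registry` generates. A hedged sketch with placeholder credentials (`make_dockerconfigjson` is a throwaway helper, not an SAP or Docker tool):

```shell
# Build a .dockerconfigjson payload by hand (placeholders, not real credentials).
make_dockerconfigjson() {
  user="$1"; token="$2"
  # "auth" is the base64 of "user:token", as Docker clients expect.
  auth=$(printf '%s:%s' "$user" "$token" | base64)
  printf '{"auths":{"https://index.docker.io":{"username":"%s","password":"%s","auth":"%s"}}}' \
    "$user" "$token" "$auth"
}

make_dockerconfigjson "<User Name>" "<Personal Access Token>"
```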
✅ Example: Creating the Registry Secret (conceptually)
When creating the Docker registry secret in SAP AI Core (via CLI or UI), use:
| Field | Value |
|---|---|
| Registry Server | https://index.docker.io/ |
| Username | Your Docker Hub username |
| Password | Docker Hub access token |
💡 Best practice: use a Docker Hub access token, not your password.
✅ Example: Referencing the Secret in ServingTemplate.yaml
```yaml
spec:
  imagePullSecrets:
    - name: dockerhub-secret   # name of the Docker registry secret you created in SAP AI Launchpad
  containers:
    - name: ollama
      image: docker.io/<username>/ollama:ai-core
```
Note that `imagePullSecrets` sits at the same level as `containers`, not inside a container entry.
Where:
- `dockerhub-secret` is the name you created in SAP AI Core
- The image path remains `docker.io/<username>/<image>:<tag>`
⚠️ Why SAP Documentation Is Confusing Here
- The tutorial assumes enterprise registries (Artifactory, ACR, ECR)
- Docker Hub specifics are not clearly called out
- The registry URL is not the same as the image path
This leads to trial-and-error failures during deployment.
This experience highlights an important lesson for SAP AI Core developers:
Always validate assumptions in tutorials against real-world environments—especially on Windows.