From foundational concepts to agentic systems — everything you need to know about how SAP is reshaping enterprise software with AI
Introduction
Artificial Intelligence is no longer a distant promise on an enterprise roadmap. It is embedded in the tools people use every day, reshaping how finance teams close their books, how HR managers attract talent, and how supply chains recover from disruption. At the center of this transformation stands SAP — a company with decades of process expertise now channeling that knowledge into one of the most ambitious AI strategies in enterprise software.
This post walks through the full SAP AI landscape: from the core building blocks of AI and foundational models, to SAP Joule, agentic AI, and the technical infrastructure that makes it all possible. Whether you are a business leader trying to understand where to start, a consultant helping clients unlock value, or a developer building custom AI solutions on SAP BTP — this guide is for you.
Part 1: Understanding AI — The Foundation Before the Platform
What Is Artificial Intelligence?
At its core, AI is a branch of computer science that teaches machines to do what we previously assumed only humans could — reasoning, learning, problem-solving, and language comprehension. Instead of being explicitly coded for every scenario, AI systems absorb patterns from data and use those patterns to respond intelligently to new situations.
Traditional software follows rules. AI learns them.
Three Levels of AI Capability
When discussing AI maturity, it is useful to think in three tiers:
Narrow AI is what we have today. It excels at specific tasks — recognising speech, classifying invoices, recommending products — but cannot generalise beyond its training domain. SAP Joule, GPT-4, and virtually all commercial AI in production fall into this category.
General AI refers to systems that can apply reasoning across any domain, just as a human can move between unrelated tasks. This remains largely theoretical and a subject of active research.
Super AI — where machines surpass human intelligence in every dimension — is conceptual for now, but its possibility shapes how we think about ethics and governance today.
The AI Technology Stack
Modern AI does not exist as a single technology. It is a layered ecosystem:
- Machine Learning (ML) gives systems the ability to learn from data without being explicitly programmed for each outcome
- Deep Learning extends ML using multi-layered neural networks that can recognise patterns in images, audio, or text at a level of complexity previously unachievable
- Natural Language Processing (NLP) enables machines to read, interpret, and generate human language — the foundation for every chatbot and document processing tool
- Generative AI takes things further: rather than analysing existing content, it creates new content — writing, code, images, summaries — by learning from what already exists
- Large Language Models (LLMs) are the engines behind generative AI. With billions of parameters, trained on vast text datasets, they understand context, intent, and nuance in ways that earlier models simply could not
Part 2: How AI Actually Works — From Data to Decision
The Role of Data in AI
A key insight that often gets lost in AI conversations: the quality of an AI system is inseparable from the quality of the data that trained it. AI does not possess inherent knowledge — it reflects the data it was given.
For enterprise AI to deliver real value, that data must be:
- Organised and accessible — well-structured and reachable by AI systems
- Accurate and relevant — current data that reflects the actual state of the business
- Complete and trustworthy — comprehensive enough to support confident predictions
Organisations that invest in strong data governance are not just doing housekeeping. They are building the foundation that determines how useful their AI will ultimately be.
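As a small concrete illustration of these criteria, a completeness-and-freshness filter can exclude records before any model ever sees them. The field names and cutoff below are invented for the example:

```python
# Illustrative only: records failing completeness or freshness checks are
# excluded before training or inference. Field names are invented.
from datetime import date

records = [
    {"customer": "ACME", "revenue": 1200, "as_of": date(2025, 9, 30)},
    {"customer": "Contoso", "revenue": None, "as_of": date(2025, 9, 30)},  # incomplete
    {"customer": "Initech", "revenue": 800, "as_of": date(2019, 1, 1)},    # stale
]

def is_trustworthy(r, cutoff=date(2024, 1, 1)) -> bool:
    """Keep only records that are complete and current."""
    complete = all(v is not None for v in r.values())
    current = r["as_of"] >= cutoff
    return complete and current

clean = [r for r in records if is_trustworthy(r)]
print([r["customer"] for r in clean])  # ['ACME']
```

Checks like these are the unglamorous half of AI quality: every downstream prediction inherits whatever passes this gate.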
The AI Lifecycle in Six Steps
- Data collection — structured and unstructured inputs are gathered from business systems, sensors, and other sources
- Data preparation — cleaning, labelling, and organising data to ensure quality
- Model training — algorithms analyse the data to discover patterns and relationships
- Testing and evaluation — the model is validated against new data before deployment
- Prediction and decision-making — the trained system begins generating outputs, automating tasks, or assisting decisions
- Continuous learning — as more data flows through the system, accuracy and relevance improve over time
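The six steps can be sketched end to end with a deliberately trivial "model" (a single learned threshold). The data and the midpoint rule are invented; the point is the shape of the pipeline, not the algorithm:

```python
# 1. Data collection: (invoice_amount, was_paid_late) pairs from a business system
raw = [(120.0, 0), (950.0, 1), (300.0, 0), (1100.0, 1), (80.0, 0), (None, 1)]

# 2. Data preparation: drop incomplete records
data = [(x, y) for x, y in raw if x is not None]

# 3. Model training: learn a threshold separating the two classes
paid_on_time = [x for x, y in data if y == 0]
paid_late = [x for x, y in data if y == 1]
threshold = (max(paid_on_time) + min(paid_late)) / 2  # midpoint rule

def predict(amount: float) -> int:
    """5. Prediction: flag invoices above the learned threshold."""
    return 1 if amount > threshold else 0

# 4. Testing and evaluation: validate against held-out examples
holdout = [(200.0, 0), (1400.0, 1)]
accuracy = sum(predict(x) == y for x, y in holdout) / len(holdout)

# 6. Continuous learning would mean re-running steps 1-4 as new pairs arrive
print(threshold, accuracy)  # 625.0 1.0
```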
LLMs: The Engine of Modern AI
Large Language Models represent a genuine step change in what AI can do. Unlike earlier text models designed for specific tasks, LLMs are general-purpose language engines built on transformer architecture. Their key innovation is the self-attention mechanism — a way of understanding how words relate to each other across the full length of a sentence or document, not just in sequence.
This is why an LLM can read a complex contract, understand the commercial intent, and produce an accurate summary — without ever having been explicitly taught what a contract is. LLMs do not memorise language. They learn the statistical relationships between words, phrases, and ideas across an enormous range of text.
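The self-attention mechanism itself is compact enough to write out in plain Python. This is the textbook scaled dot-product formula, softmax(QK^T / sqrt(d)) V, not any particular model's implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(K[0])
    out = []
    for q in Q:
        # Score this query against every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output row is a context-weighted mix of all value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# Three toy token vectors attending to each other
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(Q, K, V)
print(ctx)
```

Because every query is scored against every key, each token's output reflects the whole sequence at once; this is what lets the model relate words across the full length of a document rather than only in order.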
Retrieval-Augmented Generation (RAG)
One of the most important concepts in enterprise AI is RAG — Retrieval-Augmented Generation. Here is the problem it solves: LLMs are trained on data up to a certain point in time and have no access to live business information. Ask a standard LLM about your Q3 revenue by region, and it will either fabricate an answer or admit it does not know.
RAG changes this by giving the model a retrieval step before it generates a response:
- The user submits a query
- The system searches a live knowledge base or document store for relevant information
- That retrieved content is combined with the original query
- The LLM generates its response grounded in real, current data
The results are dramatically better: fewer hallucinations, more relevant answers, and outputs that can actually be trusted in business contexts. For SAP customers, this means Joule can answer questions like “What are my open purchase orders?” by pulling live S/4HANA data before formulating a response.
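The four-step loop can be sketched in miniature. The `call_llm` function below is a placeholder for a real model endpoint, and the keyword-overlap retrieval stands in for a proper vector index; both are assumptions made for illustration:

```python
DOCS = [
    "Q3 revenue in EMEA was 4.2M EUR, up 8% year over year.",
    "Headcount in the Berlin office grew to 112 in Q3.",
    "Q3 revenue in APAC was 2.9M EUR, down 3% year over year.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Step 2: rank documents by shared words with the query (toy scoring;
    a production system would use embeddings and a vector store)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. via a managed gateway)."""
    return f"[LLM response grounded in]\n{prompt}"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCS))             # steps 1-2
    prompt = f"Context:\n{context}\n\nQuestion: {query}"   # step 3
    return call_llm(prompt)                                # step 4

print(rag_answer("What was Q3 revenue by region?"))
```

Note that the revenue documents outrank the headcount one purely because of the retrieval step: the model only ever sees context that matched the question.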
Part 3: SAP Business AI — Intelligence at Enterprise Scale
The Market Moment
The pace of AI adoption has accelerated faster than most organisations anticipated. According to McKinsey research cited by SAP, one-third of organisations are already using generative AI regularly in at least one business function, and 40% plan to increase AI investment because of generative AI’s potential. Morgan Stanley projects generative AI could add up to $4.1 trillion to the global economy over the next three years — roughly equivalent to Germany’s entire GDP.
SAP’s Strategic Approach
SAP Business AI is the company’s overarching strategy for embedding intelligence throughout its software portfolio. It is not a standalone product or an add-on layer — it is the philosophy that AI should work where business processes actually happen, inside the applications people already use.
Three principles anchor this approach:
- Transformative impact across every function — AI should create measurable value in finance, HR, procurement, supply chain, customer experience, and IT, not just in isolated pilots
- Intuitive, productivity-first tools — the benefit of AI should be accessible without requiring technical expertise
- Responsible scaling — governance, transparency, and ethical standards must grow alongside capability
The Three Pillars of SAP Business AI
Embedded AI — intelligent capabilities woven directly into core applications like S/4HANA, SuccessFactors, Ariba, and SAP Analytics Cloud. These work in the background, surfacing predictions and automating steps that would otherwise require manual intervention.
Joule — SAP’s omnipresent AI copilot and the primary interface through which users interact with AI across the portfolio. Joule bridges applications, allowing a single conversational experience to reach across finance, HR, procurement, and more.
AI Foundation — the technical infrastructure on SAP BTP that enables organisations to build, deploy, and govern custom AI solutions. This is where developers and AI engineers work when out-of-the-box capabilities need to be extended or tailored.
Part 4: Joule — SAP’s AI Copilot
More Than a Chatbot
Joule is frequently described as a copilot, but that framing undersells what it actually does. It is a role-aware, context-driven assistant that understands the user’s position in the organisation, the applications they are connected to, and the business processes relevant to their work.
When a CFO asks Joule to summarise regional revenue trends, it does not just search the internet. It queries the right SAP system, retrieves live financial data, and frames the answer in business terms that matter to a CFO. When a procurement manager asks about vendors with delayed shipments, Joule reaches into the supply chain data and returns a relevant, actionable response.
How Joule Processes a Request
- The user submits a query through the Joule interface
- Joule identifies the user’s context — which applications are connected, what roles and permissions they hold
- A vector search scans the scenario catalog to find the best-matched capability
- If relevant documents exist (shared via SharePoint or other grounding sources), those are retrieved too
- Dialog management combines user input, conversation history, system context, and retrieved content into a structured prompt
- The LLM accessed via SAP’s Generative AI Hub generates the response
- The answer is returned in the appropriate format — summary, table, action, or next-step guidance
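The flow above can be approximated in code. None of the names below are real Joule APIs; they are invented to show how user context, scenario matching, grounding, and prompt assembly compose into a single model call:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    role: str
    connected_apps: list
    permissions: list

# Stand-in for the scenario catalog; values are hypothetical capability IDs
SCENARIOS = {
    "open purchase orders": "procurement.po_lookup",
    "regional revenue summary": "finance.revenue_summary",
}

def match_scenario(query: str) -> str:
    """Stand-in for the vector search over the scenario catalog:
    pick the scenario sharing the most words with the query."""
    q = set(query.lower().split())
    return max(SCENARIOS, key=lambda s: len(q & set(s.split())))

def build_prompt(query, ctx, scenario, grounding):
    """Dialog management: fold context, scenario, and grounding into one prompt."""
    return (f"Role: {ctx.role}\nScenario: {SCENARIOS[scenario]}\n"
            f"Grounding: {grounding}\nUser: {query}")

ctx = UserContext("procurement_manager", ["S/4HANA"], ["po.read"])
scenario = match_scenario("show my open purchase orders")
prompt = build_prompt("show my open purchase orders", ctx, scenario,
                      "PO 4711 open; PO 4712 delivered")
print(prompt)  # the structured prompt an LLM would receive
```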
Joule for Every Role
- CHROs — accelerate hiring, reduce manual HR processes, and deliver more personalised employee experiences
- CFOs — automate financial workflows, improve cash flow visibility, and streamline compliance tasks
- COOs — optimise logistics, adjust forecasts in real time, and manage inventory more intelligently
- Developers — generate and validate code, navigate SAP documentation, and accelerate application delivery
- Consultants — instant access to over 200,000 pages of SAP knowledge, equivalent to more than 100 SAP certifications, through natural conversation
Joule for Developers: A Specialised Capability
The LLMs underpinning Joule’s developer mode have been trained on SAP-specific code — ABAP, CDS views, UI5, and related technologies. This means code suggestions and generation align with SAP standards rather than generic programming patterns. Developers can turn ideas into working SAP extensions faster, legacy ABAP code can be modernised more quickly, and citizen developers can contribute meaningfully through low-code tooling in SAP Build.
Part 5: Agentic AI — From Answering Questions to Taking Action
The Evolution Beyond Copilots
There is an important distinction between a copilot and an agent. A copilot responds to questions and assists with tasks when asked. An agent understands an objective, formulates a plan, takes actions across systems, monitors outcomes, and adjusts its approach — often without waiting to be prompted at each step. This shift from reactive assistance to proactive action is what SAP calls Agentic AI.
What Joule Agents Actually Do
Joule Agents are modular, task-specific AI systems that operate within SAP applications to carry out discrete business functions:
- A Purchase Order Creation Agent that turns a request into a completed PO
- A Timesheet Approval Agent that processes approvals based on policy rules
- A Talent Intelligence Agent in SuccessFactors that analyses skill gaps and recommends development paths
- A Sales Forecast Agent in SAP CX that projects pipeline and surfaces risk
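The copilot-versus-agent distinction can be made concrete with a toy timesheet agent. All names below are illustrative, not SAP APIs: the agent pursues a goal (clear the approval queue), acts where policy allows, and escalates the rest to a human, a simple human-in-the-loop pattern:

```python
timesheets = [
    {"id": 1, "hours": 38, "approved": False},
    {"id": 2, "hours": 60, "approved": False},  # violates the policy rule
    {"id": 3, "hours": 40, "approved": False},
]

def compliant(ts) -> bool:
    """Policy rule (invented for the example): at most 45 hours per week."""
    return ts["hours"] <= 45

def agent_step(queue) -> list:
    """One plan-act-observe cycle: approve what policy allows,
    escalate the rest to a person rather than acting autonomously."""
    escalated = []
    for ts in queue:
        if compliant(ts):
            ts["approved"] = True          # act
        else:
            escalated.append(ts["id"])     # adjust: route to human review
    return escalated

escalated = agent_step(timesheets)
print(escalated)  # [2]
```

A copilot would answer "which timesheets are pending?"; the agent clears the queue and reports only the exception.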
The Architecture Behind Agentic AI
SAP Knowledge Graph — a semantic layer that connects data across SAP and non-SAP systems, giving agents contextual understanding of business relationships. It is not enough to know what a purchase order says — the agent needs to understand how that PO connects to supplier performance, contract terms, and inventory levels.
SAP Business Data Cloud — a unified, governed data environment that brings together SAP transactional data and external sources into a single, trustworthy foundation. Agents can act with confidence because the data they are working from has been validated and harmonised.
Model Context Protocol (MCP)
MCP is the open standard that defines how AI agents connect to tools, data sources, and business systems securely — without requiring custom integrations for every combination of model and system. It distinguishes between three types of capabilities: Tools (executable functions with side effects), Resources (data sources for information retrieval), and Prompts (structured instructions ensuring consistent AI outputs).
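The three capability types can be pictured with plain dataclasses. This is a schematic, not the actual MCP SDK (real servers declare capabilities over JSON-RPC via the official SDKs), and the PO-related names are invented:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:            # executable function with side effects
    name: str
    run: Callable

@dataclass
class Resource:        # read-only data source for information retrieval
    uri: str
    fetch: Callable

@dataclass
class Prompt:          # structured instruction template for consistent output
    name: str
    template: str

# What a hypothetical procurement MCP server might advertise
server_capabilities = {
    "tools": [Tool("create_po", run=lambda item: f"PO created for {item}")],
    "resources": [Resource("erp://po/open", fetch=lambda: "PO 4711, PO 4712")],
    "prompts": [Prompt("summarise_po", "Summarise these POs: {pos}")],
}

print(server_capabilities["tools"][0].run("laptops"))   # invokes the side effect
print(server_capabilities["resources"][0].fetch())      # pure retrieval
```

The separation matters for governance: a client can grant an agent read access to Resources while gating Tools, which mutate state, behind approval.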
Agent-to-Agent Communication: The A2A Protocol
Google Cloud and SAP co-developed the Agent-to-Agent (A2A) protocol — an open standard enabling secure, scalable collaboration between AI agents regardless of who built them or where they run. Each agent publishes its capabilities in a standardised Agent Card format, allowing other agents to discover what it can do and delegate tasks accordingly. This interoperability enables true end-to-end automation across the enterprise.
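An Agent Card can be pictured as a small JSON document that one agent publishes and another reads to decide what to delegate. The field names below follow the general shape of A2A's published examples but are illustrative; consult the A2A specification for the authoritative schema:

```python
import json

# Hypothetical Agent Card for an invoice-dispute agent
agent_card = {
    "name": "dispute-resolution-agent",
    "description": "Resolves invoice disputes against contract terms",
    "url": "https://agents.example.com/dispute",   # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "resolve_dispute",
         "description": "Match a disputed invoice to contract clauses"}
    ],
}

# A peer agent discovers capabilities by fetching and parsing the card
card = json.loads(json.dumps(agent_card))
can_resolve = any(s["id"] == "resolve_dispute" for s in card["skills"])
print(can_resolve)  # True
```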
Part 6: AI Across Business Functions
Finance
AI in finance goes well beyond automating accounts payable. Embedded AI can match accounts receivable automatically (cutting processing time by up to 70%), predict late payments before they happen, detect errors during financial close, and enable analysts to query financial data in plain language — delivering a finance function that acts on insight faster and closes with higher confidence.
Supply Chain
Embedded AI in SAP’s supply chain solutions explains forecast variances in natural language, validates inbound deliveries by extracting and cross-checking shipping documents automatically, and guides transportation planning through conversational interfaces. Productivity improvements consistently reach 25–50% for planners and supervisory teams.
Procurement
Generative AI drafts RFPs and RFQs using historical sourcing data, identifies supplier risk by analysing market, ESG, and performance signals, and classifies spend automatically to surface savings opportunities that manual analysis would miss — moving procurement from reactive purchasing to strategic sourcing.
Human Resources
SAP Business AI embeds intelligence into SuccessFactors HCM at every stage of the employee lifecycle — creating equitable job descriptions, guiding employees through onboarding conversationally, assisting managers with performance goals and compensation recommendations, and answering policy questions instantly without routing a case to HR services.
Customer Experience
AI-powered CX capabilities enable a shift from reactive service to proactive engagement. Conversational agents handle order and product queries, quote creation agents turn email requests into formatted proposals, and case classification agents route support tickets while automatically generating knowledge articles for future deflection.
IT and Development
AI on SAP BTP can shorten development delivery cycles by up to 75% while reducing maintenance costs by around 30%. Code generation, logic explanation, documentation, and SAP-standards validation within familiar developer tools eliminate context-switching and accelerate delivery for both seasoned developers and citizen developers using low-code tooling.
Part 7: The AI Foundation — Building and Governing Custom AI on SAP BTP
Four Layers of the AI Foundation
- OS Interfaces — developer-facing tools including AI Playground, Joule Studio, and orchestration interfaces
- AI Kernel — the runtime layer managing models, workloads, agents, and compliance
- AI Integration — the connective tissue linking data sources, models, and services
- Peripheral and Data — where SAP data, non-SAP data, infrastructure, and partner models come together
Generative AI Hub
The Generative AI Hub provides governed, enterprise-grade access to frontier AI models — including GPT, Claude, Gemini, and SAP’s own foundation models — through a single interface. Model selection is centralised, the prompt lifecycle is managed, content filtering is built in, and a “bring your own model” strategy allows organisations to plug in fine-tuned models trained on proprietary data.
The Orchestration Service
Enterprise AI use cases rarely involve a single model call. The Orchestration Service manages grounding, data masking, input filtering, translation, LLM processing, output filtering, and response delivery as a coordinated pipeline — all executing through a single API call. Developers define the workflow once; it runs consistently across environments without application code changes when models evolve.
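The pipeline idea can be sketched as a list of composable stages behind one entry point. The stage implementations below are stand-ins (including the `call_llm` placeholder), not SAP's actual service:

```python
def mask(text: str) -> str:
    """Data masking: hide personal data before any model sees it."""
    return text.replace("Jane Doe", "<PERSON>")

def ground(text: str) -> str:
    """Grounding: attach retrieved business context to the request."""
    return text + "\nContext: open PO 4711 for supplier ACME"

def call_llm(prompt: str) -> str:
    """Placeholder for the managed model call."""
    return f"Answer based on: {prompt.splitlines()[-1]}"

def output_filter(text: str) -> str:
    """Output filtering: e.g. strip disallowed or malformed content."""
    return text.strip()

PIPELINE = [mask, ground, call_llm, output_filter]

def orchestrate(user_input: str) -> str:
    """The single entry point: the whole pipeline behind one call."""
    result = user_input
    for stage in PIPELINE:
        result = stage(result)
    return result

print(orchestrate("Summarise Jane Doe's open purchase orders"))
```

Because the workflow is declared once as a sequence of stages, swapping the model only changes `call_llm`; the masking, grounding, and filtering around it stay untouched, which is the point of orchestration.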
SAP Document AI and Knowledge Graph
SAP Document AI automates document intake and processing in any format using OCR, transformer models, and LLMs — reducing certificate processing time from ~10 minutes to ~3 minutes and eliminating manual sales order entry errors.
SAP Knowledge Graph connects data across finance, procurement, HR, and supply chain, enabling AI to understand not just what data says but what it means in business context — allowing Joule to answer nuanced questions by following entity relationships rather than returning flat query results.
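The difference between a flat query and following entity relationships can be shown with a toy graph. The edges, entities, and thresholds below are invented for illustration, not SAP Knowledge Graph APIs:

```python
# A toy semantic layer as an adjacency map of (entity, relation) -> value.
# Answering "is this PO at risk?" requires walking two relationship chains,
# something a flat single-table query cannot express.
GRAPH = {
    ("PO-4711", "ordered_from"): "ACME",
    ("ACME", "on_time_rate"): 0.62,
    ("PO-4711", "stocks_item"): "valve-9",
    ("valve-9", "inventory_days"): 4,
}

def at_risk(po: str) -> bool:
    """Follow PO -> supplier -> reliability, and PO -> item -> stock cover."""
    supplier = GRAPH[(po, "ordered_from")]
    item = GRAPH[(po, "stocks_item")]
    unreliable = GRAPH[(supplier, "on_time_rate")] < 0.8
    low_stock = GRAPH[(item, "inventory_days")] < 7
    return unreliable and low_stock

print(at_risk("PO-4711"))  # True
```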
Part 8: Responsible AI — Ethics, Governance, and Trust
SAP does not use customer data to train or refine its AI models. This is a firm commitment, not a policy preference.
SAP structures its AI ethics framework across three dimensions:
System Definition — SAP sets clear red lines: surveillance, discrimination, and manipulation are prohibited. Sustainability is embedded as a design requirement aligned with SAP’s Net Zero 2030 commitments. Human oversight is maintained through HITL (Human-in-the-Loop), HOTL (Human-on-the-Loop), and HIC (Human-in-Command) models.
System Engineering — fairness testing, bias mitigation, accessibility requirements, and transparency are built into the development process. Post-hoc explanation tools including SHAP and LIME help users understand why a model produced a particular output — critical in regulated industries.
Organisational Practice — humans, not machines, are accountable for AI outcomes. SAP maintains multi-stakeholder governance, engages with regulators and academic partners, and provides free learning resources to promote responsible AI literacy across its ecosystem. Its framework aligns with the EU AI Act and OECD AI Principles.
Part 9: Getting Started with SAP Generative AI Hub
The Setup Path
- Provision SAP AI Core (Extended edition required for Gen AI Hub access)
- Provision SAP AI Launchpad (Standard edition for model visibility)
- Connect your tools — Launchpad, Postman, or the SAP Cloud SDK for AI in Python or JavaScript
- Assign roles — genai_manager, prompt_manager, genai_experimenter, prompt_experimenter
- Deploy an Orchestration Service instance — the execution runtime without which no LLM calls can be made
Prompt Engineering Techniques That Matter
- Few-shot prompting — provide examples of the desired input-output pattern before the new task
- Meta prompting — define not just what the model should do, but how it should reason and validate its responses
- Multi-modal prompting — combine text with images or structured data to improve accuracy where visual context matters
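Few-shot prompting is easy to make concrete: the prompt embeds worked input-output examples before the real task. The documents and labels below are invented for illustration:

```python
# Worked examples that teach the model the desired input-output pattern
EXAMPLES = [
    ("Invoice 4711 from ACME, 1,200 EUR", "vendor_invoice"),
    ("Travel reimbursement for Berlin trip", "expense_claim"),
]

def few_shot_prompt(new_input: str) -> str:
    """Assemble a classification prompt: instruction, then the shots,
    then the new input with an open label for the model to complete."""
    shots = "\n".join(f"Input: {i}\nLabel: {o}" for i, o in EXAMPLES)
    return (f"Classify each document.\n{shots}\n"
            f"Input: {new_input}\nLabel:")

print(few_shot_prompt("Invoice 9001 from Contoso, 480 EUR"))
```

The trailing open "Label:" is deliberate: the model's most likely continuation is a label in the pattern the examples established.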
SAP’s Prompt Optimizer (developed with Not Diamond) automates prompt conversion when switching LLMs — eliminating days of manual migration work each time a better model becomes available. The Prompt Registry treats prompts like code: versioned, governed, and deployed via CI/CD pipelines.
Conclusion: Building the Intelligent Enterprise
SAP’s AI vision is coherent in a way that matters to enterprise buyers: it is not a collection of features bolted onto existing software. It is a rethinking of how enterprise applications create value — where intelligence is woven into processes, accessible through natural language, governed responsibly, and grounded in the business data that makes outputs trustworthy.
For organisations navigating this landscape, a few principles stand out:
- Start with the data — data quality is inseparable from AI quality. Investments in governance and harmonisation pay dividends across every AI initiative
- Think processes, not tools — the most impactful deployments are built around specific, high-value business processes, not general-purpose AI access
- Plan for governance from day one — responsible AI is not a retrofit. Security, compliance, and transparency are design requirements, not afterthoughts
- Experiment with purpose — SAP’s Generative AI Hub and Joule Studio lower the barrier to experimentation, but anchor experiments to real business questions
The intelligent enterprise is not a future state. For organisations prepared to invest thoughtfully, it is available today.
This post covers SAP Business AI, Joule, AI Foundation, Agentic AI, and related capabilities as of 2025–2026. For a deeper dive into any individual topic, explore the SAP Business AI: The Complete Learning Series.