Part 1 of the SAP Business AI Series
Before you can make sense of SAP Joule, the Generative AI Hub, or agentic workflows, you need a solid grounding in what Artificial Intelligence actually is — not the marketing version, but the practical understanding that lets you ask better questions, evaluate claims critically, and make smarter decisions when advising clients or building solutions.
This post covers the essentials: the definition of AI, how it evolved, the three capability tiers every professional should understand, and the technology stack that powers modern enterprise AI.
What Is Artificial Intelligence?
Artificial Intelligence is a branch of computer science focused on building systems that can perform tasks normally requiring human intelligence — things like understanding language, recognising patterns, solving problems, and making decisions based on incomplete information.
The key distinction between traditional software and AI is this: traditional software follows rules that a programmer explicitly defines. AI learns its own rules from data. Feed it enough examples, and it figures out the patterns on its own — often finding correlations that no human would have thought to look for.
Traditional software follows rules. AI learns them.
This distinction matters enormously in an enterprise context. When SAP embeds AI into invoice matching, demand forecasting, or employee onboarding, it is not running a scripted process with pre-programmed outcomes. It is running a system that has learned from millions of similar transactions and continuously refines its performance as new data arrives.
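The "rules vs learned rules" distinction can be made concrete with a toy sketch. The scenario (flagging unusually large invoices), the hard-coded threshold, and the sample amounts below are all invented for illustration — this is not how any SAP product implements invoice matching:

```python
# Traditional software: a programmer hard-codes the rule.
def flag_invoice_rule_based(amount):
    return amount > 10_000  # explicit, fixed threshold chosen by a human

# ML-style approach: the "rule" (here, a simple cut-off) is learned from data.
def learn_threshold(past_amounts, k=3.0):
    """Derive a cut-off as mean + k standard deviations of past invoices."""
    n = len(past_amounts)
    mean = sum(past_amounts) / n
    var = sum((x - mean) ** 2 for x in past_amounts) / n
    return mean + k * var ** 0.5

history = [120, 450, 980, 300, 760, 540, 610]   # invented past invoice amounts
threshold = learn_threshold(history)

def flag_invoice_learned(amount):
    return amount > threshold
```

The learned version adapts automatically when `history` changes; the rule-based version only changes when a programmer edits it. Real ML models learn far richer rules than a single threshold, but the principle is the same.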
The Three Tiers of AI Capability
AI professionals and vendors often speak loosely about AI capability without distinguishing what is real, what is emerging, and what is speculative. These three tiers give you a precise framework:
Narrow AI — Where We Are Today
Narrow AI is designed to do one thing exceptionally well. It cannot generalise beyond the problem it was built for — but within that domain, it can outperform humans consistently at scale. Everything you interact with today, from SAP Joule to voice assistants, recommendation engines to fraud detection systems, is Narrow AI.
Its power comes from focus. A demand forecasting model trained on three years of supply chain data does not get distracted, does not have bad days, and processes thousands of SKUs in the time it takes a human analyst to review ten. That focused, tireless performance is what creates enterprise value.
General AI — The Research Frontier
General AI refers to systems that can apply intelligence across any domain, switching between tasks with the same flexibility a human does. A human accountant can also write a policy memo, coach a junior colleague, and evaluate a supplier contract — drawing on a broad base of reasoning and lived experience. General AI would do the same.
We do not have General AI today. The most capable LLMs appear general because they span many domains, but they have no real understanding — they are pattern-matching at extraordinary scale. True General AI remains an active research challenge with no clear timeline.
Super AI — Conceptual, Not Imminent
Super AI is the stage where machines exceed human intelligence across every dimension — creativity, emotional understanding, strategic reasoning, and self-directed learning. It is conceptual today, but its possibility shapes how we think about ethics, governance, and the long-term trajectory of AI development. For enterprise practitioners, understanding it as a direction rather than an immediate concern is the right posture.
The AI Technology Stack
Modern enterprise AI is not a single technology — it is a stack of interconnected disciplines, each building on the one below. Understanding how they relate helps you map vendor claims to real capabilities.
Machine Learning (ML)
The foundation. Machine Learning gives systems the ability to learn from data and improve their performance over time without being explicitly reprogrammed. ML algorithms identify patterns and relationships in datasets, then apply those patterns to make predictions or decisions on new data. SAP uses ML extensively for invoice matching, anomaly detection, customer sentiment analysis, and demand forecasting.
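A minimal sketch of that idea — fit a pattern from historical data, then apply it to new inputs. The monthly demand figures below are invented, and a real forecasting model would use far more than a straight-line fit:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

months = [1, 2, 3, 4, 5, 6]
demand = [100, 112, 119, 133, 141, 155]   # invented units shipped per month
a, b = fit_linear(months, demand)

def forecast(month):
    """Apply the learned pattern to a month the model has not seen."""
    return a * month + b
```

Nothing in `fit_linear` encodes a rule about demand; the upward trend is extracted entirely from the data, which is the essence of ML.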
Deep Learning
A specialised subset of ML that uses neural networks — layered structures loosely inspired by the human brain. Each layer processes information and passes refined signals forward, with deeper layers capturing progressively more abstract features. Deep learning excels at unstructured data: images, audio, and text. It is the technology that made modern speech recognition, image classification, and document processing reliable enough for enterprise deployment.
Natural Language Processing (NLP)
NLP gives machines the ability to read, interpret, and generate human language. It is the layer that powers chatbots, document intelligence, and conversational interfaces. Without NLP, a user could not ask Joule a question in plain English and receive a meaningful response — every interaction would require structured inputs and formatted commands.
Generative AI
Generative AI is the technology that produces new content rather than just analysing existing content. Given a prompt, a generative model can write a paragraph, generate a product description, produce code, summarise a document, or create an image — not by retrieving stored content, but by constructing something new based on patterns it learned during training. This is what makes tools like SAP Joule and the Generative AI Hub genuinely different from search engines or rule-based assistants.
Large Language Models (LLMs)
LLMs are the engines behind generative AI for language tasks. Built on the transformer architecture and trained on vast text datasets, with billions of parameters, they track context, intent, tone, and the relationships between ideas across a document. They do not simply memorise text — they learn statistical patterns across language, which is why they can generate coherent, contextually appropriate responses to questions they have never specifically encountered before.
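"Statistical patterns across language" can be illustrated with the simplest possible language model: a bigram model that counts which word follows which and predicts the most frequent continuation. The tiny corpus is invented, and real LLMs learn vastly richer, longer-range patterns — but the underlying principle of predicting likely continuations is the same:

```python
from collections import Counter, defaultdict

corpus = ("the invoice was approved . the invoice was rejected . "
          "the order was approved .").split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen during 'training'."""
    return follows[word].most_common(1)[0][0]
```

The model has never stored any sentence verbatim; it generates continuations from the statistics it extracted, which is the toy analogue of how an LLM produces text it never saw during training.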
Foundation Models
Foundation Models are large AI systems trained via self-supervised learning on massive, diverse datasets. They can handle a wide range of tasks without task-specific training, and they serve as the base on which more specialised models are built through fine-tuning. GPT-4, Claude, and Gemini are all foundation models. SAP’s Generative AI Hub provides access to multiple foundation models, allowing organisations to choose the right one for each use case.
How Data Shapes AI Quality
One of the most important — and most overlooked — factors in enterprise AI success is data quality. AI does not possess inherent knowledge. It reflects what it was trained on. An AI system trained on incomplete, inconsistent, or biased data will produce incomplete, inconsistent, or biased outputs — at scale and at speed.
For enterprise AI to deliver genuine value, the underlying data needs to be:
- Organised and accessible — well-structured, stored where AI systems can retrieve it reliably
- Accurate and current — clean data that reflects the real state of the business today, not six months ago
- Complete and comprehensive — enough coverage that the AI does not have to fill gaps with guesswork
- Governed and trustworthy — managed with clear ownership, provenance, and privacy controls
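The four criteria above lend themselves to automated checks before data ever reaches a model. This is a hypothetical sketch: the field names, the required-field set, and the 180-day freshness window are all assumptions for illustration, not any SAP convention:

```python
from datetime import date, timedelta

REQUIRED = {"supplier_id", "amount", "currency", "updated_on"}  # assumed schema

def quality_issues(record, today=date(2025, 1, 1)):
    """Return a list of data-quality problems for one record."""
    issues = []
    missing = REQUIRED - record.keys()
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        issues.append("inaccurate: amount is not numeric")
    if "updated_on" in record and today - record["updated_on"] > timedelta(days=180):
        issues.append("stale: last updated more than six months ago")
    return issues
```

Running checks like these upstream is far cheaper than discovering the gaps as confident-but-wrong AI output downstream.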
This is why SAP’s investment in the SAP Business Data Cloud and the SAP Knowledge Graph is so strategically important. They are not just data management tools — they are the foundation that makes enterprise AI trustworthy and actionable rather than impressive but unreliable.
The AI Ecosystem: How It All Fits Together
It helps to visualise these technologies as nested layers rather than separate products:
- AI is the broadest field — machines performing tasks that normally require human intelligence
- ML sits inside AI — specifically the learning-from-data approach
- Deep Learning sits inside ML — the neural network approach for complex patterns
- Foundation Models sit inside Deep Learning — large, versatile pre-trained systems
- LLMs are Foundation Models specialised for language
- Generative AI refers to models (often LLMs) that create new content
- RAG (Retrieval-Augmented Generation) is a technique layered on top of Generative AI to ground outputs in real data
When you hear that SAP Joule is “powered by LLMs via the Generative AI Hub with RAG grounding,” every part of that sentence maps to one of these layers. Understanding the stack means you can evaluate claims, spot gaps, and ask the right questions.
Key Takeaways
- AI learns rules from data rather than following explicitly programmed instructions
- All commercial AI today is Narrow AI — highly capable within defined domains, but not generalisable
- The technology stack runs from ML → Deep Learning → Foundation Models → LLMs → Generative AI → RAG
- Data quality is not a side issue for AI — it determines AI quality. Bad data means bad AI, at scale
- Understanding these layers lets you evaluate vendor claims and make better implementation decisions
Next in the series: Post 2 — How LLMs, RAG & Generative AI Actually Work →