Foundry IQ – Agentic retrieval solution – part 2

Context

In this blog post, we are going to deep dive into the end-to-end solution, including a walkthrough of the source code. For background, please ensure you have read part 1 of the Foundry IQ solution.

I also presume you now have your Foundry Project as well as your local development environment set up as per the guidance provided in part 1.

Let us jump in!

Is your local dev environment ready?

In part 1, we covered Local dev environment setup. You can access the source code used in this solution from my GitHub repository to get started.

Your solution should look like the screenshot below:

Ensure you have a file named .env within the solution folder, where we configure the endpoints and other settings needed for this solution.

These endpoints and the resource ID are available in the Azure portal (and within the Foundry portal, as shown below).

  • AZURE_SEARCH_ENDPOINT is on the Overview page of your search service.
  • PROJECT_ENDPOINT is on the Endpoints page of your project.
  • PROJECT_RESOURCE_ID is on the Properties page of your project.
  • AZURE_OPENAI_ENDPOINT is on the Endpoints page of your project’s parent resource.

Below is a sample .env file with these endpoints and settings populated and explained:

AZURE_SEARCH_ENDPOINT = https://{your-service-name}.search.windows.net
PROJECT_ENDPOINT = https://{your-resource-name}.services.ai.azure.com/api/projects/{your-project-name}
PROJECT_RESOURCE_ID = /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.CognitiveServices/accounts/{account-name}/projects/{project-name}
AZURE_OPENAI_ENDPOINT = https://{your-resource-name}.openai.azure.com
AZURE_OPENAI_EMBEDDING_DEPLOYMENT = text-embedding-3-large
AGENT_MODEL = gpt-4.1-mini

Activate the environment

Use the steps below to activate your Python virtual environment. These steps were already covered in part 1; I am sharing them again here for your convenience.

  • For macOS / Linux (in Terminal) source ai-env/bin/activate
  • For Windows (in Command Prompt or PowerShell): .\ai-env\Scripts\activate
  • You’ll know it worked because your terminal prompt will change to show the name of your environment, like this: (ai-env) C:\Users\YourName\Desktop\MyProject>.

Install required packages

With the environment active, install the required packages:

#Bash
pip3 install azure-ai-projects==2.0.0b1 azure-mgmt-cognitiveservices azure-identity ipykernel dotenv azure-search-documents==11.7.0b2 requests openai

Authenticate with Azure CLI

For keyless authentication with Microsoft Entra ID, sign in to your Azure account.

From your Terminal, run az login as shown below and follow the prompts to authenticate.

If you have multiple subscriptions, select the one that contains your Azure AI Search service and Microsoft Foundry project.

Start Jupyter Notebook

To start the application, type:

#Bash
jupyter-lab

If successful, the JupyterLab launcher will open in your browser, for example at http://localhost:8888/lab.

Start building and running code

At this point, we are set up to run the code snippets in our ai-agentic-retrieval.ipynb notebook, as shown below.

How to run Jupyter Notebook code listings

To run the code, select the cell or code block within the Jupyter Notebook, then press the Run (play) button on the top navigation bar. Alternatively, you can use the Shift + Enter keyboard shortcut.

Any expected response or output will then appear below the cell or code block, as shown below.

Step 1 – run code to Load environment variables

The following code loads the environment variables from your .env file and establishes connections to Azure AI Search and Microsoft Foundry.
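
The notebook has the full listing; a minimal sketch of this step, assuming the variable names from the .env file above, looks like the following:

#Python
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential

# Load the variables defined in the .env file into the process environment.
load_dotenv()

search_endpoint = os.environ["AZURE_SEARCH_ENDPOINT"]
project_endpoint = os.environ["PROJECT_ENDPOINT"]
project_resource_id = os.environ["PROJECT_RESOURCE_ID"]
aoai_endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
embedding_deployment = os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"]
agent_model = os.environ["AGENT_MODEL"]

# Keyless (Entra ID) credential reused by the Azure clients in later steps.
credential = DefaultAzureCredential()
print("Environment loaded for:", search_endpoint)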

Step 2 – run code to Create a search index

In Azure AI Search, an index is a structured collection of data. The following code creates an index to store searchable content for your knowledge base.

The index schema contains:

  • field for document identification
  • field for page content
  • embeddings
  • configurations for semantic ranking and vector search
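
The full listing is in the notebook; purely as an illustration of what such an index definition looks like with the azure-search-documents preview SDK, here is a sketch. The index name, field names, and vector dimensions below are assumptions, not the notebook's exact values:

#Python
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SearchField, SearchFieldDataType,
    VectorSearch, VectorSearchProfile, HnswAlgorithmConfiguration,
    SemanticConfiguration, SemanticPrioritizedFields, SemanticField, SemanticSearch,
)

index_name = "earth-at-night"  # assumed name for illustration

index = SearchIndex(
    name=index_name,
    fields=[
        # Field for document identification.
        SearchField(name="id", type=SearchFieldDataType.String, key=True, filterable=True),
        # Field for page content.
        SearchField(name="page_chunk", type=SearchFieldDataType.String, searchable=True),
        # Embeddings field (3072 dimensions for text-embedding-3-large).
        SearchField(
            name="page_embedding",
            type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
            searchable=True,
            vector_search_dimensions=3072,
            vector_search_profile_name="vector-profile",
        ),
    ],
    # Configuration for vector search.
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(name="hnsw")],
        profiles=[VectorSearchProfile(name="vector-profile", algorithm_configuration_name="hnsw")],
    ),
    # Configuration for semantic ranking.
    semantic_search=SemanticSearch(
        configurations=[
            SemanticConfiguration(
                name="semantic-config",
                prioritized_fields=SemanticPrioritizedFields(
                    content_fields=[SemanticField(field_name="page_chunk")]
                ),
            )
        ]
    ),
)

index_client = SearchIndexClient(endpoint=search_endpoint, credential=credential)
index_client.create_or_update_index(index)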

Step 3 – run code to Upload documents to the index

The following code populates the index with JSON documents from NASA’s Earth at Night e-book. As required by Azure AI Search, each document conforms to the fields and data types defined in the index schema.
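
As an illustrative sketch (not the notebook's exact listing), documents can be pushed with SearchClient.upload_documents. The sample-data URL below is an assumption based on the public Azure-Samples NASA e-book dataset, and the document fields must match whatever schema you actually created:

#Python
import requests
from azure.search.documents import SearchClient

# Assumed public location of the NASA "Earth at Night" JSON documents.
documents_url = (
    "https://raw.githubusercontent.com/Azure-Samples/azure-search-sample-data/"
    "main/nasa-e-book/earth-at-night-json/documents.json"
)
documents = requests.get(documents_url, timeout=30).json()

# Reuses search_endpoint, index_name, and credential from the earlier steps.
search_client = SearchClient(endpoint=search_endpoint, index_name=index_name, credential=credential)
result = search_client.upload_documents(documents=documents)
print(f"Uploaded {sum(1 for r in result if r.succeeded)} of {len(documents)} documents")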

Step 4 – run code to Create a knowledge source

A knowledge source is a reusable reference to source data. The following code creates a knowledge source that targets the index you previously created.

source_data_fields specifies which index fields are included in citation references. This example includes only human-readable fields to avoid lengthy, uninterpretable embeddings in responses.

Step 5 – run code to Create a knowledge base

The following code creates a knowledge base that orchestrates agentic retrieval from your knowledge source. The code also stores the MCP endpoint of the knowledge base, which your agent will use to access the knowledge base.

For integration with Foundry Agent Service, the knowledge base is configured with the following parameters:

  • output_mode is set to extractive data, which provides the agent with verbatim, unprocessed content for grounding and reasoning. The alternative mode, answer synthesis, returns pregenerated answers that limit the agent’s ability to reason over source content.
  • retrieval_reasoning_effort is set to minimal effort, which bypasses LLM-based query planning to reduce costs and latency. For other reasoning efforts, the knowledge base uses an LLM to reformulate user queries before retrieval.

Step 6 – run code to Set up a project client

Use AIProjectClient to create a client connection to your Microsoft Foundry project. Your project might not contain any agents yet, but if you’ve already completed this tutorial, the agent is listed here.
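
A minimal sketch, assuming the constructor takes the project endpoint and the keyless credential from step 1:

#Python
from azure.ai.projects import AIProjectClient

# Connect to the Microsoft Foundry project using the keyless credential from step 1.
project_client = AIProjectClient(endpoint=project_endpoint, credential=credential)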

Step 7 – run code to Create a project connection

The following code creates a project connection in Microsoft Foundry that points to the MCP endpoint of your knowledge base. This connection uses your project managed identity to authenticate to Azure AI Search.

Step 8 – run code to Create an agent with the MCP tool

The following code creates an agent configured with the MCP tool. When the agent receives a user query, it can call your knowledge base through the MCP tool to retrieve relevant content for response grounding.

The agent definition includes instructions that specify its behavior and the project connection you previously created. Based on our experiments, these instructions are effective in maximizing the accuracy of knowledge base invocations and ensuring proper citation formatting.

Review of Foundry portal changes

At this point, you can explore the Foundry portal to review the agent, the knowledge base, and the other resources we created for this solution.

Screenshot 1 – Agents page

Screenshot 2 – knowledge base page

Screenshot 3 – our knowledge base details page

Screenshot 4 – our knowledge source (showing Azure AI Search index)

Step 9 – run code to Chat with the agent

Your client app uses the Conversations and Responses APIs from Azure OpenAI to interact with the agent.

The following code creates a conversation and passes user messages to the agent, resembling a typical chat experience. The agent determines when to call your knowledge base through the MCP tool and returns a natural-language answer with references. Setting tool_choice="required" ensures the agent always uses the knowledge base tool when processing queries.
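
The notebook wires the conversation to the agent created in step 8; the exact call shape depends on the preview SDKs, so treat the following only as an illustrative sketch. It attaches the knowledge base's MCP endpoint directly as a Responses API tool; the api_version value, the server label, and the kb_mcp_endpoint variable (stored in step 5) are assumptions:

#Python
from azure.identity import get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")

# api_version is an assumption; use whichever preview version supports Conversations/Responses.
openai_client = AzureOpenAI(
    azure_endpoint=aoai_endpoint,
    azure_ad_token_provider=token_provider,
    api_version="2025-04-01-preview",
)

# Create a conversation, then pass a user message to the model with the MCP tool attached.
conversation = openai_client.conversations.create()

response = openai_client.responses.create(
    model=agent_model,
    conversation=conversation.id,
    input="Why do suburbs show up so clearly in the Earth at Night images?",
    tools=[{
        "type": "mcp",
        "server_label": "knowledge-base",   # hypothetical label
        "server_url": kb_mcp_endpoint,      # MCP endpoint stored in step 5
        "require_approval": "never",
    }],
    tool_choice="required",                 # always use the knowledge base tool
)
print(response.output_text)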

The response should be similar to the following:

Step 10 – run code to Clean up resources

When you work in your own subscription, it’s a good idea to finish a project by determining whether you still need the resources you created. Resources that are left running can cost you money.

In the Azure portal, you can manage your Azure AI Search and Microsoft Foundry resources by selecting All resources or Resource groups from the left pane.

You can also run the following code to delete individual objects:
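
As a minimal sketch, the search index from step 2 can be deleted as shown below; the knowledge source, knowledge base, project connection, and agent can be removed through their respective clients or in the portal:

#Python
# Delete the search index created in step 2 (reuses index_client and index_name).
index_client.delete_index(index_name)
print(f"Deleted index '{index_name}'")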

Full source code and references

You can access the full source code used in this solution from my GitHub repository. It is worth mentioning that I adapted my solution from a similar tutorial on Microsoft Learn.

Next steps

In this blog post, we took a deep dive into part 2 of our Foundry IQ – Agentic retrieval solution. We leveraged a Jupyter Notebook to run through the source code for the end-to-end solution, step by step. The complete code listing in the GitHub repository was also shared for your reference.

Stay tuned for future posts, feel free to leave us comments and feedback as well.

Foundry IQ – Agentic retrieval solution – part 1

Context

In this blog post, we are going to explore how to leverage Azure AI Search and Microsoft Foundry to create an end-to-end retrieval pipeline. Agentic retrieval is a design pattern intended for Retrieval Augmented Generation (RAG) scenarios as well as agent-to-agent workflows.

With Azure AI Search, you can now leverage the new multi-query pipeline designed for complex questions posed by users or agents in chat and copilot apps.

Using a sample end-to-end solution, complete with screenshots, I will walk you through creating this Foundry IQ solution.

What is Foundry IQ

Foundry IQ creates a separation of concerns between domain knowledge and agent logic, enabling retrieval-augmented generation (RAG) and grounding at scale. Instead of bundling retrieval complexity into each agent, you create a knowledge base that represents a complete domain of knowledge, such as human resources or sales. Your agents then call the knowledge base to ground their responses in relevant, up-to-date information.

This separation has two key benefits:

  • Multiple agents can share the same knowledge base, avoiding duplicate configurations.
  • You can independently update a knowledge base without modifying agents.

Powered by Azure AI Search, Foundry IQ consists of knowledge sources (what to retrieve) and knowledge bases (how to retrieve). The knowledge base plans and executes subqueries and outputs formatted results with citations.

High level architecture

The diagram above shows a high-level architecture of a Foundry IQ solution. The elements of the architecture are explained below.

1. Your App

This is your agentic application: a conversational application that requires complex reasoning over large knowledge domains.

2. Foundry Agent Service

Microsoft Foundry is your one-stop shop for hosting your Azure OpenAI model deployments, projects, and agents.

3. Azure AI Search

Azure AI Search is a fully managed, cloud-hosted service that connects your data to AI. It hosts the knowledge base, which handles query planning, query execution, and results synthesis.

Microsoft Foundry project setup

Follow these steps to set up a Microsoft Foundry project:

  1. Navigate to and log in to your Azure portal subscription.
  2. Search and select Microsoft Foundry
    • Azure portal Microsoft Foundry resource
  3. On the Microsoft Foundry overview page, select Create a resource
  4. Specify the Subscription details, Resource group to use, the Foundry Resource name, Region as well as Foundry Project name. Ensure you use the recommended naming conventions and best practices for your resources. Also ensure the Region selected supports Azure AI Search.
    • Below is Basics dialog page
    • Below is the Storage dialog page. You will notice I have created a new Cosmos DB, AI Search and Storage account for my Foundry project.
    • On the Identity dialog page, ensure you Enable a system-assigned managed identity for both your search service and your project.
    • Review and Create the Foundry Resources and Project. This may take a short while to complete
    • When completed, you will see the confirmation page below. You can then view the resources using the Go to resource button.
  5. Go to resource redirects you to the Microsoft Foundry portal, as shown below.
    • This is where you retrieve key settings that we will be using in this solution, such as the following:
      • AZURE_SEARCH_ENDPOINT is on the Overview page of your search service.
      • PROJECT_ENDPOINT is on the Endpoints page of your project.
      • PROJECT_RESOURCE_ID is on the Properties page of your project.
      • AZURE_OPENAI_ENDPOINT is on the Endpoints page of your project’s parent resource.
  6. On your search service, enable role-based access and assign the following roles. First, navigate to the Keys section and enable role-based access control
    • Then assign these roles shown below
Role | Assignee | Purpose
Search Service Contributor | Your user account | Create objects
Search Index Data Contributor | Your user account | Load data
Search Index Data Reader | Your user account and project managed identity | Read indexed content

On your project’s parent resource, assign the following roles.

Role | Assignee | Purpose
Azure AI User | Your user account | Access model deployments and create agents
Azure AI Project Manager | Your user account | Create project connection and use MCP tool in agents
Cognitive Services User | Search service managed identity | Access knowledge base

Local Dev environment setup

We will be using Python in this sample solution. I presume you already have Python installed in your development environment.

To verify that Python is set up and working correctly, open your command-line tool and type the following commands:

Bash
# Check the Python version.
python3 --version
# Check the pip version.
pip3 --version

If you are on Ubuntu, you should see output similar to my screenshot below:

Follow the steps provided in the README file in my accompanying source code repository to set up your local virtual environment. When successful, your ai-agentic-retrieval.ipynb Jupyter Notebook should look like the one shown below:

Source code repository

You can access the source code used in this solution from my GitHub repository.

Next steps

In this blog post, we looked at a Foundry IQ – Agentic retrieval solution. We started with a high-level architecture and the various elements within the Foundry IQ solution. I walked you through creating a project within Microsoft Foundry and configuring the access it needs, and I shared steps for preparing your local development environment for the Foundry IQ solution. In the follow-up blog post, we will deep dive into the end-to-end solution, including a walkthrough of the source code.

Stay tuned for future posts, feel free to leave us comments and feedback as well.

Sitecore Content Hub DevOps: New Import/Export engine with breaking changes is now default

Context and background

If you are already using DevOps for deployments with your Content Hub environments, then you are probably already aware of the breaking change that Sitecore introduced a few months ago. You can read the full notification on the Sitecore Support page. According to the notification, the new version of the package import/export engine became the default in both the UI and the CLI on Tuesday, September 30. Because of the breaking changes introduced, existing CICD pipelines won’t work. In fact, there is a high risk of breaking your environments if you try to use existing CICD pipelines without refactoring.

In this blog post, I will look in detail at what breaking changes were introduced and how to realign your existing CICD pipelines to work with the new import/export engine.

So what has changed in the new Import/Export engine?

Below is a screenshot from the official Sitecore docs summarizing the change. You can also access the change log here.

There are no further details available in the docs on the specifics of the breaking change. However, it is straightforward to figure out that Sitecore fundamentally changed the package architecture in the new import/export engine.

Exporting with the new engine

Within the Sitecore Content Hub Import/Export UI, you have the option to export components using both the previous/legacy engine and the new engine. As shown below, notice the Enable Legacy version toggle, which, when switched on, allows you to export a package with the previous/legacy engine.

Note also that Publish definition configurations and Email templates are now available for import/export with the new engine. Email templates are unchecked by default.

If you do a quick comparison between an export package from the old/legacy engine and one from the new engine, it becomes clear that Sitecore has updated the packaging structure to organise content by resource type rather than by export category.

This change makes navigation more straightforward and ensures greater consistency throughout the package.

Summary of the changes between legacy and new export packages

Below is a graphic showing how the package structure has changed. On the left-hand side is the legacy/old package and on the right-hand side is the new one.

Full comparison of package contents between old and new

Below is a more detailed comparison, showing how the packages differ.

Component | Legacy package sub folders | New package sub folders
Copy profiles | copy_profiles | entities
Email templates | n/a | entities
Entity definitions | entities, schema, option_lists | datasources, entities, schema
Export profiles | export_profiles | entities
Media processing | media_processing_sets | entities
Option lists | option_lists | datasources
Policies | policies | datasources, entities, policies, schema
Portal pages | entities, portal_pages | datasources, entities, policies, schema
Publish definition configurations | n/a | entities
Rendition links | rendition_links | entities
Settings | settings | entities
State flows | state_flows | datasources, entities, policies, schema
Taxonomies | taxonomies | datasources, entities, schema
Triggers | actions, triggers | entities
Scripts | actions, scripts | entities

Resources are grouped by type

Instead of separate folders like portal_pages, media_processing_sets, or option_lists, the new export engine places files according to their resource type.

For example:

  • All entities are stored in the entities/ folder.
  • All datasources (such as option lists) are found in the datasources/ folder.
  • Policies and schema files have their own dedicated folders.

Each resource is saved as an individual JSON file named with its unique identifier.

Related components are now separated

When a resource includes related items, such as a portal page referencing multiple components, each component is now saved in its own JSON file.

These files are no longer embedded or nested under the parent resource.

Updating your CICD pipelines

It is very straightforward to update your existing CICD pipelines once you have analysed and understood the new package architecture. You can revisit my previous blog post where I covered this topic in detail. You simply need to map your previous logic to work with the new package architecture. You will also need to re-baseline your Content Hub environments within your source control so that you are using the new package architecture.
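
Purely as an illustration, an inventory step in your pipeline against the new package layout could look like the sketch below. The folder names come from the comparison table above; the package path and the 'identifier' property are hypothetical and should be adjusted to your actual packages:

#Python
import json
from collections import defaultdict
from pathlib import Path

# Hypothetical path to an extracted export package produced by the new engine.
package_root = Path("export-package")

# Top-level folders used by the new engine, per the comparison table above.
resource_folders = ["entities", "datasources", "policies", "schema"]

inventory = defaultdict(list)
for folder in resource_folders:
    for json_file in (package_root / folder).glob("*.json"):
        resource = json.loads(json_file.read_text(encoding="utf-8"))
        # Each resource is an individual JSON file named with its unique identifier.
        # 'identifier' is an assumed property name; adjust to your actual payloads.
        inventory[folder].append(resource.get("identifier", json_file.stem))

for folder, identifiers in inventory.items():
    print(f"{folder}: {len(identifiers)} resources")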

Next steps

In this blog post, I have looked at the new Content Hub Import/Export engine. I dived into how you can analyse the packages produced by the legacy/old engine and compare them with those from the new engine. I hope you find this valuable and that the analysis provides a view of what has changed in the new package architecture.

Please let me know if you have any comments on the above and whether you would like me to provide further details.

Everything Sitecore AI – Marketer MCP integration with Microsoft Copilot Studio

Context

As you may be aware, the Marketer MCP now has the capability to integrate with Microsoft Copilot Studio. You can now connect your Microsoft Copilot Studio agents to the Sitecore Marketer MCP for seamless access to Sitecore’s marketing features.

The Marketer MCP is the Model Context Protocol (MCP) for marketing in Sitecore. It connects AI agents to Sitecore tools through the Agent API, providing secure access across the entire digital experience lifecycle.

In this blog post, I will walk you through a step-by-step guide, complete with screenshots.

Pre-requisites

Before you begin, make sure you have:

  • A valid Sitecore account with required permissions
  • A valid Microsoft Copilot Studio account with access permissions to Create agents and Create Custom Connectors

Step 1 – Create a new agent in Copilot Studio

  • Open Copilot Studio and either create a new agent or open an existing one.
  • As shown in the screenshot below, specify the following minimal details for your agent:
    • Name: The name of your agent
    • Description: Description of your agent
    • Icon: You can choose an icon for your agent (optional)
  • Create agent in Copilot Studio

Step 2 – Add a tool to the agent

  • Go to the Tools tab for your agent then click Add a tool.
  • Select New tool then choose Model Context Protocol. The MCP onboarding wizard opens.
  • Enter the following details, as shown in the screenshot below.
  • Under Authentication, select OAuth 2.0 and Dynamic discovery type. Then click Create.
    • The Add tool dialog will be displayed as shown below.
    • In the Add tool dialog, in Connection, click Not connected > Create new connection. Then click Create.
    • A pop-up dialog appears as per the screenshot below, with the message Resource parameter is required. This is expected. Follow the workaround below.
    • Copy the entire URL shown in the dialog. Append the following resource parameter to the end of the URL:
      • &resource=https%3A%2F%2Fedge-platform.sitecorecloud.io%2Fmcp%2Fmarketer-mcp-prod
    • Open a new browser window, paste the updated URL into the address bar and press Enter.
    • In the Marketer MCP authorization request dialog (see screenshot below), click Allow Access.
    • This will prompt you to log in to your Sitecore Cloud Portal.
    • Then select the organization and tenant you want to use when interacting with the MCP server (as per the screenshot below).
  • Return to the Add tool dialog in Copilot Studio. When it shows that you’re connected to the MCP server, click Add and configure.

You should now see the Marketer MCP details and its tools enabled and ready to use. You can begin entering prompts to interact with Sitecore through the MCP.

Step 3 – Get prompting

From your Copilot prompt text area, you can now use natural language to prompt and perform actions in SitecoreAI. The first time you write a prompt, you may see the connection warning message shown below.

Simply follow the Open connection manager link to get connected. The link opens the dialog shown below.

Click the Connect link. You will now get a response from Sitecore AI, as shown below.

Troubleshooting

You may come across some issues when establishing connectivity to the Marketer MCP from Copilot Studio. Below are the issues I encountered and how I resolved them.

Issue 1: Timeout error

I got this error when creating the connection:

Issue 1 Resolution:

I simply repeated the step a second time and the issue was resolved.

Issue 2: Environment Access permission error

The error below may occur when your Copilot Studio account doesn’t have the access permissions to create a custom connection.

Issue 2 Resolution:

Work with your ITS team to provision the correct level of access needed in Copilot Studio.

Next steps

In this blog post, we looked at a step-by-step guide on how to set up the Marketer MCP integration with Microsoft Copilot Studio. We also looked at potential connectivity issues that you may encounter and how to resolve them.

The Marketer MCP provides tools to create content, manage campaigns, run marketing automation, and handle content management. This is an evolving tool and remember to check latest updates from Sitecore.

The Marketer MCP is only reliable for the supported use cases listed here. Responses outside this scope have not been validated by Sitecore and might be inaccurate.

SitecoreAI docs

Stay tuned for future posts, feel free to leave us comments and feedback as well.

The Agentic Leap: Sitecore Symposium 2025 AI Deep Dive into Strategy, Use Cases, and Client Success

The content below was generated with the help of AI. I prompted AI to help create an executive summary and curated list covering everything AI at the upcoming Sitecore Symposium 2025, scheduled for November 3-5 in Orlando.

Executive Summary: The AI Mandate at Sitecore Symposium 2025

Sitecore Symposium 2025 will center on the fundamental redefinition of digital experience driven by Artificial Intelligence. The overarching theme, “Next is Now,” acknowledges that AI has profoundly altered consumer behavior, replacing traditional search with summarization and converting clicks into definitive conclusions. This shift mandates that brands meet customers across novel channels and touchpoints that did not exist until recently.

  • The New Paradigm: Augmentation vs. Automation – A key strategic indicator of Sitecore’s direction is articulated by CEO Eric Stine, who will emphasize the limitations of existing, disconnected marketing solutions and the fragmented experiences they create. The strategic focus of Sitecore’s platform evolution is clear: prioritizing augmentation over automation and relevance over reach. This positioning redefines Sitecore’s AI not merely as a tool for reducing labor costs, but as a critical growth engine designed to scale human creativity and accelerate the marketer’s capacity to connect, create, and compete.
  • Industry Context and Strategic Sponsorship: Accenture’s Agentic Shift – The focus on agentic frameworks at Sitecore mirrors a broader industry trend among global professional services firms. Accenture, for example, is leveraging similar principles, having recently launched its “Physical AI Orchestrator” solution. This cloud-based offering integrates AI agents from Accenture’s AI Refinery™ platform to help manufacturers create software-defined facilities and live digital twins, highlighting the enterprise reliance on autonomous, goal-oriented AI systems. Accenture is participating as a Strategic Sponsor at Sitecore Symposium 2025. Furthermore, they are the exclusive partner for the highly anticipated #ExecutiveExchange, a premier program designed for senior industry leaders to explore the possibilities of content, data, and AI, and gain valuable actionable insights. Joined by Sitecore’s leadership and a select group of global thought leaders, this Exchange is designed to spark ideas and foster meaningful connections for C-suite attendees.
  • The Three Pillars of Sitecore’s AI Strategy – The Symposium agenda is structured to address the complete lifecycle of enterprise AI adoption, from strategic planning to implementation and governance. The sessions reveal a focus on three core pillars essential for achieving the AI advantage:
    • Strategic Vision: Establishing the architectural foundation, the product roadmap, and the necessary governance frameworks for the future “Agentic Web.”
    • Tactical Execution: Showcasing specific, measurable use cases in both content orchestration (velocity and compliance) and developer acceleration (code generation and platform governance).
    • Real-World Validation: Presenting client stories that validate how global brands build trust, achieve scalable personalization, and modernize their architecture to be truly “AI-ready.”

Strategic Vision: Sitecore’s AI Roadmap and Agentic Future

The strategic sessions at Symposium will focus on establishing the architectural and corporate intent behind Sitecore’s AI investments, demonstrating a shift beyond rudimentary generative capabilities toward advanced, integrated intelligence.

  • Defining the Advantage: Speed, Scale, and Trust – The General Session, “Speed, scale, and trust: The AI advantage for marketers,” sets the executive tone for the event. Sitecore Chief Product Officer Roger Connolly, joined by leaders from major global enterprises including Infor, Regal Rexnord, AFL Global, and Berkeley Homes, will discuss how intelligence and imagination must work together to unlock entirely new ways to compete in the digital space. The discussion centers on utilizing AI to accelerate content delivery, amplify creative output, and, most critically, build lasting trust at scale.  The repeated emphasis on “trust” is highly significant in the enterprise context. It signals that Sitecore recognizes the profound corporate liability associated with AI outputs, such as hallucinations or compliance breaches. Therefore, the core strategy involves developing AI solutions with inherent guardrails for brand safety, data integrity, and regulatory adherence, positioning AI not just as an efficiency tool, but as a critical competitive advantage built on verifiable governance.
  • The Agentic Framework: The Technical Roadmap – A key session provides an exclusive look into Sitecore’s product roadmap, focusing on how its platform is being reimagined with intelligence. This session details how advanced agentic frameworks will power intelligent and seamless workflows across the entire product suite. This move toward agentic frameworks signifies a major architectural transition. Rather than relying on rigid, sequential DXP workflows, Sitecore is building decentralized, goal-oriented AI entities—or “agents”—that possess the autonomy and context needed to execute complex, cross-product tasks. These AI agents will be empowered to “plan, synthesize, and act” wherever assistance is needed. The necessary precursor for this capability is a fully composable, API-first backbone, such as XM Cloud, which provides agents with the required data context and execution permissions to operate effectively and autonomously across the DXP ecosystem.
  • The Foundational Imperative: The Experience OS – For AI acceleration to be effective, a unified and governed digital ecosystem must be in place. The session “The Experience OS: Preparing Digital Foundations for AI Acceleration” addresses this foundational requirement. The analysis indicates that fragmented, ungoverned legacy systems (characterized by low content and data maturity) inevitably lead to inconsistent AI outputs and dramatically increase the human resources required for correction and oversight. Leaders from Horizontal Digital and Gradial will share how AI can accelerate site operations only after a foundation is built that successfully unifies teams, standardizes components, and establishes scalable governance. Establishing this “Experience OS” is therefore the fundamental structural step that enables the desired strategic outcome: efficiency, consistency, and accelerated delivery across all digital properties.

Strategic Partner Spotlight: Accenture’s AI and Innovation Sessions

Accenture’s presence at Sitecore Symposium 2025 extends far beyond the Executive Exchange, featuring a strong lineup of engaging sessions that showcase innovation and customer impact, particularly around AI-driven strategies. Attendees are encouraged to visit Accenture at booth 229 to connect with their Sitecore experts and explore how AI-driven content strategies are shaping the future of digital experience.

Key sessions featuring Accenture include:

The Curated AI Session Guide

The following table provides a curated list of key Sitecore Symposium 2025 agenda items, summarizing their strategic importance, technical focus, and client application.

Session Title | Core Theme/Focus | Speakers/Client Case Study | Summary and Key Takeaways | Registration Link
Speed, scale, and trust: The AI advantage for marketers (General Session) | Executive Strategy & Trust | Roger Connolly (Sitecore CPO), Infor, Regal Rexnord, AFL Global, Berkeley Homes | Executive insights on positioning AI as an advantage, leveraging intelligence and imagination to accelerate delivery and build lasting trust at scale, moving beyond mere automation. | Link
Sitecore’s AI roadmap: How agentic frameworks will transform digital experience | AI Roadmap & Agentic Architecture | Sitecore Product Team | Exclusive look at the AI roadmap, demonstrating how future AI agents will be empowered to “plan, synthesize, and act” across the DXP for seamless, intelligent workflows. | Link
Harnessing AI for content creation: Introducing AI Experience Generation for Sitecore XM Cloud | Content Use Case & Product Demo | Richard Seal (Principal Engineer, Sitecore), Mo Cherif | Practical demonstration of the new AI Experience Generation app in the XM Cloud Marketplace, detailing AI-driven page creation and the technical architecture for building Marketplace extensions. | Link
Orchestrating the Future: AI-Powered Content Operations with Sitecore, Gradial, and EPAM | Content Supply Chain & Operational Efficiency | Timothy Marsh, Amanda Follit (EPAM) | Strategic methods to overcome content fragmentation and slow cycles by implementing intelligent, orchestrated AI content supply chains that enhance audience resonance. | Link
Navigating AI readiness – it’s not what you think | Organizational Strategy & Adoption | Vickie Bertini (EPAM) | Addresses the non-technical hurdles of AI implementation, focusing on organizational resistance, strategic roadblocks, and how to effectively manage change in the enterprise. | Link
Creating an AI-powered content supply chain for regulatory markets using Sitecore | Industry Use Case & Compliance | Mike Shaw (CI Digital) | Essential session for high-compliance sectors (H&LS, FS) detailing how AI-driven automation ensures governance, streamlines approvals, and transforms regulatory data into compliant narratives. | Link
Cutting Sitecore development time by up to 80% with AI | Developer Tooling & Code Generation | Rajitha Khandavalli (Meritage Homes) | Deep dive into leveraging context augmentation with MCP Servers and integrating external data (Jira, Figma) to enable accurate, high-speed AI code generation, drastically reducing development cycles. | Link
Deliver Measurable Operation Efficiency with Agentic AI | Platform Governance & DevOps | N/A (HelixGuard Focus) | Introduction to the HelixGuard “AI co-pilot,” blending MCP intelligence, analytics, and automation to proactively elevate platform performance, governance, and experience delivery in an agentic framework. | Link
Scaling smarter: How Vizient uses Sitecore to personalize, integrate, and innovate in a complex B2B healthcare landscape | Client Story & AI Foundation | Jonathan Price (Americaneagle.com) | Case study detailing how a complex B2B healthcare organization established “AI-ready infrastructure” and achieved smart personalization and faster time to market following a digital transformation. | Link
The Experience OS: Preparing Digital Foundations for AI Acceleration | Infrastructure Strategy & Governance | Pam Butkowski (Horizontal Digital) | Strategic session outlining the necessary pre-conditions for AI success: unifying teams, standardizing components, and establishing scalable governance models for faster delivery and consistency. | Link

Conclusion: Leading the Future of DX with Sitecore AI

The Sitecore Symposium 2025 agenda confirms a decisive strategic pivot: Sitecore is transitioning from providing a traditional Digital Experience Platform (DXP) to offering an Intelligent Experience OS built on the principles of augmentation and agency. The analysis of the session topics suggests that the focus is on three intertwined strategic vectors: the deployment of agentic frameworks to automate complex, cross-platform workflows; the delivery of measurable efficiency via AI-driven content orchestration and developer acceleration; and the commitment to enterprise governance to ensure trust and compliance in high-stakes environments.

For digital leaders and technologists, the Symposium presents a compelling narrative that moves beyond simple generative AI experimentation. It outlines a comprehensive, strategic path forward where AI serves as the catalyst for continuous innovation, provided that organizations first establish the required foundational agility, standardization, and change management principles necessary to support a truly intelligent and composable digital future.

Next steps

In this blog post, we looked at the upcoming Sitecore Symposium 2025, focusing on everything AI related. With the help of AI, I reviewed the agenda items and came up with this curated list. Stay tuned for future posts, and feel free to leave comments and feedback as well.

Everything AI – RAG, MCP, A2A integration architectures

Context

In this blog post, we are going to explore Agentic AI prominent integration architectures. We are going to discuss RAG, MCP and A2A architectures. If you are not familiar with these terminologies, don’t worry as you are in good company. Let us begin with how we got here in the first place.

What is an Agentic AI?

An AI agent is a system designed to pursue a goal autonomously by combining perception, reasoning, action, and memory. It is often built using a large language model (LLM) and integrated with external tools. These agents perceive inputs, reason about what to do, act on those plans, and remember past interactions (memory).

Let us expand on some of these key words:

  • Perception – how your agent recognises or receives inputs, such as a user prompt or an event occurring
  • Reasoning – the capability to break down a goal or objective into individual steps, identify which tools to use, and adapt plans. This is usually powered by an LLM
  • Tool – any external system the agent can call or interact with, such as an API or a database
  • Action – the execution of the plan or decision by the agent, such as sending an email or submitting a form. The agent performs actions by leveraging its tools

What is Retrieval Augmented Generation (RAG)?

Carrying on with our AI agent conversation, suppose we need to empower our agent with deep, factual knowledge of a particular domain. Then RAG is the architectural pattern to use. As an analogy, think of RAG as an expert with instant access to your particular domain knowledge.

This pattern connects an LLM to an external knowledge source, which is typically a vector database. The agent’s prompts are then “augmented” with the relevant, retrieved data before the final response is generated.
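
To make the pattern concrete, below is a minimal, illustrative RAG sketch in Python using Azure AI Search as the knowledge source and an Azure OpenAI chat model. The endpoints, index name, deployment name, and field names are placeholders, not any specific product's wiring:

#Python
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.search.documents import SearchClient
from openai import AzureOpenAI

credential = DefaultAzureCredential()
search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="hr-policies",          # placeholder index name
    credential=credential,
)
llm = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default"),
    api_version="2024-10-21",
)

question = "How many days of parental leave do we offer?"

# 1. Retrieve: pull the most relevant chunks from the knowledge source.
hits = search_client.search(search_text=question, top=3)
context = "\n\n".join(doc["content"] for doc in hits)   # 'content' field is a placeholder

# 2. Augment + generate: ground the prompt with the retrieved context.
answer = llm.chat.completions.create(
    model="gpt-4.1-mini",               # placeholder deployment name
    messages=[
        {"role": "system", "content": "Answer only from the provided context and cite the policy."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)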

Key benefit

With RAG, agents drastically reduce “noise” and “hallucinations”, ensuring that responses and answers are based on specific, up-to-date domain knowledge or enterprise data.

Some use cases

  • Q&A scenarios over enterprise knowledge – think of an HR agent that answers employee questions by referencing HR policy documents. This ensures the answers are accurate and cite the relevant policies
  • Legal team agent – analyses company data rooms, summarizing risks and cross-referencing findings with internal documents and playbooks

What is Model Context Protocol (MCP)?

MCP is an open-source standard for connecting AI applications to external systems. As an analogy, think of MCP as a highly skilled employee who knows exactly which department (API) to call for a particular task.

MCP architecture, adapted from: https://modelcontextprotocol.io/docs/getting-started/intro

This is an emerging standard for enabling agents to discover and interact with external systems (APIs) in a structured and predictable manner. It is like USB-C for AI agents.

Key benefit

MCP provides a governable, secure, and standardized way for our agents to take action and interact with enterprise systems, going beyond the simple data retrieval of the RAG use cases.

Some use cases

  • Self-service sales agent – think of a sales agent that allows a salesperson to create a new opportunity in the company CRM, then set up and add standard follow-up tasks as required. The agent discovers the available CRM APIs, understands the required parameters, and executes the transactions securely.
  • An accounting agent – think of automated financial operations where, upon receiving an invoice in an email inbox, the agent calls the ERP system to create a draft bill, match it to a purchase order, and schedule a payment.

What is Agent-to-Agent (A2A)?

This does what it says on the tin: multiple specialized or utility agents collaborate to solve a problem that is too complex for a single agent. The graphic below illustrates this collaboration. As an analogy, think of a team of specialists collaborating on a complex project.

Key benefit

A2A enables tackling highly complex, multi-domain problems by leveraging specialized skills, similar to a human workforce.

Some use cases

  • Autonomous product development team – think of an autonomous product development team consisting of a “PM agent”, a “Developer agent”, and a “QA agent”, all working together. The PM writes specs, the Developer writes code, and QA tests the code, iterating until a feature is complete. Specialization means agents can achieve higher-quality outputs at each stage of a complex workflow.

So which is it, RAG, MCP or A2A?

As architects, we often rely on rubrics when making architectural decisions. With agentic AI solutions, you can use a set of guidelines to assess the business domain problem and come up with the right solution. Below is an example rubric to help with your assessment of when to leverage RAG, MCP, or A2A.

Start with a goal

Agentic AI solutions are no different: there is no “one size fits all” solution. Always start with a goal or business objective so you can map the right agentic AI solution to it. Sometimes agentic AI may not be the right solution at all; don’t just jump on the bandwagon.

Trends and road ahead

Agentic AI is at a very early stage, so expect more patterns to emerge in the coming days and months. We may need to combine RAG and MCP and take a hybrid approach to solving AI problems. We are already seeing that the most valuable enterprise agents are not pure RAG or MCP, but a hybrid.

Next steps

In this blog post, we looked at prominent integration architectures in this age of agentic AI. We explored the RAG, MCP, and A2A architectural patterns. We also looked at some of the use cases for each, as well as the key benefits we get from each pattern. We finished with a sample architecture rubric that can be leveraged.

Stay tuned for future posts, feel free to leave us comments and feedback as well.

Step-by-step guide to integrating with Sitecore Stream Brand Management APIs

I previously blogged about Sitecore Stream Brand Management and looked at a high-level architecture of how the Brand Kit works under the hood. Today, I continue this conversation with a more detailed step-by-step guide on how you can start integrating with the Stream Brand Management APIs.

As a quick recap, Sitecore have evolved Stream Brand Management to provide a set of REST APIs to manage the life cycle of a brand kit, as well as to get a list of all brand kits. You can now use REST APIs to create a new brand kit, including sections and subsections, and create or update the content of individual subsections. You can also upload brand documents and initiate the brand ingestion process.

  • Brand Management REST API (brand kits, sections/subsections)
  • Document Management REST API (upload/retrieve brand documents).

These new capabilities open up opportunities such as ingesting brand documents directly from your existing DAM. You could also integrate them with your AI agents so that you can enforce your brand rules.

Step 1 – Register and get Brand Kit keys

Brand Management REST APIs use OAuth 2.0 to authorize all REST API requests. Follow these steps below:

a) From your Sitecore Stream portal, navigate to the Admin page and then to the Brand Kit Keys section, as shown below.

b) Then click the Create credential button, which opens the Create New Client dialog similar to the one shown below. Populate the required client name and a description, then click Create.

c) Your new client will be created as shown below. Ensure you copy the Client ID and Client Secret and keep them in a secure location. You will not be able to view the Client Secret after you close the dialog.

Step 2 – Requesting an access token

You can use your preferred tool to request the access token. In the sample below, I am leveraging Postman to send a POST request to the https://auth.sitecorecloud.io/oauth/token endpoint with the following parameters:

  • client_id – the Client ID from the previous step
  • client_secret – the Client Secret from the previous step
  • grant_type – defaults to client_credentials
  • audience – defaults to https://api.sitecorecloud.io
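
If you prefer code over Postman, a minimal Python sketch of the same token request is shown below; keep the client secret in a secure store rather than hard-coding it:

#Python
import requests

# Client-credentials token request against the Sitecore Cloud OAuth endpoint.
token_response = requests.post(
    "https://auth.sitecorecloud.io/oauth/token",
    data={
        "client_id": "{YOUR_CLIENT_ID}",
        "client_secret": "{YOUR_CLIENT_SECRET}",
        "grant_type": "client_credentials",
        "audience": "https://api.sitecorecloud.io",
    },
    timeout=30,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]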

If successful, you will get a response that contains the access_token, as shown below:

  {
    "access_token": "{YOUR_ACCESS_TOKEN}",
    "scope": "ai.org.brd:w ai.org.brd:r ai.org.docs:w ai.org.docs:r ai.org:adminai.org.brd:w ai.org.docs:w ai.org:admin",
    "expires_in": 86400,
    "token_type": "Bearer"
  }

Step 3 – Query Brand Kit APIs

You can start making REST API calls securely by including the access token in the request header.

Get list of all brand kits

Below is a sample request that I used to get a list of available brand kits for my organisation. I am leveraging Postman to send a GET request to the https://ai-brands-api-euw.sitecorecloud.io/api/brands/v1/organizations/{{organizationId}}/brandkits endpoint.

You can get your organisationId from your Sitecore Cloud portal URL, for example:

https://portal.sitecorecloud.io/?organization=org_xyz
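
The same request in Python, reusing the access token from step 2 (the organization ID value is a placeholder):

#Python
import requests

organization_id = "org_xyz"  # placeholder; taken from your Sitecore Cloud portal URL
url = f"https://ai-brands-api-euw.sitecorecloud.io/api/brands/v1/organizations/{organization_id}/brandkits"

response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"}, timeout=30)
response.raise_for_status()
print(response.json())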

Full list of Brand Kit REST APIs

Sitecore API Catalog lists all the REST APIs plus sample code on how to integrate with them. Below is a snapshot of the list of operations at the time of writing this post:

Ensure you are using the correct Brand Management server. Visit Sitecore API catalog for list of all the servers. Below is a snapshot of the list at the time of writing this post:

Next steps

Have you started integrating with the Sitecore Stream Brand Management APIs yet? I hope this step-by-step guide helps you start exploring the REST APIs so you can integrate them with your systems.

Stay tuned for future posts, feel free to leave us comments and feedback as well.

How do I add custom class to CKEditor in XM Cloud?

Do you know how to add a custom class using “new” text editor CKEditor? I want to add a custom "<p class="my-custom-class"></p>"

Sounds familiar? This is a common query that XM Cloud developers are grappling with on Sitecore developer community channels, including Slack.

In fact, a quick search reveals an active topic as shown below:

What is CKEditor?

On May 8, 2025, Sitecore deprecated the legacy Pages rich text editor in XM Cloud. This was previously accessible from the right-hand side panel. The newer CKEditor rich text editor became the default editor. The link I shared above provides guidance on how to enable the newer CKEditor rich text editor. This involves adding the environment variable PAGES_ENABLE_NEW_RTE_EDITOR in the Deploy app and setting its value to ‘true’.

What are some of the benefits of CKEditor?

CKEditor significantly improves the rich text editing experience within XM Cloud Pages, introducing several new features such as:

  • capability to add tables
  • capability to find and replace text
  • capability to style images with more options
  • support for new markup (this was not possible with the previous legacy editor)
*The image above has been adapted from developers.sitecore.com

But how do I add custom class to CKEditor?

With all these new capabilities and UX improvements, unfortunately you cannot customize CKEditor in XM Cloud (yet). As mentioned earlier, there is clear demand for the capability to customize the list of options available from the formatting dropdown.

It makes sense to be able to add custom classes to that list, right?

Back to you Sitecore XM Cloud team.

What are other developers of XM Cloud saying?

Below are some of the answers to this question on Slack:

Next steps

Joined the Sitecore Slack channel yet? Head over to https://sitecore.chat to join the community. In the meantime, stay tuned and please keep an eye on this feature request. Also, please give us any feedback or comments.

Everything Sitecore AI and value to marketers – part two

Context

Welcome to part two of this series about Everything Sitecore AI and value to marketers. In the previous post, we introduced Sitecore Stream and looked at its three main features: Brand-aware AI, Copilots & agents, and Agentic workflows.

In this blog post, we will explore further how Brand-aware AI works by looking at the architecture of agents and how we bring them to life in the Stream Brand Assistant agent. There is also an accompanying video series on my YouTube channel.

What is an agent?

An agent is simply a software service that uses AI to assist users with information and task automation. An agent does a task: “take this, do it, and let me know when you are done.”

An agent has three main elements, as shown in the architecture below:

  1. Model – we now have access to many large language models (LLMs) and small language models (SLMs) that do the thinking
  2. Knowledge – the instructions and data sources that enable the agent to ground prompts with contextual data
  3. Tools – a set of tools that the agent can invoke, such as retrieving information, taking actions like making API calls, and keeping a thread of the current conversation in memory. You can also create custom tools using your own code or Azure Functions

How Brand Assistant agent works

Below are the steps involved when interacting with the Brand Assistant within Sitecore Stream:

  1. The user enters a prompt – the user enters a prompt in Brand Assistant – such as a question or an instruction as shown during the demo by Alessandro earlier.
  2. The system passes information from the Brand Context – the system automatically provides information from the Brand Context brand kit section as a system prompt.
  3. Copilot analyzes if it can answer from Brand Context – the thinking process begins. The Brand Assistant evaluates whether the information passed from the Brand Context alone is enough to answer the prompt.
  4. Based on the analysis, the process continues in one of two ways:
    • Generate a direct response – if the Brand Context provides sufficient information, the Brand Assistant generates a direct response using only that content.
    • Invoke other AI agents – if the Brand Context doesn’t answer the prompt and more information is needed, the Brand Assistant automatically activates one or more AI agents to search and organize the information and generate a response:
      • Search agent – uses tools to find information from your brand knowledge, web searches, or both.
      • Brief agent – activated only when the user specifically requests a campaign or creative brief
      • Summary agent – condenses all retrieved information into a concise, relevant response.

Next steps

Have you started using Sitecore Stream with your Sitecore products yet? You can reach out to Sitecore directly by filling in the ‘Sitecore Stream: Get Your Demo’ form on their website. You can now access the YouTube video series accompanying these blog posts, which is available to watch on demand.

You can also get started integrating the Sitecore Stream Brand Management APIs with your solution by following this step-by-step guide.

Stay tuned for future posts, feel free to leave us comments and feedback as well.

Everything Sitecore AI and value to marketers part one

Context

As generative AI continues to evolve and become more deeply embedded in our digital landscape, its applications have expanded well beyond simple chat interfaces. Today, these models are powering intelligent agents capable of autonomously executing complex tasks and streamlining operations.

Forward-thinking organizations are now harnessing this potential to build AI-driven agents that orchestrate business processes and manage workloads in ways that were once out of reach.

In this post, we’ll take a closer look at how Sitecore is embracing this shift—leveraging Brand-Aware AI to transform the way enterprise marketing teams operate. There is also an accompanying video series on my YouTube channel.

Some of the pain points that marketers face today

Before we look at how Sitecore are leveraging AI with Sitecore Stream, let me set the context around some of the pain points that marketers face today:

  1. Keeping brand consistency – challenges around keeping brands aligned with the latest trends and efficiently improving previous campaigns, briefs, and assets to maintain a consistent brand tone of voice
  2. Taking longer to make decisions – challenges around decision-making turnaround time due to manual processes and the large volumes of content and material that need reviewing as part of the creative process
  3. Availability of robust self-serve tools – challenges around the lack of tools for efficient task planning and content supply chains, and for moving faster, removing blockers and having more control
  4. Generic AI/ChatGPT has gaps – ChatGPT and similar generic AI products are not specific to marketers

What is Sitecore Stream

To address these challenges, Sitecore has taken steps to introduce AI-Driven marketing by creating Sitecore Stream. Sitecore Stream is the way Sitecore are infusing AI capabilities across their products.

Sitecore Stream consists of three components: Brand-aware AI, Copilots & agents and Agentic workflows, as discussed below.

Brand-aware AI

This is what powers Sitecore AI tools to generate content that reflects your brand’s identity. This is made possible by a foundational understanding of your brand called brand knowledge. In the next section, I will show in detail how this brand knowledge is created in Sitecore Stream.

Brand-aware AI enables marketers to create high-quality content faster, by combining deep brand knowledge with real-time Web insights to generate outlines and long-form drafts in seconds.

Copilots and agents

These are the AI assistants designed to increase marketers’ productivity by speeding up decision-making and task execution. Copilots are for humans; agents are for processes. The Copilot is the UI for AI – the chat-based interface where you can ask specific questions about your brand. Better still, you can actually brainstorm with AI, for example when you want to create new content for a blog post or a campaign brief.

Agentic workflows

These are advanced tools to orchestrate tasks and streamline marketing project management across teams. This capability enables you to discover gaps in your campaigns and reduce planning time with the help of AI that understands your brand and project context.

You can essentially ideate and plan an entire campaign with the help of AI. AI will recommend the key deliverables to bring your campaigns to life and recommend tasks to get them completed, within seconds. This provides a fully agentic experience for marketing campaigns, with a human in the loop to keep or discard suggestions.

How to create your brand knowledge in Stream

This involves a six-step process, as outlined in the infographic shown below. If you are managing a multi-brand enterprise, you can repeat the process for each of your brands, essentially creating multiple brand kits within Sitecore Stream.

Next steps

Have you started using Sitecore Stream with your Sitecore products yet? You can reach out to Sitecore directly by filling in the ‘Sitecore Stream: Get Your Demo’ form on their website.

I have also created a YouTube video series accompanying the blog posts, which is available to watch on demand.

Stay tuned for future posts, feel free to leave us comments and feedback as well.