Sitecore Content Hub DevOps: New Import/Export engine with breaking changes is now default

Context and background

If you are already using DevOps for deployments with your Content Hub environments, then you are probably already aware of the breaking change that Sitecore introduced a few months ago. You can read the full notification on the Sitecore Support page. According to the notification, the new version of the package import/export engine became the default in both the UI and CLI from Tuesday, September 30. Because of the breaking changes introduced, existing CI/CD pipelines won’t work. In fact, there is a high risk of breaking your environments if you try to use existing CI/CD pipelines without refactoring.

In this blog post, I will look in detail at what breaking changes were introduced and how to re-align your existing CI/CD pipelines to work with the new import/export engine.

So what has changed in the new Import/Export engine?

Below is a screenshot from the official Sitecore docs summarizing the change. You can also access the change log here.

There are no further details available in the docs on the specifics of the breaking change. However, it is straightforward to figure out that Sitecore fundamentally changed the package architecture in the new import/export engine.

Resources are grouped by type

Within the Sitecore Content Hub Import/Export UI, you have the option to export components using either the previous/legacy engine or the new engine. As shown below, there is a toggle for Enable Legacy version which, when switched on, allows you to export a package with the previous/legacy engine.

Note also that Publish definition configurations and Email templates are now available for Import/Export with the new engine. Email templates are unchecked by default.

If you do a quick comparison between an export package from the old/legacy engine and one from the new engine, it becomes clear that Sitecore has updated the packaging structure to organise content by resource type rather than by export category.

This change makes navigation more straightforward and ensures greater consistency throughout the package.

Summary of the changes between legacy and new export packages

Below is a graphic showing how the package structure has changed. On the left-hand side we have the legacy/old package, and on the right-hand side is the new one.

Full comparison of package contents between old and new

Below is a more detailed comparison, showing how the packages differ.

Component | Legacy package sub folders | New package sub folders
Copy profiles | copy_profiles | entities
Email templates | n/a | entities
Entity definitions | entities, schema, option_lists | datasources, entities, schema
Export profiles | export_profiles | entities
Media processing | media_processing_sets | entities
Option lists | option_lists | datasources
Policies | policies | datasources, entities, policies, schema
Portal pages | entities, portal_pages | datasources, entities, policies, schema
Publish definition configurations | n/a | entities
Rendition links | rendition_links | entities
Settings | settings | entities
State flows | state_flows | datasources, entities, policies, schema
Taxonomies | taxonomies | datasources, entities, schema
Triggers | actions, triggers | entities
Scripts | actions, scripts | entities

Resources are grouped by type

Instead of separate folders like portal_pages, media_processing_sets, or option_lists, the new export engine places files according to their resource type.

For example:

  • All entities are stored in the entities/ folder.
  • All datasources (such as option lists) are found in the datasources/ folder.
  • Policies and schema files have their own dedicated folders.

Each resource is saved as an individual JSON file named with its unique identifier.

Related components are now separated

When a resource includes related items, such as a portal page referencing multiple components, each component is now saved in its own JSON file.

These files are no longer embedded or nested under the parent resource.

Updating your CICD pipelines

It is very straightforward to update your existing CI/CD pipelines once you have analysed and understood the new package architecture. You can revisit my previous blog post, where I covered this topic in detail. You simply need to map your previous logic to the new package architecture, as sketched below. You will also need to re-baseline your Content Hub environments within your source control so that you are using the new package architecture.
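For illustration, here is a minimal PowerShell sketch of the kind of path re-mapping involved; the serialized folder layout is an assumption, not from the original post. For example, a pipeline step that used to read portal pages from the legacy portal_pages sub folder now needs to read them from the entities sub folder of the new package.

  # Hypothetical example: the legacy package kept portal pages under portal_pages,
  # while the new engine stores each resource as an individual JSON file under entities.
  $legacyPortalPages = Get-ChildItem -Path ".\package\portal_pages" -Filter *.json
  $newPortalPages    = Get-ChildItem -Path ".\package\entities" -Filter *.json
  # Re-point any copy/validation logic that globbed the old folder to the new one.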

Next steps

In this blog post, I have looked at the new Content Hub Import/Export engine. I dived into how you can analyse the packages produced by the legacy/old engine and compared them with those from the new engine. I hope you find this valuable and that the analysis provides a view of what has changed in the new package architecture.

Please let me know if you have any comments above and would like me to provide further or additional details.

XM Cloud tips & tricks: Extending CLI with handy productivity plugins and tools

Context and background

I presume most of us are familiar with the Sitecore XM Cloud plugin, which provides the cloud command that helps you manage XM Cloud projects, environments and deployments from the Command Line Interface (CLI). CLI plugins provide us with powerful tools that enable us to automate the management of Sitecore XM Cloud workloads by leveraging existing DevOps processes and tooling. In this blog post, I will share some of my tips and tricks on how you can quickly stand up one of these plugins to automate, say, migration of content between two Sitecore XM Cloud instances.

Anatomy of a CLI plugin – Sitecore XM Cloud plugin example

A CLI plugin typically consists of a command which has one or more subcommands. Using the Sitecore XM Cloud plugin as an example, you can check that the cloud command is available as shown below:
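If the Sitecore CLI and the XM Cloud plugin are installed in your project (an assumption for this example), the same information shown in the screenshot can be listed from the terminal:

  # List all top-level commands provided by the Sitecore CLI and its plugins
  dotnet sitecore --help
  # Show the subcommands and options of the cloud command from the XM Cloud plugin
  dotnet sitecore cloud --help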

I have assumed that you have already installed the required prerequisites on your local developer environment. If this is not the case, the official Sitecore XM Cloud docs explain how to do this setup.

Subcommands

As you can see from the above screenshot, we have the cloud subcommands for login, logout, project, environment, deployment and organization.

I will not be focusing on what these mean for now. You can read more about them in the official docs pages. The point is simply to illustrate the anatomy of a CLI plugin: what the command is and what the subcommands are. We will apply this to our own custom plugin next.

Tips & tricks 1 – Creating your own custom CLI Plugin – command and subcommands

I am going to share tips and tricks for creating a custom plugin that extends the base sitecore plugin with a new migrate command, which has only one subcommand to start the migration.

A typical use case: imagine using this command to start content migration from Sitecore XM Cloud environment A into environment B. After we have created the plugin, we can run the dotnet sitecore --help command to display the plugin info as shown below.

Notice the migrate command is now listed alongside the out-of-the-box subcommands that come with the sitecore command. Very cool, we have now extended the sitecore command.

And we can now explore the available subcommands for our custom plugin using the dotnet sitecore migrate --help command as shown below.

Tips & tricks 2 – Creating a Visual Studio Project for CLI Plugin

To extend the base Sitecore CLI plugin, we need to create a new class library project.

  1. In Visual Studio, select File > New > Project.
  2. In the Create a new project window, select C#, Windows, and Library in the dropdown lists.
  3. In the resulting list of project templates, select Class Library (with the description, A project for creating a class library that targets .NET or .NET Standard), and then select Next.
  4. In the Configure your new project window, enter a name of your choice for the Project name, and then select Next.
  5. In the Additional information window, select an appropriate Framework, and then select Create.

It is recommended to follow best practices such as clear project names, which help identify the project and give a hint of what the CLI plugin is all about. Below is a sample project skeleton, with all the necessary classes needed for my CLI plugin.

You will notice this project has the following structure. I will explain the main code classes below.

  1. MigrateCommand.cs – this is the class code file that gives my plugin the migrate command
  2. StartMigrateCommand.cs – this is the class code file that adds my start subcommand to the migrate command
  3. StartMigrateCommandArgs.cs – this is the code file that defines any command line arguments needed for the migration process. Below are the supported arguments:
    • source-url: The URL of the source XM Cloud environment
    • destination-url: The URL of the target XM Cloud environment
    • root-item: The GUID of the root item you want to migrate
    • include-children: Whether to include child items (default is true)
  4. PerformMigrateTask.cs – this is the code file that defines the tasks performed by the subcommands
  5. RegisterExtension.cs – this is the main entry point code file, and implements the ISitecoreCliExtension interface

The rest of the code files hold the business logic for this plugin. In my case, I have encapsulated my business logic for content migration from Sitecore XM Cloud environment A into environment B within the MigrationClient.cs code file. Consider this a black box for now.
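To make the arguments above more concrete, here is a minimal C# sketch of what StartMigrateCommandArgs could look like. This is only an illustration: the base class, attributes and wiring required by the Sitecore CLI extension framework are omitted, and the property names are my own mapping of the options listed above.

  // Illustrative sketch only - not the exact Sitecore.DevEx types or attributes.
  public class StartMigrateCommandArgs
  {
      // --source-url: the URL of the source XM Cloud environment
      public string SourceUrl { get; set; }

      // --destination-url: the URL of the target XM Cloud environment
      public string DestinationUrl { get; set; }

      // --root-item: the GUID of the root item to migrate
      public Guid RootItem { get; set; }

      // --include-children: whether to include child items (defaults to true)
      public bool IncludeChildren { get; set; } = true;
  }

A hypothetical invocation would then look something like dotnet sitecore migrate start --source-url https://source.example --destination-url https://target.example --root-item <guid>.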

Which dependencies are needed for this project

We need to reference the Sitecore.DevEx.Client.Cli NuGet package from Sitecore, as shown below.
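From the command line, the reference can be added with the standard dotnet CLI; the package version should match your Sitecore CLI version.

  # You may need the Sitecore public NuGet feed configured as a package source first.
  dotnet add package Sitecore.DevEx.Client.Cli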

Verifying my project file (.csproj)

To ensure there are no errors with your CLI plugin, verify that your project file looks similar to the one shown below, especially the Target Framework of netcoreapp3.1.
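As a rough guide (the package version is a placeholder, not taken from the original post), the .csproj would look along these lines:

  <Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
      <TargetFramework>netcoreapp3.1</TargetFramework>
    </PropertyGroup>
    <ItemGroup>
      <!-- Version is illustrative; use the version matching your Sitecore CLI -->
      <PackageReference Include="Sitecore.DevEx.Client.Cli" Version="x.y.z" />
    </ItemGroup>
  </Project>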

Tips & tricks 3 – Configure NuGet package properties

We are going to deploy this project as a NuGet package, so follow the steps below to configure the package properties:

  1. Select your project in Solution Explorer, and then select Project > <project name> Properties, where <project name> is the name of your project.
  2. Expand the Package node, and then select General.
  3. Give your package a unique Package ID and fill out any other desired properties.
  4. Below are my sample settings:
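The same properties end up as MSBuild properties in the .csproj. A sketch with placeholder values (the package ID, version, author and description below are illustrative, not my actual settings):

  <PropertyGroup>
    <PackageId>MyCompany.Sitecore.CLI.Plugin.Migrate</PackageId>
    <Version>1.0.0</Version>
    <Authors>Your Name</Authors>
    <Description>Custom Sitecore CLI plugin adding a migrate command</Description>
  </PropertyGroup>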

Tips & tricks 4 – Publishing your CLI Plugin to NuGet public feed

Now that we understand the high-level anatomy of a CLI plugin and how to create its C#/.NET project, I will also share some tips on how to publish it to a public repository or feed. This makes the plugin available for the community to use.

Head over to NuGet.org and register for an account. You will need a Microsoft account to sign in or sign up. After a successful login, select your user name at the upper right, and then select API Keys, as shown in steps 1 and 2 in the screenshot below.

Follow these steps once you are on the API Keys page:

  1. Select Create, and provide a name for your key.
  2. Under Select Scopes, select Push.
  3. Under Select Packages > Glob Pattern, enter *.
  4. Select Create.
  5. Select Copy to copy the new key.

Important to note:

  • Always keep your API key a secret. The API key is like a password that allows anyone to manage packages on your behalf. Delete or regenerate your API key if it’s accidentally revealed.
  • Save your key in a secure location, because you can’t copy the key again later. If you return to the API key page, you need to regenerate the key to copy it. You can also remove the API key if you no longer want to push packages.

With our API key at hand, use the following command to publish your CLI plugin. Notice we need to specify the name of our .nupkg file as well as the API key when using the dotnet nuget push command.
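The push command looks like this; the package file name below is a placeholder for whatever your build produced.

  dotnet nuget push MyCompany.Sitecore.CLI.Plugin.Migrate.1.0.0.nupkg --api-key <your-api-key> --source https://api.nuget.org/v3/index.json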

When the push is successful, you can view the list of your published packages using the Manage Packages option shown in step 3 of the previous screenshot. You should see the package listed, as shown below.

For detailed guidance and further reference, follow the Microsoft Learn tutorial on NuGet packages.

Next steps

In this blog post, we have looked at tips and tricks for extending the CLI for your XM Cloud instance. We looked at the CLI plugin architecture, including how to create a sample .NET project for it. We also looked at how to package the CLI plugin and deploy it to the NuGet package repository. I hope you find this useful and can adopt it for your own productivity use cases. Look out for a follow-up blog post, where I will take a deep dive into some of the code I have used for my plugin. I intend to open up the black box.

Stay tuned and please give us any feedback or comments.

Content Hub DevOps: Resolving unable to delete entity because it’s being used in one or more policies errors

Context and background

The Content Hub permissions and security model is underpinned by the user group policies model, whereby Content Hub users can perform actions based on their access rights. The official docs provide a clear definition of the anatomy and architecture of user group policies. For example, a user group policy consists of one or more rules, with each rule determining the conditions under which group members have permissions to do something.

While all the technical details of group policies are nicely abstracted away from our business users, there are use cases where you will in fact need to grapple with the technical details of the policies, such as when you can’t delete your taxonomies or entities simply because you have used them in one or more rules in your policies.

In this blog post, I will outline this pain point and recommend a solution.

Unable to delete entity ‘…’ because it’s being used

Yes, that is right. If you have used a taxonomy value or some other entity as part of your user group policy definition, then it makes sense that you cannot delete it. That is expected logic: we have a clear dependency within the system. In which case, we need to break or remove this dependency first.

Below is a sample screenshot of this error message. In this example, the highlighted taxonomy value cannot be deleted until the dependency has been removed.

User group policy serialization as JSON

If you haven’t set up DevOps as part of your Content Hub development workflow, then we need to cover some basics around user group policy serialization. You can leverage the Content Hub Import/Export feature to export all policies into a ZIP package, as detailed below:

  1. Using the Manage page, navigate to Import/Export.
  2. On the Import/Export page, in the Export section, select only the Policies check box and click Export. This will generate a ZIP package with all policies.
  3. Click View downloads at the right bottom of the screen.
  4. On the Downloads page, click the Download Order icon when the status of the package is ready for download. This will download the ZIP package with all policies.
  5. Unzip the downloaded package. This will contain JSON files for all policies.

A look at M.Builtin.Readers.json for example

Below is a snippet from M.Builtin.Readers.json, which is a serialized version of the M.Builtin.Readers user group (one of the out-of-the-box user groups).

Remember a group policy consists of one or more rules, with each rule having one or more conditions under which group members have permissions to do something.

I have highlighted one of the conditions within the first rule in this user group policy. This condition shows the dependency on one of the taxonomies, M.Final.LifeCycle.Status

On line 20, the reference (“href”) indicates which taxonomy child value is being used, which is M.Final.LifeCycle.Status.Approved

The full content of this file is available from my GitHub Gist for your reference.

How to safely delete or remove taxonomy references from user group policy JSON file

The serialized user group policy JSON file is a plain text file, so any text editor of choice can be used to edit it and delete all references to the taxonomy with the dependency. Then save the changes to an updated JSON file. That is it.

Due care has to be taken to ensure that the rest of the JSON file is not modified.

Once all references are deleted and verified, you can create a new ZIP package with the changed files, to be imported back into Content Hub.

It is recommended that your certified Content Hub developers make these changes (and validate them, say, using a text file comparison tool such as Beyond Compare). For example, you need to compare the original ZIP package with the newly created one to make sure that their structure is the same.

Finally, the newly created ZIP package can be imported using the Import/Export functionality as detailed in official docs.

DevOps: Automating removing taxonomy or entities references from user group policies

I have previously blogged about enabling DevOps as part of your Content Hub development workflow.

The current pain point becomes a bread-and-butter problem to solve, assuming you have already embraced Content Hub DevOps.

With some business logic implemented as part of your CI/CD pipelines, all references to taxonomy values or entities can be safely and reliably deleted from user group policies. This can be done with automation scripts and other tooling that comes with DevOps, truly bringing you ROI on your DevOps infrastructure.

Sample/suggested CI/CD pseudo code

  1. Define a CI/CD “user group policies clean-up” step to be invoked whenever you are deleting “entities” from your Content Hub instance.
  2. Using a Regex, scan and systematically delete such entities from your user group policies JSON files (depending on how you’ve set up your DevOps, all policies should be serialized to a policies folder). See the sketch after this list.
  3. Ensure your “user group policies clean-up” step runs ahead of any deletion of the entities (or taxonomy values). Remember you can’t delete an entity if it is being referenced in your user group policy.
  4. Work with your DevOps engineers to validate the steps and test any changes in non-production environment(s), before applying them to the production environment.
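As a starting point for step 2, here is a minimal PowerShell sketch; the folder path is an assumption about your repo layout, the entity identifier is the example from this post, and in practice you would remove the whole containing condition object rather than just find the string.

  # Scan serialized user group policy files for references to an entity that is about to be deleted.
  $entityIdentifier = "M.Final.LifeCycle.Status.Approved"
  Get-ChildItem -Path ".\policies" -Filter *.json -Recurse |
      Where-Object { Select-String -Path $_.FullName -Pattern $entityIdentifier -SimpleMatch -Quiet } |
      ForEach-Object {
          # Flag the file; a later step removes the matching condition(s) and re-packages the policies.
          Write-Host "Reference to $entityIdentifier found in $($_.Name) - needs clean-up"
      }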

Remember to also look at my related blog post on DevOps automation for your Action Scripts.

Next steps

In this blog post, I have discussed a common pain point when you are Unable to delete entity because it’s being used in one or more policies. I explained why this is the case, and looked into technical details of user group policy architecture. I provided a solution, which can be automated with a robust DevOps culture adoption for Content Hub.

I hope my approach helps you address similar scenarios in your use-cases. Please let me know if you have any comments above and would like me to provide further or additional details.

Content Hub DevOps: Managing your action script code lifecycle in CI/CD pipelines

Context and background

When working with automated CI/CD pipelines for your Sitecore Content Hub, you need to be aware of the development lifecycle of your Action Scripts. This is to ensure the source code repo for your scripts doesn’t get bloated with ‘orphaned’ script code files. In this blog post, I will cover how to manage the development lifecycle of your Action Scripts to mitigate this problem.

What happens when you serialize action scripts into source control

I have previously blogged on Content Hub DevOps, especially on leveraging the Content Hub CLI to extract a baseline of your Content Hub components. For example, the ch-cli system export --type Scripts --wait --order-id command allows you to export the Actions, Scripts and Triggers package. When you unzip or extract the files within this package, you will notice there is a scripts folder. This will have two types of files: .json files and .csx files (assuming your action scripts are written in C#.NET).

Script .json file type

For each action script packaged from your Sitecore Content Hub instance, there will be two files. One of them is the script .json file.
Below is a sample action script .json file:

This file contains all the relevant metadata about the action script. In particular, you will notice that it references a second file via the ScriptToActiveScriptContent relations property. In our sample above, this .json file references the code file “ZOGG4GbbQpyGlTYM7r1GfA”.

Script .csx code file

The code file, based on C#.NET, is similar to the sample shown below.

What happens when you modify the code in your scripts

Each time you make changes to your Action Script source code and successfully build it, Content Hub will generate a new code file version behind the scenes. This will be automatically linked to its corresponding script .json file.

To visualise this, you will notice that when you serialise your Action Script again from your Content Hub instance, a new code file will be generated.

If you now compare the previous code file with the new one, it will become obvious which changes Content Hub has made to the .json file. Below is a sample comparison.

What should you do with the ‘old’ code file

We have now established what happens whenever the source code in your action script is changed and successfully rebuilt: each time, a new code file is generated. The old file remains in your source code repository, unused and effectively ‘orphaned’.

My recommendation is to design your DevOps process to always clean up or delete all files from the scripts folder in your source code before pulling the latest serialised files from your Content Hub instance.

You can do this in an automated way leveraging the Content Hub CLI commands. Alternatively, you can do it the old-school way, leveraging PowerShell commands to delete all files from the scripts folder before serializing new ones again, as sketched below. Whichever mechanism you leverage, ensure old, unused code files do not bloat your source code repo.
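A minimal PowerShell sketch of that clean-up step; the serialized folder path is an assumption about your repo layout.

  # Delete previously serialized action script files so orphaned .json/.csx files don't accumulate,
  # then re-run the ch-cli system export described above and extract the fresh package here.
  Remove-Item -Path ".\serialized\scripts\*" -Include *.json, *.csx -Force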

Next steps

In this blog post, I have discussed what happens when you make code changes to your action scripts. I explained why you will end up with ‘old’ or ‘orphaned’ code files within your scripts folder that bloat your source code repo. I also covered steps you can take to mitigate this problem.

I hope my approach helps you address similar scenarios in your use-cases. Please let me know if you have any comments above and would like me to provide further or additional details.

Streamlining Content Hub DevOps: Deploying Environment Variables and API Settings to QA and PROD

Context and background

I recently worked on an exciting Content Hub project which required automation of deployments from DEV environments to QA/TEST and PROD. One of the challenges I faced was how to handle environment-specific variables and settings. One particular use case is the API call Action type, which references an API call endpoint and uses an API key. Typically, such an API call will point to a non-production endpoint in your QA/TEST Content Hub and a production-facing endpoint for the PROD Content Hub.

Sounds familiar, should be easy right?

I thought so, and put this question to my favourite search engine to see what is out there. The truth is, Content Hub DevOps is nothing new really. There is plenty of documentation on how to go about it, including this blog post from the community. From the Sitecore official docs, you can also find details about how to leverage the Content Hub CLI to enable your DevOps workflows.

However, I couldn’t find an end-to-end guide that solves my current problem. Nicky’s blog post “How to: Environment Variables in Content Hub Deployments” was pretty good actually, and I have to say I found the approach quite compelling and detailed. However, I didn’t adopt Nicky’s approach, as I wanted fully automated end-to-end DevOps pipelines, which that approach doesn’t provide.

My approach

Below is a high level process I have used.

  • Leverage the Content Hub CLI to extract a baseline of your Content Hub components. For example, the ch-cli system export --type Scripts --wait --order-id command allows you to export the Actions, Scripts and Triggers package, from which you can extract all your Actions, Scripts and Triggers as JSON files. These can then be source controlled, allowing you to track future updates on a file-by-file basis. For a full list of components that you can export, you can pass the --help param as shown below.
  • Without DevOps, you will typically package and deploy your Actions, Scripts and Triggers, say from the DEV Content Hub into the QA Content Hub instance. You will then have to manually update any of your API call Actions with the QA-specific endpoint URL.
  • With the Content Hub CLI, I am able to source control and compare my Content Hub DEV and QA files as shown below. The left-hand side is my DEV mock API action, the right-hand side is my QA one. Note that the identifier is the same (680QcX1ZDEPeVTKwKIklKXD) to ensure the same file can be deployed across to Content Hub QA and PROD.
  • This is quite powerful, since I can take it to another level and define environment-specific variables for my mock API action, as shown below. I have identified that I will need #{myMockApiUrl}# and #{myMockApiKey}# variables.
    • Notice I am leveraging the ReplaceTokens Azure Pipelines task. The left-hand side is my DEV mock API action, this time with the variables parameterised. The right-hand side is my QA one, to help illustrate the differences. During the QA deployment, my CI/CD pipelines will transform the source-controlled file on the left-hand side into the QA file on the right.
  • That is it, I have solved my problem. I have identified which component(s) have environment-specific variables and parameters. I can now leverage DevOps CI/CD pipelines to package all my components and generate a deploy package specific to the Content Hub QA environment.
  • Deploying a package using the Content Hub CLI uses this command: ch-cli system import --source "path to your deploy package.zip" --job-id --wait
  • Wearing my DevOps hat, I am able to write complete end-to-end CI/CD pipelines to automate the deployments.

Using Azure DevOps CI/CD pipelines

It is very straightforward to define and implement an end-to-end Azure DevOps CI/CD pipeline once we have defined our process and development workflows.

Azure variables template definition

One capability you can leverage is Azure variables template definitions, which allow you to define Content Hub QA and PROD variables, such as below. Notice the #{myMockApiUrl}# and #{myMockApiKey}# variables in this template file; they now have Content Hub QA-specific values. We will need a similar file to hold the Content Hub PROD variables.
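A minimal sketch of what such a variables template could look like; the values are placeholders, and in practice a secret such as the API key should come from a secret pipeline variable or variable group rather than plain YAML.

  # qa-variable-template.yml (illustrative)
  variables:
    myMockApiUrl: 'https://qa.example.com/api/mock'
    myMockApiKey: '$(qaMockApiKeySecret)'   # resolved from a secret pipeline variable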

Referencing Azure variables template file in main pipeline

The Azure variable template file for QA (qa-variable-template.yml, in my case) can then be linked to the main Azure CI/CD pipeline YAML file, as shown below:
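In the main pipeline this is a one-line template reference, for example:

  # azure-pipelines.yml (sketch)
  variables:
    - template: qa-variable-template.yml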

Replacing tokens in main pipelines

A sample of the token replacement step is shown below. Notice the API call Action identifier 680QcX1ZDEPeVTKwKIklKXD that was referenced in my previous screenshots above.
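A rough sketch of that step, assuming the Replace Tokens marketplace extension is installed; the task version, root directory and file pattern are assumptions.

  - task: replacetokens@5
    displayName: 'Replace Content Hub environment tokens'
    inputs:
      rootDirectory: '$(Build.SourcesDirectory)/serialized'
      targetFiles: '**/680QcX1ZDEPeVTKwKIklKXD.json'   # the mock API call Action from this post
      tokenPattern: 'default'   # matches #{...}# tokens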

Next steps

In this blog post, I have introduced the problem and use-case when you need to manage and deploy Content Hub environment specific variables. I have used an API call Action type to illustrate this use case. I have also covered how to leverage Content Hub CLI to serialise Content Hub components and demonstrated an example of using Actions, Scripts and Triggers components. I finished with my own approach and how I did an implementation of an end-to-end automated DevOps process. I hope my approach helps you address similar scenarios in your use-cases. Please let me know if you have any comments above and would like me to provide further or additional details.

Creating and publishing a Content Hub custom connector – Func app settings and debugging

Introduction

In my previous blog post, I covered how to set up your Func app within Visual Studio. In this post, I would like to walk you through how to configure your Func app to allow you to run and debug it in your local development environment.

Func app local.settings.json file

Within your Visual Studio project, create a local.settings.json file at the root of the project. A sample JSON file is shown below. This holds all the configuration settings needed to run and debug the Func app locally.
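A minimal sketch of such a file; the values are placeholders, and the custom keys are a subset of the connector settings listed later in this post.

  {
    "IsEncrypted": false,
    "Values": {
      "AzureWebJobsStorage": "UseDevelopmentStorage=true",
      "FUNCTIONS_WORKER_RUNTIME": "dotnet",
      "ch_base_url": "https://<your content hub instance>/",
      "ch_client_id": "<client id>",
      "ch_client_secret": "<client secret>",
      "ch_username": "<integration user>",
      "ch_password": "<integration user password>",
      "cf_api_base_url": "<cloudflare api base url>",
      "cf_api_token": "<cloudflare api token>"
    }
  }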

The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you’re running your project locally.

Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a remote repository.

DevOps best practices

Microsoft Azure portal func app application settings

Similarly, you will need to configure all the configuration settings in the Microsoft Azure portal for your test or production Func app instances.

Clicking on the Configuration menu, then the Application settings tab, will launch a page similar to the one shown below.

Depending on your needs, these application settings can be managed manually or very easily automated using DevOps pipelines.

List of required application settings

Below is a complete list of the Func app application settings:

  • cf_account_id – your Cloudflare account identifier
  • cf_api_base_url – your Cloudflare API base URL
  • cf_api_token – your Cloudflare API token
  • cf_webhook_url – your Cloudflare webhook URL
  • ch_base_url – your Content Hub instance base URL
  • ch_client_id – your Content Hub instance client identifier
  • ch_client_secret – your Content Hub instance client secret
  • ch_create_publiclinks_script_id – your Content Hub action script identifier for creating public links
  • ch_get_data_script_id – your Content Hub action script identifier for getting data
  • ch_password – your Content Hub integration user password
  • ch_username – your Content Hub integration user username
  • funcapp_api_key – your custom Func app API key configured within your Content Hub integration

Next steps

In this blog post, we have explored how to configure your Function app application settings to allow you to run and debug it in your local development environment. We also looked at configuring them for your published Func app in the Microsoft Azure portal.

Feel free to watch the rest of my YouTube playlist where I am demonstrating the end-to-end custom connector in action. Stay tuned.

Creating and publishing a Content Hub custom connector – Visual Studio project set-up guide

Background

I recently shared a video playlist on how I built a Content Hub custom connector that allows you to publish video assets into your existing video streaming platforms.

In this blog post, I wanted to share details on how to set up your Visual Studio project and solution, as well as how to deploy or publish the connector to your Microsoft Azure cloud.

Create a function app project

Using the latest Visual Studio IDE, create a new Azure Functions project using the C# template, as shown below.

Choose a meaningful name for your project and progress through the next step. Ensure you select an Http triggered function, as shown below.

Finalise the create project wizard to get your project created.

Add Content Hub web SDK reference

As shown below, add a reference to Stylelabs.M.Sdk.WebClient NuGet Package to your project.

In addition, ensure you have added the Microsoft NuGet packages below to enable dependency injection in your Func app.
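From the command line, a typical set of references for in-process dependency injection looks like this; the exact Microsoft packages depend on your Functions runtime, and you may need the Sitecore/Stylelabs NuGet feed configured as a package source for the Web SDK package.

  dotnet add package Stylelabs.M.Sdk.WebClient
  dotnet add package Microsoft.Azure.Functions.Extensions
  dotnet add package Microsoft.Extensions.DependencyInjection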

Enabling FunctionsStartup in your Func app

To enable dependency injection in your project, add a Startup class similar to the one shown below. The Startup class needs to inherit from FunctionsStartup, which allows us to configure and register our interfaces.
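A minimal sketch of such a Startup class, assuming the in-process Functions model with Microsoft.Azure.Functions.Extensions; the namespace and the registered service types are placeholders, not the connector's actual classes.

  using Microsoft.Azure.Functions.Extensions.DependencyInjection;
  using Microsoft.Extensions.DependencyInjection;

  [assembly: FunctionsStartup(typeof(MyConnector.Startup))]

  namespace MyConnector
  {
      // Placeholder service contract and implementation for illustration only.
      public interface IConnectorService { }
      public class ConnectorService : IConnectorService { }

      public class Startup : FunctionsStartup
      {
          public override void Configure(IFunctionsHostBuilder builder)
          {
              // Register the connector's services so they can be constructor-injected
              // into the HTTP-triggered functions.
              builder.Services.AddSingleton<IConnectorService, ConnectorService>();
          }
      }
  }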

Creating Function App on Microsoft Azure portal

As explained in the video playlist, you will need to publish your Func app into your Microsoft Azure subscription.

You will need to create a Function App in your Microsoft Azure subscription using the create template, as shown below. Ensure you select the relevant Subscription, Resource Group and .NET Runtime stack.

Progress through the create wizard screens to stand up your Function app in the portal.

Getting publish profile from the Microsoft Azure portal

On your newly created Function app, navigate to the Deployment Center as shown below

Clicking on the Manage publish profile link should present a pop-up window, from which you can download the publish profile. Make a note of the location where you have downloaded this publish profile file.

Importing publish profile into your Visual Studio project

Right-click on your project within VS, which should pop up the menu shown below.

Click on the Publish… menu to launch the Publish window similar to the one shown below.

Using the Import Profile option will allow you to browse to and select the publish profile file that you previously downloaded from the Microsoft Azure portal. This will then set up the publish profile for your Microsoft Azure subscription.

Publishing the custom connector from VS into Microsoft Azure portal

On the Publish window, you will notice your publish profile is selected, similar to the one below.

Clicking on the Publish button will deploy the Function app to your Microsoft Azure subscription.

Next steps

In this blog post, we have explored how to set up a Function app in your local developer environment, add the required NuGet packages, and publish it to your Microsoft Azure subscription.

Feel free to watch the rest of my YouTube playlist where I am demonstrating the end-to-end custom connector in action. Stay tuned.

Sitecore AKS blue-green Search indexes deployments

This is a follow-up to my Sitecore Zero Downtime deployments series of blogs. I have also previously presented on this topic during the Sitecore Symposium 2021.

If you haven’t read my previous blogs or watched my Sitecore Symposium 2021 session, I suggest you pause now and have a read or watch before proceeding with this blog post.

In this blog post, I will deep dive into the approach for zero-downtime deployments for Sitecore indexes. I am going to use the Sitecore AKS workload for my scenario, although the same concepts can be applied to your Sitecore PaaS workloads too.

Blue-Green Web Index deployment strategy

To ensure complete isolation of your Sitecore Web Index during zero-downtime deployments, you need to create two sets of Web indexes:

  • a Web Green Index to correspond to your Green CD instance
  • and a Web Blue Index to correspond to your Blue CD instance

This means that during a deployment, you can do a full re-indexing of your staging CD without the risk of breaking your LIVE CD.

Blue Green Sitecore Web Index deployment strategy

This infographic captures the initial state when CD GREEN is LIVE and shows the transition when we do a new deployment.

You will notice that in the final state we have swapped CD BLUE to serve the live traffic, achieving Zero down-time deployments.

CI/CD Pipelines

How do you implement this?

Below, I am sharing a high-level CI/CD pipeline that I have used in my scenario for reference.

Sample CI/CD pipelines

In the Blue-Green deployment strategy, you typically deploy to the Blue CD instance when the Green CD is currently in production and serving LIVE traffic, or vice-versa. This is how we achieve zero-downtime deployments.

You can now extend your CI/CD pipelines to be aware of the Web Blue Index and Web Green Index. You do this by ensuring that you update your Content Management (CM) instance configuration to point to the correct Web Blue or Web Green Index. This is achieved by parameterizing the Sitecore index configuration patch files, as sketched below.
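A rough sketch of what such a parameterized patch could look like for a Solr web index; the token name and the exact patch structure are assumptions and will vary with your Sitecore version and search provider.

  <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
    <sitecore>
      <contentSearch>
        <configuration>
          <indexes>
            <index id="sitecore_web_index">
              <!-- Token is replaced by CI/CD with the blue or green Solr core name -->
              <param desc="core">#{webIndexCore}#</param>
            </index>
          </indexes>
        </configuration>
      </contentSearch>
    </sitecore>
  </configuration>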

Your CI/CD process will then update your CM and CD images with the correct Web Blue or Web Green index before building them accordingly, as shown above. And that is it.

Prefer to watch the video instead?

If you prefer to watch my video instead, I have included the link below.

Video to walk you through the Sitecore AKS Blue-Green Search Index strategy

Next steps

If you have any feedback or questions, please leave me a comment, and I am happy to get back to you.

Also, you can subscribe to my YouTube channel, so you don’t miss out on latest updates.

Sitecore Zero-downtime deployments – Part 4

Sitecore PaaS/AKS blue-green deployments

With modern and mature DevOps, we all want smooth, sleek and painless automated deployments with zero-downtime. Sitecore deployments are no exception. Have you embraced zero-downtime deployments? This is not a new topic. If you look around Sitecore community, you see an odd question popping here and there regarding this topic.

The journey towards achieving zero-downtime deployments for any application in fact starts with your code base. So, in this series of blog posts, we will refresh ourselves on concepts like “Code Freeze” and the CI/CD process before deep diving into implementing Sitecore zero-downtime deployments.

Sitecore XP PaaS Blue-Green architecture

Sitecore XP PaaS reference architecture

The infographic above shows a typical Azure PaaS architecture for Sitecore XP scaled topology. In summary we have:

  • our Sitecore XP application roles such as CM, CD, and ID, among others
  • these roles have access to the Sitecore databases (master, web, core, among others)
  • access to the rest of the services, such as Azure Key Vault, Azure Redis Cache, App Insights, and Azure Search, among others

You will notice in this architecture that we have Blue-Web and Green-Web databases, which correspond to the BLUE-GREEN deployment slots for the CD App Service. We need separate web databases to enable us to achieve content-safe deployments.

The CM App Service also has BLUE-GREEN deployment slots specifically for code deployment, but with a shared master database. There is no compelling reason to have BLUE-GREEN master databases, purely because of the complexity introduced by such an architecture (although it is not impossible to implement if you prefer this approach).

The rest of our XP scaled topology resources are shared.

The Azure DevOps organisation, which will typically have access to run the CI/CD pipelines, is also included in the architecture.

How to manage settings

The App Service Settings section can be leveraged to manage your Sitecore configuration settings, including Sitecore connection strings.

Sitecore XP PaaS CI/CD process summary

Sitecore XP PaaS CI/CD process

Required steps:

  1. Trigger the CD process
  2. Make a copy of your web-db – this is for content-safe deployment. Both CM and the BLUE CD point to the original web-db at this point. The BLUE CD is still in production with our live users accessing it
  3. Now deploy your new version to both the CM instance and the GREEN CD staging slot instance, pointing them to use the copy of the web-db. Perform content deployment as usual, publish, rebuild the Sitecore indexes and perform any tests. This will not affect your BLUE CD at this stage.
  4. Once happy with the deployment, swap the CD production and staging slots. The GREEN CD with our new version is now production and our live users are accessing it. Zero downtime achieved! Our previous version is still running in the BLUE CD. If we have issues, we swap again to roll back.

Some notes:

This example doesn’t have BLUE-GREEN for the CM instance, as I want to keep it simple. This, though, means your content editors will have to wait for the deployment to finish before using the CM. If you really need CM zero downtime, then you need to deploy CM BLUE-GREEN deployment slots as well. Alternatively, you can keep the deployment time to the CM to a minimum and avoid BLUE-GREEN.

You can also be more creative with your Sitecore template changes so that your changes are always backward compatible between successive releases (e.g. don’t delete fields immediately, mark them as obsolete instead). This means you can safely roll back your changes without breaking the application.

Sitecore XP AKS Blue-Green architecture

Sitecore using containers makes use of the Azure Kubernetes Service. This infographic shows a very simplified AKS blue-green strategy that allows us to achieve zero-downtime deployments.

Kubernetes Blue-Green strategy

How does it work?

  1. You define a blue deployment for v1 and apply it to the desired state of your cluster.
  2. When version 2 comes along, you define a green deployment, apply it to your cluster, and test and validate it without affecting the blue deployment.
  3. You then gradually replace v1 with v2.
  4. Version 1 can be deleted if no longer needed.

Below we have a typical Sitecore XP Azure Kubernetes Service architecture for the Sitecore XP scaled topology – the AKS cluster containing various pods running our containers.

Sitecore XP AKS Blue-Green reference architecture

You can see the scaled-out Sitecore XP application roles running as individual pods within this AKS cluster, backed by a Windows node pool.

We also have access to Sitecore databases as well as other services such as Azure Key Vault, Azure Redis cache, App Insights among others.

I am also showing our Azure DevOps organisation, which will typically have access to run the CI/CD pipelines.

Similar to the Azure PaaS architecture, AKS zero-downtime deployments make use of the BLUE-GREEN deployment strategy for the CD or CM instances.

AKS Zero downtime deployments process

How do we do that? We don’t need to provision a separate cluster for the GREEN environment. Instead, we define an additional GREEN deployment with its corresponding service and label it accordingly, alongside our BLUE deployment.

For content-safe deployments, we will also be pointing to a copy of the web database (Green), as shown.

Once we have tested and are happy with our new GREEN deployment, we switch traffic or routing to point to GREEN. We do this by updating our Ingress controller specification.
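A minimal sketch of that switch; all names, hosts and ports below are placeholders.

  # Point live traffic at the green CD service instead of the blue one.
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: cd-ingress
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: cd-green   # was cd-blue before the swap
                  port:
                    number: 80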

Sitecore AKS Blue-Green (Green deployment)

In the above infographic, you can see that our end-users can now access V2 in the GREEN deployment.

The BLUE deployment is on stand-by in case of a roll back, and can be deleted if no longer required.

Note that, as previously discussed for the PaaS deployments, you can implement BLUE-GREEN for the CM if required.

Sitecore XP AKS CI/CD process

Sitecore XP AKS CI/CD process

Steps summary

  1. Trigger the release pipeline process
  2. Make a copy of your web-db – this is for content-safe deployment. Both CM and the BLUE CD point to the web-db at this point. The BLUE CD is still in production with live users accessing it
  3. Apply your green deployment desired state onto the cluster. This creates the green pods with the new version of the Docker images and runs our Sitecore deployment, including content deployment. This will use the copy of the web-db we created earlier. Publish and rebuild the indexes as usual, then test and verify the deployment
  4. Once happy with the deployment, update the traffic routing in the Ingress Controller and live users can now access our new Sitecore version. In the event of a roll-back, update the traffic routing in the Ingress controller again. If the BLUE deployment is no longer needed, clean it up to save on resources

Next steps

And this is a wrap. This post concludes this series of blog posts, where we looked into implementing Sitecore Zero Downtime deployments. I hope you found this useful and can start your own journey towards achieving zero-downtime deployments with your Sitecore workloads. If you have any comments or queries, please leave me a comment at the end of this post.

Sitecore Zero-downtime deployments – Part 3

Blue-Green Deployments

With modern and mature DevOps, we all want smooth, sleek and painless automated deployments with zero-downtime. Sitecore deployments are no exception. Have you embraced zero-downtime deployments? This is not a new topic. If you look around Sitecore community, you see an odd question popping here and there regarding this topic.

The journey towards achieving zero-downtime deployments for any application in fact starts with your code base. So, in this series of blog posts, we will refresh ourselves on concepts like “Code Freeze” and the CI/CD process before deep diving into implementing Sitecore zero-downtime deployments.

Blue-Green deployments architecture

Blue-green deployments strategy

In software engineering, blue-green deployment is a method of installing changes to a web, app, or database server by swapping alternating production and staging servers

Wikipedia

Key Concepts

In its purest form, true BLUE/GREEN deployment means that we need two separate but identical environments: one is live (BLUE) and the other is on stand-by (GREEN). When you have a new version of your application, you deploy it to the staging environment (GREEN) and test it without affecting BLUE. When you are happy with this new version, you can then swap it to be the LIVE instance.

However, in practice, it doesn’t always make sense to run a copy of every resource. Furthermore, this may introduce some complexity to the process.

This is why we now have some shared resources as you can see in the infographic above, while others belong to BLUE or GREEN environment.

As part of this architecture, we need some way of switching or routing incoming traffic between the two environments.

Blue-Green deployment strategy effectively enables us to achieve zero down time deployments. This is because your users will not notice any downtime during deployments.

CI/CD process for Blue-Green deployments

CI/CD process for Blue-Green deployments

On the top part of the infographic above, BLUE is currently the production environment and our users are accessing this environment. When we have a new version of our application, it is deployed to the GREEN environment, without affecting our users.

On the bottom part of the infographic above, GREEN is now the production environment and our users are accessing this environment. This leaves the BLUE environment available for us to deploy the next version of our application.

We deploy to BLUE and GREEN in turns, thus achieving zero-downtime deployments. The process repeats in each deployment cycle.

Some benefits of Blue-Green strategy

If you haven’t already adopted the cloud for your Sitecore workloads – be it PaaS or containers – then perhaps you need to start thinking about this seriously, as there are real benefits to be gained.

“Blue-green deployments made easier with the cloud.”

fact

The cloud provides tooling you need to:

  • Automate your provisioning and tearing down of environments
  • Automate starting or stopping of services
  • Kubernetes simplifies container orchestration for us, and the Azure Kubernetes Service (AKS) provides a control plane for free
  • The flexibility and cost reductions the cloud offers put blue-green deployments within everyone’s reach in this day and age, so please embrace them.

Next steps

Hopefully, this blog post helps you understand the key concepts of BLUE-GREEN deployments.

In the next blog post in this series, we will look at implementing Sitecore Zero Downtime deployments.