Sitecore Content Hub DevOps: New Import/Export engine with breaking changes is now default

Context and background

If you are already using DevOps for deployments with your Content Hub environments, then you are probably already aware of the breaking change that Sitecore introduced a few months ago. You can read the full notification on the Sitecore Support page. According to the notification, the new version of the package import/export engine became the default in both the UI and CLI from Tuesday, September 30. Because of the breaking changes introduced, existing CI/CD pipelines won’t work. In fact, there is a high risk of breaking your environments if you try to use existing CI/CD pipelines without refactoring.

In this blog post, I will look in detail at what breaking changes were introduced and how to re-align your existing CI/CD pipelines to work with the new import/export engine.

So what has changed in the new Import/Export engine?

Below is a screenshot from the official Sitecore docs summarizing the change. You can also access the change log here.

There are no further details available in the docs on the specifics of the breaking change. However, it is straightforward to figure out that Sitecore has fundamentally changed the package architecture in the new import/export engine.

Exporting with the legacy and new engines

Within the Sitecore Content Hub Import/Export UI, you have the option to export components using either the previous/legacy engine or the new engine. As shown below, there is a toggle for Enable Legacy version which, when switched on, allows you to export a package with the previous/legacy engine.

Note also that Publish definition configurations and Email templates are now available for import/export with the new engine. Email templates are unchecked by default.

If you do a quick comparison between an export package from the old/legacy engine and one from the new engine, it becomes clear that Sitecore has updated the packaging structure to organise content by resource type rather than by export category.

This change makes navigation more straightforward and ensures greater consistency throughout the package.

Summary of the changes between legacy and new export packages

Below is a graphic showing how the package structure has changed. On the left-hand side is the legacy/old package and on the right-hand side is the new one.

Full comparison of package contents between old and new

Below is a more detailed comparison, showing how the packages differ.

| Component | Legacy package sub folders | New package sub folders |
| --- | --- | --- |
| Copy profiles | copy_profiles | entities |
| Email templates | n/a | entities |
| Entity definitions | entities, schema, option_lists | datasources, entities, schema |
| Export profiles | export_profiles | entities |
| Media processing | media_processing_sets | entities |
| Option lists | option_lists | datasources |
| Policies | policies | datasources, entities, policies, schema |
| Portal pages | entities, portal_pages | datasources, entities, policies, schema |
| Publish definition configurations | n/a | entities |
| Rendition links | rendition_links | entities |
| Settings | settings | entities |
| State flows | state_flows | datasources, entities, policies, schema |
| Taxonomies | taxonomies | datasources, entities, schema |
| Triggers | actions, triggers | entities |
| Scripts | actions, scripts | entities |

Resources are grouped by type

Instead of separate folders like portal_pages, media_processing_sets, or option_lists, the new export engine places files according to their resource type.

For example:

  • All entities are stored in the entities/ folder.
  • All datasources (such as option lists) are found in the datasources/ folder.
  • Policies and schema files have their own dedicated folders.

Each resource is saved as an individual JSON file named with its unique identifier.
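
To illustrate the new layout, a hypothetical new-engine package might look like the sketch below (the identifiers are made up, and the exact set of folders depends on which components you export):

```text
package/
├── entities/
│   ├── aBcD1234EfGh5678IjKl90.json
│   └── ...
├── datasources/
│   └── MnOpQrSt1234UvWx5678Yz.json
├── policies/
│   └── ...
└── schema/
    └── ...
```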

Related components are now separated

When a resource includes related items, such as a portal page referencing multiple components, each component is now saved in its own JSON file.

These files are no longer embedded or nested under the parent resource.

Updating your CI/CD pipelines

It is very straightforward to update your existing CI/CD pipelines once you have analysed and understood the new package architecture. You can revisit my previous blog post, where I covered this topic in detail. You simply need to map your previous logic to the new package architecture. You will also need to re-baseline your Content Hub environments within your source control so that you are working from the new package architecture.
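
As a minimal sketch, a re-baseline could look like the steps below, reusing the CLI commands covered in this series (paths and the order id are placeholders):

```bash
# Export a fresh baseline using the new engine (order id is a placeholder)
ch-cli system export --type Scripts --wait --order-id <your-order-id>

# Replace the previously serialised files with the new package contents
rm -rf ./serialized/*
unzip <exported-package>.zip -d ./serialized/

# Commit the new baseline to source control
git add ./serialized
git commit -m "Re-baseline Content Hub components on the new import/export engine"
```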

Next steps

In this blog post, I have looked at the new Content Hub Import/Export engine. I dived into how you can analyse the packages produced by the legacy/old engine and compared them with those from the new engine. I hope you find this valuable and that the analysis provides a clear view of what has changed in the new package architecture.

Please let me know if you have any comments or would like me to provide further details.

Content Hub DevOps: Managing your action script code lifecycle in CI/CD pipelines

Context and background

When working with automated CI/CD pipelines for your Sitecore Content Hub, you need to be aware of the development lifecycle of your Action Scripts. This is to ensure the source code repo for your scripts doesn’t get bloated with ‘orphaned’ script code files. In this blog post, I will cover how to manage the development lifecycle of your Action Scripts to mitigate this problem.

What happens when you serialize action scripts into source control

I have previously blogged on Content Hub DevOps, especially on leveraging the Content Hub CLI to extract a baseline of your Content Hub components. For example, the ch-cli system export --type Scripts --wait --order-id command allows you to export an Actions, Scripts and Triggers package. When you unzip or extract the files within this package, you will notice there is a scripts folder. This will contain two types of files: .json files and .csx files (assuming your action scripts are written in C#.NET).

Script .json file type

Each action script packaged from your Sitecore Content Hub instance will have two files. One of them is the script .json file.
Below is a sample action script json file:

This file contains all the relevant metadata about the action script. In particular, you will notice that it references a second file using the ScriptToActiveScriptContent relation property. In our sample, this json file references the code file “ZOGG4GbbQpyGlTYM7r1GfA”.
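
The sample file itself was shown as a screenshot; its shape is roughly as sketched below (all field names other than the ScriptToActiveScriptContent relation and the referenced code file identifier are illustrative assumptions):

```json
{
  "Identifier": "aBcD1234EfGh5678IjKl90",
  "Properties": {
    "ScriptName": "My sample action script",
    "ScriptType": "Action"
  },
  "Relations": {
    "ScriptToActiveScriptContent": ["ZOGG4GbbQpyGlTYM7r1GfA"]
  }
}
```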

Script .csx code file

The code file, based on C#.NET, is similar to the sample shown below.
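
The screenshot is not reproduced here; a trivial illustrative .csx script might look like this (it only assumes the standard MClient and Context objects available to Content Hub action scripts):

```csharp
// A trivial action script: log the id of the entity that triggered the action
var entity = Context.Target as IEntity;
MClient.Logger.Info($"Action script triggered for entity id: {entity?.Id}");
```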

What happens when you modify the code in your scripts

Each time you make changes to your Action Script source code and successfully build it, Content Hub will generate a new code file version behind the scenes. This will be automatically linked to its corresponding script .json file.

To visualise this, you will notice that when you serialise your Action Script again from your Content Hub instance, a new code file will be generated.

If you now compare the previously serialised files with the new ones, it becomes obvious what changes Content Hub has made to the .json file. Below is a sample comparison.

What should you do with the ‘old’ code file

We have now established what is going on whenever the source code in your action script is changed and successfully rebuilt. Each time, a new file will be generated. The old file will remain in your source code repository, unused and effectively ‘orphaned’.

My recommendation is to design your DevOps process to always clean up or delete all files from the scripts folder in your source code before pulling the latest serialised files from your Content Hub instance.

You can do this in an automated way leveraging the Content Hub CLI commands. Alternatively, you can do it the old-school way, leveraging PowerShell commands to delete all files from the scripts folder before serialising new ones again (a sketch follows below). Whichever mechanism you leverage, ensure old and unused code files do not bloat your source code repo.
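
For illustration, a minimal PowerShell clean-up step could look like this (the folder path is a placeholder for your own repo layout):

```powershell
# Remove previously serialised script files so orphaned code files don't linger
$scriptsFolder = ".\serialized\scripts"
if (Test-Path $scriptsFolder) {
    Remove-Item -Path (Join-Path $scriptsFolder "*") -Recurse -Force
}

# Then re-serialise from Content Hub, e.g. via the CLI export covered earlier:
# ch-cli system export --type Scripts --wait --order-id <your-order-id>
```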

Next steps

In this blog post, I have discussed what happens when you make code changes to your action scripts. I explained why you will have ‘old’ or ‘orphaned’ code files within your scripts folder that will bloat your source code repo. I also covered steps you can take to mitigate this problem.

I hope my approach helps you address similar scenarios in your use cases. Please let me know if you have any comments or would like me to provide further details.

Content Hub tips & tricks: Overriding Created by or Modified by ‘ScriptUser’ in your scripts

Background

If you currently leverage Action scripts as part of your Content Hub solution, you have probably noticed that any updates made to your entities are tagged as Created by or Modified by ‘Scriptuser’. You may not be aware that this is actually how the Content Hub platform works by design. At present, all scripts operate under the Scriptuser account.

But how do I override the ‘Scriptuser’ account?

Frustratingly, there isn’t much documentation around this ‘Scriptuser’ in the official docs. You are not alone in this. That is why I have put together this blog post to provide a solution to this problem.

It is not all doom and gloom. In fact, if you look around the Sitecore docs, the IMClient provides a capability to impersonate a user, using the ImpersonateAsync method.

Below is a definition from Sitecore docs:

ImpersonateAsync(string)

Creates a new IMClient that acts on behalf of the user with the specified username. The current logger will be copied to the new client.

So we will need a function to get an impersonated MClient

Looks like we have a solution then. All we need to do now is define a function within our Action Script that will create an instance of an impersonated MClient. This impersonated client will in effect allow us to override the Scriptuser with the current user triggering our Action Script, which is the solution to our problem.

Good news: I have already defined such a function for you. I have named this function GetImpersonatedClientAsync, as shown below.
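
The original post showed this function as a screenshot; below is a sketch of the same idea, assuming the standard Content Hub scripting objects (MClient, Context) and that Context.TriggeringUserId carries the id of the user who triggered the script:

```csharp
// Sketch: build an MClient that acts on behalf of the triggering user.
// Returns null when impersonation is not possible, so callers can fall back.
async Task<IMClient> GetImpersonatedClientAsync()
{
    var userId = Context.TriggeringUserId;
    if (userId == null) return null;

    // Load only the Username property of the triggering user's entity
    var loadConfig = new EntityLoadConfiguration(
        CultureLoadOption.Default,
        new PropertyLoadOption("Username"),
        RelationLoadOption.None);

    var userEntity = await MClient.Entities.GetAsync(userId.Value, loadConfig).ConfigureAwait(false);
    var username = userEntity?.GetPropertyValue<string>("Username");
    if (string.IsNullOrEmpty(username)) return null;

    // Create a new client acting on behalf of the triggering user
    return await MClient.ImpersonateAsync(username).ConfigureAwait(false);
}
```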

You should be familiar with how to load an entity property by leveraging the PropertyLoadOption in your EntityLoadConfiguration, as shown above. That is how I am able to retrieve the Username property from the current user entity (the currently authenticated user within Content Hub). This will be the user that triggered our Action Script.

It is needed as a parameter when calling MClient.ImpersonateAsync, as shown in the function above.

I have left out the details of how the triggering of an Action Script happens, since I have covered this topic in my previous blog posts, such as this one.

However, I would like to call out that the UserId of the authenticated user is always available within the Action Script by accessing the Context object. Below is a snippet of how I have done this.
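
As a one-line sketch (the exact property name is an assumption of the standard scripting API):

```csharp
// The id of the user that triggered the script, exposed on the Context object
long? triggeringUserId = Context.TriggeringUserId;
```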

Saving changes with Impersonated MClient

We now need to use the impersonated MClient in all instances where we are saving changes in our Action Script. Below is a code snippet showing how to achieve this: the save uses the impersonated userClient, with a fallback to the default MClient when impersonation is not possible.
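
A sketch of that save logic, reusing the GetImpersonatedClientAsync function above (the entity variable stands in for whatever your script is updating):

```csharp
var userClient = await GetImpersonatedClientAsync().ConfigureAwait(false);

if (userClient != null)
{
    // Saved on behalf of the triggering user, so audit fields show that user
    await userClient.Entities.SaveAsync(entity).ConfigureAwait(false);
}
else
{
    // Fallback: default client, changes will be tagged as 'Scriptuser'
    await MClient.Entities.SaveAsync(entity).ConfigureAwait(false);
}
```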

You can view the full source code of the above script from my public GitHub Gist.

Next steps

In this blog post, we have looked at how to override the Scriptuser within an action script. I have walked you through a sample action script and also shared sample source code for your reference. I hope you find this useful for your similar use cases.

Stay tuned and please leave us any feedback or comments.

Content Hub tips & tricks: Getting entity created by, modified by and other audit data in your scripts

Background

I have previously blogged about creating Action Scripts to help aggregate your Content Hub data for sharing with external or third-party integrations. In this blog post, I want to demonstrate how to access your entities’ audit data, such as when the entity was created and who created it, and when it was last modified and who made the modification.

IEntity is all you need

IEntity represents an entity within Content Hub.

Luckily enough, IEntity already inherits from the IResource type, which defines the audit fields that we need, as highlighted above.

Sample Action script code

Below is a snippet of an action script demonstrating how to access these audit fields.
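
The snippet in the original post is a screenshot; here is a minimal sketch of the same idea (the GetEntity helper is covered below, and the Context and audit property names are assumed from the standard scripting API, so the line numbers in the walkthrough below refer to the original listing):

```csharp
// Try to obtain the triggering entity from the Context
var entity = Context.Target as IEntity;
if (entity == null)
{
    MClient.Logger.Warn("No target entity available on the Context.");
    return;
}

// Reload the entity with the properties/relations we need (custom helper, shown below)
entity = await GetEntity(entity.Id.Value);

// Entity definition name
MClient.Logger.Info($"Definition: {entity.DefinitionName}");

// Audit metadata inherited from IResource
MClient.Logger.Info($"Created on:  {entity.CreatedOn} by user id: {entity.CreatedBy}");
MClient.Logger.Info($"Modified on: {entity.ModifiedOn} by user id: {entity.ModifiedBy}");
```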

And below is the sample output from the above script.

Script explanation

  • Lines 14 through 27 demonstrate how to get the entity object within your action script. The Context object provides you with the triggering context properties. In this case, we can try to obtain the entity from Context.Target, as shown on Line 14. The rest of the code checks whether we succeeded in getting the entity object, in which case we can try to load it using a custom function GetEntity, as shown on Line 20. I will cover the details of this in a minute.
  • Line 29 demonstrates how to obtain the entity definition name.
  • Lines 32 through 35 demonstrate how to obtain the entity auditing metadata.

How to load an entity using a custom function

Below is the code snippet showing how to load an entity using an entity Id, by leveraging the MClient.Entities.GetAsync method. Notice you will need to specify the load configuration, where we can additionally specify which properties and relations to load for the entity.

For example, I am loading the “AssetTitle” property as well as the “AssetTypeToAsset” relation for my M.Asset entity type.
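
The original snippet was a screenshot; a sketch of the same idea follows, using the property and relation named above:

```csharp
// Sketch of the custom GetEntity helper: load an entity by id with an
// explicit load configuration for the properties/relations we need
async Task<IEntity> GetEntity(long entityId)
{
    var loadConfig = new EntityLoadConfiguration(
        CultureLoadOption.Default,
        new PropertyLoadOption("AssetTitle"),
        new RelationLoadOption("AssetTypeToAsset"));

    return await MClient.Entities.GetAsync(entityId, loadConfig).ConfigureAwait(false);
}
```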

You can view the full source code of the above script from my public GitHub Gist.

Sample Trigger for our script

Below is a sample trigger, indicating the triggering events we are interested in. As you can see, our trigger will fire whenever an entity has been created, modified or deleted within Content Hub.

I haven’t shown the triggering conditions, and how to link this to an Action. This information is available within Content Hub documentation, if you would like to explore further.

Next steps

In this blog post, we have looked at how to access an entity’s auditing metadata within an action script. I walked you through a sample action script and a sample output, and also shared sample source code for your reference. I hope you find this useful for your similar use cases.

Stay tuned and please leave us any feedback or comments.

Streamlining Content Hub DevOps: Deploying Environment Variables and API Settings to QA and PROD

Context and background

I recently worked on an exciting Content Hub project which required automation of deployments from DEV environments to QA/TEST and PROD. One of the challenges I faced was how to handle environment-specific variables and settings. One particular use case is the API call Action type, which references an API call endpoint and an Api Key. Typically, such an API call will point to a non-production endpoint in your QA/TEST Content Hub and a production-facing endpoint in your PROD Content Hub.

Sounds familiar, should be easy right?

I thought so. I put this question to my favourite search engine to see what was out there. The truth is, Content Hub DevOps is nothing new really. There is plenty of documentation on how to go about it, including this blog post from the community. From the Sitecore official docs, you can also find details about how to leverage the Content Hub CLI to enable your DevOps workflows.

However, I couldn’t find an end-to-end guide that solved my problem. Nicky’s blog post “How to: Environment Variables in Content Hub Deployments” was pretty good actually, and I have to say I found the approach quite compelling and detailed. However, I didn’t adopt Nicky’s approach, as I wanted fully automated end-to-end DevOps pipelines, which unfortunately Nicky’s approach doesn’t provide.

My approach

Below is a high-level process I have used.

  • Leveraging the Content Hub CLI to extract a baseline of your Content Hub components. For example, the ch-cli system export --type Scripts --wait --order-id command allows you to export an Actions, Scripts and Triggers package, from which you can extract all your Actions, Scripts and Triggers as JSON files. These can then be source controlled, allowing you to track future updates on a file-by-file basis. For a full list of components that you can export, you can pass the --help param as shown below.
  • Without DevOps, you will typically package and deploy your Actions, Scripts and Triggers, say from a DEV Content Hub into a QA Content Hub instance. You will then have to manually update any of your API call Actions with the QA-specific endpoint URL.
  • With the Content Hub CLI, I am able to source control and compare my Content Hub DEV and QA files as shown below. The left-hand side is my DEV mock API action, the right-hand side is my QA one. Please note the identifier is the same (680QcX1ZDEPeVTKwKIklKXD) to ensure the same file can be deployed across to Content Hub QA and PROD.
  • This is quite powerful, since I can take this a step further and define environment-specific variables for my mock API action, as shown below and in the sketch after this list. I have identified that I will need #{myMockApiUrl}# and #{myMockApiKey}# variables.
    • Notice I am leveraging the ReplaceTokens Azure pipelines task. The left-hand side is my DEV mock API action, this time with the variables parameterised. The right-hand side is my QA one, to help illustrate the differences. During the QA deployment, my CI/CD pipelines will transform the source-controlled file on the left-hand side into the QA file on the right.
  • That is it: I have solved my problem. I have identified which component(s) have environment-specific variables and parameters. I can now leverage DevOps CI/CD pipelines to package all my components and generate a deploy package specific to the Content Hub QA environment.
  • Deploying a package using Content Hub CLI uses this command: ch-cli system import --source "path to your deploy package.zip" --job-id --wait
  • Wearing my DevOps hat, I am able to write complete end-to-end CI/CD pipelines to automate the deployments.
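
For illustration, the tokenised mock API action file could look roughly like the sketch below. The overall JSON shape is an assumption for illustration; the identifier and the token placeholders are the ones referenced above.

```json
{
  "Identifier": "680QcX1ZDEPeVTKwKIklKXD",
  "Name": "MyMockApiAction",
  "Type": "ApiCall",
  "ApiUrl": "#{myMockApiUrl}#",
  "Headers": {
    "X-Api-Key": "#{myMockApiKey}#"
  }
}
```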

Using Azure DevOps CI/CD pipelines

It is very straightforward to define and implement end-to-end Azure DevOps CI/CD pipelines once we have defined our process and development workflows.

Azure variables template definition

One capability you can leverage is Azure variables template definitions, which allow you to define Content Hub QA and PROD variables, such as below. Please notice the myMockApiUrl and myMockApiKey variables in this template file; they now have Content Hub QA-specific values. We will need a similar file to hold the Content Hub PROD variables.
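
A sketch of what qa-variable-template.yml could contain; the variable names match the tokens above, and the values are placeholders (secrets would normally come from a secret pipeline variable or variable group):

```yaml
# qa-variable-template.yml
variables:
  myMockApiUrl: 'https://mock-api.qa.example.com/endpoint'
  myMockApiKey: '$(qaMockApiKey)' # resolved from a secret pipeline variable
```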

Referencing Azure variables template file in main pipeline

The Azure variable template file for QA (qa-variable-template.yml, in my case) can then be linked to the main Azure CI/CD pipeline yaml file, as shown below:
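
A minimal sketch of referencing the template from the main pipeline (the stage name is a placeholder):

```yaml
stages:
  - stage: DeployQA
    variables:
      - template: qa-variable-template.yml
```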

Replacing tokens in main pipelines

A sample of the token replacement is shown below. Please notice the API call Action identifier 680QcX1ZDEPeVTKwKIklKXD that was referenced in my previous screenshots above.
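
A sketch of the token replacement step; the task version, glob path and inputs are assumptions for illustration:

```yaml
- task: replacetokens@5
  displayName: 'Replace Content Hub environment tokens'
  inputs:
    targetFiles: '$(Build.SourcesDirectory)/**/680QcX1ZDEPeVTKwKIklKXD.json'
    tokenPattern: 'default' # matches #{variableName}# placeholders
```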

Next steps

In this blog post, I have introduced the problem and use case of managing and deploying Content Hub environment-specific variables. I used an API call Action type to illustrate this use case. I also covered how to leverage the Content Hub CLI to serialise Content Hub components, with an example using the Actions, Scripts and Triggers components. I finished with my own approach and how I implemented an end-to-end automated DevOps process. I hope my approach helps you address similar scenarios in your use cases. Please let me know if you have any comments or would like me to provide further details.

Creating and publishing a Content Hub custom connector – Func app integration testing

In my previous blog post, I covered how to configure your Function app for debugging within your local development environment. In this blog post, we are going to carry on from where I left off and look at how to start doing some end-to-end integration tests with your Func app.

Getting your Sitecore Content Hub ready

As explained in my YouTube video – 3 Content Hub: API call action set-up, you need to set up an API call action within Content Hub. This is the action that will trigger the execution of our Func app.

Below are steps to follow to accomplish this.

  • Using the Manage -> Actions menu, click on the ‘New action’ button to launch the create New action popup, similar to the one shown below.
  • Populate all the required details as shown above:
    • Name: Specify the name of the API call
    • Label: Specify the display label for the API call
    • Type: Select API call from the Type dropdown list, similar to one shown below
    • Method: Select POST. This means a POST request will be sent to our Func app when this action is executed
    • API URL: Specify your Func app URL. I will provide further details later on how to obtain your Func app URL in your local Dev environment
    • Timeout: Specify a timeout value in milliseconds. Content Hub provides a choice of 30, 60, 90, 120, 300 and 600 as of writing this blog
    • Headers: Add any custom HTTP Headers for the POST request. In my case, I have a custom X-Api-Key which I am using to specify the Func app Api key
    • Values: Add any additional metadata that you want passed into the Func app.

Getting your local Dev Func app ready for testing and debugging

As explained in my previous blog, you need to ensure your local.settings.json has all the required application settings. This is to ensure we can establish an integration between Func app and your Content Hub instance.

Then you need to build your Visual Studio project and ensure you have no build errors. You can do this using the Build -> Rebuild Solution option. Alternatively, right-click on your project and select Rebuild.

Your Visual Studio Output window will show a Rebuild All succeeded message, similar to the one below.

Setting up Breakpoint within the code files

To be able to debug and step into your code files, add the relevant Breakpoints on a particular line of code using the Debug -> Toggle Breakpoint menu or the F9 shortcut key. In my example below, I have toggled a Breakpoint on line 32.

Start running your local Dev Func app in Debug mode

We are now ready to execute our local Dev Func app in debug mode. From the Debug menu, click on Start Debugging or press the F5 shortcut key.

This will launch a Func app runtime command window, similar to the one shown below:

Your local Dev Func app is now ready to process your POST requests. The URL is indicated by the arrow above. Remember that earlier we needed to configure the Content Hub API call URL; this is the value to use. Copy and apply it in your Content Hub accordingly.

Triggering your local Dev Func app from Content Hub

As I demonstrated in my YouTube video – 1 Cloudflare Stream connector demo, I am triggering the API call using a custom button. In my use case, I have added this button to the Blog details page. Clicking on the ‘Send to Cloudflare’ button will trigger the local Func app.

You will notice the local Func app will have activated any Breakpoints you have set up within the Visual Studio project. This will allow you to step into any lines of code that you would like to inspect at debug-level.

Sample active Breakpoint with request Debug details

Below you can see my Breakpoint on line 32, where I am inspecting debug-level details of my POST request.

Next steps

In this blog post, we have explored how to prepare both your Sitecore Content Hub API call and your local Dev Function app for integration testing. We also looked at the steps for configuring and triggering the Func app while in debug mode. Feel free to watch the rest of my YouTube playlist where I demonstrate the end-to-end custom connector in action. Stay tuned.

Creating and publishing a Content Hub custom connector – Func app settings and debugging

Introduction

In my previous blog post, I covered how to set up your Func app within Visual Studio. In this post, I would like to walk you through how to configure your Func app to allow you to run and debug it in your local development environment.

Func app local.settings.json file

Within your Visual Studio project, create a local.settings.json file at the root of the project. A sample json file is shown below. This will hold all the configuration settings that allow you to run and debug the Func app locally.
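
A sketch of what the local.settings.json file could look like, using the app settings listed later in this post (the first two Values entries are the standard Functions keys, and all values are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "cf_account_id": "<your Cloudflare account id>",
    "cf_api_base_url": "https://api.cloudflare.com/client/v4",
    "cf_api_token": "<your Cloudflare API token>",
    "cf_webhook_url": "<your Cloudflare webhook URL>",
    "ch_base_url": "https://<your-instance>.sitecorecontenthub.cloud/",
    "ch_client_id": "<client id>",
    "ch_client_secret": "<client secret>",
    "ch_create_publiclinks_script_id": "<script id>",
    "ch_get_data_script_id": "<script id>",
    "ch_password": "<integration user password>",
    "ch_username": "<integration user username>",
    "funcapp_api_key": "<custom Func app API key>"
  }
}
```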

The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you’re running your project locally.

Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a remote repository.

DevOps best practices

Microsoft Azure portal func app application settings

Similarly, you will need to configure all the configuration settings in the Microsoft Azure portal for your test or production Func app instances.

Clicking on the Configuration menu, then the Application settings tab, will launch a page similar to the one shown below.

Depending on your needs, these application settings can be managed manually or very easily automated using DevOps pipelines.

List of required application settings

Below is a complete list of the Func app application settings:

  • cf_account_id: your Cloudflare account identifier
  • cf_api_base_url: your Cloudflare API base URL
  • cf_api_token: your Cloudflare API token
  • cf_webhook_url: your Cloudflare webhook URL
  • ch_base_url: your Content Hub instance base URL
  • ch_client_id: your Content Hub instance client identifier
  • ch_client_secret: your Content Hub instance client secret
  • ch_create_publiclinks_script_id: your Content Hub action script identifier for creating public links
  • ch_get_data_script_id: your Content Hub action script identifier for getting data
  • ch_password: your Content Hub integration user password
  • ch_username: your Content Hub integration user username
  • funcapp_api_key: your custom Func app API key configured within your Content Hub integration

Next steps

In this blog post, we have explored how to configure your Function app application settings to allow you to run and debug it in your local development environment. We also looked at configuring them for your published Func app in the Microsoft Azure portal.

Feel free to watch the rest of my YouTube playlist where I am demonstrating the end-to-end custom connector in action. Stay tuned.

Content Hub gems: Leveraging action scripts to aggregate CMP content and your linked assets – part 2

Introduction

We previously looked at how to leverage action scripts to simplify access to content and linked assets with a single web API call. In this blog post, we follow up with a deep dive into the code and logic within the action script itself.

The script

The first part of the script is shown below.

  1. Lines 1 to 5 contain all the required libraries used in the script.
  2. Lines 7 & 8 contain the logic for extracting the id of the content item for which we are gathering data in this script. Data from the web API request is provided in Context.Data, which is a JToken. The script expects it to be a JObject containing a contentId property.
  3. Lines 10 to 14 contain the logic for checking whether the content id could not be extracted from the data, in which case an HTTP status code 400 (bad request) response together with a user-friendly error message is returned. This is done with the help of the helper function SetError, as shown below (see also the sketch after this list).
  4. Lines 16 to 19 contain the EntityLoadConfiguration we are going to use when loading the assets linked to our content item. Only the Filename, Title and Description properties, as well as the AssetToSubtitleAsset relation, will be loaded in our use case.
  5. Lines 21 to 24 similarly contain the EntityLoadConfiguration we are going to use when loading the content item (our blog post content type). Blog_Title, Blog_Quote and Blog_Body, as well as the CmpContentToMasterLinkedAsset & CmpContentToLinkedAsset relations, will be loaded here. The CmpContentToMasterLinkedAsset relation holds the link to the master image associated with this item. The CmpContentToLinkedAsset relation holds the assets linked with this item, such as the video asset.
  6. Lines 26 to 31 contain the logic for loading the content (blog post), by leveraging the MClient.Entities.GetAsync function and specifying the content id and the EntityLoadConfiguration already defined above. We have a check on line 27 for whether the content entity was actually found, and return an HTTP status code 400 (bad request) response together with a user-friendly error message when none was found.
  7. Lines 33 to 37 start preparing the output object for our script. We create a new JObject with the shown properties and add the values of the Blog_Title, Blog_Quote and Blog_Body properties. We are going to add more properties as we walk through the rest of the script.
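
Since the original listing was a screenshot, here is a minimal sketch of this first part under the same assumptions (SetError is the post's own helper and its signature is assumed; the output property names are illustrative, while the entity property and relation names are the ones listed above):

```csharp
using Newtonsoft.Json.Linq;

// Extract the content item id from the web API request body (Context.Data)
var data = Context.Data as JObject;
var contentId = data?["contentId"]?.Value<long?>();

if (contentId == null)
{
    // SetError is the helper from this post: returns HTTP 400 with a message
    SetError(400, "A 'contentId' property is required.");
    return;
}

// Load configuration for linked assets: only the properties/relation we need
var assetLoadConfig = new EntityLoadConfiguration(
    CultureLoadOption.Default,
    new PropertyLoadOption("Filename", "Title", "Description"),
    new RelationLoadOption("AssetToSubtitleAsset"));

// Load configuration for the content item (our blog post content type)
var contentLoadConfig = new EntityLoadConfiguration(
    CultureLoadOption.Default,
    new PropertyLoadOption("Blog_Title", "Blog_Quote", "Blog_Body"),
    new RelationLoadOption("CmpContentToMasterLinkedAsset", "CmpContentToLinkedAsset"));

// Load the content entity and fail fast if it cannot be found
var content = await MClient.Entities.GetAsync(contentId.Value, contentLoadConfig).ConfigureAwait(false);
if (content == null)
{
    SetError(400, $"No content found for id '{contentId}'.");
    return;
}

// Start preparing the output object
var result = new JObject
{
    ["title"] = content.GetPropertyValue<string>("Blog_Title"),
    ["quote"] = content.GetPropertyValue<string>("Blog_Quote"),
    ["body"] = content.GetPropertyValue<string>("Blog_Body")
};
```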

Second part of the script

The code listing from line 39 through to 83 contains the logic for loading the video asset linked with this content item.

  1. Line 39 gets the list of video asset ids using a helper function GetLinkedVideos, shown below (see the sketch after this list). This function makes use of a LINQ query, which filters down to only entities of type M.Asset that are linked to the current content id (as the parent in the CmpContentToLinkedAsset relation). In my use case, I have used the file extension .mp4 to identify a video asset (but you could use any other property or combination of properties to meet your specific use cases).
  2. Line 40 checks whether GetLinkedVideos found any video ids, in which case the rest of the logic will try to process them.
  3. Line 42 extracts the first video id that was found. I have used the MClient.Logger.Info method to log user-friendly messages that helped me show which video ids were found. These messages appear in the Action Script’s View Logs window.
  4. Lines 45 to 46 contain the logic for loading the video asset entity, by leveraging the MClient.Entities.GetAsync function and specifying the video asset id and the EntityLoadConfiguration already defined in the first part of the script. Line 46 checks to ensure the video asset was found before we do further processing.
  5. Lines 48 and 49 contain the logic for getting the video asset public link, which is required as part of the output from the script. On line 48, I am leveraging the GetPublicLinks function, which I have defined as shown below. I am interested in the original rendition of my video asset. Please note that if the video asset does not have an original public link generated, nothing will be retrieved.
  6. That is why the code on line 49 further makes use of a function named GetFirstPublicLinkUrl, which helps load the M.PublicLink entity and inspect the RelativeUrl property, as shown below.
  7. Lines 50 to 55: we now create a new JObject with the shown properties, as expected in the output of the script. This object is added to the videoAsset section of our main result object.
  8. Line 57 contains the logic for getting the video asset subtitles. The AssetToSubtitleAsset relation is an IParentToManyChildrenRelation, so we get hold of the subtitles using the Children property of this relation. In essence, a video subtitle is an asset in its own right, so the code listing from line 59 loads each subtitle asset; we are interested in the Title property as well as the public link (original rendition). This is similar to how we got the public link for the video asset itself. We add each of these properties to a JArray, which in turn is added to the result.
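
A sketch of the GetLinkedVideos helper, assuming the standard Stylelabs querying DSL and reusing the assetLoadConfig defined in the first part of the script (the .mp4 check mirrors my use case; the Filename property name is the one listed earlier):

```csharp
async Task<List<long>> GetLinkedVideos(long contentId)
{
    // Only M.Asset entities that are children of our content item
    var query = Query.CreateQuery(entities =>
        from e in entities
        where e.DefinitionName == "M.Asset"
              && e.Parent("CmpContentToLinkedAsset") == contentId
        select e);

    var videoIds = new List<long>();
    var iterator = MClient.Querying.CreateEntityIterator(query, assetLoadConfig);

    while (await iterator.MoveNextAsync().ConfigureAwait(false))
    {
        foreach (var asset in iterator.Current.Items)
        {
            // Identify videos by file extension (any property test would do)
            var filename = asset.GetPropertyValue<string>("Filename");
            if (filename != null && filename.EndsWith(".mp4", StringComparison.OrdinalIgnoreCase))
            {
                MClient.Logger.Info($"Found linked video asset id: {asset.Id}");
                videoIds.Add(asset.Id.Value);
            }
        }
    }

    return videoIds;
}
```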

Part three of the script

In the last part of the script, we also get the master asset linked to our content item. In this case, we are interested in the asset public link, asset file name, Blog_Title and Blog_Body properties. We create a new JObject with the shown properties as expected and add it to the result object.

Line 103 stores our result object that we have been preparing onto the Context. This tells the script to return it to the user.

Final script output

The script output is similar to the one shown below.

This completes the code listing for this script.

Next steps

In this blog post, I have looked at the second part of the Content Hub Action Scripts for web API use cases. We have taken a deep dive into the source code for the script, covering the various components and how they work together. For a practical use case, look at my blog post on how I created a custom connector for publishing video assets from Content Hub into Cloudflare Stream.

Stay tuned and leave us any feedback or comments.