Friday, December 15, 2017

An Approach to Blue/Green Sitecore Deployments

You may have heard someone speak of blue-green deployments, and you may have found yourself wondering what those colors have to do with software development. Simply put, blue-green deployments reduce downtime and risk when deploying changes. The central idea is to have two identical production environments labeled Blue and Green. At any given time one of the environments is live and the other is not. New code is always pushed to the non-live environment, where it can be tested. Once validated, web traffic is pointed to the freshly updated environment. The other environment no longer serves traffic and remains available as a rollback target in the unlikely event an issue surfaces in the newly live environment.

Obviously, zero-downtime deployments are very valuable to enterprise clients. Since blue-green deployments minimize downtime (perhaps even to zero) it behooves us as Sitecore architects to utilize the strategy. Of course, if you have any familiarity with the complexity of a Sitecore environment, blue-green deployments may seem like an unattainable goal.

My motivation for this blog post is to sketch out an approach for blue-green deployments with Sitecore. I'd like to think through the problem and demonstrate it is possible insofar as we can trust a thought-experiment.

The Challenge

The primary problem posed by Sitecore with blue-green deployments is the database layer. Since the database is a shared resource amongst all Sitecore servers in an environment, any change there can affect the entire environment. Additionally, since we are dealing with a CMS, we must expect that authors regularly introduce changes to the database.

This database challenge is exacerbated by two factors:
  1. We cannot control the schema. Sitecore must own that.
  2. We must actually think about two database layers (SQL and Mongo), some of whose databases are inter-related, as well as their attendant search indexes.

Point #1 should be self-evident. Sitecore's API encapsulates the database layer. Any changes to the database must be done through the API; this is de rigueur for any CMS and Sitecore is no exception.

Point #2 probably requires a little more explanation. The databases in Mongo function as a very large "net" that captures all interaction data with visitors to the site. Mongo is organized in such a way to make writes very fast. The Reporting database in SQL represents data from Mongo that has been reorganized to support efficient reads so that report performance is optimized. The Analytics index lets us query Mongo data from the API efficiently. Thus, Mongo is the source of truth for visitor interaction data and the Reporting database and the Analytics index are coupled to it. This means any changes introduced to the data in Mongo must also be represented in SQL and in the Analytics index. We must treat those three systems as a unit.

A Solution

Clearly, we must mitigate the problems posed by the database layers. I believe we can, but let's first pose a few assumptions:
  1. Content Authors will be inactive during deployments.
  2. We cannot use InProc mode for session state on CD servers.
  3. Sticky sessions (server affinity) should be disabled.
  4. The load balancer supports configuration changes through scripting.
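Assumption #4 is what ultimately makes the traffic switch scriptable. A minimal sketch, assuming a load balancer that exposes a REST management API -- the endpoint, payload shape, and credential handling below are entirely hypothetical placeholders for whatever API your device actually exposes:

```powershell
# Hypothetical load balancer API: flip the live pool from green to blue
$lbCredential = Get-Credential
$lbApi = ""   # placeholder endpoint
$body  = @{ activePool = "blue" } | ConvertTo-Json

# PUT the new pool assignment; your real device will differ
Invoke-RestMethod -Uri $lbApi -Method Put -Body $body -ContentType "application/json" -Credential $lbCredential
```

The point is only that the cutover must be a repeatable, scripted operation rather than a manual console change.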

Step 0: Initial State

Now to the fun part! Before the deployment begins we imagine blue and green PROD environments where the green environment is live and one version ahead of the blue environment. For this initial state, I've greatly simplified the server topology. I'm representing all of the Sitecore servers with a single app server and all of the databases as a single database. I've also eliminated search indexes.

Step 1: Synchronize Content

The first step in the deployment process is to synchronize all data managed by content authors in Sitecore from the live (green) environment to the blue environment. Typical examples of this sort of data would be the site pages and data items which we would expect to be descendants of /sitecore/content and media assets in the Media Library. One possible mechanism for automating the synchronization process is to use Razl. Here's a nice YouTube video demonstrating the capability. It's important we synchronize content prior to deploying new code so that our deployment is working against the same Sitecore data as it would in a 'conventional' deployment to PROD. Note in the diagram below that SQL data in the blue environment is still version 0. The asterisk represents the addition of managed content synchronized from the green environment.

Step 2: Deploy New Version

The next step is to deploy our changes to the blue environment. I've represented this work as being performed by Octopus Deploy. It is my preferred tool for this job, but it's not the only one. Octopus manages making changes to all Sitecore servers (files placed on the file system and Sitecore items written to SQL) as well as ensuring we publish and rebuild indexes as required. When this step completes, the blue environment will be one version ahead of the green environment (while retaining the synchronized content from green).

Update: There is nothing, I believe, about this strategy that requires indexes to be rebuilt. There could be something about your particular solution that needs an index to be rebuilt. If so, this is the correct step to perform that work.

Step 3: Testing

At this point, the blue environment is ready for testing. The specific forms of testing are wide-open: starting with someone doing basic smoke-testing all the way through a full suite of automated tests.

Step 4: Change Connection Strings and Analytics Index

Now comes the hard part: dealing with databases. Fear not, however, we have a plan. Recall from step 1 that we've already dealt with any differences between blue and green due to changes made by content authors (we also assume a content freeze during deployment). In step 2 we deployed the latest solution as well as published and rebuilt indexes if required. Therefore, the last hurdle is analytics and session data.

First, let's take a moment to think through analytics data. It all starts with the four Mongo databases. From the Mongo databases we have the derived data in the SQL Reporting database and the Analytics search index. So, really, we need to think of these three subsystems as a unit. Another important consideration is that we never want to mix the live and non-live analytics data. For example, we don't want to dirty visitor interaction data with clicks generated by smoke-tests performed during a deployment. We also need to be careful with session data. Live users will be transitioned from the green environment to the blue environment. All of their serialized session data must remain valid and coherent. How do we do this?

We change the connection strings for the Mongo, Reporting, and Session databases for all Sitecore servers in the blue environment. We also modify the analytics index config to use the 'analytics_live' Solr core which is replicated from a corresponding 'analytics_live' Solr core in the green environment. In this way, we guarantee that all analytics and session data for a blue Sitecore server corresponds to live data.
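In practice, "changing the connection strings" is an edit to App_Config\ConnectionStrings.config on each blue server. A simplified sketch, assuming stock Sitecore 8-era connection string names -- the server names, database names, and the shared-session entry are illustrative placeholders:

```xml
<connectionStrings>
  <!-- Mongo: point at the live analytics databases -->
  <add name="analytics" connectionString="mongodb://mongo-live:27017/analytics" />
  <add name="tracking.live" connectionString="mongodb://mongo-live:27017/tracking_live" />
  <add name="tracking.contact" connectionString="mongodb://mongo-live:27017/tracking_contact" />
  <!-- SQL: point at the live Reporting and shared Session databases -->
  <add name="reporting" connectionString="Data Source=sql-live;Initial Catalog=Sitecore_Reporting;Integrated Security=True" />
  <add name="session" connectionString="Data Source=sql-live;Initial Catalog=Sitecore_Sessions;Integrated Security=True" />
</connectionStrings>
```

Because this is a plain config edit, it is also the natural unit to automate in the deployment tooling.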

Step 5: Make Blue the Live Environment

We are now ready to redirect incoming traffic from the Internet to the blue environment, thus making it the live environment. At the same time, we also need to reverse the direction of Solr replication so that the blue analytics_live index is now the master index and green's version is the slave. Since we assume there are no sticky sessions, the traffic switch should be nearly instantaneous.

Step 6: Finish Retiring the Green Environment

Finally, we must finish retiring the green environment from being live. This step is really just the inverse of step 4. That is, we change the connection strings for the Mongo, Reporting, and Session databases for all Sitecore servers in the green environment. Again, we also modify the analytics index config. This time, however, we wire up the analytics index in Sitecore to use the 'analytics' Solr core rather than the 'analytics_live' Solr core.
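For reference, the core swap in steps 4 and 6 amounts to changing a single parameter on the analytics index definition. A simplified sketch, assuming the stock Sitecore 8 Solr provider configuration (most of the index element is omitted here):

```xml
<index id="sitecore_analytics_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
  <param desc="name">$(id)</param>
  <!-- the non-live environment uses the 'analytics' core; the live environment uses 'analytics_live' -->
  <param desc="core">analytics</param>
</index>
```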


While I separated steps 4-6 above into discrete steps to help illustrate the idea, as a practical matter we should think of them as a single continuous step. Even better, if we automate 4-6 as a single operation, the process of rolling back from blue to green becomes much easier.

Imagine you finish step 6 and proudly watch traffic seamlessly flow to the blue servers, only to discover that, despite all the testing in step 3, an issue pops up related to the code just deployed. You could roll back to the green environment by inverting steps 4-6. That's a simple proposition if you took the time to automate 4-6 as a single operation.

Sunday, April 17, 2016

Sitecore Installer Re-visited

Hard to believe, but it was close to a year ago that I released my Sitecore PowerShell Installer script. Since then I've continued to refine and improve upon it. Now seems like as good a time as any to take stock of the script.


Installs 8.1 or 8.0

I've added a version checker in the script that uses Reflection to examine the version of the Sitecore.Kernel.dll assembly. This check then forces some behavioral differences during the script's runtime.
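A sketch of the sort of check I mean -- the install path is an illustrative placeholder:

```powershell
# Load Sitecore.Kernel.dll via Reflection and branch on its version
$kernelPath = "C:\inetpub\wwwroot\MySite\Website\bin\Sitecore.Kernel.dll"  # illustrative path
$kernelVersion = [System.Reflection.Assembly]::LoadFile($kernelPath).GetName().Version

if ($kernelVersion.Major -eq 8 -and $kernelVersion.Minor -ge 1) {
    Write-Host "Applying Sitecore 8.1 install behavior"
}
```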

Robust Pre-Install Sanity Checking

Roughly 1/4 of the script is dedicated to validating configuration input and environmental health before any changes are made to the server. If I had a time machine, I would go back in time to the day I started the script and add an install counter. It's hard to pin down an accurate estimate of how many times I've used the script to install Sitecore, but I know I would need at least three digits! :)

More Sitecore Roles

Originally, the script was intended for CM/CD installs. Since then I have added more specialized roles like a Preview CD and a Publishing server.

Optionally Disable All Database Operations

Do you have to deal with intransigent DBAs? If, for some reason, you simply cannot allow the script to connect to SQL and write changes, then you can still use the script to automate everything else.

Full Compliance with MongoDB's Connection String Specification

If Mongo allows for it, then so does the script. You can review the specification here.

Multiple Bindings in IIS

This is useful if you are running a multi-site instance or use load-balanced servers and wish to provide dedicated host names/IPs in addition to the load-balanced binding.

Change Default Admin Password

I worked very hard to ensure that servers were properly hardened during install. Yet, somehow, I overlooked this basic but necessary change! Of course you could make this change manually, but if you wish to automate the installation of entire server environments this is a simple but important addition to the script.

Role-Based Config File Examples

The installer script's run-time is governed by a .config file that contains all the information needed to install Sitecore. To help people get started using the script, I've created a number of example configuration files according to Sitecore role.

Philosophy of Use

My general approach to managing and customizing Sitecore is through package deployment -- typically .update packages. This, in turn, informs how I think about installing Sitecore. As a general rule of thumb, I want the installer to make any system change that I cannot (easily) do through an .update package. I want to configure the server into a standard role that is suitable for smoke testing, but I don't want to turn an installer into an application management solution. This is the same basic position I've maintained since I wrote the original version.

What does this mean in a practical sense? As much as possible, I try to avoid modifying stock .config files. There are a handful of exceptions, some of which I wouldn't have predicted back when I wrote my original blog post. Here is the list of files I do modify, depending upon the specifics of the install:

  • web.config
  • DataFolder.config
  • Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config.example
  • ScalabilitySettings.config
  • Sitecore.Publishing.Parallel.config

In addition, I will enable/disable any number of files depending upon the server role specified. For example, a CD server will have the SwitchMasterToWeb.config file enabled. Furthermore, I will place that file in an appropriate folder (e.g. zzzMustBeLast or Z.SwitchMasterToWeb).

So what are some examples of what I won't do? My script will not 'turn on' Solr. Doing so depends upon having a Solr environment ready to go. (I have, however, provided a script to switch between search providers.) I won't create new indexes or new indexing strategies if you provision a Preview CD server, nor will I ensure your indexing strategies are appropriately configured on a CD vs. a CM server. In my opinion, that is the job of some other process such as deploying an .update package that contains the appropriate .config file patches. The job of the install script is to ensure that each server is in a state ready to receive those packages and to eliminate the need to make any system changes outside the domain of an .update package.

Installer Configuration Tour

Future Enhancements

One idea I have is to isolate all Sitecore config changes made by the installer to a single patch file. While the list of files that I make changes to today is fairly small, it would be an improvement if I made no modifications at all.

How else could I improve the script? Actually, I'd like to ask you that, dear reader. What would you like to see done, if anything?

Monday, August 10, 2015

Install Sitecore.Ship From NuGet with PowerShell

Download from GitHub.

Now that I have a solution for quickly installing Sitecore in higher environments, I want a way to automate .update package installs. There are a number of clever solutions out there to assist with this. My choice is Sitecore.Ship. The biggest advantage Sitecore.Ship has over other solutions I have seen is the ability to perform remote installations, including sending the package over the wire. Great stuff!

The stumbling block for me, and possibly others, is that Sitecore.Ship isn't very easy to install. The typical technique is to use NuGet to introduce Sitecore.Ship into Visual Studio, and include it in your custom solution's deployment workstream. But...this is a chicken-and-egg problem for me. I want Sitecore.Ship installed from day one, because it is a building block of my deployment workstream not my custom solution!

PowerShell To The Rescue

Given my experience using PowerShell to install Sitecore, my thought process was, "If PowerShell can install Sitecore, surely it can install Sitecore.Ship." (Yes, it surely can.) I've created a script that will consume one or more NuGet feeds to:

  1. Download Sitecore.Ship
  2. Recursively download dependent packages
  3. Create a web.config transform from Sitecore.Ship's NuGet packages and your chosen options
  4. Install assemblies and apply config files
  5. Write to an (optional) log file
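If you'd rather not read the script to see how steps 1 and 2 work, their effect is roughly what nuget.exe produces with the install command -- an illustrative sketch, with a placeholder feed URL:

```powershell
# Download Sitecore.Ship and, recursively, its dependency graph into .\packages
& .\nuget.exe install Sitecore.Ship -Source "" -OutputDirectory .\packages
```

The script itself talks to the feeds directly from PowerShell, which is what makes the authentication and multi-feed behavior described below possible.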

NuGet And You

As of this writing, if you want to use Sitecore.Ship with Sitecore 8 then you need version 0.4.0 of the Sitecore.Ship NuGet package. Unfortunately, this package isn't yet available on the public NuGet feed. I downloaded the development branch of Sitecore.Ship from GitHub and built a NuGet package with Sitecore 8 update 4 assemblies. I published this NuGet package to Arke's private feed, so if you are an Arke employee, lucky you! If not, I've included the package in my GitHub repository to save you, dear reader, the trouble of generating the NuGet package. In any event, I expect that a public release of Sitecore.Ship 0.4.0+ will be available soon.

The script supports basic authentication should you choose to host Sitecore.Ship or any dependent packages on a private feed. The script also supports search across multiple feeds. Thus, you could host a private build of Sitecore.Ship on a private feed, but pull dependent packages from public feed(s).

Version Testing

I've tested the install script against all versions of Sitecore 8 (initial release through update 4.) I've not tried the script with earlier versions of Sitecore, though I suspect it would mostly work. Depending upon demand or my own needs I may extend it to support earlier versions of Sitecore/Sitecore.Ship.

Happy deployments!

Wednesday, July 8, 2015

Basic Tips to Prevent Solr Downtime

If you've followed my series on installing Solr for Sitecore then you should have a shiny, new Solr instance somewhere in your environment happily indexing Sitecore data and returning results to queries. Hopefully, that never changes, but we all know that hiccups can happen. This post suggests a few things you can do to mitigate or prevent down-time.


Configure Logging

If you find yourself troubleshooting, you'll be very glad to have Solr-specific logs to refer to. Given how easy this is to configure, you owe it to yourself to do so. Assuming you have the downloaded .zip from Solr:
  1. Copy the JAR files from solr/example/lib/ext to Tomcat's lib/ folder.
  2. Copy the file from solr/example/resources to Tomcat's lib/ folder.
All done! You will find your new Solr logs in the install path of Tomcat in the logs/ folder.
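Scripted, the two copy steps look something like this -- the Solr and Tomcat paths are illustrative:

```powershell
# Copy Solr's logging JARs and its logging config into Tomcat's lib folder (paths are illustrative)
Copy-Item "C:\solr-4.10.4\example\lib\ext\*.jar" -Destination "C:\Tomcat8\lib"
Copy-Item "C:\solr-4.10.4\example\resources\" -Destination "C:\Tomcat8\lib"
```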


Manage Your RAM

When dealing with Solr there are two kinds of RAM to consider. One is the amount of RAM dedicated to the Java heap and the second is the OS disk cache. While I can't give specific guidance on how much RAM you should devote and where, I will provide some general advice and guidance.

Java Heap

Setting the Java heap size is a pretty straightforward matter once you understand the implementation details of Tomcat for your machine. Mainly, this means knowing which version of Tomcat you are running and which OS you use. I'll be covering Tomcat 8 as a Windows service. If you differ from me in one or more regards, don't despair. Most of what I say still applies; you'll probably just need to look a little to find the equivalent spots to make your setting changes.

First, let's review the four different memory management parameters you may control:
  • Xms - The minimum (initial) size of the heap
  • Xmx - The maximum heap size
  • XX:PermSize - The initial size of the permanent generation
  • XX:MaxPermSize - The maximum size to which the permanent generation may grow

Most likely, you won't need to worry about XX:PermSize and XX:MaxPermSize unless you see errors like java.lang.OutOfMemoryError: PermGen space. Much more likely, you will want to control the bounds on your already-running heap through Xms and Xmx. If you are running Tomcat as a Windows service then this is as simple as filling in a text box. For example:
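Concretely, on the Java tab of the Tomcat service configuration dialog (tomcat8w.exe), the initial and maximum memory pool text boxes map to Xms and Xmx, while flags such as XX:PermSize go in the Java Options box:

```
Initial memory pool: 256          (equivalent to -Xms256m)
Maximum memory pool: 512          (equivalent to -Xmx512m)
Java Options:        -XX:PermSize=128m
```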

This is the equivalent of setting -Xms256m and -Xmx512m. Additionally, I elected to specify -XX:PermSize as 128MB.

As a final note on heap size, be aware that for heap sizes greater than 2GB, garbage collection can cause performance problems. Symptoms are occasional pauses in program execution during a full GC. This can be mitigated through GC tuning of your JVM or electing to use a commercial JVM.

Disk Cache

For disk cache, you would ideally have enough RAM to hold the entire index in memory. Whatever memory remains unused once the OS, running programs, and the Java heap have been satisfied is fair game for disk cache. Thus, if 12GB of RAM is unused, you could potentially fit 12GB of index data into memory before the OS is forced to start paging. In practice, you must use trial and error to find the right memory fit for your data and usage patterns.

Secondary Cores

Given that you have elected to use Solr, you probably treat search as a first-class citizen in your environment. If you aren't using secondary cores to provide data continuity during an index rebuild, you're simply doing it wrong. It helps that the process for configuring secondary cores is easy to follow.

Note: every time a rebuild occurs, the name values in the files for the two related cores will swap. This is normal behavior, of course, but can be horribly confusing if you aren't aware of it. I.e., don't just assume that the name of the core you are viewing matches the core's folder name in your Solr home directory!


Replication

This topic is actually quite broad and probably deserves a blog post (or several) of its own. Nevertheless, we can at least imagine the base case of wishing to provision a second Solr instance that is slaved to a master instance. Fail-over will not be automatic, although you could script it.
  1. Modify the file in your cores to set whether the core is a master or a slave.
    • On Master
      • enable.master=true
      • enable.slave=false
    • On Slave
      • enable.master=false
      • enable.slave=true
  2. Modify the conf/solrconfig.xml file in each core to include a request handler for replication. Below is a snippet of XML you can use. Simply replace "remote_host" and "core_name" in the snippet with your environment's values. Note: the way I have constructed this snippet means you can apply it "as is" to any core on your master OR your slave instance. The trick I used was to associate the state of the "enable" property for the master and slave elements with the values of the enable.master and enable.slave properties from the core's file, which you should have set in step 1. This makes your bookkeeping duties a little less painful, especially if you ever find yourself swapping the master and slave around.
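Here is the sort of snippet I have in mind -- a sketch based on the standard Solr 4.x replication handler, using the enable.master/enable.slave property trick described above:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- active only when is true -->
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
  <lst name="slave">
    <!-- active only when is true -->
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://remote_host:8983/solr/core_name</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```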
What to do in the event your master goes down? Edit the master/slave properties in the file and change the ServiceBaseAddress used by Solr in the Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config file. You should also (as soon as time allows) edit the replication handler XML appropriately: either change the URL or comment it out entirely.

Further Reading

Monday, July 6, 2015

PowerShell Sitecore Install Script

Update (4-17-2016): Here I discuss enhancements I've made to the script since last year as well as a video tour of the config file used by the script.

Download from GitHub.

I've been searching for a solution to automate Sitecore installations in any environment higher than my personal development VM--for that we already have SIM. I can be stubborn and exacting, sometimes to a fault, and while a manual install affords me complete control over an environment, it is also horribly time-consuming. Also, if I'm being honest with myself (I'm sure this is plainly obvious to you), this process is mistake-prone.

Search for a Solution

The following are the three most prominent examples of existing solutions I looked at, but I looked at many more.

Sitecore Instance Manager

I've been using SIM for a while now to manage my Sitecore instances on my development box. It's a wonderful solution, but it's not suitable for a production environment without a lot of post-install intervention. I also looked at the console app for SIM. Alas, while it seems to extend SIM to the command-line, it does not allow for greater flexibility in how SIM installs a Sitecore instance.

Sitecore's Installer

Jeremy Davis had the very clever idea of deconstructing Sitecore's .exe installer to get at the underlying .msi file. He successfully identified all of the command-line switches the .msi accepts. I very nearly settled on this approach. After all, one would expect Sitecore's own installer to follow the installation guide's recommendations. It does a better job than SIM, but you are also rather constrained in some of your options, and that was a deal breaker for me.

PowerShell Script

All-star Alex Shyba wrote a PowerShell script to automate his installs. His use case is the same as mine for SIM, however: he built the script to install development instances. Like the previous solutions discussed, concerns such as file system permissions, user mappings in SQL, using a domain account for the application pool identity, and CD-hardening are left as post-install exercises for a human.


Alex's script gave me the push I needed to write my own. My goal is to completely automate a production-ready Sitecore CM or CD server install. Once you run my script the only thing left to do is install your desired modules. Actually, let's take a minute to unpack that, because buried inside that sentence is a subtle point about my deployment philosophy, and it impacts the way I designed my Sitecore installer. I believe that any Sitecore change that can be managed through a .update package (or .zip module) should be. For me, this includes managing changes like SwitchMasterToWeb, scalability, and web.config amongst many others. Thus, my over-arching design philosophy for an automated install is to do everything I would normally do in SQL, IIS, the file system, and (yes...) the .config files, but only enough to create a working instance and no more. Once the installer is done, the instance should be 100% ready for management via .update packages. That is my goal.

The Solution

I decided to make my script available on GitHub for a couple reasons.
  • I suspect and hope others will want to make use of it
  • Community feedback will help me improve it

Major Features

  • Install Sitecore with or without the databases.
  • Script sanity-checks SQL and validates all configuration input prior to making any changes
  • Write output to the screen and to a log file.
  • Fine-grained control of the application pool identity (built-in or domain account)
  • Assign recommended file system permissions on web server.
  • Add application pool identity to recommended local groups on web server.
  • Create user mappings for login in SQL.
  • Install database files on any valid path or UNC
  • SQL Login used during install doesn't have to be the same account executing the script.
  • May specify a host name and port used for MongoDB
  • May supply a Solr base address
  • Choose to use SQL as a session state server
  • Many CD-hardening options

One limitation of the script today is that I do not support choosing MongoDB as a session state server. My suspicion is that this would be a very easy change to make, and I will be including it soon. The script is strictly limited to automating the Sitecore install itself, not MongoDB or Solr. While it's not necessary, it would be a good idea to provision those applications first if you plan to use them. Speaking of Solr, if you do plan to use it, then be sure to check out my other PowerShell script to change the search provider from Lucene to Solr.

Finally, I built this script to install Sitecore 8.0. I've briefly tested it with Sitecore 7.5 and it mostly works, but it breaks on some assumptions about the existence of .config files like SwitchMasterToWeb.config.example and Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config.example. Even earlier versions of Sitecore would need some more adjustments (example: dealing with differences in databases). Depending upon the level of interest expressed I will consider making the script compatible with prior Sitecore releases.

Tuesday, June 30, 2015

Unit Testing in Sitecore

Unit testing seems to be one of those topics that everyone generally agrees is a Good Idea, but when no one is looking it becomes fantastically easy to justify to oneself why for this project at this particular time it's ok to forgo unit tests.

My purpose here isn't to convince (guilt) you into doing unit tests. Rather, I'd like to demonstrate that actually implementing them can be trivially easy. Now, of course, anyone can write a unit test that is easy, but does it add value? Well, that'll depend upon you, of course. My advice is to target the hot spots in your code first, and anytime you encounter a bug, write a unit test that validates the bug is resolved. Aiming for 100% code coverage is admirable, but don't let the perfect be the enemy of the good. Ok—enough philosophizing!


First a word about tools. I favor Glass (version 3 for the purpose of this blog post) as my Sitecore ORM; this will have some ramifications for unit tests. As far as a unit testing framework, I have settled upon xUnit. Is this terribly important? Not especially. NUnit is quite popular and powerful. The designers of NUnit actually wrote xUnit. You can view their reasons for doing so here. Regardless of which framework you select, most of what I document here will still apply; the main difference is the syntax. For mocking, I have come to really like Sitecore.FakeDb. It's an amazing product. Add a NuGet package to your project and you suddenly have the power to run all of Sitecore's API without the need of a website. There are other Sitecore mocking tools out there, but I highly recommend this one. Finally, for a bit of syntactic sugar I suggest Fluent Assertions. Again, simply add the NuGet to your project and your assertion statements will read very nearly like English.

Some Lessons Learned

  • I don't need FakeDb for data mocking so long as I am working solely with Glass objects. Since every Glass-mapped class is an implementation of an interface, my mock test data can also simply implement that interface.
  • FakeDb is convenient, nonetheless, because it allows Glass's CreateFakeItem method to 'just work' without having to go the extra length of providing a testing database along with hand-(re)creating all of the config settings necessary to connect Sitecore's API layer to a data provider. This means that for very simple API calls you could avoid mocking with FakeDb and do it all through Glass. In practice, however, I find FakeDb to be so fast that I wonder whether there is any advantage to this.
  • I can (if I want) cast from a FakeDb item to a Glass object. Nathanael Mann warns against this practice for performance reasons and because it blurs the line between unit testing and integration testing. In fairness, I am over-simplifying his thesis a bit, nonetheless, for me, the raw convenience of being able easily to mock Sitecore data is simply too powerful to ignore. As far as performance goes, once Glass's context has been created (~800ms hit) unit tests involving Glass run in less than 10ms even when casting. 10 seconds per 1000 tests? I can accept that.

Everybody Loves an Example

using FluentAssertions;
using Glass.Mapper.Sc;
using Sitecore.Data;
using Sitecore.FakeDb;
using Xunit;

namespace MyPOCO.Tests.Data.Domain
{
    public class My_POCO_Tests
    {
        public class DoesQueryStringMatchMethod
        {
            // Helper: build a fake template matching the Glass-generated constants
            public static DbTemplate GetMyPOCOTemplate()
            {
                return new DbTemplate(IMy_POCOConstants.TemplateName, IMy_POCOConstants.TemplateId)
                {
                    new DbField(IMy_POCOConstants.Query_String_Value_To_MatchFieldName)
                };
            }

            [Fact]
            public void RequestedUrlHasUpperCaseValues()
            {
                string requestedUrl = "";

                // Arrange: a fake item whose match field holds the requested URL
                using (var db = new Db
                {
                    new DbItem("test", ID.NewID, IMy_POCOConstants.TemplateId)
                    {
                        { IMy_POCOConstants.Query_String_Value_To_MatchFieldName, requestedUrl }
                    }
                })
                {
                    global::Sitecore.Data.Items.Item home = db.GetItem("/sitecore/content/test");
                    My_POCO poco = home.GlassCast<My_POCO>();

                    bool result = poco.DoesQueryStringMatch(requestedUrl);

                    result.Should().BeTrue();
                }
            }
        }
    }
}

So, what does this code demonstrate? It shows a Glass-mapped object that has a method I wish to unit test. The method is named DoesQueryStringMatch. Presumably, the method compares the requested URL against a field value and returns a boolean. Of course, a more realistic and more useful test would be against a method that had a more complicated job to do, but for purposes of illustrating the technique we'll stick with a contrived example.

I could have mocked my Glass object without the need for FakeDb, but I do need to create my Glass context (that's the GlassMapperContext.CreateContext() method call). Once that work is done, however, I am free and clear for all further unit tests where I may have need to test against Sitecore items—example: an action for the rules engine. I leverage the fact that Glass maintains constants about my template and its fields. Necessary? No, but very useful if I have to create an item with a complicated structure. Creating the Glass context is simple enough. I just replicate the code called in the Start() method of GlassMapperSc.cs.

Friday, May 22, 2015

MSBuild and TDS

I recently had a need to get continuous integration working for a project that uses Team Development for Sitecore (TDS)—most of us do, right? :) While some of this blog post will deal specifically with creating a build definition in Team Foundation Server (TFS), the vast majority of this article applies to any software that uses MSBuild under the hood: TeamCity, Jenkins, CruiseControl, etc.
The wrinkle in my requirements was that I could not install TDS on the build server. There's already a very helpful resource on this topic, but I did find I had to do things slightly differently in my environment. I also made it a goal to remove the SlowCheetah dependency (I make use of XML transformations) from my build server. Finally, I ran into a couple of other small roadblocks that I thought I might as well document here while I was at it.


As I said, Mike Edwards has an immensely useful article that shows how to avoid installing TDS on your build server. The only things I will add are where I diverged from his steps. For clarity, Mike added a folder called TDSFiles at the root of his solution. I added a folder called MSBuild Support Files with a child folder called TDS.

  1. The HedgehogDevelopmentSitecoreProject.targets file has many references to HedgehogDevelopment.SitecoreProject.Tasks.dll. For each of these you need to modify the path. In my case, the correct path is no path at all. This was because (I assume) TFS used the working directory of the .targets file itself as a starting location; the .targets file and the DLL live side by side.

  2. In the same .targets file you will also need to modify the paths of the TdsService.asmx and the HedgehogDevelopment.SitecoreProject.Service.dll. Here is a screenshot of my modifications.
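In place of the screenshot, here is an illustrative example of what the edited entries end up looking like. The TaskName value below is a placeholder; the real .targets file declares several tasks, and the point is simply that the AssemblyFile and service-file paths are reduced to bare file names so they resolve relative to the .targets file.

```xml
<!-- Illustrative only: actual TaskName values come from the shipped
     HedgehogDevelopmentSitecoreProject.targets file. The installer's
     absolute paths are replaced with bare file names. -->
<UsingTask TaskName="HedgehogDevelopment.SitecoreProject.Tasks.SomeTask"
           AssemblyFile="HedgehogDevelopment.SitecoreProject.Tasks.dll" />
```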


After corresponding with Hedgehog's Charlie Turano, I decided to eliminate MSBuild's dependency on SlowCheetah. This step is only necessary if you do not supply the DLLs and .targets files to MSBuild. One easy way of doing this is to simply include the "packages" folder from NuGet in source control. This guarantees that MSBuild will be able to make use of the files. In fact, this is how my solution was already set up. Nonetheless, TDS is perfectly capable of doing XML transformations during the build, and I want to be ready should a future release of TDS completely replace SlowCheetah (a possibility, since SlowCheetah's developer has said he will no longer maintain it).

This is very easy to do. Simply comment out the SlowCheetah import line in any .csproj file that uses it.
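The line in question is the SlowCheetah targets import. The exact Project path and property name vary by SlowCheetah and NuGet version, so treat this as a representative example rather than my exact project file:

```xml
<!-- The SlowCheetah targets import to comment out; the exact Project
     path/property varies by SlowCheetah and NuGet version -->
<Import Project="$(SlowCheetahTargets)"
        Condition="Exists('$(SlowCheetahTargets)')"
        Label="SlowCheetah" />
```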

Some Miscellaneous Issues

  1. I encountered another .targets related issue. This time it was:

    The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    What's happening? Inside the .csproj file there is a variable, $(VSToolsPath), getting set that ends up being used by MSBuild to resolve the path of Microsoft.WebApplication.targets. You could modify the .csproj to prevent this behavior, but it's much easier to simply use a command-line switch like so:

    msbuild myproject.csproj /p:VisualStudioVersion=12.0

    If you are using TFS then the fix is just as easy: in your build definition, on the Process tab, set the MSBuild Arguments to the same /p:VisualStudioVersion switch.

  2. I was receiving a post-build error:

    API restriction: The assembly 'file:///D:\Builds\6\XXXXXXXXXX\XXX-TestBuild\Binaries\_PublishedWebsites\TDS.MyProject\bin\MyProject.Tests.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.

    The full explanation of what is happening is here. The resolution is again very simple. In the build definition, make sure the test assembly file specification does not recursively match all test DLLs (for example, match *test*.dll rather than **\*test*.dll).