A Continuous Delivery Implementation for SalesForce

Today, I come to you with a co-author. Since February, I’ve been working with Mark Thias on a Continuous Delivery (CD) consulting engagement.

Introduction

One of our tasks has been implementing a CD pipeline for SalesForce (SF). SalesForce is one of the largest cloud-based Customer Relationship Management (CRM) platforms available. Unfortunately, its current implementation makes it hard to work in a CD fashion, where you start off with code in a development environment and move it forward through successive environments until you reach production.

Problem Statement

Wikipedia defines Continuous Delivery as:

Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.

The main issue with trying to implement CD for SalesForce is that each distinct object has a unique identifier (ID) that can differ from one environment to the next. These unique identifiers are either 15- or 18-character strings, and they’re automatically generated for you by SalesForce.

So let’s say that you’re building a simple CRM database. It has a Customer object, and related to that Customer object is an Address object.

Now imagine that you have a development server where your developers do all of their work, then a test server where the business folks get to check out all of the functionality, and then it goes to production where the customers can use your live application.

In the development environment, your Customer object might have an ID of R000013bab945ca, but once you move your code to your test server, the same object’s ID might change to C0128562vrb86q3, and further, in the production server, the exact same object’s ID then becomes T348920nr62w542.

Normally, that’s fine, and as an SF developer, you never need to care about it… until, that is, you need to write some code inside your Address object that references the Customer object. In that case, your life just became harder. While the Customer object’s ID changes from one environment to the next, your code references do not automatically get updated.

One of the principles of CD is to use the same code in every environment that you deploy to. This helps increase confidence in your ability to rapidly deliver and test the same software as it progresses through each environment. However, when your code has to change between environments, you can never truly have that same level of confidence. For us, even a single character of change is too much.

Originally, the solution implemented at our client was to keep a list of every place where one object referenced another, and every time code was deployed from one environment to the next, someone would manually run through that list and update each of the object ID references.

As you can imagine, this was a long and error-prone process. Sometimes, one of the steps in the process would be skipped. Other times, an object’s ID wouldn’t be updated correctly (cut and paste is trickier than you’d think!). Various other failure modes also crept in over time.

Eventually, this customer decided that the right thing to do was to store multiple copies of the code inside their source control tool: one copy for each environment. That way, when an object’s ID changed between the development and testing environments, they only had to make the object ID change once and commit it. However, this meant that EVERY code change (not just object ID references, but every single change) had to be made and committed once for each environment. Further, it meant that it was no longer possible to deploy the same code to every environment.

Our Solution

The solution that we came up with is to create a set of tokens inside the SF code. After the code containing the tokens is deployed to the SF environment, we run some custom Java code that uses the SF SOAP APIs to look up the objects that contain our tokens, determine which value each token should resolve to, and then replace the token with the proper ID for that object in that environment.

As an example, here is some original code that you might find in an SF object. In this case, the RecordType value is the object ID that is referenced.

if ({!NOT($Customer.Address_1 || $Customer.Address_2)}) {
    alert("{!Customer.No_Address}");
} else {
    top.location.href="/123/e?retURL=%2F123%2Fo&RecordType=R000013bab945ca";
}

Once this ID is replaced with a token, the code that’s now committed to our source code repository looks like this:

if ({!NOT($Customer.Address_1 || $Customer.Address_2)}) {
    alert("{!Customer.No_Address}");
} else {
    top.location.href="/123/e?retURL=%2F123%2Fo&RecordType=$$RECORD_TYPE";
}

We have stored the list of tokens in a CSV file. The custom Java code reads in the CSV file, loops over each line in it, goes out to each of the objects referenced, and replaces the token with the proper ID. There are several categories of objects, and each category requires a different SF SOAP API lookup to get the proper ID. When a new category of object is identified, it only takes a little bit of time to add the new type to our CSV file, determine which SF SOAP API calls need to be made, and add the code to the custom Java tool.
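To make the mechanics concrete, here is a minimal sketch of how such a tool could be structured. The CSV layout (token, category, lookup name), the second $$QUEUE_ID example row, the component name, and the helper methods (lookupId, fetchDeployedSource, updateDeployedSource) are illustrative assumptions on our part rather than the client’s actual implementation; in the real tool, those steps are backed by the category-specific SF SOAP API calls described above.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the token-replacement idea (assumed names throughout).
 * Each CSV line maps a token to an object category and a lookup name, e.g.:
 *
 *   $$RECORD_TYPE,RecordType,Customer_Record_Type
 *   $$QUEUE_ID,Queue,Support_Queue
 */
public class TokenReplacer {

    public static void main(String[] args) throws IOException {
        // 1. Read the token list from the CSV file.
        List<String> lines = Files.readAllLines(Paths.get("tokens.csv"));

        // 2. Resolve each token to the ID it should have in *this* environment.
        Map<String, String> tokenToId = new HashMap<>();
        for (String line : lines) {
            String[] cols = line.split(",");
            String token    = cols[0];  // e.g. $$RECORD_TYPE
            String category = cols[1];  // e.g. RecordType
            String name     = cols[2];  // e.g. Customer_Record_Type
            tokenToId.put(token, lookupId(category, name));
        }

        // 3. Pull down the deployed source, substitute every token, push it back.
        String source = fetchDeployedSource("CustomerAddressButton"); // hypothetical component name
        for (Map.Entry<String, String> entry : tokenToId.entrySet()) {
            source = source.replace(entry.getKey(), entry.getValue());
        }
        updateDeployedSource("CustomerAddressButton", source);
    }

    // Placeholder: in the real tool this issues the category-specific
    // SF SOAP API lookup and returns the environment-specific ID.
    private static String lookupId(String category, String name) {
        throw new UnsupportedOperationException(
            "Run the SOAP lookup for category " + category + ", name " + name);
    }

    // Placeholders: in the real tool these call the SF APIs to read and
    // update the deployed component that contains the tokens.
    private static String fetchDeployedSource(String componentName) {
        throw new UnsupportedOperationException("Read " + componentName);
    }

    private static void updateDeployedSource(String componentName, String source) {
        throw new UnsupportedOperationException("Write " + componentName);
    }
}

The important property is that the per-environment knowledge lives in the CSV file and the tool rather than in the code itself, so the repository only ever holds the tokenized version.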

Once we have deployed the SF code containing the tokens, we then run the custom Java tool that invokes the SOAP APIs to replace the tokens in the code. Thus, the code containing $$RECORD_TYPE above will look like this when deployed to the development server:

if ({!NOT($Customer.Address_1 || $Customer.Address_2)}) {
    alert("{!Customer.No_Address}");
} else {
    top.location.href="/123/e?retURL=%2F123%2Fo&RecordType=R000013bab945ca";
}

And it will look like this when deployed to the test server:

if ({!NOT($Customer.Address_1 || $Customer.Address_2)}) {
    alert("{!Customer.No_Address}");
} else {
    top.location.href="/123/e?retURL=%2F123%2Fo&RecordType=C0128562vrb86q3";
}

This solution allows us to maintain only a single copy of the Apex code inside our source code repository, and deploy the same code to every single environment, from development all the way through to production.

Conclusion

We have been using this solution for the past couple of months at the client, and they’re extremely pleased with it. It allows them to have a single codebase that’s deployed to all environments. Deployment defects are down. The time required to stand up an entirely new environment has dropped from 3 days to around 4 hours. And their confidence that when they deploy their SalesForce application to a new environment it will work the first time has skyrocketed.

This, my friends, is what it’s all about.