Managing Business Central Development with Git: Amending History


This is the start of a series of posts about managing AL development with Git. I don’t profess to be a Git expert and much of what I write about will not exclusively apply to Business Central development. This is a collection of approaches I’ve found to be useful when it comes to managing our source code. Take what you will, discard the rest, vociferously argue with me if you feel strongly enough about it.

Preamble over. Let’s get on with it.

(Re)Writing History

My introduction to source control was using TFVC (more here). TFVC is a centralised source control system: when you check code in, your changes are immediately pushed to the server. All the changes that anyone pushes form a nice, neat, straight line. Check-ins are given a changeset number. Those numbers are unique, always increase and can never be changed. History has been written.

Some changesets in the history of a branch in TFVC

Stands to reason. We can’t go back and change the past. But what if we could…?

You can use Git like this if you want. Make a change, commit the change, make a change, commit the change. Keep committing in a straight line and keep your history really simple.

* cd03362 (HEAD -> master) Add missing caption for new field
* 94388de Populate new Customer field OnInsert
* c49b9c9 Add new field to Customer card

Unlike TFVC you have to push those commits to the server before anyone can see them. Do that on a regular basis and make sure your colleagues are pulling your changes before they commit theirs and not much can go wrong.
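That straight-line workflow looks something like this in practice. This is a throwaway sketch in a scratch repository; the file name and contents are invented for illustration:

```shell
# create a throwaway repository to experiment in
git init straight-line-demo
cd straight-line-demo
git config user.email "dev@example.com"
git config user.name "Dev"

# make a change, commit the change - then repeat
echo 'field(50100; "Shoe Size"; Integer)' > Customer.al
git add Customer.al
git commit -m "Add new field to Customer card"

echo 'Caption = Shoe Size' >> Customer.al
git add Customer.al
git commit -m "Add missing caption for new field"

# two commits in a straight line, newest first
git log --oneline
```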

That’s fine as far as it goes, but it’s not particularly elegant. What about when you make another commit correcting a typo in the caption? (Reading the history from bottom to top)

* 1ee22a6 (HEAD -> master) Correct typo in caption
* cd03362 Add missing caption for new field
* 94388de Populate new Customer field OnInsert
* c49b9c9 Add new field to Customer card

Now we’ve got two commits in the history of the project just to add a caption and get the caption correct. With TFVC you’re stuck with it, but with Git, we’ve got complete control over the history of the project.

Tell a Story

Having control over the history of the project ought to make us think differently about it. What is the history for anyway? It’s to help other developers, including our future selves, understand what changes have been made to the code and why they have been made. The best way I’ve heard this described is that we can use the commits to tell the story of the changes that we’ve made.

When you were working on this feature what changes did you make? What code did you add or remove?

The reality might be something like:

  1. Added a field to the customer table
  2. Added an OnInsert trigger to populate the new field
  3. Added a missing caption
  4. Corrected a typo in the caption
  5. Added the field to the customer card
  6. Realised the field was in the wrong group, moved it
  7. Added a missing application area
  8. Realised I should have included a suffix to the field name, renamed the field

Development can be messy. We make mistakes and fix them as we go. But is that history easy to read? Do other developers care about all the steps in the process? Does future you need to be reminded of all those mistakes? No. We can do better than that.


From here on in we’re going to use a terminal – command prompt / bash / PowerShell – to manipulate the history of the repository. Don’t be intimidated – it’s fine with a little practice. I’d recommend a combination of PowerShell and the posh-git module – its tab completion and status in the prompt make life easier.

Incidentally, to show the graphs of the history in this post I’ve used:

git log --graph --oneline --all

i.e. show the log (history) of the branch as a graph with each commit on a single line.

git commit --amend

The first tool we’ve got to put some of this mess right is the --amend switch on the commit command. Perfect for when you realise you’ve made a mistake with the latest commit. You’ve found a typo or forgotten to include some changes that should have been made with it.

Stage the changes that you want to include with the previous commit (using git add or VS Code or some other UI tool). Rather than committing them in the UI, switch to a terminal and type git commit --amend

Amending a commit

Git will open a text file with the commit comment at the top and details of the changes which are included in the commit underneath. Change the commit comment if you want and close the file. You’ll have selected the text editor you want to use when installing Git. If you can’t remember doing that then you’ll find out what you chose now. You can change the editor in Git’s config if you like.
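End to end, amending looks like this. Again a scratch repository with invented file contents; passing -m here avoids opening the editor, purely to keep the example self-contained:

```shell
# a scratch repo with a single commit containing a typo
git init amend-demo
cd amend-demo
git config user.email "dev@example.com"
git config user.name "Dev"
echo 'Caption = Shoo Size' > Customer.al
git add Customer.al
git commit -m "Add missing caption for new field"

# fix the typo and stage the change
echo 'Caption = Shoe Size' > Customer.al
git add Customer.al

# fold the staged fix into the previous commit (-m skips the editor)
git commit --amend -m "Add missing caption for new field"

# still only one commit, but with a new hash and the corrected file
git log --oneline
```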

Congratulations. You just rewrote the history of the repo. You can do that perfectly safely on commits that are only in your local copy of the repository.

Only Share Your Changes When You’re Ready

This is one of the big benefits of a distributed source control system like Git. It’s your copy of the repo. You can do whatever you like to it without affecting anyone else until you are ready. Make mistakes. Muck about with different ideas. Start again. Redesign. Whatever.

When you are happy with the changes that you’ve made and the story that the commits tell – push those changes to the server and let your colleagues get their hands on them.

Different Versions of History

Before going on to describe other methods for manipulating history it is probably responsible to briefly discuss the consequences of rewriting commits that have already been pushed to the server.

If this is a commit that has already been pushed to the server you should know that your history no longer matches the history on the remote.

The graph will end up looking something like this. My local copy of the commit has a different commit hash (c1152b2) to the remote copy (aea8ffa) – usually, but not necessarily, called “origin”. Notice the posh-git prompt indicates this with the up and down arrows. 1 commit ahead of master, 1 commit behind master.

* c1152b2 (HEAD -> master) Correct typo in caption
| * aea8ffa (origin/master) Correct typo in caption
* cd03362 Add missing caption for new field
* 94388de Populate new Customer field OnInsert
* c49b9c9 Add new field to Customer card
C:\Users\james.pearson.TECMAN\Desktop\GitDemo [master ↓1 ↑1]>

While this is the case I won’t be able to push my changes to the remote. This is what happens when I run git push

! [rejected] master -> master (non-fast-forward)
error: failed to push some refs to 'C:\users\james.pearson.TECMAN\Desktop\GitDemo-Origin.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull …') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Updates were rejected. There is a danger that some commits on the server will be lost if my copy of the master branch is pushed as is.

The advice is to pull the commits that are on the server and incorporate them into my local copy before I push my changes again. Usually good advice. Only, in this case I want that change to be lost. The commit that is in the server’s copy but not mine is the commit that I want to overwrite. In which case, I can safely force my changes onto the server with git push -f

Before forcing your changes make sure that you know which changes are going to be lost i.e. everything from the point at which the graph diverges.
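A slightly safer variant than a bare git push -f is --force-with-lease, which refuses to overwrite the remote branch if someone has pushed to it since you last fetched. This isn’t from the post above, just a suggestion worth knowing; here is the diverged-history scenario played out in a scratch setup with a local “remote”:

```shell
# simulate a remote ("origin") and a clone of it
git init --bare origin-demo.git
git clone origin-demo.git local-demo
cd local-demo
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -b master

echo 'Caption = Shoe Size' > Customer.al
git add Customer.al
git commit -m "Correct typo in caption"
git push -u origin master

# amend the pushed commit: local and remote histories now diverge
git commit --amend -m "Correct typo in field caption"

# a plain push is rejected as a non-fast-forward...
git push origin master || echo "push rejected, as expected"

# ...while --force-with-lease overwrites the remote only if it still
# matches what we last saw, so we can't clobber a colleague's
# unseen commits by accident
git push --force-with-lease origin master
```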

If that all sounds a little daunting, don’t do it. Practice amending local commits first and getting them into shape before you push them to the server. Being able to confidently manipulate the history of the repo with a few key commands will prove an invaluable tool in your own work and especially as you collaborate on the same code with others.

Next up, interactive rebasing.

An Approach to Package Management in Dynamics 365 Business Central


We use PowerShell to call the Azure DevOps API and retrieve Build Artefacts from the last successful build of the repository/repositories that we’re dependent on.


Over the last few years I’ve moved into a role where I’m managing a development team more than I’m writing code myself. I’ve spent a lot of that time looking at tools and practices in the broader software development community. After all, whether you’re writing C/AL, AL, PowerShell or JavaScript it’s all code and it’s unlikely that we’ll face any challenges that haven’t already been faced in one way or another in a different setting.

In that time we’ve introduced a number of new tools and practices.

Package Management

The next thing to talk about is package management. I’ve written about the benefits of trying to avoid dependencies between your apps before (see here). However, if app A relies on app B and you cannot foresee ever deploying A without B then you have a dependency. There is no point trying to code your way round the problems that avoiding the dependency will create.

Accepting that your app has one or more dependencies – and most of our apps have at least one – opens up a bunch of questions and presents some interesting challenges.

Most obviously you need to know, where can I get the .app files for the apps that I am dependent on? Is it at least the minimum version required by my app? Is this the correct app for the version of the Dynamics NAV / Dynamics 365 Business Central that I am developing against? Are the apps that I depend on themselves dependent on other apps? If so, where do I get those from? Is there another layer of dependencies below that? Is it really turtles all the way down?

These are the sorts of questions that you don’t want to have to worry about when you are setting up an environment to develop in. Docker gives us a slick way to quickly create disposable development and testing environments. We don’t want to burn all the time that Docker saves us searching for, publishing and installing app files before we can start work.

This is what a package manager is for. The developer just needs to declare what their app depends on and leave the package manager to retrieve and install the appropriate packages.

The Goal

Why are we talking about this? What are we trying to achieve?

We want to keep the maintenance of all apps separate. When writing app A I shouldn’t need to know or care about the development of app B beyond my use of its API. I just need to know:

  • The minimum version that includes the functionality that I need – this will go into my app.json file
  • I can acquire that, or a later, version of the app from somewhere as and when I need it
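In app.json that declaration looks something like this. The ids, names and version numbers are invented for illustration; the version on the dependency entry is the minimum version my app requires:

```json
{
  "id": "a1b2c3d4-0000-0000-0000-000000000001",
  "name": "App A",
  "publisher": "Our Company",
  "version": "1.0.0.0",
  "dependencies": [
    {
      "id": "a1b2c3d4-0000-0000-0000-000000000002",
      "name": "App B",
      "publisher": "Our Company",
      "version": "1.2.0.0"
    }
  ]
}
```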

I want to be able to specify my dependencies and with the minimum of fuss download and install those apps into my Docker container.

We’ve got a PowerShell command to do just that.

Get-ALDependencies -Container BCOnPrem -Install

There are a few jigsaw pieces we need to gather before we can start putting it all together.

Locating the Apps

We need somewhere to store the latest version of the apps that we might depend upon. There is usually some central, public repository where the packages are hosted – think of the PowerShell Gallery or Docker Hub for example.

We don’t have an equivalent repository for AL apps. AppSource performs that function for Business Central SaaS but that’s not much use to us while we are developing or if the apps we need aren’t on AppSource. We’re going to need to set something up ourselves.

You could just use a network folder. Or maybe SharePoint. Or some custom web service that you created. Our choice is Azure DevOps build artefacts. For a few reasons:

  • We’ve already got all of our AL code going through build pipelines anyway. The build creates the .app files, digitally signs them and stores them as build artefacts
  • The artefacts are only stored if all the tests ran successfully which ought to give us more confidence relying on them
  • The build automatically increments the app version so it should always be clear which version of the app is later and we shouldn’t get caught in app version purgatory when upgrading an app that we’re dependent on
  • We’re already making use of the Azure DevOps REST API for loads of other stuff – it was easy to add some commands to retrieve the build artefacts (hence my earlier post on getting started with the API)

Identifying the Repository

There is a challenge here. In the app.json file we identify dependencies by app name, id and publisher. To find a build – and its artefacts – we need to know the project and repository name in Azure DevOps.

Seeing as we can’t add extra details into the app.json file itself we hold these details in a separate json file – environment.json. This file can have an array of dependency objects with a:

  • name – which should match the name of the dependency in the app.json file
  • project – the Azure DevOps project to find this app in
  • repo – the Git repository in that project to find this app in
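A minimal environment.json with a single dependency entry might look like this (the project and repo names are placeholders, not our actual setup):

```json
{
  "dependencies": [
    {
      "name": "App B",
      "project": "Our Project",
      "repo": "app-b"
    }
  ]
}
```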

Once we know the right repository we can use the Azure DevOps API to find the most recent successful build and download its artefacts.

I’m aware that we could use Azure DevOps to create proper releases, rather than downloading apps that are still in development. We probably should – maybe I’ll come back and update this post some day. For now, we find that using the artefacts from builds is fine for the two main purposes we use them: creating local development environments and creating a Docker container as part of a build. We have a separate, manual process for uploading new released versions to SharePoint for now.

The Code

So much for the theory, let’s look at some code. In brief we:

  1. Read app.json and iterate through the dependencies
  2. For each dependency, find the corresponding entry in the environment.json file and read the project and repo for that dependency
  3. Download the app from the last successful build for that repo
  4. Acquire the app.json of the dependency
  5. Repeat steps 2-4 recursively for each branch of the dependency tree
  6. Optionally publish and install the apps that have been found (starting at the bottom of the tree and working up)

A few notes about the code:

  • It’s not all here – particularly the definition of Invoke-TFSAPI. That is just a wrapper for the Invoke-WebRequest command which adds the authentication headers (as previously described)
  • These functions are split across different files and grouped into a module, I’ve bundled them into a single file here for ease

(The PowerShell is hosted here if you can’t see it embedded below.)

function Get-ALDependencies {
    param(
        [string]$SourcePath = (Get-Location),
        [string]$ContainerName = (Split-Path (Get-Location) -Leaf),
        [switch]$Install
    )
    if (!([IO.Directory]::Exists((Join-Path $SourcePath '.alpackages')))) {
        Create-EmptyDirectory (Join-Path $SourcePath '.alpackages')
    }
    $AppJson = ConvertFrom-Json (Get-Content (Join-Path $SourcePath 'app.json') -Raw)
    Get-ALDependenciesFromAppJson -AppJson $AppJson -SourcePath $SourcePath -ContainerName $ContainerName -Install:$Install
}

function Get-ALDependenciesFromAppJson {
    param(
        $AppJson,
        [string]$SourcePath = (Get-Location),
        [string]$RepositoryName,
        [string]$ContainerName,
        [switch]$Install
    )
    foreach ($Dependency in $AppJson.dependencies) {
        # find the Azure DevOps project/repo for this dependency in environment.json
        $EnvDependency = Get-DependencyFromEnvironment -SourcePath $SourcePath -Name $Dependency.name
        $Apps = Get-AppFromLastSuccessfulBuild -ProjectName $EnvDependency.project -RepositoryName $EnvDependency.repo
        # recurse into the dependency's own dependencies
        $DependencyAppJson = Get-AppJsonForProjectAndRepo -ProjectName $EnvDependency.project -RepositoryName $EnvDependency.repo
        Get-ALDependenciesFromAppJson -AppJson $DependencyAppJson -SourcePath $SourcePath -RepositoryName $RepositoryName -ContainerName $ContainerName -Install:$Install
        foreach ($App in $Apps) {
            if (!$App.FullName.Contains('Tests')) {
                Copy-Item $App.FullName (Join-Path (Join-Path $SourcePath '.alpackages') $App.Name)
                if ($Install.IsPresent) {
                    try {
                        Publish-NavContainerApp -containerName $ContainerName -appFile $App.FullName -sync -install
                    }
                    catch {
                        if (!($_.Exception.Message.Contains('already published'))) {
                            throw $_.Exception.Message
                        }
                    }
                }
            }
        }
    }
}

function Get-AppJsonForProjectAndRepo {
    param([string]$ProjectName, [string]$RepositoryName)
    $VSTSProjectName = (Get-VSTSProjects | where name -like ('*{0}*' -f $ProjectName)).name
    $AppContent = Invoke-TFSAPI ('{0}{1}/_apis/git/repositories/{2}/items?path=app.json' -f (Get-TFSCollectionURL), $VSTSProjectName, (Get-RepositoryId -ProjectName $VSTSProjectName -RepositoryName $RepositoryName)) -GetContents
    ConvertFrom-Json $AppContent
}

function Get-DependencyFromEnvironment {
    param([string]$SourcePath, [string]$Name)
    Get-EnvironmentKeyValue -SourcePath $SourcePath -KeyName 'dependencies' | where name -eq $Name
}

function Get-EnvironmentKeyValue {
    param(
        [string]$SourcePath = (Get-Location),
        [string]$KeyName
    )
    if (!(Test-Path (Join-Path $SourcePath 'environment.json'))) {
        return ''
    }
    $JsonContent = Get-Content (Join-Path $SourcePath 'environment.json') -Raw
    $Json = ConvertFrom-Json $JsonContent
    $Json.$KeyName
}

function Get-VSTSProjects {
    (Invoke-TFSAPI -Url ('{0}_apis/projects?$top=1000' -f (Get-TFSCollectionURL))).value
}

function Get-RepositoryId {
    param([string]$ProjectName, [string]$RepositoryName)
    $Repos = Invoke-TFSAPI ('{0}{1}/_apis/git/repositories' -f (Get-TFSCollectionURL), $ProjectName)
    if ($RepositoryName -ne '') {
        $Id = ($Repos.value | where name -like ('*{0}*' -f $RepositoryName)).id
    }
    else {
        $Id = $Repos.value.item(0).id
    }
    if ($Id -eq '' -or $Id -eq $null) {
        # fall back to the first repository in the project
        $Id = Get-RepositoryId -ProjectName $ProjectName -RepositoryName ''
    }
    $Id
}

An Introduction to Pull Requests in Azure DevOps

An Intro to the Intro

I’ve previously written about our experience with source control and our eventual migration to Git. I said that pull requests in Azure DevOps are awesome and are one of the biggest reasons to consider the switch to Git. In this post we’ll dig a little more into the details of why they are so good and how to use them.

What Are You Trying to Achieve?

Before we start, don’t forget that code review (i.e. pull requests in Git) and source control are tools. They are a means to an end and not an end in themselves.

I get it. We’re developers and typically we love the latest tools and gadgets. We go to a conference and we hear “You should be using… Docker / PowerShell / Agile / Azure DevOps / pair programming / test-driven development / insert some other tech or best practice here…” That’s great, as long as we don’t lose sight of why we should be using them. What are you trying to achieve? What problem do you have that this new tool or practice will alleviate? What will its introduction make more efficient?

Think about how you’d answer those questions. Write them down. Discuss with colleagues. Leave yourself a voice memo. Whatever works. Just make sure you’ve got some idea of how introducing this tool is going to help achieve your team’s goals.

The Goal

OK, let’s start with the goal. Better quality software, delivered faster.

  • Better quality means the code is clear, easy to read and maintain, does what it is supposed to do and doesn’t do more than it is supposed to do
  • Delivered faster means we are able to take a requirement or bug, make the code changes and get them out to our users in a shorter space of time

One of the ways we will work towards that goal is by reviewing code before it is shipped. You might query how adding a review step allows us to deliver faster but consider the time that is sometimes wasted going back and forth with a consultant or customer fixing bugs that could have been found during a code review.

The Process

Before we get stuck into the specifics of pull requests in Azure DevOps, take a minute to think about how you’d want this process to work. Consider the requirements of both the reviewers and the author. This is my list.

  • Clearly identify the code changes that are under review
  • Select one or more colleagues to review the code
  • Allow the reviewers to add comments. It must be clear which line(s) of code the comments are about. Comments must be visible to all reviewers
  • Allow for discussion of particular issues. The author may need to answer questions, reviewers may need to add clarifications to their comments
  • The author must be able to make further code changes to create a new version of the code under review. Reviewers should be able to see the changes that have been made between versions
  • Send notifications to reviewers when a change is made to a review that they are involved in
  • Record when reviewers are satisfied that the changes can be shipped
  • Keep a record of the review after it has been completed so that it can be referred back to, if necessary

Beyond the scope of this post, but related:

  • Run automated tests against the code under review and record the test results
  • Prevent a review from being completed if any associated tests have failed
  • Mandate that code can only be shipped after it has been through a code review

Do you agree with those requirements? What does your current process look like? How many of those points can you tick off? Would you see value in adopting a process that would allow you to tick more, or all, of the points on the list?

Pull Requests

On to the topic at hand. A pull request is the process of merging code changes between branches in Git repositories – or in our scenario between two branches in the same repository.

Pull Request.gif

  • Developer clones the repository to their local machine
  • Create a new local branch to start some new feature e.g. the branch might be called feature/some-new-feature
  • Start developing and committing their changes to that local branch
  • Push local branch to create a copy on the server (usually referred to as origin)
  • Create a pull request to merge the changes from the feature/some-new-feature branch to the master branch
  • Reviewers and author discuss the changes. Author (or another developer) pushes new commits to create an update to the pull request. Repeat as necessary
  • Complete the pull request to merge the changes into the master branch
    • While completing, optionally squash the commits into a new single commit (as shown in the gif)
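The developer’s side of that flow is just a handful of git commands. Here it is sketched against a scratch local “remote” (branch and file names invented; the pull request itself is then raised in the Azure DevOps portal):

```shell
# simulate the server and clone it
git init --bare origin-demo.git
git clone origin-demo.git feature-demo
cd feature-demo
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -b master
echo 'initial' > app.al
git add app.al
git commit -m "Initial commit"
git push -u origin master

# create a new local branch for the feature
git checkout -b feature/some-new-feature
echo 'new feature' >> app.al
git add app.al
git commit -m "Add some new feature"

# push the branch to the server; the pull request to merge it
# into master is created from the Azure DevOps portal
git push -u origin feature/some-new-feature
```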

Creating the Pull Request

You’ve done some work in a new branch in your local repository and have pushed that branch to the server. When you view the branches in Azure DevOps in the browser portal it prompts you to create a pull request for this new branch.

Typically you will be prompted to create a pull request from your new branch (referred to as the “source branch”) into the master branch (the “target branch”). If you follow some workflow that merges your changes into a development / release / some other branch first you can change the target branch and the request will update accordingly.

You will see the code differences between the source and target branches – these are the changes that are under review. If you have already associated the commit(s) in the source branch with work items they will be automatically associated with the pull request. You can manually add or remove work items as well. This provides useful context for the reviewers. Also some might ask, if you don’t have a work item describing the changes you’ve made…why have you changed anything?

Add individual or groups of reviewers and they will receive email notifications that their expertise and opinions are required.

Identifying Changes

PR Identifying Changes.jpg

The pull request shows a tree of folders/files that have been modified. The changes for each file are highlighted on the right. It’s nice and easy for everyone to see the code changes that are included in this pull request. You can also see the work item(s) that are associated with this pull request for a description of the requirements that these changes are designed to meet.


By default you’ll be looking at the changes that have been made across all updates made to the pull request i.e. all pushes to the source branch since the request has been opened. You can, however, just view changes made in a given update. Imagine you’ve already reviewed the code and given some feedback and the author has made a small change to address your comments. You can select the latest update to only see the latest changes.

PR Update Selection.jpg


The most impressive thing about the pull request flow is the comments. Highlighting the code that the comment relates to and posting your message creates a new thread which supports:

  • Others posting new messages in context to that thread
  • Tracking the status of the comment (active, resolved, won’t fix)
  • @mentioning colleagues to alert them to something
  • Linking to work items with #work item no.
  • Pasting images and emoji, liking comments
  • Seeing which update the comment refers to
  • Tracking how the code in question has changed between updates

If you have a requirement to get your team reviewing each other’s work and collaborating on code (and if you don’t…really?) then this is a lovely tool to help you do it.

The last point is especially good. If I arrive late to a review and some comments and updates have already been made I am easily able to catch up. I can see the comments that have already been made and the code changes that were made to resolve them.

PR View Original Diff.gif


Azure DevOps provides a lot of flexibility to configure how and when you want to be notified about pull requests. You can receive an email when:

  • You are included as a reviewer on a new pull request
  • A new update is created i.e. new commits are pushed to the source branch
  • The request is completed or abandoned
  • A reply is posted to a comment thread that you opened
  • You are @mentioned

In addition to notifications, the _pulls view provides an overview of the pull requests that you have created or are a reviewer for and their status.


When you’ve reviewed the code changes you cast your vote on the pull request. The options are: Approve, Approve with suggestions, Wait for author, Reject.


Once the comments have been commented upon and the votes voted on you can hit the big Complete button. This marks the pull request as being complete and merges its code changes from the source branch into the target branch. With the following options:

  • Complete linked work items
  • Delete source branch
  • Squash changes into a single, new commit on the target branch

We tend to have all three ticked. If there are a bunch of tiny changes in the source branch e.g. fixing typos then I don’t particularly want to see those in the target branch. Generally we’re happy with all the changes related to the request being grouped into a single commit.
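What the squash option does is equivalent to a local squash merge, which you can see for yourself in a scratch repository (branch names and messages invented):

```shell
# scratch repo with a master branch and a messy feature branch
git init squash-demo
cd squash-demo
git config user.email "dev@example.com"
git config user.name "Dev"
git checkout -b master
echo 'initial' > app.al
git add app.al
git commit -m "Initial commit"

git checkout -b feature/typo-fixes
echo 'fix 1' >> app.al
git add app.al
git commit -m "Fix typo"
echo 'fix 2' >> app.al
git add app.al
git commit -m "Fix another typo"

# squash-merge: stage the combined changes as one new commit on master
git checkout master
git merge --squash feature/typo-fixes
git commit -m "Merged PR: tidy up captions"

# master gains a single commit; the two tiny fixes never appear in its history
git log --oneline master
```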

The request, complete with comments, commits and votes is archived and remains on Azure DevOps if you need to refer back to it. Like most things in Azure DevOps you can access them through the REST API as well – as I did the other day to get some stats on how many requests we had completed in 2018.


And there is a load more than that as well. Beyond this post, but maybe a topic for another day. I hope the above has been enough to whet your code review appetite to try it out and investigate further.

  • Protecting branches to only allow changes from a pull request (as opposed to pushing commits directly to the branch)
  • Enforcing a minimum number of reviewers and preventing users from reviewing their own changes
  • Enforcing that a build must run – and succeed – before the request can be completed
  • Enforcing that all comments are resolved before completing the request
  • Automatically include certain users or groups as reviewers on specified branches

Automatically Creating a CI Pipeline in Azure DevOps with YAML


Name your yml file .vsts-ci.yml and put it in the root of your project.

What Does the Title Mean?

There is a lot of chat about build pipelines and continuous integration (CI) at the moment. For the uninitiated let’s break down the title of this post:

  • CI = continuous integration, the practice of integrating ongoing development into your master development branch as soon as possible, making use of automated testing and building of your .app/.fob/.txt files
  • Azure DevOps = Microsoft’s platform for hosting your development projects, track tasks, builds and releases (formerly called Visual Studio Team Services, formerly called Team Foundation Server)
  • YAML = a markup language you can use to define the steps included in your automated build

This post isn’t an introduction to these concepts. You can find out more here:

YAML Pipeline

These days the cool kids are using .yml files to define the steps in their build. We’ve used the visual editor to define our pipelines in Azure DevOps for a while, but I think a .yml file is better, because:

  • Your build definition becomes part of your source code, meaning you get version history, you can do code review on its changes and link changes to your build with corresponding changes to the source code
  • Reusing the same pipeline across multiple Azure DevOps projects is easier – just copy the .yml file between the repositories
  • Azure DevOps can automatically create the CI pipeline for you (finally he gets to the point of the post)

Automatically Creating the Pipeline

Simply name your YAML build definition file .vsts-ci.yml, put it in the root of the repository and push it to Azure DevOps. The platform will automatically create a new CI pipeline for the project, using the steps defined in the file and kick off the build.
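A minimal .vsts-ci.yml might look like the sketch below. The pool name and script paths are placeholders, not our actual pipeline; the point is only the shape of the file:

```yaml
# .vsts-ci.yml - placed in the root of the repository
trigger:
- master

pool:
  name: Default   # a self-hosted agent pool

steps:
- powershell: .\build\create-container.ps1
  displayName: Create Docker container

- powershell: .\build\compile-app.ps1
  displayName: Compile AL app and run tests
```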

This makes me pretty happy.

Credit to Abel Wang:

Business Central Development With CI/CD

If you follow blogs about Dynamics 365 Business Central / NAV development, attended development sessions at Directions or have seen the schedule for NAVTechDays then you may have noticed the terms “CI/CD” or “pipeline” being thrown around.

What do those terms actually refer to? And how does it affect the way we approach development?


CI = “continuous integration”
CD = “continuous delivery” (or “continuous deployment”, if you prefer)

These are pretty old development concepts. Check out the Wikipedia entry if you want an overview and some of the history. I would summarise it like this.

Continuous integration: incorporate new development into your main development branch as soon as possible.

Continuous delivery: get that development in front of your end users as quickly as possible.

The concept of a pipeline is having a defined series of steps that new development goes through: build, test, publish and install into target environment(s) – automated as much as possible.


All this talk of “as soon as possible” sounds a little reckless. Is this really a good idea?

In a nutshell, we’re trying to minimise the time between identifying some changes that the customer needs (some new feature or bug fix) and those changes actually being deployed onto the customer’s system.

We want to avoid work in progress changes hanging around for ages. You’ve probably experienced the problems that come with that:

  • The work becomes harder to merge back into the master branch as time goes by
  • Future development dependent on these changes is held up or goes ahead with the worry it will clash with work in progress
  • People start to forget, or lose interest, in why the changes were required in the first place making testing and code review harder or less effective
  • The customer loses interest in the development and is less inclined to test or use the new development



All my experience is with Azure DevOps (what used to be called Visual Studio Team Services and used to be called Team Foundation Server) but other platforms provide similar functionality.

We start by defining small, discrete work items. I don’t have a fixed rule, but if the work can’t be completed in a single sprint (say, 2 weeks) then it’s probably too big and you should split it into smaller chunks.

The developer gets to work and puts their changes in for review. Pushing those changes up to the server triggers the build pipeline. Typically this is a series of tasks performed by a build agent running on a server that you control. Azure DevOps provides several options for agents hosted by Microsoft but for now they don’t provide the option we need to build AL packages.

I won’t go into detail about our build pipeline now but it includes:

  • Creating a Docker container
  • Compiling the AL source with the compiler included in the container
  • Running the automated tests (the developer should have included new tests to cover their changes)
  • Uploading the test results and the .app files (we split the product and its tests into two separate apps) as build artefacts
  • Notifying the developer of the build result

By the time any of the reviewers comes to look at the code review we should already know that:

  • All the tests have passed
  • The changes can be merged into the master branch without any conflicts

Nice. We can be much more confident hitting the Approve button knowing it passes the tests and will merge neatly with master. We get the changes incorporated back into the product quickly and have a clean starting point for the next cycle.


Delivery is a different story. At the time of writing our release process is to make the new .app package available on SharePoint. We don’t automate that.

With Dynamics NAV / BC on-premise there is scope for automating the publish & install of the new app package into target environments and tenants. That would involve the definition of a release pipeline. An agent on the target environment could collect the app package (or fob, or text file) created by the build pipeline and use PowerShell to import/compile/publish/install into one or more databases.

We don’t attempt this as in many cases we don’t control the environments that our apps are installed into. The servers are not ours to install agent software onto and be responsible for.

This is especially true of Business Central SaaS as we are developing apps for AppSource. No app package* makes it onto the platform until it has passed the AppSource validation process and been deployed by Microsoft on their own schedule.

*unless it is developed in the 50,000 – 99,999 object range and uploaded.

Getting Started

I hope that’s whet your appetite to go and investigate some more. Before you do you’ll need to be up and running with source code management and automated tests (perhaps more of that another time).