An Approach to Package Management in Dynamics 365 Business Central

TL;DR

We use PowerShell to call the Azure DevOps API and retrieve Build Artefacts from the last successful build of the repository/repositories that we’re dependent on.

Background

Over the last few years I’ve moved into a role where I’m managing a development team more than I’m writing code myself. I’ve spent a lot of that time looking at tools and practices in the broader software development community. After all, whether you’re writing C/AL, AL, PowerShell or JavaScript it’s all code and it’s unlikely that we’ll face any challenges that haven’t already been faced in one way or another in a different setting.

In that time we’ve introduced a number of new tools and practices to our development workflow.

Package Management

The next thing to talk about is package management. I’ve written about the benefits of trying to avoid dependencies between your apps before (see here). However, if app A relies on app B and you cannot foresee ever deploying A without B then you have a dependency. There is no point trying to code your way round the problems that avoiding the dependency will create.

Accepting that your app has one or more dependencies – and most of our apps have at least one – opens up a bunch of questions and presents some interesting challenges.

Most obviously you need to know: where can I get the .app files for the apps that I am dependent on? Is it at least the minimum version required by my app? Is this the correct app for the version of Dynamics NAV / Dynamics 365 Business Central that I am developing against? Are the apps that I depend on themselves dependent on other apps? If so, where do I get those from? Is there another layer of dependencies below that? Is it really turtles all the way down?

These are the sorts of questions that you don’t want to have to worry about when you are setting up an environment to develop in. Docker gives us a slick way to quickly create disposable development and testing environments. We don’t want to burn all the time that Docker saves us searching for, publishing and installing app files before we can start work.

This is what a package manager is for. The developer just needs to declare what their app depends on and leave the package manager to retrieve and install the appropriate packages.

The Goal

Why are we talking about this? What are we trying to achieve?

We want to keep the maintenance of all apps separate. When writing app A I shouldn’t need to know or care about the development of app B beyond my use of its API. I just need to know:

  • The minimum version that includes the functionality that I need – this will go into my app.json file (see the example below)
  • I can acquire that, or a later, version of the app from somewhere as and when I need it
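
For example, a single entry in the dependencies array of app.json might look like this (the id, publisher and versions are illustrative; later runtimes rename the appId property to id):

"dependencies": [
  {
    "appId": "5d3b5e02-0d78-4d89-8a50-123456789abc",
    "name": "App B",
    "publisher": "Our Company",
    "version": "1.2.0.0"
  }
]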

I want to be able to specify my dependencies and with the minimum of fuss download and install those apps into my Docker container.

We’ve got a PowerShell command to do just that.

Get-ALDependencies -Container BCOnPrem -Install

There are a few jigsaw pieces we need to gather before we can start putting it all together.

Locating the Apps

We need somewhere to store the latest version of the apps that we might depend upon. There is usually some central, public repository where the packages are hosted – think of the PowerShell Gallery or Docker Hub for example.

We don’t have an equivalent repository for AL apps. AppSource performs that function for Business Central SaaS but that’s not much use to us while we are developing or if the apps we need aren’t on AppSource. We’re going to need to set something up ourselves.

You could just use a network folder. Or maybe SharePoint. Or some custom web service that you created. Our choice is Azure DevOps build artefacts, for a few reasons:

  • We’ve already got all of our AL code going through build pipelines anyway. The build creates the .app files, digitally signs them and stores them as build artefacts
  • The artefacts are only stored if all the tests ran successfully, which ought to give us more confidence in relying on them
  • The build automatically increments the app version so it should always be clear which version of the app is later and we shouldn’t get caught in app version purgatory when upgrading an app that we’re dependent on
  • We’re already making use of the Azure DevOps REST API for loads of other stuff – it was easy to add some commands to retrieve the build artefacts (hence my earlier post on getting started with the API)

Identifying the Repository

There is a challenge here. In the app.json file we identify dependencies by app name, id and publisher. To find a build – and its artefacts – we need to know the project and repository name in Azure DevOps.

Seeing as we can’t add extra details into the app.json file itself, we hold them in a separate json file – environment.json. This file can have an array of dependency objects (see the example after this list), each with a:

  • name – which should match the name of the dependency in the app.json file
  • project – the Azure DevOps project to find this app in
  • repo – the Git repository in that project to find this app in
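
A minimal environment.json built from those keys might look like this (the names are illustrative):

{
  "dependencies": [
    {
      "name": "App B",
      "project": "Our Project",
      "repo": "app-b"
    }
  ]
}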

Once we know the right repository we can use the Azure DevOps API to find the most recent successful build and download its artefacts.
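
For illustration, that lookup might go something like this ($Organisation, $Project, $RepoId and $PAT are placeholders, and this sketch calls Invoke-RestMethod directly rather than our Invoke-TFSAPI wrapper):

$Headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$PAT")) }

# most recent successful build for the repository
$BuildsUri = "https://dev.azure.com/$Organisation/$Project/_apis/build/builds" +
  "?statusFilter=completed&resultFilter=succeeded&repositoryType=TfsGit&repositoryId=$RepoId&`$top=1&api-version=5.0"
$Build = (Invoke-RestMethod -Uri $BuildsUri -Headers $Headers).value | Select-Object -First 1

# each artefact exposes a downloadUrl for a zip containing the .app files
$ArtefactsUri = "https://dev.azure.com/$Organisation/$Project/_apis/build/builds/$($Build.id)/artifacts?api-version=5.0"
(Invoke-RestMethod -Uri $ArtefactsUri -Headers $Headers).value | ForEach-Object { $_.resource.downloadUrl }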

I’m aware that we could use Azure DevOps to create proper releases, rather than downloading apps that are still in development. We probably should – maybe I’ll come back and update this post some day. For now, we find that using the artefacts from builds is fine for the two main purposes we use them for: creating local development environments and creating a Docker container as part of a build. We have a separate, manual process for uploading newly released versions to SharePoint.

The Code

So much for the theory, let’s look at some code. In brief we:

  1. Read app.json and iterate through the dependencies
  2. For each dependency, find the corresponding entry in the environment.json file and read the project and repo for that dependency
  3. Download the app from the last successful build for that repo
  4. Acquire the app.json of the dependency
  5. Repeat steps 2-4 recursively for each branch of the dependency tree
  6. Optionally publish and install the apps that have been found (starting at the bottom of the tree and working up)

A few notes about the code:

  • It’s not all here – particularly the definition of Invoke-TFSAPI. That is just a wrapper for the Invoke-WebRequest command which adds the authentication headers (as previously described)
  • These functions are split across different files and grouped into a module; I’ve bundled them into a single file here for ease

(The full PowerShell is hosted here: https://gist.github.com/jimmymcp/37c6f9a9981b6f503a6fecb905b03672)
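
The heart of it is a recursive function shaped something like this – a simplified sketch, not the actual implementation. Get-BuildArtefact and Get-AppJsonFromAppFile are hypothetical stand-ins for the real helpers, and Publish-NavContainerApp comes from the navcontainerhelper module:

function Get-ALDependenciesFromAppJson {
    param (
        $AppJson,
        [string]$SourcePath,
        [string]$ContainerName,
        [switch]$Install
    )

    # step 2: environment.json maps each dependency name to a project/repo
    $Environment = Get-Content (Join-Path $SourcePath 'environment.json') -Raw | ConvertFrom-Json

    foreach ($Dependency in $AppJson.dependencies) {
        $EnvEntry = $Environment.dependencies | Where-Object name -eq $Dependency.name
        if ($null -eq $EnvEntry) { continue }

        # steps 3 and 4: download the .app from the last successful build and read its app.json
        $AppFile = Get-BuildArtefact -Project $EnvEntry.project -Repo $EnvEntry.repo
        $DependencyAppJson = Get-AppJsonFromAppFile -AppFile $AppFile

        # step 5: recurse down this branch of the dependency tree
        Get-ALDependenciesFromAppJson -AppJson $DependencyAppJson -SourcePath $SourcePath -ContainerName $ContainerName -Install:$Install

        # step 6: publish and install from the bottom of the tree up
        if ($Install) {
            Publish-NavContainerApp -containerName $ContainerName -appFile $AppFile -sync -install
        }
    }
}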

An Introduction to Pull Requests in Azure DevOps

An Intro to the Intro

I’ve previously written about our experience with source control and our eventual migration to Git. I said that pull requests in Azure DevOps are awesome and are one of the biggest reasons to consider the switch to Git. In this post we’ll dig a little more into the details of why they are so good and how to use them.

What Are You Trying to Achieve?

Before we start, don’t forget that code review (i.e. pull requests in Git) and source control are tools. They are a means to an end and not an end in themselves.

I get it. We’re developers and typically we love the latest tools and gadgets. We go to a conference and we hear “You should be using… Docker / PowerShell / Agile / Azure DevOps / pair programming / test-driven development / insert some other tech or best practice here…” That’s great, as long as we don’t lose sight of why we should be using them. What are you trying to achieve? What problem do you have that this new tool or practice will alleviate? What will its introduction make more efficient?

Think about how you’d answer those questions. Write them down. Discuss with colleagues. Leave yourself a voice memo. Whatever works. Just make sure you’ve got some idea of how introducing this tool is going to help achieve your team’s goals.

The Goal

OK, let’s start with the goal. Better quality software, delivered faster.

  • Better quality means the code is clear, easy to read and maintain, does what it is supposed to do and doesn’t do more than it is supposed to do
  • Delivered faster means we are able to take a requirement or bug, make the code changes and get them out to our users in a shorter space of time

One of the ways we will work towards that goal is by reviewing code before it is shipped. You might query how adding a review step allows us to deliver faster, but consider the time that is sometimes wasted going back and forth with a consultant or customer fixing bugs that could have been found during a code review.

The Process

Before we get stuck into the specifics of pull requests in Azure DevOps, take a minute to think about how you’d want this process to work. Consider the requirements of both the reviewers and the author. This is my list.

  • Clearly identify the code changes that are under review
  • Select one or more colleagues to review the code
  • Allow the reviewers to add comments. It must be clear which line(s) of code the comments are about. Comments must be visible to all reviewers
  • Allow for discussion of particular issues. The author may need to answer questions, reviewers may need to add clarifications to their comments
  • The author must be able to make further code changes to create a new version of the code under review. Reviewers should be able to see the changes that have been made between versions
  • Send notifications to reviewers when a change is made to a review that they are involved in
  • Record when reviewers are satisfied that the changes can be shipped
  • Keep a record of the review after it has been completed so that it can be referred back to, if necessary

Beyond the scope of this post, but related:

  • Run automated tests against the code under review and record the test results
  • Prevent a review from being completed if any associated tests have failed
  • Mandate that code can only be shipped after it has been through a code review

Do you agree with those requirements? What does your current process look like? How many of those points can you tick off? Would you see value in adopting a process that would allow you to tick more, or all, of those points of the list?

Pull Requests

On to the topic at hand. A pull request is the process of merging code changes between branches in Git repositories – or, in our scenario, between two branches in the same repository. The typical flow runs like this (sketched from the command line after the list):

[Animated gif: Pull Request.gif]

  • The developer clones the repository to their local machine
  • Creates a new local branch to start some new feature e.g. the branch might be called feature/some-new-feature
  • Starts developing and committing their changes to that local branch
  • Pushes the local branch to create a copy on the server (usually referred to as origin)
  • Creates a pull request to merge the changes from the feature/some-new-feature branch to the master branch
  • Reviewers and author discuss the changes. The author (or another developer) pushes new commits to create an update to the pull request. Repeat as necessary
  • Completes the pull request to merge the changes into the master branch
    • While completing, optionally squashes the commits into a new single commit (as shown in the gif)
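
That flow from the command line (the repository URL and branch name are placeholders):

git clone https://dev.azure.com/organisation/project/_git/some-repository
git checkout -b feature/some-new-feature
# make some changes, then stage and commit them
git add .
git commit -m "Add some new feature"
git push -u origin feature/some-new-feature
# then create the pull request from the new branch in the Azure DevOps portal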

Creating the Pull Request

You’ve done some work in a new branch in your local repository and have pushed that branch to the server. When you view the branches in Azure DevOps in the browser portal it prompts you to create a pull request for this new branch.

Typically you will be prompted to create a pull request from your new branch (referred to as the “source branch”) into the master branch (the “target branch”). If you follow some workflow that merges your changes into a development / release / some other branch first you can change the target branch and the request will update accordingly.

You will see the code differences between the source and target branches – these are the changes that are under review. If you have already associated the commit(s) in the source branch with work items they will be automatically associated with the pull request. You can manually add or remove work items as well. This provides useful context for the reviewers. Also some might ask, if you don’t have a work item describing the changes you’ve made…why have you changed anything?

Add individual or groups of reviewers and they will receive email notifications that their expertise and opinions are required.

Identifying Changes

[Image: PR Identifying Changes.jpg]

The pull request shows a tree of folders/files that have been modified. The changes for each file are highlighted on the right. It’s nice and easy for everyone to see the code changes that are included in this pull request. You can also see the work item(s) that are associated with this pull request for a description of the requirements that these changes are designed to meet.

Updates

By default you’ll be looking at the changes that have been made across all updates to the pull request i.e. all pushes to the source branch since the request was opened. You can, however, view just the changes made in a given update. Imagine you’ve already reviewed the code and given some feedback and the author has made a small change to address your comments. You can select the latest update to see only the latest changes.

[Image: PR Update Selection.jpg]

Comments

The most impressive thing about the pull request flow is the comments. Highlighting the code that the comment relates to and posting your message creates a new thread which supports:

  • Others posting new messages in context to that thread
  • Tracking the status of the comment (active, resolved, won’t fix)
  • @mentioning colleagues to alert them to something
  • Linking to work items with # followed by the work item number
  • Pasting images and emoji, liking comments
  • Seeing which update the comment refers to
  • Tracking how the code in question has changed between updates

If you have a requirement to get your team reviewing each other’s work and collaborating on code (and if you don’t…really?) then this is a lovely tool to help you do it.

The last point is especially good. If I arrive late to a review and some comments and updates have already been made I am easily able to catch up. I can see the comments that have already been made and the code changes that were made to resolve them.

[Animated gif: PR View Original Diff.gif]

Notifications

Azure DevOps provides a lot of flexibility to configure how and when you want to be notified about pull requests. You can receive an email when:

  • You are included as a reviewer on a new pull request
  • A new update is created i.e. new commits are pushed to the source branch
  • The request is completed or abandoned
  • A reply is posted to a comment thread that you opened
  • You are @mentioned

In addition to notifications the _pulls view (https://dev.azure.com/organisation/_pulls) provides an overview of the pull requests that you have created or are a reviewer for and their status.

Voting

When you’ve reviewed the code changes you cast your vote on the pull request. The options are: Approve, Approve with suggestions, Wait for author, Reject.

Completing

Once the comments have been commented upon and the votes voted on you can hit the big Complete button. This marks the pull request as complete and merges its code changes from the source branch into the target branch, with the following options:

  • Complete linked work items
  • Delete source branch
  • Squash changes into a single, new commit on the target branch

We tend to have all three ticked. If there are a bunch of tiny changes in the source branch, e.g. fixing typos, then I don’t particularly want to see those in the target branch. Generally we’re happy with all the changes related to the request being grouped into a single commit.

The request, complete with comments, commits and votes is archived and remains on Azure DevOps if you need to refer back to it. Like most things in Azure DevOps you can access them through the REST API as well – as I did the other day to get some stats on how many requests we had completed in 2018.
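
As a sketch, pulling completed requests back out of the API looks something like this (the organisation, project and repository names are placeholders, and $Headers carries a basic authentication header built from a personal access token):

$Uri = "https://dev.azure.com/organisation/project/_apis/git/repositories/some-repository/pullrequests" +
  "?searchCriteria.status=completed&`$top=1000&api-version=5.0"
$Requests = (Invoke-RestMethod -Uri $Uri -Headers $Headers).value

# count the requests that were completed in 2018
($Requests | Where-Object { ([datetime]$_.closedDate).Year -eq 2018 }).Count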

More

And there is a load more than that as well – beyond this post, but maybe a topic for another day. I hope the above has been enough to whet your code review appetite to try it out and investigate further.

  • Protecting branches to only allow changes from a pull request (as opposed to pushing commits directly to the branch)
  • Enforcing a minimum number of reviewers and preventing users from reviewing their own changes
  • Enforcing that a build must run – and succeed – before the request can be completed
  • Enforcing that all comments are resolved before completing the request
  • Automatically include certain users or groups as reviewers on specified branches

Automatically Creating a CI Pipeline in Azure DevOps with YAML

TL;DR

Name your .yml file .vsts-ci.yml and put it in the root of your project.

What Does the Title Mean?

There is a lot of chat about build pipelines and continuous integration (CI) at the moment. For the uninitiated let’s break down the title of this post:

  • CI = continuous integration, the practice of integrating ongoing development into your master development branch as soon as possible, making use of automated testing and building of your .app/.fob/.txt files
  • Azure DevOps = Microsoft’s platform for hosting your development projects and tracking tasks, builds and releases (formerly called Visual Studio Team Services, formerly called Team Foundation Server)
  • YAML = a human-readable format you can use to define the steps included in your automated build

This post isn’t an introduction to these concepts – there are plenty of good introductions out there.

YAML Pipeline

These days the cool kids are using .yml files to define the steps in their build. We’ve used the visual editor to define our pipelines in Azure DevOps for a while, but I think a .yml file is better, because:

  • Your build definition becomes part of your source code, meaning you get version history, you can do code review on its changes and you can link changes to your build definition with the corresponding changes to the source code
  • Reusing the same pipeline across multiple Azure DevOps projects is easier – just copy the .yml file between the repositories
  • Azure DevOps can automatically create the CI pipeline for you (finally he gets to the point of the post)

Automatically Creating the Pipeline

Simply name your YAML build definition file .vsts-ci.yml, put it in the root of the repository and push it to Azure DevOps. The platform will automatically create a new CI pipeline for the project, using the steps defined in the file and kick off the build.
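
A minimal .vsts-ci.yml might look something like this (the agent pool and build script are assumptions for illustration, not a definitive pipeline):

# .vsts-ci.yml
trigger:
- master

pool:
  name: Default   # a self-hosted agent pool

steps:
- powershell: .\Scripts\Build.ps1
  displayName: Run the build script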

This makes me pretty happy.

Credit to Abel Wang: https://www.youtube.com/watch?v=u3PNaLjTak4

Business Central Development With CI/CD

If you follow blogs about Dynamics 365 Business Central / NAV development, have attended development sessions at Directions or have seen the schedule for NAVTechDays then you may have noticed the terms “CI/CD” or “pipeline” being thrown around.

What do those terms actually refer to? And how does it affect the way we approach development?

Definitions

CI = “continuous integration”
CD = “continuous delivery” (or “continuous deployment”, if you prefer)

These are pretty old development concepts. Check out the Wikipedia entry if you want an overview and some of the history. I would summarise it like this.

Continuous integration: incorporate new development into your main development branch as soon as possible.

Continuous delivery: get that development in front of your end users as quickly as possible.

The concept of a pipeline is having a defined series of steps that new development goes through – build, test, publish and install into target environment(s) – automated as much as possible.

Why?

All this talk of “as soon as possible” sounds a little reckless. Is this really a good idea?

In a nutshell, we’re trying to minimise the time between identifying some changes that the customer needs (some new feature or bug fix) and those changes actually being deployed onto the customer’s system.

We want to avoid work in progress changes hanging around for ages. You’ve probably experienced the problems that come with that:

  • The work becomes harder to merge back into the master branch as time goes by
  • Future development dependent on these changes is held up or goes ahead with the worry it will clash with work in progress
  • People start to forget why the changes were required in the first place, or lose interest, making testing and code review harder or less effective
  • The customer loses interest in the development and is less inclined to test or use the new development

How?

Integration

All my experience is with Azure DevOps (what used to be called Visual Studio Team Services and used to be called Team Foundation Server) but other platforms provide similar functionality.

We start by defining small, discrete work items. I don’t have a fixed rule, but if the work can’t be completed in a single sprint (say, 2 weeks) then it’s probably too big and you should split it into smaller chunks.

The developer gets to work and puts their changes in for review. Pushing those changes up to the server triggers the build pipeline. Typically this is a series of tasks performed by a build agent running on a server that you control. Azure DevOps provides several options for agents hosted by Microsoft but for now they don’t provide the option we need to build AL packages.

I won’t go into detail about our build pipeline now, but it includes the following (a condensed sketch follows the list):

  • Creating a Docker container
  • Compiling the AL source with the compiler included in the container
  • Running the automated tests (the developer should have included new tests to cover their changes)
  • Uploading the test results and the .app files (we split the product and its tests into two separate apps) as build artefacts
  • Notifying the developer of the build result
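
Condensed into PowerShell with the navcontainerhelper module, those steps look something like this (the container name, image and paths are illustrative, and the real pipeline wraps them in YAML tasks):

Import-Module navcontainerhelper

$Credential = Get-Credential
New-NavContainer -accept_eula -containerName 'build' -auth NavUserPassword -Credential $Credential `
  -imageName 'mcr.microsoft.com/businesscentral/onprem' -includeTestToolkit

# compile the product and its tests as two separate apps
Compile-AppInNavContainer -containerName 'build' -appProjectFolder 'C:\Source\App' -credential $Credential
Compile-AppInNavContainer -containerName 'build' -appProjectFolder 'C:\Source\Tests' -credential $Credential

# publish both apps, run the tests and export the results for upload
Publish-NavContainerApp -containerName 'build' -appFile 'C:\Source\App\output\Our App.app' -sync -install
Publish-NavContainerApp -containerName 'build' -appFile 'C:\Source\Tests\output\Our Tests.app' -sync -install
Run-TestsInNavContainer -containerName 'build' -credential $Credential -XUnitResultFileName 'C:\Source\TestResults.xml'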

By the time any of the reviewers comes to look at the code review we should already know that:

  • All the tests have passed
  • The changes can be merged into the master branch without any conflicts

Nice. We can be much more confident hitting the Approve button knowing it passes the tests and will merge neatly with master. We get the changes incorporated back into the product quickly and have a clean starting point for the next cycle.

Delivery

Delivery is a different story. At the time of writing our release process is to make the new .app package available on SharePoint. We don’t automate that.

With Dynamics NAV / BC on-premise there is scope for automating the publish & install of the new app package into target environments and tenants. That would involve the definition of a release pipeline. An agent on the target environment could collect the app package (or fob, or text file) created by the build pipeline and use PowerShell to import/compile/publish/install into one or more databases.
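
On-premise, that publish and install amounts to a handful of cmdlets from the management shell – something like this (the instance name, paths and version are placeholders):

# load the Business Central management cmdlets (the path varies by version)
Import-Module 'C:\Program Files\Microsoft Dynamics 365 Business Central\140\Service\NavAdminTool.ps1'

Publish-NAVApp -ServerInstance BC140 -Path 'C:\Install\Our App_1.1.0.0.app' -SkipVerification
Sync-NAVApp -ServerInstance BC140 -Name 'Our App' -Version '1.1.0.0'
Install-NAVApp -ServerInstance BC140 -Name 'Our App' -Version '1.1.0.0'
# when upgrading an existing installation, run Start-NAVAppDataUpgrade instead of Install-NAVApp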

We don’t attempt this as in many cases we don’t control the environments that our apps are installed into. The servers are not ours to install agent software onto and be responsible for.

This is especially true of Business Central SaaS as we are developing apps for AppSource. No app package* makes it onto the platform until it has passed the AppSource validation process and been deployed by Microsoft on their own schedule.

*unless it is developed in the 50,000 – 99,999 object range and uploaded.

Getting Started

I hope that’s whetted your appetite to go and investigate some more. Before you do you’ll need to be up and running with source code management and automated tests (perhaps more of that another time).

Source Code Management: Conclusions

I stated in the first post in this series that I wasn’t going to offer any advice. I will, however, attempt to draw some conclusions from our experiences and hope that you’ll find them helpful, or at least interesting.

(Not) Migrating to Git

A few months before we trialled Git in earnest as a team I tried it out for myself. I had a look because I’d heard various reasons that we should migrate:

  • “It’s faster” – yes, in my experience all the key operations are faster in Git than TFVC (committing vs. checking-in, cloning vs. getting latest version, viewing differences between versions)
    • Is that a compelling reason in itself to migrate? You can be the judge of that
  • “Microsoft are moving to it themselves” – who cares? Do you have the same requirements as Microsoft?
    • This would only be a valid argument if they stopped supporting TFVC. As far as I can see they are adding support for more version control systems, not removing them (the Build system can now retrieve code from Subversion)
  • “VS Code has built in support for Git” – true, which is great.
    • You can add support for TFVC through a VS Code extension published by Microsoft
    • Again, you can decide how important the convenience of having support for Git in your IDE is weighed against other factors
    • Having tried a few GUIs for Git, VS Code is not my personal favourite – Git Extensions is

I decided at the time that we didn’t need to migrate to Git. In my estimation the benefits didn’t outweigh the challenges of having the team learn a new system and migrate to it. This was during the days of us developing in a central NAV development database (more on that here).

Moving to a distributed version control system while we were working in a single development database didn’t seem to make a lot of sense and I figured that our development practice should drive our choice of version control – not the other way round.

Migrating to Git

All of that said, now that we have migrated to Git I can’t imagine going back to TFVC. Some of our key experiences and learning points:

  • Git has a steeper learning curve. Getting your head round cloning the entire repository, how branches work, pushing, fetching and pulling changes – it’s all a little more involved than TFVC
  • It’s worth investing the time to understand the core concepts. I watched many hours of YouTube videos about Git and read lots of blogs – you can use Git as centralised version control (by pushing to the remote every time you commit) but you’re missing most of the power if you do
  • Personally, I forced myself to use the command line rather than a GUI for common tasks – this helped me grasp what the commands were actually doing and how they can be manipulated in different situations. That knowledge will come in handy when someone in your team asks why their rebase has resulted in a conflict and how to fix it
  • We create a lot of branches now, because it’s so easy and because we use pull requests (see below)
  • Git gives you much finer control over the repositories and your commits than TFVC: interactive rebasing, resetting, reverting, cherry-picking, squashing, fixing, amending commits – with a little practice and research (see above, and the examples after this list) there aren’t really any ways to screw up the repository so badly that you can’t clean it up again
  • The complexity that makes Git harder to get started with also makes it very flexible and powerful
  • Did I mention that pull requests are awesome? The tools to collaborate on code in Azure DevOps have revolutionised our development workflow. We got started with code review in TFVC but moving to Git has allowed us to move on to another level
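
A few of those tools from the command line:

git commit --amend --no-edit       # fold staged fixes into the previous commit
git rebase --interactive master    # reorder, squash or reword commits
git cherry-pick <commit>           # copy a commit onto the current branch
git revert <commit>                # create a new commit that undoes another
git reset --hard origin/master     # move the branch back to match the remote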

At the End of the Day

At the end of the day, source code management is a tool to help us turn out working software for our customers. Different dev teams use different systems in different ways. That’s because they have different development practices and procedures.

However, I think it’s fair to say that we’ve found source code management is not a substitute for good process. Implementing it was initially difficult because we weren’t following a consistent, disciplined development process. It was clear that we weren’t going to be able to extract much value from our system until we were.

As we have changed and sought to improve our process over the years, we have changed our system to suit – which feels like the right way round to me.