Managing Business Central Development with Git: Platforms

Another post about Business Central development and Git. Maybe the last one. Who knows?

Whatever your precise circumstances, if you are developing apps for Business Central you have to be mindful of the differences between BC versions and how they affect your app. If you are only developing for SaaS you might only care about the current and next versions.

If you are developing and maintaining apps for the on-prem/PaaS market then you likely need to concern yourself with a wider range of BC versions. Even a we-only-support-Business-Central-and-not-NAV stance means we are now supporting four major versions – 13, 14, 15 and 16. I refuse to call the versions “Business Central <Year> <Spring Release/Fall Release/Wave One/Wave Two>” – a number makes much more sense to me. Also, I’m British – “fall” is an accident, not a season.

Nomenclature aside, all of this does present us with a challenge. How do we maintain the source code for our apps, for different Business Central versions, in an efficient manner?

Changes Between Versions

For the uninitiated, what sorts of changes are we talking about between platform versions? There are various things to think about:

  • Runtime differences e.g. new mandatory properties in app.json (see the fragment after this list)
    • contextSensitiveHelpUrl
    • target – using “Cloud” instead of “Extension”
    • dependencies – using “id” instead of “appId”
    • depending on the “System Application” and “Base Application” apps rather than using the application property
  • Standard fields that have recently been converted to enums
  • Standard functionality that has been moved, methods that have new signatures
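
To make the first of those concrete, here is a sketch of the relevant fragment of a BC15+ app.json. The GUIDs are the well-known Microsoft app IDs for the System and Base Applications; the other values are placeholders for illustration:

{
  "platform": "16.0.0.0",
  "target": "Cloud",
  "contextSensitiveHelpUrl": "https://example.com/help/",
  "dependencies": [
    {
      "id": "63ca2fa4-4f03-4f2b-a480-172fef340d3f",
      "name": "System Application",
      "publisher": "Microsoft",
      "version": "16.0.0.0"
    },
    {
      "id": "437dbf0e-84ff-417a-965d-ed2bb9650972",
      "name": "Base Application",
      "publisher": "Microsoft",
      "version": "16.0.0.0"
    }
  ]
}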

And of course, many of you will have experienced the pain of the BC14 -> BC15 upgrade. TempBlob, Base64, Languages, Tenant Mgt. / Environment Info, Calendar Management – all breaking changes. Microsoft were criticised, rightly so, for breaking BC14 compatible apps so badly in BC15.

To their credit, however, Microsoft said that they would minimise future breaking changes, instead marking code as obsolete for at least 12 months before it is removed. That has been borne out with the release of BC16. All but one of our BC15 apps works without any changes in BC16. The exception was an app that was using the Sorting Method field on Warehouse Activity Header, which has been converted from an option to an enum and now has different values. Microsoft sent me an email with the details of the compilation error.

Strategy

How to manage this then? When we first switched from developing AL apps on NAV2018 (don’t – it’s more trouble than it’s worth) to BC13 we created a new Git repo for each app. It quickly became obvious that we didn’t want to keep doing that – we don’t want (number of apps) × (number of supported BC versions) repos. We need something smarter than that.

We’ve settled on something like this instead:

  • the master branch has the stable code for the current release of BC (as of this week, BC16), app.json has a platform value to match the latest version (16.0.0.0) and is built against the current Docker image (mcr.microsoft.com/businesscentral/sandbox)
  • new development is done against the current BC (worldwide) version in release, bug, and feature branches (as described here)
  • the code for each version of BC that we are supporting is in a BC13, BC14 or BC15 branch – this branch has an appropriate platform value in app.json and is built against a sandbox Docker image of that version

Imagine a repo like this:

* 3b6ba3c (HEAD -> BC14) Env. Info changes for BC14
* 5e3fda6 TempBlob changes for BC14
* 0f85829 Update Docker image for BC14
* e4fe665 app.json for BC14
| * 517615b (BC15) Update Docker image for BC15
| * e893e39 app.json for BC15
|/
* 60fe758 (tag: 1.1.0, master) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

The current version of our app is 1.1.0 and we are supporting the current version of BC (BC16, in the master branch) and BC15 and BC14 in their respective branches. Revisiting an earlier idea, I like to think of these branches as telling a story, answering the question – “what changes do you have to make to the current version of the code to make it compatible with this version of BC?” For BC15 the answer is “not much” – just change app.json and the Docker image. For BC14 the answer is likely to be somewhat longer.
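
In practice, each of these branches starts life as a branch from master with a couple of compatibility commits on top – something like:

git checkout -b BC15 master
# set the platform value in app.json to 15.0.0.0, then
git commit -am "app.json for BC15"
# point the build at a BC15 sandbox Docker image, then
git commit -am "Update Docker image for BC15"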

Now we are going to work on the next version of our app, v1.2.0. These changes would go through feature branches, pull requests, a release branch and eventually into master. I’ll skip all that and just show a new commit in the master branch.

* cd7b2ff (HEAD -> master, tag: 1.2.0) Changes for v1.2.0
| * 3b6ba3c (BC14) Env. Info changes for BC14
| * 5e3fda6 TempBlob changes for BC14
| * 0f85829 Update Docker image for BC14
| * e4fe665 app.json for BC14
|/
| * 517615b (BC15) Update Docker image for BC15
| * e893e39 app.json for BC15
|/
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

Pushing those changes to the master branch triggers a build against BC16. Now, we want to include the 1.2.0 changes in the BC15 and BC14 versions of our app. We can simply rebase the BC15 and BC14 branches back on top of the master branch.
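
For each branch that is just a checkout and a rebase:

git checkout BC15
git rebase master
git checkout BC14
git rebase master

which leaves the repo looking like this: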

* f4a7d9b (HEAD -> BC14) Env. Info changes for BC14
* dd072ea TempBlob changes for BC14
* ad8905b Update Docker image for BC14
* 184001e app.json for BC14
| * 71140f4 (BC15) Update Docker image for BC15
| * 2981c4d app.json for BC15
|/
* cd7b2ff (tag: 1.2.0, master) Changes for v1.2.0
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

(Force) Pushing the changes to the BC15 and BC14 branches will trigger new builds of the app against their respective Docker images.
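
The force push is needed because the rebase has rewritten the commits on those branches. --force-with-lease is a safer option than a plain --force, as it refuses to overwrite commits on the remote that you haven’t seen:

git push --force-with-lease origin BC15
git push --force-with-lease origin BC14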

Depending on what the v1.2.0 changes actually were, we may need to do some more work in the BC14 branch to make the new code compatible e.g. if the new code included some use of the TempBlob codeunit.

* ff1455b (HEAD -> BC14) More TempBlob changes
* f4a7d9b Env. Info changes for BC14
* dd072ea TempBlob changes for BC14
* ad8905b Update Docker image for BC14
* 184001e app.json for BC14
| * 71140f4 (BC15) Update Docker image for BC15
| * 2981c4d app.json for BC15
|/
* cd7b2ff (tag: 1.2.0, master) Changes for v1.2.0
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

Going back to the idea of the BC14 branch telling a coherent story of making the app compatible with BC14, does it make much sense to have two commits of TempBlob changes? No. It doesn’t add anything for a developer looking at the repo in the future. We can sort that with an interactive rebase: git rebase -i master

pick 184001e app.json for BC14
pick ad8905b Update Docker image for BC14
pick dd072ea TempBlob changes for BC14
pick f4a7d9b Env. Info changes for BC14
pick ff1455b More TempBlob changes

Change the rebase script to tell Git to “fixup” the second TempBlob change into the first.

pick 184001e app.json for BC14
pick ad8905b Update Docker image for BC14
pick dd072ea TempBlob changes for BC14
fixup ff1455b More TempBlob changes
pick f4a7d9b Env. Info changes for BC14

Those two commits will be melded together, keeping the history of the repo neat and readable.

Problem with Case Statement in Business Central 13

This is a pretty niche post. Hopefully this problem only exists in a specific set of circumstances, in Business Central v13, but we wasted enough hours of our lives chasing it down yesterday that I thought I’d share it.

Consider this case statement:

i := 3;
test := true;

case i of
  1:
    exit('i equals 1');
  2:
    if test then
      exit('i equals 2, test is true')
    else
      if true then
        exit('you shouldn''t be here');
  else
    exit('i equals something else');
end;

exit('something bad has happened');

It exits with the result “i equals something else” – as you’d expect.

But if you replace the values in the case statement with something more complex – one() and two() are methods that return the integers 1 and 2 respectively:

i := 3;
test := true;

case i of
  one():
    exit('i equals 1');
  two():
    if test then
      exit('i equals 2, test is true')
    else
      if true then
        exit('you shouldn''t be here');
  else
    exit('i equals something else');
end;

exit('something bad has happened');
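
For completeness, one() and two() are nothing more exotic than:

local procedure one(): Integer
begin
    exit(1);
end;

local procedure two(): Integer
begin
    exit(2);
end;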

Now it exits with “something bad has happened”. The else branch of the case statement is jumped over and the final exit line is hit instead.

It seems like the final else is attached to the previous if rather than to the case statement. Wrapping the body of the previous case branch in begin…end solves the problem.

i := 3;
test := true;

case i of
  one():
    exit('i equals 1');
  two():
    begin
      if test then
        exit('i equals 2, test is true')
      else
        if true then
          exit('you shouldn''t be here');
    end;
  else
    exit('i equals something else');
end;

exit('something bad has happened');

Putting a begin/end here is something I would have done previously anyway – to make the intention of the code clear – even though the code analyser now warns that they aren’t needed.

As far as we can see this all works as expected in BC 14 and above, but is broken in all versions of BC 13.

Tip: Format AL Files OnSave in Visual Studio Code

Maybe everyone else is already doing this and I’m just slow on the uptake, but Visual Studio Code has options to automatically format files at various points.

The AL extension for VS Code provides a formatter for .al files. You can run it manually with the Format Document command (Shift+Alt+F). This inserts the correct number of spaces between parameters and brackets, indents lines correctly and generally tidies the current document up.

You can have VS Code automatically format the document when pasting, typing and saving. Search for “format” in the settings.

These settings will be applied globally. Alternatively you can enable the formatting just for specific file types. Click on the AL link in the right-hand corner of the status bar and choose “Configure AL language based settings…”

This opens the VS Code settings JSON file in your AppData folder (on Windows) and adds an [al] object to the file. Create the “editor.formatOnSave” key inside it and set its value to true to enable AL formatting when files are saved. You can use IntelliSense (Ctrl+Space) to list the valid options in this file.
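
The result is something like this in settings.json:

"[al]": {
    // format .al files on save; "editor.formatOnPaste" and
    // "editor.formatOnType" are also available
    "editor.formatOnSave": true
}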

VS Code for the win.

Testing Against a Remote Docker Host with AL Test Runner

Apologies for another post about AL Test Runner. If you don’t use or care about the extension you can probably stop reading now and come back next time. It isn’t my intention to keep banging on about it – but the latest version (v0.2.1) does plug a significant gap.

Next time I’ll move onto a different subject – some thoughts about how we use Git to manage our code effectively.

Developing Against a Remote Docker Container

While I still prefer developing against a local Docker container I know that many others publish their apps to a container hosted somewhere else. In which case your options for running tests against that container are:

  • Using the Remote Development capability of VS Code to open a terminal and execute PowerShell on the remote host – discussed here and favoured by Tobias Fenster (although his views on The Beautiful Game may make you suspicious of any of his opinions 😉)
  • Enabling PS-Remoting and opening a PowerShell session to the host to execute some commands over the network – today’s topic

Again, shout out to Max and colleagues for opening a pull request with their changes to enable this and for testing these latest mods.

Enable PS Remoting

Firstly, you’re going to need to be able to open a PowerShell session to the Docker host with:

New-PSSession <computer name>

I won’t pretend to understand the intricacies of setting this up in different scenarios – you should probably read the blog of someone who knows what they are talking about if you need help with it.

The solution will likely include:

  • Opening a PowerShell session on the host as administrator and running Enable-PSRemoting
  • Making sure the firewall is open to the port that you are connecting over
  • Passing a credential and possibly an authentication type to New-PSSession

To connect to my test server in Azure I run the following:

New-PSSession <server name> -Credential (Get-Credential) -Authentication Basic

AL Test Runner Config

There are several new keys in the AL Test Runner config file to accommodate remote containers. There are also a few new commands to help create the required config.

The Open Config File command will open the config JSON file or create it, if it doesn’t already exist. Set Container Credential and Set VM Credential can be used to set the credentials used to connect to the container and the remote host respectively.

The required config keys are:

  • dockerHost – the name of the server that is hosting the Docker containers. This name will be used to create the remote PowerShell session. Leaving this blank implies that containers are hosted locally and the extension will work as before
  • vmUserName / vmSecurePassword – the credentials used to connect to the Docker host
  • remoteContainerName – the name of the container to run tests against
  • newPSSessionOptions – switches and parameters that should be added to New-PSSession to open the session to the Docker host (see below)
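
Putting those keys together, the relevant part of the config file might look something like this (the values are placeholders for illustration):

{
    "dockerHost": "myserver.westeurope.cloudapp.azure.com",
    "vmUserName": "vmadmin",
    "vmSecurePassword": "<encrypted password>",
    "remoteContainerName": "bcsandbox",
    "newPSSessionOptions": "-Authentication Basic"
}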

The extension uses New-PSSession to open the PowerShell session to the Docker host. The ComputerName and Credential parameters will be populated from the dockerHost and vmUserName / vmSecurePassword config values respectively.

Any additional parameters that must be specified should be added to the newPSSessionOptions config key. For example, in my case I run

New-PSSession <server name> -Credential <credential> -Authentication Basic

so I need to set newPSSessionOptions in the config file to “-Authentication Basic”. You can use this key for -UseSSL, -Port, -SessionOption or whatever else you need to open the session.

With the config complete you should be able to execute tests, output the results and decorate the test codeunits as if you were working locally. Beautiful.

As ever, feedback, suggestions and contributions welcome. Hosted on GitHub.

Putting Queries to Use in Business Central

We’ve had query objects for a while now – since NAV 2013 (I think). In theory they sound great: link a bunch of tables together, aggregate some columns, get distinct values, left join, right join, cross join – all executed as a single SQL query.

Why Don’t We Use Queries Much?

In practice I haven’t seen them used that much. There’s probably a variety of reasons for that:

  • We have limited control over the design and execution of the query at runtime. The design is pretty static, making it difficult to create useful generic queries short of throwing in all the fields from the relevant tables – which feels heavy-handed
  • I find the way the links between dataitems are defined unintuitive
  • It isn’t easy to visualise the dataset that you are creating when writing the AL file
  • Habit – we all learnt FindFirst/FindSet, repeat and until when we first started playing with C/AL development and are more comfortable working with loops than datasets. Let’s be honest, the RDLC report writing experience hasn’t done much to convert us to a set-based approach to our data

However, just because there is room for improvement doesn’t mean that we can’t find good uses for queries now. Queries are perfect for:

  • Selecting DISTINCT values in the dataset
  • Aggregates – min, max, sum, average, count – especially for scenarios that aren’t suited to flowfields
  • Date functions – convert date columns to day/month/year – which allows you to easily aggregate another column by different periods
  • Outer joins – it’s possible, but far more expensive, to create this kind of dataset by hand with temporary tables
  • Selecting the top X rows
  • Exposing as OData web services
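
As a reminder of what one looks like, here is a minimal sketch of a query – the object ID and names are made up, but it shows a left outer join and a summed column:

query 50100 "Sales Qty. by Customer"
{
    elements
    {
        dataitem(Customer; Customer)
        {
            column(CustomerNo; "No.") { }

            dataitem(SalesLine; "Sales Line")
            {
                DataItemLink = "Sell-to Customer No." = Customer."No.";
                SqlJoinType = LeftOuterJoin;

                column(Quantity; Quantity)
                {
                    Method = Sum;
                }
            }
        }
    }
}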

It’s the last point in that list that I particularly want to talk about. We’ve been working on a solution lately where Business Central consumes its own OData web services.

What?! What kind of witchcraft is this? Why would you consume a query via a web service when you can call it directly with a query object? Hear me out…

Consuming Queries

I think you’ve got two main options for consuming queries via AL code.

Query Variable

You can define a variable of type query and specify the query that you want to run. This gives you some control over the query before you execute it – you can set filters on the query columns and set the top number of rows. Call Query.Open and Query.Read to execute the query and step through the result set.
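
A minimal sketch, assuming the hypothetical query from above:

procedure TopCustomersByQty()
var
    SalesQty: Query "Sales Qty. by Customer";
begin
    // filter and limit the query before executing it
    SalesQty.SetFilter(CustomerNo, '10000..20000');
    SalesQty.TopNumberOfRows(10);

    SalesQty.Open();
    while SalesQty.Read() do
        Message('%1: %2', SalesQty.CustomerNo, SalesQty.Quantity);
    SalesQty.Close();
end;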

The main downside is that you have to specify the query that you want to use at design-time. That might be fine for some specific requirement but is a problem if you are trying to create something generic.

Query Keyword

Alternatively we can use the Query keyword and execute a query by its ID. Choose whether you want the results in CSV (presumably this is popular among the same crowd that are responsible for an increase in cassette tape sales) or XML and save them either to disk or to a stream.

The benefit is that you can decide on the query that you want to call at runtime. Lovely. Unfortunately you have to sacrifice even the limited control that a query variable gave you in order to do so.
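
A sketch of that approach, saving the results of a query chosen at runtime as CSV into a stream (using the Temp Blob codeunit from the System Application):

procedure RunQueryAsCsv(QueryId: Integer)
var
    TempBlob: Codeunit "Temp Blob";
    InStr: InStream;
    OutStr: OutStream;
    Line: Text;
begin
    TempBlob.CreateOutStream(OutStr);
    // any query object ID will do - decided at runtime
    Query.SaveAsCsv(QueryId, OutStr);

    TempBlob.CreateInStream(InStr);
    while not InStr.EOS() do begin
        InStr.ReadText(Line);
        Message(Line); // do something useful with each row instead
    end;
end;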

OData Queries/Pages

Accessing the query via OData moves us towards having the best of both worlds. Obviously there is significant extra overhead in this approach:

  • Adding the query to the web service table and publishing
  • Acquiring the correct URL to a service tier that is serving OData requests for your query
  • Creating the HTTP request with appropriate authentication
  • Parsing the JSON response to get at the data that you want

This is significantly more work than the other approaches – let’s not pretend otherwise. However, it does give you all the power of OData query parameters to control the query. While I’ve been talking about queries up until now, almost all of this applies to pages exposed as OData services as well (there is a sketch of calling a service after this list):

  • $filter: specify column name, operator and filter value that you want to apply to the query, join multiple filters together with and/or operators
  • $select: a comma-separated list of columns to return i.e. only return the columns that you are actually interested in
  • $orderby: specify a column to order the results by – use in combination with $top to get the min/max value of a column in the dataset
  • $top: the number of rows to return
  • $skip: skip this many rows in the dataset before returning – useful if the dataset is too large for the service tier to return in a single call
  • $count: just return the count of the rows in the dataset – if you only want the count there is no need to parse the JSON response and count them yourself
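
To make that concrete, here is a hedged sketch of calling a published query from AL – the server, service name, company and credentials are all made up, and you would want proper error handling and URL building in real code:

procedure GetTopCustomerQty(): Text
var
    Client: HttpClient;
    Response: HttpResponseMessage;
    Base64Convert: Codeunit "Base64 Convert";
    Url: Text;
    Json: Text;
begin
    // $filter, $orderby and $top do the work that SetFilter and
    // TopNumberOfRows did for a query variable - but decided at runtime
    Url := 'https://myserver:7048/BC160/ODataV4/Company(''CRONUS'')/SalesQtyByCustomer' +
      '?$filter=CustomerNo%20ge%20''10000''&$orderby=Quantity%20desc&$top=1';

    Client.DefaultRequestHeaders.Add('Authorization',
      'Basic ' + Base64Convert.ToBase64('USER:WEB-SERVICE-ACCESS-KEY'));

    if not Client.Get(Url, Response) then
        Error('Could not reach the OData service');

    Response.Content.ReadAs(Json);
    exit(Json);
end;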