(Slightly) More Elegant Error Handling in Business Central

This is an intro post to the Error Message Mgt. codeunit and related objects. NAV has never been brilliant when it comes to error handling, for a couple of reasons.

  1. The error messages themselves sometimes leave a lot to be desired
  2. The whack-a-mole nature of fixing multiple errors by finding one at a time and attempting to post/register again

There isn’t a lot we can do about the standard error messages, but we can write more considerate errors for our users. Describe the problem and guide the user to the solution as much as possible, i.e. not “There was nothing to handle”.

What about #2? What if we could line up all the moles so that the user could whack them all in one go?1 We can use the Error Message Management objects to do just that.

This is a useful post you could take a look at – http://www.mynavblog.com/2019/04/09/how-to-write-error-and-confirm/ – but even having read that I still struggled my way through how to use it and write tests around it. I didn’t find the framework particularly easy or intuitive to work with, so I hope I can save someone else some head-scratching.

1 Metaphorical moles. I’m not endorsing whacking actual moles.

Scenario

You might want to use this framework when you’ve got some process that could throw errors for multiple different reasons. Usually these are going to be some posting or registering routine for a journal line or a document.

Often there are all sorts of things that can go wrong with those routines – posting date ranges, dimension errors, mandatory fields that have not been populated, missing posting setup, missing no. series… blah blah blah.

Rather than just throwing an error for the first problem we encounter we want to collect them together so that the user can fix them before posting again.

I was going to attempt a single overview post, but I’ve decided against that. I think it will be more useful (hopefully) to work through an example – albeit a silly one – in stages. I’ve got a small app for posting a record of video calls – because that’s what we need right now, more video calls and more admin.

The app adds a journal page to record the platform (Teams, Zoom, WhatsApp or Skype), the type of call, date, duration and participants. Before the journal can be posted there is a deliberately convoluted process to check for various errors, concisely summarised below:

  1. No. of Participants must not be 0
  2. Duration (mins) must not be 0
  3. Posting Date must not be blank
  4. Posting Date must be within allowed posting dates for the user
  5. Zoom calls cannot be over 40 minutes when the No. of Participants is > 2 – I’m a cheapskate and have got a free account
  6. Teams cannot be used for a call type of Family Quiz – surely no family is that corporate?
  7. WhatsApp should not be used for groups of more than 4 – it’s bad enough with 2
  8. WhatsApp should not be used for a call type of Customer Demo – I mean, you don’t…do you?
  9. Skype isn’t used for more than 2 participants – I know technically it can be…it just isn’t
  10. Family quizzes can’t be held Monday – Thursday
  11. A call type of Daily Team Call cannot be more than 45 minutes long – you need to find a smaller team to have daily calls with
  12. A call type of Daily Team Call cannot have more than 30 participants – see #11

While this is daft, it is an example of how a journal might not be able to be posted for lots of different reasons. Normally we expect the user to fix those errors one at a time and, if they’ve still got the will to live, post the batch when they have resolved them all.

At the moment I’m just using TestField and Error to throw errors when a journal line is invalid. Over the next few posts I’m going to see if I can use the Error Message Mgt. objects to build a list of errors and display them all at once. Then we’re going to talk about how to test this behaviour.
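As a starting point, the checks look something like this – a minimal sketch, with table and field names that are my guesses at the sample app’s schema rather than copied from it:

procedure CheckVideoCallJnlLine(VideoCallJnlLine: Record "Video Call Jnl. Line")
begin
    // TestField throws immediately if the field is blank/zero
    VideoCallJnlLine.TestField("No. of Participants");
    VideoCallJnlLine.TestField("Duration (mins)");
    VideoCallJnlLine.TestField("Posting Date");

    // and Error throws as soon as one rule is broken,
    // so the user only ever sees the first problem
    if (VideoCallJnlLine.Platform = VideoCallJnlLine.Platform::Zoom) and
       (VideoCallJnlLine."No. of Participants" > 2) and
       (VideoCallJnlLine."Duration (mins)" > 40)
    then
        Error('Zoom calls with more than 2 participants cannot be longer than 40 minutes.');
end;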

The source code is here: https://github.com/jimmymcp/error-message-mgt – disclaimer: it sucks. This is an example of using the error handling tools, not of how to write a good journal.

Sales Header Posting

In the meantime, if you want to see an example of this sort of error collection in the base application then look at SendToPosting on the Sales Header table.

// activate a handler to collect errors rather than throwing them
ErrorMessageMgt.Activate(ErrorMessageHandler);
// push a context element that any logged errors will be related to
ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, '');
// Codeunit.Run returns false if the posting routine fails
IsSuccess := CODEUNIT.Run(PostingCodeunitID, Rec);
if not IsSuccess then
  // display all of the errors that were collected
  ErrorMessageHandler.ShowErrors;

Then in the Sales-Post codeunit you’ll see this kind of thing (this is from CheckAndUpdate):

ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, CheckSalesHeaderMsg);
CheckMandatoryHeaderFields(SalesHeader);
if GenJnlCheckLine.IsDateNotAllowed("Posting Date", SetupRecID) then
  // log the error against the field and carry on, rather than throwing immediately
  ErrorMessageMgt.LogContextFieldError(...);

Testing Internal Functionality

Internal Access Modifier

We’ve had access modifiers in Business Central for a little while now. You can use them to protect tables, fields, codeunits and queries that shouldn’t be accessible to code outside your app.

For example, you might have a table that contains some sensitive data. Perhaps some part of a licensing mechanism or internal workings of your app that no one else should have access to. Mark the table as:

Access = Internal;

and only code in your app will be able to access it. Even if someone develops an app that depends on your app they will receive a compile error if they create a variable to the table: “<table> is inaccessible due to its protection level.” Before you ask about RecordRefs – I don’t know, I haven’t tested. I assume that Microsoft have thought of that and prevent another app from opening a RecordRef to an internal table belonging to another app.
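A minimal sketch of such a table (the object number, name and field are hypothetical):

table 50100 "License Key"
{
    Access = Internal; // dependent apps cannot declare a Record variable for this table

    fields
    {
        field(1; "Key"; Text[250]) { }
    }
}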

Alternatively you might have a function in a codeunit that shouldn’t be called from outside your app. The function needs to be public so that other objects in your app can call it, but you can mark it as internal to prevent anyone else calling it:

internal procedure SomeSensitiveMethod()
begin
  //some sensitive code that shouldn't be accessible from outside this app
end;

Testing

Cool.

But wait…how do we test this functionality? We develop our tests alongside the app code but split the test codeunits out into a separate app in our build pipeline – because that’s how Microsoft like it for AppSource submissions.

The result is that the tests run fine against the local Docker container that I am developing and testing against. I push my changes to Azure DevOps to create a pull request and…the build fails. My (separate) test app is now trying to access the internal objects of the production app and fails to compile.

The solution is to use the internalsVisibleTo key in app.json of the production app. List one or more apps (by id, name and publisher) that are allowed to access the internals of the production app. More about that here.
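In app.json it looks something like this – the id, name and publisher are placeholders for your test app’s actual values:

"internalsVisibleTo": [
  {
    "id": "00000000-0000-0000-0000-000000000000",
    "name": "My App Tests",
    "publisher": "My Company"
  }
]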

Maybe you already develop your tests as a separate app and so can copy the app id from app.json of the test app.

In our case we usually generate a new guid for the test app as part of the build process – because we don’t usually care what id it has. For the times we do want to specify the id of the test app, we have an environment.json file that holds some settings for the build – Docker image, credentials, translations to test etc. We can set a testappid in that file and include it in the internalsVisibleTo key in app.json.

Now the build splits the apps into two and creates a test app with the id specified by testappid which compiles and can access internal objects and functions of the production app.

“Discrete Business Logic” implements Interface

If you’ve read anything about what is new in BC16 for developers then you’ll have seen that AL now supports interfaces (*fanfare*). No more convoluted workarounds with events – at least once our customers have upgraded.

I won’t explain what interfaces are and how to use them in AL. You can read the release notes here or blog posts like this and this. Instead I’m going to share some thoughts about where we might make use of interfaces. The obvious disclaimer is that we’ve only just started working with them and I’ve no doubt that we’ll learn more about how to use them properly as we go.

TL;DR

Look for discrete blocks of business logic that can be abstracted to interfaces rather than entities. Identify which business logic you, or other devs dependent on your app, would like to be able to replace. Identify the essential methods that would need to be implemented to replace that logic.

Entities

If you do a quick search for interface examples in other languages you’ll find that most of them are based on entities i.e. real world things or concepts that you are modelling in your software. The common actions those entities can perform or properties that they have are grouped into an interface.

The favourite examples seem to be animals and vehicles e.g. animals can make a sound, eat, sleep etc. You end up with:

interface IAnimal
{
  int noOfLegs;
  procedure MakeSound();
  procedure Eat();
  procedure Sleep();
}

or

interface IVehicle
{
  decimal maxSpeed;
  int noOfWheels;
  procedure Start();
  procedure Drive();
  procedure Stop();
  procedure GetMaxSpeed();
}

In other languages you might model your entities with classes that implement those interfaces.

class TreeFrog implements IAnimal
...

class Segway implements IVehicle
...

The trouble is, that doesn’t map particularly well into AL. We model our entities in tables and tables don’t support interfaces, at least not yet. You could imagine, for instance, IPerson and IAddress interfaces which describe the properties and methods of people and addresses.

interface IPerson
{
  field("First Name"; Text[50]);
  field(Surname; Text[50]);
}

interface IAddress
{
  field(Address; Text[50]);
  field("Address 2"; Text[50]);
  field(City; Text[50]);
  field(County; Text[50]);
  field("Post Code"; Text[50]);
  field("Country/Region Code"; Code[10]);
  procedure Format(AddrArray: Array[8] of Text[50]);
}

Those interfaces could be implemented by various entities in Business Central.

table Customer implements IAddress;
table Employee implements IAddress, IPerson;
table "Salesperson/Purchaser" implements IPerson;

You could imagine that – but that’s all you can do for now. AL interfaces don’t support properties and can’t be implemented by tables.

That’s a long way of saying that it might be useful not to focus on entities when we are looking at candidates for abstracting into an interface.

Discrete Business Logic

Instead of thinking in terms of entities, try thinking in terms of discrete business logic. What self-contained blocks of business logic do you have in your app that it might be useful to replace with a different implementation? Or, what business logic do you have several implementations of that you need to choose between?

Consider that Microsoft’s first major use of an interface in the app wasn’t to do with customer, vendors, sales orders, G/L accounts or any other major entity. It was the sales price calculation logic. From BC16 you can select between different standard implementations or implement your own. More abstract than IVehicle.GetMaxSpeed() but probably far more useful in real life.
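A minimal sketch of the pattern (hypothetical names – the real price calculation interface is rather more involved):

interface IChargeCalculation
{
    procedure CalculateCharge(Amount: Decimal): Decimal;
}

codeunit 50110 "Default Charge Calc." implements IChargeCalculation
{
    procedure CalculateCharge(Amount: Decimal): Decimal
    begin
        exit(Amount * 0.1);
    end;
}

enum 50110 "Charge Calc. Method" implements IChargeCalculation
{
    Extensible = true; // dependent apps can add values with their own implementations

    value(0; Default)
    {
        Implementation = IChargeCalculation = "Default Charge Calc.";
    }
}

Calling code declares a variable of type Interface IChargeCalculation, assigns an enum value to it and calls CalculateCharge – the compiler and runtime take care of dispatching to the right codeunit.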

It occurs to me that there are probably several benefits to looking for existing and new blocks of business logic that are candidates for implementing via an interface.

Separation of Concerns

Hopefully obvious, but worth a mention. Anything that helps us to consider what our discrete blocks of business logic are and how to separate them from one another is a good thing. It’s easier to develop – especially in a team – and easier to understand and maintain later on.

The process of designing the interface and deciding exactly what is required to implement some business logic is valuable in itself.

Extensibility

Other developers want to build on top of our logic – or replace parts of our logic with their own. We try to make that as easy as possible with OnBefore/OnAfter events, Handled parameters and developer documentation. It is still a relatively complex task though and easy to subscribe to the wrong event, not respect the Handled flag or make some other mistake that breaks stuff.

An event? For me?

If you are able to just tell a 3rd party developer they can extend an enum and implement an interface, life becomes a little easier. The interface, and the compiler, tell them the methods they need to implement, and your app can call their implementation rather than having them subscribe to events and fire up an instance of their codeunit every time – like an excitable chihuahua barking every time post comes through the letter box only to find that the mail isn’t addressed to them.

Speaking of which, I imagine there must be some performance benefit – probably negligible in most cases – but there must be one.

Obsolescence

We’ve got the Obsolete properties now and can move methods around and use the ObsoleteReason to tell dependent developers how they need to change their code. That’s all great. What if you are having a much bigger restructuring of your code though?

There might be an opportunity to follow Microsoft’s lead. Introduce an interface and have the existing business logic implement that interface. That frees you up to create a new implementation and switch customers and custom development over in a controlled fashion.

For example, we’ve got a case at the moment where existing customisation has been done through event subscriptions. In future we’re going to want dependent developers to extend an enum and implement an interface.

We’re introducing an interface and a default implementation. The default implementation calls the existing events so custom development will continue to work. Then we and dependent developers can migrate over to the interface over time.
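Continuing the hypothetical example from above, that default implementation might look something like this:

codeunit 50111 "Legacy Charge Calc." implements IChargeCalculation
{
    procedure CalculateCharge(Amount: Decimal): Decimal
    var
        Charge: Decimal;
    begin
        Charge := Amount * 0.1;
        // existing subscribers keep working while they migrate to the interface
        OnAfterCalculateCharge(Amount, Charge);
        exit(Charge);
    end;

    [IntegrationEvent(false, false)]
    local procedure OnAfterCalculateCharge(Amount: Decimal; var Charge: Decimal)
    begin
    end;
}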

Performance of Test Code

Let’s talk about the performance of the test code that we write for Business Central. What do I mean by “performance” and how can we improve it?

Defining “Performance”

Obviously, before we set out to improve something we need to have an idea of what it is we’re trying to optimise for. I’m coming to think of the performance of test code in a couple of key ways:

  1. How easy/quick is it to write test code?
  2. How quickly do the tests run?

Performance of Writing Tests

I suppose none of the below points are specific to test code. They are relevant to any sort of code that we are writing, but we can be more inclined to neglect them for test code than production code. If you embrace any sort of automated testing discipline you’re going to spend a significant proportion of your time reading and writing test code – perhaps even as much as you do on production code. It is well worth investing a little time to clean up the code and make it easier to read and maintain.

Comments

Say what you like about comments in production code – variable names should declare the intent, comments are evil blah blah – I do find a few comments valuable in a test.

In fact, I write them first. Given some set of circumstances, when this happens then this is the expected result. Writing those sentences first helps to be clear about what I’m trying to test and what the desired behaviour actually is.

Of course the code in between those comments should be readable and easy to follow – but if you are diligent with a few comments per test you can describe the expected behaviour of that part of the system without having to read any code.
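For example – a sketch with hypothetical table and helper names, borrowing the video call journal idea from an earlier post:

[Test]
procedure TestCannotPostZoomCallOver40Mins()
var
    VideoCallJnlLine: Record "Video Call Jnl. Line";
    LibraryVideoCall: Codeunit "Library - Video Call";
    Assert: Codeunit Assert;
begin
    // [GIVEN] a Zoom call journal line with 3 participants and a duration of 45 minutes
    LibraryVideoCall.CreateZoomCallLine(VideoCallJnlLine, 3, 45);

    // [WHEN] the journal line is posted
    asserterror LibraryVideoCall.PostJnlLine(VideoCallJnlLine);

    // [THEN] a "call too long" error is thrown
    Assert.ExpectedError('cannot be longer than 40 minutes');
end;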

Grouping Tests Logically

I appreciate the situation is different for VARs but as an ISV we have large object ranges to do our development in. There is no reason for us to bundle unrelated code into the same codeunit. If we are starting work on something separate from existing business logic then it belongs in its own codeunit. In which case, why wouldn’t the corresponding tests also go in their own codeunit?

Ideally I want to be able to glance down the list of test codeunits that we have and see logical grouping of tests that correspond to recognisable entities or concepts in the production code. If I want to see how our app changes the behaviour of warehouse receipts I can look in WhseReceiptTests.Codeunit.al.

Refactoring Into Library Codeunits

As soon as you start writing tests you’ll start working with the suite of library codeunits provided by Microsoft. You’ll notice that they are separated into different areas of the system e.g. Library – Sales, Library – Warehouse, Library – Manufacturing and so on.

Very likely you’ll want to create your own library codeunit to:

  • Initialise some setup, perhaps create your app’s setup record, create some No. Series etc.
  • Create new records required by your tests – it is useful to follow the convention LibraryCodeunit.CreateXYZ(var XYZ: Record XYZ);
  • Consider having those Create functions returning the primary key of the new record as well so the result can be used inline in the test e.g.
    • LibraryCodeunit.CreateXYZ(var XYZ: Record XYZ): Code[20]
    • LibraryCodeunit.CreateXYZNo(): Code[20]
  • Use method overloads for the create functions – have an overload with more parameters when you need to specify extra field values but keep a simple overload for when you don’t
  • Identify blocks of code that are often required in tests and consider moving them to a library method e.g. creating an item with some bespoke fields populated in a certain way (see the sketch below)
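A sketch of such a library codeunit, reusing the hypothetical Video Call table – the object and helper names are illustrative:

codeunit 50140 "Library - Video Call"
{
    var
        LibraryUtility: Codeunit "Library - Utility";

    // convention: take the record by var and return the primary key
    procedure CreateVideoCall(var VideoCall: Record "Video Call"): Code[20]
    begin
        VideoCall.Init();
        VideoCall."No." := LibraryUtility.GenerateGUID();
        VideoCall.Insert(true);
        exit(VideoCall."No.");
    end;

    // overload for when the test only needs the key, not the record
    procedure CreateVideoCallNo(): Code[20]
    var
        VideoCall: Record "Video Call";
    begin
        exit(CreateVideoCall(VideoCall));
    end;
}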

Having a comprehensive library codeunit brings two benefits:

  1. Tests are easier and faster to write if you already have library methods to implement the Given part of the test
  2. Less code in the tests, making them easier to read

Performance of Running Tests

First, why do we care about how long tests take to run? Does it really matter if your test suite takes an extra minute or two to run?

Obviously, we want our builds to complete as quickly as possible, while still performing all the checks and steps that we want to include. The longer a build takes the more likely another one is going to be queued at the same time and eventually someone is going to end up having to wait. We’ve got a finite number of agents to run simultaneous builds (we host our own – more on that here if you’re curious).

But that isn’t the biggest incentive.

I’m a big fan of running tests while I’m developing – both new tests that I’m writing to cover my new code and existing tests (more on that here). I usually run:

  • the test that I’m working on very frequently (at least every few minutes – see it fail, see it pass)
  • all the tests in that codeunit frequently (maybe each time I’ve finished with a new test)
  • the whole test suite every so often (at least 2-3 times before pushing my work and creating a pull request)

After all, if you’ve got the tests, why not run them? I should know sooner rather than later if some code that I’ve changed has broken something. If the whole test suite takes 60 seconds to run that’s fine. If it takes 10 minutes that’s more of a problem.

In that case I’ll be more inclined to push my changes to the server without waiting, keep a build agent busy for half an hour, start working on something else and then get an email saying the build has failed. Something I could have realised and fixed if I’d run the test suite before I pushed my changes.

So, how to make them faster?

Minimal Setup

First, only create the scenario that is sufficient for your test. For example, we work with warehousing functionality a lot. If we’re testing something to do with warehouse shipments do we need a location with advanced warehousing, zones, bin types, bins, warehouse employees…?

Probably not. Likely I can create a location without Bin Mandatory or Requires Pick and still create a sufficient test.
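Something like this is often sufficient, assuming the standard Microsoft library codeunits:

// a plain location – no bins, zones or warehouse employees
LibraryWarehouse.CreateLocation(Location);
// rather than the full WMS setup:
// LibraryWarehouse.CreateFullWMSLocation(Location, 2);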

If you need ledger entries to test with you may be able to create and post the relevant journals rather than creating documents and posting them. Creating an item ledger entry by posting an item journal line is faster than posting a sales order.

Or, you probably want to prevent negative inventory in real life – but does that matter for your test? Save yourself the trouble of having to post some inventory before shipping an item and just allow it to go negative.

Try to restrict the setup of your tests to what is actually essential for the scenario that you are testing. Answering that question is, in itself, a useful thought process.

Shared Fixture

Better yet, set something up once per test codeunit and reuse it in each of the tests. This is what Luc van Vugt refers to as a “shared fixture”. You should check out his blog for more about that.

I feel a little mixed about this. I like the idea that each test is entirely responsible for creating its own given scenario and isn’t dependent on anything else, but there is no denying that this is faster. Finding a posted sales invoice that already exists is much faster than creating a customer, item and sales order and shipping and invoicing it.

No Setup

What is even faster than setting up some data one time? Doing it no times. Depending on what you are testing you may just be able to insert the records you need or call field validation on a record without inserting it.

If I’m testing that validating a field behaves in a certain way I may not need a record to actually be inserted in the table.

Alternatively, if you need a sales invoice header to test with you might be able just to initialise it and call SalesInvHeader.Insert(). It feels so wrong – but if your test just needs a record and doesn’t care where it came from, who cares? It will all be rolled back after the tests have run anyway.
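A minimal sketch:

// no posting routine, no related records – just a row in the table to test against
SalesInvHeader.Init();
SalesInvHeader."No." := LibraryUtility.GenerateGUID();
SalesInvHeader.Insert();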

Managing Business Central Development with Git: Platforms

Another post about Business Central development and Git. Maybe the last one. Who knows?

Whatever your precise circumstances, if you are developing apps for Business Central you have to be mindful of the differences between BC versions and how they affect your app. If you are only developing for SaaS you might only care about the current and next version.

If you are developing and maintaining apps for the on-prem/PaaS market then likely you need to concern yourself with a wider range of BC versions. Even a we-only-support-Business-Central-and-not-NAV stance means we are now supporting four major versions – 13, 14, 15 and 16. I refuse to call the versions “Business Central <Year> <Spring Release/Fall Release/Wave One/Wave Two>” – a number makes much more sense to me. Also, I’m British – “fall” is an accident, not a season.

Nomenclature aside, all of this does present us with a challenge. How do we maintain the source code for our apps, for different Business Central versions, in an efficient manner?

Changes Between Versions

For the uninitiated, what sorts of changes are we talking about between platform versions? There are various things to think about:

  • Runtime differences e.g. new mandatory properties in app.json
    • contextSensitiveHelp
    • target – using “Cloud” instead of “Extension”
    • dependencies – using “id” instead of “appId”
    • depending on the “System Application” and “Base Application” apps rather than using the application property
  • Standard fields that have recently been converted to enums
  • Standard functionality that has been moved, methods that have new signatures

And of course, many of you will have experienced the pain of the BC14 -> BC15 upgrade. TempBlob, Base64, Languages, Tenant Mgt. / Environment Info, Calendar Management – all breaking changes. Microsoft were criticised, rightly so, for breaking BC14 compatible apps so badly in BC15.

To their credit, however, Microsoft said that they would minimise future breaking changes, instead marking code as becoming obsolete for at least 12 months until it is removed. That has been borne out with the release of BC16. All but one of our BC15 apps works without any changes in BC16. The exception was an app that was using the Sorting Method on Warehouse Activity Header which has been converted from an option to an enum and now has different values. Microsoft sent me an email with the details of the compilation error.

Strategy

How to manage this then? When we first switched from developing AL apps on NAV2018 (don’t – it’s more trouble than it’s worth) to BC13 we created a new Git repo for each app. It soon became obvious that we didn’t want to keep doing that. We don’t want as many repos as apps multiplied by supported BC versions. We need something smarter than that.

We’ve settled on something like this instead:

  • the master branch has the stable code for the current release of BC (as of this week, BC16), app.json has a platform value to match the latest version (16.0.0.0) and is built against the current Docker image (mcr.microsoft.com/businesscentral/sandbox)
  • new development is done against the current BC (worldwide) version in release, bug, and feature branches (as described here)
  • the code for each version of BC that we are supporting is in a BC13, BC14 or BC15 branch – this branch has an appropriate platform value in app.json and is built against a sandbox Docker image of that version

Imagine a repo like this:

* 3b6ba3c (HEAD -> BC14) Env. Info changes for BC14
* 5e3fda6 TempBlob changes for BC14
* 0f85829 Update Docker image for BC14
* e4fe665 app.json for BC14
| * 517615b (BC15) Update Docker image for BC15
| * e893e39 app.json for BC15
|/
* 60fe758 (tag: 1.1.0, master) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

The current version of our app is 1.1.0 and we are supporting the current version of BC (BC16, in the master branch) and BC15 and BC14 in their respective branches. Revisiting an earlier idea, I like to think of these branches as telling a story, answering the question – “what changes do you have to make to the current version of the code to make it compatible with this version of BC?” For BC15 the answer is “not much” – just change app.json and the Docker image. For BC14 the answer is likely to be somewhat longer.

Now we are going to work on the next version of our app, v1.2.0. These changes would go through feature branches, pull requests, a release branch and eventually into master. I’ll skip all that and just show a new commit in the master branch.

* cd7b2ff (HEAD -> master, tag: 1.2.0) Changes for v1.2.0
| * 3b6ba3c (BC14) Env. Info changes for BC14
| * 5e3fda6 TempBlob changes for BC14
| * 0f85829 Update Docker image for BC14
| * e4fe665 app.json for BC14
|/
| * 517615b (BC15) Update Docker image for BC15
| * e893e39 app.json for BC15
|/
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

Pushing those changes to the master branch triggers a build against BC16. Now, we want to include the 1.2.0 changes in the BC15 and BC14 versions of our app. We can simply rebase the BC15 and BC14 branches back on top of the master branch.
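In practice that’s just (assuming local BC15 and BC14 branches):

git checkout BC15
git rebase master
git checkout BC14
git rebase master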

* f4a7d9b (HEAD -> BC14) Env. Info changes for BC14
* dd072ea TempBlob changes for BC14
* ad8905b Update Docker image for BC14
* 184001e app.json for BC14
| * 71140f4 (BC15) Update Docker image for BC15
| * 2981c4d app.json for BC15
|/
* cd7b2ff (tag: 1.2.0, master) Changes for v1.2.0
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

(Force) Pushing the changes to the BC15 and BC14 branches will trigger new builds of the app against their respective Docker images.

Depending on what the v1.2.0 changes actually were, we may need to do some more work in the BC14 branch to make the new code compatible e.g. if the new code included some use of the TempBlob codeunit.

* ff1455b (HEAD -> BC14) More TempBlob changes
* f4a7d9b Env. Info changes for BC14
* dd072ea TempBlob changes for BC14
* ad8905b Update Docker image for BC14
* 184001e app.json for BC14
| * 71140f4 (BC15) Update Docker image for BC15
| * 2981c4d app.json for BC15
|/
* cd7b2ff (tag: 1.2.0, master) Changes for v1.2.0
* 60fe758 (tag: 1.1.0) Some changes for v1.1.0
* 8bd6f26 (tag: 1.0.0) Initial version

Going back to the idea of the BC14 branch telling a coherent story of making the app compatible with BC14, does it make much sense to have two commits of TempBlob changes? No. It doesn’t add anything for a developer looking at the repo in the future. We can sort that with an interactive rebase: git rebase -i master

pick 184001e app.json for BC14
pick ad8905b Update Docker image for BC14
pick dd072ea TempBlob changes for BC14
pick f4a7d9b Env. Info changes for BC14
pick ff1455b More TempBlob changes

Change the rebase script to tell Git to “fixup” the second TempBlob change into the first.

pick 184001e app.json for BC14
pick ad8905b Update Docker image for BC14
pick dd072ea TempBlob changes for BC14
fixup ff1455b More TempBlob changes
pick f4a7d9b Env. Info changes for BC14

Those changes will be melded together, keeping the history of the repo neat and readable.