(Slightly) More Elegant Error Handling in Business Central

This is an intro post to the Error Message Mgt. codeunit and related objects. NAV has never been brilliant when it comes to error handling, for a couple of reasons.

  1. The error messages themselves sometimes leave a lot to be desired
  2. The whac-a-mole nature of fixing multiple errors by finding one at a time and attempting to post/register again

There isn’t a lot we can do about the standard error messages, but we can write more considerate errors for our users. Describe the problem and guide the user to the solution as much as possible, i.e. not “There was nothing to handle”.
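For example, rather than a bare failure, name the record, the problem and the likely fix. A minimal sketch – the procedure, label and parameters here are invented for illustration:

procedure CheckPostingDate(PostingDate: Date; AllowedFrom: Date; AllowedTo: Date)
var
    PostingDateNotAllowedErr: Label 'Posting Date %1 is outside the allowed range (%2 to %3). Change the date on the line, or extend the allowed range in General Ledger Setup or User Setup.';
begin
    // describe the problem and point the user at the solution
    if (PostingDate < AllowedFrom) or (PostingDate > AllowedTo) then
        Error(PostingDateNotAllowedErr, PostingDate, AllowedFrom, AllowedTo);
end;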

What about #2? What if we could line up all the moles so that the user could whack them all in one go?[1] We can use the Error Message Management objects to do just that.

This is a useful post you could take a look at – http://www.mynavblog.com/2019/04/09/how-to-write-error-and-confirm/ – but even having read it I still struggled to work out how to use the framework and how to write tests around it. I didn’t find it particularly easy or intuitive to work with, so I hope I can save someone else some head-scratching.

[1] Metaphorical moles. I’m not endorsing whacking actual moles.

Scenario

You might want to use this framework when you’ve got some process that could throw errors for multiple different reasons. Usually that’s going to be a posting or registering routine for a journal line or a document.

Often there are all sorts of things that can go wrong with those routines – posting date ranges, dimension errors, mandatory fields that have not been populated, missing posting setup, missing no. series… blah blah blah.

Rather than just throwing an error for the first problem we encounter we want to collect them together so that the user can fix them before posting again.

I was going to attempt a single overview post, but I’ve decided against that. I think it will be more useful (hopefully) to work through an example – albeit a silly one – in stages. I’ve got a small app for posting a record of video calls – because that’s what we need right now, more video calls and more admin.

The app adds a journal page to record the platform (Teams, Zoom, WhatsApp or Skype), the type of call, date, duration and participants. Before the journal can be posted there is a deliberately convoluted process to check for various errors, concisely summarised below:

  1. No. of Participants must not be 0
  2. Duration (mins) must not be 0
  3. Posting Date must not be blank
  4. Posting Date must be within allowed posting dates for the user
  5. Zoom calls cannot be over 40 minutes when the No. of Participants is > 2 – I’m a cheapskate and have got a free account
  6. Teams cannot be used for a call type of Family Quiz – surely no family is that corporate?
  7. WhatsApp should not be used for groups of more than 4 – it’s bad enough with 2
  8. WhatsApp should not be used for a call type of Customer Demo – I mean, you don’t…do you?
  9. Skype isn’t used for more than 2 participants – I know technically it can be…it just isn’t
  10. Family quizzes can’t be held Monday – Thursday
  11. A call type of Daily Team Call cannot be more than 45 minutes long – you need to find a smaller team to have daily calls with
  12. A call type of Daily Team Call cannot have more than 30 participants – see #11

While this is daft, it is an example of how a journal might not be able to be posted for lots of different reasons. Normally we expect the user to fix those errors one at a time and, if they’ve still got the will to live, post the batch when they have resolved them all.

At the moment I’m just using TestField and Error to throw errors when a journal line is invalid, as sketched below. Over the next few posts I’m going to see if I can use the Error Message Mgt objects to build a list of errors and display them all at once. Then we’re going to talk about how to test this behaviour.
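Something along these lines – a sketch only; the procedure and field names are from the example app and may not match the actual source:

procedure CheckVideoCallJnlLine(VideoCallJnlLine: Record "Video Call Jnl. Line")
begin
    // fail-fast validation: the user only ever sees the first problem
    VideoCallJnlLine.TestField("No. of Participants");
    VideoCallJnlLine.TestField("Duration (mins)");
    VideoCallJnlLine.TestField("Posting Date");
end;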

The source code is here: https://github.com/jimmymcp/error-message-mgt. Disclaimer: it sucks. This is an example of using the error handling tools, not of how to write a good journal.

Sales Header Posting

In the meantime, if you want to see an example of this sort of error collection in the base application then look at SendToPosting on the Sales Header table.

ErrorMessageMgt.Activate(ErrorMessageHandler);
ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, '');
IsSuccess := CODEUNIT.Run(PostingCodeunitID, Rec);
if not IsSuccess then
  ErrorMessageHandler.ShowErrors;

Then in the Sales-Post codeunit you’ll see this kind of thing (this is from CheckAndUpdate):

ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, CheckSalesHeaderMsg);
CheckMandatoryHeaderFields(SalesHeader);
if GenJnlCheckLine.IsDateNotAllowed("Posting Date", SetupRecID) then
  ErrorMessageMgt.LogContextFieldError(...);
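The same shape works for our own routines. A minimal sketch of the caller side, assuming a hypothetical "Video Call Jnl.-Post" codeunit that does the checking and posting:

procedure PostVideoCallJournal(var VideoCallJnlLine: Record "Video Call Jnl. Line"): Boolean
var
    ErrorMessageMgt: Codeunit "Error Message Management";
    ErrorMessageHandler: Codeunit "Error Message Handler";
    ErrorContextElement: Codeunit "Error Context Element";
    IsSuccess: Boolean;
begin
    // activate a handler to collect errors rather than stopping at the first one
    ErrorMessageMgt.Activate(ErrorMessageHandler);
    ErrorMessageMgt.PushContext(ErrorContextElement, VideoCallJnlLine.RecordId(), 0, '');

    // run the posting routine through Codeunit.Run so that a thrown error
    // is trapped rather than killing the whole process
    IsSuccess := Codeunit.Run(Codeunit::"Video Call Jnl.-Post", VideoCallJnlLine);
    if not IsSuccess then
        // show everything that was collected in one go
        ErrorMessageHandler.ShowErrors();

    exit(IsSuccess);
end;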

Share Docker Containers With VS Code Live Share

If you weren’t working remotely before 2020 then we’ve all had to get used to it this year. Collaboration tools like Teams are invaluable for keeping up with teammates during the day, discussing work and bouncing ideas and banter off each other. Video calls, screen sharing, chat, shared documents, gifs – that’s all great – but not quite a replacement for sitting at the same desk and looking at some code together.

There is a better way.

What is Live Share?

If you’re not already using VS Code Live Share then check it out. Get the extension pack rather than just the Live Share extension itself; the pack adds audio calling through the sharing session. Sign in with a Microsoft or GitHub account and then start a live share session. A URL is generated that you can share for others to join, from VS Code, Visual Studio or even a browser.

Others will be able to see the source code, navigate and edit it. You can see each other’s cursors and choose to follow someone or work independently. Just like real-time collaborating on a shared Google doc or Word document but in the IDE. Your own extensions, your own theme, but working on shared code.

Whether you’re into pair-programming or just occasionally want a colleague to look at your code and help with something this is a great solution. Being able to independently navigate the source code and go-to definitions is far better than simply screen sharing.

However, seeing the source code is one thing; you really want to be able to publish the changes and see them in the web client.

Sharing a Server

We develop against local Docker containers. Which is great for giving each of us complete control over our own environment. It’s not so great for sharing that environment with others. But wait, you can share your local Docker container through the Live Share session.

First, make sure that you are publishing all ports when you create the Docker container. This will publish the container’s internal ports (80, 443, 7045-7049, 8080) to ports on the host. You’ll be able to access ports inside the container via the ports they are mapped to on the host.
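If you create the container with docker run directly then the -P flag does exactly that – it publishes every exposed port to a random available host port. For illustration (the image name and environment variable are just an example – create the container however you usually do):

# -P publishes all of the image’s exposed ports to random host ports
docker run -P -e accept_eula=y mcr.microsoft.com/businesscentral/sandbox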

Run docker ps to see which host ports the container has been hooked up to. In my example, I’m accessing the web client over http so I’m interested in what port 80 is mapped to.
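If docker ps gives you more output than you want, you can trim it to just container names and port mappings:

docker ps --format "table {{.Names}}\t{{.Ports}}"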

0.0.0.0:60425->80 indicates that port 60425 on the host (my local machine) has been mapped to port 80 in the container. In which case, http://localhost:60425/bc should load the web client. Check that you can open a browser to that address and log in.

Docker container details with docker ps

Now we can share that address as a shared server. Click on Share server… and enter the web client URL in the dialog box. You can give a friendly name to the shared server as well if you like.

Once you’ve shared the container anyone else who has joined the session will be able to click on the link under Shared Servers. That will load the web client to your local Docker container through the Live Share session.

I’ll say that again – it will load the web client, running in your local Docker container, through the Live Share session 👀 As far as your collaborator is concerned they just load the same URL as you (http://localhost:60425/bc) and have access.

Before you ask, I don’t know. Probably magic.

Now What?

Now you can genuinely collaborate on some code, in real-time, and publish to the same Docker container to see the results and test. Maybe you want to work together on some code, maybe you want some help with a tricky bug or requirement. Last week I used it to ask a colleague to quickly test a fix for a bug that he had reported.

Use your imagination. This is just one of the many benefits that moving our development to VS Code has brought. Good times.

Tip: Octopus Merges in Git

If you occasionally glance at my blog you might have noticed that I am a big fan of pull requests as served up by Azure DevOps (exhibit A). I briefly described our typical branching strategy here, including how and why we use pull requests.

I love it. Writing, testing and reviewing discrete pieces of development independently of one another helps spread the work around the team and prevent work getting blocked by some unrelated development in the same project.

However, occasionally we’ll have several pull requests open for the same project which I want to publish to some testing environment (either a local Docker container or SaaS sandbox) all together. Maybe it made sense to develop those work items separately but it’s hard to test them in isolation.

I tend to create a new local branch called testing or something equally banal, merge all the changes into that branch and publish. Here’s where the octopus comes in. We usually use git merge to merge a single other branch into the current branch e.g.

git checkout master
git merge release/1.2.0 --ff-only

To merge the commits in the release branch into the master branch (but only if it is possible to fast-forward merge).

Git doesn’t restrict you to a single other branch though. You can add as many as you like. Say I’ve got three feature branches on the go which I want to push to a SaaS sandbox for a consultant to test.

git checkout -b testing
git merge feature/one feature/two feature/three

This will create and checkout a new testing branch and then merge the three feature branches into it. Octopuses for the win.
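Two things worth knowing: the octopus merge strategy refuses to run if any of the merges would need manual conflict resolution, so it suits branches that don’t touch the same code; and the testing branch is disposable – once everyone is done with it, throw it away:

git checkout master
git branch -D testing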

Testing Internal Functionality

Internal Access Modifier

We’ve had access modifiers in Business Central for a little while now. You can use them to protect tables, fields, codeunits and queries that shouldn’t be accessible to code outside your app.

For example, you might have a table that contains some sensitive data. Perhaps some part of a licensing mechanism or internal workings of your app that no one else should have access to. Mark the table as:

Access = Internal;

and only code in your app will be able to access it. Even if someone develops an app that depends on your app they will receive a compile error if they create a variable to the table: “<table> is inaccessible due to its protection level.” Before you ask about RecordRefs – I don’t know, I haven’t tested. I assume that Microsoft have thought of that and prevent another app from opening a RecordRef to an internal table belonging to another app.
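For example, a made-up table holding license details – the object id and names are placeholders:

table 50100 "License Key"
{
    Access = Internal;

    fields
    {
        field(1; "Key Value"; Text[250])
        {
            DataClassification = CustomerContent;
        }
    }
}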

Alternatively you might have a function in a codeunit that shouldn’t be called from outside your app. The function needs to be public so that other objects in your app can call it, but you can mark it as internal to prevent anyone else calling it:

internal procedure SomeSensitiveMethod()
begin
  //some sensitive code that shouldn't be accessible from outside this app
end;

Testing

Cool.

But wait…how do we test this functionality? We develop our tests alongside the app code but split the test codeunits out into a separate app in our build pipeline – because that’s how Microsoft like it for AppSource submissions.

The result is that the tests run fine against the local Docker container that I am developing and testing against. I push my changes to Azure DevOps to create a pull request and…the build fails. My (separate) test app is now trying to access the internal objects of the production app and fails to compile.

The solution is to use the internalsVisibleTo key in app.json of the production app. List one or more apps (by id, name and publisher) that are allowed to access the internals of the production app. More about that here.
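In app.json of the production app that looks something like this – the id, name and publisher values below are placeholders for those of your test app:

"internalsVisibleTo": [
  {
    "id": "11111111-2222-3333-4444-555555555555",
    "name": "My Test App",
    "publisher": "My Publisher"
  }
]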

Maybe you already develop your tests as a separate app and so can copy the app id from app.json of the test app.

In our case we usually generate a new guid for the test app as part of the build process – because we don’t usually care what id it has. For the times we do want to specify the id of the test app we have an environment.json file that holds some settings for the build – Docker image, credentials, translations to test etc. We can set a testappid in that file and include it in the internalsVisibleTo key in app.json.
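environment.json is our own convention rather than anything standard, but for illustration the relevant setting is just a key like this (the guid is a placeholder, matching whatever is listed in internalsVisibleTo):

{
  "testappid": "11111111-2222-3333-4444-555555555555"
}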

Now the build splits the apps into two and creates a test app with the id specified by testappid which compiles and can access internal objects and functions of the production app.

Getting Started with Git Hooks

Git hooks have been on my “to mess about with and learn a little some day” list for a while. It’s the old conundrum: I might use them if I knew what they could do, but I’m not going to learn about them until I’ve got a use for them. A chicken-and-egg problem for developers.

The Problem

Until recently, that is. I wondered:

What is the best way to save a different launch configuration in VS Code for each branch in my repo?

Why would I want to do that? We have branches for versions of our apps compatible with BC14, 15 and 16. I’ve got separate Docker containers of each of those versions to develop and test against. I need to change the launch config when I want to test in another container. Not a great hardship admittedly, but it would be nice to automate somehow.

Source Control?

I could of course just source-control the launch.json file and commit the relevant changes to each branch. I don’t like that though. We ignore the .vscode folder in .gitignore. To my mind anything that happens in that folder is for the benefit of the local development environment. No one else cares what my launch config is – they’ll be pushing code to their own Docker containers or to SaaS sandboxes.

I suppose you could have some local commits that include the launch.json file that you don’t push to the server. Sounds horrible to manage though. Don’t do that.

So we want something that is related to source control – we want a launch config per branch – but doesn’t involve actually committing the config file. Enter hooks.

Hooks

Hooks are points in the Git workflow at which you can execute a custom script. Some are executed server-side, some client-side. You can read the documentation at https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks

When a Git repo is initialised a .git folder is created which contains all the version history of the repo and Git’s guts. If you’re curious to learn more about how Git actually works have a browse through these folders. You might want to load up a presentation on the subject as well – like this: https://www.youtube.com/watch?v=ig5E8CcdM9g

You’ll notice that one of the folders is called hooks. This contains some sample hook files which you can take a look through. The .sample file type stops them from actually doing anything but you get a flavour of what’s possible and how you can use them.

Post-Checkout Example

For our example we’re going to want to use the post-checkout hook. This allows us to create a script that Git will execute after a checkout command. (Side note: I tend to use the integrated terminal in VS Code to execute Git commands, including checking out branches but this hook is also executed if you switch branches with the UI).

First, post-checkout isn’t one of the samples that is created when you init a new repo. You can copy the post-update.sample file instead and rename it to post-checkout (no file extension).
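From the root of the repo, in PowerShell, that’s just:

Copy-Item '.git\hooks\post-update.sample' '.git\hooks\post-checkout'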

Writing the Hook

Okay…so what are we supposed to put in this file? Can we just start typing PowerShell? Erm…no, we can’t. The file looks distinctly Linuxy…*shudder* I’ve got nothing against Linux, I just have next to no idea what I’m doing.

The top line of the file indicates the language that you are going to script in. It’s possible to use Perl, Python, Ruby and a bunch of other stuff, including PowerShell. I won’t pretend to really know what I’m talking about but there is a useful thread on Stack Overflow on the topic.

#!/bin/sh

It is possible to write PowerShell directly into the hook file, but from what I’ve read it seems you need PowerShell Core to run it. I’ve settled for this:

#!/bin/sh

c:/Windows/System32/WindowsPowerShell/v1.0/powershell.exe -File '.git\hooks\post-checkout.ps1' $1 $2 $3

Call powershell.exe to execute the .git\hooks\post-checkout.ps1 file and pass it the three parameters that have been received. Great. How do we know that there are three parameters? And what are they? You can read the documentation here: https://git-scm.com/docs/githooks#_post_checkout

The parameters are:

  • The previous HEAD (the commit that we’ve come from)
  • The new HEAD (the commit that we’ve just checked out)
  • A flag to indicate whether a branch was checked out (it has a value of 1 if it was)

Writing the Script

Now we’re getting somewhere.

The plan is:

  • Save the launch configuration as it stands for the branch that we are coming from
  • Find a saved launch configuration for the branch that we have moved to and overwrite launch.json

What Branch Have We Come From?

This is a subtler question than we might have expected. The parameter that Git passes is the hexadecimal hash of the commit that was previously checked out. Some jumble of numbers and letters that is integral to how Git works but probably meaningless to the developer. We need to convert that hash into a branch. Or branches – potentially more than one branch might be pointing at that commit.

git branch --points-at <commit hash>

The above command will give us zero or more branches that are pointing at a given commit. Note that the current branch is denoted with an asterisk which we’ll need to trim off the front of the name.
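For example, if two branches point at the previous commit the output will look something like this – the asterisk marking the branch that was checked out:

* feature/one
  feature/two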

Which Branch Have We Moved To?

We can find the name of the new, current branch with:

git branch --show-current

Finally, branch names can contain characters that cannot be used in filenames. Specifically we use forward-slashes a lot in our branch names. I’ve got a Convert-BranchToFileName function to clean it up.

Putting all the jigsaw pieces together this is the content of my post-checkout.ps1 file (which is in the .git\hooks directory).

[CmdletBinding()]
param (
    # the three parameters Git passes to the post-checkout hook:
    # previous HEAD, new HEAD, and the branch-checkout flag
    $PreviousHead,
    $NewHead,
    $BranchCheckout
)

function Convert-BranchToFileName() {
    param(
        $branch
    )

    $filename = $branch
    # git branch prefixes the current branch with '* '
    if ($branch.StartsWith('*')) {
        $filename = $branch.Substring(2)
    }

    # strip characters (e.g. forward-slashes) which are not valid in filenames
    $filename = $filename.Split([IO.Path]::GetInvalidFileNameChars()) -join ''
    return $filename.Trim()
}

# make sure the folder we save launch configs into exists
if (!(Test-Path '.vscode\launch')) {
    New-Item '.vscode\launch' -ItemType Directory | Out-Null
}

# save the current launch.json for each branch pointing at the previous HEAD
$branches = git branch --points-at $PreviousHead
foreach ($branch in $branches) {
    if (Test-Path '.vscode\launch.json') {
        $filename = Convert-BranchToFileName $branch
        Copy-Item '.vscode\launch.json' ".vscode\launch\$filename.json" -Force
    }
}

# if a branch (rather than a bare commit) was checked out,
# restore its saved launch config, if one exists
if ($BranchCheckout -eq 1) {
    $newBranch = git branch --show-current
    if (Test-Path ".vscode\launch\$(Convert-BranchToFileName $newBranch).json") {
        Copy-Item ".vscode\launch\$(Convert-BranchToFileName $newBranch).json" '.vscode\launch.json' -Force
    }
}

Result

The result is something like this. When I check out a new branch, the launch.json for the branch I’m leaving is saved to .vscode\launch\<branchname>.json, and the saved json file matching the new branch name (if there is one) overwrites launch.json.

A pretty trivial example you might argue – and you’d be right – but hopefully enough to demonstrate some of the possibilities. More useful examples might include:

  • Cleaning up the source directory between checkouts e.g. deleting .app files from previous builds or removing duplicate apps in the .alpackages folder
  • Checking Git configuration before committing, e.g. is the email address that will be associated with the commit in an acceptable format? Does it match a particular regex pattern?
  • Enforcing some policy about compile errors or warnings before a new commit is made
  • Use your imagination…