Debugging Business Central Tests with AL Test Runner

TL;DR

  1. Install the Test Runner Service app (see https://github.com/jimmymcp/test-runner-service; the app file can be downloaded directly from the repo) or use the “Install Test Runner Service” command from VS Code to install into the Docker container specified in the config file
  2. Set the URL to the test runner service in the testRunnerServiceUrl key of the AL Test Runner config file
  3. Define a debug configuration of request type ‘attach’ in launch.json to attach the debugger to the service tier that you want to debug (should be the same service tier as specified by the testRunnerServiceUrl key)

Overview

From v0.4.0 of the AL Test Runner extension it is possible to debug Business Central tests without leaving Visual Studio Code. There’s a lot of scope for improvements, but if you’re interested in trying it out it’s included in the marketplace version now.

Test Runner Service App

This is a very simple app that exposes a codeunit as a web service which accepts a codeunit ID and test name to run. Those values are passed to a test runner codeunit (with codeunit-level test isolation) to actually run the tests. This is so that the tests are executed in a session type of WebService, which the debugger can attach to (the PowerShell runner creates a session type of ClientService).
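
To give a sense of how that hangs together, this is roughly the shape of a test runner codeunit with codeunit isolation (a sketch of the idea only – the actual objects are in the repo linked above):

codeunit 79151 "Web Service Test Runner"
{
    Subtype = TestRunner;
    TestIsolation = Codeunit;

    trigger OnRun()
    begin
        // run the requested test codeunit with Codeunit.Run here; because
        // this session was started through an OData call the session type
        // is WebService, which the debugger can attach to
    end;
}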

The app is in the per tenant object range: 79150-79160 to be precise (a range picked pretty much at random). If that clashes with some other object ranges present in the database you can clone the repo and renumber the codeunits if you want. The source is here: https://github.com/jimmymcp/test-runner-service

You can use the Install Test Runner Service command in VS Code to automatically download the app and install into the container specified in the AL Test Runner config file.

The app is not code signed so you’ll need to use the -SkipVerification switch when you install it.

testRunnerServiceUrl

A new key is required in the AL Test Runner config file. This specifies the OData endpoint of the test runner service that is exposed by the Test Runner Service app. The service will be called from the VS Code terminal – so consider where the terminal is running and where the service is hosted.

We develop against local Docker containers so the local VS Code instance will be able to access the web service without any trouble. If you develop against a remote Docker host make sure that the OData port is available externally. If you use VS Code remote development remember that the PowerShell session will be running on the VS Code server host.

The url will be in the format:

http[s]://[BC host]:[OData port]/[BC instance]/ODataV4/TestRunner?company=[BC company]

for example against a local Docker container called bc with OData exposed on the default port of 7048 and a company name of CRONUS International Ltd.:

"testRunnerServiceUrl": "http://bc:7048/BC/ODataV4/TestRunner?company=CRONUS%20International%20Ltd."

Debug Configuration

You will need a debug configuration of request type attach in the launch.json file. This should attach the debugger to the same service tier as identified by the testRunnerServiceUrl key. breakOnNext should be set to WebServiceClient. Currently UserPassword is the only supported authentication method.

{
    "name": "Attach bc",
    "type": "al",
    "request": "attach",
    "server": "http://bc",
    "serverInstance": "bc",
    "authentication": "UserPassword",
    "breakOnError": true,
    "breakOnRecordWrite": false,
    "enableSqlInformationDebugger": true,
    "enableLongRunningSqlStatements": true,
    "longRunningSqlStatementsThreshold": 500,
    "numberOfSqlStatements": 10,
    "breakOnNext": "WebServiceClient"
}

Debugging

Codelens actions will be added at the top of test codeunits and before each test method. Set a breakpoint in the test method that you want to debug or allow the debugger to break on an error.

Clicking on Debug Test (Ctrl+Alt+D) will attach the debugger using the first debug configuration specified in launch.json and call the web service to run the test with the Test Runner Service app.

[Animation: attaching the debugger and running a test from VS Code]

Step in/out/over as usual. When execution has finished, if an error was encountered, the error message and call stack will be displayed in the terminal.

Limitations

There are some limitations to running tests in a web service session. Most importantly, TestPage variables are not supported. There may also be some differences in behaviour between tests run in a web service session and tests run through the PowerShell runner.

Tip: Evaluating DateTime with Type Helper

Dates. What a nightmare. Day/Month/Year? Month/Day/Year? 24 hour time? 12 hour time? It’s almost enough to make you sympathetic to the idea of decimal time…almost.

Type Helper codeunit to the rescue. It has a method to allow you to evaluate the text of a date, time or datetime into the corresponding type according to a format that you specify.

AVariant := DateResult; // seed the variant with a Date so Evaluate targets a date
FormatString := 'ddMMyy';
if TypeHelper.Evaluate(AVariant, DateText, FormatString, '') then
  DateResult := AVariant;

The first parameter is of type Variant. The actual data type that the variant contains determines whether the method will attempt to evaluate to a date, time or datetime. Unfortunately, because that parameter is passed by reference (var), you have to declare a variant variable and then assign its value to another variable afterwards – but apart from that it’s pretty self-explanatory.
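
Wrapped up in a helper function the pattern looks something like this (a hypothetical EvaluateDate helper – not part of the standard app):

local procedure EvaluateDate(DateText: Text; FormatString: Text): Date
var
    TypeHelper: Codeunit "Type Helper";
    AVariant: Variant;
    DateResult: Date;
begin
    AVariant := DateResult; // the variant now holds a Date, so Evaluate targets a date
    if TypeHelper.Evaluate(AVariant, DateText, FormatString, '') then
        DateResult := AVariant;
    exit(DateResult);
end;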

See https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings for info about the formats you can use. Don’t do what I did and miss the distinction between lowercase ‘m’ (minute) and uppercase ‘M’ (month) *facepalm*

Record Rename/Modify Considerations

TL;DR

Use table extensions to extend the OnModify trigger rather than OnBefore/AfterModify subscriptions where possible. If you must use subscribers then be aware of some of the unexpected situations they are called in.

One of those situations is when a related table has been renamed. The Modify events are fired for secondary tables, e.g. if an Item record is renamed then all of its Item Ledger Entry records will have OnBeforeModifyEvent and OnAfterModifyEvent fired.

Record Modification

If you need to hang some custom logic off the back of a record modification (in a standard table) then I think you’ve got three main options:

  1. Create a table extension and add an OnModify trigger
  2. Add a subscription to the OnBeforeModifyEvent or OnAfterModifyEvent for the table in question
  3. Subscribe to the events in the Global Triggers codeunit to set the table mask and listen for modifications on the table you are interested in

In general I think that is the order of preference i.e. if you can use a table extension, do.
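
Option 1 is as simple as it sounds. A minimal sketch (the object ID and table are just for illustration):

tableextension 50100 "Item Modify Ext." extends Item
{
    trigger OnModify()
    begin
        // custom logic here – this runs as part of the modify trigger,
        // i.e. only when Modify(true) is called
    end;
}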

Why Use Events Then?

This isn’t really the point of this post – but why might you use events then? In short, when you can’t use a table extension. That will usually be for one of two reasons:

  1. The modify trigger isn’t called i.e. the base app calls Modify(false)
  2. You don’t know which tables you want to work with at design time – your app has some setup to determine which tables to support, perhaps in an integration scenario or something like Change Log Setup

Considerations For Modify Subscriptions

OK, you’ve decided to use event subscriptions to the OnBefore/AfterModify events. Now to the point of the post. There are some things you need to be aware of:

  • They are called for temporary records – most of our subscribers start with a Rec.IsTemporary() check (see the sketch after this list)
  • They are called whether or not the modify trigger has been called – the RunTrigger parameter indicates whether Modify(true) or Modify(false) was called
  • You need to call Rec.Modify explicitly if you make any changes to Rec within an OnAfterModifyEvent subscriber, otherwise your changes will not be persisted
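
Putting those together, a typical subscriber looks something like this (an illustrative sketch against the Item table):

[EventSubscriber(ObjectType::Table, Database::Item, 'OnBeforeModifyEvent', '', false, false)]
local procedure OnBeforeModifyItem(var Rec: Record Item; var xRec: Record Item; RunTrigger: Boolean)
begin
    if Rec.IsTemporary() then
        exit;
    if not RunTrigger then
        exit; // Modify(false) was called
    // custom logic here
end;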

What’s in a Rename?

You probably knew all of that anyway. What I didn’t know until today is that the events are also fired when a parent table is renamed. For example, if you rename an Item record then these events will be fired for each Item Ledger Entry record. Which makes sense and might be exactly what you want.

We hadn’t thought of that and it was the cause of a bug in our app. Shame on us.

(Slightly) More Elegant Error Handling in Business Central

This is an intro post about the Error Message Mgt. codeunit and related objects. NAV has never been brilliant when it comes to error handling, for a couple of reasons.

  1. The error messages themselves sometimes leave a lot to be desired
  2. The whac-a-mole nature of fixing multiple errors by finding one at a time and attempting to post/register again

There isn’t a lot we can do about the standard error messages, but we can write more considerate errors for our users. Describe the problem and guide the user to the solution as much as possible i.e. not “There was nothing to handle”.
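
Something like this gives the user a fighting chance (an illustrative check from a posting routine):

if Item."Gen. Prod. Posting Group" = '' then
    Error('Gen. Prod. Posting Group is missing on item %1. Open the item card and fill it in before posting.', Item."No.");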

What about #2? What if we could line up all the moles so that the user could whack them all in one go?[1] We can use the Error Message Management objects to do just that.

This is a useful post you could take a look at – http://www.mynavblog.com/2019/04/09/how-to-write-error-and-confirm/ – but even having read that I still struggled my way through how to use it and write tests around it. I didn’t find the framework particularly easy or intuitive to work with, so I hope I can save someone else some head-scratching.

[1] Metaphorical moles. I’m not endorsing whacking actual moles.

Scenario

You might want to use this framework when you’ve got some process that could throw errors for multiple different reasons. Usually these are going to be some posting or registering routine for a journal line or a document.

Often there are all sorts of things that can go wrong with those routines – posting date ranges, dimension errors, mandatory fields that have not been populated, missing posting setup, missing no. series… blah blah blah.

Rather than just throwing an error for the first problem we encounter we want to collect them together so that the user can fix them before posting again.

I was going to attempt a single overview post, but I’ve decided against that. I think it will be more useful (hopefully) to work through an example – albeit a silly one – in stages. I’ve got a small app for posting a record of video calls – because that’s what we need right now, more video calls and more admin.

The app adds a journal page to record the platform (Teams, Zoom, WhatsApp or Skype), the type of call, date, duration and participants. Before the journal can be posted there is a deliberately convoluted process to check for various errors. Concisely summarised below:

  1. No. of Participants must not be 0
  2. Duration (mins) must not be 0
  3. Posting Date must not be blank
  4. Posting Date must be within allowed posting dates for the user
  5. Zoom calls cannot be over 40 minutes when the No. of Participants is > 2 – I’m a cheapskate and have got a free account
  6. Teams cannot be used for a call type of Family Quiz – surely no family is that corporate?
  7. WhatsApp should not be used for groups of more than 4 – it’s bad enough with 2
  8. WhatsApp should not be used for a call type of Customer Demo – I mean, you don’t…do you?
  9. Skype isn’t used for more than 2 participants – I know technically it can be…it just isn’t
  10. Family quizzes can’t be held Monday – Thursday
  11. A call type of Daily Team Call cannot be more than 45 minutes long – you need to find a smaller team to have daily calls with
  12. A call type of Daily Team Call cannot have more than 30 participants – see #11

While this is daft, it is an example of how a journal might not be able to be posted for lots of different reasons. Normally we expect the user to fix those errors one at a time and, if they’ve still got the will to live, post the batch when they have resolved them all.

At the moment I’m just using TestField and Error to throw errors when a journal line is invalid, as in the snippet below. Over the next few posts I’m going to see if I can use the Error Message Mgt. objects to build a list of errors and display them all at once. Then we’re going to talk about how to test this behaviour.
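
The current checks are along these lines (an illustrative snippet – the record and field names come from my demo app):

VideoCallLine.TestField("No. of Participants");
VideoCallLine.TestField("Duration (mins)");
if (VideoCallLine.Platform = VideoCallLine.Platform::Zoom) and
   (VideoCallLine."Duration (mins)" > 40) and
   (VideoCallLine."No. of Participants" > 2)
then
    Error('Zoom calls cannot be longer than 40 minutes when there are more than 2 participants.');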

The source code is here: https://github.com/jimmymcp/error-message-mgt Disclaimer: it sucks. This is an example of using the error handling tools, not of how to write a good journal.

Sales Header Posting

In the meantime, if you want to see an example of this sort of error collection in the base application then look at SendToPosting on the Sales Header table.

ErrorMessageMgt.Activate(ErrorMessageHandler);
ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, '');
IsSuccess := CODEUNIT.Run(PostingCodeunitID, Rec);
if not IsSuccess then
  ErrorMessageHandler.ShowErrors;
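
The supporting variables in that snippet are instances of the error handling objects from the base application:

var
    ErrorMessageMgt: Codeunit "Error Message Management";
    ErrorMessageHandler: Codeunit "Error Message Handler";
    ErrorContextElement: Codeunit "Error Context Element";
    IsSuccess: Boolean;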

Then in the Sales-Post codeunit you’ll see this kind of thing (this is from CheckAndUpdate):

ErrorMessageMgt.PushContext(ErrorContextElement, RecordId, 0, CheckSalesHeaderMsg);
CheckMandatoryHeaderFields(SalesHeader);
if GenJnlCheckLine.IsDateNotAllowed("Posting Date", SetupRecID) then
  ErrorMessageMgt.LogContextFieldError(...);

Testing Internal Functionality

Internal Access Modifier

We’ve had access modifiers in Business Central for a little while now. You can use them to protect tables, fields, codeunits and queries that shouldn’t be accessible to code outside your app.

For example, you might have a table that contains some sensitive data. Perhaps some part of a licensing mechanism or internal workings of your app that no one else should have access to. Mark the table as:

Access = Internal;

and only code in your app will be able to access it. Even if someone develops an app that depends on your app they will receive a compile error if they declare a variable of that table type: “<table> is inaccessible due to its protection level.” Before you ask about RecordRefs – I don’t know, I haven’t tested. I assume that Microsoft have thought of that and prevent another app from opening a RecordRef to an internal table belonging to another app.
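
For instance, a table holding sensitive licensing data might be declared like this (a hypothetical example):

table 50101 "License Key"
{
    Access = Internal;
    DataClassification = CustomerContent;

    fields
    {
        field(1; "License Key"; Text[250]) { }
    }
}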

Alternatively you might have a function in a codeunit that shouldn’t be called from outside your app. The function needs to be public so that other objects in your app can call it, but you can mark it as internal to prevent anyone else calling it:

internal procedure SomeSensitiveMethod()
begin
  //some sensitive code that shouldn't be accessible from outside this app
end;

Testing

Cool.

But wait…how do we test this functionality? We develop our tests alongside the app code but split the test codeunits out into a separate app in our build pipeline – because that’s how Microsoft like it for AppSource submissions.

The result is that the tests run fine against the local Docker container that I am developing and testing against. I push my changes to Azure DevOps to create a pull request and…the build fails. My (separate) test app is now trying to access the internal objects of the production app and fails to compile.

The solution is to use the internalsVisibleTo key in app.json of the production app. List one or more apps (by id, name and publisher) that are allowed to access the internals of the production app. More about that in the Microsoft docs.
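
In app.json of the production app that looks something like this (the id, name and publisher values here are illustrative):

"internalsVisibleTo": [
    {
        "id": "c6b3a4e0-1a2b-4c5d-8e9f-0a1b2c3d4e5f",
        "name": "My App Tests",
        "publisher": "My Publisher"
    }
]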

Maybe you already develop your tests as a separate app and so can copy the app id from app.json of the test app.

In our case we usually generate a new guid for the test app as part of the build process – because we don’t usually care what id it has. For the times we do want to specify the id of the test app we have an environment.json file that holds some settings for the build – Docker image, credentials, translations to test etc. We can set a testappid in that file and include it in the internalsVisibleTo key in app.json.
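
The environment.json file then gains a key along these lines (testappid is the key we read in the build; the other key is just an example of the kind of settings the file holds):

{
    "dockerImage": "mcr.microsoft.com/businesscentral/sandbox",
    "testappid": "c6b3a4e0-1a2b-4c5d-8e9f-0a1b2c3d4e5f"
}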

Now the build splits the apps into two and creates a test app with the id specified by testappid which compiles and can access internal objects and functions of the production app.