Testing Internal Functionality

Internal Access Modifier

We’ve had access modifiers in Business Central for a little while now. You can use them to protect tables, fields, codeunits and queries that shouldn’t be accessible to code outside your app.

For example, you might have a table that contains some sensitive data. Perhaps some part of a licensing mechanism or internal workings of your app that no one else should have access to. Mark the table as:

Access = Internal;

and only code in your app will be able to access it. Even if someone develops an app that depends on your app they will receive a compile error if they declare a variable of that table: “<table> is inaccessible due to its protection level.” Before you ask about RecordRefs – I don’t know, I haven’t tested. I assume that Microsoft have thought of that and prevent another app from opening a RecordRef to an internal table belonging to another app.
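For illustration, a minimal sketch of such a table (the object name, number and fields are hypothetical):

table 50100 "License Detail"
{
    // Only objects inside this app can read or write this table
    Access = Internal;

    fields
    {
        field(1; "License Key"; Text[50]) { }
        field(2; "Expires On"; Date) { }
    }

    keys
    {
        key(PK; "License Key") { Clustered = true; }
    }
}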

Alternatively you might have a function in a codeunit that shouldn’t be called from outside your app. The function needs to be public so that other objects in your app can call it, but you can mark it as internal to prevent anyone else calling it:

internal procedure SomeSensitiveMethod()
begin
  //some sensitive code that shouldn't be accessible from outside this app
end;

Testing

Cool.

But wait…how do we test this functionality? We develop our tests alongside the app code but split the test codeunits out into a separate app in our build pipeline – because that’s how Microsoft like it for AppSource submissions.

The result is that the tests run fine against the local Docker container that I am developing and testing against. I push my changes to Azure DevOps to create a pull request and…the build fails. My (separate) test app is now trying to access the internal objects of the production app and fails to compile.

The solution is to use the internalsVisibleTo key in app.json of the production app. List one or more apps (by id, name and publisher) that are allowed to access the internals of the production app. More about that here.
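The relevant part of app.json looks something like this (the id, name and publisher are placeholders for the test app’s details):

{
  "internalsVisibleTo": [
    {
      "id": "00000000-0000-0000-0000-000000000000",
      "name": "My App Tests",
      "publisher": "My Publisher"
    }
  ]
}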

Maybe you already develop your tests as a separate app and so can copy the app id from app.json of the test app.

In our case we usually generate a new guid for the test app as part of the build process – because we don’t usually care what id it has. For times we do want to specify the id of the test app we have an environment.json file that holds some settings for the build – Docker image, credentials, translations to test etc. We can set a testappid in that file and include it in the internalsVisibleTo key in app.json.
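As an illustration only – the exact shape of the file will vary – a cut-down environment.json might contain something like this (testappid is the key mentioned above, the other keys are just indicative of the sort of settings we keep there):

{
  "dockerImage": "mcr.microsoft.com/businesscentral/sandbox",
  "translations": ["da-DK", "de-DE"],
  "testappid": "11111111-1111-1111-1111-111111111111"
}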

Now the build splits the apps into two and creates a test app with the id specified by testappid which compiles and can access internal objects and functions of the production app.

Performance of Test Code

Let’s talk about the performance of the test code that we write for Business Central. What do I mean by “performance” and how can we improve it?

Defining “Performance”

Obviously, before we set out to improve something we need to have an idea of what it is we’re trying to optimise for. I’m coming to think of the performance of test code in a couple of key ways:

  1. How easy/quick is it to write test code?
  2. How quickly do the tests run?

Performance of Writing Tests

I suppose none of the below points are specific to test code. They are relevant to any sort of code that we are writing but we can be more inclined to neglect them for test code than production code. If you embrace any sort of automated testing discipline you’re going to spend a significant proportion of your time reading and writing test code – perhaps even as much as you do on production code. It is well worth investing a little time in cleaning up the code and making it easier to read and maintain.

Comments

Say what you like about comments in production code – variable names should declare the intent, comments are evil blah blah – I do find a few comments valuable in a test.

In fact, I write them first. Given some set of circumstances, when this happens then this is the expected result. Writing those sentences first helps to be clear about what I’m trying to test and what the desired behaviour actually is.

Of course the code in between those comments should be readable and easy to follow – but if you are diligent with a few comments per test you can describe the expected behaviour of that part of the system without having to read any code.
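For example, a test might start life as little more than the comments, with the code growing in between them afterwards (the helper methods here are hypothetical):

[Test]
procedure PostingWhseShipmentCreatesWhseEntries()
var
    Item: Record Item;
    Location: Record Location;
begin
    // [GIVEN] an item with stock in a location that requires shipment
    CreateItemWithStockInShipmentLocation(Item, Location);

    // [WHEN] the warehouse shipment for a sales order of that item is posted
    PostWhseShipmentForSalesOrder(Item, Location);

    // [THEN] warehouse entries exist for the shipped item and location
    VerifyWhseEntriesExist(Item, Location);
end;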

Grouping Tests Logically

I appreciate the situation is different for VARs but as an ISV we have large object ranges to do our development in. There is no reason for us to bundle unrelated code into the same codeunit. If we are starting work on something separate from the existing business logic then it belongs in its own codeunit. In which case, why wouldn’t the corresponding tests also go in their own codeunit?

Ideally I want to be able to glance down the list of test codeunits that we have and see logical grouping of tests that correspond to recognisable entities or concepts in the production code. If I want to see how our app changes the behaviour of warehouse receipts I can look in WhseReceiptTests.Codeunit.al.

Refactoring Into Library Codeunits

As soon as you start writing tests you’ll start working with the suite of library codeunits provided by Microsoft. You’ll notice that they are separated into different areas of the system e.g. Library – Sales, Library – Warehouse, Library – Manufacturing and so on.

Very likely you’ll want to create your own library codeunit to:

  • Initialise some setup, perhaps create your app’s setup record, create some No. Series etc.
  • Create new records required by your tests – it is useful to follow the convention LibraryCodeunit.CreateXYZ(var XYZ: Record XYZ) (see the sketch after this list)
  • Consider having those Create functions return the primary key of the new record as well so the result can be used inline in the test e.g.
    • LibraryCodeunit.CreateXYZ(var XYZ: Record XYZ): Code[20]
    • LibraryCodeunit.CreateXYZNo(): Code[20]
  • Use method overloads for the create functions – have an overload with more parameters when you need to specify extra field values but keep a simple overload for when you don’t
  • Identify blocks of code that are often required in tests and consider moving them to a library method e.g. creating an item with some bespoke fields populated in a certain way
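To illustrate a few of those conventions, here is a minimal sketch of a library codeunit. The Widget table, its fields and the object number are hypothetical; I’m assuming the standard Library – Utility codeunit is available to generate a random primary key.

codeunit 50101 "Library - Widget"
{
    var
        LibraryUtility: Codeunit "Library - Utility";

    // Simple overload – create a widget with default values and return its primary key
    procedure CreateWidget(var Widget: Record Widget): Code[20]
    begin
        Widget.Init();
        Widget."No." := LibraryUtility.GenerateRandomCode(Widget.FieldNo("No."), Database::Widget);
        Widget.Insert(true);
        exit(Widget."No.");
    end;

    // Overload with extra parameters for tests that need specific field values
    procedure CreateWidget(var Widget: Record Widget; NewDescription: Text[100]): Code[20]
    begin
        CreateWidget(Widget);
        Widget.Validate(Description, NewDescription);
        Widget.Modify(true);
        exit(Widget."No.");
    end;

    // Convenience function for when the test only needs the primary key
    procedure CreateWidgetNo(): Code[20]
    var
        Widget: Record Widget;
    begin
        exit(CreateWidget(Widget));
    end;
}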

Having a comprehensive library codeunit brings two benefits:

  1. Tests are easier and faster to write if you already have library methods to implement the Given part of the test
  2. Less code in the tests, making them easier to read

Performance of Running Tests

First, why do we care about how long tests take to run? Does it really matter if your test suite takes an extra minute or two to run?

Obviously, we want our builds to complete as quickly as possible, while still performing all the checks and steps that we want to include. The longer a build takes the more likely another one is going to be queued at the same time and eventually someone is going to end up having to wait. We’ve got a finite number of agents to run simultaneous builds (we host our own – more on that here if you’re curious).

But that isn’t the biggest incentive.

I’m a big fan of running tests while I’m developing – both new tests that I’m writing to cover my new code and existing tests (more on that here). I usually run:

  • the test that I’m working on very frequently (at least every few minutes – see it fail, see it pass)
  • all the tests in that codeunit frequently (maybe each time I’ve finished with a new test)
  • the whole test suite every so often (at least 2-3 times before pushing my work and creating a pull request)

After all, if you’ve got the tests, why not run them? I should know sooner rather than later if some code that I’ve changed has broken something. If the whole test suite takes 60 seconds to run that’s fine. If it takes 10 minutes that’s more of a problem.

In that case I’ll be more inclined to push my changes to the server without waiting, keep a build agent busy for half an hour, start working on something else and then get an email saying the build has failed. Something I could have realised and fixed if I’d run the test suite before I pushed my changes.

So, how to make them faster?

Minimal Setup

First, only create the scenario that is sufficient for your test. For example, we work with warehousing functionality a lot. If we’re testing something to do with warehouse shipments do we need a location with advanced warehousing, zones, bin types, bins, warehouse employees…?

Probably not. Likely I can create a location without Bin Mandatory or Requires Pick and still create a sufficient test.

If you need ledger entries to test with you may be able to create and post the relevant journals rather than creating documents and posting them. Creating an item ledger entry by posting an item journal line is faster than posting a sales order.
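As a rough sketch of that idea (not production code – it assumes an Item record already exists and skips document numbering niceties), a positive adjustment can be posted straight through the item journal posting codeunit:

local procedure PostPositiveAdjustment(Item: Record Item; Qty: Decimal)
var
    ItemJournalLine: Record "Item Journal Line";
begin
    // Build an item journal line in memory and post it directly –
    // far quicker than creating and posting a sales document
    ItemJournalLine.Init();
    ItemJournalLine.Validate("Posting Date", WorkDate());
    ItemJournalLine.Validate("Entry Type", ItemJournalLine."Entry Type"::"Positive Adjmt.");
    ItemJournalLine.Validate("Item No.", Item."No.");
    ItemJournalLine.Validate(Quantity, Qty);
    ItemJournalLine."Document No." := 'TEST';
    Codeunit.Run(Codeunit::"Item Jnl.-Post Line", ItemJournalLine);
end;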

Or, you probably want to prevent negative inventory in real life – but does that matter for your test? Save yourself the trouble of having to post some inventory before shipping an item and just allow it to go negative.

Try to restrict the setup of your tests to what is actually essential for the scenario that you are testing. Answering that question is, in itself, a useful thought process.

Shared Fixture

Better yet, set something up once per test codeunit and reuse it in each of the tests. This is what Luc van Vugt refers to as a “shared fixture”. You should check out his blog for more about that.

I feel a little mixed about this. I like the idea that each test is entirely responsible for creating its own given scenario and isn’t dependent on anything else, but there is no denying that this is faster. Finding a posted sales invoice that already exists is much faster than creating a customer, item and sales order and then shipping and invoicing it.

No Setup

What is even faster than setting up some data one time? Doing it no times. Depending on what you are testing you may just be able to insert the records you need or call field validation on a record without inserting it.

If I’m testing that validating a field behaves in a certain way I may not need a record to actually be inserted in the table.

Alternatively, if you need a sales invoice header to test with you might be able to just initialise it and call SalesInvHeader.Insert(). It feels so wrong – but if your test just needs a record and doesn’t care where it came from, who cares? It will all be rolled back after the tests have run anyway.
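Something along these lines (the field values are only illustrative):

SalesInvoiceHeader.Init();
SalesInvoiceHeader."No." := 'TEST0001';
SalesInvoiceHeader."Sell-to Customer No." := Customer."No.";
SalesInvoiceHeader."Posting Date" := WorkDate();
SalesInvoiceHeader.Insert();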

Testing Against a Remote Docker Host with AL Test Runner

Apologies for another post about AL Test Runner. If you don’t use or care about the extension you can probably stop reading now and come back next time. It isn’t my intention to keep banging on about it – but the latest version (v0.2.1) does plug a significant gap.

Next time I’ll move onto a different subject – some thoughts about how we use Git to manage our code effectively.

Developing Against a Remote Docker Container

While I still prefer developing against a local Docker container I know that many others publish their apps to a container hosted somewhere else. In which case your options for running tests against that container are:

  • Using the Remote Development capability of VS Code to open a terminal and execute PowerShell on the remote host – discussed here and favoured by Tobias Fenster (although his views on The Beautiful Game may make you suspicious of any of his opinions 😉)
  • Enabling PS-Remoting and opening a PowerShell session to the host to execute some commands over the network – today’s topic

Again, shout out to Max and colleagues for opening a pull request with their changes to enable this and for testing these latest mods.

Enable PS Remoting

Firstly, you’re going to need to be able to open a PowerShell session to the Docker host with:

New-PSSession <computer name>

I won’t pretend to understand the intricacies of setting this up in different scenarios – you should probably read the blog of someone who knows what they are talking about if you need help with it.

The solution will likely include:

  • Opening a PowerShell session on the host as administrator and running Enable-PSRemoting
  • Making sure the firewall is open to the port that you are connecting over
  • Passing a credential and possibly an authentication type to New-PSSession

To connect to my test server in Azure I run the following:

New-PSSession <server name> -Credential (Get-Credential) -Authentication Basic

AL Test Runner Config

There are several new keys in the AL Test Runner config file to accommodate remote containers. There are also a few new commands to help create the required config.

The Open Config File command will open the config JSON file or create it, if it doesn’t already exist. Set Container Credential and Set VM Credential can be used to set the credentials used to connect to the container and the remote host respectively.

The required config keys are:

Sample AL Test Runner config
  • dockerHost – the name of the server that is hosting the Docker containers. This name will be used to create the remote PowerShell session. Leaving this blank implies that containers are hosted locally and the extension will work as before
  • vmUserName / vmSecurePassword – the credentials used to connect to the Docker host
  • remoteContainerName – the name of the container to run tests against
  • newPSSessionOptions – switches and parameters that should be added to New-PSSession to open the session to the Docker host (see below)

The extension uses New-PSSession to open the PowerShell session to the Docker host. The ComputerName and Credential parameters will be populated from the dockerHost and vmUserName / vmSecurePassword config values respectively.

Any additional parameters that must be specified should be added to the newPSSessionOptions config key. Since in my case I run

New-PSSession <server name> -Credential <credential> -Authentication Basic

I need to set newPSSessionOptions in the config file to “-Authentication Basic”. You can use this key for -useSSL, -Port, -SessionOption or whatever else you need to open the session.
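Putting that together, my config ends up looking something like this (the server, user and container names are placeholders and the vmSecurePassword value is truncated):

{
  "dockerHost": "my-docker-host",
  "vmUserName": "someuser",
  "vmSecurePassword": "01000000d08c9ddf...",
  "remoteContainerName": "bcsandbox",
  "newPSSessionOptions": "-Authentication Basic"
}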

With the config complete you should be able to execute tests, output the results and decorate the test codeunits as if you were working locally. Beautiful.

As ever, feedback, suggestions and contributions welcome. Hosted on GitHub.

Remote Development with VS Code and AL Test Runner

The most obvious limitation of the AL Test Runner extension for VS Code has been that you need to run VS Code on the Docker host machine. That’s fine for us because we do all our development on local Docker containers but I’m aware that this isn’t everyone’s preferred process.

Local Repo and VS Code, Remote Docker Host

I guess if you’re not hosting the Docker container locally then you are hosting it on some remote server – maybe on your own hardware or maybe in Azure or another cloud. To get AL Test Runner working in this scenario you’d need the AL Test Runner PowerShell module imported on the host and PS Remoting enabled to execute PowerShell on the host from your local VS Code terminal.

This post isn’t about getting that working. It’s not supported yet – although I do have a pull request to review from a team that are using it like this (thanks Max).

Remote Development with VS Code

An alternative approach is to use remote development with VS Code. The files that you are working on and the Docker host are remote but you are using VS Code locally. Kind of like Remote Desktop apps – the benefit of running on a server and using its resources but with the experience of an app that is running locally.

Install OpenSSH Server on the remote machine, install some VS Code extensions (using an insider build of VS Code – for now) and connect over SSH to the machine. Some magic happens at the other end and a few spells, invocations and minutes later a VS Code Server is installed on the remote machine.

I won’t go into the setup. I mostly followed this excellent blog post from Tobias Fenster and used aka.ms/getbc to create a new VM in Azure to test with.

It allows you to work in a local VS Code window but access the file system of the server and execute commands on it. Install VS Code extensions and PowerShell modules as if you were working locally and they will be installed on the remote machine.

It is smooth. Impressively so. You quickly forget that you aren’t just working with files and extensions on your local machine. This clip shows:

  • Selecting a folder on the remote server from my recent history
  • Connecting via SSH
  • Entering the password for the remote account that I am authenticating as (a local account as my VM in Azure is not joined to a domain)
  • Running all the tests in the project and working with VS Code as I would do locally
Connecting to remote host via SSH and running some tests

I still prefer actual local development, but I have to admit that this is pretty great.

AL Test Runner

I spun up a remote development scenario out of curiosity but I also wanted to test how/if AL Test Runner would work. It works almost seamlessly. Almost. There is just a little stretch of exposed stitching – but it’s easy to work around.

Local and remote extensions in VS Code

Once you’ve opened a remote development window you’ll need to:

  • Install the AL extension
  • Install the AL Test Runner extension
  • Install the navcontainerhelper PowerShell module (you can use Install-Module in the integrated terminal)

If you try to run tests you’ll find that it appears to hang indefinitely. Actually it has popped a window to enter the credentials to connect to the server with – but you can’t see it and it won’t continue until you dismiss the window.

If you’re interested in trying it the workaround for now is to manually edit the config.json file in the .altestrunner folder.

When you first install the extension you won’t have a config.json file. Running a test – any test – is enough to create it. You’ll also notice that the command appears to hang in the terminal. You can kill that terminal once the file has been created.

Open config.json and enter the userName to authenticate with BC. Next you need to enter the securePassword (this is not your plain text password). You can get the secure password by running the following in the terminal:

ConvertTo-SecureString 'your password' -AsPlainText -Force | ConvertFrom-SecureString

Copy the huge string from the output into the securePassword key of the config file. After that you should be good to go.
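The relevant part of config.json then looks something like this (the securePassword value is the long string from the command above, truncated here, and the user name is just an example):

{
  "userName": "admin",
  "securePassword": "01000000d08c9ddf..."
}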

At some point I’ll also work on the ability to use remote PowerShell to execute tests on a remote Docker host from your local machine. After all, Max has already done most of the work for me 🙂

Scheduling Azure DevOps Pipelines with YAML

I had the pleasure of presenting some thoughts about developing apps for SaaS with James Crowter to the Dutch Dynamics Community yesterday. We were sharing some of our experiences of the maintenance challenge that comes with having published apps on AppSource.

How can you continuously test your apps against past, current and upcoming versions of Business Central? Perhaps two ways:

  1. Slowly drive yourself to despair with the monotony of creating different versions of Business Central environments and testing manually
  2. Automate as much of the tedious infrastructure and repetitive testing work as possible so you can concentrate on some fun stuff instead

We have two main reasons to trigger the execution of the pipeline for a given branch of an app in Azure DevOps:

  1. We have changed some code
  2. Microsoft have changed some code that we depend on

If we have changed some of our own code we should run it through the pipeline to ensure that it passes our checks, that the automated tests pass and that the resulting .app files are versioned and signed correctly. It is easy to overlook some of these tasks and/or inadvertently break some existing functionality when making our changes. The pipeline is there to have our back.

At the same time, Microsoft are making changes to the base and system applications that we rely on. Even if we don’t have any planned changes for our apps we may need to make some code changes to accommodate what Microsoft have done to the ground underneath our feet.

With a bit of luck we’ll see this sort of thing:

warning AL0432: Method 'FilterReservFor' is marked for removal. Reason: Replaced by ProdOrderLine.SetReservationFilters(FilterReservEntry)

warning AL0432: Method 'CreateReservEntryFor' is marked for removal. Reason: Replaced by CreateReservEntryFor(ForType, ForSubtype, ForID, ForBatchName, ForProdOrderLine, ForRefNo, ForQtyPerUOM, Quantity, QuantityBase, ForReservEntry)

We’re using a method that Microsoft are making obsolete and which will be removed at some point in the future. No need to panic, but be aware that you should switch to the new method. Very civilised. Thanks.

With less luck we’ll find that Microsoft have introduced a change that breaks our app in some way – with a compilation error or unintended behaviour. Either way, it’s something that we want to know about.

Scheduling pipelines can help with that.

Typically we:

  • Develop against a W1 version of the latest sandbox image, running pipelines for our latest commits against mcr.microsoft.com/businesscentral/sandbox with a continuous integration trigger
  • Migrate changes backwards to BC14 and BC13 compatible versions of our apps, run pipelines against appropriate Docker images for those versions
  • Have separate branches which we rebase onto the latest commit to run pipelines against bcinsider.azurecr.io/bcsandbox and bcinsider.azurecr.io/bcsandbox-master with a schedule

The continuous integration trigger is straightforward enough. At the top of our .azure-pipelines.yml we have:

trigger:
  - '*'

The schedule is defined in a separate section of the yml file, like this:

schedules:
  - cron: 0 3 * * Sun
    displayName: Schedule insider builds
    branches:
      include: ['build/insider', 'build/insider-master']
    always: true

Those branches are the ones that are set to build against the insider Docker images. I hadn’t come across cron before, but it’s pretty simple. The schedule is defined as:

  • Minute
  • Hour
  • Day of month
  • Month
  • Day of week

Our schedule comes out as 03:00 every Sunday. Asterisks stand for any value. https://crontab.guru/ is useful for getting your head around the format.
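For clarity, mapping our expression onto those fields:

  - cron: 0 3 * * Sun
    # minute 0, hour 3, any day of month, any month, on Sunday
    # i.e. 03:00 every Sunday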

The branches key defines which branches are included in the schedule and the always key indicates that we always want to run the pipeline, even if there haven’t been any code changes since it was last run.