AL Test Runner for Visual Studio Code

TL;DR

I’ve written an extension for VS Code to help run your AL tests in local Docker containers. Search for “AL Test Runner” in the extension marketplace. Feedback, bugs and feature suggestions are all gratefully received on the GitHub repo or at james@jpearson.blog

Intro

As soon as Freddy added the capability to execute automated tests to the navcontainerhelper module I was excited about the potential for:

  1. Making test execution in our build pipeline simpler and more reliable
  2. Running tests from Visual Studio Code as part of the development cycle

I’ve written about both aspects in the past – most recently about #1, incorporating automated tests into your Azure DevOps pipeline.

This post is about #2 – incorporating running tests as early as possible into your development cycle.

Finding Bugs ASAP

You’ve probably heard the idea – and it’s common sense even if you haven’t – that the cost of finding a bug in your software increases the later in the development/deployment cycle you find it.

If you realise you made a silly mistake in code that you wrote 2 minutes ago – there’s likely no harm done. Realise there is a bug in software that is now live in customers’ databases and the implications could be much greater. Potentially annoyed customers, data that now needs fixing, support cases, having to rush out a hotfix etc.

We’ve all been there. It’s not a nice place to visit. I once deleted all the (hundreds of thousands of) records in the Purch. Rcpt. Line table with a Rec.DELETEALL on a temporary table…turns out it wasn’t temporary…and I was working in the live database.

Writing automated tests can help catch problems before you release them out into the wild. They force you to think about the expected behaviour of the code and then test whether it actually behaves like that. Hopefully if the code that we push to a branch in Azure DevOps has a bug it will cause a test to fail, the artifacts won’t be published, the developer will get an email and the customer won’t be the hapless recipient of our mistake. No harm done.

However, the rising cost of finding a bug over time still applies. Especially if the developer has started working on something else or gone home. Getting your head back into the code, then reproducing, finding and fixing the bug are all harder if you’ve had a break from the code than if you’d gone looking for it straight away.

Running Tests from VS Code

That’s why I’m keen that we run tests from VS Code as we are writing them. Write a test, see it fail, write the code, see the test pass, repeat.

I’ve written about this before. You can use tasks in VS Code to execute the required PowerShell to run the tests. The task gives you access to the current file and line number so that you can do fancy stuff like running only the current test or test codeunit.
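For example, a task along these lines in tasks.json will pass the current file and line number to a script of your own (RunCurrentTest.ps1 and its parameters are hypothetical here – substitute whatever script parses the file and calls the test runner):

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Run Current Test",
      "type": "shell",
      "command": "powershell.exe",
      "args": [
        "-File", "${workspaceFolder}\\RunCurrentTest.ps1",
        "-FileName", "${file}",
        "-LineNumber", "${lineNumber}"
      ]
    }
  ]
}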

AL Test Runner

However, I was keen to improve on this and so have started work on a VS Code extension – AL Test Runner.

Running the current test with AL Test Runner and navcontainerhelper

The goals are to:

  • Make it as simple as possible to run the current test, tests in the current codeunit or all tests in the extension with commands and keyboard shortcuts
  • Cache the test results
  • Decorate test methods according to the latest test results – pass, fail or untested
  • Provide extra details e.g. error message and callstack when hovering over the test name
  • Add a snippet to make it easier to create new tests with placeholders for GIVEN, WHEN and THEN statements

Important: this is for running tests with the navcontainerhelper PowerShell module against a local Docker container. Please make sure that you are using the latest version of navcontainerhelper.
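The module is on the PowerShell Gallery, so updating is a one-liner:

Install-Module navcontainerhelper -Force

(or Update-Module navcontainerhelper if you installed it that way originally).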

Getting Started

  • Download the extension from the extension marketplace in VS Code and reload the window.
  • Open a folder containing an AL project
  • Open a test codeunit – you should notice that the names of test methods are decorated with an amber background (as there are no results available for those tests yet)
    • The colours for passing, failing and untested tests are configurable if you don’t like them or they don’t fit with your VS Code theme. Alternatively you can turn test decoration off altogether if you don’t like it
  • Place the cursor in a test method and run the “AL Test Runner: Run Current Test” command (Ctrl+Alt+T)
  • You should be prompted to select a debug configuration (from launch.json), company name, test suite name and credentials as appropriate (depending on whether you’re running BC14 or BC15, whether you have multiple companies, the authentication type etc.)
    • I’ve noticed that sometimes the output isn’t displayed in the new terminal when it is first created – I don’t know why. Subsequent commands always seem to show up fine 🤷‍♂️
  • Use the “ttestprocedure” snippet to create new test methods – it expands to a skeleton like the sketch below
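For reference, the snippet gives you something along these lines (a sketch – the exact placeholders may differ between versions of the extension):

[Test]
procedure ThisIsATest()
// [SCENARIO] describe the behaviour under test
begin
    // [GIVEN] the setup required for the test
    // [WHEN] the action being tested
    // [THEN] verify the expected result
end;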

.gitignore

If you’re using Git then I’d recommend adding the .altestrunner folder to your .gitignore file:

.altestrunner/

Committing the config file and the test results xml files doesn’t feel like a great idea.

Prompting the User for Input with PowerShell

Sometimes you need to prompt the user to provide some value before you can complete your PowerShell script. You’ve got a few different options depending on what you’re asking the user to select from.

Parameters

Setting a parameter as mandatory without providing a value will prompt the user to enter one, like this:

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [string]$ImportantParameter
  )
}

Setting the parameter type ([string] in this case) isn’t essential but will help validate that the input is at least of the right type. The trouble with users is that they can, and will, enter any old nonsense as the parameter value and you need to be able to handle it.
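Calling the function without supplying the parameter then prompts for it in the console, something like:

Invoke-AmazingPowerShellFunction

cmdlet Invoke-AmazingPowerShellFunction at command pipeline position 1
Supply values for the following parameters:
ImportantParameter: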

The ValidateSet attribute helps out where you have a fixed set of values that are the only valid ones.

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [ValidateSet('This','Or This','Or Possibly This')]
    [string]$ImportantParameter
  )
} 

If you don’t know at design-time what the valid options are going to be then you need a different approach.

Out-GridView

Out-GridView has an OutputMode parameter which allows you to specify whether the user should be able to select a value and if so, a single value or multiple values. It also allows you to set a title for the window and provides a filter to help the user find the right value. Good for when there is a lot to choose from. We use it, for example, to choose a project from Azure DevOps.

In passing, I’ve also found Out-GridView useful when working with complex types – e.g. a web service response – when I just want to browse the values in the object. You can pipe anything to it and it will render it into a nice grid.

Write-Host ("You selected {0}" -f ('1','2','3' | Out-GridView -OutputMode Single -Title 'Please select a value'))

Roll Your Own

Recently I wanted to prompt the user to make a selection between some options in the terminal. In my experience the Out-GridView window doesn’t always open in the foreground and, if you’re using multiple monitors, won’t necessarily open near the window you’re executing the script in. I thought I’d try keeping the focus in the terminal window instead.

I couldn’t find anything already in PowerShell to print a list of options and prompt the user to choose one, so I wrote the below. I’d be interested to know if I’ve missed something obvious already built in though.

It takes a collection of strings that represent the options to choose between and some text to prompt the user with. The function prints the options with numbers next to them, waits for input from the user with Read-Host and matches it to their selection.

0 is hard-coded as a cancel option and will return an empty string, otherwise the string of the user’s selection is returned.

function Get-SelectionFromUser {
    param (
        [Parameter(Mandatory=$true)]
        [string[]]$Options,
        [Parameter(Mandatory=$true)]
        [string]$Prompt
    )

    [int]$Response = 0
    [bool]$ValidResponse = $false

    while (!($ValidResponse)) {
        [int]$OptionNo = 0

        Write-Host $Prompt -ForegroundColor DarkYellow
        Write-Host "[0]: Cancel"

        # print each option with the number the user should type to select it
        foreach ($Option in $Options) {
            $OptionNo += 1
            Write-Host ("[$OptionNo]: {0}" -f $Option)
        }

        # only accept an integer between 0 and the number of options
        if ([Int]::TryParse((Read-Host), [ref]$Response)) {
            if ($Response -eq 0) {
                return ''
            }
            elseif (($Response -ge 1) -and ($Response -le $OptionNo)) {
                $ValidResponse = $true
            }
        }
    }

    return $Options[$Response - 1]
}
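Using it looks like this:

$Container = Get-SelectionFromUser -Options ('bcsandbox','test','build') -Prompt 'Please select a container'
if ($Container -eq '') {
    # empty string = the user chose the cancel option
    return
}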

Satisfying Your Case-Sensitive Obsession with Regex

Obsession is probably a little strong, but I do like tidy code. You know – proper indentation, a sensible number of comments (which can be more than none but there shouldn’t be as much comment as code) and good names. Hungarian notation brings me out in a rash.

This extends to having keywords, variables and methods in the right case. While in CAL there was a lot of UPPERCASE, in AL there is far more lowercase. It took me a while to get used to but I prefer it now.

If you convert some CAL into AL then likely all the keywords are going to be in uppercase. The code will run fine, it just doesn’t look nice. In the below example my eye is drawn to the fact that some filters are being set, rather than what those filters are – on which records and fields.

You’ll notice that all the UPPERCASE words are highlighted in that example. That’s because they are all search results for my regular expression.

\b(?!NAV)(?!CACTMN)[A-Z]+\b
  • \b will match a word boundary – spaces, punctuation, start and end of lines – anything that denotes the start or end of a word
  • (?!) is a negative lookahead – the match fails if the expression inside the brackets is found at that position. This is useful for uppercase words that should be left uppercase, like NAV or the AppSource suffix that you’ve added all over the place
    • Disclaimer: don’t ask me to explain lookaheads in any more detail than that – I don’t know. I’m not convinced that anyone actually properly knows how regex works 😉
  • [A-Z] matches uppercase characters between A and Z
  • + indicates that we’re looking for one or more of the previous group i.e. one or more uppercase letters

Altogether it translates to something like: match any whole word of one or more uppercase letters, except where it begins with “NAV” or “CACTMN” (the suffix we’re using in this app).
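If you want to test the expression outside VS Code you can try it in PowerShell (the line of converted C/AL below is made up):

$Line = 'IF NAVVersion > 13 THEN SalesLine.SETRANGE("Document Type", DocType);'
# matches IF, THEN and SETRANGE; NAVVersion is skipped by the negative lookahead
[regex]::Matches($Line, '\b(?!NAV)(?!CACTMN)[A-Z]+\b').Value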

Once you’ve found the matches find and replace is your friend. I love how VS Code gives you a preview of all the replaces that it is going to do. Very useful before you replace “IF” with “if” and realise you’ve also replaced “MODIFY” with “MODifY”.

You Can Ditch Our Build Helper for Dynamics 365 Business Central

I’m a bit of a minimalist when it comes to tooling, so I’m always happy to ditch a tool because its functionality can be provided by something else I’m already using.

In a previous post I described how we use our Build Helper AL app to prep a test suite with the test codeunits and methods that you want to run. Either as part of a CI/CD pipeline or to run from VS Code.

Freddy K has updated the navcontainerhelper PowerShell module and improved the testing capabilities – see this post for full details.

The new extensionId parameter for the Run-TestsInBCContainer function removes the need to prepare the test suite before running the tests. Happily, that means we can dispense with downloading, publishing, installing, synchronising and calling the Build Helper app.

The next version of our own PowerShell module will read the app id from app.json and use the extensionId parameter to run the tests. Shout out to Freddy for making it easier than ever to run the tests from the shell 👍
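For illustration, that will look something like this (a sketch – the container name is an example; check the navcontainerhelper documentation for the full parameter set):

# read the app id from app.json and run all of the tests in that extension
$AppJson = Get-Content '.\app.json' -Raw | ConvertFrom-Json
$Credential = Get-Credential
Run-TestsInBCContainer -containerName 'bcsandbox' -credential $Credential -extensionId $AppJson.id -detailed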

Stop Writing Automated Tests and Get On With Some Real Code

To be fair, these weren’t the exact words that were used, but a view was expressed from the keynote stage at Directions last week along these lines. Frustration that developers now have to concern themselves with infrastructure, like Docker, and writing automated tests rather than “real” code.

I couldn’t resist a short post in response to this view.

If It Doesn’t Add Value, Stop Doing It!

First, no one is forcing you to write automated tests – apart from Microsoft, who want them with your AppSource app submission. Even then, I haven’t heard of Microsoft rejecting an app because it wasn’t accompanied by enough automated tests.

I’m an advocate of developers taking responsibility for their own practices. Don’t follow a best practice simply because someone else tells you it’s a best practice. You know your scenario, your team, your code and your customers better than anyone else. You are best placed to judge whether implementing a new practice is worth the cost of getting started with it.

AppSource aside, if you are complaining about the amount of time you have to spend on writing tests then you have no one to blame but yourself. Or maybe your boss. If you don’t see the value in writing automated tests then you probably should stop wasting your time writing them!

Automated Tests vs “Real” Code

Part of the frustration with tests seemed to be that they aren’t even “real” code. If by “real” code we are referring to the code that we deliver and sell to customers then no, tests aren’t real code.

But what are we trying to achieve? Surely working, maintainable code that adds value for our customers.

We might invest in lots of things in pursuit of that goal. Time spent manually testing, sufficient hardware to develop and test the code on, an internet connection to communicate with each other and the customer, office space to work in, training courses and materials, coffee. We’re not selling these things to the customer either but no one would question that they are necessary to achieve the goal of delivering working software. Especially the coffee.

Whether or not automated tests are “real” code is the wrong question. The important judgement is whether the time spent on writing them makes a big enough contribution to the quality of the product that you eventually ship.

I won’t make the case for automated testing here. That’s for a different post. Or a different book. Suffice to say, I do think it is worth the investment.

But We’ve Got a Backlog of Code Not Covered By Tests

One problem you might have is that you’ve got a backlog of legacy code that isn’t covered by any automated tests. Trying to write tests to cover it all will take ages. This frustration also seemed to be expressed by the speaker at Directions – it even got a round of applause from some of the audience.

My response would be the same – you are best placed to make a pragmatic judgement. Of course it would be nice to have 100% code coverage of your tens of thousands of lines of legacy code – but if you’ll have to stop developing new features for six months to achieve it, is it worth it? Probably not.

Automated tests should give you confidence that the code works as expected. If you are already confident that your existing code works then there might be limited value in writing a suite of tests to prove it.

Try starting with tests to cover new features that you develop or bug fixes. With these cases you’ve got some code that you aren’t confident works as expected – or that you know doesn’t. Take the opportunity to document and prove the expected behaviour with some tests. Over time you’ll build a valuable suite of tests that you can run to demonstrate that each new release of your product works and that bugs haven’t been reintroduced.

With some practice you’ll find that you can use the library codeunits to create scenarios with little test code, e.g. you can create a customer, item and sales order, post it and get the posted sales invoice in 2 lines of code, as in the sketch below.
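Something like this, using Microsoft’s “Library - Sales” codeunit (method names as in the standard test library – check against your version):

var
    SalesHeader: Record "Sales Header";
    SalesInvoiceHeader: Record "Sales Invoice Header";
    LibrarySales: Codeunit "Library - Sales";
begin
    // creates a customer, an item and a sales order with a line
    LibrarySales.CreateSalesOrder(SalesHeader);
    // post ship + invoice and retrieve the resulting posted sales invoice
    SalesInvoiceHeader.Get(LibrarySales.PostSalesDocument(SalesHeader, true, true));
end;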

Interested? More here