Part 3: Testing Microsoft Dynamics 365 Business Central from VS Code

Another instalment of my musings on running automated tests for Microsoft Dynamics 365 Business Central from Visual Studio Code.

Objective

What are we up to this time? As a brief reminder, I’m trying to make it as easy as possible to run automated tests from Visual Studio Code. I figure the faster and simpler it is to publish your code changes and run the tests, the more inclined you are going to be to do it. The more you test, the better your code will be.

If you’re trying to follow a discipline like Test-Driven Development you need tight feedback loops between writing code and running tests. Being able to do that from the IDE, without having to switch back and forth to the browser, is so much nicer.

To that end, if you are working on a particular test codeunit maybe it makes sense to only run the tests in that codeunit – in the interests of getting the feedback from those tests as quickly as possible. Or maybe just running the single test method that you’re working on. Once you’re happy with those changes you can run the whole test suite again to make sure you haven’t broken anything else in the meantime.

Fortunately, it turns out we’ve already got the pieces we need to assemble this jigsaw.

NavContainerHelper

Being the considerate sort of chap that he is, Freddy has already included testCodeunit and testFunction parameters for the Run-TestsInNavContainer function. We just need to figure out how to plug the right values into those parameters.
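
For example, running a single method in a single codeunit from the PowerShell terminal looks something like this (a sketch – the container name, codeunit id and method name are made up, and $Credential is a PSCredential for the container):

Run-TestsInNavContainer -containerName 'bcserver' -credential $Credential -testSuite 'DEFAULT' -testCodeunit 50100 -testFunction 'CustomerDiscountIsApplied'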

Some More About VS Code Tasks

In this post I showed how you can define a task in VS Code to run some PowerShell. It turns out you can do more in tasks.json than I realised.

This is what my file looks like now:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run BC Tests",
            "type": "shell",
            "command": "Run-TestsFromVSCode -CurrentFile ${file} -CurrentLine ${lineNumber}",
            "group": {
                "kind": "test",
                "isDefault": true
            },
            "presentation": {
                "echo": true,
                "reveal": "always",
                "focus": false,
                "panel": "shared",
                "showReuseMessage": true,
                "clear": true
            }
        }
    ]
}

The presentation object controls the behaviour of the terminal that the task is run in. I’m just using it to clear the terminal each time.

The really interesting part is ${file} and ${lineNumber}. These placeholders are replaced with the current file and current line number in the editor when the task is executed. Ooooo. You can probably see where I’m going with this. That’s all the information we need to run the current test codeunit and method.

I’ve created a new Run-TestsFromVSCode function in our PowerShell module to handle this. Notice I’m calling that function from the task now instead of our Run-BCTests function directly.

Run-TestsFromVSCode

Assuming a file path has been passed to the function, it determines whether the current file is a test codeunit i.e. does it contain the text “Subtype = Test”? If so, it uses a regex to find the id of the codeunit from the first line.

It then attempts to find the declaration of the test function that precedes the current line number (searching backwards for a line that contains “[Test]”, then forwards from that line to a procedure declaration).

Then it calls Run-BCTests with all the information that it has been able to find. Passing an asterisk rather than a blank value for the codeunit and/or method acts as a wildcard and runs everything, instead of nothing.

function Run-TestsFromVSCode {
    param(
        # The current file the tests are invoked from
        [Parameter(Mandatory=$false)]
        [string]
        $CurrentFile,
        # The current line no. in the current file
        [Parameter(Mandatory=$false)]
        [string]
        $CurrentLine
    )

    # string parameters are never $null - check for empty values instead
    if ([string]::IsNullOrEmpty($CurrentFile)) {
        Run-BCTests
    }
    else {
        # determine if the current file is a test codeunit
        if ((Get-Content $CurrentFile -Raw).Contains('Subtype = Test')) {
            # the codeunit id is the first number on the first line e.g. codeunit 50100 "Some Tests"
            $TestCodeunit = [Regex]::Match((Get-Content $CurrentFile).Item(0),' \d+ ').Value.Trim()
            $Method = '*'
            if (![string]::IsNullOrEmpty($CurrentLine)) {
                $MethodFromLine = Get-TestFromLine -Path $CurrentFile -LineNumber $CurrentLine
                if ($null -ne $MethodFromLine) {
                    $Method = $MethodFromLine
                }
            }
            Run-BCTests -TestCodeunit $TestCodeunit -TestMethod $Method
        }
        else {
            Run-BCTests
        }
    }
}

function Get-TestFromLine {
    param (
        # file path to search
        [Parameter(Mandatory=$true)]
        [string]
        $Path,
        # line number to start at
        [Parameter(Mandatory=$true)]
        [int]
        $LineNumber
    )
    
    $Lines = Get-Content $Path
    for ($i = ($LineNumber - 1); $i -ge 0; $i--) {
        if ($Lines.Item($i).Contains('[Test]')) {
            # search forwards for the procedure declaration (it might not be the following line)
            for ($j = $i; $j -lt $Lines.Count; $j++)
            {
                if ($Lines.Item($j).Contains('procedure')) {
                    $ProcDeclaration = $Lines.Item($j)
                    $ProcDeclaration = $ProcDeclaration.Substring($ProcDeclaration.IndexOf('procedure') + 10) # skip past 'procedure ' (9 characters plus the space)
                    $ProcDeclaration = $ProcDeclaration.Substring(0,$ProcDeclaration.IndexOf('('))
                    return $ProcDeclaration
                }
            }
        }
    }
}

Export-ModuleMember -Function Run-TestsFromVSCode

The terminal output from the task now looks like this:

[Screenshot: Run-TestsFromVSCode.JPG – terminal output from the task]

Notice the current file and line number passed to the task and the current codeunit id and method passed to navcontainerhelper to run.

To run all the methods in the current codeunit move the cursor above the line of the first test method declaration.

To run all methods in all codeunits move the cursor outside a test codeunit – although you must have some file open otherwise VS Code will fail to resolve the ${file} and ${lineNumber} placeholders.

You could of course define more tasks to run these options if you prefer.

Conclusion

All of this enables this kind of workflow, if that’s what you’re into:

  1. Create a new (failing) test and publish app
  2. Run (just) that test and check that it fails
  3. Write production code and publish app
  4. Run (just) that test and check that it now passes
  5. Run all the tests in that codeunit / your whole suite
  6. Commit your changes

Wish List

I’m pleased with how far I’ve been able to get, but there are still some significant items not crossed off the wish list.

Debugging Tests

For now you still have to launch the browser (at least we can debug without publishing these days) and start the test from that session. I believe the old “debug next” functionality we used to have in the Windows client debugger is on the roadmap somewhere. That would do the trick.

Performance

Leaves a lot to be desired. Running a single method in a single test codeunit takes anywhere between 6 and 10 seconds. Obviously, some of that time is the test itself and is in our control, but most of that is preparing the suite and creating and connecting the client.

I know Microsoft are overhauling how tests are executed by the platform, so maybe some performance gains are also in the pipeline.

Part 2: Testing Microsoft Dynamics 365 Business Central from VS Code

Last time out we went through running automated tests from the PowerShell terminal integrated into VS Code. We saw that you could define a task in the tasks.json file to run the tests and assign a keyboard shortcut for that task.

Great. But. In order to run tests they first need to have been added to a test suite. You could add some code to an install codeunit in your app to do that for you (see here). That approach is nice and clean, but leaves a couple of things to be desired:

  • What if you want to specify the test suite, or codeunit(s) that you want to use?
  • How can you clear the test suite?

You can’t. Which is why we have a “Build Helper” app to add and remove test codeunits from test suites. It contains a single codeunit which is exposed as a web service which we call from PowerShell.

This is the codeunit and the PowerShell it is called with (see here if you can’t see the embedded gist).

I think the AL is pretty self-explanatory. The PowerShell is a little more interesting.

Install-BuildHelper acquires the latest version of the app from the build artefacts (as described here). It checks whether it is already installed first by attempting to reach the address of the WSDL (http://<server + port>/NAV/WS/Codeunit/AutomatedTestingMgt). I’ve found that to be a little faster than Get-NavContainerAppInfo.
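
The check itself can be as simple as seeing whether the WSDL responds, something along these lines (a sketch – the port, instance name and error handling are assumptions, not the actual module code):

$WSDLUri = "http://$($ContainerName):7047/NAV/WS/Codeunit/AutomatedTestingMgt"
try {
    # if the WSDL can be reached the Build Helper app is already published and exposed
    Invoke-WebRequest -Uri $WSDLUri -Credential $Credential -UseBasicParsing | Out-Null
    $AlreadyInstalled = $true
}
catch {
    $AlreadyInstalled = $false
}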

It uses New-WebServiceProxy (as described here) to call the web service methods.

Get-ContainerCompanyToTest fetches the name of (usually) the first company in the container to call the service against. It does this by calling the SystemService web service rather than Get-CompanyInNavContainer – again, as it is slightly faster.

By “faster” I mean a couple of seconds. That’s trivial compared to the execution time of the test suite but given that I’m trying to move to a tighter {develop test -> run tests -> develop app -> run tests} loop I’ll take the saving.
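
For reference, fetching the company name via the SystemService looks roughly like this (again a sketch, with the port and instance name assumed):

$SystemService = New-WebServiceProxy -Uri "http://$($ContainerName):7047/NAV/WS/SystemService" -Credential $Credential
# Companies() returns an array of company names - take the first
$CompanyToTest = ($SystemService.Companies())[0]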

I’m a fan of default values so it:

  • Takes the container from launch.json
  • Creates a credential object using the credentials we store in our environment.json file
  • Takes the start and end of the range of codeunits to add from the idRange set in app.json – see the sketch below
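
Reading that last default out of the workspace might look like this (a sketch – newer apps declare an idRanges array, older ones a single idRange object):

$AppJson = ConvertFrom-Json (Get-Content (Join-Path (Get-Location) 'app.json') -Raw)
$IdRange = if ($null -ne $AppJson.idRanges) { $AppJson.idRanges[0] } else { $AppJson.idRange }
$FromId = $IdRange.from
$ToId = $IdRange.to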

There is a Clear-TestSuite function as well. I haven’t bothered pasting it here because it’s just a simplified version of Get-TestCodeunitsInContainer.

Testing Microsoft Dynamics 365 Business Central from VS Code

Execute your Microsoft Dynamics 365 Business Central tests – with a keyboard shortcut – without leaving the comfort of your favourite IDE. What’s not to love?

Background

We’ve come a long way with testing our apps in Microsoft Dynamics 365 Business Central / NAV. By “we” I mean our internal development practices but also the capabilities of the platform.

  1. We didn’t have any automated testing – we had a joke written on the white board in the development office, “F11 is testing”
  2. Microsoft made it possible to create and run automated tests in Dynamics NAV – we didn’t use it
  3. Microsoft improved the tooling with the release of the “Test Tool” page
  4. We started to experiment with it and to write our own tests
  5. We picked up the idea of automating test runs with a build pipeline…

…and that was a bit of a problem. The best way to run the tests was through the Windows client. That meant using the dynamicsnav:// protocol to open the client, creating a ClientUserSettings.config file, ending the client process when it had finished…a big improvement on not testing at all – but hardly elegant.

With Microsoft’s adoption of Docker to distribute NAV / Business Central images came Freddy’s navcontainerhelper PowerShell module. A big, and relatively recent, step forward was the ability to run a test suite in a Docker container from PowerShell.

If you don’t know what I’m talking about you’re best heading over to Freddy’s blog and doing some background reading.

Running the Tests

This is all great for our build process. We can create a Docker container, install the app(s), prepare the test suite (see here), run the tests from PowerShell, upload the results and bin the container.

As soon as the ability to run tests from PowerShell became available I was interested in how we could use them in our everyday development, not just in the pipeline. If you do any reading about Test Driven Development you’ll find that it is based on a very tight feedback loop. Write a test, write a small chunk of production code, run the tests. Repeat.

I want to be able to run the tests from the same environment that I write the code – Visual Studio Code. I don’t want to be switching back and forth from VS Code to the browser, refreshing test codeunits or methods and running them there. I just don’t find it a nice way to work.

With VS Code’s built-in terminal and navcontainerhelper loaded you don’t have to.

  1. Set “launchBrowser” to false in launch.json
  2. Write a test
  3. Execute Run-TestsInNavContainer in the terminal and review the results
  4. Write some production code
  5. Publish without debugging
  6. Execute Run-TestsInNavContainer in the terminal and review the results
  7. Repeat steps 2-6

I actually use Run-BCTests, a function in our own PowerShell module. Run-BCTests is a wrapper for Run-TestsInNavContainer which:

  • Reads the name of the container from launch.json (see below) – unless a different container is specified
  • Creates a PowerShell credential object from the credentials we store in a json file in the repo
  • Optionally downloads our “Build Helper” app (from its last successful build – like this) to load our test codeunits and methods into the DEFAULT suite
  • Uses Run-TestsInNavContainer to run the tests and output the results to the terminal (a rough sketch of such a wrapper follows)
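
Pulling those bullets together, a wrapper along these lines would do the job (a rough sketch rather than our actual module code – Get-CredentialFromEnvironmentJson and Install-BuildHelper stand in for the helpers described in these posts):

function Run-BCTests {
    param(
        [string]$ContainerName = (Get-ContainerFromLaunchJson),
        [string]$TestCodeunit = '*',
        [string]$TestMethod = '*',
        [switch]$DoNotPrepTestSuite
    )

    $Credential = Get-CredentialFromEnvironmentJson
    if (!$DoNotPrepTestSuite) {
        # download the Build Helper app and load the test codeunits/methods into the DEFAULT suite
        Install-BuildHelper -ContainerName $ContainerName -Credential $Credential
    }
    Run-TestsInNavContainer -containerName $ContainerName -credential $Credential `
        -testSuite 'DEFAULT' -testCodeunit $TestCodeunit -testFunction $TestMethod
}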

Get-ContainerFromLaunchJson

You already have to set the name of your container in the launch.json file to publish your app, so why not read it from there rather than typing your container name all the time?

function Get-ContainerFromLaunchJson {
  param (
    # Path to launch.json
    [Parameter(Mandatory=$false)]
    [string]
    $LaunchJsonPath = (Join-Path (Get-Location) '.vscode\launch.json')
  )

  if (!(Test-Path $LaunchJsonPath)) {
    return ''
  }

  $LaunchJson = ConvertFrom-Json (Get-Content $LaunchJsonPath -Raw)
  if ($LaunchJson.configurations.Count -ne 1) {
    return ''
  }
  else {
    $Container = $LaunchJson.configurations.Item(0).server
    $Container = $Container.Substring($Container.IndexOf('//') + 2)
    $Container
  }
}

Note that the container name in launch.json will need to match the case of the Docker container name. For reasons best known to Docker, container names are case-sensitive.
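
If in doubt, listing the running containers from the terminal shows the exact casing:

docker ps --format '{{.Names}}'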

VS Code Tasks: Running Tests With a Keyboard Shortcut

In pursuit of making it as quick and easy as possible to execute the tests from VS Code we can go a step further and create a VS Code task.

Create a tasks.json file in the .vscode folder. Mine looks like this:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run BC Tests",
            "type": "shell",
            "command": "Run-BCTests -DoNotPrepTestSuite",            
            "group": {
                "kind": "test",
                "isDefault": true
            }
        }
    ]
}

This defines a task with the label “Run BC Tests” which runs the command “Run-BCTests -DoNotPrepTestSuite” (the PowerShell function described above). It is set as the default task in the test group.

This allows you to run “Run Test Task” from the command palette.

[Screenshot: the Run Test Task command in the command palette]

Now you can assign a keyboard shortcut to that command. Open “Preferences: Open Keyboard Shortcuts” from the command palette and search for “Run Test Task”.

[Screenshot: Run Test Task Keyboard Shortcut.JPG]

Double click that entry to set whatever keyboard combination you like. I’ve opted for ctrl+shift+T.

Conclusion

This takes us a step closer to having a code-and-test-in-the-IDE development experience and allows this kind of tight test/production code iteration without having to open the browser:

  1. Write some test code
  2. Publish without debugging (Ctrl+F5)
  3. Run the tests (Ctrl+Shift+T) and check that they fail
  4. Write production code
  5. Publish without debugging (Ctrl+F5)
  6. Run the tests (Ctrl+Shift+T) and check that they pass
  7. Repeat

Of course, this does not remove the need to open the browser, check the look and feel and test your code manually at some point but it does go some way to alleviating the pain of publishing and executing the tests in the browser.

Future

I like it, but it’s still a little clunky. For one, it runs too slowly – the gif at the head of this post is real-time. Also, you still have to populate the test suite with the test codeunits and methods that you want to run at some point. Our PowerShell module can do that, but it’s another manual step to run.

There’s only so much we can do about these issues until Microsoft overhaul the whole testing framework – which I understand they are working on. In the meantime, here are some other ideas that we haven’t implemented yet.

  • Use the previous commit, or uncommitted changes to determine the test codeunit(s) to run. Run-TestsInNavContainer has optional parameters to specify what to run
  • Filter the results so that you are only notified of failures instead of having to pick them out of the successes – see the sketch after this list
  • Work some magic in VS Code to have the test task automatically triggered when the app is published to the server
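
As an illustration of that second idea (not something we’ve implemented – a sketch that assumes you have Run-TestsInNavContainer write the results to an XUnit file in a folder shared with the container):

$ResultFile = 'C:\ProgramData\NavContainerHelper\TestResults.xml'
Run-TestsInNavContainer -containerName $Container -credential $Credential -XUnitResultFileName $ResultFile
[xml]$Results = Get-Content $ResultFile
# keep only the test nodes that have a failure child element and list their names
$Results.SelectNodes('//test') | Where-Object { $null -ne $_.failure } | ForEach-Object { $_.GetAttribute('name') }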

Calling SOAP Services from PowerShell

Like most of my posts this has its origin in Microsoft Dynamics 365 Business Central development – specifically our build process – although it isn’t limited to that.

We had a need to call a SOAP web service from PowerShell (see below for the background if you’re interested). In the past I’ve used Invoke-WebRequest and added content-type and a SOAPAction header to the request. Similar to this.

Joel, one of the guys in my team, introduced me to New-WebServiceProxy. Well…that’s been an oversight – it makes life so much simpler, as a quick example will illustrate. As the name suggests, PowerShell reads the definition of the service and creates a proxy object, along with the complex types that you need to interact with it.

Invoke-WebRequest

First, this is the old, cumbersome way that I would have used to call a SOAP web service from PowerShell.

$Credential = [System.Management.Automation.PSCredential]::new('admin',(ConvertTo-SecureString 'P@ssword1' -AsPlainText -Force))
$Body = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sal="urn:microsoft-dynamics-schemas/page/salesorder">
<soapenv:Header/>
<soapenv:Body>
<sal:Read>
<sal:No>101005</sal:No>
</sal:Read>
</soapenv:Body>
</soapenv:Envelope>'
Invoke-WebRequest -Credential $Credential -Uri http://localhost:60799/NAV/WS/CRONUS%20International%20Ltd./Page/SalesOrder -Headers (@{SOAPAction='Read'}) -Method Post -Body $Body -ContentType application/xml

Create a credential object, define the body (copied from the request created by SoapUI) then call with Invoke-WebRequest setting the credential, content-type, headers and body.

New-WebServiceProxy

And now the enlightened way.

$Credential = [System.Management.Automation.PSCredential]::new('admin',(ConvertTo-SecureString 'P@ssword1' -AsPlainText -Force))
$Client = New-WebServiceProxy -Uri 'http://localhost:60799/NAV/WS/CRONUS%20International%20Ltd./Page/SalesOrder' -Credential $Credential
$Client.Read('101005')

Create the $Client object with New-WebServiceProxy and call the method that you are interested in. If you use PowerShell ISE you can browse the intellisense like a champion.

[Screenshot: PowerShell WebService Proxy Intellisense.JPG]

If the service uses complex data types you’ll find that PowerShell has auto-generated the types for you to access them. Look under [Microsoft.PowerShell.Commands.NewWebServiceProxy.AutogeneratedTypes…] – you could use it with Business Central services that take an xmlport as a VAR parameter, for example (use the [ref] keyword in PowerShell).
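
As a purely hypothetical illustration of that pattern (the service, method and type names below are invented – substitute your own), the generated types live in the proxy’s namespace and VAR parameters map to ByRef parameters which you pass with [ref]:

$Client = New-WebServiceProxy -Uri 'http://localhost:60799/NAV/WS/CRONUS%20International%20Ltd./Codeunit/SalesImport' -Credential $Credential
# complex types are generated into the same namespace as the proxy object itself
$Namespace = $Client.GetType().Namespace
$Doc = New-Object "$Namespace.SalesDocuments"
# a VAR xmlport parameter on the codeunit becomes a ByRef parameter on the proxy
$Client.ImportDocuments([ref]$Doc)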

What Are You Calling SOAP for Anyway?

If you’re only interested in the PowerShell details you can stop reading now – thanks for your attention.

If you’re still here then I’ll explain why we’re doing this. As part of our build process for AL apps we need to prepare the DEFAULT test suite with the test codeunits and methods that we want to run. We then run the tests with Run-TestsInNavContainer in the navcontainerhelper module.

We used to import a codeunit from a text file and use Invoke-NavContainerCodeunit to call its methods. In the last few builds of Business Central some bug in the platform has stopped this working.

Freddy posted here about installing an app and calling its API as a replacement for importing fob files. Inspired by that, we’ve done the same. Publish an app that includes a codeunit exposed as a web service and call that service to prep the test suite.

Perhaps we should have used an API page and REST instead? I don’t know if there are any strong arguments either way. This particular cat can be skinned in several different ways (I wonder how that idiom translates to non-UK readers).

An Approach to Package Management in Dynamics 365 Business Central

TL;DR

We use PowerShell to call the Azure DevOps API and retrieve Build Artefacts from the last successful build of the repository/repositories that we’re dependent on.

Background

Over the last few years I’ve moved into a role where I’m managing a development team more than I’m writing code myself. I’ve spent a lot of that time looking at tools and practices in the broader software development community. After all, whether you’re writing C/AL, AL, PowerShell or JavaScript it’s all code and it’s unlikely that we’ll face any challenges that haven’t already been faced in one way or another in a different setting.

In that time we’ve introduced a number of new tools and practices borrowed from the wider development world.

Package Management

The next thing to talk about is package management. I’ve written about the benefits of trying to avoid dependencies between your apps before (see here). However, if app A relies on app B and you cannot foresee ever deploying A without B then you have a dependency. There is no point trying to code your way round the problems that avoiding the dependency will create.

Accepting that your app has one or more dependencies – and most of our apps have at least one – opens up a bunch of questions and presents some interesting challenges.

Most obviously you need to know: where can I get the .app files for the apps that I am dependent on? Is it at least the minimum version required by my app? Is this the correct app for the version of Dynamics NAV / Dynamics 365 Business Central that I am developing against? Are the apps that I depend on themselves dependent on other apps? If so, where do I get those from? Is there another layer of dependencies below that? Is it really turtles all the way down?

These are the sorts of questions that you don’t want to have to worry about when you are setting up an environment to develop in. Docker gives us a slick way to quickly create disposable development and testing environments. We don’t want to burn all the time that Docker saves us searching for, publishing and installing app files before we can start work.

This is what a package manager is for. The developer just needs to declare what their app depends on and leave the package manager to retrieve and install the appropriate packages.

The Goal

Why are we talking about this? What are we trying to achieve?

We want to keep the maintenance of all apps separate. When writing app A I shouldn’t need to know or care about the development of app B beyond my use of its API. I just need to know:

  • The minimum version that includes the functionality that I need – this will go into my app.json file
  • I can acquire that, or a later, version of the app from somewhere as and when I need it

I want to be able to specify my dependencies and with the minimum of fuss download and install those apps into my Docker container.

We’ve got a PowerShell command to do just that.

Get-ALDependencies -Container BCOnPrem -Install

There are a few jigsaw pieces we need to gather before we can start putting it all together.

Locating the Apps

We need somewhere to store the latest version of the apps that we might depend upon. There is usually some central, public repository where the packages are hosted – think of the PowerShell Gallery or Docker Hub for example.

We don’t have an equivalent repository for AL apps. AppSource performs that function for Business Central SaaS but that’s not much use to us while we are developing or if the apps we need aren’t on AppSource. We’re going to need to set something up ourselves.

You could just use a network folder. Or maybe SharePoint. Or some custom web service that you created. Our choice is Azure DevOps build artefacts. For a few reasons:

  • We’ve already got all of our AL code going through build pipelines anyway. The build creates the .app files, digitally signs them and stores them as build artefacts
  • The artefacts are only stored if all the tests ran successfully which ought to give us more confidence relying on them
  • The build automatically increments the app version so it should always be clear which version of the app is later and we shouldn’t get caught in app version purgatory when upgrading an app that we’re dependent on
  • We’re already making use of the Azure DevOps REST API for loads of other stuff – it was easy to add some commands to retrieve the build artefacts (hence my earlier post on getting started with the API)

Identifying the Repository

There is a challenge here. In the app.json file we identify dependencies by app name, id and publisher. To find a build – and its artefacts – we need to know the project and repository name in Azure DevOps.

Seeing as we can’t add extra details into the app.json file itself we hold these details in a separate json file – environment.json. This file can have an array of dependency objects with a:

  • name – which should match the name of the dependency in the app.json file
  • project – the Azure DevOps project to find this app in
  • repo – the Git repository in that project to find this app in

Once we know the right repository we can use the Azure DevOps API to find the most recent successful build and download its artefacts.
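
The two REST calls involved look roughly like this (a sketch – the organisation and project names, $RepoId and $AuthHeader are placeholders, and in our module this sits behind the Invoke-TFSAPI wrapper mentioned below):

$BaseUri = 'https://dev.azure.com/myorganisation/MyProject/_apis/build'
# most recent successful build for the repository
$Builds = Invoke-RestMethod -Uri "$BaseUri/builds?repositoryId=$RepoId&repositoryType=TfsGit&resultFilter=succeeded&`$top=1&api-version=5.0" -Headers $AuthHeader
$BuildId = $Builds.value[0].id
# list the build's artefacts and download the first one as a zip
$Artifacts = Invoke-RestMethod -Uri "$BaseUri/builds/$BuildId/artifacts?api-version=5.0" -Headers $AuthHeader
Invoke-WebRequest -Uri $Artifacts.value[0].resource.downloadUrl -Headers $AuthHeader -OutFile 'artefact.zip'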

I’m aware that we could use Azure DevOps to create proper releases, rather than downloading apps that are still in development. We probably should – maybe I’ll come back and update this post some day. For now, we find that using the artefacts from builds is fine for the two main purposes we use them: creating local development environments and creating a Docker container as part of a build. We have a separate, manual process for uploading new released versions to SharePoint for now.

The Code

So much for the theory, let’s look at some code. In brief we:

  1. Read app.json and iterate through the dependencies
  2. For each dependency, find the corresponding entry in the environment.json file and read the project and repo for that dependency
  3. Download the app from the last successful build for that repo
  4. Acquire the app.json of the dependency
  5. Repeat steps 2-4 recursively for each branch of the dependency tree
  6. Optionally publish and install the apps that have been found (starting at the bottom of the tree and working up)

A few notes about the code:

  • It’s not all here – particularly the definition of Invoke-TFSAPI. That is just a wrapper for the Invoke-WebRequest command which adds the authentication headers (as previously described)
  • These functions are split across different files and grouped into a module, I’ve bundled them into a single file here for ease

(The PowerShell is hosted here if you can’t see it embedded below: https://gist.github.com/jimmymcp/37c6f9a9981b6f503a6fecb905b03672)
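
If the gist doesn’t load for you, the shape of the recursion is roughly this (a heavily simplified sketch, not the actual module code – Get-DependencyFromEnvironmentJson and Get-AppFromLastSuccessfulBuild stand in for the real helpers):

function Get-ALDependencies {
    param(
        [string]$SourcePath = (Get-Location),
        [string]$Container,
        [switch]$Install
    )

    $AppJson = ConvertFrom-Json (Get-Content (Join-Path $SourcePath 'app.json') -Raw)
    foreach ($Dependency in $AppJson.dependencies) {
        # find the Azure DevOps project/repo for this dependency in environment.json
        $Source = Get-DependencyFromEnvironmentJson -Name $Dependency.name
        # download the .app file (and its app.json) from the last successful build of that repo
        $App = Get-AppFromLastSuccessfulBuild -Project $Source.project -Repo $Source.repo
        # walk this branch of the dependency tree before installing anything
        Get-ALDependencies -SourcePath $App.SourceFolder -Container $Container -Install:$Install
        if ($Install) {
            Publish-NavContainerApp -containerName $Container -appFile $App.AppFile -sync -install
        }
    }
}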