Putting Queries to Use in Business Central

We’ve had query objects for a while now – since NAV 2013 (I think). In theory they sound great: link a bunch of tables together, aggregate some columns, get distinct values, left join, right join, cross join – all executed as a single SQL query.

Why Don’t We Use Queries Much?

In practice I haven’t seen them used that much. There’s probably a variety of reasons for that:

  • We have limited control over the design and execution of the query at runtime. The design is pretty static, making it difficult to create useful generic queries short of throwing in all the fields from the relevant tables – which feels heavy-handed
  • I find the links between dataitems unintuitive
  • It isn’t easy to visualise the dataset that you are creating when writing the AL file
  • Habit – we all learnt FindFirst/FindSet, repeat and until when we first started playing with C/AL development and are more comfortable working with loops than datasets. Let’s be honest, the RDLC report writing experience hasn’t done much to convert us to a set-based approach to our data

However, just because there is room for improvement doesn’t mean that we can’t find good uses for queries now. Queries are perfect for the following (there’s a sketch after the list showing a few of them combined):

  • Selecting DISTINCT values in the dataset
  • Aggregates – min, max, sum, average, count – especially for scenarios that aren’t suited to flowfields
  • Date functions – convert date columns to day/month/year – which allows you to easily aggregate another column by different periods
  • Outer joins – it’s possible, but far more expensive, to create this kind of dataset by hand with temporary tables
  • Selecting the top X rows
  • Exposing as OData web services
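
As a sketch of what a few of those look like in a single query object – an outer join, a Sum aggregate and a Year date method (the object ID, tables and column names here are illustrative, not from a real app):

query 50100 "Cust. Sales by Year"
{
    QueryType = Normal;

    elements
    {
        dataitem(Customer; Customer)
        {
            column(CustomerNo; "No.") { }

            // left outer join: customers with no ledger entries still appear in the results
            dataitem(CustLedgerEntry; "Cust. Ledger Entry")
            {
                DataItemLink = "Customer No." = Customer."No.";
                SqlJoinType = LeftOuterJoin;

                // date method: group the entries by the year of the posting date
                column(PostingYear; "Posting Date") { Method = Year; }

                // aggregate: total sales per customer per year
                column(SalesLCY; "Sales (LCY)") { Method = Sum; }
            }
        }
    }
}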

It’s the last point in that list that I particularly want to talk about. We’ve been working on a solution lately where Business Central consumes its own OData web services.

What?! What kind of witchcraft is this? Why would you consume a query via a web service when you can call it directly with a query object? Hear me out…

Consuming Queries

I think you’ve got two main options for consuming queries via AL code.

Query Variable

You can define a variable of type query and specify the query that you want to run. This gives you some control over the query before you execute it – you can set filters on the query columns and set the top number of rows. Call Query.Open and Query.Read to execute the query and step through the result set.
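
Something like this, reusing the sketch query from earlier (column names as defined there):

procedure ShowTopCustomerYears()
var
    CustSalesByYear: Query "Cust. Sales by Year";
begin
    // narrow the dataset and cap the number of rows before executing
    CustSalesByYear.SetFilter(CustomerNo, '10000..20000');
    CustSalesByYear.TopNumberOfRows(10);

    CustSalesByYear.Open();
    while CustSalesByYear.Read() do
        Message('%1 %2: %3', CustSalesByYear.CustomerNo, CustSalesByYear.PostingYear, CustSalesByYear.SalesLCY);
    CustSalesByYear.Close();
end;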

The main downside is that you have to specify the query that you want to use at design-time. That might be fine for some specific requirement but is a problem if you are trying to create something generic.

Query Keyword

Alternatively we can use the Query keyword and execute a query by its ID. Choose whether you want the results in CSV (presumably this is popular among the same crowd that are responsible for an increase in cassette tape sales) or XML and save them either to disk or to a stream.
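
A rough sketch, assuming the OutStream overload of Query.SaveAsXml and the Temp Blob codeunit from the System Application:

procedure SaveQueryResultAsXml(QueryId: Integer)
var
    TempBlob: Codeunit "Temp Blob";
    OutStr: OutStream;
    InStr: InStream;
begin
    TempBlob.CreateOutStream(OutStr);
    // run whichever query was asked for (decided at runtime) and write the result set as XML
    Query.SaveAsXml(QueryId, OutStr);
    TempBlob.CreateInStream(InStr);
    // ...read the XML back from InStr and parse it
end;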

The benefit is that you can decide on the query that you want to call at runtime. Lovely. Unfortunately you have to sacrifice even the limited control that a query variable gave you in order to do so.

OData Queries/Pages

Accessing the query via OData moves us towards having the best of both worlds. Obviously there is significant extra overhead in this approach:

  • Adding the query to the web service table and publishing
  • Acquiring the correct URL to a service tier that is serving OData requests for your query
  • Creating the HTTP request with appropriate authentication
  • Parsing the JSON response to get at the data that you want

This is significantly more work than the other approaches – let’s not pretend otherwise. However, it does give you all the power of OData query parameters to control the query (there’s an example request after the list below). While I’ve been talking about queries up until now, almost all of this applies to pages exposed as OData services as well.

  • $filter: specify column name, operator and filter value that you want to apply to the query, join multiple filters together with and/or operators
  • $select: a comma-separated list of columns to return i.e. only return the columns that you are actually interested in
  • $orderby: specify a column to order the results by – use in combination with $top to get the min/max value of a column in the dataset
  • $top: the number of rows to return
  • $skip: skip this many rows in the dataset before returning – useful if the dataset is too large for the service tier to return in a single call
  • $count: just return the count of the rows in the dataset – if you only want the count there is no need to parse the JSON response and count them yourself
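
For example, a request along these lines (the server, company and service names are illustrative) returns the five largest amounts for a given customer:

GET https://bcserver:7048/BC/ODataV4/Company('CRONUS')/CustSalesByYear?$filter=CustomerNo eq '10000'&$select=CustomerNo,PostingYear,SalesLCY&$orderby=SalesLCY desc&$top=5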

AL Test Runner 0.1.15

This is a brief update about the AL Test Runner extension that I’ve been working on for VS Code. I’ve had some great feedback and suggestions via GitHub, a pull request correcting me being a muppet and 300-odd installs so far. Thanks to everyone who has got involved to improve it.

This version includes a new setting “Publish Before Test” which allows you to invoke either the “Publish without debugging” or “Rapid Application Publish without debugging” commands in the AL Language extension.

If you’re aiming for a quick write test->run test->write code->run test cycle then you might like to try setting this option to “Rapid application publish”. Create a test->hit Ctrl+Alt+T (rapid publish and run the current test)->see the test fail->write the code->run the test…

Rapid application publishing before running tests

Obviously, the above is a silly example. Don’t make your tests pass by simply commenting out any code that causes them to fail 😉 You’ll still need to do a full publish from time to time as not all changes are pushed in a rapid publish.

Wish List

The main improvement to the extension would come in the form of support for remote development. There are, I think, two key scenarios:

  • Visual Studio Code is running locally but Docker host is not i.e. apps are being published to some central server
    • The ALTestRunner module would need to be installed on the Docker host (maybe served from the PowerShell Gallery?) and the PowerShell commands invoked on the server (through SSH? through a PowerShell session?)
  • Remote Visual Studio Code development – where both VS Code and the Docker host are running remotely somewhere
    • I don’t know enough about this way of working yet (we’ve only ever developed on Docker containers running on our own laptops) but with AL Test Runner installed on the remote end maybe this sort of works already?

In both cases I think the results file would need to be passed back somehow to the developer’s machine in order to decorate the local copy of the AL file that they are working on. If you know more about these scenarios than I do or have other suggestions I’d be glad to hear from you.

I’m planning a few other bits and pieces like adding the path to file/line of a failing test in the Output window so that you can click and jump straight to it.

And with that I’m going to sign off for this year. Merry Christmas and see you in 2020!

Some More About Translating Business Central Apps

I’ve written before about using Azure Cognitive Services to translate the captions in the .xlf file that is generated when you compile your Business Central app. For us, the motivation is to make our apps available in as many countries as possible in AppSource.

Since then Søren Alexandersen has announced that it will not be necessary to provide all of a country’s official languages to make your app available in that country.

If you’re going to provide translations you might be interested in how to improve upon the approach of the last post.

The Problem

The problem of course is that we are relying on machine translation to translate very short phrases or single words. A single word can mean different things and be translated in many different ways into other languages depending on the context. Context that the machine translation doesn’t have. That’s what makes language and etymology simultaneously fascinating and infuriating.

The problem is compounded by abbreviations and acronyms. You and I know that “Prod. Order” is short for “Production Order”. But “Prod” is itself an English word that has nothing to do with manufacturing.

We know that FA is likely short for “fixed asset” but if you don’t know that the context is an ERP system it could mean a whole range of things. How is Azure supposed to translate it?

What we need is some domain-specific knowledge.

The Solution

When we think about it we know that we’ve already got thousands of translations of captions into the languages that we want – if only we can get them into a useful format. We’ve got Docker images of Business Central localisations. They contain the base app for that localisation, complete with source/target pairs for each caption.

If you can get hold of the xlf file it’s a relatively simple job to search for a trans-unit that has a source node matching the caption that you want to translate and find the corresponding translated target node.

As an example, I’ve created a container called ch from the image mcr.microsoft.com/businesscentral/sandbox:ch – the Swiss localisation of Business Central.

Find and expand the base application source.

$ContainerName = 'ch'
$Script = {Expand-Archive "C:\Applications.*\Base Application.Source.zip" -DestinationPath 'C:\run\my\base'}

Invoke-ScriptInBCContainer -containerName $ContainerName -scriptblock $Script

This script will find the zip file containing the localised base application and extract it to the ‘C:\run\my\base’ folder. This will take a few minutes but when it is done you should see a Translations folder containing, in the case of Switzerland, four .xlf files.

The following script will load the fr-CH .xlf file into an Xml Document, search for a trans-unit node which has a child source node matching a given string and return the target i.e. the fr-CH translation.

$Language = 'fr'
$enUSCaption = 'Prod. Order Line No.'
[xml]$xlf = Get-Content (Get-ChildItem "C:\ProgramData\NavContainerHelper\Extensions\$ContainerName\my\base\Translations" -Filter "*$Language*").FullName

$NSMgr = New-Object System.Xml.XmlNamespaceManager -ArgumentList $xlf.NameTable
$NSMgr.AddNamespace('x',$xlf.DocumentElement.NamespaceURI)
$xlf.SelectSingleNode("/x:xliff/x:file/x:body/x:group/x:trans-unit[x:source='$enUSCaption']", $NSMgr).target

Which returns “N° ligne O.F.” – cool.

Some Obvious Points

I’m going to leave it there for this post, save for making a few obvious points.

  • This is hopelessly inefficient. Downloading the localised Docker image, creating the container, extracting the base app – all to get at the .xlf files. We’re going to want a smarter solution before using this approach in any volume and for more languages
  • Each .xlf file is 60+MB – that takes a while to load into memory – you’ll want to keep the variable in scope and reuse it for multiple searches rather than reloading the document (see the sketch after this list)
  • Not all of the US English captions you create in your app will exist in the base application – you’ll still want to send those off for translation.
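
As a minimal sketch of that second point, the lookup could be wrapped in a function that takes the already-loaded document and namespace manager, so each file is only read once (the function name is mine, not part of any module):

function Get-TranslatedCaption {
    param (
        [Parameter(Mandatory=$true)]
        [xml]$Xlf,
        [Parameter(Mandatory=$true)]
        [System.Xml.XmlNamespaceManager]$NSMgr,
        [Parameter(Mandatory=$true)]
        [string]$enUSCaption
    )
    # reuse the document loaded earlier rather than re-reading the 60+MB file for every caption
    ($Xlf.SelectSingleNode("/x:xliff/x:file/x:body/x:group/x:trans-unit[x:source='$enUSCaption']", $NSMgr)).target
}

# e.g. Get-TranslatedCaption -Xlf $xlf -NSMgr $NSMgr -enUSCaption 'Prod. Order Line No.'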

Maybe we can start to address these points next time…

AL Test Runner for Visual Studio Code

TL;DR

I’ve written an extension for VS Code to help run your AL tests in local Docker containers. Search for “AL Test Runner” in the extension marketplace or click here. Feedback, bugs, feature suggestions all gratefully received on the GitHub repo or james@jpearson.blog

Intro

As soon as Freddy added capability to the navcontainerhelper module to execute automated tests I was excited about the potential for:

  1. Making test execution in our build pipeline simpler and more reliable
  2. Running tests from Visual Studio Code while developing

I’ve written about both aspects in the past, but especially #1 recently – about incorporating automated tests into your Azure DevOps pipeline.

This post is about #2 – incorporating running tests as early as possible into your development cycle.

Finding Bugs ASAP

You’ve probably heard the idea – and it’s common sense even if you haven’t – that the cost of finding a bug in your software increases the later in the development/deployment cycle you find it.

If you realise you made a silly mistake in code that you wrote 2 minutes ago – there’s likely no harm done. Realise there is a bug in software that is now live in customers’ databases and the implications could be much greater. Potentially annoyed customers, data that now needs fixing, support cases, having to rush out a hotfix etc.

We’ve all been there. It’s not a nice place to visit. I once deleted all the (hundreds of thousands of) records in the Purch. Rcpt. Line table with a Rec.DELETEALL on a temporary table…turns out it wasn’t temporary…and I was working in the live database.

Writing automated tests can help catch problems before you release them out into the wild. They force you to think about the expected behaviour of the code and then test whether it actually behaves like that. Hopefully if the code that we push to a branch in Azure DevOps has a bug it will cause a test to fail, the artifacts won’t be published, the developer will get an email and the customer won’t be the hapless recipient of our mistake. No harm done.

However, the rising cost of finding a bug over time still applies. Especially if the developer has started working on something else or gone home. Getting your head back into the code, reproducing and finding the bug and fixing it are harder if you’ve had a break from the code than if you went looking for it straight away.

Running Tests from VS Code

That’s why I’m keen that we run tests from VS Code as we are writing them. Write a test, see it fail, write the code, see the test pass, repeat.

I’ve written about this before. You can use tasks in VS Code to execute the required PowerShell to run the tests. The task gives you access to the current file and line number so that you can do fancy stuff like running only the current test or test codeunit.

AL Test Runner

However, I was keen to improve on this and so have started work on a VS Code extension – AL Test Runner.

Running the current test with AL Test Runner and navcontainerhelper

The goals are to:

  • Make it as simple as possible to run the current test, tests in the current codeunit or all tests in the extension with commands and keyboard shortcuts
  • Cache the test results
  • Decorate test methods according to the latest test results – pass, fail or untested
  • Provide extra details e.g. error message and callstack when hovering over the test name
  • Add a snippet to make it easier to create new tests with placeholders for GIVEN, WHEN and THEN statements

Important: this is for running tests with the navcontainerhelper PowerShell module against a local Docker container. Please make sure that you are using the latest version of navcontainerhelper.
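
For example, to get the latest version from the PowerShell Gallery:

# update navcontainerhelper if it is already installed
Update-Module navcontainerhelper
# or install it for the first time
Install-Module navcontainerhelper -Scope CurrentUser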

Getting Started

  • Download the extension from the extension marketplace in VS Code and reload the window.
  • Open a folder containing an AL project
  • Open a test codeunit; you should notice that the names of test methods are decorated with an amber background (as there are no results available for those tests)
    • The colours for passing, failing and untested tests are configurable if you don’t like them or they don’t fit with your VS Code theme. Alternatively you can turn test decoration off altogether if you don’t like it
  • Place the cursor in a test method and run the “AL Test Runner: Run Current Test” command (Ctrl+Alt+T)
  • You should be prompted to select a debug configuration (from launch.json), company name, test suite name and credentials as appropriate (depends if you’re running BC14 or BC15, if you have multiple companies, authentication type etc.)
    • I’ve noticed that sometimes the output isn’t displayed in the new terminal when it is first created – I don’t know why. Subsequent commands always seem to show up fine 🤷‍♂️
  • Use the “ttestprocedure” snippet to create new test methods

.gitignore

If you’re using Git then I’d recommend adding the .altestrunner folder to your .gitignore file:

.altestrunner/

Committing the config file and the test results xml files doesn’t feel like a great idea.

Prompting the User for Input with PowerShell

Sometimes you need to prompt the user to provide some value before you can complete your PowerShell script. You’ve got a few different options depending on what you’re asking the user to select from.

Parameters

Setting a parameter as mandatory without providing a value will prompt the user to enter one, like this:

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [string]$ImportantParameter
  )
}

Setting the parameter type ([string] in this case) isn’t essential but will help validate that the input is at least of the right type. The trouble with users is that they can, and will, enter any old nonsense as the parameter value and you need to be able to handle it.

The ValidateSet attribute helps out where you have a fixed set of values that are the only valid ones.

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [ValidateSet('This','Or This','Or Possibly This')]
    [string]$ImportantParameter
  )
} 

If you don’t know at design-time what the valid options are going to be then you need a different approach.

Out-GridView

Out-GridView has an OutputMode parameter which allows you to specify whether the user should be able to select a value and if so, a single value or multiple values. It also allows you to set a title for the window and provides a filter to help the user find the right value. Good for when there is a lot to choose from. We use it, for example, to choose a project from Azure DevOps.

In passing I’ve also found Out-GridView useful when working with complex types e.g. from a web service response and I just want to be able to browse the values in the object. You can pipe anything to it and it will render it into a nice grid.

Write-Host ("You selected {0}" -f ('1','2','3' | Out-GridView -OutputMode Single -Title 'Please select a value'))

Roll Your Own

Recently I wanted to prompt the user to make a selection between some options in the terminal. In my experience the Out-GridView window doesn’t always open in the foreground and, if you’re using multiple monitors, won’t necessarily open near the window you’re executing the script in. I thought I’d try keeping the focus in the terminal window instead.

I couldn’t find anything already in PowerShell to print a list of options and prompt the user to choose one, so I wrote the below. I’d be interested to know if I’ve missed something obvious already built in though.

It takes a collection of strings that represent the options to choose between and some text to prompt the user with. The options are printed with numbers next to them; the function then waits for input from the user with Read-Host and matches it to their selection.

0 is hard-coded as a cancel option and will return an empty string, otherwise the string of the user’s selection is returned.

function Get-SelectionFromUser {
    param (
        [Parameter(Mandatory=$true)]
        [string[]]$Options,
        [Parameter(Mandatory=$true)]
        [string]$Prompt        
    )
    
    [int]$Response = 0;
    [bool]$ValidResponse = $false    

    while (!($ValidResponse)) {            
        [int]$OptionNo = 0

        Write-Host $Prompt -ForegroundColor DarkYellow
        Write-Host "[0]: Cancel"

        foreach ($Option in $Options) {
            $OptionNo += 1
            Write-Host ("[$OptionNo]: {0}" -f $Option)
        }

        if ([Int]::TryParse((Read-Host), [ref]$Response)) {
            if ($Response -eq 0) {
                return ''
            }
            elseif($Response -le $OptionNo) {
                $ValidResponse = $true
            }
        }
    }

    return $Options.Get($Response - 1)
}
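
For example (the options here are illustrative):

$Container = Get-SelectionFromUser -Options @('bcsandbox','bctest','bcdev') -Prompt 'Which container do you want to use?'
if ($Container -eq '') {
    Write-Host 'No selection made'
}
else {
    Write-Host ("Using container {0}" -f $Container)
}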