Some More About Translating Business Central Apps

I’ve written before about using Azure Cognitive Services to translate the captions in the .xlf file that is generated when you compile your Business Central app. For us, the motivation is to make our apps available in as many countries as possible in AppSource.

Since then Søren Alexandersen has announced that it will not be necessary to provide all of a country’s official languages to make your app available in that country.

If you are going to provide translations you might be interested in how to improve upon the approach of the last post.

The Problem

The problem of course is that we are relying on machine translation to translate very short phrases or single words. A single word can mean different things and be translated in many different ways into other languages depending on the context. Context that the machine translation doesn’t have. That’s what makes language and etymology simultaneously fascinating and infuriating.

The problem is compounded by abbreviations and acronyms. You and I know that “Prod. Order” is short for “Production Order”. But “Prod” is itself an English word that has nothing to do with manufacturing.

We know that FA is likely short for “fixed asset” but if you don’t know that the context is an ERP system it could mean a whole range of things. How is Azure supposed to translate it?

What we need is some domain-specific knowledge.

The Solution

When we think about it we know that we’ve already got thousands of translations of captions into the languages that we want – if only we can get them into a useful format. We’ve got Docker images of Business Central localisations. They contain the base app for the localisation, complete with source/target pairs for each caption.

If you can get hold of the xlf file it’s a relatively simple job to search for a trans-unit that has a source node matching the caption that you want to translate and find the corresponding translated target node.

As an example, I’ve created a container called ch from the image mcr.microsoft.com/businesscentral/sandbox:ch – the Swiss localisation of Business Central.

Find and expand the base application source.

$ContainerName = 'ch'
$Script = {Expand-Archive "C:\Applications.*\Base Application.Source.zip" -DestinationPath 'C:\run\my\base'}

Invoke-ScriptInBCContainer -containerName $ContainerName -scriptblock $Script

This script will find the zip file containing the localised base application and extract it to the ‘C:\run\my\base’ folder. This will take a few minutes but when it is done you should see a Translations folder containing, in the case of Switzerland, four .xlf files.

The following script will load the fr-CH .xlf file into an Xml Document, search for a trans-unit node which has a child source node matching a given string and return the target i.e. the fr-CH translation.

$Language = 'fr'
$enUSCaption = 'Prod. Order Line No.'
[xml]$xlf = Get-Content (Get-ChildItem "C:\ProgramData\NavContainerHelper\Extensions\$ContainerName\my\base\Translations" -Filter "*$Language*").FullName

$NSMgr = New-Object System.Xml.XmlNamespaceManager($xlf.NameTable)
$NSMgr.AddNamespace('x',$xlf.DocumentElement.NamespaceURI)
$xlf.SelectSingleNode("/x:xliff/x:file/x:body/x:group/x:trans-unit[x:source='$enUSCaption']", $NSMgr).target

Which returns “N° ligne O.F.” – cool.

Some Obvious Points

I’m going to leave it there for this post, save for making a few obvious points.

  • This is hopelessly inefficient. Downloading the localised Docker image, creating the container, extracting the base app – all to get at the .xlf files. We’re going to want a smarter solution before using this approach in any volume and for more languages
  • Each .xlf file is 60+MB – that takes a while to load into memory – you’ll want to keep the variable in scope and reuse it for multiple searches rather than reloading the document (see the sketch after this list)
  • Not all of the US English captions you create in your app will exist in the base application – you’ll still want to send those off for translation.
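
One way to mitigate the reloading is to load the document once and wrap the XPath search in a small function that you can call repeatedly. A minimal sketch, reusing the container paths from above – Get-TranslatedCaption is just a name I’ve invented for illustration:

function Get-TranslatedCaption {
  param (
    [Parameter(Mandatory=$true)]
    [xml]$Xlf,
    [Parameter(Mandatory=$true)]
    [string]$enUSCaption
  )

  # build a namespace manager from the document and find the matching trans-unit
  $NSMgr = New-Object System.Xml.XmlNamespaceManager($Xlf.NameTable)
  $NSMgr.AddNamespace('x', $Xlf.DocumentElement.NamespaceURI)
  $Xlf.SelectSingleNode("/x:xliff/x:file/x:body/x:group/x:trans-unit[x:source='$enUSCaption']", $NSMgr).target
}

# load the 60+MB document once...
[xml]$xlf = Get-Content (Get-ChildItem "C:\ProgramData\NavContainerHelper\Extensions\$ContainerName\my\base\Translations" -Filter "*$Language*").FullName

# ...then reuse it for as many captions as you need
Get-TranslatedCaption -Xlf $xlf -enUSCaption 'Prod. Order Line No.'
Get-TranslatedCaption -Xlf $xlf -enUSCaption 'Prod. Order No.'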

Maybe we can start to address these points next time…

AL Test Runner for Visual Studio Code

TL;DR

I’ve written an extension for VS Code to help run your AL tests in local Docker containers. Search for “AL Test Runner” in the extension marketplace or click here. Feedback, bugs, feature suggestions all gratefully received on the GitHub repo or james@jpearson.blog

Intro

As soon as Freddy added the capability to execute automated tests to the navcontainerhelper module I was excited about the potential for:

  1. Making test execution in our build pipeline simpler and more reliable
  2. Running tests from Visual Studio Code while developing

I’ve written about both aspects in the past, but especially #1 recently – about incorporating automated tests into your Azure DevOps pipeline.

This post is about #2 – incorporating running tests as early as possible into your development cycle.

Finding Bugs ASAP

You’ve probably heard the idea – and it’s common sense even if you haven’t – that the cost of finding a bug in your software increases the later in the development/deployment cycle you find it.

If you realise you made a silly mistake in code that you wrote 2 minutes ago – there’s likely no harm done. Realise there is a bug in software that is now live in customers’ databases and the implications could be much greater. Potentially annoyed customers, data that now needs fixing, support cases, having to rush out a hotfix etc.

We’ve all been there. It’s not a nice place to visit. I once deleted all the (hundreds of thousands of) records in the Purch. Rcpt. Line table with a Rec.DELETEALL on a temporary table…turns out it wasn’t temporary…and I was working in the live database.

Writing automated tests can help catch problems before you release them out into the wild. They force you to think about the expected behaviour of the code and then test whether it actually behaves like that. Hopefully if the code that we push to a branch in Azure DevOps has a bug it will cause a test to fail, the artifacts won’t be published, the developer will get an email and the customer won’t be the hapless recipient of our mistake. No harm done.

However, the rising cost of finding a bug over time still applies. Especially if the developer has started working on something else or gone home. Getting your head back into the code, reproducing the bug, finding it and fixing it are all harder if you’ve had a break from the code than if you went looking for it straight away.

Running Tests from VS Code

That’s why I’m keen that we run tests from VS Code as we are writing them. Write a test, see it fail, write the code, see the test pass, repeat.

I’ve written about this before. You can use tasks in VS Code to execute the required PowerShell to run the tests. The task gives you access to the current file and line number so that you can do fancy stuff like running only the current test or test codeunit.
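
The task itself just ends up invoking navcontainerhelper. Something along these lines is the sort of command it runs – the container name and credential are placeholders, and I’m relying on the testCodeunit parameter of Run-TestsInBCContainer to filter down to the codeunit you’re editing:

# the sort of command a VS Code task might run to execute a single test codeunit
# 'bcsandbox' and the credential are placeholders for your own container and login
$Credential = Get-Credential
Run-TestsInBCContainer -containerName 'bcsandbox' `
  -credential $Credential `
  -testCodeunit 'Sales Document Tests'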

AL Test Runner

However, I was keen to improve on this and so have started work on a VS Code extension – AL Test Runner.

Running the current test with AL Test Runner and navcontainerhelper

The goals are to:

  • Make it as simple as possible to run the current test, tests in the current codeunit or all tests in the extension with commands and keyboard shortcuts
  • Cache the test results
  • Decorate test methods according to the latest test results – pass, fail or untested
  • Provide extra details e.g. error message and callstack when hovering over the test name
  • Add a snippet to make it easier to create new tests with placeholders for GIVEN, WHEN and THEN statements

Important: this is for running tests with the navcontainerhelper PowerShell module against a local Docker container. Please make sure that you are using the latest version of navcontainerhelper.

Getting Started

  • Download the extension from the extension marketplace in VS Code and reload the window.
  • Open a folder containing an AL project
  • Open a test codeunit; you should notice that the names of test methods are decorated with an amber background (as there are no results available for those tests)
    • The colours for passing, failing and untested tests are configurable if you don’t like them or they don’t fit with your VS Code theme. Alternatively you can turn test decoration off altogether if you don’t like it
  • Place the cursor in a test method and run the “AL Test Runner: Run Current Test” command (Ctrl+Alt+T)
  • You should be prompted to select a debug configuration (from launch.json), company name, test suite name and credentials as appropriate (depends if you’re running BC14 or BC15, if you have multiple companies, authentication type etc.)
    • I’ve noticed that sometimes the output isn’t displayed in the new terminal when it is first created – I don’t know why. Subsequent commands always seem to show up fine 🤷‍♂️
  • Use the “ttestprocedure” snippet to create new test methods

.gitignore

If you’re using Git then I’d recommend adding the .altestrunner folder to your .gitignore file:

.altestrunner/

Committing the config file and the test results xml files doesn’t feel like a great idea.

Prompting the User for Input with PowerShell

Sometimes you need to prompt the user to provide some value before you can complete your PowerShell script. You’ve got a few different options depending on what you’re asking the user to select from.

Parameters

Setting a parameter as mandatory without providing a value will prompt the user to enter one, like this:

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [string]$ImportantParameter
  )
}

Setting the parameter type ([string] in this case) isn’t essential but will help validate that the input is at least of the right type. The trouble with users is that they can, and will, enter any old nonsense as the parameter value and you need to be able to handle it.

The ValidateSet attribute helps out where you have a fixed set of values that are the only valid ones.

function Invoke-AmazingPowerShellFunction {
  Param(
    [Parameter(Mandatory=$true)]
    [ValidateSet('This','Or This','Or Possibly This')]
    [string]$ImportantParameter
  )
} 

If you don’t know at design-time what the valid options are going to be then you need a different approach.

Out-GridView

Out-GridView has an OutputMode parameter which allows you to specify whether the user should be able to select a value and if so, a single value or multiple values. It also allows you to set a title for the window and provides a filter to help the user find the right value. Good for when there is a lot to choose from. We use it, for example, to choose a project from Azure DevOps.

In passing I’ve also found Out-GridView useful when working with complex types e.g. from a web service response and I just want to be able to browse the values in the object. You can pipe anything to it and it will render it into a nice grid.
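
For example, something like this (the URL is only there for illustration) dumps a web service response into a grid that you can sort and filter:

# browse a web service response in a grid; the URL is just an example
Invoke-RestMethod -Uri 'https://api.github.com/repos/microsoft/navcontainerhelper/releases' | Out-GridView -Title 'navcontainerhelper releases'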

Write-Host ("You selected {0}" -f ('1','2','3' | Out-GridView -OutputMode Single -Title 'Please select a value'))

Roll Your Own

Recently I wanted to prompt the user to make a selection between some options in the terminal. In my experience the Out-GridView window doesn’t always open in the foreground and, if you’re using multiple monitors, won’t necessarily open near the window you’re executing the script in. I thought I’d try keeping the focus in the terminal window instead.

I couldn’t find anything already in PowerShell to print a list of options and prompt the user to choose one, so I wrote the below. I’d be interested to know if I’ve missed something obvious already built in though.

It takes a collection of strings that represent the options to choose between and some text to prompt the user with. The function prints the options with numbers next to them, waits for input from the user with Read-Host and matches it to their selection.

0 is hard-coded as a cancel option and will return an empty string, otherwise the string of the user’s selection is returned.

function Get-SelectionFromUser {
    param (
        [Parameter(Mandatory=$true)]
        [string[]]$Options,
        [Parameter(Mandatory=$true)]
        [string]$Prompt        
    )
    
    [int]$Response = 0;
    [bool]$ValidResponse = $false    

    while (!($ValidResponse)) {            
        [int]$OptionNo = 0

        Write-Host $Prompt -ForegroundColor DarkYellow
        Write-Host "[0]: Cancel"

        foreach ($Option in $Options) {
            $OptionNo += 1
            Write-Host ("[$OptionNo]: {0}" -f $Option)
        }

        if ([Int]::TryParse((Read-Host), [ref]$Response)) {
            if ($Response -eq 0) {
                return ''
            }
            elseif (($Response -ge 1) -and ($Response -le $OptionNo)) {
                $ValidResponse = $true
            }
        }
    }

    return $Options.Get($Response - 1)
} 
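
Using it looks something like this:

# example usage: prompt for an environment and handle the cancel option
$Choice = Get-SelectionFromUser -Options @('Development','Test','Production') -Prompt 'Which environment do you want to deploy to?'
if ($Choice -eq '') {
  Write-Host 'Cancelled' -ForegroundColor Red
}
else {
  Write-Host ("You selected {0}" -f $Choice)
}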

Satisfying Your Case-Sensitive Obsession with Regex

Obsession is probably a little strong, but I do like tidy code. You know – proper indentation, a sensible number of comments (which can be more than none but there shouldn’t be as much comment as code) and good names. Hungarian notation brings me out in a rash.

This extends to having keywords, variables and methods in the right case. While in CAL there was a lot of UPPERCASE, in AL there is far more lowercase. It took me a while to get used to but I prefer it now.

If you convert some CAL into AL then likely all the keywords are going to be in uppercase. The code will run fine, it just doesn’t look nice. In the below example my eye is drawn to the fact that some filters are being set, rather than what those filters are – on which records and fields.

You’ll notice that all the UPPERCASE words are highlighted in that example. That’s because they are all search results for my regular expression.

\b(?!NAV)(?!CACTMN)[A-Z]+\b
  • \b will match a word boundary – spaces, punctuation, start and end of lines – anything that denotes the start or end of a word
  • (?!) is a negative lookahead and does not find matches for the expression inside the brackets. This is useful for uppercase words that should be left uppercase like NAV or the AppSource suffix that you’ve added all over the place
    • Disclaimer: don’t ask me to explain lookaheads in any more detail than that – I don’t know. I’m not convinced that anyone actually properly knows how regex works 😉
  • [A-Z] matches uppercase characters between A and Z
  • + indicates that we’re looking for one or more of the previous group i.e. one or more uppercase letters

Altogether it translates to something like: match any word of one or more uppercase letters, except where it starts with “NAV” or “CACTMN” (the suffix we’re using in this app).
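
If you want to check what the expression will match before replacing anything you can try it from the shell first. A quick sketch with some made-up CAL-style lines:

# quick check of which words the expression matches; the sample lines are made up
$Pattern = '\b(?!NAV)(?!CACTMN)[A-Z]+\b'
$Sample = @(
  'SalesLine.SETRANGE("Document Type", SalesHeader."Document Type");'
  'IF SalesLine.FINDSET THEN'
  'NAVCACTMNSetup.GET;'
)
$Sample | Select-String -Pattern $Pattern -AllMatches | ForEach-Object { $_.Matches.Value }
# returns SETRANGE, IF, FINDSET, THEN and GET - NAVCACTMNSetup is skipped by the lookaheads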

Once you’ve found the matches find and replace is your friend. I love how VS Code gives you a preview of all the replaces that it is going to do. Very useful before you replace “IF” with “if” and realise you’ve also replaced “MODIFY” with “MODifY”.

You Can Ditch Our Build Helper for Dynamics 365 Business Central

I’m a bit of a minimalist when it comes to tooling, so I’m always happy to ditch a tool because its functionality can be provided by something else I’m already using.

In a previous post I described how we use our Build Helper AL app to prep a test suite with the test codeunits and methods that you want to run. Either as part of a CI/CD pipeline or to run from VS Code.

Freddy K has updated the navcontainerhelper PowerShell module and improved the testing capabilities – see this post for full details.

The new extensionId parameter for the Run-TestsInBCContainer function removes the need to prepare the test suite before running the tests. Happily, that means we can dispense with downloading, publishing, installing, synchronising and calling the Build Helper app.

The next version of our own PowerShell module will read the app id from app.json and use the extensionId parameter to run the tests. Shout out to Freddy for making it easier than ever to run the tests from the shell 👍
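
Something along these lines is all it needs to be – a sketch that reads the id from app.json and hands it to Run-TestsInBCContainer; the container name and credential are placeholders:

# read the app id from app.json and run all of the tests in that extension
# 'bcsandbox' and the credential are placeholders for your own container and login
$AppJson = Get-Content '.\app.json' -Raw | ConvertFrom-Json
$Credential = Get-Credential
Run-TestsInBCContainer -containerName 'bcsandbox' `
  -credential $Credential `
  -extensionId $AppJson.id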