Dynamics 365 Business Central Queries: Erm…where are the rest of my rows?!

This is a bit off-topic from what I’ve been blogging about lately, but I’ve been caught out by this before – and the other day so was a colleague – so I thought it was worth a post.

TL;DR

Be careful of the difference between the DataItemLink and DataItemTableFilter properties. DataItemLink defines the join between the dataitems in the query, while DataItemTableFilter is applied to the results after the join has been processed.

Intro

In theory the query object in Business Central/NAV ought to be very useful. Instead of using nested REPEAT…UNTIL loops like we used to – with the associated many round-trips to the database (or at least the cache) – we should be able to create a query that joins multiple tables and returns all the columns we need in a single round-trip.

In practice, I’ve often found queries frustrating to work with. Sometimes because they can’t support a more complex scenario, sometimes because the parameters don’t do quite what I’d expect. Maybe my expectations are wrong. Fine, but even so, trying to “debug” a query and figure out why the query you have designed gives the results that you are getting is not fun. Not quite as bad as developing reports – but still not fun.

Scenario

Let’s imagine that for some reason we need a list of items with the total base quantity from sales invoice lines – including where that total is zero. Typically you might write some code like this:

SalesLine.SetRange("Document Type",SalesLine."Document Type"::Invoice);
SalesLine.SetRange(Type,SalesLine.Type::Item);

if Item.FindSet() then
  repeat
    SalesLine.SetRange("No.",Item."No.");
    SalesLine.CalcSums("Quantity (Base)");

    //use that result for something...

  until Item.Next() = 0;

You figure that doing a CalcSums() for each item probably isn’t going to perform too well. Surely, this is exactly the sort of thing that we have queries for?

Version One

Knowing that we need all item records, including ones that don’t have corresponding sales line records, we are going to need a left join i.e. all records from table A and any matching records from table B.

For starters I’m going to create a query that just shows the data we’ve got – no grouping or summing just yet.

query 50100 "Frustrating Query"
{
    QueryType = Normal;
    elements
    {
        dataitem(Item; Item)
        {
            column(No; "No.") {}
            column(Description; Description) {}

            dataitem(Sales_Line; "Sales Line")
            {
                SqlJoinType = LeftOuterJoin;
                DataItemLink = "No." = Item."No.";
                
                column(Document_Type;"Document Type") {}
                column(Document_No;"Document No.") {}
                column(Quantity;"Quantity (Base)") {}
            }
        }
    }
}

The first few results from that query look like this.

No.      Description                  Document Type   Document No.   Quantity
1896-S   ATHENS Desk                  Invoice         102201         1
1900-S   PARIS Guest Chair, black     Quote                          0
1906-S   ATHENS Mobile Pedestal       Quote                          0
1908-S   LONDON Swivel Chair, blue    Quote                          0
1920-S   ANTWERP Conference Table     Order           101003         8
1920-S   ANTWERP Conference Table     Invoice         102202         4
1920-S   ANTWERP Conference Table     Invoice         102203         10
1920-S   ANTWERP Conference Table     Invoice         102205         4

Version Two

Cool. Now we need to sum the Quantity column. I’ll remove the Document No. column as we don’t want to group by that. Change the query design to this:

query 50100 "Frustrating Query"
{
    QueryType = Normal;
    elements
    {
        dataitem(Item; Item)
        {
            column(No; "No.") {}
            column(Description; Description) {}

            dataitem(Sales_Line; "Sales Line")
            {
                SqlJoinType = LeftOuterJoin;
                DataItemLink = "No." = Item."No.";
                
                column(Document_Type;"Document Type") {}
                column(Quantity;"Quantity (Base)")
                {
                    Method = Sum;
                }
            }
        }
    }
}

Now the results are:

No.      Description                  Document Type   Quantity
1896-S   ATHENS Desk                  Invoice         1
1900-S   PARIS Guest Chair, black     Quote           0
1906-S   ATHENS Mobile Pedestal       Quote           0
1908-S   LONDON Swivel Chair, blue    Quote           0
1920-S   ANTWERP Conference Table     Order           8
1920-S   ANTWERP Conference Table     Invoice         18

Version Three

Remember that we only wanted the sum of the base quantity for invoice lines. We’ve got a result for 1920-S order lines at the moment. That’s fine, we can use the DataItemTableFilter property to filter on the Document Type.

At least, you’d think so. So would I…and we’d both be wrong. Let’s add DataItemTableFilter = "Document Type" = const(Invoice) to the Sales Line dataitem.
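
The dataitem now looks something like this – a sketch showing only the Sales Line dataitem, the rest of the query is unchanged from Version Two:

            dataitem(Sales_Line; "Sales Line")
            {
                SqlJoinType = LeftOuterJoin;
                DataItemLink = "No." = Item."No.";
                DataItemTableFilter = "Document Type" = const(Invoice);

                column(Document_Type;"Document Type") {}
                column(Quantity;"Quantity (Base)")
                {
                    Method = Sum;
                }
            }

That change gives these results: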

No.      Description                  Document Type   Quantity
1896-S   ATHENS Desk                  Invoice         1
1920-S   ANTWERP Conference Table     Invoice         18

Erm…where are the rest of my rows?!

Q: what has happened to items 1900-S, 1906-S and 1908-S?
A: there are no matching sales lines for those items

Q: but…that’s why we used a LeftOuterJoin. That should include items with no matching sales lines. I thought that was the point of specifying the join type?
A: yes, except DataItemTableFilter isn’t used as part of the join

Q: …eh?

Explanation

I expected, and maybe you did too, that DataItemTableFilter would be used to filter the Sales Line table before joining it to the Item table. It turns out that the join is processed first, respecting the DataItemLink property, and the DataItemTableFilter property is used to filter the joined results afterwards.

In SQL terms the filters go into the HAVING clause and not the ON clause. We might have expected something like this:

SELECT Item.No_,
       Item.Description,
       SalesLine.[Document Type],
       SUM(SalesLine.[Quantity (Base)]) AS Quantity
FROM [CRONUS International Ltd_$Item] AS Item
LEFT JOIN [CRONUS International Ltd_$Sales Line] AS SalesLine
  ON SalesLine.No_ = Item.No_
  AND SalesLine.[Document Type] = 2
GROUP BY Item.No_, Item.Description, SalesLine.[Document Type]

with SalesLine.[Document Type] = 2 forming part of the ON clause (the definition of the join between the tables). What you actually get is something like this:

SELECT Item.No_,
       Item.Description,
       SalesLine.[Document Type],
       SUM(SalesLine.[Quantity (Base)]) AS Quantity
FROM [CRONUS International Ltd_$Item] AS Item
LEFT JOIN [CRONUS International Ltd_$Sales Line] AS SalesLine
  ON SalesLine.No_ = Item.No_
GROUP BY Item.No_, Item.Description, SalesLine.[Document Type]
HAVING SalesLine.[Document Type] = 2

with a HAVING clause at the end which restricts the results after the tables have been joined. (The actual SQL queries you’ll see if you run SQL Server Profiler will be different – stuffed full of parameters and ISNULLs – but this is the general idea).

Conclusion

That was a long way of saying: be careful how you use the DataItemTableFilter property – it might not do what you’re expecting. So how can you define an ON clause where the filter is a constant value rather than a field in another table? I don’t know.

As far as I can see, since DataItemLink only allows you to define joins between table fields, you’d need to engineer the data so that all of your joins are between fields and not constant values. I’d like to be wrong, but if I’m not this is a pretty big flaw in queries.

It’d be nice to be able to add constant values into table joins for this kind of thing. While we’re wishing, it would be even better to be able to define queries dynamically at run-time and build and execute them on the fly. It seems I’m not the only one with a query wishlist: https://experience.dynamics.com/ideas/search-ideas/?q=queries&forum=e288ef32-82ed-e611-8101-5065f38b21f1

Working with Translations in Dynamics 365 Business Central

Intro

Languages: what an almighty headache. Computerphile have a great video that describes just how big the problem is: https://www.youtube.com/watch?v=0j74jcxSunY

Perhaps my perception is skewed by my ignorant native-English-speaker point of view. I haven’t grown up in a country where learning multiple languages and being able to switch between them is essential. Sure, I wish* I could speak more languages but mostly I can get by assuming other people will speak English.

*wish as in “finding a magic lamp” not as in “actually having the patience to put in the effort required to learn and practise”

However, if you are publishing apps into AppSource or any other setting where you plan on supporting different countries you are going to need to deal with translations at some point.

Overview

Visual Studio Code will create a .xlf (XLIFF) file containing all the literal strings that you have used in your application. They will mostly come from Caption, ToolTip and Label properties. This file will, therefore, contain your English (US) captions – assuming that you are coding in US English.

We need additional .xlf files for each translation that we want to support. The files should be named in the format [language]-[COUNTRY].xlf e.g. en-GB.xlf, fr-FR.xlf, de-DE.xlf.
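
For context, each string in those files is held in a trans-unit element. Stripped right down (the generated files carry more attributes and notes than this), a translation unit looks something like:

<trans-unit id="...">
  <source>Hello, World!</source>
  <target>Bonjour le monde !</target>
</trans-unit>

The source element holds the original US English caption and the target element holds the translation for the language that the file is for.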

Although XLIFF is a standard for software translations, after scouring the internet for hours I couldn’t find a tool that did what I wanted. It seems I’m not alone, judging by the number of people who have started to write their own tooling (I haven’t tried those solutions myself, I’m just aware of them). So what do I want? At least the following:

  • Somewhere to maintain a list of languages and countries that my app needs to support
  • Creation of new .xlf files for each language/country combination
  • Keeping the translation units (the strings present in the app that need translating) in sync between the master .xlf file and each translation file
  • Support for submitting strings to machine translation and feeding the results back into each translation file

See also https://community.dynamics.com/business/b/businesscentraldevitpro/posts/translate-your-extension-automatically-with-azure-translator-text which describes a similar approach to ours in a Visual Studio Code extension.

My preference is to build some support for translations into our PowerShell module. The main reason is so that we can use the functions in our build process.

Translations to Maintain

I’ve written before about us having an environment.json file which holds settings about the repository which we use in the build. This seemed like a sensible place to also keep our list of translations. It looks like this:

{
  "translationSource": ".\\Translations\\Hello World.g.xlf",
  "translations": [
    {"country": "FR", "language": "fr"},
    {"country": "BE", "language": "nl"},
    {"country": "DE", "language": "de"},
    {"country": "GB", "language": "en"}
  ]
}

The translationSource key holds the path to the main .xlf file that is updated by the compiler and translations is an array of country/language pairs that are required.
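
Reading those settings back in PowerShell is straightforward with ConvertFrom-Json – for example:

$settings = Get-Content '.\environment.json' -Raw | ConvertFrom-Json
# build the expected translation file names from the language/country pairs
$settings.translations | ForEach-Object { '{0}-{1}.xlf' -f $_.language, $_.country }

which gives the file names we need: fr-FR.xlf, nl-BE.xlf, de-DE.xlf and en-GB.xlf.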

Translator Text in Azure Cognitive Services

You’ve got some choice when it comes to online translation services. We use the Translator Text service that is part of Azure Cognitive Services. We’re already using a bunch of Azure services so it makes sense to keep them in one place. It has a REST API that we can call with the strings to translate and the language to translate them into. Perfect. But, first we’ll need an API key to authenticate with the service.

  • Log in to https://portal.azure.com to manage your Azure resources
  • Use the search bar to find “Cognitive Services”
  • Click Add
  • In the Marketplace search for “Translator Text” and click Create
  • Give the service a name
  • Select an Azure subscription to link it to – you can either grab a free trial or create a paid subscription. I’ve created a Pay-As-You-Go subscription. You need credit card details but we’re going to use the free pricing tier for now anyway
  • Select a pricing tier (see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/translator-text-api/) or just select F0 for the free tier
  • Select or create a new resource group to hold your new service
  • Open your new resource from the list of Cognitive Services and click on Keys (left hand navigation menu)
  • You can now use either of the two keys that you’ve got to call the service
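
For example, a minimal call to the Translator Text v3 REST API with one of those keys looks something like this in PowerShell (a sketch only – batching, error handling and the extra region header needed for non-global resources are left out):

# one of the two keys from the portal
$apiKey = '<your Translator Text key>'

# translate from English to French; the "to" parameter can be repeated for multiple target languages
$uri = 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr'

# the body is a JSON array of objects, each with a Text property
$body = ConvertTo-Json -InputObject @(@{Text = 'Hello, World!'})

$response = Invoke-RestMethod -Uri $uri -Method Post -Body $body `
  -ContentType 'application/json; charset=utf-8' `
  -Headers @{'Ocp-Apim-Subscription-Key' = $apiKey}

# one result per input text, each holding a translations array
$response[0].translations[0].text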

PowerShell

We’ve got two key functions in our PowerShell module:

  • Translate-App – this is the entry point to call other functions which:
    • Find the source .xlf file and the environment.json file
    • Create any new .xlf files that are required (by copying the source file and changing the target language)
    • Synchronise the translation units between the source and the target files – add any new strings that require translation and remove any strings that are no longer present in the source file
    • Identify strings that require translation and call the Translator Text service to translate them into the target language
    • Populate the target .xlf file with the translated strings
  • Test-TranslationIsComplete – we use this as part of our build process (see the sketch after this list) to verify that
    • All of the required translation files exist
    • Each of those files has all the translation units that are present in the source .xlf file
    • It will throw an error if either of those things is false, otherwise it will return true
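
To sketch how that fits into a build, the pipeline just needs a step that runs the function and lets any error it throws fail the build – something like this, assuming the PowerShell module is already installed on the build agent (whether the function needs parameters depends on the module, so treat this as illustrative only):

# hypothetical build step calling the function from our module
- powershell: Test-TranslationIsComplete
  displayName: 'Check translations are complete'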

This is the code (hosted here if you can’t see it: https://gist.github.com/jimmymcp/41bd8d3ac3fd6aa742089029fcd990fb)

A few notes about it:

  • I’ve just lifted it from the PowerShell module so it won’t work as is
    • You’ll need to remove the Export-ModuleMember lines
    • Line 173 in Translate-App.ps1 makes a call to a function I haven’t given you to read the API key for the Translator Text service. The module creates a json config file with keys for various settings and this is one of them
  • The free tier of the Translator Text service is throttled. You’ll probably hit the limit if you’ve got more than a few hundred strings to translate into several languages – you just need to wait for a few minutes and run the function again (or choose a paid tier)

Of course, being an English-only speaker I don’t have any way of checking how good these translations are but at least it gives a starting point for a human to verify.

Building Microsoft Dynamics 365 Business Central Apps on Azure DevOps Hosted Agents

This is a quick follow up to this post. If you want an intro to building AL apps for Business Central you might want to check that out first.

In order to build your apps you need a build agent running somewhere which will listen for new jobs and run the scripts, create the Docker containers, run the tests or do whatever else you define in the build file.

You can install an agent on your own server somewhere and authenticate with a personal access token. You’re in charge of the hardware, installing agents and scaling the performance as you see fit.

Hosting

The alternative is to choose one of the hosted agents that Microsoft provide. The obvious attraction is that you don’t need to maintain any hardware. You just specify the type of machine (Ubuntu, Mac, Windows) that you want the job to run on and pay-as-you-go. Or possibly, don’t pay at all.
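
In a YAML build definition that choice is just the pool section – something like this ('windows-2019' corresponds to the “Hosted Windows 2019 with VS2019” pool mentioned below; the image names change over time):

pool:
  vmImage: 'windows-2019'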

With the free tier of Azure DevOps you get:

  • One build job running at a time (other jobs will be queued until that has finished)
  • 1,800 minutes of build time per month

That’s cool. You can keep tabs on your usage and purchase more parallel jobs from here: https://dev.azure.com/<your organisation>/settings/buildqueue?_a=concurrentJobs

If you hit the limits of the free tier you can check out the cost of more jobs here: https://azure.microsoft.com/en-us/pricing/details/devops/azure-devops-services/. At the time of writing 40 USD gets you a second concurrent job and lifts the build minutes per month restriction to unlimited.

Self-Hosting

So…why would you not run on hosted agents? Cost is a consideration. Additional parallel jobs on self-hosted agents are only 15 USD per month. But, what’s 25 dollars per month between friends? That’s assuming you can’t live within the limits of the free tier. If you can then using hosted agents is free.

The main consideration as far as I can see is performance. If you are going to create a Docker container as part of your build (and if you aren’t then I’m not sure what you’re doing) then self-hosted agents are always going to have an advantage. You can have the right Docker image already downloaded before the build begins, but a hosted agent will always need to download it first.
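
For example, on a self-hosted agent you can pull the image ahead of time (say, on a schedule outside of working hours) so that it is already in the local cache when a job starts:

docker pull mcr.microsoft.com/businesscentral/sandbox:latest-ltsc2019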

Our builds, running on a self-hosted agent, typically take between 8 and 15 minutes to complete, depending on how many tests are included in the build. Using the “Hosted Windows 2019 with VS2019” agent pool, a test build (which just downloads the Docker image and creates the container) takes around 18 minutes – pulling the latest production sandbox image.

NavContainerHelper is version 0.6.2.3
Host is Microsoft Windows Server 2019 Datacenter - ltsc2019
Docker Client Version is 18.09.6
Docker Server Version is 18.09.6
Pulling image mcr.microsoft.com/businesscentral/sandbox:latest-ltsc2019
latest-ltsc2019: Pulling from businesscentral/sandbox

Add in some time to actually build and publish the app, run the tests and upload the results and we’re probably looking closer to 25 minutes for the whole thing.

I’ll leave it up to you to decide whether you care enough about that performance difference to host build agents yourself. Then again, 1800 / 25 = 72 builds per month before you need to consider paying for more. Maybe that’s all you need? Especially if you’re just getting started with Azure DevOps, builds, YAML and all that jazz…

Working with Azure DevOps Pipelines in YAML

Overview

This post is an update to a post I made about YAML pipelines here. We’ll also take the opportunity to discuss why you might want to define a pipeline with YAML.

Wait…What?

What the heck are we talking about? (skip this bit if you do know what we’re talking about) A pipeline defines a series of tasks, running on defined environments, that are performed with your code. In Azure DevOps they come in two flavours:

  • Build – for us that means taking our AL source code, splitting it into two (test app and production app), compiling them, signing them, publishing and installing them into a new container, running the tests and saving the .app files as artefacts of the build
  • Release – taking the built software and deploying it into one or more test and/or production environments – we don’t currently use release pipelines

Pipeline as Code, Why?

Defining the steps involved in your pipeline in a YAML file is sometimes called “pipeline as code” because the YAML file is checked-in to your repository alongside your source code.

The benefit is that your pipeline is version controlled. You can view its history, compare versions, blame/annotate etc. You could also have different versions of your pipeline in different branches and include it in a pull request.

The downside is having yet another markup language to learn. What are you supposed to put in this file anyway?

Defining the Pipeline

Let’s consider two ways of creating and maintaining your pipeline file. I’m sticking to Visual Studio Code and Azure Repos/Pipelines in Azure DevOps as that’s what I’m familiar with. Loads of other options are available, many of them supported in Azure DevOps.

In Azure DevOps

The features in Azure DevOps and the UI change frequently as they add new stuff. Microsoft announced loads of changes, including a new YAML editing experience (below) and YAML release pipelines, at Build 2019. You can browse through and watch sessions here: https://www.microsoft.com/en-us/build – search for DevOps to jump to the sessions related to this post.

I’ve got a Hello World app with the AL code hosted in Azure Repos. Let’s walk through creating the pipeline file in the UI. Select Builds from the Pipelines menu and hit the “New pipeline” button.

Choose where you want this pipeline to fetch the source code from. In my case it’s in an Azure Repos Git repository.

And I’ve only got one in this project, so I’ll select that.

I don’t have an existing pipeline file, so I’ll create a starter pipeline.

And there it is.

Great…but what does all that mean?

Firstly, this is a pretty neat editor. It works a lot like Visual Studio Code. Maybe it even is Visual Studio Code behind the web page, for all I know. You can hover over different parts of the file and get tooltips about what they do. You also get intellisense when you hit Ctrl+Space giving you some info about the valid options for this part of the file.
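
For reference, the starter pipeline it creates looks something like this (the exact template may vary as Microsoft update it):

# trigger a build when changes are pushed to master
trigger:
- master

# run on a Microsoft-hosted Ubuntu agent
pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'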

Briefly, this pipeline will:

  • Trigger a build when changes are pushed to the master branch
  • Run the build on a hosted ubuntu agent (this is the “we love Linux, we love open-source” Microsoft after all)
  • Run a script to echo “Hello, World!”
  • Run another script to echo some more text

Let’s save and run the pipeline. I’ll commit straight to the master branch for now.

I’m bounced over to see the build that has been scheduled and can watch it run. This is the result:

You can click into each of the steps to see the logs for that step.

In Visual Studio Code

Notice that the file created above was automatically named .azure-pipelines.yml. That is the magic name that Azure DevOps will automatically recognise as defining a pipeline. That means if you create a file with that name and push it to Azure Repos it will automatically create a pipeline using that file as the definition for you.

When I flick back to Visual Studio Code I’ve got a commit waiting to be fetched into my local repo which was created when I saved the pipeline file. Now that I’ve got .azure-pipelines.yml locally I can edit it and source control it just like anything else.

To get the same editing experience as you had online you’re going to want to grab the Azure Pipelines extension for Visual Studio Code. That will recognise that the file is a pipeline definition and give you all the intellisense and more-info goodness you had in the browser.

Further Reading

For more information about what you can do with the .yml file check out https://aka.ms/yaml. Otherwise, I’ll follow up with something more Business Central specific in another post.

Testing Your Microsoft Dynamics 365 Business Central Tests

Seeing as I’m on a bit of a run of posts about testing, let’s look at it from a slightly different angle.

Testing the Test

If we’re going to rely on automated tests to verify that our code (still) works then we need to have confidence that the tests themselves actually work.

Writing the Test First

This is why it is helpful to write and run the tests first. When you start developing a new feature or working on a bug fix you have identified some desired behaviour that the system doesn’t yet exhibit. Given this and this, when something or other then this is the behaviour I’m expecting.

Writing a test for that behaviour and seeing it fail confirms that the desired behaviour is missing. That gives you some confidence that you’re on the right lines – the system should do this, but doesn’t – yet.

When you write the bug fix or new feature and see the test pass it gives you much more confidence that your code actually works. You demonstrated beforehand that the desired behaviour was missing and that now it is there. Have a gold star.

Writing the Test Afterwards

You could write the test afterwards and we’ve done a lot of that as we’ve built up tests for our older code that didn’t have any. Whenever I write tests after the fact I do miss the initial stage of having an expected failing test though.

Not completing the given or the when can be a useful way to test the test. Asserting the expected results when you haven’t done all the required steps should normally cause the test to fail.

For example:

//[GIVEN] an item with my bespoke field populated
LibraryInventory.CreateItem(Item);
SomeBespokeValue := ...;

//leave these lines commented out initially to see the test fail
//Item.Validate("Bespoke Field",SomeBespokeValue);
//Item.Modify(true);

//[WHEN] the item is validated on a sales line
LibrarySales.CreateSalesDocumentWithItem(...);

//[THEN] some bespoke field on the sales line should be set
Assert.AreEqual(SomeBespokeValue,SalesLine."Bespoke Field",...);

Seeing the test fail with those lines commented out and then seeing it pass when you uncomment them will give you more confidence that the test and the behaviour that it is testing work as required. If you are testing code that you think already works and the test always passes it is hard to be sure why the test is passing. Hopefully because the code works – but possibly because the test itself is broken and will always pass, even if the code doesn’t work.

Confidence

The point is to try and get some confidence in your test results. Are you happy to ship the software when all your tests pass? If not, why not? Because you don’t have enough tests? Because you don’t trust that a passing test means working software?

Having a bunch of tests whose results you don’t trust is probably worse than having no tests at all.

Business Central

This is all pretty generic and if you’re interested in the principles you can search for Test-Driven Development (TDD) or Behaviour-Driven Development (BDD) and read what people far more qualified than me have to say about it.

Let’s talk about Business Central specifically for a minute. One of the best things about automated testing compared to manual testing is that everything is rolled back at the end to return the database to the same state it was in at the start. However, that can make life a little difficult when you are trying to inspect the data mid-test and see what has happened.

There are various ways you might want to extract the data at a given moment: write to a file, throw an error with a bunch of values you are interested in, read uncommitted data in SQL. We’ll just talk about two approaches:

Debugger

You can debug test code just like any other code. Set a breakpoint in your test, attach the debugger and run the test from the Test Tool page. Step through, add watches and evaluate debug expressions. The debugger in VS Code is getting better all the time, exposing more details about the variables you are interested in and SQL statements that have been executed.

Perfect for diving into the details and stepping through line by line, but not always the easiest to get an overview of what is happening.

Another Client Session

Another option is to open another client session while you are debugging. Set a breakpoint, attach the debugger and start running a test from the Test Tool page.

[Image: Executing Tests dialog]

Debugging the test will block the session that you started it from – you’ll get the “working on it” dialog – but you can open a different session in another tab or in another browser.

The only snag with this is that some of the records that you want to read might be locked and you’ll get an error trying to open the corresponding page.

[Image: Record Locked by Another User error]

“The operation could not complete because a record was locked by another user.” Bummer.

Turns out there is another way to read the data in that session.

Avoid Locks With Page=<pageid>

You can add parameters to the web client URL to navigate to specific tables, reports or pages. In my example I can’t open the Items list from the menu because the record is locked by another user.

If I go to the Item List page with http://<base web client URL>?page=32 then the page loads with my test data. I can open the item card, navigate to other pages and run the Page Inspector (Ctrl+Alt+F1) to view all the fields in the table, filters, extension details etc.

As I step through the code in the VS Code debugger I can refresh the pages in this session and see the updates to the record. Beautiful.

[Image: Item Card with test data]

Further Reading

If you’re interested in getting stuck into testing in Business Central grab yourself a copy of Automated Testing in Microsoft Dynamics 365 Business Central.

It was my pleasure to make a small contribution to this book as a technical reviewer and by writing the foreword.

https://www.packtpub.com/business/automated-testing-microsoft-dynamics-365-business-central