Stop Writing Automated Tests and Get On With Some Real Code

To be fair, these weren’t the exact words that were used, but a view was expressed from the keynote stage at Directions last week along these lines: frustration that developers now have to concern themselves with infrastructure, like Docker, and with writing automated tests rather than “real” code.

I couldn’t resist a short post in response to this view.

If It Doesn’t Add Value, Stop Doing It!

First, no one is forcing you to write automated tests – apart from Microsoft, who want them with your AppSource app submission. Even then, I haven’t heard of Microsoft rejecting an app because it wasn’t accompanied by enough automated tests.

I’m an advocate of developers taking responsibility for their own practices. Don’t follow a best practice simply because someone else tells you it’s a best practice. You know your scenario, your team, your code and your customers better than anyone else. You are best placed to judge whether implementing a new practice is worth the cost of getting started with it.

AppSource aside, if you are complaining about the amount of time you have to spend on writing tests then you have no one to blame but yourself. Or maybe your boss. If you don’t see the value in writing automated tests then you probably should stop wasting your time writing them!

Automated Tests vs “Real” Code

Part of the frustration with tests seemed to be that they aren’t even “real” code. If by “real” code we are referring to the code that we deliver and sell to customers then no, tests aren’t real code.

But what are we trying to achieve? Surely working, maintainable code that adds value for our customers.

We might invest in lots of things in pursuit of that goal. Time spent manually testing, sufficient hardware to develop and test the code on, an internet connection to communicate with each other and the customer, office space to work in, training courses and materials, coffee. We’re not selling these things to the customer either but no one would question that they are necessary to achieve the goal of delivering working software. Especially the coffee.

Whether or not automated tests are “real” code is the wrong question. The important judgement is whether the time spent on writing them makes a big enough contribution to the quality of the product that you eventually ship.

I won’t make the case for automated testing here. That’s for a different post. Or a different book. Suffice to say, I do think it is worth the investment.

But We’ve Got a Backlog of Code Not Covered By Tests

One problem you might have is that you’ve got a backlog of legacy code that isn’t covered by any automated tests. Trying to write tests to cover it all will take ages. This frustration also seemed to be expressed by the speaker at Directions. It even got a round of applause from some of the Directions audience.

My response would be the same – you are best placed to make a pragmatic judgement. Of course it would be nice to have 100% code coverage of your tens of thousands of lines of legacy code – but if you’ll have to stop developing new features for six months to achieve it, is it worth it? Probably not.

Automated tests should give you confidence that the code works as expected. If you are already confident that your existing code works then there might be limited value in writing a suite of tests to prove it.

Try starting with tests to cover new features that you develop or bug fixes. With these cases you’ve got some code that you aren’t confident works as expected – or that you know doesn’t. Take the opportunity to document and prove the expected behaviour with some tests. Over time you’ll build a valuable suite of tests that you can run to demonstrate that each new release of your product works and that bugs haven’t been reintroduced.

With some practice you’ll find that you can use the library codeunits to create scenarios with little test code, e.g. you can create a customer, item and sales order, post it and get the posted sales invoice in 2 lines of code.
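Something like this, as a minimal sketch – it assumes you are inside a test codeunit and that Microsoft’s standard test library codeunit "Library - Sales" is available, which provides CreateSalesOrder and PostSalesDocument:

[Test]
procedure PostedSalesInvoiceIsCreated()
var
    SalesHeader: Record "Sales Header";
    SalesInvoiceHeader: Record "Sales Invoice Header";
    LibrarySales: Codeunit "Library - Sales";
begin
    //creates a customer, an item and a sales order for them
    LibrarySales.CreateSalesOrder(SalesHeader);

    //posts the order (ship and invoice) and fetches the resulting posted sales invoice
    SalesInvoiceHeader.Get(LibrarySales.PostSalesDocument(SalesHeader, true, true));

    //...now assert whatever the posted invoice should look like
end;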

Interested? More here

Debugging the Next Session in Business Central

Business Central v15 includes some good new capabilities for developers. Access modifiers for objects, smarter code analysis, background page tasks – there is a full list here: https://docs.microsoft.com/en-us/dynamics365-release-plan/2019wave2/dynamics365-business-central/developer-tools

I’ve just been trying out the new debugger capability, specifically being able to attach the debugger to a service and debug the next session to hit a breakpoint or error.

A Brief Nostalgia Trip…

Excuse me if I indulge in a little nostalgia. If you don’t care about this and just want to know how it works then you can skip to “spare me the history lesson”.

The Classic Client Years

Still here? Then maybe you have been around NAV long enough to remember the introduction of the RoleTailored Client. We’d been used to having the Classic Client debugger for years. It wasn’t perfect, but we knew our way round it. We could easily switch between writing and debugging code, debug an application server or even debug a posting routine in live and lock the whole system – anyone else do that when they first started in support? Life was good.

The RoleTailored Client Years

Then the RoleTailored Client was introduced and it felt like we were developing with one arm tied behind our backs. No debugger. You could still debug in Classic Client but the clients weren’t necessarily even running the same C/AL code – thanks to the ISSERVICETIER keyword.

I know you could find the source that the service tier was actually running, attach Visual Studio to the Server.exe process and debug the C# but not many people wanted to do that. MESSAGE debugging was far more common. Especially entertaining if someone left a message box in live and you got a call from the customer wondering what some mysterious pop-up was about. Connoisseurs wrapped their MESSAGEs in

IF USERID = 'sa' THEN...

By NAV 2013, RTC was the only client customers could use and we had to be able to debug. To be fair, Microsoft came up with the goods and the new debugger was better than what we used to have in the Classic Client. Especially because we could debug other sessions connected to the same service tier or the next session to connect. Ask the user to repeat the steps that lead to the error and debug their session – perfect. Also great for debugging web service calls.

The Business Central Years

And then along came Business Central. The RoleTailored Client, complete with debugger, is going to be removed and we don’t quite have a replacement for everything we rely on it for. Sound familiar?

Don’t get me wrong, I love VS Code. I love the VS Code debugging experience. But how can I debug other user sessions? How can I debug web service calls?

Spare me the History Lesson, How Does it Work?

Open up launch.json and hit the Add Configuration button in the bottom right hand corner and you’ll notice a couple of new options:

  • Attach to the next client on the cloud sandbox
  • Attach to the next client on your server

Pick one of those and you’ll notice that the configuration it creates has a request value of attach.

breakOnNext determines the type of session that the debugger will be attached to: Background, WebClient or WebServiceClient.

Give the configuration a sensible name so that you’ll be able to refer to it when you attach the debugger. Attach the debugger by opening the debug pane, selecting your configuration and clicking the Start Debugging button.
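The exact properties depend on which option you picked; for the on-premise case the entry in the configurations array might end up looking something like this (the server name and instance below are placeholders for your own environment):

{
    "name": "Attach to the next web service call",
    "type": "al",
    "request": "attach",
    "server": "http://bcserver",
    "serverInstance": "BC",
    "authentication": "UserPassword",
    "breakOnNext": "WebServiceClient",
    "breakOnError": true
}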

Set some breakpoints in your code and hit them, either with some activity in the web client or with a web service call.
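For the web service case, any call that runs your AL code will do. A minimal sketch from Windows PowerShell, assuming an on-premise instance called BC on the default OData port, NavUserPassword authentication and API services enabled (the server name is a placeholder):

$credential = Get-Credential
#any web service call that runs AL code will do - the standard APIs, or a page/codeunit you have published yourself
Invoke-RestMethod -Uri 'http://bcserver:7048/BC/api/v1.0/companies' -Credential $credential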

BreakOnNext Support

Note: the help for breakOnNext states “The sandbox version only supports attaching to a WebService Client”. This seems to apply to sandbox Docker containers (e.g. from mcr.microsoft.com/businesscentral/sandbox) as well as to cloud sandboxes. You can, however, use the other breakOnNext options with an on-premise Docker image (mcr.microsoft.com/businesscentral/onprem).

Using Templates in YAML Pipelines in Azure DevOps

So far we’ve been considering how you can create a yaml pipeline to define the steps required to build the code in a single repository. Create a .azure-pipelines.yml file, add the stages, jobs and steps and away you go. Cool.

What if you’re building multiple apps with the source code in multiple repositories though? You could just copy your pipeline definition from repo to repo. What happens when you want to make changes to the pipeline? Are you going to copy the changes here, there and everywhere?

No. You’ve got more self-respect than that. You want a single pipeline definition that is shared across the repos that need it. In which case, templates will be of interest.

Create a Template File

If you’ve got a yaml pipeline definition that already works for you, you’re probably going to want to use that as the basis of your template. Copy and paste your pipeline into a new yaml file. You’ll probably want to create a new project or repo to hold this template file.

Remove Trigger

If you’ve got a trigger section in the pipeline you’re copying from (to trigger the pipeline when changes are pushed to certain branches) you can remove that from the template file.

Convert Variables to Parameters

If you have any variables in the pipeline you will need to convert them to parameters. Use the parameters keyword…simple enough. Notice that you can still provide default values for the parameters. If parameter values are not supplied by the pipeline that is using the template, these default values will be used. For example:

parameters:
  image_name: mcr.microsoft.com/businesscentral/sandbox
  container_name: Build
  company_name: My Company
  user_name: admin
  password: P@ssword1

Any references to variables in the steps will need to be changed to refer to the parameters instead. Rather than this:

- task: PowerShell@1
  displayName: Create build container
  inputs:
    scriptType: inlineScript
    inlineScript: >
      Import-Module navcontainerhelper;
      New-NavContainer -containerName $(container_name)...

Use ${{parameters.[parameter_name]}} like this:

- task: PowerShell@1
  displayName: Create build container
  inputs:
    scriptType: inlineScript
    inlineScript: >
      Import-Module navcontainerhelper;
      New-NavContainer -containerName ${{parameters.container_name}}...

I’ve called my template file build-template.yml and the first few lines look like this:

parameters:
  image_name: mcr.microsoft.com/businesscentral/sandbox
  container_name: Build
  company_name: My Company
  user_name: admin
  password: P@ssword1
  license_file: C:\Users\james.pearson\Desktop\Licence.flf

stages:
- stage: build
  displayName: Build
  jobs:
  - job: Build
    pool:
      name: Default
    steps:
      - task: PowerShell@1    
        displayName: Create build container
        inputs:
          scriptType: inlineScript
          inlineScript: > 
            Import-Module navcontainerhelper;
            $Credential = [PSCredential]::new('${{parameters.user_name}}',(ConvertTo-SecureString '${{parameters.password}}' -AsPlainText -Force));
            ...

Change the Pipeline to Use the Template

Now you want to change the pipeline definition to use the template yaml file that you have created. Include a repository resource, specifying the name with the repository key.

The type key refers to the host of the git repo. Slightly confusingly, ‘git’ means an Azure DevOps repository; you can also refer to templates held in GitHub repos. The name key is in the format Project/Repository – in my example both are called ‘Templates’. Define a ref (generally a branch or tag) in the template repo that specifies the version of the template you want.

trigger:
  - '*'

resources:
  repositories:
    - repository: templates
      type: git
      name: Templates/Templates
      ref: refs/heads/master

stages:
- template: build-template.yml@templates
  parameters:
    image_name: mcr.microsoft.com/businesscentral/sandbox
    company_name: My Company 

Templates can be used at different levels in the pipeline to specify stages, jobs, steps or variables – see here for more info. In my example the template file is specifying stages to use in the pipeline.

My pipeline simply becomes a template key beneath the stages key. The value is in the format [filename]@[repository]. The repository value here is taken from the repository key specified above. Supply parameter values with the parameters key. Any parameter values that are not supplied will take the default values from the template file.
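The same mechanism works at the other levels too. For comparison, a hypothetical steps-level template could be pulled into a job like this (the file name is made up for illustration):

steps:
- template: build-steps.yml@templates
  parameters:
    container_name: Build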

And there you have it. A single template file that you can reuse across your different repos. Make changes to your pipeline once and have them used wherever the template is used.

Dynamics 365 Business Central Queries: Erm…where are the rest of my rows?!

This is a bit off-topic to what I’ve been blogging about lately, but I’ve been caught out by this before – and the other day so was a colleague – so I thought it was worth a post.

TL;DR

Be careful of the difference between DataItemLink and DataItemTableFilter properties. DataItemLinks define the join between the dataitems in the query while DataItemTableFilters are applied to the results after the join has been processed.

Intro

In theory the query object in Business Central/NAV ought to be very useful. Instead of the nested REPEAT…UNTIL loops we used to write, with the associated many round-trips to the database (or at least the cache), we should be able to create a query to join multiple tables and return all the columns we need in a single round-trip.

In practice, I’ve often found queries frustrating to work with. Sometimes because they can’t support a more complex scenario, sometimes because the parameters don’t do quite what I’d expect. Maybe my expectations are wrong. Fine, but even so, trying to “debug” a query and figure out why the query you have designed gives the results that you are getting is not fun. Not quite as bad as developing reports – but still not fun.

Scenario

Let’s imagine that for some reason we need a list of items with the total base quantity from sales invoice lines – including where that total is zero. Typically you might write some code like this:

SalesLine.SetRange("Document Type",SalesLine."Document Type"::Invoice);
SalesLine.SetRange(Type,SalesLine.Type::Item);

if Item.FindSet() then
  repeat
    SalesLine.SetRange("No.",Item."No.");
    SalesLine.CalcSums("Quantity (Base)");

    //use that result for something...

  until Item.Next() = 0;

You figure that doing a CalcSums() for each item probably isn’t going to perform too well. Surely, this is exactly the sort of thing that we have queries for?

Version One

Knowing that we need all item records, including ones that don’t have corresponding sales line records, we are going to need a left join, i.e. all records from table A and matching records from table B.

For starters I’m going to create a query that just shows the data we’ve got – no grouping or summing just yet.

query 50100 "Frustrating Query"
{
    QueryType = Normal;
    elements
    {
        dataitem(Item; Item)
        {
            column(No; "No.") {}
            column(Description; Description) {}

            dataitem(Sales_Line; "Sales Line")
            {
                SqlJoinType = LeftOuterJoin;
                DataItemLink = "No." = Item."No.";
                
                column(Document_Type;"Document Type") {}
                column(Document_No;"Document No.") {}
                column(Quantity;"Quantity (Base)") {}
            }
        }
    }
}

The first few results from that query look like this.

No.      Description                Document Type  Document No.  Quantity
1896-S   ATHENS Desk                Invoice        102201        1
1900-S   PARIS Guest Chair, black   Quote                        0
1906-S   ATHENS Mobile Pedestal     Quote                        0
1908-S   LONDON Swivel Chair, blue  Quote                        0
1920-S   ANTWERP Conference Table   Order          101003        8
1920-S   ANTWERP Conference Table   Invoice        102202        4
1920-S   ANTWERP Conference Table   Invoice        102203        10
1920-S   ANTWERP Conference Table   Invoice        102205        4

Version Two

Cool. Now we need to Sum the Quantity column. I’ll remove the Document No. as we don’t want to group by that. Change the query design to this:

query 50100 "Frustrating Query"
{
    QueryType = Normal;
    elements
    {
        dataitem(Item; Item)
        {
            column(No; "No.") {}
            column(Description; Description) {}

            dataitem(Sales_Line; "Sales Line")
            {
                SqlJoinType = LeftOuterJoin;
                DataItemLink = "No." = Item."No.";
                
                column(Document_Type;"Document Type") {}
                column(Quantity;"Quantity (Base)")
                {
                    Method = Sum;
                }
            }
        }
    }
}

Now the results are:

No.      Description                Document Type  Quantity
1896-S   ATHENS Desk                Invoice        1
1900-S   PARIS Guest Chair, black   Quote          0
1906-S   ATHENS Mobile Pedestal     Quote          0
1908-S   LONDON Swivel Chair, blue  Quote          0
1920-S   ANTWERP Conference Table   Order          8
1920-S   ANTWERP Conference Table   Invoice        18

Version Three

Remember that we only wanted the sum of the base quantity for invoice lines. We’ve got a result for 1920-S order lines at the moment. That’s fine – we can use the DataItemTableFilter property to filter on Document Type.

At least, you’d think so. So would I…and we’d both be wrong. Adding DataItemTableFilter = "Document Type" = const(Invoice) to the Sales Line dataitem gives these results:

No.      Description                Document Type  Quantity
1896-S   ATHENS Desk                Invoice        1
1920-S   ANTWERP Conference Table   Invoice        18
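(For reference, version three is just version two with that one property added to the Sales Line dataitem:)

dataitem(Sales_Line; "Sales Line")
{
    SqlJoinType = LeftOuterJoin;
    DataItemLink = "No." = Item."No.";
    DataItemTableFilter = "Document Type" = const(Invoice);

    column(Document_Type; "Document Type") {}
    column(Quantity; "Quantity (Base)")
    {
        Method = Sum;
    }
}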

Erm…where are the rest of my rows?!

Q: what has happened to items 1900-S, 1906-S and 1908-S?
A: there are no matching sales lines for those items

Q: but…that’s why we used a LeftOuterJoin. That should include items with no matching sales lines. I thought that was the point of specifying the join type?
A: yes, except DataItemTableFilter isn’t used as part of the join

Q: …eh?

Explanation

I expected, and maybe you did too, that DataItemTableFilter would be used to filter the Sales Line table before joining it to the Item table. It turns out that the join is processed first, respecting the DataItemLink properties, and the DataItemTableFilter property is used to filter the joined results afterwards.

In SQL terms the filters go into the HAVING clause and not the ON clause. We might have expected something like this:

SELECT Item.No_,
Item.Description,
SalesLine.[Document Type],
SUM(SalesLine.[Quantity (Base)]) AS Quantity
FROM [CRONUS International Ltd_$Item] AS Item
LEFT JOIN [CRONUS International Ltd_$Sales Line] AS SalesLine
ON SalesLine.No_ = Item.No_
AND SalesLine.[Document Type] = 2
GROUP BY Item.No_, Item.Description, SalesLine.[Document Type]

with SalesLine.[Document Type] = 2 forming part of the ON clause (the definition of the join between the tables). What you actually get is something like this:

SELECT Item.No_,
Item.Description,
SalesLine.[Document Type],
SUM(SalesLine.[Quantity (Base)]) AS Quantity
FROM [CRONUS International Ltd_$Item] AS Item
LEFT JOIN [CRONUS International Ltd_$Sales Line] AS SalesLine
ON SalesLine.No_ = Item.No_
GROUP BY Item.No_, Item.Description, SalesLine.[Document Type]
HAVING SalesLine.[Document Type] = 2

with a HAVING clause at the end which restricts the results after the tables have been joined. (The actual SQL queries you’ll see if you run SQL Server Profiler will be different – stuffed full of parameters and ISNULLs – but this is the general idea).

Conclusion

That was a long way of saying: be careful how you use the DataItemTableFilter property – it might not do what you’re expecting. So how can you define an ON clause where the filter is a constant value, not a field in another table? I don’t know.

As far as I can see, DataItemLink only allows you to define joins between fields in tables, so you’d need to engineer the data so that all of your joins are between fields and not constant values. I’d like to be wrong but, if I’m not, this is a pretty big flaw in queries.

It’d be nice to be able to add constant values into table joins for this kind of thing. While we’re wishing, it would be even better to be able to dynamically define queries at run-time and build and execute them on the fly. It seems I’m not the only one with a query wishlist: https://experience.dynamics.com/ideas/search-ideas/?q=queries&forum=e288ef32-82ed-e611-8101-5065f38b21f1

Working with Translations in Dynamics 365 Business Central

Intro

Languages: what an almighty headache. Computerphile have a great video that describes just how big the problem is: https://www.youtube.com/watch?v=0j74jcxSunY

Perhaps my perception is skewed by my ignorant native-English-speaker point of view. I haven’t grown up in a country where learning multiple languages and being able to switch between them is essential. Sure, I wish* I could speak more languages but mostly I can get by assuming other people will speak English.

*wish as in “finding a magic lamp” not as in “actually having the patience to put in the effort required to learn and practise”

However, if you are publishing apps into AppSource, or any other setting where you plan on supporting different countries, you are going to need to deal with translations at some point.

Overview

Visual Studio Code will create a .xlf (xliff) file containing all the literal strings that you have used in your application. They will mostly come from Caption, Tooltip and Label properties. This file will, therefore, contain your English (US) captions – assuming that you are coding in US English.

We need additional .xlf files for each translation that we want to support. The files should be named in the format [language]-[COUNTRY].xlf, e.g. en-GB.xlf, fr-FR.xlf, de-DE.xlf
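For reference, each .xlf file is plain XML with one translation unit per string – roughly like this (the id, captions and target below are illustrative; the real ids are generated by the compiler):

<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file datatype="xml" source-language="en-US" target-language="fr-FR" original="Hello World">
    <body>
      <group id="body">
        <trans-unit id="Page 2524429 - Property 2879900210" translate="yes" xml:space="preserve">
          <source>Hello World</source>
          <target>Bonjour le monde</target>
        </trans-unit>
      </group>
    </body>
  </file>
</xliff>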

Although xliff is a standard for software translations, after scouring the internet for hours I couldn’t find a tool that did what I wanted. It seems I’m not alone, judging by the number of people who have started to write their own tooling (I haven’t tried their solutions myself, I’m just aware of them). So what do I want? At least the following:

  • Somewhere to maintain a list of languages and countries that my app needs to support
  • Creation of new .xlf files for each language/country combination
  • Keeping the translation units (the strings present in the app that need translating) in sync between the master .xlf file and each translation file
  • Support for submitting strings to machine translation and feeding the results back into each translation file

See also https://community.dynamics.com/business/b/businesscentraldevitpro/posts/translate-your-extension-automatically-with-azure-translator-text which describes a similar approach to ours, implemented as a Visual Studio Code extension.

My preference is to build some support for translations into our PowerShell module. The main reason is so that we can use the functions in our build process.

Translations to Maintain

I’ve written before about us having an environment.json file which holds settings about the repository for use in the build. This seemed like a sensible place to also keep our list of translations. It looks like this:

{
  "translationSource": ".\\Translations\\Hello World.g.xlf",
  "translations": [
    {"country": "FR", "language": "fr"},
    {"country": "BE", "language": "nl"},
    {"country": "DE", "language": "de"},
    {"country": "GB", "language": "en"}
  ]
}

The translationSource key holds the path to the main .xlf file that is updated by the compiler and translations is an array of country/language pairs that are required.

Translator Text in Azure Cognitive Services

You’ve got some choice when it comes to online translation services. We use the Translator Text service that is part of Azure Cognitive Services. We’re already using a bunch of Azure services so it makes sense to keep them in one place. It has a REST API that we can call with the strings to translate and the language to translate them into. Perfect. But, first we’ll need an API key to authenticate with the service.

  • Log in to https://portal.azure.com to manage your Azure resources
  • Use the search bar to find “Cognitive Services”
  • Click Add
  • In the Marketplace search for “Translator Text” and click Create
  • Give the service a name
  • Select an Azure subscription to link it to – you can either grab a free trial or create a paid subscription. I’ve created a Pay-As-You-Go subscription. You need credit card details but we’re going to use the free pricing tier for now anyway
  • Select a pricing tier (see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/translator-text-api/) or just select F0 for the free tier
  • Select or create a new resource group to hold your new service
  • Open your new resource from the list of Cognitive Services and click on Keys (left hand navigation menu)
  • You can now use either of the two keys that you’ve got to call the service
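To check that a key works you can call the service directly. A minimal sketch from PowerShell (the key and sample string are placeholders; depending on how the resource was created you may also need to send an Ocp-Apim-Subscription-Region header):

$key = 'one of the two keys from the portal'
#the body is a JSON array of objects with a Text property
$body = '[{"Text": "Posted Sales Invoice"}]'
$result = Invoke-RestMethod `
    -Uri 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=fr' `
    -Method Post `
    -ContentType 'application/json' `
    -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
    -Body $body
$result[0].translations[0].text #the translated text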

PowerShell

We’ve got two key functions in our PowerShell module:

  • Translate-App – this is the entry point to call other functions which:
    • Find the source .xlf file and the environment.json file
    • Create any new .xlf files that are required (by copying the source file and changing the target language)
    • Synchronise the translation units between the source and the target files – add any new strings that require translation and remove any strings that are no longer present in the source file
    • Identify strings that require translation and call the Translator Text service to translate them into the target language
    • Populate the target .xlf file with the translated strings
  • Test-TranslationIsComplete – we use this as part of our build process to verify that
    • All of the required translation files exist
    • Each of those files has all the translation units that are present in the source .xlf file
    • It will throw an error if either of those things is false, otherwise it will return true

This is the code (hosted here if you can’t see it: https://gist.github.com/jimmymcp/41bd8d3ac3fd6aa742089029fcd990fb)

A few notes about it:

  • I’ve just lifted it from the PowerShell module so it won’t work as is
    • You’ll need to remove the Export-ModuleMember lines
    • Line 173 in Translate-App.ps1 makes a call to a function I haven’t given you to read the API key for the Translator Text service. The module creates a json config file with keys for various settings and this is one of them
  • The free tier of the Translator Text service is throttled. You’ll probably hit the limit if you’ve got more than a few hundred strings to translate into several languages – you just need to wait for a few minutes and run the function again (or choose a paid tier)

Of course, being an English-only speaker I don’t have any way of checking how good these translations are but at least it gives a starting point for a human to verify.