Tip: List-Commits

function List-Commits {
  cd 'C:\Git'
  $Commits = @()
  Get-ChildItem . -Directory | % {
    cd "$_"
    if (Test-Path (Join-Path (Get-Location) '.git')) {
      $Commits += git log --all --format="$($_)~%h~%ai~%s~%an" | ConvertFrom-Csv -Delimiter '~' -Header ('Project,Hash,Date,Message,Author'.Split(','))
    }
    cd ..
  }
  $Commits | ? Author -EQ "$(git config --get user.name)" | sort Date -Descending | Out-GridView -Title 'Commits'
}

This function iterates through Git repositories under the same parent folder (C:\Git in my case), builds a list of all the commits that you’ve authored (i.e. that match your user.name in Git config) and displays them in descending date order in a grid view.

Change the path in the second line to suit, or just remove it to have it search for repositories under the current directory.

Sample output in a PowerShell grid view

I use it to remind myself what I’ve been working on over the last few days. Mostly for fun and only occasionally because I’m late filling in my timesheets… 🙄

Part 3: (Slightly) More Elegant Error Handling in Business Central

Intro

This is a continuation of a series of posts started around a year ago – you can find the old posts here if you are interested.

Briefly, the goal is this. I’m posting a journal and there are several checks that I need to perform on each line before the journal is posted. Instead of stopping and showing the first error I want to validate all of the lines and show all of the errors to the user at once.

Previously

The previous posts have been about using the Error Message Management codeunit to achieve that. It works, but I found it clunky. You have to avoid throwing an actual error and instead collect the problems into a temporary table to display at the end. Only then is an actual error thrown, to prevent the journal from being posted.
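To give a flavour, the checking code ends up looking something like this. This is a minimal sketch from memory rather than the actual code from those posts; the record and field names are illustrative:

local procedure CheckLines(var VideoJournalLine: Record "Video Journal Line")
var
    ErrorMessageMgt: Codeunit "Error Message Management";
    ErrorMessageHandler: Codeunit "Error Message Handler";
begin
    // activate the framework so that problems are logged rather than thrown
    ErrorMessageMgt.Activate(ErrorMessageHandler);
    if VideoJournalLine.FindSet() then
        repeat
            // instead of VideoJournalLine.TestField("Posting Date")
            ErrorMessageMgt.LogTestField(VideoJournalLine, VideoJournalLine.FieldNo("Posting Date"));
        until VideoJournalLine.Next() = 0;
end;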

Problems

What are the problems with that approach?

First, not throwing an error is often harder than throwing one. Instead of simple calls to TestField or Error you need to make corresponding calls to ErrorMessageMgt.LogTestField and ErrorMessageMgt.LogError. In itself that’s OK, but what may be more of a problem is having to wrap the posting routine in an if Codeunit.Run() check:

if not Codeunit.Run(Codeunit::"Posting Routine") then
  ErrorMessageHandler.ShowErrors();

It depends on the context in which you are calling the posting routine. If you are calling it from a page action it’s probably fine. If you are calling it in the middle of some other business logic, not so much.

Second, testing the routine is a bit of a pain. If you want to write an automated test that asserts that an error is thrown when you post the journal line without a posting date, you can’t. At least not in the way that you expect.

asserterror JournalLine.Post();
Assert.ExpectedError('Posting Date must not be blank');

Something like this won’t work – because an actual TestField error is never thrown. That error message is collected in the temporary table. The only actual message that is thrown is a blank one by the Error Message Handler codeunit. So then you have to either:

  1. complicate your tests by collecting the error messages and asserting that they have the value you expect
  2. complicate your production code with an option to throw the errors directly rather than log them, e.g. by making sure that you don’t have an active instance of Error Message Management. It isn’t difficult, it just feels messy

Collectible Errors

From BC19 we have the concept of collectible errors. This is quite a different – and better, I think – approach to the same problem. Instead of an AL framework which we need to dance around to avoid calling Error, it is a platform feature which allows us to call Error but tell the platform that the error is collectible and that execution can continue.

The method in which the errors are thrown must indicate that it allows errors to be collected with a new attribute, ErrorBehavior.

[ErrorBehavior(ErrorBehavior::Collect)]
local procedure CheckLine(var JournalLine: Record "Journal Line")
begin
  Error(ErrorInfo.Create('Some error message', true));
end;

The Error method has a new overload which takes an ErrorInfo type instead of some text. The ErrorInfo type indicates whether it can be collected or not.

If both the ErrorInfo and the method in which it was thrown are set to allow collection then the error message will not be immediately shown to the user and the code will continue to execute.

Show me the code…

Changes to the Video Journal Batch record

You can view the full changes in this commit: https://github.com/jimmymcp/error-message-mgt/commit/5faa82e614c13017c687d2255305675df0049b29

This is the Post method which is called from the page. You can see the benefits immediately. We can get rid of all that nonsense activating the error message framework and calling if Codeunit.Run. Just call the posting routine and let it do its thing. Much easier to follow.
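Something along these lines – a sketch only; the actual method is in the commit linked above and the codeunit name here is an assumption:

procedure Post()
var
    PostVideoJournalBatch: Codeunit "Post Video Journal Batch"; // name is an assumption
begin
    // no error message framework to activate, no "if Codeunit.Run() then" wrapper -
    // just call the posting routine and let the platform collect any errors
    PostVideoJournalBatch.Run(Rec);
end;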

Next, in the codeunit that handles the batch posting we can get rid of the calls to Error Message Management. I’ve added a new CheckLines method and decorated it with the ErrorBehavior attribute. This tells the system that any collectible errors which occur within the scope of this method can be collected and code execution can continue.

The level at which we set the ErrorBehavior attribute is important. I want to continue to check all journal lines in the batch and then stop and show any errors which have been collected. That’s why the ErrorBehavior is set here – at the journal batch level – rather than at journal line level.

When the system finishes executing the code in this method it will automatically check whether any errors have been collected and show an error message if they have.
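The shape of it is roughly this – a sketch, with procedure and parameter names as assumptions:

[ErrorBehavior(ErrorBehavior::Collect)]
local procedure CheckLines(var VideoJournalLine: Record "Video Journal Line")
begin
    if VideoJournalLine.FindSet() then
        repeat
            // any collectible errors thrown while checking a line are collected...
            CheckLine(VideoJournalLine);
        until VideoJournalLine.Next() = 0;
    // ...and shown together once execution leaves this method
end;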

Finally, these are the changes to the codeunit which actually checks the journal line. Again, we can ditch the references to the Error Message Management codeunit and replace them with straightforward calls to Error or TestField.

Rather than passing some text with the error message we can pass an ErrorInfo type, returned by ErrorInfo.Create (the signature is shown below). At a minimum pass the error text, but we also want to indicate that this error can be collected via the collectible parameter. I’m including the instance of Video Journal Line and the field number where appropriate as well.

Great to see that TestField has an overload which accepts an ErrorInfo object. The system will fill in the usual error text for you: “<field caption> must have a value in <record id>”.

The other parameters are interesting, maybe more about those another time.

procedure Create(Message: Text, [Collectible: Boolean], [var Record: Table], [FieldNo: Integer], [PageNo: Integer], [ControlName: Text], [Verbosity: Verbosity], [DataClassification: DataClassification], [CustomDimensions: Dictionary of [Text, Text]]): ErrorInfo
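Putting that together, the line checks end up looking something like this. A sketch only – the field names on the Video Journal Line are assumptions:

local procedure CheckLine(VideoJournalLine: Record "Video Journal Line")
begin
    // collectible custom error, with the offending record and field number attached
    if VideoJournalLine."Video URL" = '' then
        Error(ErrorInfo.Create('Video URL must not be blank.', true, VideoJournalLine, VideoJournalLine.FieldNo("Video URL")));

    // TestField overload which takes an ErrorInfo - the platform builds the usual error text
    VideoJournalLine.TestField("Posting Date", ErrorInfo.Create('', true));
end;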

How does it look?

Put that all together and attempt to post a couple of journal lines which have some validation errors. How does it look?

You get an error dialog as usual. Only this time it says that “Multiple errors occurred during the operation” and gives you the text of the first error message. Click on Detailed information to see a list of all the errors that were collected.

This is what you get.

Kind of underwhelming right?

It was all going so well up to this point but I’ve got a few issues with this:

  1. Given that I’ve gone to the trouble of collecting multiple errors to show to the user all at once it seems counter-intuitive to make the user expand the error to see all the details
  2. Is it just me or is this not easy to read? Once an error message breaks over two lines it isn’t obvious how many errors there are. You can’t expand the dialog horizontally either. Even with relatively few errors I’ve had to scroll down to be able to read them all
  3. TestField errors include the record id, which is fine, but for the custom errors I’ve gone to the trouble of giving the record and field number that contains the problem…but that isn’t shown anywhere. I’ve only got 2 lines in my journal in this case, but if I had tens or hundreds it would be really difficult to match the validation errors to the lines that caused them

Custom Handling of Errors

There is a way that we can handle the UI of the error messages ourselves, which is great – and I’ll show an example of that next time. Kudos to the platform team for building that capability in from the start, it’s just a shame that it’s necessary. Call me picky, but I don’t think the standard dialog is really usable.

BC19 CU0

By the way, this doesn’t work properly in BC19 CU0. You have to set the target in app.json to OnPrem – which shouldn’t be necessary. That’s been fixed now.

Is GitHub Copilot Any Good for Business Central Development?

TL;DR

No. At least, not yet.

Maybe later.

But even then, maybe not.

Intro

For now this is necessarily a simple, first impressions post about GitHub Copilot. I’ve used it for a few weeks, tweeted enthusiastically after the first couple of days’ use and have now disabled it. What is it, how does it work, what’s good, what’s not so good?

What is GitHub Copilot?

Your AI pair programmer. With GitHub Copilot, get suggestions for whole lines or entire functions right inside your editor.

copilot.github.com

Copilot is a service from GitHub, accessed through a Visual Studio Code extension, which provides suggestions for code and comments as you are typing.

How Does it Work?

Image taken from https://copilot.github.com/

I won’t pretend to know much about the inner workings of the extension but this is how I understand it. GitHub clearly have an enormous amount of open source code under their control for all manner of programming languages. They have used that code to train an AI model.

The Copilot VS Code extension takes your existing code and comments, feeds them to the model and attempts to predict what you might want to type next. The best suggestion shows up faintly ahead of the cursor and you can press Tab to accept it. Alternatively you can load the top 10 suggestions and select the best one to insert into your code.

Some Examples

Below is an example where I have created a new procedure called ValidateEmailAddress. We’ll ignore the obvious issues with the suggestions for now and just show it as an example of how the extension works.

ValidateEmailAddress example

Given that I’ve called the new method ValidateEmailAddress the extension has generated (apparently generated and not just straight copied from an existing repo) the suggested code which I have accepted by pressing TAB each time.

Here’s another example. Let’s create a new CreateSalesOrder method and view the suggested solutions.

CreateSalesOrder example

A couple of interesting things to notice here. Most of the solutions attempt to do something with the Order Origin Code field. That’s the only other code in the file, so Copilot seems to be giving a lot of weight to that in the suggestions. Some of the suggestions use keywords from other languages: this, var and new.

More Context

Those are the sorts of examples that you’ll find in YouTube videos which are enthusiastic about Copilot. Enter a method name, type a short comment that describes what the code should do and Copilot automagically suggests an implementation for your method. Any examples that you are going to be impressed by are likely to be written in JavaScript, Ruby or Python, not AL. More about that later.

Perhaps an example where we provide more context is fairer. The below example is of writing an automated test. Notice that I already have a test to check that releasing a sales order without a certain field populated throws an error. I’m going to create another test to check the same behaviour for sales invoices.

Writing an automated test with Copilot

Copilot has a much easier time suggesting the code this time, following the pattern of the automated test above. Not only do all of the suggestions compile, but they are correct. Now we’re getting somewhere.

Yes, and no. I’m not knocking it – but all it is actually doing is copying the above test and replacing all instances of “order” with “invoice”. We can already do that in VS Code in a few keystrokes – much faster than messing around with Copilot.

That said, it does demonstrate that Copilot “learns” from the code that you’ve already written and can make repetitive work more efficient. I typically start an automated test with Init(); and finish with one or more calls to the Assert codeunit. Copilot quickly starts suggesting those lines, even with appropriate comments in calls to Assert.AreEqual(). Hence my initial enthusiasm and tweet.

Problems

What’s the problem then? Why does Copilot keep suggesting code that doesn’t compile? The biggest problem is that Copilot just hasn’t seen enough AL to know what it should look like. The suggestions do improve if you give it more context and write blocks of code which are similar to something you’ve written before, but ultimately the model hasn’t been trained on enough AL code.

But wait – isn’t the whole of the base app for the past few versions on GitHub now? That’s thousands of AL files. Yes, it is – but that is a few H2O molecules in a drop in GitHub’s ocean. At the time of writing there are fewer than 100 GitHub repos containing AL code and approximately 31K AL files (see here).

Sounds like a lot of files…until you compare it to other languages.

Language       Approx. No. of Files
AL             31 thousand
Python         100+ million
Java           200+ million
PHP            600+ million
C              1.2+ billion
JavaScript     1.4+ billion
https://github.com/search?q=language%3Ajavascript&type=code

If you’re trying to solve a problem in JavaScript it is pretty certain that someone, somewhere writing open source code has solved it before. Copilot ought to be able to generate a range of sensible solutions. Whether you want those suggestions is another matter.

Pair Programming

GitHub describe Copilot as “Your AI pair programmer.” That’s nice. As long as you like pair programming with someone in your ear the whole time suggesting the next line or two that you should write – before you have time to think for yourself.

I think that’s been my biggest issue with it. When I speak to someone else about what I’m working on I usually want to talk bigger picture. Do I understand the requirements? How does this affect some existing functionality? How does this integrate with other apps that we are writing?

When I write a method declaration I want to give some thought to how I’m going to approach the implementation. When Copilot pops up with some suggested implementation invariably I get distracted reading it rather than thinking about what code to write myself. Even if they were good suggestions I’m not sure that I’d like that. The fact that most of the suggestions use syntax from another language and don’t even compile makes it even worse.

Conclusion

Copilot is an interesting concept and I’ve enjoyed playing with it. There have definitely been some moments when I’ve been very impressed with the suggestions that it has made. There have been times when it has saved me time writing some repetitive code, or code which changes predictably between lines. You could compare it to auto-fill in Excel: give it a few examples to demonstrate how the value changes each row and then auto-fill as many rows as you need following the same pattern.

However, I pretty quickly found the annoyances outweighed the benefits. The novelty wore off and I realised I was less productive with Copilot than without it. Maybe the suggestions will improve over time. Then again, maybe AL is just too niche and we can’t expect it ever to work as well as it does with JavaScript.

Presumably at some point Microsoft are going to monetise this, but I can’t see AL developers paying for it. For now the extension is going to remain installed, but disabled.

Dmitry has submitted a session at DynamicsCon about Copilot. If you found this interesting you should check it out. Maybe he’ll be more enthusiastic than me.

Test Explorer in Visual Studio Code

The July 2021 release of Visual Studio Code (1.59) introduced a new testing API and Test Explorer UI. From v0.6.0 this API is used by AL Test Runner.

Test Explorer Demo

Improvements

UI

The biggest improvement is the Test Explorer view which shows your test codeunits, their test methods and the status of each.

Hovering over a test gives you three icons to run, debug or open an editor at the test.

You can run and debug all the tests in a given codeunit by hovering over the codeunit name or run and debug all tests at the top.

The filter box allows you to easily find specific tests, which I’ve found useful in projects with several test codeunits and hundreds of tests.

You can also filter to only show failed tests or only tests which are present in the codeunit in the current editor. The explorer supports different ways of sorting and displaying the tests.

Icons are added into the gutter alongside test methods in the editor. Left click to run the test or right click to see a context menu with more options.

The old “Run Test” and “Debug Test” codelens actions are also still added above the test definition.

Commands & Shortcuts

A whole set of new commands is introduced with keyboard chords beginning with Ctrl + ;. The existing AL Test Runner keyboard shortcuts still work, but there are some nice options in the new set – like “Test: Rerun Last Run” to repeat the last run test without having to navigate to it again.

Using the Test Explorer

Using the Test Explorer is pretty self-explanatory if you’ve already been using AL Test Runner. When you open your workspace/folder the tests should be automatically discovered and loaded into the Test Explorer view. On first opening, all of the tests will have no status, i.e. neither pass nor fail – but results from now on will be persisted.

Running one or more tests – regardless of where you run them from (Test Explorer, Command Palette, CodeLens, Keyboard Shortcut) – will start a test run. You’ll see “Running tests…” in the Status Bar.

Once the test(s) have finished running you’ll see the results at the top of the Test Explorer, “x / y tests passed (z %)”, and the status icons by each test will be updated.

If the tests do not actually run, e.g. because your container isn’t started, then the test run will not finish and “Running tests” will continue to spin at the bottom of the screen. You can stop the run manually from the top of the Test Explorer, fix the problem and go again.

Using Code Coverage in Business Central Development

Intro

Sample code coverage summary

In the latest version of AL Test Runner I’ve added an overall code coverage percentage and totals for the number of lines hit and the total number of lines. I hesitated over whether to add this in previous versions. Let me explain why.

Measuring Code Coverage

First, what these stats actually are. From right to left:

Code Coverage 79% (501/636)
  1. The total number of code lines in objects which were hit by the tests
  2. The total number of lines hit by the tests
  3. The percentage of the code lines hit in objects which were hit at least once

Notice that the stats only include objects which were hit by your tests. You might have a codeunit with thousands of lines of code, but if it isn’t hit at all by your tests it won’t count in the figures. That’s just how the code coverage stats come back from Business Central. Take a look at the file that is downloaded from the test runner if you’re interested (by default it’s saved as codecoverage.json in the .altestrunner folder).

It is important to bear this in mind when you are looking at the headline code coverage figure. If you have hundreds of objects and your tests only hit the code in one of them, but all of the code in that object, the code coverage % will be a misleading 100%. (If you don’t like that you’ll have to take it up with Microsoft, not me.)

What Code Coverage Isn’t Good For

OK, but assuming that my tests hit at least some of the code in the most important objects, then the overall percentage should be more or less accurate, right? In which case we should be able to get an idea of how good the tests are for this project? No.

Code Coverage ≠ Test Quality

The fact that one or more tests hits a block of code does not tell you anything about how good those tests are. The tests could be completely meaningless and the code coverage % alone would not tell you. For example:

procedure CalcStandardDeviation(Values: List of [Decimal]): Decimal
var
    Value, Sum, Mean, SumOfVariance : Decimal;
begin
    foreach Value in Values do
        Sum += Value;
    Mean := Sum / Values.Count();
    foreach Value in Values do
        SumOfVariance += Power((Value - Mean), 2);
    exit(SumOfVariance / Values.Count());
end;

[Test]
procedure TestCalcStandardDeviation()
var
    Values: List of [Decimal];
begin
    Values.Add(1);
    Values.Add(3);
    Values.Add(8);
    Values.Add(12);

    CalcStandardDeviation(Values);
end;

Code coverage? 100% ✅

Does the code work? No ❌ The calculation of the standard deviation is wrong. It is a pointless test: it executes the code but doesn’t verify the result, and so doesn’t identify the problem. (In case you’re wondering, the method returns the variance – the result should be the square root of SumOfVariance / Values.Count().)
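For contrast, a test which actually verifies the result would catch the problem straight away. A sketch, assuming the Assert codeunit (with its AreNearlyEqual helper) is available in the test codeunit as usual:

[Test]
procedure TestCalcStandardDeviationResult()
var
    Values: List of [Decimal];
begin
    Values.Add(1);
    Values.Add(3);
    Values.Add(8);
    Values.Add(12);

    // mean = 6, sum of squared differences = 74, variance = 18.5
    // standard deviation = sqrt(18.5), roughly 4.30116
    Assert.AreNearlyEqual(4.30116, CalcStandardDeviation(Values), 0.001, 'standard deviation is incorrect');
end;

Code coverage is exactly the same as before, but this test fails against the code above – which is the whole point.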

Setting a Target for Code Coverage

What target should we set for code coverage in our projects? Don’t.

Why not? There are a couple of good reasons.

  1. There is likely to be some code in your project that you don’t want to test
  2. You might inadvertently encourage some undesired behaviour from your developers

Why Wouldn’t You Test Some of Your Code?

Personally, I try to avoid testing any code on pages. Tests which rely on test page objects take significantly longer to run and can’t be debugged with AL Test Runner, and I try to minimise the code that I write in pages anyway. Usually I don’t test any of:

  • Code in action triggers
  • Lookup, Drilldown, AssistEdit or page field validation triggers
  • OnOpen, OnClose, OnAfterGetRecord
  • …you get the idea, any of the code on a page

You might also choose not to test code that calls a 3rd party service. You don’t want your tests to become dependent on some other service being available; it is likely to slow the tests down and you might end up paying for consumption of the service.

I would test the code that handles the response from the 3rd party but not the code that actually calls it e.g. not the code that sends the HTTP request or writes to a file.
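In practice that just means keeping the call and the handling in separate methods, something like this – a sketch, where the names and URL are illustrative:

procedure GetExchangeRates()
var
    Client: HttpClient;
    Response: HttpResponseMessage;
    Body: Text;
begin
    // not covered by tests - this is the part that actually calls the service
    Client.Get('https://api.example.com/rates', Response);
    Response.Content().ReadAs(Body);
    HandleRatesResponse(Body);
end;

procedure HandleRatesResponse(Body: Text)
begin
    // covered by tests - can be called directly with sample response text
    // parse the response and update the exchange rates here
end;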

Triggers in Install or Upgrade codeunits will not be tested. You can test the code that is called from those triggers, but not the triggers themselves.

Developing to a Target

When a measure becomes a target, it ceases to be a good measure.

Marilyn Strathern

If we already know that we have some code that we will not write tests for then it doesn’t make a lot of sense to set a hard target of 100%. But, what other number can you pick? Imagine two apps:

  1. An app that is purely responsible for handling communication with some Azure Functions. Perhaps the majority of the code in that app is working with HTTP clients, headers and responses. It might not be practical to achieve code coverage of more than 50%
  2. An app that implements a new sales price mechanism. It is pure AL code and the code is almost entirely in codeunits. It might be perfectly reasonable to expect code coverage of 95%

It doesn’t make sense to have a headline target for the developers to work to on both projects. Let’s say we’ve agreed as a team that we must have code coverage of at least 75%. We might incentivise developers on the first project to write some nonsense tests just to artificially boost the code coverage.

Meanwhile on the second project some developers might feel safe skipping writing tests for some important new code because the code coverage is already at 80%.

Neither of these scenarios is great, but, in fairness, the developers are doing what we’ve asked them to.

What is Code Coverage Good For?

So what is code coverage good for? It helps to identify objects that have a lot of lines which aren’t hit by tests. That’s why the output is split by object and includes the path to the source file. You can jump to the source file with Alt+Click.

Highlight the lines which were hit by the previous test run with the Toggle Code Coverage command. That way you can make an informed opinion about whether you ought to write some more tests for this part of the code or whether it is fine as it is.

50% code coverage might be fine when 1 out of 2 lines has been hit. It might not be fine when 360 out of 720 lines have been hit – but that’s for you to decide.

Further Reading

https://martinfowler.com/bliki/TestCoverage.html