One of the things that I found counter-intuitive when I was getting started with Git is that when branches are deleted on the server they are still present in your local repository, even after you have fetched from the server.
We typically delete the source branch when completing a pull request, so this happens a lot. Usually, once the PR has been completed I want to:
remove the reference to the deleted remote branch
remove the corresponding local branch
Removing Remote Reference
The reference to the remote branch is removed when you run git fetch with the prune switch.
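For example:

```powershell
# remove local references to remote branches which no longer exist on the server
git fetch --prune
```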
Removing Local Branches
Local branches can be removed with the git branch command. Adding -d first checks for unmerged commits and will not delete the branch if there are any commits which only exist in the branch being deleted. Adding -D overrides the check and deletes the branch anyway.
```powershell
git branch -d test
Deleted branch test (was <commit hash>)
```
I’ve added a couple of PowerShell functions to my profile file – which means they are always available in my terminal. If I’m working on an app and I know that some PRs have been merged I can clean up my workspace by running Remove-BranchesWithUpstreamGone in VS Code’s terminal.
As a rule, I don’t need to keep any branches which used to have a counterpart on the server, but don’t any more (indicated by [gone] in the list of branches). Obviously, local branches which have never been pushed to the server won’t be deleted.
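I won’t reproduce my exact functions here, but a minimal sketch of what Remove-BranchesWithUpstreamGone might look like is below – the implementation details are illustrative rather than exactly what I have in my profile:

```powershell
function Remove-BranchesWithUpstreamGone {
    # prune first, so that branches deleted on the server are marked [gone]
    git fetch --prune

    # find local branches whose upstream branch is gone and force-delete them
    git branch -vv |
        Where-Object { $_ -match '\[[^\]]+: gone\]' } |
        ForEach-Object { ($_ -replace '^[\s\*]+', '').Split(' ')[0] } |
        ForEach-Object { git branch -D $_ }
}
```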
For now, this is necessarily a simple, first-impressions post about GitHub Copilot. I’ve used it for a few weeks, tweeted enthusiastically in the first couple of days of use and have now disabled it. What is it, how does it work, what’s good, and what’s not so good?
What is GitHub Copilot?
Your AI pair programmer. With GitHub Copilot, get suggestions for whole lines or entire functions right inside your editor.
Copilot is a service from GitHub, accessed through a Visual Studio Code extension, which provides suggestions for code and comments as you are typing.
How Does it Work?
I won’t pretend to know much about the inner workings of the extension but this is how I understand it. GitHub clearly have an enormous amount of open source code under their control for all manner of programming languages. They have used that code to train an AI model.
The Copilot VS Code extension takes your existing code and comments, feeds them to the model and attempts to predict what you might want to type next. The best suggestion shows up faintly ahead of the cursor and you can press Tab to accept it. Alternatively you can load the top 10 suggestions and select the best one to insert into your code.
Below is an example where I have created a new procedure called ValidateEmailAddress. We’ll ignore the obvious issues with the suggestions for now and just use it as an example of how the extension works.
Given that I’ve called the new method ValidateEmailAddress, the extension has generated (apparently generated, and not just straight copied from an existing repo) the suggested code, which I have accepted by pressing Tab each time.
Here’s another example. Let’s create a new CreateSalesOrder method and view the suggested solutions.
A couple of interesting things to notice here. Most of the solutions attempt to do something with the Order Origin Code field. That’s the only other code in the file, so Copilot seems to be giving a lot of weight to that in the suggestions. Some of the suggestions use keywords from other languages: this, var and new.
Perhaps an example where we provide more context is fairer. The below example is of writing an automated test. Notice that I already have a test to check that releasing a sales order without a certain field populated throws an error. I’m going to create another test to check the same behaviour for sales invoices.
Copilot does a much better job of suggesting the code this time, following the pattern of the automated test above. Not only do all of the suggestions compile, but they are correct. Now we’re getting somewhere.
Well, yes and no. I’m not knocking it – but all it is actually doing is copying the test above and replacing all instances of “order” with “invoice”. We can already do that in VS Code in a few keystrokes – much faster than messing around with Copilot.
That said, it does demonstrate that Copilot “learns” from the code that you’ve already written and can make repetitive work more efficient. I almost always start an automated test with Init(); and finish with one or more calls to the Assert codeunit. Copilot quickly starts suggesting those lines, even with appropriate comments in calls to Assert.AreEqual(). Hence my initial enthusiasm and tweet.
What’s the problem then? Why does Copilot keep suggesting code that doesn’t compile? The biggest problem is that Copilot just hasn’t seen enough AL to know what it should look like. The suggestions do improve if you give it more context and write blocks of code which are similar to something you’ve written before, but ultimately the model hasn’t been trained on enough AL code.
But wait – isn’t the whole of the base app for the past few versions on GitHub now? That’s thousands of AL files. Yes, it is – but that is a few H2O molecules in a drop in GitHub’s ocean. At the time of writing there are fewer than 100 GitHub repos containing AL code and approximately 31K AL files (see here).
Sounds like a lot of files…until you compare it to other languages.
GitHub describe Copilot as “Your AI pair programmer.” That’s nice. As long as you like pair programming with someone in your ear the whole time suggesting the next line or two that you should write – before you have time to think for yourself.
I think that’s been my biggest issue with it. When I speak to someone else about what I’m working on I usually want to talk bigger picture. Do I understand the requirements? How does this affect some existing functionality? How does this integrate with other apps that we are writing?
When I write a method declaration I want to give some thought to how I’m going to approach the implementation. When Copilot pops up with some suggested implementation invariably I get distracted reading it rather than thinking about what code to write myself. Even if they were good suggestions I’m not sure that I’d like that. The fact that most of the suggestions use syntax from another language and don’t even compile makes it even worse.
Copilot is an interesting concept and I’ve enjoyed playing with it. There have definitely been some moments when I’ve been very impressed with the suggestions that it has made. There have been times when it has saved me time writing repetitive code, or code which changes predictably between lines. You could compare it to auto-fill in Excel: give it a few examples to demonstrate how the value changes each row and then auto-fill as many rows as you need following the same pattern.
Presumably at some point Microsoft are going to monetise this, but I can’t see AL developers paying for it. For now the extension is going to remain installed, but disabled.
Dmitry has submitted a session at DynamicsCon about Copilot. If you found this interesting you should check it out. Maybe he’ll be more enthusiastic than me.
The July 2021 release of Visual Studio Code (1.59) introduced a new testing API and Test Explorer UI. From v0.6.0 this API is used by AL Test Runner.
The biggest improvement is the Test Explorer view which shows your test codeunits, their test methods and the status of each.
Hovering over a test gives you three icons to run, debug or open an editor at the test.
You can run and debug all the tests in a given codeunit by hovering over the codeunit name, or run and debug all tests from the top of the view.
The filter box allows you to easily find specific tests, which I’ve found useful in projects which have several test codeunits and hundreds of tests.
You can also filter to only show failed tests, or only the tests in the codeunit open in the current editor. The explorer supports different ways of sorting and displaying the tests.
Icons are added into the gutter alongside test methods in the editor. Left click to run the test or right click to see this context menu with more options.
The old “Run Test” and “Debug Test” CodeLens actions are still added above the test definition.
Commands & Shortcuts
A whole set of new commands is introduced, with keyboard chords beginning with Ctrl + ;. The existing AL Test Runner keyboard shortcuts still work, but there are some nice options in the new set – like “Test: Rerun Last Run” to repeat the last test run without having to navigate to the test again.
Using the Test Explorer
Using the Test Explorer is pretty self-explanatory if you’ve already been using AL Test Runner. When you open your workspace/folder the tests should be automatically discovered and loaded into the Test Explorer view. On first opening, all of the tests will have no status i.e. neither passed nor failed – but results will be persisted from now on.
Running one or more tests – regardless of where you run them from (Test Explorer, Command Palette, CodeLens, Keyboard Shortcut) – will start a test run. You’ll see “Running tests…” in the Status Bar.
Once the test(s) have finished running you’ll see the results at the top of the Test Explorer, “x / y tests passed (z %)”, and the status icons by each test will be updated.
If the tests do not actually run – e.g. because your container isn’t started – then the test run will not finish and “Running tests…” will continue to spin at the bottom of the screen. You can stop the run manually from the top of the Test Explorer, fix the problem and go again.
In the latest version of AL Test Runner I’ve added an overall code coverage percentage, along with totals for the number of lines hit and the total number of lines. I hesitated over whether to add this in previous versions. Let me explain why.
Measuring Code Coverage
First, what these stats actually are. From right to left:
Code Coverage 79% (501/636)
The total number of code lines in the objects which were hit by the tests (636)
The total number of those lines which were hit by the tests (501)
The percentage of code lines hit, in objects which were hit at least once (79%)
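So, in the example above, the tests hit 501 of the 636 lines in the objects which were hit at least once: 501 / 636 ≈ 79%.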
Notice that the stats only include objects which were hit by your tests. You might have a codeunit with thousands of lines of code, but if it isn’t hit at all by your tests it won’t count in the figures. That’s just how the code coverage stats come back from Business Central. Take a look at the file that is downloaded from the test runner if you’re interested (by default it’s saved as codecoverage.json in the .altestrunner folder).
It is important to bear this in mind when you are looking at the headline code coverage figure. If you have hundreds of objects and your tests only hit the code in one of them – but all of the code in that object – the code coverage % will be a misleading 100%. (If you don’t like that you’ll have to take it up with Microsoft, not me).
What Code Coverage Isn’t Good For
OK, but assuming that my tests hit at least some of the code in the most important objects, the overall percentage should be more or less accurate, right? In which case we should be able to get an idea of how good the tests are for this project? No.
Code Coverage ≠ Test Quality
The fact that one or more tests hits a block of code does not tell you anything about how good those tests are. The tests could be completely meaningless and the code coverage % alone would not tell you. For example:
```al
procedure CalcStandardDeviation(Values: List of [Decimal]): Decimal
var
    Value, Sum, Mean, SumOfVariance : Decimal;
begin
    foreach Value in Values do
        Sum += Value;
    Mean := Sum / Values.Count();
    foreach Value in Values do
        SumOfVariance += Power((Value - Mean), 2);
    exit(SumOfVariance / Values.Count());
end;

[Test]
procedure TestCalcStandardDeviation()
var
    Values: List of [Decimal];
begin
    Values.Add(1);
    Values.Add(2);
    Values.Add(3);
    // executes the code, but never verifies the result
    CalcStandardDeviation(Values);
end;
```
Code coverage? 100% ✅
Does the code work? No ❌ The calculation of the standard deviation is wrong. It is a pointless test: it executes the code but doesn’t verify the result, and so doesn’t identify the problem. (In case you’re wondering, the result should be the square root of SumOfVariance / Values.Count().)
Setting a Target for Code Coverage
What target should we set for code coverage in our projects? Don’t.
Why not? There are a couple of good reasons.
There is likely to be some code in your project that you don’t want to test
You might inadvertently encourage some undesired behaviour from your developers
Why Wouldn’t You Test Some of Your Code?
Personally, I try to avoid testing any code on pages. Tests which rely on test page objects take significantly longer to run, they can’t be debugged with AL Test Runner and I try to minimise the code that I write in pages anyway. Usually I don’t test any of:
Code in action triggers
Lookup, Drilldown, AssistEdit or page field validation triggers
OnOpen, OnClose, OnAfterGetRecord
…you get the idea, any of the code on a page
You might also choose not to test code that calls a 3rd party service. You don’t want your tests to become dependent on some other service being available, it is likely to slow the test down and you might end up paying for consumption of the service.
I would test the code that handles the response from the 3rd party but not the code that actually calls it e.g. not the code that sends the HTTP request or writes to a file.
Triggers in Install or Upgrade codeunits will not be tested. You can test the code that is called from those triggers, but not the triggers themselves.
If we already know that we have some code that we will not write tests for then it doesn’t make a lot of sense to set a hard target of 100%. But, what other number can you pick? Imagine two apps:
An app that is purely responsible for handling communication with some Azure Functions. Perhaps the majority of the code in that app is working with HTTP clients, headers and responses. It might not be practical to achieve code coverage of more than 50%
An app that implements a new sales price mechanism. It is pure AL code and the code is almost entirely in codeunits. It might be perfectly reasonable to expect code coverage of 95%
It doesn’t make sense to have a headline target for the developers to work to on both projects. Let’s say we’ve agreed as a team that we must have code coverage of at least 75%. We might incentivise developers on the first project to write some nonsense tests just to artificially boost the code coverage.
Meanwhile on the second project some developers might feel safe skipping writing tests for some important new code because the code coverage is already at 80%.
Neither of these scenarios is great, but, in fairness, the developers are doing what we’ve asked them to.
What is Code Coverage Good For?
So what is code coverage good for? It helps to identify objects that have a lot of lines which aren’t hit by tests. That’s why the output is split by object and includes the path to the source file. You can jump to the source file with Alt+Click.
You can highlight the lines which were hit by the previous test run with the Toggle Code Coverage command. That way you can form an informed opinion about whether you ought to write some more tests for this part of the code or whether it is fine as it is.
50% code coverage might be fine when 1 out of 2 lines has been hit. It might not be fine when 360 out of 720 lines have been hit – but that’s for you to decide.
“You cannot sign in due to a technical issue. Contact your system administrator.”
Terrific. This is in a local Docker container, so I am the system administrator. Give me a second while I contact myself…
…nope, myself didn’t know what the problem was either.
It could be that the license has expired, maybe there is something wrong with the tenant, the service tier hasn’t been able to start, who knows? You should probably start by looking for errors in the event log of the container.
Maybe I’m missing a trick and there is an easier way to do this, but I look through the event log with PowerShell. You can run this command inside the container:
```powershell
Get-EventLog Application -EntryType Error
```
That will return all the errors that have been logged in the Application log. Two problems though:
The list might be massive
You can’t see the full text of the messages
You can add the Newest parameter to specify the number of most recent messages that you want to return. Then you probably want to write the full text of the message so that you can actually read it.
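Something like this:

```powershell
# return just the most recent error and write out its full message text
Get-EventLog Application -EntryType Error -Newest 1 |
    ForEach-Object { Write-Host $_.Message }
```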
Cool – although you still need to open a PowerShell prompt inside the container to run those commands. It would be nice if we didn’t need to do that. We can use docker exec to run a command against a local container from the outside.
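For example (bc-container is just a placeholder name for the container):

```powershell
# run the same PowerShell from the host, inside a container called 'bc-container'
docker exec bc-container powershell -Command `
    'Get-EventLog Application -EntryType Error -Newest 1 | ForEach-Object { Write-Host $_.Message }'
```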
Now we’re getting somewhere. But of course, you don’t want to be typing all that each time. I’ve declared a PowerShell function in my profile file (run code $profile in a PowerShell prompt to open the profile file in VS Code).
Declaring it in the profile file means that the function will always be available in a PowerShell prompt. The container name parameter must be supplied, and optionally I can ask for more than just the latest error. The string of asterisks is just to indicate where one log message ends and the next begins.
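My actual function isn’t reproduced here, but a sketch might look something like this (the function name Get-ContainerEventLog and its parameter names are illustrative, not what the function must be called):

```powershell
function Get-ContainerEventLog {
    param(
        # name of the container to read the event log from (mandatory)
        [Parameter(Mandatory = $true)]
        [string]$ContainerName,
        # number of most recent errors to return (defaults to 1)
        [int]$Newest = 1
    )

    # separate each message with a line of asterisks so it is obvious
    # where one message ends and the next begins
    docker exec $ContainerName powershell -Command `
        "Get-EventLog Application -EntryType Error -Newest $Newest | ForEach-Object { Write-Host ('*' * 80); Write-Host `$_.Message }"
}
```

Called with e.g. Get-ContainerEventLog bc-container 3 to read the three most recent errors from a container called bc-container.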