Since the introduction of pages, and the deprecation of forms, we have had a fixed set of page types that we can create and a fixed set of controls that we can use on those pages. In the main, that’s a good thing. An appropriate control (text entry, date picker, checkbox etc.) is automatically used depending on the data type of the field, developers need to spend less time on the UI and the page can be automatically adapted to the web client, tablet and phone clients.
If we need finer control over how the page is laid out, or want functionality that isn’t supported by the standard controls, e.g. drag and drop, then we can create a control add-in and use that in a usercontrol on the page instead.
This post isn’t an intro to creating custom control add-ins. There are already good posts out there and I don’t have loads of experience with them anyway.
There is a middle option to consider which might suit simple requirements: we can use the built-in control add-ins, including the WebPageViewer.
Simply add a “Microsoft.Dynamics.Nav.Client.WebPageViewer” to the page. Every time I use it Microsoft have added some other capabilities to it – but the methods that we are interested in for now are Navigate and SetContent.
Pretty self-explanatory: Navigate allows you pass a URL that you want the viewer to navigate to. SetContent allows you to set some HTML content that you want to render in the viewer. I’m using this as a way to display a lot of read-only XML like this:
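As a sketch (the page, object number and procedure names are mine, and I’m quoting the add-in’s triggers and methods from memory, so check the exact overloads in your version), displaying content in the viewer looks something like this:

```al
page 50100 "Content Viewer"
{
    PageType = Card;

    layout
    {
        area(Content)
        {
            usercontrol(Viewer; "Microsoft.Dynamics.Nav.Client.WebPageViewer")
            {
                ApplicationArea = All;

                //fires once the add-in has loaded and is ready to receive content
                trigger ControlAddInReady(callbackUrl: Text)
                begin
                    //wrap the content in <pre> to render it as plain text;
                    //real XML needs its angle brackets HTML-escaped first
                    CurrPage.Viewer.SetContent('<pre>' + ContentToShow + '</pre>');
                end;
            }
        }
    }

    var
        ContentToShow: Text;

    procedure SetContentToShow(NewContent: Text)
    begin
        ContentToShow := NewContent;
    end;
}
```

Navigate works the same way from the page’s perspective – call it from a trigger or action with the URL you want the viewer to open.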
v0.5.0 of AL Test Runner adds some capability to measure code coverage per object and highlight the lines of code that were hit in the previous test run.
This is an example of what we’re aiming for: running one or more tests, seeing the list of objects with the percentage of their code lines that were hit, opening those objects and highlighting those lines. This should help identify objects and code paths that aren’t covered by any tests. I don’t believe code coverage should be a target in itself (maybe more on that in a separate post) but it can be a useful tool to see where you might want to bolster your test suite.
Code coverage is started and stopped around each run of one or more tests. The Test Runner Service app is called to download the code coverage details, which are saved to a JSON file. This file is read and summarised into a per-object percentage which is output with the test results. Only objects which are defined in the workspace are included – so you won’t see standard objects, but you will see test objects.
The path to each object is included so you can Alt+Click to navigate to it. A new Toggle Code Coverage command (Ctrl+Alt+C) allows you to switch the highlighting for lines which have been hit on and off.
Install the Test Runner Service app with the command in Visual Studio Code. If it is already installed you will need to uninstall and unpublish the existing version first
In the AL Test Runner config.json file:
Set the path to save the code coverage JSON file to in the codeCoveragePath key. This path is relative to the folder that contains your test code e.g. ./.altestrunner/codecoverage.json to save it within the .altestrunner folder
[Edit: this is now optional – see 0.5.3 update below] Select the path to the code coverage file relative to your app code, i.e. if you have your test extension in a separate top level folder you might set it to ../tests/.altestrunner/codecoverage.json. This allows AL Test Runner to find and display the code coverage details from an object in your app code
Use the Exclude Code Coverage Files setting to define file paths that should be excluded from the code coverage summary. This is a regex pattern which is matched against the file paths. For example, setting this to “Tests” will exclude any files with “Tests” in their path.
Test Folder Name – specify the name of the folder which contains the test app. Previously if you worked in a multi-root workspace and had an editor open at the production app it would attempt to run tests in the production app, create a config file, ask you which company to test in, prompt for credentials…meh. With this setting AL Test Runner will always run tests in the app which is contained in the folder with the name given in this setting.
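Putting the config.json piece together, the codeCoveragePath key might end up looking like this (the path is just the example from above; your folder layout may differ):

```json
{
    "codeCoveragePath": "./.altestrunner/codecoverage.json"
}
```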
Some of the early feedback from people who were trying to enable code coverage was that it was a bit of a game. And not in a good, fun way. More like Ludo. You’re trying to line up all your pieces but every time you think you’ve got them where you want them someone else lands on them and messes everything up.
From 0.5.3 it isn’t necessary to set the code coverage path in VS Code’s settings (see setup #3 above). If this path is not set then the extension will attempt to find the codecoverage.json file somewhere in the workspace.
The codeCoveragePath key in the AL Test Runner config file is still required, but has a default value which will be fine in most cases.
Ideas and feedback welcome as issues (or better yet, pull requests) in the repo on GitHub. These are some that I might work on.
Maybe a setting to enter a glob pattern to include/exclude certain files and/or paths from the summary
Smoother setup of the different settings that are required – I’ve tried to provide sensible default values but there are a few things to enter correctly to get it working
Currently the code coverage file is overwritten with each test run – it would be more useful if this was cumulative so that running a single test didn’t overwrite all the results from a previous test run. If you want a complete overview you have to run the whole suite – but then maybe that isn’t a bad thing
Perhaps an overall code coverage percentage for the whole project as well as per app (with the above caveat that I’m sceptical about fixed code coverage targets)
A CodeLens at the top of an object with its percentage code coverage that toggles the highlighting on/off when clicked
I’ve been pretty quiet on the blogging front for the last few months – settling in to a new team and a new role.
For the first post of 2021 what better topic than Item Tracking? Let’s be honest, the Item Tracking Lines page is a mess. Over 3,000 lines of code, tonnes of global variables and more than 60 methods. I don’t think the fundamental design of the page has changed much since it was introduced – it’s a bullet that no one wants to bite.
To be fair, making radical changes to the design would be very disruptive to the base application and, I guess, to lots of solutions and bespoke development that have been written on top of it. So instead the page now has a layer of 44 (yes, forty-four) events on top of it – not including the built-in page events. Now we are going to add another layer of code on top and try not to knock the whole Jenga tower down in the process.
I’m sorry, that’s harsh. The point of this post isn’t really to criticise the design of item tracking – it is what it is – but to share a few things I’ve found trying to extend it.
Item Tracking Lines makes extensive use of global variables which can be a problem when you are trying to change the behaviour of the page. Some of the variables are shared in the protected var section.
As these variables are protected (rather than local) they can be accessed by a page extension for the Item Tracking Lines page. You can just refer directly to the variable and use it in your page extension code.
You’ll notice that all the variables are xyzVisible and xyzEditable which control whether you can see and/or edit various controls on the page.
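As a sketch, a page extension can read one of those protected variables directly (the variable name below is made up for illustration; use the real names from the page’s protected var section):

```al
pageextension 50101 "Item Tracking Lines Ext." extends "Item Tracking Lines"
{
    trigger OnAfterGetCurrRecord()
    begin
        //"QtyToHandleEditable" is an illustrative name standing in for any
        //variable declared in the base page's protected var section
        if QtyToHandleEditable then
            Message('The Qty. to Handle column is editable');
    end;
}
```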
You can get access to some more of the important global variables on the page with the GetVariables method. There is a corresponding SetVariables method which allows you to override the Tracking Specification records to insert, modify or delete.
If you want to override the value of other global variables on the page then you will have to hunt through the list of events. As there are so many events on the page it’s likely that there is going to be one that allows you to handle the calculation of the variable that you are interested in.
Calling Page Methods
As noted above, a lot of business logic exists in the Item Tracking Lines page itself. Some important functions are in the Item Tracking Management and Item Tracking Data Collection codeunits but it is likely that you are going to need to call one or more of the methods on the page.
I wanted to avoid having all my code in the Item Tracking Lines page extension and instead split it out into one or more codeunits to handle my specific functionality.
OK, but if my code needs to call the methods on the page itself, how am I going to be able to do that from code inside a codeunit?
Passing the Current Instance of the Page to a Codeunit
Easy, I thought. I’ll just pass the current instance of the page as a parameter to my codeunit. Declare a parameter of type Page of the Item Tracking Line page, pass it by reference with the var keyword – Robert is your mother’s brother.
procedure FunkyItemTracking(var TrackingSpecification: Record "Tracking Specification" temporary; var ItemTrackingLines: Page "Item Tracking Lines")
begin
    //some funky code
end;
Erm…no. How do you get the current instance of a page? With CurrPage. CurrPage is an object of type CurrPage, not Page. Which gives the following compilation error: “Cannot convert from CurrPage “Item Tracking Lines” to var Page “Item Tracking Lines””.
There is another way. An event can IncludeSender, which passes the calling object, along with any other parameters, to a subscriber.
I don’t want anyone else to subscribe to this event and mess about with it. For that, we can use the InternalEvent attribute. From the docs: “Specifies that the method is published as an internal event. It can only be subscribed to from within the same module.”
You can create a new internal event on your Item Tracking Lines page extension, including the sender, and subscribe to it in a separate codeunit. That way you’ve got an instance of the page whose methods you can call while keeping your code separated. Have and eat your cake.
I’ve also manually bound my subscribing codeunit. That way I can have multiple subscribers in the codeunit and maintain some state in its variables.
//Item Tracking Lines page extension
var
    FunkyItemTracking: Codeunit "Funky Item Tracking";

[InternalEvent(true)]
local procedure OnCallingFunkyItemTracking(var TempTrackingSpecification: Record "Tracking Specification" temporary)
begin
end;

//Funky Item Tracking codeunit
local procedure FunkyItemTracking(var TempTrackingSpecification: Record "Tracking Specification" temporary; var ItemTrackingLines: Page "Item Tracking Lines")
begin
    //some funky item tracking customisation
end;

[EventSubscriber(ObjectType::Page, Page::"Item Tracking Lines", 'OnCallingFunkyItemTracking', '', false, false)]
local procedure OnCallingFunkyItemTracking(var Sender: Page "Item Tracking Lines"; var TempTrackingSpecification: Record "Tracking Specification" temporary)
begin
    FunkyItemTracking(TempTrackingSpecification, Sender);
end;
I’ve reused the same concept to simplify retrieving variable values. If you want to read the source quantity array, undefined quantity array, the Item record or the Tracking Specification records to insert, modify or delete, you’ll need to call the page’s GetVariables method.
That’s fine – but that method uses 10 var parameters to retrieve the variables. If you’re only trying to get at the value of one of those variables – or in my case, one element of one of the arrays – it’s just a bit messy to create 10 variables that you don’t even need.
//Item Tracking Lines page extension
//just define the variable that you are interested in, keeps your code easier to read
[InternalEvent(true)]
local procedure OnGetUndefinedQty(var UndefinedQty: Decimal)
begin
end;

//subscriber codeunit
[EventSubscriber(ObjectType::Page, Page::"Item Tracking Lines", 'OnGetUndefinedQty', '', false, false)]
local procedure OnGetUndefinedQty(var Sender: Page "Item Tracking Lines"; var UndefinedQty: Decimal)
var
    TempTrackingSpecInsert, TempTrackingSpecModify, TempTrackingSpecDelete : Record "Tracking Specification" temporary;
    Item: Record Item;
    UndefinedQtyArray: array [3] of Decimal; //dimensions must match the declarations on the page
    SourceQuantityArray: array [5] of Decimal;
    CurrentSignFactor: Integer;
    InsertIsBlocked, DeleteIsBlocked, BlockCommit : Boolean;
begin
    Sender.GetVariables(TempTrackingSpecInsert, TempTrackingSpecModify, TempTrackingSpecDelete, Item, UndefinedQtyArray, SourceQuantityArray, CurrentSignFactor, InsertIsBlocked, DeleteIsBlocked, BlockCommit);
    UndefinedQty := UndefinedQtyArray[1];
end;
There’s a lot more to say about the Item Tracking Lines page, but I’ll leave it there for now. Maybe somebody will find this interesting and/or useful.
One of the underrated advantages to doing a little blogging is that you can write about a subject you know a little about and have people who actually know what they are talking about reply to tell you a better way to do it.
There’s probably a minimum threshold of credibility on the subject you need in order for people to post serious replies. If I posted some nonsense about my keyhole surgery technique I doubt I’d get helpful corrections from the Royal College of Surgeons. It would also represent something of a departure from my usual DevOps, Git, testing and BC development posts.
Henrik Helgesen pointed out that you can skip all the codeunits, activation, context, finishing… and just use the Error Message table directly. It has a bunch of LogXYZ methods for recording errors or messages.
It has a method to determine if there are error messages to show and a method to show them. Nice and simple and, for the scenario that I outlined, probably more appropriate.
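As a sketch (using the journal record from my earlier posts, with a hard-coded message for brevity), working with the table directly looks something like this:

```al
procedure CheckLine(VideoJnlLine: Record "Video Journal Line")
var
    TempErrorMessage: Record "Error Message" temporary;
begin
    //log an error against a specific record and field
    if VideoJnlLine."Posting Date" = 0D then
        TempErrorMessage.LogMessage(VideoJnlLine, VideoJnlLine.FieldNo("Posting Date"),
          TempErrorMessage."Message Type"::Error, 'Posting Date must be set');

    //HasErrors and ShowErrorMessages wrap checking for and displaying the collected errors
    if TempErrorMessage.HasErrors(false) then
        TempErrorMessage.ShowErrorMessages(true);
end;
```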
In the last post I complained about the lack of TestField functionality in the Error Message Mgt. codeunit. Kilian replied to say that the method exists in BC17. I doubt that has anything to do with my post – but I’m happy to take some credit if required. It has the signature that you’d expect.
That makes the error handling code far less verbose and, crucially, we don’t have to provide a label or any translations for the error message – that’s handled by the framework.
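For example (the record and field are from my journal example, and I’m going from memory on the exact signature, so verify against BC17 before relying on it):

```al
var
    ErrorMessageMgt: Codeunit "Error Message Management";
begin
    //logs a 'must have a value' error only if the field is empty;
    //the error text and its translations come from the platform
    ErrorMessageMgt.LogTestField(VideoJnlLine, VideoJnlLine.FieldNo("Posting Date"));
end;
```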
Consistent Behaviour With Base App
How useful is it for partners to invest in this sort of error handling if the base app doesn’t use it? A consistent user experience might still be better than improved error handling that departs from standard paradigms. Fair point.
Although actually, it looks like more of this is coming to the base app. This is a snip of the results when searching for “LogTestField” in the base app in BC 17.1.17104.0-W1.
49 results in 2 files. OK, so probably still a long way to go to make this the default user experience but it’s a start. I’m hopeful that this is an area that Microsoft will pay some attention to over the next few versions, improve the framework and make it easier for us to follow their lead.
Part 2 of the series that I said that I was going to write has been a long time coming. If you don’t know what I’m talking about you might want to read the first post in the series here. Unfortunately it is possible that this series reflects the functionality that it is describing: full of early promise, but on closer inspection a little convoluted and disappointing. I’ll leave you to be the judge of that.
Unfortunately, I’ve just found the framework a little annoying to work with. Maybe I’m missing the correct way to use it but it seems like there are too many objects involved without providing the functions that I was expecting. Then again, if I am missing the best way to use it then that illustrates my point – it’s just not very friendly to work with. I’ll try to make some constructive suggestions as we go.
A quick reminder of what we’re trying to achieve here. I’ve got a journal page to record all the video calls for work and family that I’m having.* Before the journal is posted the lines are checked for lots of potential errors. Rather than presenting the user with one error at a time we are trying to batch them all together and present them in a list to be resolved all at once. I’m using the error message handling framework to do it.
*to be clear, I’m not using it. I’m sad…but not that sad**
Some basic principles to bear in mind when dealing with the Error Message Mgt. codeunit
You need to trap all the errors
The framework provides a way of collecting messages and displaying them to the user in a list page. It doesn’t fundamentally change how error handling in BC works. If you encounter an un-trapped error the code execution will stop, the transaction will be rolled back etc.
Obviously that includes TestField() and FieldError() calls, not just Error()
The Error Message framework must be activated before calling the code that you want to trap errors for
Call PushContext to set the current context for which you are handling errors
Call Finish to indicate that the previous context is complete
You need to determine whether there are any errors to display and then, if so, display them
This is where the posting of my journal batch begins. We need to activate the error handling framework and if an error is trapped in the posting codeunit then show the errors that have been collected.
There is a ShowErrors method in the Error Message Management codeunit, but it’s only for on-prem. Don’t ask, I don’t know. You need to use if Codeunit.Run (or a TryFunction I suppose – although don’t) to determine whether there are any errors to show. There is a HasErrors method in the Error Message Handler codeunit but that’s also only for on-prem. Still don’t ask.
var
    VideoCallBatchPost: Codeunit "Video Call Post Batch";
    ErrorMessageMgt: Codeunit "Error Message Management";
    ErrorMessageHandler: Codeunit "Error Message Handler";
begin
    ErrorMessageMgt.Activate(ErrorMessageHandler);
    if not VideoCallBatchPost.Run(Rec) then
        ErrorMessageHandler.ShowErrors();
end;
It would have been nice if there was a way to do with without declaring an extra two codeunits – but I don’t think there is.
Onto the next level on the callstack. Call PushContext with a record that gives the context within which the errors are being collected. Run the code that we want to collect errors from and then Finish. If any errors have been encountered the Finish method will throw an error with a blank error message to ensure that the transaction is rolled back to the last commit.
If Finish is called when GuiAllowed is false then SendTraceTag is called to “send a trace tag to the telemetry service”. Interesting.
local procedure PostBatch(VideoCallBatch: Record "Video Call Batch")
var
    VideoJnlLine: Record "Video Journal Line";
    ErrorMessageMgt: Codeunit "Error Message Management";
    ErrorContextElement: Codeunit "Error Context Element";
begin
    ErrorMessageMgt.PushContext(ErrorContextElement, VideoCallBatch, 0, '');
    VideoJnlLine.SetRange("Batch Name", VideoCallBatch.Name);
    if VideoJnlLine.FindSet() then
        repeat
            PostLine(VideoJnlLine); //check and post each journal line
        until VideoJnlLine.Next() = 0;
    ErrorMessageMgt.Finish(VideoCallBatch);
end;
Now into the journal line posting and all the checks that are performed on each line. I won’t copy out the entire function – it’d be a bit tedious and you can check the source code afterwards if you’re interested.
local procedure Check(var VideoJnlLine: Record "Video Journal Line")
var
    GLSetup: Record "General Ledger Setup";
    ErrorMessageMgt: Codeunit "Error Message Management";
begin
    if VideoJnlLine."No. of Participants" = 0 then
        ErrorMessageMgt.LogErrorMessage(VideoJnlLine.FieldNo("No. of Participants"), StrSubstNo('%1 must not be 0', VideoJnlLine.FieldCaption("No. of Participants")), VideoJnlLine, VideoJnlLine.FieldNo("No. of Participants"), '');
    if VideoJnlLine."Duration (mins)" = 0 then
        ErrorMessageMgt.LogErrorMessage(VideoJnlLine.FieldNo("Duration (mins)"), StrSubstNo('%1 must not be 0', VideoJnlLine.FieldCaption("Duration (mins)")), VideoJnlLine, VideoJnlLine.FieldNo("Duration (mins)"), '');
    if VideoJnlLine."Posting Date" = 0D then
        ErrorMessageMgt.LogErrorMessage(VideoJnlLine.FieldNo("Posting Date"), StrSubstNo('%1 must be set', VideoJnlLine.FieldCaption("Posting Date")), VideoJnlLine, VideoJnlLine.FieldNo("Posting Date"), '');
end;
This is where it really starts to get a bit messy. The TestField calls are gone, replaced with calls to LogErrorMessage. LogError and LogSimpleErrorMessage are alternatives with slightly different signatures. Pass in the field no., error message, record and “help article code” related to the error and they will be collected by the framework.
If any errors have been logged then the Finish function (see above) will throw an (untrapped) error and prevent the journal from actually being posted.
I really tried to enjoy working with this. I’d like to have better error handling in our apps – but I don’t think we’re going to get round to introducing this sort of thing any time soon. The parameters on this method are too clunky.
It requires a “context” field no. and a “source” field no. – I’m still not clear what the difference is
I have to provide the error message text. That’s a problem. With TestField I can leave the system to generate the correct error text, in whatever language the client is set to. This way I have to create a label (I didn’t in my example because I’m lazy) and then translate it into different languages
I don’t know what I’m supposed to provide for “help article code”
I was hoping for an ErrorMessageMgt.TestField method. Couldn’t I just pass in my record and the field no. that I’m testing? I want to leave the framework to determine if an error needs to be logged and, if so, the correct text
I’d love someone to tell me that I’ve missed how easy this framework is to work with and they’ve had a great time with it. It looked like it was going to be great but left me a bit flat. Like a roast potato that you’ve saved for your last mouthful at Sunday lunchtime only to discover it’s actually a parsnip.