Execute JavaScript with WebPageViewer for Business Central

TL;DR

The WebPageViewer add-in has an overload of SetContent that accepts JavaScript. You can use it to execute arbitrary script locally: WebPageViewer.SetContent(HTML: Text; JavaScript: Text);

JSON Formatting

This post starts with me wanting to format some JSON with line breaks for the user to read. It’s the response from an Azure Function which integrates with a local SQL server (interesting subject, maybe for another time). The result from SQL server is serialized into a (potentially very long) JSON string and this is the string that I want to present in a more human-readable format.

Sometimes I converge on a solution through a series of ideas, each of which is slightly less bad than the previous. This was one of those times. If you don’t care about the train of thought then the solution I settled on was to use the JavaScript parameter of the WebPageViewer’s SetContent method.

If you’re still here then here are the stations that the train of thought pulled into, starting with the worst.

Requirement

Have some control on my page for the user to view the JSON returned from the Azure Function, formatted with line breaks.

1. Format at Source

Why not just add the line breaks when I am serializing the results in the C# of my Azure Functions? That way I don’t need to change anything in AL.

No, that’s dumb. That would make every response from the function larger than it needs to be just for the rare occasions when a human might want to read it. Don’t do that.

2. Call an Azure Function to Format the Result

I could have a second Azure Function to accept the unformatted result and return the formatted version. I could have a Function App which runs Node.js and returns the result in a couple of lines of code.

Wait, that’s absurd. Call another Azure Function just to execute two lines of JavaScript? And store the Uri for that function somewhere? In a setup table? Hard-coded? In a key vault? Seems somewhat over-engineered.

3. Create a User Control

Hang on. I’m being thick. We can execute whatever JavaScript we want in a user control. I can create a control with a textarea, or just a div, create a function to accept the unformatted JSON, format it and set the content of the div. No need to send the JSON outside of BC.

Closer, and if you want more control over how the JSON looks on screen probably the best bet. But, is it really necessary to create a user control just to execute some JavaScript? Still seems like too much work for what is only a very simple problem.

4. Use WebPageViewer

The WebPageViewer has a SetContent method (which I’ve written about before) which can accept HTML and JavaScript.

If you pass some script, it will be executed when the page control is loaded. Perfect for what I need. I can just use the JSON.parse and JSON.stringify functions to read and then re-format my JSON text. I’m also wrapping it in pre tags and removing any single quotes from the text to be formatted (because they will screw up the JavaScript and I can’t be bothered to handle them properly).

The AL code ends up looking like this:

local procedure SetResult(NewResult: Text)
var
    JS: Text;
begin
    NewResult := NewResult.Replace('''', '');
    JS := StrSubstNo('document.write(''<pre>'' + JSON.stringify(JSON.parse(''%1''), '''', 2) + ''</pre>'');', NewResult);
    CurrPage.ResultsCtrl.SetContent('', JS);
end;

If you’re not using 26 single quotes in three lines of code then you’re not doing it right 😉
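
For context, CurrPage.ResultsCtrl refers to a usercontrol based on the WebPageViewer add-in hosted on the page. A minimal sketch of such a page (the object id, page name and Result variable are invented for illustration; SetResult is the procedure above, added to the same page):

page 50101 "Function Results"
{
    layout
    {
        area(Content)
        {
            // WebPageViewer is the control add-in shipped with the base application;
            // ResultsCtrl matches the control name used in SetResult above
            usercontrol(ResultsCtrl; WebPageViewer)
            {
                ApplicationArea = All;

                trigger ControlAddInReady(callbackUrl: Text)
                begin
                    // the add-in can only accept content once it signals that it is ready;
                    // Result is assumed to have been populated before the page is opened
                    SetResult(Result);
                end;
            }
        }
    }

    var
        Result: Text;
}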

AL Test Runner Pre-Release Version

TL;DR

There is now a pre-release version of the AL Test Runner extension for Visual Studio Code. It will have the latest (and possibly unstable) features.

Pre-Releases

VS Code recently added support for pre-release versions of extensions. You can install a pre-release by clicking on the “Switch to Pre-Release Version” button from the extension details within VS Code. See https://code.visualstudio.com/updates/v1_63#_pre-release-extensions for more details.

Up ’til now I have typically packaged a new version of the extension and used it myself for a week or two to check that it isn’t horribly broken before I push an update to the marketplace. Having a pre-release version will give me a better way to do that, and also a way to get feedback from anyone who is interested in being a beta tester. GitHub issues are the best place to log requests or bugs.

What’s in the Pre-Release?

There are a few things which are currently in the pre-release but not in the release version.

Debug All Tests

Bit niche, but I have actually found it useful on a couple of occasions. There is an icon at the top of the Test Explorer view and a command in the command palette to debug all the tests, so I decided to add support for it in my extension.

A new version of the Test Runner Service app is required to support this. Install with the "Install Test Runner Service" command from inside VS Code or download the latest version from here: https://github.com/jimmymcp/test-runner-service/raw/master/James%20Pearson_Test%20Runner%20Service.app

Publishing Apps using PowerShell

There is a new setting to publish apps to the container using PowerShell (the bccontainerhelper module) rather than the publish command in VS Code.

Why? A couple of reasons.

  1. I can’t know whether the app has compiled and published successfully when using the AL: Publish command. If publishing the app fails then VS Code is left thinking that the tests are running when in reality they never started. You need to manually cancel the test run before you can start another from the Test Explorer. Publishing from PowerShell gives a little more control.
  2. I’m toying with the idea of automating test runs in the background while developing, something along the lines that Luc suggested here: https://github.com/jimmymcp/al-test-runner/issues/42. This would require a more reliable way to compile and publish the app(s) than just triggering the AL: Publish command and hoping that it worked.
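
For a sense of what publishing from PowerShell looks like, the bccontainerhelper call boils down to something along these lines (the container name and app file path are invented; the extension’s exact invocation may differ):

# publish a compiled app into a local container with bccontainerhelper
# (illustrative values - substitute your own container and .app file)
Publish-BcContainerApp -containerName 'bcserver' `
    -appFile 'C:\source\MyApp\output\MyApp.app' `
    -skipVerification -sync -install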

testRunnerCodeunitId

There is a new key in the AL Test Runner config.json file to specify the id of the test runner codeunit to use. It defaults to the codeunit isolation runner but you can override it with another if you like.
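
Something like this in the config.json, then (the value shown is an assumption on my part – I believe 130450 is the codeunit-isolation runner from the standard test framework and the default; put the id of your own runner here to override it):

{
    "testRunnerCodeunitId": 130450
}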

Various

Various other improvements – updated Pester tests, updated GitHub actions. Take a look on GitHub if you are interested.

Tip: Test for Tables Missing from Permission Sets

In PowerShell:

$tablesInPermissionSets = @()

$permissionSets = gci . -Recurse -Filter '*.al' | ? {(gc $_.FullName).Item(0).startsWith('permissionset')}
$permissionSets | % {
    $content = gc $_.FullName -Raw
    [Regex]::Matches($content, '(?<=tabledata ).*(?= =)') | % {
        $tablesInPermissionSets += $_.Value
    }
}

$tablesInTables = @()

$tables = gci . -Recurse -Filter '*.al' | Where-Object {(Get-Content $_.FullName).Item(0).StartsWith('table ')}
$tables | % {
    $content = gc $_.FullName -Raw
    [Regex]::Matches($content, "(?<=table \d+ ).*(?=$([Environment]::NewLine))") | % {
        $tablesInTables += $_.Value
    }
}

$missingTables = ""

Compare-Object $tablesInTables $tablesInPermissionSets | ? SideIndicator -eq '<=' | % {
    $missingTables += $_.InputObject + [Environment]::NewLine
}

if ('' -ne $missingTables) {
    throw "Missing table permissions: $missingTables"
}

In English:

  1. Find all the files in the current folder, and child folders, with a filename ending in .al and which have a first line starting with “permissionset”
  2. Build a collection of the tabledata objects that are referenced in those permission sets
  3. Find all the files in the current folder, and child folders, with a filename ending in .al and which have a first line starting with “table ” (with a space to avoid matching “tableextension”)
  4. Build a collection of the names of the tables
  5. Use Compare-Object to compare the collections and find names which appear in the list of tables but not in tabledata permissions
  6. Build an error message of missing table permissions
  7. Throw the error
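
To make steps 1–4 concrete, these are the kinds of first lines the script inspects (object names and ids invented for illustration). Both regexes capture the table name including its quotes, which is why Compare-Object can match the two lists exactly:

// first line starts with "table " - the table regex captures: "Sales Widget"
table 50100 "Sales Widget"
{
    fields
    {
        field(1; "No."; Code[20]) { DataClassification = CustomerContent; }
    }
}

// first line starts with "permissionset" - the tabledata regex captures: "Sales Widget"
permissionset 50100 "Widget Objects"
{
    Assignable = true;
    Permissions = tabledata "Sales Widget" = X;
}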

PowerShell Profile:

Like most small PowerShell scripts that I write, I’ve just added it to my PowerShell profile. Run code $profile in a PowerShell prompt to open the profile file in VS Code.

function Test-Permissions() {
  #...all of the above code
}

Maybe there is already a VS Code extension that checks for this? It would make sense, but I’m pretty minimalist with the extensions that I have installed anyway. I run it from the terminal in VS Code.

JSON References

TL;DR

JSON types in AL hold a reference to their value in memory, not the value itself. See the JsonObject documentation: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/methods-auto/jsonobject/jsonobject-data-type

Be careful making JSON types equal to one another. When you do that you copy the reference, not the value. This caught me out.

Example 1

I’m implementing an interface which accepts a JsonObject parameter, expecting you to assign a value to it which will be used later on. The interface doesn’t require that the JsonObject is passed with var. In fact, it requires that it isn’t: if you include var the compiler will complain that you haven’t implemented all of the interface methods. Something like the JsonExample action in the below code.

“That’s never going to work, the parameter needs to be passed with var,” I thought. Better still, just have the method return a JsonObject type. However, the interface probably pre-dates complex return types, so we’ll let that go. Although, I think you could return JSON types even before complex return types were introduced…but let it go.

pageextension 50100 "Customer List" extends "Customer List"
{
    actions
    {
        addlast(processing)
        {
            action(JsonExample)
            {
                ApplicationArea = All;

                trigger OnAction()
                var
                    JsonExample: Codeunit "Json Example";
                    Object: JsonObject;
                    Result: Text;
                begin
                    JsonExample.CalcJson(Object);
                    Object.WriteTo(Result);
                    Message(Result);
                end;
            }
            action(JsonExample2)
            {
                ApplicationArea = All;

                trigger OnAction()
                var
                    JsonExample: Codeunit "Json Example";
                    Object: JsonObject;
                    Result: Text;
                begin
                    JsonExample.CalcJson2(Object);
                    Object.WriteTo(Result);
                    Message(Result);
                end;
            }
            action(JsonExample3)
            {
                ApplicationArea = All;

                trigger OnAction()
                var
                    JsonExample: Codeunit "Json Example";
                    Object: JsonObject;
                    Result: Text;
                begin
                    JsonExample.CalcJson3(Object);
                    Object.WriteTo(Result);
                    Message(Result);
                end;
            }
        }
    }
}

codeunit 50100 "Json Example"
{
    procedure CalcJson(Object: JsonObject)
    begin
        Object.Add('aKindOf', 'magic');
    end;

    procedure CalcJson2(Object: JsonObject)
    var
        CalcJson: Codeunit "Calc. Json";
    begin
        Object := CalcJson.CalcJson();
    end;

    procedure CalcJson3(Object: JsonObject)
    var
        CalcJson: Codeunit "Calc. Json";
        JSON: Text;
    begin
        CalcJson.CalcJson().WriteTo(JSON);
        Object.ReadFrom(JSON);
    end;
}

codeunit 50101 "Calc. Json"
{
    procedure CalcJson() Result: JsonObject
    var
        Boys: JsonObject;
    begin
        Boys.Add('backInTown', true);
        Result.Add('boys', Boys);
    end;
}

I was surprised that it did work. Call JsonExample and you get:

{"aKindOf":"magic"}

That’s because even without the var keyword the JsonObject variable holds a reference to the object rather than the value itself, so the object that CalcJson() adds properties to is the same one the calling code holds after it has finished executing.

Example 2

OK, great. I went on to create a separate codeunit to handle the creation of the JsonObject. I wanted to add some error handling and separate the boilerplate of the interface implementation from the business logic.

I wrote something like CalcJson2(). My tests started failing. It seemed that the JsonObject was empty. That puzzled me for a while. What had I done wrong? I think this is the problem.

  1. The JsonObject referenced by the Result variable in codeunit 50101 is created and has the properties added
  2. Because the Object parameter is not passed with var, CalcJson2 only holds a copy of the reference stored in the caller’s variable
  3. The assignment points that local copy at the JsonObject returned by codeunit 50101 – the caller’s variable still points at the original, empty JsonObject
  4. As a result, the JsonObject is still empty when control returns to the calling code

Example 3

Instead of making the JSON types equal to one another, explicitly copy the value of one into the other. Like this:

procedure CalcJson3(Object: JsonObject)
var
    CalcJson: Codeunit "Calc. Json";
    JSON: Text;
begin
    CalcJson.CalcJson().WriteTo(JSON);
    Object.ReadFrom(JSON);
end;

In this case, writing the value of one to text and then reading it back into the other. It looks a bit weird, but it works: ReadFrom fills the JsonObject that the Object parameter already references, rather than pointing it at a different one. JsonObject also has a Clone method.

Tip: Get Current Callstack with a Collectible Error

The Code

codeunit 50104 "Get Callstack"
{
    SingleInstance = true;

    [ErrorBehavior(ErrorBehavior::Collect)]
    procedure GetCallstack() Callstack: Text
    var
        LF: Char;
    begin
        LF := 10;
        Error(ErrorInfo.Create('', true));
        Callstack := GetCollectedErrors(true).Get(1).Callstack;
        exit(Callstack.Substring(Callstack.IndexOf(Format(LF)) + 1));
    end;
}
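
To see what comes back, a hypothetical caller (codeunit id and names invented) might just surface the callstack in a message:

codeunit 50105 "Callstack Demo"
{
    procedure ShowWhereAmI()
    var
        GetCallstack: Codeunit "Get Callstack";
    begin
        // the message shows the callstack starting from this procedure,
        // because GetCallstack trims its own frame off the top
        Message(GetCallstack.GetCallstack());
    end;
}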

Yea, but…why?

I dunno, I was just curious whether it was possible. And, it is 🧐 Any sensible applications are probably going to be to do with error handling or reporting.

You may be tempted to have your code respond differently depending on the context in which it has been called and read the callstack for that purpose. That’s not a train you want to ride though. I’ve tried, it stops at some pretty weird stations.

One advantage of this approach over using a TryFunction (as below) is that the debugger doesn’t break on collectible errors. It can sometimes be frustrating stepping through errors that are always caught to get to the code that you actually want to debug.

procedure LessGoodGetCallstack(): Text
begin
    ThrowError();
    exit(GetLastErrorCallstack());
end;

[TryFunction]
procedure ThrowError()
begin
    Error('');
end;