Another Look at App Integration in Business Central

Intro

A while ago I posted a series of thoughts about integration between Business Central apps. You can find the original posts here. Those posts are seven years old; surely there must be a better way to do it now? It’s time for a fresh look…

Objective

First, let’s clarify what we are trying to achieve here. I’m interested in the ability for one app to call the functionality in another app without a dependency existing between the two.

Stop!

Wait. Why do you want to do that? Are you some sort of maniac? Allow me to explain (and yes, possibly). But first, let me spell the following out very clearly:

⚠️ If you need two apps to work with one another and you can afford to create a dependency between the two then you should do that. Disregard all of the following and define the dependency. Thank you for your attention. Goodbye.

Scenario

This post is concerned with integration between apps which cannot depend upon one another. Usually this is driven by a commercial consideration i.e. the business must be able to sell and use the apps independently of one another. We don’t want to force the customer to purchase both, but if they do have both then the apps must interact with one another.

Let’s imagine that we have two separate apps: one which is responsible for Web Shop Integration and another which handles Shipping Agent Integration.

We want to keep these apps separate. We don’t want to force the customer to buy both apps if they only use one of them. We will use Shipping Agent Integration with customers who don’t have a web shop and we will also use the Web Shop Integration for customers who don’t ship anything (perhaps they are selling NFTs – in which case shipping agent integration is the least of their worries).

Calculating Shipping from Web Shop Integration

However, if the customer does purchase both apps then they need to be able to work together. Maybe we need to calculate an estimated delivery date and a shipping charge and add that to the order.

How can Web Shop Integration call the functionality in Shipping Agent Integration if it doesn’t depend on it? Web Shop Integration needs to compile, publish and run correctly even if Shipping Agent Integration isn’t installed.

Options

Design Considerations

There are lots of different approaches to solving this problem. It will help to have some criteria to judge them against.

  1. Strongly typed – I want a solution that is strongly typed i.e. I want my IDE¹ to load the available methods, their signatures and documentation from Shipping Agent Integration. I want to catch development mistakes at compile-time, not run-time
  2. Separation of concerns – I don’t want either app to know anything about how the other works. Shipping Agent Integration should just advertise the available functionality that Web Shop Integration can call. The web shop doesn’t need to know or care how that is implemented

Potential Solutions

Solution                          Strongly typed    Separation of concerns
Record & field refs               ❌                ❌
Shared data layer                 ✅                ❌
Microservices                     ❌                ✅
Event in shared dependency        ✅                ✅ (with caveats)
Bridge app                        ✅                ✅ (also, caveats)
Interface in shared dependency    ✅                ✅

I went through some of these options in the original series of posts, but I will briefly recap them here. After all, someone has to keep creating original content for the LLMs to consume and regurgitate.

Record & field refs

Strongly typed: ❌, Separation of Concerns: ❌

Your first thought when needing to integrate may be to just crack open a record ref and read/write the data that you need. You can first check whether the target table or field actually exists so that Web Shop Integration continues to work when Shipping Agent Integration is not present.

Trouble is, you’ll need to open the reference by a hardcoded id or name. You’ll get no compile error if you get it wrong. If the table structure changes in Shipping Agent Integration for any reason then you will also need to make a change in Web Shop Integration – but you won’t get any warning or compilation error to remind you.
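
To make the problem concrete, here is a minimal sketch of the approach. The table id 50100 and field no. 10 are hypothetical placeholders for objects owned by Shipping Agent Integration:

procedure TryGetDefaultShippingCharge(): Decimal
var
    AllObjWithCaption: Record AllObjWithCaption;
    ShippingSetup: RecordRef;
    ChargeFieldRef: FieldRef;
    Charge: Decimal;
begin
    // guard: only proceed if the target table exists in this environment
    if not AllObjWithCaption.Get(AllObjWithCaption."Object Type"::Table, 50100) then
        exit(0);

    ShippingSetup.Open(50100); // hardcoded table id: no compile-time check
    if not ShippingSetup.FindFirst() then
        exit(0);

    ChargeFieldRef := ShippingSetup.Field(10); // hardcoded field no: likewise
    Charge := ChargeFieldRef.Value;
    exit(Charge);
end;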

Shared data layer

Strongly typed: ✅, Separation of Concerns: ❌

If your apps are of a certain vintage² you may have created a shared data layer which both apps depend on. In which case, you don’t need record refs; you can just read/write directly from/to the tables that you are interested in. Shipping Agent Integration can modify the tables which are defined in the data layer with additional validation and trigger code.

Same problem as before though, with this solution Web Shop Integration needs to know too much about how Shipping Agent Integration stores its data. If the data model changes then both apps need to be modified to match.
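
For illustration, with a hypothetical "Shipping Charge Request" table defined in the shared data layer, Web Shop Integration gets strongly typed access at the cost of knowing the data model:

procedure RequestShippingCharge(OrderNo: Code[20])
var
    ShippingChargeRequest: Record "Shipping Charge Request"; // hypothetical table in the shared data layer
begin
    ShippingChargeRequest.Init();
    ShippingChargeRequest."Order No." := OrderNo;
    // Insert(true) runs the table's triggers, where Shipping Agent Integration
    // has added its validation and processing code
    ShippingChargeRequest.Insert(true);
end;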

Microservices

Strongly typed: ❌, Separation of Concerns: ✅

Or maybe a microservices-like solution is the way to go? Have the apps send messages to one another. Of course, in this context we’re talking about HTTP calls between the microservices. You could do that I suppose – Shipping Agent Integration could define an API which Web Shop Integration calls over HTTP. Then again that would be insane. Think about the performance hit and having to handle the authentication.

But, maybe we could implement something similar but kept internal to BC? “Post” messages to the Shipping Agent Integration app with Codeunit.Run(<record which holds the message>)? Implement some sort of message queue in a shared dependency?

All possible, but not strongly typed. How does someone developing Web Shop Integration know what functionality is available in Shipping Agent Integration? I’d guess you’d need to hardcode the expected structure of the messages (presumably in JSON)?

Not such a problem in JavaScript / PowerShell / a language which serializes/deserializes between text and objects. That’s not really a thing in AL though.
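
A sketch of what the in-process “message” idea might look like, assuming a hypothetical Integration Message table in a shared dependency and a hardcoded codeunit id for the handler in Shipping Agent Integration:

procedure PostCalculateShippingMessage(OrderNo: Code[20])
var
    IntegrationMessage: Record "Integration Message"; // hypothetical shared table
    Payload: JsonObject;
    PayloadText: Text;
begin
    Payload.Add('orderNo', OrderNo);
    Payload.WriteTo(PayloadText);

    IntegrationMessage.Init();
    IntegrationMessage."Message Type" := 'CALCSHIPPING'; // stringly typed contract
    IntegrationMessage.Payload := CopyStr(PayloadText, 1, MaxStrLen(IntegrationMessage.Payload));
    IntegrationMessage.Insert();
    Commit(); // required before using Codeunit.Run's return value

    // hardcoded handler id: nothing verifies at compile time that the handler
    // exists or understands the payload
    if Codeunit.Run(50150, IntegrationMessage) then;
end;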

Event in shared dependency

Strongly typed: ✅, Separation of Concerns: ✅

We could have an event in a shared dependency. Let’s say that we add a new app, Integration Layer, which both Web Shop Integration and Shipping Agent Integration depend on.

We define an event somewhere in that app with the signature that we need. Pass all the context that Shipping Agent Integration needs and get the result back via parameters passed by reference.

[IntegrationEvent(false, false)]
procedure OnCalculateShippingCharge(var SalesHeader: Record "Sales Header"; var TempSalesLine: Record "Sales Line" temporary; var ShippingCharge: Decimal; var Handled: Boolean)

This has the benefit of being strongly typed – someone developing Web Shop Integration knows what they need to pass to the event but doesn’t need to know about how it is implemented.

So far, so good. It still isn’t the most elegant solution though – raising an event and hoping that someone subscribes to it isn’t quite the same as calling the functionality in Shipping Agent Integration.

Come to think of it, was it even Shipping Agent Integration which subscribed to it? Did several apps subscribe? A handled flag only tells you that 1 or more subscribers picked it up (assuming that they set handled to true, although there is nothing to guarantee that they did).
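
For what it’s worth, the subscriber in Shipping Agent Integration might look something like this (the publisher codeunit name "Integration Layer Events" is an assumption to make the example compile):

codeunit 50110 "Shipping Charge Subscriber"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Integration Layer Events", 'OnCalculateShippingCharge', '', false, false)]
    local procedure HandleCalculateShippingCharge(var SalesHeader: Record "Sales Header"; var TempSalesLine: Record "Sales Line" temporary; var ShippingCharge: Decimal; var Handled: Boolean)
    begin
        if Handled then
            exit; // another subscriber got there first

        ShippingCharge := 4.99; // placeholder for the app's real calculation
        Handled := true;
    end;
}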

Bridge App

Strongly typed: ✅, Separation of Concerns: ✅

Rather than a shared dependency, you could have a shared dependent app.

You could define events in Web Shop Integration which the Bridge App subscribes to. The Bridge App calls the relevant functionality in Shipping Agent Integration and passes the results back to Web Shop Integration.

This is fine, and might be the best solution if you don’t control one of the apps e.g. you need an app installed from AppSource to integrate with one of your own.

The downside is that there is nothing to guarantee that the Bridge App is also installed when both Web Shop Integration and Shipping Agent Integration are. The Web Shop will work fine, but the functionality in Shipping Agent Integration won’t get called.

Presumably you only intend for the Bridge App to subscribe to these new events in Web Shop Integration, but you can’t stop anyone else subscribing to them. Well, you could by marking them as internal and setting internalsVisibleTo in Web Shop Integration’s app.json, but that just swaps in a new set of problems. If you have functionality in Web Shop Integration which is genuinely internal (i.e. the Bridge App should not have access to it) you’ve got no way of giving access to the events but not the other internals.

Finally, you are likely only joining two specific apps together in this way. If you have several apps which need to work together in the presence or absence of several other apps you could quickly end up with more Bridge Apps than you want to manage.
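
A sketch of what the Bridge App might contain (the publisher codeunit in Web Shop Integration and the management codeunit in Shipping Agent Integration are hypothetical names):

codeunit 50130 "Web Shop Shipping Bridge"
{
    // the Bridge App depends on both apps, so it can subscribe to an event
    // in one and call the other directly
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Web Shop Order Mgt.", 'OnCalculateShippingCharge', '', false, false)]
    local procedure HandleCalculateShippingCharge(var SalesHeader: Record "Sales Header"; var ShippingCharge: Decimal; var Handled: Boolean)
    var
        ShippingAgentMgt: Codeunit "Shipping Agent Mgt.";
    begin
        ShippingCharge := ShippingAgentMgt.CalculateShippingCharge(SalesHeader);
        Handled := true;
    end;
}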

Interface in shared dependency

Strongly typed: ✅, Separation of Concerns: ✅

This will be the subject of a follow-up post, but it will involve defining an interface for Shipping Agent Integration in a shared integration layer and then having Web Shop Integration call its implementation directly, without needing to raise an event.

Each app remains responsible for its own data and functionality and we can have a mechanism for Web Shop Integration to call the Shipping Agent Integration functionality directly without reflection or events. Cake ✅, Eat it ✅
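
As a taster (all names illustrative; how Web Shop Integration gets hold of an implementation without a dependency is for the follow-up post), the shared app might declare:

interface "Shipping Charge Calculator"
{
    procedure CalculateShippingCharge(var SalesHeader: Record "Sales Header"; var TempSalesLine: Record "Sales Line" temporary): Decimal;
}

…and Shipping Agent Integration might implement it:

codeunit 50140 "Shipping Charge Impl." implements "Shipping Charge Calculator"
{
    procedure CalculateShippingCharge(var SalesHeader: Record "Sales Header"; var TempSalesLine: Record "Sales Line" temporary): Decimal
    begin
        exit(4.99); // placeholder for the real calculation
    end;
}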

To be continued…

Notes

  1. a few months ago I would have written “VS Code” without a second thought, but I suppose you might be using Cursor or any other mad AI-powered editor by the time you read this
  2. the few versions where Microsoft advertised the performance impact of having many table extensions to the same tables and tried to convince us that it was our problem to solve 😅

Export Test Steps as CSV to Import to Azure DevOps

I don’t know if anyone needs this. I’m not sure if even I need this yet, but I am starting to find it useful. We use test plans in Azure DevOps to record the steps and results of manual testing.

I figured that if I’m writing decent comments in my automated (integration*) tests then I should be able to just copy them to the test plan in DevOps, or at least use them as the basis.

Find your test (or test codeunit) in the testing tree, right click and export to CSV. That reads your //[GIVEN], //[WHEN] and //[THEN] lines and drops them into the right format to import into DevOps.

https://jimmymcp.github.io/al-test-runner-docs/articles/export-test-to-csv.html
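
For example, a test written with those comments (names invented for illustration):

codeunit 50120 "Web Shop Order Tests"
{
    Subtype = Test;

    [Test]
    procedure ShippingChargeIsAddedToWebShopOrder()
    begin
        //[GIVEN] a web shop order for a customer with a shipping agent
        //[WHEN] the shipping charge is calculated
        //[THEN] a charge line is added to the order
    end;
}

Those three comment lines become the steps of the test case once imported into DevOps.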

Postscripts

*yes, I know, the terminology is off, don’t fight me. By “integration” tests I mean scenarios that resemble what the user is doing in the client**, as opposed to calling codeunits, table methods or field validations directly.

**although, without using TestPages. I’m not really trying to simulate user behaviour in the client, I’m trying to recreate the scenario – but these are automated tests and they should still run fast. Use the actual client and your actual eyes to test the client***.

***and maybe page scripts****

****which also now show up in your test tree when you save the yml export into your workspace.

Additional Details about Extension Settings in Business Central 25.0

Extension Settings

For a long time the only thing you could see on the Extension Settings page was whether to allow HTTP calls from the extension (the Allow HttpClient Requests checkbox). This page has got some love in BC25.

That setting is still the only thing that you can control, but now you can also see:

Resource Protection Policies

Corresponding to the resource exposure policies in app.json (maybe “exposure” sounded a little risqué for the user interface). This indicates whether you can debug the extension, whether you can download its source code, and whether the source is included when you download the symbols.

That might be useful to know before you create a project to download the symbols and attempt to debug something.

Interestingly, extensions which don’t expose their source code get the red No of shame in the Extension Management list.

Source Control Details

Includes the URL of the repository and the commit hash that the extension was created from. That’s cool – you can link straight from the Extension Settings page to the repo in DevOps / GitHub / wherever your source is. That’s a nice feature, whether for your own extensions or for open source extensions that you are using.

It may be that each time you build an app you already give it an unambiguous, unique version number (we include the DevOps unique build id in the extension version) but the commit hash is nice to see as well.

How Does it Know?

Where does that information come from? It is included in the NavxManifest.xml file; extract the .app file with 7-Zip and take a look.

<ResourceExposurePolicy AllowDebugging="true" AllowDownloadingSource="true" IncludeSourceInSymbolFile="true" ApplyToDevExtension="false"/>
<KeyVaultUrls/>
<Source RepositoryUrl="https://TES365@dev.azure.com/..." Commit="625f12bc521294b252de19db8ad9530c889e35ff"/>
<Build Timestamp="2024-09-10T12:49:40.2694758Z" CompilerVersion="13.1.16.16524"/>
<AlternateIds/>

How Does That Info Get Populated?

When the app is compiled by alc.exe there are additional switches to set this information.

These switches are not set when you compile the app in VS Code (crack the app file open with 7-Zip and check), but you can set them during the compilation step of your build. If you are using DevOps pipelines you can make use of the built-in variables Build.SourceVersion and Build.Repository.Uri to get the correct values.

&'$(alcPath)' /project:"$(projectPath)" /sourcecommit:"$(Build.SourceVersion)" /sourcerepositoryurl:"$(Build.Repository.Uri)" ... (truncated)

That’s if you roll your own build pipelines. If you use some other tooling (AL-Go for GitHub, ALOps etc.) then the compilation step will be in their code. They may have already implemented this, I don’t know.

Side note: Microsoft want to push us to use 3rd party tooling rather than making our own (e.g. I watched this podcast with Freddy the other day) but personally I still see enough value in having control over the whole DevOps process to justify the small amount of time I spend maintaining and improving it. I’m open to changing that stance one day, but not today.

Testing Compatibility Between Runtime and Application Version in Business Central Builds

Background

Recently I got stung by this. As a rule we keep the application version in app.json low (maybe one or two versions behind the latest) so that it can be installed into older versions of Business Central. Of course this is a balance – we don’t want to have to support lots of prior versions and old functionality which is becoming obsolete (like the Invoice Posting Buffer redesign, or the new pricing experience which has been available but not enabled by default for years). Having to support multiple Business Central features which may or may not be enabled is not fun.

On the other hand, Microsoft are considering increasing the length of the upgrade window, so it is more likely that we are going to want to install the latest versions of our apps for customers who are not on the latest version of Business Central.

Runtime Versions

But that wasn’t really the point of the post. The point is, there are effectively two properties in app.json which define the minimum version of Business Central required by your app.

  • application: the obvious one. We mostly have this set one major version behind the latest release unless there are specific reasons to require the latest
  • runtime: the version of the AL runtime that you are using in the app. This increases when new features are added to the AL language – like ternary operators (who knew a question mark could provoke such passionate arguments?), the as, is and this keywords, or multiple extensions of the same object in the same project

If you want to use cool new features of the language (and we do, right? Us devs love this stuff) then you need to increase the runtime version in app.json. But, you need to be aware that you are effectively also increasing the minimum version required by your app. Even if you aren’t using anything new in the base and system applications. This is the table of currently available runtime versions: https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-choosing-runtime#currently-available-runtime-versions

Pipelines

I didn’t want to get caught out by this again so I added a step into our pipeline to catch it. The rule I’ve gone with is: the application version must be at least 11 major version numbers higher than the runtime version. If it isn’t then fail the build. In that case we should either make a conscious decision to raise the application version or else find a way to write the code that doesn’t require raising the runtime version. Either way, we should make a decision, not sleepwalk into raising our required application version.

Why 11? This is because runtime 1.0 was released with Business Central 12.0. Each subsequent major release of Business Central has come with a new major release of the runtime (with a handful of runtime releases with minor BC releases thrown in for good measure). For example, an app on runtime 13.0 needs an application version of at least 24.0.

The step is pretty simple ($appJsonPath is a variable which has been set earlier in the pipeline).

steps:
  - pwsh: |
      $appJson = Get-Content $(appJsonPath) -Raw | ConvertFrom-Json
      $runtimeVersion = [Version]::Parse($appJson.runtime)
      $applicationVersion = [Version]::Parse($appJson.application)

      if ($applicationVersion -lt [version]::new($runtimeVersion.Major + 11, $runtimeVersion.Minor)) {
        Write-Host -ForegroundColor Red "##vso[task.logissue type=error;]Runtime version ($runtimeVersion) is not compatible with application version ($applicationVersion)."
        throw "Runtime version ($runtimeVersion) is not compatible with application version ($applicationVersion)."
      }
    displayName: Test runtime version

Code Coverage Updates in VS Code with AL Test Runner

Intro

For a while now AL Test Runner has been able to download the code coverage details after running your tests, output a summary of the objects that were hit with some stats and then highlight the lines which were hit in the previous test run or the last time you ran all the tests. More in the docs.

Recently, VS Code added an API for test extensions to feed coverage data into, along with some UI to show that coverage. It’s pretty cool.

Test Coverage

The first thing you’ll notice is this “Test Coverage” panel which is displayed after the tests have run. It displays a tree of the objects which have been hit by the run and the percentage coverage (in statement coverage terms).

If you click on a file in the tree it will open the file in the editor and you will see lines which were hit highlighted in the gutter.

In fact, these highlights will continue to be shown as you navigate around your source code. I’m leaving the “Code Coverage: Off/Previous/All” item in the status bar as this highlights each whole line and is much easier to see if you want to zoom out and get an impression of coverage of the whole file.

Coverage from Previous Runs

Coverage from previous test runs is stored and can be accessed from the Test Results pane (usually shown at the bottom of the screen). It might be useful to switch between test coverage results for test runs to see how the coverage % has changed over time (with the usual caveat about not using code coverage as a target).

The decorations in the gutter to indicate which lines have been covered in the current file are only shown when the latest test coverage is being displayed. That makes sense because the code coverage detail is all based on line numbers. Once you’ve made some changes to a file those line numbers are obsolete.