Chaining Builds in Azure DevOps

We are triggering a lot of builds in Azure DevOps these days. If anyone so much as looks at an AL file we start a new build.

OK, that’s a small exaggeration, but we do use our build pipelines for:

  • Continuous integration, i.e. whenever code is pushed up to Azure DevOps we start a build
  • Verifying our apps compile and run against different localisations (more on that another time)
  • Checking that a dependent app hasn’t been broken by some changes (what we’re going to talk about now)
  • Building our app against different upcoming versions of Business Central (this is an idea that we haven’t implemented yet)

Background Reading

If you haven’t got a clue what I’m talking about, a little background reading on Azure DevOps build pipelines might be useful before carrying on.

Overview

We’re considering a similar problem to the one I wrote about in the last post on package management – but from the other end. The question then was, “how do I fetch packages (apps) that my app depends on?” Although not explicitly stated, a benefit of the package management approach is that you’ll find out pretty quickly if there are any breaking changes in the dependency that you must handle in your app.

Obviously, you want to minimise the number of times you make a breaking change in the first place, but if you can’t avoid it then change the major version number and do your best to let any dependent devs know how it will affect them e.g. if you’re going to change how an API works, give people some notice…I’m looking at you, Microsoft 😉

But what if we’re developing the dependency and not the dependent app? There will be no trigger to build the dependent app and check that it still works.

Chaining Builds

Azure DevOps allows you to trigger a new build on completion of another build. In our scenario we’ve got two apps that are built from two separate Git repositories in the same Azure DevOps project. One is dependent upon the other.

It doesn’t really matter for the purposes of this post what the apps do or why they are split into two but, for the curious, the dependent app provides a little slice of extra functionality for on-prem customers that cannot be supported for SaaS. Consequently the dependency (which has the core functionality supported both for SaaS and on-prem) is developed far more frequently than the dependent app.

[Image: build completion trigger settings in Azure DevOps]

We want to check that, when we push changes to the dependency, the dependent app still works i.e. it compiles, publishes, installs and the tests still run.

You can add a “Build Completion” trigger to the pipeline for the dependent app. This defines that when the dependency app is built (filtered by branch) a build of the dependent app kicks off.

That way, if we’ve inadvertently made some breaking change, we give ourselves a chance to catch it before our customers do.

Limitations

Currently the triggering and to-be-triggered build pipelines must be in the same Azure DevOps project – which is a shame. I’d love to be able to trigger builds across different projects in the same organisation. No doubt this would be possible to achieve through the API – maybe I’ll attempt it some day – but I’d rather this was supported in the UI.
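For the curious, queueing a build through the REST API might look something like this. This is a minimal sketch, not production code: the organisation, project and definition ID are placeholders and $PAT holds a personal access token.

$Headers = @{Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$PAT"))}
$Body = @{definition = @{id = 42}} | ConvertTo-Json  # 42 = id of the pipeline to queue

Invoke-RestMethod -Method Post -Headers $Headers -ContentType 'application/json' -Body $Body `
  -Uri 'https://dev.azure.com/myorganisation/SomeOtherProject/_apis/build/builds?api-version=5.0'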

An Approach to Package Management in Dynamics 365 Business Central

TL;DR

We use PowerShell to call the Azure DevOps API and retrieve Build Artefacts from the last successful build of the repository/repositories that we’re dependent on.

Background

Over the last few years I’ve moved into a role where I’m managing a development team more than I’m writing code myself. I’ve spent a lot of that time looking at tools and practices in the broader software development community. After all, whether you’re writing C/AL, AL, PowerShell or JavaScript it’s all code and it’s unlikely that we’ll face any challenges that haven’t already been faced in one way or another in a different setting.

In that time we’ve introduced a number of those tools and practices into our own team.

Package Management

The next thing to talk about is package management. I’ve written about the benefits of trying to avoid dependencies between your apps before (see here). However, if app A relies on app B and you cannot foresee ever deploying A without B then you have a dependency. There is no point trying to code your way round the problems that avoiding the dependency will create.

Accepting that your app has one or more dependencies – and most of our apps have at least one – opens up a bunch of questions and presents some interesting challenges.

Most obviously, you need to know: where can I get the .app files for the apps that I am dependent on? Is it at least the minimum version required by my app? Is this the correct app for the version of Dynamics NAV / Dynamics 365 Business Central that I am developing against? Are the apps that I depend on themselves dependent on other apps? If so, where do I get those from? Is there another layer of dependencies below that? Is it really turtles all the way down?

These are the sorts of questions that you don’t want to have to worry about when you are setting up an environment to develop in. Docker gives us a slick way to quickly create disposable development and testing environments. We don’t want to burn all the time that Docker saves us searching for, publishing and installing app files before we can start work.

This is what a package manager is for. The developer just needs to declare what their app depends on and leave the package manager to retrieve and install the appropriate packages.
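In AL that declaration is the dependencies section of app.json, something like this (the ID and names here are invented; depending on your runtime version the property is appId or id):

"dependencies": [
  {
    "appId": "a56ea287-d8f2-4d1e-9a1a-e443c06cdca2",
    "name": "App B",
    "publisher": "Our Company",
    "version": "1.2.0.0"
  }
]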

The Goal

Why are we talking about this? What are we trying to achieve?

We want to keep the maintenance of all apps separate. When writing app A I shouldn’t need to know or care about the development of app B beyond my use of its API. I just need to know:

  • The minimum version that includes the functionality that I need – this will go into my app.json file
  • I can acquire that, or a later, version of the app from somewhere as and when I need it

I want to be able to specify my dependencies and with the minimum of fuss download and install those apps into my Docker container.

We’ve got a PowerShell command to do just that.

Get-ALDependencies -Container BCOnPrem -Install

There are a few jigsaw pieces we need to gather before we can start putting it all together.

Locating the Apps

We need somewhere to store the latest version of the apps that we might depend upon. There is usually some central, public repository where the packages are hosted – think of the PowerShell Gallery or Docker Hub for example.

We don’t have an equivalent repository for AL apps. AppSource performs that function for Business Central SaaS but that’s not much use to us while we are developing or if the apps we need aren’t on AppSource. We’re going to need to set something up ourselves.

You could just use a network folder. Or maybe SharePoint. Or some custom web service that you created. Our choice is Azure DevOps build artefacts, for a few reasons:

  • We’ve already got all of our AL code going through build pipelines anyway. The build creates the .app files, digitally signs them and stores them as build artefacts
  • The artefacts are only stored if all the tests ran successfully which ought to give us more confidence relying on them
  • The build automatically increments the app version so it should always be clear which version of the app is later and we shouldn’t get caught in app version purgatory when upgrading an app that we’re dependent on
  • We’re already making use of Azure DevOps’ REST API for loads of other stuff – it was easy to add some commands to retrieve the build artefacts (hence my earlier post on getting started with the API)

Identifying the Repository

There is a challenge here. In the app.json file we identify dependencies by app name, id and publisher. To find a build – and its artefacts – we need to know the project and repository name in Azure DevOps.

Seeing as we can’t add extra details into the app.json file itself we hold these details in a separate JSON file – environment.json (see the sample below). This file can have an array of dependency objects with a:

  • name – which should match the name of the dependency in the app.json file
  • project – the Azure DevOps project to find this app in
  • repo – the Git repository in that project to find this app in
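A minimal environment.json might look like this (names invented for illustration):

{
  "dependencies": [
    {
      "name": "App B",
      "project": "OurProject",
      "repo": "app-b"
    }
  ]
}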

Once we know the right repository we can use the Azure DevOps API to find the most recent successful build and download its artefacts.
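Roughly, those API calls look like this in PowerShell. This is a sketch rather than our actual module: $Headers carries the authentication (built from a personal access token as before) and the definition ID is a placeholder.

$BaseUri = 'https://dev.azure.com/myorganisation/MyProject/_apis/build'

# the most recent successful build for the definition
$Build = (Invoke-RestMethod -Headers $Headers `
  -Uri "$BaseUri/builds?definitions=42&resultFilter=succeeded&`$top=1&api-version=5.0").value[0]

# list its artefacts and download the first as a zip
$Artifact = (Invoke-RestMethod -Headers $Headers `
  -Uri "$BaseUri/builds/$($Build.id)/artifacts?api-version=5.0").value[0]
Invoke-WebRequest -Headers $Headers -Uri $Artifact.resource.downloadUrl -OutFile 'apps.zip'
Expand-Archive 'apps.zip' -DestinationPath '.\apps'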

I’m aware that we could use Azure DevOps to create proper releases, rather than downloading apps that are still in development. We probably should – maybe I’ll come back and update this post some day. For now, we find that using the artefacts from builds is fine for the two main purposes we use them for: creating local development environments and creating a Docker container as part of a build. For the time being we have a separate, manual process for uploading new released versions to SharePoint.

The Code

So much for the theory, let’s look at some code. In brief we:

  1. Read app.json and iterate through the dependencies
  2. For each dependency, find the corresponding entry in the environment.json file and read the project and repo for that dependency
  3. Download the app from the last successful build for that repo
  4. Acquire the app.json of the dependency
  5. Repeat steps 2-4 recursively for each branch of the dependency tree
  6. Optionally publish and install the apps that have been found (starting at the bottom of the tree and working up)

A few notes about the code:

  • It’s not all here – particularly the definition of Invoke-TFSAPI. That is just a wrapper for the Invoke-WebRequest command which adds the authentication headers (as previously described)
  • These functions are split across different files and grouped into a module; I’ve bundled them into a single file here for ease

(The PowerShell is hosted here if you can’t see it embedded below: https://gist.github.com/jimmymcp/37c6f9a9981b6f503a6fecb905b03672)
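If you just want the shape of the recursion, this compressed sketch may help. Get-DependencyProps, Get-AppFromBuild and Get-AppJsonFromApp are hypothetical stand-ins for the environment.json lookup, artefact download and app.json extraction described above:

function Get-ALDependencyApps {
  param ($AppJson)
  foreach ($Dependency in $AppJson.dependencies) {
    $Props = Get-DependencyProps -Name $Dependency.name                    # step 2
    $AppFile = Get-AppFromBuild -Project $Props.project -Repo $Props.repo  # step 3
    # recurse into the dependency's own dependencies first (steps 4-5)
    Get-ALDependencyApps -AppJson (Get-AppJsonFromApp -Path $AppFile)
    $AppFile  # emitted bottom-up, ready to publish and install in order (step 6)
  }
}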

Working with Version Numbers in Dynamics 365 Business Central / NAV

Specifically I’m talking about assigning version numbers to your own code and manipulating those versions in C/AL, AL and PowerShell.

Version Numbering

There are lots of different systems for assigning a version number to some code. Some incorporate the date or the current year and day number within the year. Loads of background reading here if you’re interested.

The system we typically follow is:

Version number = a.b.c.d where:

  • a = major version – this is only incremented for a major refactoring or complete rewrite of the software
  • b = minor version – incremented when a significant new feature is implemented
  • c = fix – incremented for small changes and bug fixes
  • d = build – set to the ID of the build that created it in Azure DevOps

This system isn’t perfect and we don’t always follow it exactly as written. The line between what is just a fix and what is a new feature is a little blurry. We don’t run C/AL code through our DevOps build process, so C/AL objects don’t get a build ID like AL apps do. Hit the comments section and tell me how and why you version differently.

Regardless, the important thing is that you give some consideration to versioning. Above all, two different copies of your code must not go out to customers with the same version number. This is especially true for AL apps: if you want to publish an updated version of an app it must have a higher version number than the one you are replacing.

Automation

There are several situations where we need to work with version numbers in code and in scripts.

  • In the build process – reading the current version from app.json and setting the last element to equal the build ID (sketched below)
  • In our PowerShell script that creates a new navx package from C/AL code (yes, we still use v1 extensions – let’s go into that some other time)
  • In upgrade code – what was the previous version of the app? Was it higher or lower than a given version?
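The first of those might go something like this during the build. A sketch only: BUILD_BUILDID is one of Azure DevOps’ predefined variables, available as an environment variable on the build agent.

$AppJson = Get-Content '.\app.json' -Raw | ConvertFrom-Json
$Version = [Version]$AppJson.version

# keep major.minor.fix, swap the last element for the build ID
$AppJson.version = [Version]::new($Version.Major, $Version.Minor, $Version.Build, $env:BUILD_BUILDID).ToString()
Set-Content '.\app.json' (ConvertTo-Json $AppJson -Depth 10)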

If you are considering, like we used to, just treating version numbers as strings…don’t. Think about it:

Treated as versions, 1.10.0 is greater than 1.9.0, but treated as strings it isn’t. That led us to split the versions into two arrays and compare each element. It worked, but it was convoluted. And completely unnecessary.

Some bright spark in our team wondered why we can’t just use .Net’s Version type. We can.
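A quick demonstration:

C:\> '1.10.0' -gt '1.9.0'
False

C:\> [Version]'1.10.0' -gt [Version]'1.9.0'
True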

C/AL

Use a DotNet variable of type Version. Construct it with the version number string. NAVAPP.GETARCHIVEVERSION returns a string that can be used.

You can use the properties of the variable to access the individual elements of the version and its methods to compare to another string (less than, less than or equal to, greater than, greater than or equal to).

Version : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
Version2 : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

Version.Version('1.10.0');
Version2.Version(NAVAPP.GETARCHIVEVERSION);

IF Version2.op_LessThan(Version) THEN BEGIN
  //some upgrade code that must be run when coming from an older version than 1.10.0
END;

PowerShell

Declare a variable of a given DotNet type using square brackets. Create a new version with new, Parse or TryParse. The latter expects a version variable passed by reference and returns a Boolean indicating whether a value could be assigned.

Access the elements of the version through the properties of the variable.

C:\> $Version1 = [Version]::new(1,10,0)
>> $Version2 = [Version]::new('1.9.0')
>> $Version1.CompareTo($Version2)
1

C:\> $Version = [Version]::new(1,10,0)
>> $Version.Minor
10

C:\> $Version = [Version]::new()
>> [Version]::TryParse('monkey',[ref]$Version)
False

AL

AL has a native Version datatype. As above, create a new version either from its elements or from a string. NavApp.GetArchiveVersion returns a string that can be used (for migration from v1).

To get the version of the current module (app) or of another app use NavApp.GetCurrentModuleInfo or NavApp.GetModuleInfo.

var
  Ver : Version;
  Ver2 : Version;
  DataVer : Version;
  AppVer : Version;
  ModInfo : ModuleInfo;
  ModInfo2 : ModuleInfo;
begin
  Ver := Version.Create(1,10,0);
  Ver2 := Version.Create(NavApp.GetArchiveVersion());

  if Ver > Ver2 then begin
    //some upgrade code
  end;

  //version of the current app
  NavApp.GetCurrentModuleInfo(ModInfo);
  DataVer := ModInfo.DataVersion();
  AppVer := ModInfo.AppVersion();

  //app version of the first dependency
  NavApp.GetModuleInfo(ModInfo.Dependencies().Get(1).Id(),ModInfo2); //dependencies is 1 based, not 0 based
  AppVer := ModInfo2.AppVersion();
end;

Part 3: Integration Between Extensions in Dynamics 365 Business Central

[Animation: the trig calculator in action]

Sample Code: https://github.com/jimmymcp/calculator-interface

This post is in a series (parts one and two here) discussing the challenges and practical approaches to breaking your functionality into discrete extensions and getting them to integrate with one another.

In the previous post I described my attempt to declare and implement interfaces in AL with a heady mix of a discovery pattern, Codeunit.Run and manually bound subscribers. In this post I’m going to walk through an example.

The example is, of course, a calculator. Cos, sin and tan calculations will be handled by separate modules all implementing a TRIG interface and its Calculate method.

The calculator should be able to make use of any of the calculations independently of the others and it should be possible to maintain a calculation module without affecting anything else.

[Image: structure of the calculator solution – the calculator app, the three calculation modules and the shared interface app]

Before we start, a few things to note:

  • We can’t actually define an interface and implement it in any formal way in AL. Not in a sense that will give you a compile-time error if you don’t implement it correctly. Microsoft are aware that this is something we need and are investigating how they might bring this to AL e.g. check out the “Designing for extensibility” session at NAVTechDays 2018. This is my attempt to bring the benefits of interfaces to Business Central development until Microsoft give us something better
  • For the sake of convenience I’m using a calculator example rather than the file handler scenario I have been discussing in this series. This approach could be considered for any scenario where you have multiple, independent implementations of similar functionality
  • Also for convenience, all of the sample code is in a single app. In reality it would be split into 5 apps as per the diagram above

Registering Implementations

With all that said let’s get down to the details. The first thing is that each of the calculation modules registers itself as an implementation of the TRIG interface.

Each module has a pair of codeunits:

  1. Binding – responsible for subscribing to the discovery event to register the implementation, and for binding an instance of the Calculation codeunit
  2. Calculation – contains the methods that actually implement the interface events; is manually bound

The below code is from the CosBinding codeunit. It adds a new entry into the Interface Implementation table to register an implementation of the TRIG interface called COS. It also specifies the codeunit to run when the COS implementation needs to be used – itself.

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Interface Mgt.", 'OnRegisterInterface', '', false, false)]
local procedure OnRegisterInterface(var InterfaceImplementationBuffer: Record "Interface Implementation" temporary)
begin
  InterfaceImplementationBuffer.AddNewEntry('TRIG','COS',Codeunit::"Cos Binding",0);
end;

You’ll see the same code for the SIN and TAN implementations.

Looking Up Implementations

Now that we’ve got multiple implementations of the same interface we need some way of allowing code that requires the interface to select the appropriate implementation.

field(Operation; Operation)
{
  ApplicationArea = All;
  AssistEdit = true;

  trigger OnAssistEdit()
  var
    InterfaceImplementation: Record "Interface Implementation";
    InterfaceMgt: Codeunit "Interface Mgt.";
  begin
    if InterfaceMgt.LookupInterfaceImplementation('TRIG', InterfaceImplementation) then
      Operation := InterfaceImplementation."Implementation Code";
  end;
}

The Operation field on the Calculator page allows the user to select the operation they want to perform i.e. which implementation of the TRIG interface to use in the calculation.

The Interface Mgt. codeunit provides a lookup of the implementations that have been registered for a given interface and returns the selected record.

Invoking Interface Methods

Now that we’ve registered the implementations and selected the specific one we want to use, it’s time to actually invoke it.

action(Calculate)
{
  ApplicationArea = All;
  Image = Calculate;
  Promoted = true;
  PromotedCategory = Process;
  PromotedOnly = true;

  trigger OnAction()
  var
    InterfaceMgt: Codeunit "Interface Mgt.";
    AppIntegrationData: Codeunit "App Integration Data";
    Handled: Boolean;
  begin
    AppIntegrationData.SetIntegationData('Angle', Angle);
    InterfaceMgt.InvokeInterfaceEvent('TRIG', Operation, 'Calculate', AppIntegrationData, Handled);
    if Handled then
      Result := AppIntegrationData.GetIntegrationDataDecimal('Result', 0)
  end;
}

I’m using an instance of the App Integration Data codeunit as a container for the data that needs to be passed between the implementation codeunit and the codeunit that is calling it. In my case I just need to pass in an angle and retrieve the result of the calculation.

InvokeInterfaceEvent tells the Interface Mgt. codeunit to invoke the Calculate method of the TRIG interface, using the implementation selected in the Operation field. The instance of App Integration Data is passed in along with a Handled flag.

If the event has been handled then we retrieve the value of the Result variable – as a decimal – from the App Integration Data codeunit.

And that’s it.

InvokeInterfaceEvent

So how does the appropriate Calculation codeunit get called?

This is the InvokeInterfaceEvent method.

procedure InvokeInterfaceEvent(InterfaceCode: Code[20]; ImplementationCode: Code[20]; EventName: Text; var IntegrationData: Codeunit "App Integration Data"; var Handled: Boolean)
begin
  Clear(InterfaceCodeunit);
  if not GetInterfaceImplementation(InterfaceCode, ImplementationCode, InterfaceImplementation) then
    Error(NoInterfaceImplementationErr, InterfaceCode);

  InterfaceImplementation.TestField("Codeunit ID");
  Codeunit.Run(InterfaceImplementation."Codeunit ID");
  if not InterfaceCodeunit.IsCodeunit() then
    Error(NoInterfaceCodeunitErr, InterfaceImplementation."Codeunit ID", InterfaceImplementation."Interface Code", InterfaceImplementation."Implementation Code");

  OnInterfaceEvent(EventName, IntegrationData, Handled);
  Clear(InterfaceCodeunit);
end;

First, check that a valid interface and implementation have been specified and throw an error if not.

Then test that a Codeunit ID has been specified by the selected implementation and run that codeunit. As we saw above, when registering the implementation the (Cos/Sin/Tan)Binding was specified as the codeunit to run. That codeunit is responsible for binding an instance of the correct (Cos/Sin/Tan)Calculation codeunit and passing that instance back to the Interface Mgt. codeunit (see below).

The Interface Mgt. codeunit has a global InterfaceCodeunit variable which keeps that bound codeunit instance in scope, ready to respond to the OnInterfaceEvent call.

Before calling OnInterfaceEvent we check that the InterfaceCodeunit variable does actually contain a codeunit.

After the OnInterfaceEvent call the InterfaceCodeunit is cleared to dispose of the bound codeunit and ensure it doesn’t respond to any more events until we need it again.

Binding Codeunit OnRun

This is the OnRun trigger of the CosBinding codeunit. All it does is bind an instance of the corresponding Calculation codeunit and pass that instance back to Interface Mgt.

trigger OnRun()
var
  InterfaceMgt : Codeunit "Interface Mgt.";
  CosCalculation : Codeunit "Cos Calculation";
begin
  BindSubscription(CosCalculation);
  InterfaceMgt.SetInterfaceCodeunit(CosCalculation);
end;

OnInterfaceEvent

Now that we have an instance of the appropriate Calculation codeunit bound, it will respond to the OnInterfaceEvent event and we can run whatever business logic we want.

Here is the CosCalculation codeunit. It:

  1. Subscribes to OnInterfaceEvent
  2. Has a case statement to handle the event that has been called (in real life an implementation will likely implement multiple methods)
  3. Reads the Angle variable from the App Integration Data codeunit
  4. Uses System.Math to calculate the result
  5. Stores the result in the Result variable in the App Integration Data codeunit
  6. Sets Handled to true

local procedure Calculate(var AppIntegrationData : Codeunit "App Integration Data")
var
  Math : DotNet Math;
  Angle : Decimal;
  Result : Decimal;
begin
  Angle := AppIntegrationData.GetIntegrationDataDecimal('Angle',0);
  Result := Math.Cos(Angle);
  AppIntegrationData.SetIntegationData('Result',Result);
end;

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Interface Mgt.", 'OnInterfaceEvent', '', false, false)]
local procedure OnInterfaceEvent(EventName: Text; IntegrationData: Codeunit "App Integration Data"; var Handled: Boolean)
begin
  case EventName of
    'Calculate':
      begin
        Calculate(IntegrationData);
        Handled := true;
      end;
  end;
end;

Conclusion

And there you have it. Provided you can live with the shared dependency at the bottom of the dependency tree this achieves the two objectives that we set out with:

  1. Splitting functionality into multiple, discrete apps that can be developed and maintained independently of each other
  2. Having those apps integrate with each other to provide the required functionality to the end user

It’s not the most elegant solution and coding this way means you don’t get much help from the IDE. If you mistype a variable or event name somewhere everything will compile but nothing will work.

Hopefully at some point Microsoft will give us a better solution to these challenges but in the meantime take as much or as little inspiration from our approach as you like.

Part 2: Integration Between Extensions in Dynamics 365 Business Central

This post follows on from my discussion of extensions and integration and dependencies between them. Find the first part here.

TL;DR

  • You can use a base app as a common dependency for the apps that you want to integrate
  • Have one app raise an event publisher with the required event data and another app subscribe to that event
  • Use EventSubscriberInstance = Manual with BindSubscription to create an instance of the subscriber that you want for a given event call
  • Use a SingleInstance codeunit in the base app to keep the subscriber in scope to respond to events and CLEAR them when you’re done

Scenario

So far we’ve established the scenario of four apps: some business logic that is handling files from an external system and three file handler apps that are pushing and pulling those files from various sources.

The key objectives are to write each of these apps in such a way that:

  1. They integrate together to provide the overall functionality that the customer requires
  2. We can reuse one or more of the apps in other projects flexibly without needing to install dependencies that we aren’t using

Objective #2 means that we can’t have any dependencies between the apps. In Part 1 we discussed how you might achieve that with Codeunit.Run, and some of the challenges that approach leaves us with.

Interfaces

Let’s picture how we might design a solution without worrying about the actual limitations of the AL language first.

In our example the file handlers are working with different sources (local network, FTP and Amazon S3) but they are providing common functionality. We’d probably need them all to:

  • List files in a given directory
  • Get the contents of a specific file
  • Delete files
  • Create new files

We might define all of the methods that we’d require a file handler to provide in an interface and have each file handler app implement that interface. This serves as a contract between the business logic app and the file handlers that the file handlers will always provide an agreed set of methods.

Polymorphism

A related, but slightly different idea is polymorphism. We might have a file handler base from which other file handlers inherit and override their functionality. This has the advantage of allowing the business logic app to create an instance of a file handler and call its methods without worrying about the precise type of file handler that is implementing those methods. For example, the business logic app can request that a file handler lists available files without knowing, or caring, precisely how that is being handled.

Yes, But We Code in AL not C#

Great. Thanks for the theory, but none of this is possible in AL so why are we talking about it? While we can’t write a solution using an interface or inheritance we can take inspiration from those approaches.

There are a couple of key challenges that we have to get a little funky in AL to overcome:

  1. How do we create an instance of a codeunit at runtime without knowing what that codeunit will be at design-time?
  2. How do we call methods in that codeunit without knowing which codeunit we’re talking about at design-time?

Use Events…Obviously

In one way the answer is simple. That’s what an event publisher is for. I raise an event and am able to call code in subscribers without knowing that they even exist at design-time. Perfect, apart from the fact that we are trying to avoid creating dependencies between our apps…remember? The file handlers can’t subscribe to an event in the business logic unless they depend on it, or vice versa.

Common Dependency

One way to work around that is to have a common dependency between the apps that you want to integrate. Have the business logic raise an event in the base dependency that the file handler depends upon.

The base app could have some events that expose useful functionality to the business logic app (the sort of methods listed above).

Each file handler app could subscribe to those events and implement them.

We’re getting closer.

Pros

  • Only install the file handlers that you actually need
  • Decouples the business logic from the file handlers, they can be installed and maintained independently
  • We can pass AL types natively through the event parameters i.e. no need to serialize them and stuff them into a TempBlob record

Cons

  • If you want to support new methods you need to modify the base app which means you need to uninstall everything on top of it first
  • All file handlers will respond to all events raised in the base app. We’ll need to set a parameter to indicate which file handler we want to respond and have all file handlers respect it. Not insurmountable, but not particularly elegant either

Option D

With all that preamble I’ll get on to describing the Option D that I promised in the previous post.

I’ll attempt to outline our (current) approach in comprehensible English here but follow up with an example in the next post. This approach attempts to combine the best of both worlds:

  • Codeunit.Run targets a specific codeunit to run (rather than shouting for someone to help and having all the file handlers come running at the same time)
  • Events subscriptions allow you to pass native AL types

Credit to vjeko.com/i-had-a-dream-codeunit-references. This design takes some of the ideas Vjeko discusses in his post.

Listen…but Only When You’re Spoken To

We have a base app that is a common dependency for the apps that we are integrating, as described above. The file handlers subscribe to an event in the base app which the business logic app is able to raise and pass appropriate parameters to. With multiple file handlers installed how do we prevent them from all responding all of the time? We want the business logic app to control which file handler’s event subscription fires each time.

The EventSubscriberInstance property. Set that to Manual for a codeunit and it will only respond to events when an instance of it is bound with BindSubscription. The codeunit will continue to respond until it is explicitly unbound or the instance goes out of scope. So, in order to have a particular subscriber respond we need a bound instance of its codeunit in scope when the event publisher is fired.

Interface Mgt.

The instances of subscribers are managed by a SingleInstance codeunit, Interface Mgt. Each file handler app requires a pair of codeunits:

  1. A codeunit that contains the logic i.e. the specifics of that file handler (EventSubscriberInstance = Manual)
  2. A codeunit to register the app as an implementation of an interface, bind an instance of codeunit 1 and pass that instance to Interface Mgt. when required

The flow is something like this (concentrate, this is the science bit):

  1. Interface Mgt. calls for interface implementations with a discovery event
  2. File handlers register their implementation with an Interface Code, Implementation Code, Codeunit ID (codeunit 2 as described above), Setup Page ID
    • File handlers that implement the same set of functions should have the same Interface Code e.g. “FILE HANDLER”
    • The Implementation Code uniquely identifies each handler e.g. “NETWORK”, “FTP”, “AMAZON S3”
  3. The business logic app asks Interface Mgt. to provide a lookup of available implementations for a given interface
    • Use this to assist with some setup in the business logic app
  4. The business logic app sets parameter values and asks Interface Mgt. to raise the event in a given file handler e.g.
    • Event Name = “GetFileContents”
    • Interface Code = “FILE HANDLER”
    • Implementation Code = “FTP”
    • Any other required event payload data
  5. Interface Mgt. runs the codeunit set on registration of the interface implementation (step 2)
    • That codeunit is responsible for binding an instance of the codeunit that contains the file handler logic
    • It passes that instance back to Interface Mgt. which stores it in a variant and keeps it in scope long enough to respond to the event in the following step
  6. Interface Mgt. calls the OnInterfaceEvent event with the payload set above (step 4)
  7. Regardless of how many subscribers there are to this event there should only be one bound codeunit in scope (the one set in step 5) so this is the only codeunit to respond to the event
  8. The file handler responds to the event, reading the event parameters and setting response data as appropriate
  9. The consumer reads the response information as required

Event Payload

I’ve talked about event parameters and response data above. How can you pass the required data in the OnInterfaceEvent event? We use an instance of a codeunit in the base app as a container for all the data associated with the event.

This codeunit has a bunch of methods for storing and retrieving data from the codeunit but essentially it is just an array of variants. We pass some data to the codeunit and tag it with a name and retrieve it again with the same name. This allows us to store any AL data types with their state and avoid serializing them.

Think of the Library – Variable Storage codeunit; it’s very similar.

Conclusion

Pros

  • The Interface Mgt. codeunit is generic and should be suitable for reuse in other scenarios where you have multiple implementations of given functionality
  • You can have as many implementations as you like and still be specific about the one you want to invoke each time
  • Pass instances of AL types around with their state e.g. a temporary set of Name/Value Buffer records or an xmlport without having to recreate it from JSON or XML

Cons

  • We’ve solved our objective of removing dependencies between extensions…with a dependency. Smart. Maybe if Microsoft made something like this available in the base app we could achieve our objectives with no dependencies at all
  • Complexity. Conceptually this is harder to follow than just using Codeunit.Run although once in place I don’t think the file handlers are any more difficult to write

Example

If none of that made much sense then fear not. I’ll show some example code and a calculator implementation next time.