Part 2: Integration Between Extensions in Dynamics 365 Business Central

This post follows on from my discussion of extensions and the integration and dependencies between them. Find the first part here.

TL;DR

  • You can use a base app as a common dependency for the apps that you want to integrate
  • Have one app raise an event publisher with the required event data and another app subscribe to that event
  • Use EventSubscriberInstance = Manual with BindSubscription to create an instance of the subscriber that you want for a given event call
  • Use a SingleInstance codeunit in the base app to keep the subscriber in scope to respond to events and CLEAR it when you’re done

Scenario

So far we’ve established the scenario of four apps: some business logic that is handling files from an external system and three file handler apps that are pushing and pulling those files from various sources.

The key objectives are to write each of these apps in such a way that:

  1. They integrate together to provide the overall functionality that the customer requires
  2. We can reuse one or more of the apps in other projects flexibly without needing to install dependencies that we aren’t using

Objective #2 means that we can’t have any dependencies between the apps. In Part 1 we discussed how you might achieve that with Codeunit.Run and some of the challenges that approach leaves us with.

Interfaces

Let’s picture how we might design a solution without worrying about the actual limitations of the AL language first.

In our example the file handlers are working with different sources (local network, FTP and Amazon S3) but they are providing common functionality. We’d probably need them all to:

  • List files in a given directory
  • Get the contents of a specific file
  • Delete files
  • Create new files

We might define all of the methods that we’d require a file handler to provide in an interface and have each file handler app implement that interface. This serves as a contract between the business logic app and the file handlers that the file handlers will always provide an agreed set of methods.
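
Purely for illustration, that contract might look something like this. This is pseudocode, not valid AL (AL has no interface syntax), and all of the names are hypothetical:

```al
// Pseudocode - the set of methods every file handler would be required to provide
interface "IFile Handler"
{
    procedure ListFiles(Path: Text; var Files: Record "Name/Value Buffer" temporary);
    procedure GetFileContents(Path: Text; FileName: Text; var Content: Text);
    procedure DeleteFile(Path: Text; FileName: Text): Boolean;
    procedure CreateFile(Path: Text; FileName: Text; Content: Text): Boolean;
}
```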

Polymorphism

A related, but slightly different, idea is polymorphism. We might have a file handler base class from which the other file handlers inherit, overriding its functionality as required. This has the advantage of allowing the business logic app to create an instance of a file handler and call its methods without worrying about the precise type of file handler that is implementing those methods. For example, the business logic app can request that a file handler lists available files without knowing, or caring, precisely how that is being handled.

Yes, But We Code in AL not C#

Great. Thanks for the theory, but none of this is possible in AL so why are we talking about it? While we can’t write a solution using an interface or inheritance, we can take inspiration from those approaches.

There are a couple of key challenges that we have to get a little funky in AL to overcome:

  1. How do we create an instance of a codeunit at runtime without knowing what that codeunit will be at design-time?
  2. How do we call methods in that codeunit without knowing which codeunit we’re talking about at design-time?

Use Events…Obviously

In one way the answer is simple. That’s what an event publisher is for. I raise an event and code runs in subscribers that I don’t even know exist at design-time. Perfect, except that we are trying to avoid creating dependencies between our apps…remember? The file handlers can’t subscribe to an event in the business logic app unless they depend on it, or vice versa.

Common Dependency

One way to work around that is to have a common dependency between the apps that you want to integrate. Have the business logic raise an event in the base dependency that the file handler depends upon.

The base app could have some events that expose useful functionality to the business logic app (the sort of methods listed above).

Each file handler app could subscribe to those events and implement them.
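
Sketched in AL (all object names and numbers here are illustrative, not from any real app), the pattern looks like this:

```al
// In the base app: an event exposing "list files" functionality
codeunit 50200 "File Handler Events"
{
    [IntegrationEvent(false, false)]
    procedure OnListFiles(Path: Text; var Files: Record "Name/Value Buffer" temporary)
    begin
    end;
}

// In one of the file handler apps: implement the event
codeunit 50210 "FTP File Handler"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"File Handler Events", 'OnListFiles', '', false, false)]
    local procedure HandleOnListFiles(Path: Text; var Files: Record "Name/Value Buffer" temporary)
    begin
        // fetch the directory listing from the FTP server
        // and populate the Files buffer
    end;
}
```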

We’re getting closer.

Pros

  • Only install the file handlers that you actually need
  • Decouples the business logic from the file handlers; they can be installed and maintained independently
  • We can pass AL types natively through the event parameters i.e. no need to serialize them and stuff them into a TempBlob record

Cons

  • If you want to support new methods you need to modify the base app which means you need to uninstall everything on top of it first
  • All file handlers will respond to all events raised in the base app. We’ll need to set a parameter to indicate which file handler we want to respond and have all file handlers respect it. Not insurmountable, but not particularly elegant either

Option D

With all that preamble I’ll get on to describing the Option D that I promised in the previous post.

I’ll attempt to outline our (current) approach in comprehensible English here but follow up with an example in the next post. This approach attempts to combine the best of both worlds:

  • Codeunit.Run targets a specific codeunit to run (rather than shouting for someone to help and having all the file handlers come running at the same time)
  • Event subscriptions allow you to pass native AL types

Credit to vjeko.com/i-had-a-dream-codeunit-references. This design takes some of the ideas Vjeko discusses in his post.

Listen…but Only When You’re Spoken To

We have a base app that is a common dependency for the apps that we are integrating as per the diagram above. The file handlers subscribe to an event in the base app which the business logic app is able to raise and pass appropriate parameters to. With multiple file handlers installed how do we prevent them from all responding all of the time? We want the business logic app to control which file handler’s event subscription fires each time.

The EventSubscriberInstance property. Set that to Manual for a codeunit and it will only respond to events when an instance of it is bound with BindSubscription. The codeunit will continue to respond until it is explicitly unbound or the instance goes out of scope. So, in order to have a particular subscriber respond we need a bound instance of its codeunit in scope when the event publisher is fired.
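
A minimal sketch of that mechanism (hypothetical objects again):

```al
codeunit 50220 "Demo Events"
{
    [IntegrationEvent(false, false)]
    procedure OnDemoEvent(var Result: Text)
    begin
    end;
}

codeunit 50221 "Manual Demo Subscriber"
{
    EventSubscriberInstance = Manual;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Demo Events", 'OnDemoEvent', '', false, false)]
    local procedure HandleOnDemoEvent(var Result: Text)
    begin
        Result := 'handled'; // only runs while an instance of this codeunit is bound
    end;
}

// Somewhere in the calling code
procedure RaiseWithBoundSubscriber() Result: Text
var
    DemoEvents: Codeunit "Demo Events";
    ManualDemoSubscriber: Codeunit "Manual Demo Subscriber";
begin
    DemoEvents.OnDemoEvent(Result); // no bound instance -> nothing responds
    BindSubscription(ManualDemoSubscriber);
    DemoEvents.OnDemoEvent(Result); // now the bound instance responds
    UnbindSubscription(ManualDemoSubscriber); // or let it fall out of scope
end;
```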

Interface Mgt.

The instances of subscribers are managed by a SingleInstance codeunit, Interface Mgt. Each file handler app requires a pair of codeunits:

  1. A codeunit that contains the logic, i.e. the specifics of that file handler (EventSubscriberInstance = Manual)
  2. A codeunit that registers the app as an implementation of an interface, binds an instance of codeunit 1 and passes that instance to Interface Mgt. when required

The flow is something like this (concentrate, this is the science bit):

  1. Interface Mgt. calls for interface implementations with a discovery event
  2. File handlers register their implementation with an Interface Code, Implementation Code, Codeunit ID (codeunit 2 as described above), Setup Page ID
    • File handlers that implement the same set of functions should have the same Interface Code e.g. “FILE HANDLER”
    • The Implementation Code uniquely identifies each handler e.g. “NETWORK”, “FTP”, “AMAZON S3”
  3. The business logic app asks Interface Mgt. to provide a lookup of available implementations for a given interface
    • Use this to assist with some setup in the business logic app
  4. The business logic app sets parameter values and asks Interface Mgt. to raise the event in a given file handler e.g.
    • Event Name = “GetFileContents”
    • Interface Code = “FILE HANDLER”
    • Implementation Code = “FTP”
    • Any other required event payload data
  5. Interface Mgt. runs the codeunit set on registration of the interface implementation (step 2)
    • That codeunit is responsible for binding an instance of the codeunit that contains the file handler logic
    • It passes that instance back to Interface Mgt. which stores it in a variant and keeps it in scope long enough to respond to the event in the following step
  6. Interface Mgt. calls the OnInterfaceEvent event with the payload set above (step 4)
  7. Regardless of how many subscribers there are to this event there should only be one bound codeunit in scope (the one set in step 5), so this is the only codeunit to respond to the event
  8. The file handler responds to the event, reading the event parameters and setting response data as appropriate
  9. The consumer reads the response information as required
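
Condensed into code, the business logic app’s side of the conversation could look something like this (every name here belongs to our hypothetical design, not to any standard API):

```al
procedure GetFileViaFtp() FileContents: Text
var
    InterfaceMgt: Codeunit "Interface Mgt.";
    EventPayload: Codeunit "Event Payload";
    Response: Variant;
begin
    // step 4: set the parameter values for the event
    EventPayload.Set('Path', '/inbound');
    EventPayload.Set('FileName', 'orders.csv');

    // steps 5 & 6: Interface Mgt. binds the FTP handler's codeunit and raises OnInterfaceEvent
    InterfaceMgt.RaiseEvent('FILE HANDLER', 'FTP', 'GetFileContents', EventPayload);

    // step 9: read the response data the handler has set
    EventPayload.Get('FileContents', Response);
    FileContents := Response;
end;
```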

Event Payload

I’ve talked about event parameters and response data above. How can you pass the required data in the OnInterfaceEvent event? We use an instance of a codeunit in the base app as a container for all the data associated with the event.

This codeunit has a bunch of methods for storing and retrieving data, but essentially it is just an array of variants. We pass some data to the codeunit and tag it with a name, then retrieve it again with the same name. This allows us to store any AL data types with their state and avoid serializing them.

Think of the Library – Variable Storage codeunit; it’s very similar.
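
A stripped-down sketch of such a container (a real implementation would need bounds checking, clearing and so on):

```al
codeunit 50230 "Event Payload"
{
    var
        Names: array[25] of Text;
        Values: array[25] of Variant;
        EntryCount: Integer;

    // store any AL value, tagged with a name
    procedure Set(Name: Text; Value: Variant)
    begin
        EntryCount += 1;
        Names[EntryCount] := Name;
        Values[EntryCount] := Value;
    end;

    // retrieve the value again with the same name
    procedure Get(Name: Text; var Value: Variant)
    var
        Index: Integer;
    begin
        for Index := 1 to EntryCount do
            if Names[Index] = Name then begin
                Value := Values[Index];
                exit;
            end;
    end;
}
```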

Conclusion

Pros

  • The Interface Mgt. codeunit is generic and should be suitable for reuse in other scenarios where you have multiple implementations of given functionality
  • You can have as many implementations as you like and still be specific about the one you want to invoke each time
  • Pass instances of AL types around with their state e.g. a temporary set of Name/Value Buffer records or an xmlport without having to recreate it from JSON or XML

Cons

  • We’ve solved our objective of removing dependencies between extensions…with a dependency. Smart. Maybe if Microsoft made something like this available in the base app we could achieve our objectives with no dependencies at all
  • Complexity. Conceptually this is harder to follow than just using Codeunit.Run although once in place I don’t think the file handlers are any more difficult to write

Example

If none of that made much sense then fear not. I’ll show some example code and a calculator implementation next time.

Extensible Enums in Dynamics 365 Business Central

Option fields: great for scenarios where you want to provide a fixed, predefined list of values. Only a single value can apply and the user gets a convenient dropdown to select from. Perfect, until you want to extend the list of values.

Enter enums.

Documentation is here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-extensible-enums

The Theory

Enums are object types in their own right, not merely data types you can assign to fields or variables.

Let’s have a quick look at how it works. Who doesn’t love a calculator example?

Define a new enum like this:

enum 50100 Operator
{
  Extensible = true;
  value(0; Addition)
  {
    Caption = 'Addition';
  }
  value(1; Subtraction)
  {
    Caption = 'Subtraction';
  }
}

Notice the Extensible property. You need to explicitly decide that other apps can extend your values, which seems sensible. Use that enum as the data type for a table field or variable as you see fit.

Operator : Enum Operator;

As with options you’ll typically handle enums with a case statement. Also use the same double-colon syntax you use for options.

case Operator of
  Operator::Addition:
    exit(a + b);
  Operator::Subtraction:
    exit(a - b);
  else
    begin
      OnCalculate(a, b, Operator, Result, Handled); //event publisher
      if Handled then
        exit(Result);
    end;
end;

Notice the else in the case block. There isn’t much point making the enum extensible if you don’t have a way to handle the extended values. We’re throwing an event for any Operator values that we don’t recognise. Perhaps we ought to also throw an error if the event is not handled as that would indicate someone has added an enum value without handling the calculation – but you get the idea.
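
For completeness, the OnCalculate publisher called in the else branch could be declared like this (in the same codeunit as the case statement):

```al
[IntegrationEvent(false, false)]
local procedure OnCalculate(a: Decimal; b: Decimal; Operator: Enum Operator; var Result: Decimal; var Handled: Boolean)
begin
end;
```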

Now another app developer can extend your calculator with some new operators in a dependent app.

enumextension 50100 OperatorExt extends Operator
{
  value(50100; Sin)
  {
    Caption = 'Sin';
  }
  value(50101; Cos)
  {
    Caption = 'Cos';
  }
  value(50102; Tan)
  {
    Caption = 'Tan';
  }
}

The enumextension adds new values to the Operator enum. These values are not handled by the case statement above so the event is called. Subscribe to the OnCalculate event to provide the result and set the Handled flag.
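
The subscriber in the dependent app might look like this (the Calculator codeunit name is hypothetical, and I’ve left the actual trigonometry as a comment since there’s no native Sin function to call here):

```al
codeunit 50101 "Trig. Calculator"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::Calculator, 'OnCalculate', '', false, false)]
    local procedure HandleOnCalculate(a: Decimal; b: Decimal; Operator: Enum Operator; var Result: Decimal; var Handled: Boolean)
    begin
        case Operator of
            Operator::Sin:
                begin
                    // calculate sin(a) however you see fit (web service, Azure Function, .Net on-prem...)
                    Handled := true;
                end;
            Operator::Cos, Operator::Tan:
                begin
                    // likewise for the other new operators
                    Handled := true;
                end;
        end;
    end;
}
```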

The Practice

Three scenarios spring to mind where extending an enum could be particularly useful.

Adding On-premise Support

As a rule we try to write our extensions so that they can target either Business Central platform (SaaS or on-premise). The target property in app.json is set to “Extension” (or just omitted).

Let’s imagine that you want to use the .Net System.Math library to calculate the results of sin, cos and tan. You can’t use .Net in an app with a target of Extension.

What you could do instead is build your base calculator functionality in a SaaS-friendly, target-Extension app and add your .Net functionality in a dependent on-prem, target-Internal app.

I know, in the real world there are probably a bajillion free web services that could provide the result or you could use .Net in an Azure Function. Heck, you could even calculate the result manually if you really wanted (but seriously, don’t). Then again, in the real world you’re probably not making a calculator app.

You might want to handle things differently if you’re running on-prem rather than on SaaS though. For example, you might need to use .Net or interact with local resources like printers or file shares. Those are off-limits to SaaS apps. Rather than making your whole app target-Internal you could have a base app that you extend with your on-prem functionality.

Adding Additional Providers

Another model might be where you need several codeunits to provide some common functionality. Let’s say you have some integration with shipping agents – submitting consignment details, retrieving tracking numbers and label details etc.

You could create an enum with the name of the shipping agents that you integrate with in your app, but make allowance for that enum to be extended by other apps and throw appropriate events for them to handle integration with different agents.

Reusability

Finally, and perhaps most obviously, is reusability. How many times have you copied option fields with the same option string and captions from one table to another? For instance, how many different places in the standard application does a “Document Type” field with an identical set of options occur? (I started to go through but quickly realised it was more than I could be bothered to count.)

Instead of doing that you can just define the enum and its values once and reuse it – even if you don’t plan on making it extensible. You know it makes more sense.

Extension Settings in Microsoft Dynamics Business Central

Edit: The following is only relevant for Business Central sandbox environments. External service calls will always be permitted in production tenants.

Recent builds of Business Central introduce a check when your app attempts to call an external service through the HttpClient type in AL. The user will see a message like this:

“The extension [extension name] by [publisher name] is making a request to an external service. Do you want to allow this request?”

This decision is saved into the database and is editable from the Extension Settings page, which stores the setting in the NAV App Setting table.

Either search in the menu for Extension Settings or use the AssistEdit button for the extension on the Extension Management page.

The only editable setting on the Extension Settings page at the moment is “Allow HttpClient Requests” but I guess we might see this table being used for more per-app configuration settings in future.

You can delete the record from the Extension Settings page if you like. If you do the user will be prompted to make the decision again the next time the app attempts to call an external service.

For the curious, if you choose to block the request or uncheck the “Allow HttpClient Requests” option on the Extension Settings page the user will see this message:

“The request was blocked by the runtime to prevent accidental use of production services.”

Business Central Tenant Management

One of our apps calls for Business Central to communicate some key details about the tenant to our external service:

  • The Azure tenant id
  • The type of environment (production or sandbox)

but how to get at those details?

Maybe I’m a simpleton and maybe the information is out there somewhere and I just couldn’t find it…but I couldn’t.

Turns out there is a codeunit (#417) called Tenant Management with a bunch of functions to provide just this sort of information.

Good to know.

PS: in case you’re wondering, GetAadTenantID returns ‘common’ for an on-premise installation.
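
Usage looks something like this (a sketch: GetAadTenantId is the function mentioned above, while IsSandbox is my assumption for determining the environment type — check codeunit 417 in your version for the exact names):

```al
procedure ShowTenantInfo()
var
    TenantManagement: Codeunit "Tenant Management";
    AadTenantId: Text;
begin
    AadTenantId := TenantManagement.GetAadTenantId(); // returns 'common' on-premise

    // IsSandbox is assumed here - verify against the codeunit in your version
    if TenantManagement.IsSandbox() then
        Message('Sandbox tenant %1', AadTenantId)
    else
        Message('Production tenant %1', AadTenantId);
end;
```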

Business Central Development With CI/CD

If you follow blogs about Dynamics 365 Business Central / NAV development, attended development sessions at Directions or have seen the schedule for NAVTechDays then you may have noticed the terms “CI/CD” or “pipeline” being thrown around.

What do those terms actually refer to? And how does it affect the way we approach development?

Definitions

CI = “continuous integration”
CD = “continuous delivery” (or “continuous deployment”, if you prefer)

These are pretty old development concepts. Check out the Wikipedia entry if you want an overview and some of the history. I would summarise it like this.

Continuous integration: incorporate new development into your main development branch as soon as possible.

Continuous delivery: get that development in front of your end users as quickly as possible.

The concept of a pipeline is having a defined series of steps that new development goes through: build, test, publish and install into target environment(s), automated as much as possible.

Why?

All this talk of “as soon as possible” sounds a little reckless. Is this really a good idea?

In a nutshell, we’re trying to minimise the time between identifying some changes that the customer needs (some new feature or bug fix) and those changes actually being deployed onto the customer’s system.

We want to avoid work in progress changes hanging around for ages. You’ve probably experienced the problems that come with that:

  • The work becomes harder to merge back into the master branch as time goes by
  • Future development dependent on these changes is held up or goes ahead with the worry it will clash with work in progress
  • People start to forget, or lose interest in, why the changes were required in the first place, making testing and code review harder or less effective
  • The customer loses interest in the development and is less inclined to test or use the new development

How?

Integration

All my experience is with Azure DevOps (previously known as Visual Studio Team Services, and before that Team Foundation Server) but other platforms provide similar functionality.

We start by defining small, discrete work items. I don’t have a fixed rule, but if the work can’t be completed in a single sprint (say, 2 weeks) then it’s probably too big and you should split it into smaller chunks.

The developer gets to work and puts their changes in for review. Pushing those changes up to the server triggers the build pipeline. Typically this is a series of tasks performed by a build agent running on a server that you control. Azure DevOps provides several options for agents hosted by Microsoft but for now they don’t provide the option we need to build AL packages.

I won’t go into detail about our build pipeline now but it includes:

  • Creating a Docker container
  • Compiling the AL source with the compiler included in the container
  • Running the automated tests (the developer should have included new tests to cover their changes)
  • Uploading the test results and the .app files (we split the product and its tests into two separate apps) as build artefacts
  • Notifying the developer of the build result

By the time any of the reviewers comes to look at the code review we should already know that:

  • All the tests have passed
  • The changes can be merged into the master branch without any conflicts

Nice. We can be much more confident hitting the Approve button knowing it passes the tests and will merge neatly with master. We get the changes incorporated back into the product quickly and have a clean starting point for the next cycle.

Delivery

Delivery is a different story. At the time of writing our release process is to make the new .app package available on SharePoint. We don’t automate that.

With Dynamics NAV / BC on-premise there is scope for automating the publish & install of the new app package into target environments and tenants. That would involve the definition of a release pipeline. An agent on the target environment could collect the app package (or fob, or text file) created by the build pipeline and use PowerShell to import/compile/publish/install into one or more databases.

We don’t attempt this as in many cases we don’t control the environments that our apps are installed into. The servers are not ours to install agent software onto and be responsible for.

This is especially true of Business Central SaaS as we are developing apps for AppSource. No app package* makes it onto the platform until it has passed the AppSource validation process and been deployed by Microsoft on their own schedule.

*unless it is developed in the 50,000 – 99,999 object range and uploaded.

Getting Started

I hope that’s whetted your appetite to go and investigate some more. Before you do you’ll need to be up and running with source code management and automated tests (perhaps more of that another time).