Part 2: Integration Between Extensions in Dynamics 365 Business Central

This post follows on from my discussion of extensions and integration and dependencies between them. Find the first part here.

TL;DR

  • You can use a base app as a common dependency for the apps that you want to integrate
  • Have one app raise an event publisher with the required event data and another app subscribe to that event
  • Use EventSubscriberInstance = Manual with BindSubscription to create an instance of the subscriber that you want for a given event call
  • Use a SingleInstance codeunit in the base app to keep the subscriber in scope to respond to events and CLEAR them when you’re done

Scenario

So far we’ve established the scenario of four apps: some business logic that is handling files from an external system and three file handler apps that are pushing and pulling those files from various sources.

The key objectives are to write each of these apps in such a way that:

  1. They integrate together to provide the overall functionality that the customer requires
  2. We can reuse one or more of the apps in other projects flexibly without needing to install dependencies that we aren’t using

Objective #2 means that we can’t have any dependencies between the apps. In Part 1 we discussed how you might achieve that with Codeunit.Run, and some of the challenges that approach leaves us with.

Interfaces

Let’s picture how we might design a solution without worrying about the actual limitations of the AL language first.

In our example the file handlers are working with different sources (local network, FTP and Amazon S3) but they are providing common functionality. We’d probably need them all to:

  • List files in a given directory
  • Get the contents of a specific file
  • Delete files
  • Create new files

We might define all of the methods that we’d require a file handler to provide in an interface and have each file handler app implement that interface. This serves as a contract between the business logic app and the file handlers that the file handlers will always provide an agreed set of methods.

Polymorphism

A related, but slightly different idea is polymorphism. We might have a file handler base from which other file handlers inherit and override their functionality. This has the advantage of allowing the business logic app to create an instance of a file handler and call its methods without worrying about the precise type of file handler that is implementing those methods. For example, the business logic app can request that a file handler lists available files without knowing, or caring, precisely how that is being handled.

Yes, But We Code in AL not C#

Great. Thanks for the theory, but none of this is possible in AL, so why are we talking about it? While we can’t write a solution using an interface or inheritance, we can take inspiration from those approaches.

There are a couple of key challenges that we have to get a little funky in AL to overcome:

  1. How do we create an instance of a codeunit at runtime without knowing what that codeunit will be at design-time?
  2. How do we call methods in that codeunit without knowing which codeunit we’re talking about at design-time?

Use Events…Obviously

In one way the answer is simple. That’s what an event publisher is for. I raise an event and code in any subscribers runs without my knowing that they even exist at design-time. Perfect, apart from the fact that we are trying to avoid creating dependencies between our apps…remember? The file handlers can’t subscribe to an event in the business logic unless they depend on it, or vice versa.

Common Dependency

One way to work around that is to have a common dependency between the apps that you want to integrate. Have the business logic raise an event in the base dependency that the file handler depends upon.

The base app could have some events that expose useful functionality to the business logic app (the sort of methods listed above).

Each file handler app could subscribe to those events and implement them.

We’re getting closer.

Pros

  • Only install the file handlers that you actually need
  • Decouples the business logic from the file handlers, they can be installed and maintained independently
  • We can pass AL types natively through the event parameters i.e. no need to serialize them and stuff them into a TempBlob record

Cons

  • If you want to support new methods you need to modify the base app which means you need to uninstall everything on top of it first
  • All file handlers will respond to all events raised in the base app. We’ll need to set a parameter to indicate which file handler we want to respond and have all file handlers respect it. Not insurmountable, but not particularly elegant either

Option D

With all that preamble I’ll get on to describing the Option D that I promised in the previous post.

I’ll attempt to outline our (current) approach in comprehensible English here but follow up with an example in the next post. This approach attempts to combine the best of both worlds:

  • Codeunit.Run targets a specific codeunit to run (rather than shouting for someone to help and having all the file handlers come running at the same time)
  • Event subscriptions allow you to pass native AL types

Credit to vjeko.com/i-had-a-dream-codeunit-references. This design takes some of the ideas Vjeko discusses in his post.

Listen…but Only When You’re Spoken To

We have a base app that is a common dependency for the apps that we are integrating. The file handlers subscribe to an event in the base app which the business logic app is able to raise and pass appropriate parameters to. With multiple file handlers installed, how do we prevent them from all responding all of the time? We want the business logic app to control which file handler’s event subscription fires each time.

The EventSubscriberInstance property. Set that to Manual for a codeunit and it will only respond to events when an instance of it is bound with BindSubscription. The codeunit will continue to respond until it is explicitly unbound or the instance goes out of scope. So, in order to have a particular subscriber respond we need a bound instance of its codeunit in scope when the event publisher is fired.
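A minimal sketch of the pattern (the object names, IDs and event signature here are hypothetical; only the EventSubscriberInstance property, BindSubscription and UnbindSubscription are the standard mechanics):

```al
// Hypothetical file handler codeunit. With EventSubscriberInstance = Manual
// it only responds to events while an instance of it is bound.
codeunit 50120 "Network File Handler"
{
    EventSubscriberInstance = Manual;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Interface Mgt.", 'OnInterfaceEvent', '', false, false)]
    local procedure HandleInterfaceEvent(EventName: Text)
    begin
        // read the event payload, do the work, set the response
    end;
}

// Somewhere in the calling code:
// var
//     NetworkFileHandler: Codeunit "Network File Handler";
// begin
//     BindSubscription(NetworkFileHandler);
//     ...raise OnInterfaceEvent – only this bound instance responds...
//     UnbindSubscription(NetworkFileHandler);
// end;
```

In practice the two objects would live in separate files; they are shown together here for brevity.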

Interface Mgt.

The instances of subscribers are managed by a SingleInstance codeunit, Interface Mgt. Each file handler app requires a pair of codeunits:

  1. One that contains the logic, i.e. the specifics of that file handler (EventSubscriberInstance = Manual)
  2. One that registers the handler as an implementation of an interface, binds an instance of codeunit 1 and passes that instance to Interface Mgt. when required

The flow is something like this (concentrate, this is the science bit):

  1. Interface Mgt. calls for interface implementations with a discovery event
  2. File handlers register their implementation with an Interface Code, Implementation Code, Codeunit ID (codeunit 2 as described above), Setup Page ID
    • File handlers that implement the same set of functions should have the same Interface Code e.g. “FILE HANDLER”
    • The Implementation Code uniquely identifies each handler e.g. “NETWORK”, “FTP”, “AMAZON S3”
  3. The business logic app asks Interface Mgt. to provide a lookup of available implementations for a given interface
    • Use this to assist with some setup in the business logic app
  4. The business logic app sets parameter values and asks Interface Mgt. to raise the event in a given file handler e.g.
    • Event Name = “GetFileContents”
    • Interface Code = “FILE HANDLER”
    • Implementation Code = “FTP”
    • Any other required event payload data
  5. Interface Mgt. runs the codeunit set on registration of the interface implementation (step 2)
    • That codeunit is responsible for binding an instance of the codeunit that contains the file handler logic
    • It passes that instance back to Interface Mgt. which stores it in a variant and keeps it in scope long enough to respond to the event in the following step
  6. Interface Mgt. calls the OnInterfaceEvent event with the payload set above (step 4)
  7. Regardless of how many subscribers there are to this event, there should only be one bound codeunit in scope (the one set in step 5), so this is the only codeunit to respond to the event
  8. The file handler responds to the event, reading the event parameters and setting response data as appropriate
  9. The consumer reads the response information as required
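Pulling the steps together, the call from the business logic app might look something like this (all object and method names here are hypothetical – a sketch of the shape of the approach, not the actual implementation):

```al
// Hypothetical call site in the business logic app.
local procedure GetFileFromFtp()
var
    InterfaceMgt: Codeunit "Interface Mgt.";
    DataMgt: Codeunit "Interface Data Mgt.";
    ContentVariant: Variant;
    FileContent: Text;
begin
    // step 4 – set the payload and ask for a specific implementation
    DataMgt.Set('FileName', 'ORDERS-001.xml');
    InterfaceMgt.RaiseEvent('FILE HANDLER', 'FTP', 'GetFileContents', DataMgt);
    // step 9 – read the response that the file handler set
    DataMgt.Get('FileContent', ContentVariant);
    FileContent := ContentVariant;
end;
```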

Event Payload

I’ve talked about event parameters and response data above. How can you pass the required data in the OnInterfaceEvent event? We use an instance of a codeunit in the base app as a container for all the data associated with the event.

This codeunit has a bunch of methods for storing data in, and retrieving data from, the codeunit, but essentially it is just an array of variants. We pass some data to the codeunit and tag it with a name, then retrieve it again with the same name. This allows us to store any AL data type with its state and avoid serializing it.

Think of the Library – Variable Storage codeunit; it’s very similar.
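A sketch of such a container codeunit (the name, ID and fixed array size are illustrative assumptions):

```al
// Hypothetical payload container – a named array of variants,
// in the spirit of the Library - Variable Storage codeunit.
codeunit 50110 "Interface Data Mgt."
{
    var
        Names: array[25] of Text;
        Values: array[25] of Variant;
        EntryCount: Integer;

    procedure Set(Name: Text; Value: Variant)
    begin
        EntryCount += 1;
        Names[EntryCount] := Name;
        Values[EntryCount] := Value;
    end;

    procedure Get(Name: Text; var Value: Variant)
    var
        Index: Integer;
    begin
        for Index := 1 to EntryCount do
            if Names[Index] = Name then begin
                Value := Values[Index];
                exit;
            end;
        Error('No value was stored with the name %1.', Name);
    end;
}
```

Because the values are variants with their state intact, a temporary record set or a bound codeunit instance can travel through the event payload without any serialization.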

Conclusion

Pros

  • The Interface Mgt. codeunit is generic and should be suitable for reuse in other scenarios where you have multiple implementations of given functionality
  • You can have as many implementations as you like and still be specific about the one you want to invoke each time
  • Pass instances of AL types around with their state e.g. a temporary set of Name/Value Buffer records or an xmlport without having to recreate it from JSON or XML

Cons

  • We’ve solved our objective of removing dependencies between extensions…with a dependency. Smart. Maybe if Microsoft made something like this available in the base app we could achieve our objectives with no dependencies at all
  • Complexity. Conceptually this is harder to follow than just using Codeunit.Run although once in place I don’t think the file handlers are any more difficult to write

Example

If none of that made much sense then fear not. I’ll show some example code and a calculator implementation next time.

Integration Between Extensions in Dynamics 365 Business Central

Extensions provide the opportunity for us to write and maintain our code in tidy, discrete blocks. This is good for separating concerns and breaking our functionality into logical pieces. But how do we get those pieces to play nicely together?

Scenario

The topic is probably best discussed with an example. Imagine that you’re writing some functionality to pull some files, handle them in Business Central and push some other files back out.

It doesn’t matter what the files are for now – they could be JSON, XML, CSV, whatever. Also we won’t worry about how we’re handling them – perhaps creating items, posting documents – the usual stuff.

For our purposes, the interesting part is that we are ‘pulling’ and ‘pushing’ the files from and to different sources. Let’s say we need to support a local network share, an FTP site and Amazon S3. Three quite distinct things to support but we’re going to need common functionality i.e. checking for available files, retrieving files, deleting files, creating files.

This is a good opportunity to create separate apps: one with the business logic concerned with handling the file content and three separate apps concerned with pushing and pulling the files from the different sources.

Why separate apps? A few things to consider:

  1. Although they are doing similar things, the code for each source type isn’t going to bear much resemblance. Splitting them makes each app responsible for one thing, making it easier to write and maintain i.e. separation of concerns.
  2. Splitting the apps means you can reuse them individually. If you have a project that only requires the Amazon S3 component you only install that and avoid bundling functionality that the customer isn’t using.
  3. In this scenario, handling local files will require using code that isn’t allowed in the cloud. If you bundle everything into a single app you won’t be able to use that app for SaaS implementations i.e. you’ll need to set the target to internal in app.json

Structure

OK, so you’ve decided to split this requirement into four apps. While that’s good for the reasons given above it does present a challenge. How do you structure these apps so that they can communicate with each other?

Option A: Business Logic Depends on File Handlers

Probably the most obvious thing to do is to have the business logic app depend on the file handlers. Business logic can start a process to pull new files and push results back. The file handlers can handle the request and pass the results back to the business logic. Or maybe the file handlers could throw an event when there is a new file available. Seeing as the business logic depends on the file handlers it can call their functions and subscribe to their events directly. Nice and simple.

(Image: business logic depends on file handlers)

Pros

  • The most straightforward approach
  • Business logic can call the file handler functionality directly

Cons

  • Only one, but I think it torpedoes this option. With this approach if you ever want to reuse the business logic you’re going to have to first install all the file handlers. Even if the customer isn’t using them. Including the network share app, which means you can’t deploy any of it to SaaS. Bummer.

Option B: File Handlers Depend on Business Logic

How about the other way round? Make the file handlers depend on the business logic. Business logic could raise an event requesting that the file handlers do something – push, pull, read a file. You could use the event parameters to target the request at a particular file handler and get some results back.

(Image: file handlers depend on business logic)

Pros

  • Still quite straightforward to write
  • You only need install the file handlers that you are actually using in a project

Cons

  • You’ve carefully crafted some generic, reusable functionality in the file handlers, so you want to make sure that you do reuse them on other projects. Trouble is, in order to do that you’re now going to have to install your business logic app with them. Even if you’re not going to use it. Also bummer.

Dependencies

And that illustrates the trouble with dependencies. They are great for simplifying how your extensions interact with each other, but they make it more difficult to have truly reusable and interchangeable components that you can implement in other projects.

Not to mention that keeping your dependency symbols up to date adds a small amount of hassle while you’re developing, and that when you want to update an app you have to uninstall its dependents first.

Don’t get me wrong. I’m not suggesting that you should never use dependencies. We use them a lot. You just need to be aware of the implications before you create that relationship. You are stating that you will never find a need to install the dependant without also installing the dependency. In our example that is clearly not the case. We are going to want to be able to reuse one or more of the file handlers without reusing the business logic.

Option C: [Object].Run, RecordRef

Perhaps it’s better to try and avoid dependencies then? Maybe – but that swaps the above issues for a different set of challenges. How do you get the separate extensions to interact with each other when they are not aware of each other?

Object.Run to the rescue. The big win is that you can run an object that you don’t need to specify at design-time. Report Selections are an example that has been around just about forever. The user can pick the reports and the system can flexibly handle them (assuming they’ve picked a valid report for the usage – but let’s ignore that for now).

In a similar way RecordRefs provide access to records and related functions (getting, inserting, deleting, filtering, finding, field values etc.) without necessarily knowing the records and fields you are working with at design-time.

Codeunit.Run

Clearly the guts of your apps are going to live in codeunits. You can use Codeunit.Run to call those codeunits without each app needing to be aware of another’s inner workings or even existence. This is more like it.

Now, most likely you need to pass some data to the codeunit that you are running. How do you do that when you can only call the OnRun trigger? Codeunits can declare a TableNo property, which gives the OnRun trigger a Rec parameter that is passed by reference (VAR). You can use this parameter to pass whatever you want.
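The mechanics look something like this – a hedged sketch; the codeunit name and ID are hypothetical, and TempBlob is just one candidate for the parameter table:

```al
// Hypothetical handler codeunit. The TableNo property gives the
// OnRun trigger a Rec parameter that is passed by reference (VAR).
codeunit 50130 "FTP File Handler"
{
    TableNo = TempBlob;

    trigger OnRun()
    begin
        // read the request from Rec, do the work, then write the
        // response back into Rec – the caller sees the changes
    end;
}
```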

Codeunit Parameter

If your app exposes some specific business logic you might find it useful to pass a record from some master data, document or journal table (Customer, Sales Header, Item Journal Line etc). In our example the file handlers need to support a range of functions so it is probably going to be more useful to pass a generic record to the codeunit with some text to tell it what you want it to do and get the result back.

Candidate tables might include:

  • TempBlob – stuff whatever you want into the Blob field e.g. JSON, XML
    • This could include a command e.g. PULL FILE, LIST FILES, PUSH FILE that the codeunit should execute
    • Some parameters e.g. the name of the file to be pulled, the content and name of the file to be pushed
  • Name/Value Buffer – only takes text up to 250 characters, but that might be sufficient in some cases
    • It avoids bothering with a Blob field (although TempBlob has functions to write and read text to and from the Blob these days)
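Stuffing a command into the Blob field might look like this (a sketch – the handler codeunit ID would normally come from a setup table rather than being passed in, and all names here are illustrative):

```al
// Hypothetical caller. Running the handler by ID means there is
// no design-time dependency between the two apps.
local procedure ListFiles(HandlerCodeunitId: Integer)
var
    TempBlob: Record TempBlob temporary;
    OutStr: OutStream;
    InStr: InStream;
    Response: Text;
begin
    // write the command and its parameters into the Blob as JSON
    TempBlob.Blob.CreateOutStream(OutStr);
    OutStr.WriteText('{"command": "LIST FILES", "directory": "/orders"}');
    TempBlob.Insert();
    Codeunit.Run(HandlerCodeunitId, TempBlob);
    // the handler wrote its response back into the same record
    TempBlob.Blob.CreateInStream(InStr);
    InStr.ReadText(Response);
end;
```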

Other Considerations

  • There are JSON helper codeunits you can use (1234, 5459) as well as native JSON data types in AL.
  • The same is true of XML (XMLports, XML Buffer table and native AL types)
  • Remember that codeunit parameters are VAR which is useful in at least two ways
    • The codeunit that is called can set values in the record and they will be passed back to the calling codeunit e.g. pass the contents of a file back in the Blob field of the TempBlob record
    • You can pass a set of records (temporary records or filtered set) e.g. a file handler might list all the files in a directory in a set of Name/Value Buffer records. The calling codeunit is then able to just REPEAT…UNTIL over the set rather than extracting the result from a string.
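For example, a caller might receive a file listing as a set of temporary Name/Value Buffer records and simply loop over them (the handler codeunit ID is hypothetical, as is the convention that each record’s Name field holds a file name):

```al
local procedure ShowAvailableFiles()
var
    TempNameValueBuffer: Record "Name/Value Buffer" temporary;
begin
    // the handler fills the set: one record per file found
    Codeunit.Run(50130, TempNameValueBuffer);
    if TempNameValueBuffer.FindSet() then
        repeat
            Message('Found file: %1', TempNameValueBuffer.Name);
        until TempNameValueBuffer.Next() = 0;
end;
```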

I won’t go into any more detail on this approach here as the subject has already been covered.

Pros

  • The apps are disconnected from each other now. We can reuse one or more of them in another project as we choose without worrying about dependencies
  • This approach is likely flexible enough for most things you need to do. As long as you can represent your data as JSON or XML you can pass it between the codeunits

Cons

  • Not as straightforward to write, maintain or debug – parameters must be FORMATted and EVALUATEd back into their native type aka serialization
  • RecordRefs and FieldRefs aren’t as nice to work with as Records. Your code will be full of object and field IDs rather than names and will be more verbose
  • There is no way to pass complex types with their state. That is possible using dependencies, but not with Codeunit.Run
    • What if I’ve started to populate a record but before inserting I need to call another extension and I want to pass that record (not a copy with the same field values)?
    • If I’ve got global variables set in a codeunit or page I can’t pass them with Codeunit.Run

Option D: To be continued…

We’ve illustrated some of the challenges that arise when splitting your functionality into separate apps. Hopefully some of the above ideas will help you overcome them.

Let’s not overcomplicate things – if creating a dependency solves your problem and you’re happy with all the implications you should just do that. Otherwise, consider clearly defining the data your apps need to exchange and pass a record to Codeunit.Run.

In the next post I will give an option D for your consideration which attempts to address some of the remaining challenges.

Extensible Enums in Dynamics 365 Business Central

Option fields: great for scenarios where you want to provide a fixed, predefined list of values. Only a single value can apply and the user gets a convenient dropdown to select from. Perfect, until you want to extend the list of values.

Enter enums.

Documentation is here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-extensible-enums

The Theory

Enums are object types in their own right, not merely data types you can assign to fields or variables.

Let’s have a quick look at how it works. Who doesn’t love a calculator example?

Define a new enum like this:

enum 50100 Operator
{
  Extensible = true;
  value(0; Addition)
  {
    Caption = 'Addition';
  }
  value(1; Subtraction)
  {
    Caption = 'Subtraction';
  }
}

Notice the Extensible property. You need to explicitly decide that other apps can extend your values, which seems sensible. Use that enum as the data type for a table field or variable as you see fit.

Operator : Enum Operator;

As with options you’ll typically handle enums with a case statement. Also use the same double-colon syntax you use for options.

case Operator of
  Operator::Addition:
    exit(a + b);
  Operator::Subtraction:
    exit(a - b);
  else
    begin
      OnCalculate(a, b, Operator, Result, Handled); //event publisher
      if Handled then
        exit(Result);
    end;
end;

Notice the else in the case block. There isn’t much point making the enum extensible if you don’t have a way to handle the extended values. We’re throwing an event for any Operator values that we don’t recognise. Perhaps we ought to also throw an error if the event is not handled as that would indicate someone has added an enum value without handling the calculation – but you get the idea.

Now another app developer can extend your calculator with some new operators in a dependent app.

enumextension 50100 OperatorExt extends Operator
{
  value(50100; Sin)
  {
    Caption = 'Sin';
  }
  value(50101; Cos)
  {
    Caption = 'Cos';
  }
  value(50102; Tan)
  {
    Caption = 'Tan';
  }
}

The enumextension adds new values to the Operator enum. These values are not handled by the case statement above so the event is called. Subscribe to the OnCalculate event to provide the result and set the Handled flag.
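A subscriber in the dependent app might look like this (the publisher codeunit name and the parameter types are assumptions based on the case statement above, and the sin calculation itself is stubbed out):

```al
// Hypothetical subscriber handling the extended enum values.
codeunit 50101 "Operator Ext. Calc."
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::Calculator, 'OnCalculate', '', false, false)]
    local procedure HandleOnCalculate(a: Decimal; b: Decimal; Operator: Enum Operator; var Result: Decimal; var Handled: Boolean)
    begin
        case Operator of
            Operator::Sin:
                begin
                    // Result := ...calculate sin(a) here, e.g. via
                    // System.Math in an on-prem app...
                    Handled := true;
                end;
        end;
    end;
}
```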

The Practice

Three scenarios spring to mind where extending an enum could be particularly useful.

Adding On-premise Support

As a rule we try to write our extensions so that they can target either Business Central platform (SaaS or on-premise). The target property in app.json is set to “Extension” (or just omitted).

(Image: calculator)

Let’s imagine that you want to use the .Net System.Math library to calculate the results of sin, cos and tan. You can’t use .Net in an app with a target of Extension.

What you could do instead is build your base calculator functionality in a SaaS-friendly, target-Extension app and add your .Net functionality in a dependent on-prem, target-Internal app instead.

I know, in the real world there are probably a bajillion free web services that could provide the result or you could use .Net in an Azure Function. Heck, you could even calculate the result manually if you really wanted (but seriously, don’t). Then again, in the real world you’re probably not making a calculator app.

You might want to handle things differently depending on whether you’re running on-prem or on SaaS, though. For example, you might need to use .Net or interact with local resources like printers or file shares. Those are off-limits to SaaS apps. Rather than making your whole app target-Internal you could have a base app that you extend with your on-prem functionality.

Adding Additional Providers

Another model might be where you need several codeunits to provide some common functionality. Let’s say you have some integration with shipping agents – submitting consignment details, retrieving tracking numbers and label details etc.

You could create an enum with the name of the shipping agents that you integrate with in your app, but make allowance for that enum to be extended by other apps and throw appropriate events for them to handle integration with different agents.

Reusability

Finally, and perhaps most obviously is reusability. How many times have you copied option fields with the same option string and captions from one table to another? For instance, how many different places in standard does a “Document Type” field with an identical set of options occur? (I started to go through but quickly realised it was more than I could be bothered to count).

Instead of doing that you can just define the enum and its values once and reuse it – even if you don’t plan on making it extensible. You know it makes more sense.

Extension Settings in Microsoft Dynamics Business Central

Edit: The following is only relevant for Business Central sandbox environments. External service calls will always be permitted in production tenants.

Recent builds of Business Central introduce a check when your app attempts to call an external service through the HttpClient type in AL. The user will see a message like this:

“The extension [extension name] by [publisher name] is making a request to an external service. Do you want to allow this request?”

(Image: external service request dialog)

This decision is saved into the database and is editable from the Extension Settings page…

(Image: Extension Settings page)

…which stores the setting in the NAV App Setting table.

(Image: NAV App Setting record)

Either search in the menu for Extension Settings or use the AssistEdit button for the extension on the Extension Management page.

(Image: Extension Management page)

The only editable setting on the Extension Settings page at the moment is “Allow HttpClient Requests” but I guess we might see this table being used for more per-app configuration settings in future.

You can delete the record from the Extension Settings page if you like. If you do the user will be prompted to make the decision again the next time the app attempts to call an external service.

For the curious, if you choose to block the request or uncheck the “Allow HttpClient Requests” option on the Extension Settings page the user will see this message:

“The request was blocked by the runtime to prevent accidental use of production services.”