Extensions provide the opportunity for us to write and maintain our code in tidy, discrete blocks. This is good for separating concerns and breaking our functionality into logical pieces. But how do we get those pieces to play nicely together?
Scenario
The topic is probably best discussed with an example. Imagine that you’re writing some functionality to pull files into Business Central, handle them, and push other files back out.
It doesn’t matter what the files are for now – they could be JSON, XML, CSV, whatever. Also we won’t worry about how we’re handling them – perhaps creating items, posting documents – the usual stuff.
For our purposes, the interesting part is that we are ‘pulling’ and ‘pushing’ the files from and to different sources. Let’s say we need to support a local network share, an FTP site and Amazon S3. Three quite distinct things to support, but we’re going to need common functionality for each: checking for available files, retrieving files, deleting files and creating files.
This is a good opportunity to create separate apps: one with the business logic concerned with handling the file content and three separate apps concerned with pushing and pulling the files from the different sources.
Why separate apps? A few things to consider:
- Although they are doing similar things, the code for each source type isn’t going to bear much resemblance. Splitting them makes each app responsible for one thing, making it easier to write and maintain i.e. separation of concerns.
- Splitting the apps means you can reuse them individually. If you have a project that only requires the Amazon S3 component you can install just that and avoid bundling functionality that the customer isn’t using.
- In this scenario, handling local files will require using code that isn’t allowed in the cloud. If you bundle everything into a single app you won’t be able to use that app for SaaS implementations i.e. you’ll need to set the target to internal in app.json
Structure
OK, so you’ve decided to split this requirement into four apps. While that’s good for the reasons given above it does present a challenge. How do you structure these apps so that they can communicate with each other?
Option A: Business Logic Depends on File Handlers
Probably the most obvious thing to do is to have the business logic app depend on the file handlers. Business logic can start a process to pull new files and push results back. The file handlers can handle the request and pass the results back to the business logic. Or maybe the file handlers could throw an event when there is a new file available. Seeing as the business logic depends on the file handlers it can call their functions and subscribe to their events directly. Nice and simple.
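To make Option A concrete, here is a minimal sketch in AL. All object names and IDs ("S3 File Handler", "File Import Mgt.", the OnNewFileAvailable event) are hypothetical, invented for illustration; the point is simply that a dependency lets the business logic app call and subscribe directly:

```al
// Business logic app – depends on the (hypothetical) S3 file handler app,
// so it can reference that app's codeunits and events at design-time.
codeunit 50100 "File Import Mgt."
{
    procedure PullNewFiles()
    var
        S3Handler: Codeunit "S3 File Handler"; // lives in the dependency app
    begin
        // direct call into the file handler app
        S3Handler.PullFiles();
    end;

    // direct subscription to an event published by the file handler app
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"S3 File Handler", 'OnNewFileAvailable', '', false, false)]
    local procedure HandleNewFile(FileName: Text)
    begin
        // process the newly available file
    end;
}
```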
Pros
- The most straightforward approach
- Business logic can call the file handler functionality directly
Cons
- Only one, but I think it torpedoes this option. With this approach, if you ever want to reuse the business logic you’re going to have to install all the file handlers first, even if the customer isn’t using them. Including the network share app, which means you can’t deploy any of it to SaaS. Bummer.
Option B: File Handlers Depend on Business Logic
How about the other way round? Make the file handlers depend on the business logic. Business logic could raise an event requesting that the file handlers do something – push, pull, read a file. You could use the event parameters to target the request at a particular file handler and get some results back.
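A sketch of Option B, again with hypothetical object names and IDs: the business logic app publishes an integration event, and each file handler app (which now depends on the business logic app) subscribes, using an event parameter to decide whether the request is aimed at it:

```al
// Business logic app – publishes a request event that any handler can answer
codeunit 50101 "File Request Mgt."
{
    [IntegrationEvent(false, false)]
    procedure OnPullFileRequest(HandlerCode: Code[20]; FileName: Text; var Content: Text; var Handled: Boolean)
    begin
    end;
}

// S3 file handler app – depends on the business logic app and subscribes
codeunit 50150 "S3 Pull Subscriber"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"File Request Mgt.", 'OnPullFileRequest', '', false, false)]
    local procedure OnPullFile(HandlerCode: Code[20]; FileName: Text; var Content: Text; var Handled: Boolean)
    begin
        // only act when this handler is the target of the request
        if HandlerCode <> 'S3' then
            exit;
        // download the file from S3 and return it via the VAR parameter
        Handled := true;
    end;
}
```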
Pros
- Still quite straightforward to write
- You only need to install the file handlers that you are actually using in a project
Cons
- You’ve carefully crafted some generic, reusable functionality in the file handlers so you want to make sure that you do reuse them on other projects. Trouble is, in order to do that you’re now going to have to install your business logic app with them. Even if you’re not going to use it. Also bummer.
Dependencies
And that illustrates the trouble with dependencies. They are great for simplifying how your extensions interact with each other, but they make it more difficult to have truly reusable and interchangeable components that you can implement in other projects.
Not to mention that they add a small amount of hassle: you have to keep your dependency symbols up to date while you’re developing, and when you want to update an app you have to uninstall its dependents first.
Don’t get me wrong. I’m not suggesting that you should never use dependencies. We use them a lot. You just need to be aware of the implications before you create that relationship. You are stating that you will never find a need to install the dependent without also installing the dependency. In our example that is clearly not the case. We are going to want to be able to reuse one or more of the file handlers without reusing the business logic.
Option C: [Object].Run, RecordRef
Perhaps it’s better to try and avoid dependencies then? Maybe – but that swaps the above issues for a different set of challenges. How do you get the separate extensions to interact with each other when they are not aware of each other?
Object.Run to the rescue. The big win is that you can run an object that you don’t need to specify at design-time. Report Selections are an example that has been around just about forever. The user can pick the reports and the system can flexibly handle them (assuming they’ve picked a valid report for the usage – but let’s ignore that for now).
In a similar way RecordRefs provide access to records and related functions (getting, inserting, deleting, filtering, finding, field values etc.) without necessarily knowing the records and fields you are working with at design-time.
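For example, a RecordRef lets you open a table and read a field that are only chosen at runtime, with no compile-time reference to the table (the procedure name here is made up for illustration):

```al
// Open a table by ID and return the first record's value for a field,
// chosen by ID – nothing about the table is known at design-time.
procedure GetFirstFieldValue(TableId: Integer; FieldId: Integer): Text
var
    RecRef: RecordRef;
    FldRef: FieldRef;
begin
    RecRef.Open(TableId);
    if not RecRef.FindFirst() then
        exit('');
    FldRef := RecRef.Field(FieldId);
    exit(Format(FldRef.Value));
end;
```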
Codeunit.Run
Clearly the guts of your apps are going to live in codeunits. You can use Codeunit.Run to call those codeunits without each app needing to be aware of another’s inner workings or even existence. This is more like it.
Now, most likely you need to pass some data to the codeunit that you are running. How do you do that when you can only call the OnRun function? Codeunits can take a record (VAR) parameter. You can use this parameter to pass whatever you want.
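As a sketch of how the two sides fit together (all names and object IDs are hypothetical): each file handler app implements a codeunit whose TableNo is TempBlob, and the business logic app runs it by an ID read from a setup table, so neither app needs a compile-time reference to the other:

```al
// In the S3 file handler app – a codeunit that takes a TempBlob record
codeunit 50150 "S3 File Handler"
{
    TableNo = TempBlob;

    trigger OnRun()
    begin
        // read the command and parameters from Rec, do the work,
        // then write any results back into Rec (it is passed by reference)
    end;
}

// In the business logic app – runs whichever handler setup points at
codeunit 50102 "Handler Dispatch"
{
    procedure RunHandler(HandlerCodeunitId: Integer; var TempBlob: Record TempBlob)
    begin
        // HandlerCodeunitId comes from a setup table, so the business
        // logic app has no design-time knowledge of any handler app
        Codeunit.Run(HandlerCodeunitId, TempBlob);
    end;
}
```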
Codeunit Parameter
If your app exposes some specific business logic you might find it useful to pass a record from some master data, document or journal table (Customer, Sales Header, Item Journal Line etc). In our example the file handlers need to support a range of functions so it is probably going to be more useful to pass a generic record to the codeunit with some text to tell it what you want it to do and get the result back.
Candidate tables might include:
- TempBlob – stuff whatever you want into the Blob field e.g. JSON, XML
  - This could include a command e.g. PULL FILE, LIST FILES, PUSH FILE that the codeunit should execute
  - Some parameters e.g. the name of the file to be pulled, or the content and name of the file to be pushed
- Name/Value Buffer – only takes text up to 250 characters, but that might be sufficient in some cases
  - It avoids bothering with a Blob field (although TempBlob has functions to write and read text to and from the Blob these days)
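For instance, the caller could stuff a small JSON command document into the TempBlob record before running the handler. This sketch assumes the WriteAsText/ReadAsText helpers on the TempBlob table and a hypothetical HandlerCodeunitId from setup:

```al
// Caller: build a JSON "command document" in the Blob field and run the handler
TempBlob.Init();
TempBlob.WriteAsText('{"command":"PULL FILE","filename":"orders.xml"}', TextEncoding::UTF8);
Codeunit.Run(HandlerCodeunitId, TempBlob);

// Handler (inside OnRun): read the command back out of Rec
CommandText := Rec.ReadAsText('', TextEncoding::UTF8);
```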
Other Considerations
- There are JSON helper codeunits you can use (1234, 5459) as well as native JSON data types in AL
- The same is true of XML (XMLports, the XML Buffer table and native AL types)
- Remember that codeunit parameters are VAR, which is useful in at least two ways:
  - The codeunit that is called can set values in the record and they will be passed back to the calling codeunit e.g. pass the contents of a file back in the Blob field of the TempBlob record
  - You can pass a set of records (temporary records or a filtered set) e.g. a file handler might list all the files in a directory as a set of Name/Value Buffer records. The calling codeunit is then able to just REPEAT…UNTIL over the set rather than extracting the result from a string
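That second point looks like this on the calling side – a minimal sketch, assuming a file handler has already filled the temporary Name/Value Buffer set with one record per file:

```al
// Caller: loop over the result set the handler passed back by VAR
procedure ProcessFileList(var TempNameValueBuffer: Record "Name/Value Buffer" temporary)
begin
    if TempNameValueBuffer.FindSet() then
        repeat
            // Name holds the file name; no string parsing required
            Message('Found file %1', TempNameValueBuffer.Name);
        until TempNameValueBuffer.Next() = 0;
end;
```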
I won’t go into any more detail on this approach here as the subject has already been covered.
- http://vjeko.com/tempblob-faade-a-pattern-proposition/
- Gunnar & Sigurdur from Advania presented on these lines at Directions (for Directions attendees, password required)
Pros
- The apps are disconnected from each other now. We can reuse one or more of them in another project as we choose without worrying about dependencies
- This approach is likely flexible enough for most things you need to do. As long as you can represent your data as JSON or XML you can pass it between the codeunits
Cons
- Not as straightforward to write, maintain or debug – parameters must be FORMATted into text and EVALUATEd back into their native types, i.e. a form of serialization
- RecordRefs and FieldRefs aren’t as nice to work with as Records. Your code will be full of object and field IDs rather than names and will be more verbose
- There is no way to pass complex types with their state. That is possible using dependencies, but not with Codeunit.Run
- What if I’ve started to populate a record but before inserting I need to call another extension and I want to pass that record (not a copy with the same field values)?
- If I’ve got global variables set in a codeunit or page I can’t pass them with Codeunit.Run
Option D: To be continued…
We’ve illustrated some of the challenges that arise when splitting your functionality into separate apps. Hopefully some of the above ideas will help you overcome them.
Let’s not overcomplicate things – if creating a dependency solves your problem and you’re happy with all the implications you should just do that. Otherwise, consider clearly defining the data your apps need to exchange and pass a record to Codeunit.Run.
In the next post I will give an option D for your consideration which attempts to address some of the remaining challenges.