Working with Version Numbers in Dynamics Business Central / NAV

Specifically I’m talking about assigning version numbers to your own code and manipulating those versions in C/AL, AL and PowerShell.

Version Numbering

There are lots of different systems for assigning a version number to some code. Some incorporate the date or the current year and day number within the year. Loads of background reading here if you’re interested.

The system we typically follow is:

Version number = a.b.c.d where:

  • a = major version – this is only incremented for a major refactoring or complete rewrite of the software
  • b = minor version – incremented when a significant new feature is implemented
  • c = fix – incremented for small changes and bug fixes
  • d = build – set to the ID of the build that created it in Azure DevOps

This system isn’t perfect and we don’t always follow it exactly as written. The line between what is just a fix and what is a new feature is a little blurry, and we don’t run C/AL code through our DevOps build process, so those objects don’t get a build ID like AL apps do. Hit the comments section and tell me how and why you version differently.

Regardless, the important thing is that you give some consideration to versioning. In particular, two different copies of your code must never go out to customers with the same version number. That matters most for AL apps: if you want to publish an updated version of an app it must have a higher version number than the one it replaces.

Automation

There are several situations where we need to work with version numbers in code and in scripts.

  • In the build process – reading the current version from app.json and setting the last element to equal the build ID
  • In our PowerShell script that creates a new .navx package from C/AL code (yes, we still use v1 extensions; let’s go into that some other time)
  • In upgrade code – what was the previous version of the app? Was it higher or lower than a given version?
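The first of those can be scripted fairly simply. A minimal sketch (the file path is illustrative; $env:BUILD_BUILDID is the predefined Azure DevOps variable holding the build ID):

```powershell
# Read app.json, replace the last element of the version with the build ID, write it back
$AppJsonPath = '.\app.json'
$AppJson = Get-Content $AppJsonPath -Raw | ConvertFrom-Json

$Current = [Version]$AppJson.version
# Note: .NET calls the third element Build and the fourth Revision,
# so our "build" element is actually the Revision property
$AppJson.version = [Version]::new($Current.Major, $Current.Minor, $Current.Build, $env:BUILD_BUILDID).ToString()

Set-Content $AppJsonPath (ConvertTo-Json $AppJson)
```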

If you are considering, like we used to, just treating version numbers as strings…don’t. Think about it:

Treated as versions, 1.10.0 is greater than 1.9.0, but treated as strings it isn’t. That led us to split the versions into two arrays and compare each element. It worked, but it was convoluted. And completely unnecessary.
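To see why, try it in PowerShell:

```powershell
# String comparison is character-by-character: '1' sorts before '9',
# so '1.10.0' is "less than" '1.9.0' as a string
'1.10.0' -gt '1.9.0'                    # False

# Compared as versions, each element is compared numerically
[Version]'1.10.0' -gt [Version]'1.9.0'  # True
```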

Some bright spark in our team wondered why we couldn’t just use .NET’s System.Version type. We can.

C/AL

Use a DotNet variable of type Version. Construct it with the version number string. NAVAPP.GETARCHIVEVERSION returns a string that can be used.

You can use the properties of the variable to access the individual elements of the version and its methods to compare to another string (less than, less than or equal to, greater than, greater than or equal to).

Version : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
Version2 : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

Version.Version('1.10.0');
Version2.Version(NAVAPP.GETARCHIVEVERSION);

IF Version2.op_LessThan(Version) THEN BEGIN
  //some upgrade code that must be run when coming from an older version than 1.10.0
END;

PowerShell

Declare a variable of a given .NET type using square brackets. Create a new version with new, Parse or TryParse. The latter expects a version variable passed by reference and returns a Boolean indicating whether a value could be assigned.

Access the elements of the version through the properties of the variable.

C:\> $Version1 = [Version]::new(1,10,0)
>> $Version2 = [Version]::new('1.9.0')
>> $Version1.CompareTo($Version2)
1

C:\> $Version = [Version]::new(1,10,0)
>> $Version.Minor
10

C:\> $Version = [Version]::new()
>> [Version]::TryParse('monkey',[ref]$Version)
False
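Since System.Version implements IComparable, the standard PowerShell comparison operators also work directly on version objects, which often reads better than CompareTo:

```powershell
[Version]'1.9.0' -lt [Version]'1.10.0'   # True
[Version]'1.10.0' -ge [Version]'1.10.0'  # True
```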

AL

AL has a native Version datatype. As above, create a new version either from its elements or from a string. NavApp.GetArchiveVersion returns a string that can be used (for migration from v1).

To get the version of the current module (app) or of another app use NavApp.GetCurrentModuleInfo or NavApp.GetModuleInfo.

var
  Ver : Version;
  Ver2 : Version;
  DataVer : Version;
  AppVer : Version;
  ModInfo : ModuleInfo;
  ModInfo2 : ModuleInfo;
begin
  Ver := Version.Create(1,10,0);
  Ver2 := Version.Create(NavApp.GetArchiveVersion());

  if Ver > Ver2 then begin
    //some upgrade code
  end;

  //version of the current app
  NavApp.GetCurrentModuleInfo(ModInfo);
  DataVer := ModInfo.DataVersion();
  AppVer := ModInfo.AppVersion();

  //app version of the first dependency
  NavApp.GetModuleInfo(ModInfo.Dependencies().Get(1).Id(),ModInfo2); //dependencies is 1 based, not 0 based
  AppVer := ModInfo2.AppVersion();
end;

VS Code, PowerShell & Git: 5 Things

Visual Studio Code has moved quickly from “what’s that? Part of Visual Studio? No? Then why did they call it that?” to become the hub of much of my daily work. This post contains a few of the things (5 to be precise) that I’ve done to make it work better for me. Maybe you can glean something useful. Maybe you can teach me something about how you use it – post a comment.

Extensions

You can use VS Code to write JavaScript, C#, CSS, HTML and a raft of other languages, use its native support for Git and install extensions for AL (obviously), developing Azure Functions, integrating with Azure DevOps, managing Docker, writing PowerShell, adding support for TFVC…

Beautiful.

Having said that, I’m not a big fan of having lots of extensions that I only occasionally use. I’m pretty ruthless in uninstalling stuff I’m not using in Chrome and Android. VS Code is the same. If I don’t use it all the time I generally go without it. (For those of us that make apps for a living it’s a sobering thought that our prospective users are likely to be the same).

Right now I’ve got these extensions installed:

  • AL Language – every so often I need an upcoming version or a NAV 2018 version but most of the time I’ve got the one from the marketplace installed
  • Azure Account – provides some sign in magic required by other extensions
  • Azure Functions – like it sounds
  • Azure Pipelines – intellisense for YAML build definitions
  • CRS AL Language Extensions – for renaming AL files to follow best practices and because I don’t like the convention. Including the object type when the files are already in subfolders by object type and including objects IDs when we all agree we want to get rid of them and don’t care what they are as long as they’re unique seems pretty redundant to me…but I digress
  • GitLens – add blame annotations i.e. “how did this line of code get here”, file history, compare revisions, open the file in Azure DevOps
  • PowerShell – like it sounds
  • Night Owl – a theme. Because we can! Having suffered for years with an IDE that didn’t even highlight keywords I took my time trying out different themes. I like a dark theme but didn’t quite get on with the one that comes with VS Code.

Terminal

VS Code has a built-in terminal. I use PowerShell a lot during the day to manage containers (with the navcontainerhelper module), manage Git and perform various tasks with our own module that calls the Azure DevOps REST API. It’s nice to also be able to do all that from within VS Code.

The next few ideas aren’t strictly to do with VS Code; they’re about tweaking PowerShell and Git to make them more efficient for you.

Run as Administrator

If you’re going to use the terminal to manage docker containers you’re going to want to run VS Code (and therefore the terminal) as administrator.

You can set this in the Advanced section of the properties of the shortcut. This will force VS Code to always open as admin.

[Image: VS Code shortcut properties]

I believe Freddy K is working on some changes to the navcontainerhelper module that will remove the requirement to run the cmdlets as admin. That would be nice.

PowerShell Profile

Have PowerShell automatically execute some script on loading by editing your profile. PowerShell has a built-in $profile variable which points to the location of your .ps1 profile file.

I use that file to import the posh-git module (below) and our own TFS Tools module. You could create the file with something like this (sc is an alias for the Set-Content command):

sc $profile 'Import-Module posh-git
Write-Host "PowerShell Ready" -ForegroundColor Green'

Opening a new terminal will look like this:

[Image: VS Code terminal showing “PowerShell Ready”]

Note: PowerShell ISE has a different profile file to PowerShell.

Posh-Git

I mostly use Git from the command line. I started using the command line rather than a GUI as I found it helped me understand what commands are actually being used – how fetch is different to pull, how to set tracking information for a branch or edit a remote.

And yes, perhaps there is a small part of it that boosts my shallow sense of “I’m a real developer, I type weird commands into a prompt rather than clicking a button on a GUI”. It’s OK to admit that. I draw the line at Vim though.

Anyway.

If you’re planning on using Git in PowerShell you’re going to want to install the posh-git module.

Install-Module posh-git

It adds some details into the prompt (see above): the branch that you are on, how it compares to the remote branch that it is tracking and the status of your index. It adds tab completion all over the place as well – indispensable.

Git Aliases

If you do start using Git from the terminal you’re probably going to find typing some of the longer commands quite tedious. For instance, git log --graph is great to get an overview of your project and has loads of switches to alter its output. I tend to use:

git log --graph --oneline --all

to show a graph of all the branches (remote as well as local) with commit details on a single line each.

[Image: git log graph oneline output]

It gets you something like the above. You can see the commit that each of the branches is pointing at, which branches each commit is included in and how work has been merged over time.

I don’t want to type the full command out each time though. Fortunately, Git doesn’t force you to. You can create an alias. I have:

  • git lol – to show the above graph
  • git fap – to fetch all changes from the remote and prune any references to remote branches that no longer exist (I’ve never understood why Git doesn’t automatically remove references to remote branches that no longer exist)
  • git pff – pull and merge changes from the remote branch, as long as your branch can be fast-forwarded
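Aliases are created with git config. The lol definition matches the command above; the fap and pff definitions below are my guesses at sensible implementations based on the descriptions:

```shell
git config --global alias.lol "log --graph --oneline --all"
git config --global alias.fap "fetch --all --prune"
git config --global alias.pff "pull --ff-only"
```

After that, git lol runs the full log command.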

Conclusion

There are lots of opportunities – more than 5 – to enhance and tune VS Code and PowerShell to make your daily work more efficient. Check it out.

Part 3: Integration Between Extensions in Dynamics 365 Business Central

[Image: Trig calculator demo]

Sample Code: https://github.com/jimmymcp/calculator-interface

This post is in a series (parts one and two here) discussing the challenges and practical approaches to breaking your functionality into discrete extensions and getting them to integrate with one another.

In the previous post I described my attempt to declare and implement interfaces in AL with a heady mix of a discovery pattern, Codeunit.Run and manually bound subscribers. In this post I’m going to walk through an example.

The example is, of course, a calculator. Cos, sin and tan calculations will be handled by separate modules all implementing a TRIG interface and its Calculate method.

The calculator should be able to make use of any of the calculations independently of the others and it should be possible to maintain a calculation module without affecting anything else.

[Image: calculator app structure diagram]

Before we start, a few things to note:

  • We can’t actually define an interface and implement it in any formal way in AL. Not in a sense that will give you a compile-time error if you don’t implement it correctly. Microsoft are aware that this is something we need and are investigating how they might bring this to AL e.g. check out the “Designing for extensibility” session at NAVTechDays 2018. This is my attempt to bring the benefits of interfaces to Business Central development until Microsoft give us something better
  • For the sake of convenience I’m using a calculator example rather than the file handler scenario I have been discussing in this series. This approach could be considered for any scenario where you have multiple, independent implementations of similar functionality
  • Also for convenience, all of the sample code is in a single app. In reality it would be split into 5 apps as per the diagram above

Registering Implementations

With all that said let’s get down to the details. The first thing is that each of the calculation modules registers themselves as an implementation of the TRIG interface.

Each module has a pair of codeunits:

  1. Binding – responsible for subscribing to the discovery event and registering the implementation and for binding an instance of the Calculation codeunit
  2. Calculation – contains the methods that actually implement the interface events, is manually bound

The below code is from the CosBinding codeunit. It adds a new entry into the Interface Implementation table to register an implementation of the TRIG interface called COS. It also specifies the codeunit to run when the COS implementation needs to be used – itself.

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Interface Mgt.", 'OnRegisterInterface', '', false, false)]
local procedure OnRegisterInterface(var InterfaceImplementationBuffer: Record "Interface Implementation" temporary)
begin
  InterfaceImplementationBuffer.AddNewEntry('TRIG','COS',Codeunit::"Cos Binding",0);
end;

You’ll see the same code for the SIN and TAN implementations.

Looking Up Implementations

Now that we’ve got multiple implementations of the same interface we need some way of allowing code that requires the interface to select the appropriate implementation.

field(Operation; Operation)
{
  ApplicationArea = All;
  AssistEdit = true;

  trigger OnAssistEdit()
  var
    InterfaceImplementation: Record "Interface Implementation";
    InterfaceMgt: Codeunit "Interface Mgt.";
  begin
    if InterfaceMgt.LookupInterfaceImplementation('TRIG', InterfaceImplementation) then
      Operation := InterfaceImplementation."Implementation Code";
  end;
}

The Operation field on the Calculator page allows the user to select the operation they want to perform i.e. which implementation of the TRIG interface to use in the calculation.

The Interface Mgt. codeunit provides a lookup of the implementations that have been registered for a given interface and returns the selected record.

Invoking Interface Methods

Now we’ve registered the implementations and selected the specific one we want to use it’s time to actually invoke it.

action(Calculate)
{
  ApplicationArea = All;
  Image = Calculate;
  Promoted = true;
  PromotedCategory = Process;
  PromotedOnly = true;

  trigger OnAction()
  var
    InterfaceMgt: Codeunit "Interface Mgt.";
    AppIntegrationData: Codeunit "App Integration Data";
    Handled: Boolean;
  begin
    AppIntegrationData.SetIntegationData('Angle', Angle);
    InterfaceMgt.InvokeInterfaceEvent('TRIG', Operation, 'Calculate', AppIntegrationData, Handled);
    if Handled then
      Result := AppIntegrationData.GetIntegrationDataDecimal('Result', 0)
  end;
}

I’m using an instance of the App Integration Data codeunit as a container for the data that needs to be passed between the implementation codeunit and the codeunit that is calling it. In my case I just need to pass in an angle and retrieve the result of the calculation.

InvokeInterfaceEvent tells the Interface Mgt. codeunit to invoke the Calculate method in the TRIG interface and the implementation selected in the Operation field. The instance of App Integration Data is passed in along with a Handled flag.

If the event has been handled then retrieve the value of the Result variable – as a decimal – from the App Integration Data codeunit.

And that’s it.

InvokeInterfaceEvent

So how does the appropriate Calculation codeunit get called?

This is the InvokeInterfaceEvent method.

procedure InvokeInterfaceEvent(InterfaceCode: Code[20]; ImplementationCode: Code[20]; EventName: Text; var IntegrationData: Codeunit "App Integration Data"; var Handled: Boolean)
begin
  Clear(InterfaceCodeunit);
  if not GetInterfaceImplementation(InterfaceCode, ImplementationCode, InterfaceImplementation) then
    Error(NoInterfaceImplementationErr, InterfaceCode);

  InterfaceImplementation.TestField("Codeunit ID");
  Codeunit.Run(InterfaceImplementation."Codeunit ID");
  if not InterfaceCodeunit.IsCodeunit() then
    Error(NoInterfaceCodeunitErr, InterfaceImplementation."Codeunit ID", InterfaceImplementation."Interface Code", InterfaceImplementation."Implementation Code");

  OnInterfaceEvent(EventName, IntegrationData, Handled);
  Clear(InterfaceCodeunit);
end;

First, check that a valid interface and implementation have been specified and throw an error if not.

Then test that a Codeunit ID has been specified by the selected implementation and run that codeunit. As we saw above, when registering the implementation the (Cos/Sin/Tan)Binding was specified as the codeunit to run. That codeunit is responsible for binding an instance of the correct (Cos/Sin/Tan)Calculation codeunit and passing that instance back to the Interface Mgt. codeunit (see below).

The Interface Mgt. codeunit has a global InterfaceCodeunit variable which keeps that bound codeunit instance in scope, ready to respond to the OnInterfaceEvent call.

Before calling OnInterfaceEvent we check that the InterfaceCodeunit variable does actually contain a codeunit.

After the OnInterfaceEvent call the InterfaceCodeunit is cleared to dispose of the bound codeunit and ensure it doesn’t respond to any more events until we need it again.

Binding Codeunit OnRun

This is the OnRun trigger of the CosBinding codeunit. All it does is bind an instance of the corresponding Calculation codeunit and pass that instance back to Interface Mgt.

trigger OnRun()
var
  InterfaceMgt : Codeunit "Interface Mgt.";
  CosCalculation : Codeunit "Cos Calculation";
begin
  BindSubscription(CosCalculation);
  InterfaceMgt.SetInterfaceCodeunit(CosCalculation);
end;

OnInterfaceEvent

Now that we have an instance of the appropriate Calculation codeunit bound it will respond to the OnInterfaceEvent event and we can run whatever business logic we want.

Here is the CosCalculation codeunit. It:

  1. Subscribes to OnInterfaceEvent
  2. Has a case statement to handle the event that has been called (in real life an implementation will likely implement multiple methods)
  3. Reads the Angle variable from the App Integration Data codeunit
  4. Uses System.Math to calculate the result
  5. Stores the result in the Result variable in the App Integration Data codeunit
  6. Sets Handled to true

local procedure Calculate(var AppIntegrationData : Codeunit "App Integration Data")
var
  Math : DotNet Math;
  Angle : Decimal;
  Result : Decimal;
begin
  Angle := AppIntegrationData.GetIntegrationDataDecimal('Angle',0);
  Result := Math.Cos(Angle);
  AppIntegrationData.SetIntegationData('Result',Result);
end;

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Interface Mgt.", 'OnInterfaceEvent', '', false, false)]
local procedure OnInterfaceEvent(EventName: Text; IntegrationData: Codeunit "App Integration Data"; var Handled: Boolean)
begin
  case EventName of
    'Calculate':
      begin
        Calculate(IntegrationData);
        Handled := true;
      end;
  end;
end;

Conclusion

And there you have it. Provided you can live with the shared dependency at the bottom of the dependency tree this achieves the two objectives that we set out with:

  1. Splitting functionality into multiple, discrete apps that can be developed and maintained independently of each other
  2. Having those apps integrate with each other to provide the required functionality to the end user

It’s not the most elegant solution and coding this way means you don’t get much help from the IDE. If you mistype a variable or event name somewhere everything will compile but nothing will work.

Hopefully at some point Microsoft will give us a better solution to these challenges but in the meantime take as much or as little inspiration from our approach as you like.

Part 2: Integration Between Extensions in Dynamics 365 Business Central

This post follows on from my discussion of extensions and integration and dependencies between them. Find the first part here.

TL;DR

  • You can use a base app as a common dependency for the apps that you want to integrate
  • Have one app raise an event publisher with the required event data and another app subscribe to that event
  • Use EventSubscriberInstance = Manual with BindSubscription to create an instance of the subscriber that you want for a given event call
  • Use a SingleInstance codeunit in the base app to keep the subscriber in scope to respond to events and CLEAR them when you’re done

Scenario

So far we’ve established the scenario of four apps: some business logic that is handling files from an external system and three file handler apps that are pushing and pulling those files from various sources.

The key objectives are to write each of these apps in such a way that:

  1. They integrate together to provide the overall functionality that the customer requires
  2. We can reuse one or more of the apps in other projects flexibly without needing to install dependencies that we aren’t using

Objective #2 means that we can’t have any dependencies between the apps. In Part 1 we discussed how you might achieve that with Codeunit.Run, and some of the challenges that approach leaves us with.

Interfaces

Let’s picture how we might design a solution without worrying about the actual limitations of the AL language first.

In our example the file handlers are working with different sources (local network, FTP and Amazon S3) but they are providing common functionality. We’d probably need them all to:

  • List files in a given directory
  • Get the contents of a specific file
  • Delete files
  • Create new files

We might define all of the methods that we’d require a file handler to provide in an interface and have each file handler app implement that interface. This serves as a contract between the business logic app and the file handlers that the file handlers will always provide an agreed set of methods.

Polymorphism

A related, but slightly different idea is polymorphism. We might have a file handler base from which other file handlers inherit and override their functionality. This has the advantage of allowing the business logic app to create an instance of a file handler and call its methods without worrying about the precise type of file handler that is implementing those methods. For example, the business logic app can request that a file handler lists available files without knowing, or caring, precisely how that is being handled.

Yes, But We Code in AL not C#

Great. Thanks for the theory, but none of this is possible in AL, so why are we talking about it? While we can’t write a solution using an interface or inheritance we can take inspiration from those approaches.

There are a couple of key challenges that we have to get a little funky in AL to overcome:

  1. How do we create an instance of a codeunit at runtime without knowing what that codeunit will be at design-time?
  2. How do we call methods in that codeunit without knowing which codeunit we’re talking about at design-time?

Use Events…Obviously

In one way the answer is simple. That’s what an event publisher is for. I raise an event and code in subscribers runs without my knowing that those subscribers even exist at design-time. Perfect, except that we are trying to avoid creating dependencies between our apps…remember? The file handlers can’t subscribe to an event in the business logic unless they depend on it or vice versa.

Common Dependency

One way to work around that is to have a common dependency between the apps that you want to integrate. Have the business logic raise an event in the base dependency that the file handler depends upon.

The base app could have some events that expose useful functionality to the business logic app (the sort of methods listed above).

Each file handler app could subscribe to those events and implement them.

We’re getting closer.

Pros

  • Only install the file handlers that you actually need
  • Decouples the business logic from the file handlers, they can be installed and maintained independently
  • We can pass AL types natively through the event parameters i.e. no need to serialize them and stuff them into a TempBlob record

Cons

  • If you want to support new methods you need to modify the base app which means you need to uninstall everything on top of it first
  • All file handlers will respond to all events raised in the base app. We’ll need to set a parameter to indicate which file handler we want to respond and have all file handlers respect it. Not insurmountable, but not particularly elegant either

Option D

With all that preamble I’ll get on to describing the Option D that I promised in the previous post.

I’ll attempt to outline our (current) approach in comprehensible English here but follow up with an example in the next post. This approach attempts to combine the best of both worlds:

  • Codeunit.Run targets a specific codeunit to run (rather than shouting for someone to help and having all the file handlers come running at the same time)
  • Event subscriptions allow you to pass native AL types

Credit to vjeko.com/i-had-a-dream-codeunit-references. This design takes some of the ideas Vjeko discusses in his post.

Listen…but Only When You’re Spoken To

We have a base app that is a common dependency for the apps that we are integrating as per the diagram above. The file handlers subscribe to an event in the base app which the business logic app is able to raise and pass appropriate parameters to. With multiple file handlers installed how do we prevent them from all responding all of the time? We want the business logic app to control which file handler’s event subscription fires each time.

The EventSubscriberInstance property. Set that to Manual for a codeunit and it will only respond to events when an instance of it is bound with BindSubscription. The codeunit will continue to respond until it is explicitly unbound or the instance goes out of scope. So, in order to have a particular subscriber respond we need a bound instance of its codeunit in scope when the event publisher is fired.

Interface Mgt.

The instances of subscribers are managed by a SingleInstance codeunit, Interface Mgt. Each file handler app requires a pair of codeunits:

  1. contains the logic i.e. the specifics of that file handler (EventSubscriberInstance = Manual)
  2. to register itself as an implementation of an interface, to bind an instance of codeunit 1 and pass that instance to Interface Mgt. when required

The flow is something like this (concentrate, this is the science bit):

  1. Interface Mgt. calls for interface implementations with a discovery event
  2. File handlers register their implementation with an Interface Code, Implementation Code, Codeunit ID (codeunit 2 as described above), Setup Page ID
    • File handlers that implement the same set of functions should have the same Interface Code e.g. “FILE HANDLER”
    • The Implementation Code uniquely identifies each handler e.g. “NETWORK”, “FTP”, “AMAZON S3”
  3. The business logic app asks Interface Mgt. to provide a lookup of available implementations for a given interface
    • Use this to assist with some setup in the business logic app
  4. The business logic app sets parameter values and asks Interface Mgt. to raise the event in a given file handler e.g.
    • Event Name = “GetFileContents”
    • Interface Code = “FILE HANDLER”
    • Implementation Code = “FTP”
    • Any other required event payload data
  5. Interface Mgt. runs the codeunit set on registration of the interface implementation (step 2)
    • That codeunit is responsible for binding an instance of the codeunit that contains the file handler logic
    • It passes that instance back to Interface Mgt. which stores it in a variant and keeps it in scope long enough to respond to the event in the following step
  6. Interface Mgt. calls the OnInterfaceEvent event with the payload set above (step 4)
  7. Regardless of how many subscribers there are to this event, there should only be one bound codeunit in scope (the one set in step 5), so this is the only codeunit to respond to the event
  8. The file handler responds to the event, reading the event parameters and setting response data as appropriate
  9. The consumer reads the response information as required

Event Payload

I’ve talked about event parameters and response data above. How can you pass the required data in the OnInterfaceEvent event? We use an instance of a codeunit in the base app as a container for all the data associated with the event.

This codeunit has a bunch of methods for storing and retrieving data from the codeunit but essentially it is just an array of variants. We pass some data to the codeunit and tag it with a name and retrieve it again with the same name. This allows us to store any AL data types with their state and avoid serializing them.

Think of the Library - Variable Storage codeunit; it’s very similar.
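The container might be sketched like this in AL. The method names mirror those used in the sample code (including its spelling of SetIntegationData); the array size and internals are illustrative, not the actual implementation:

```al
codeunit 50101 "App Integration Data"
{
    var
        Names: array[25] of Text;
        Values: array[25] of Variant;
        EntryCount: Integer;

    // Store a value of any type, tagged with a name
    procedure SetIntegationData(Name: Text; Value: Variant)
    begin
        EntryCount += 1;
        Names[EntryCount] := Name;
        Values[EntryCount] := Value;
    end;

    // Retrieve a decimal by name, falling back to a default value
    procedure GetIntegrationDataDecimal(Name: Text; Default: Decimal) Result: Decimal
    var
        Index: Integer;
    begin
        Result := Default;
        for Index := 1 to EntryCount do
            if Names[Index] = Name then
                Result := Values[Index];
    end;
}
```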

Conclusion

Pros

  • The Interface Mgt. codeunit is generic and should be suitable for reuse in other scenarios where you have multiple implementations of given functionality
  • You can have as many implementations as you like and still be specific about the one you want to invoke each time
  • Pass instances of AL types around with their state e.g. a temporary set of Name/Value Buffer records or an xmlport without having to recreate it from JSON or XML

Cons

  • We’ve solved our objective of removing dependencies between extensions…with a dependency. Smart. Maybe if Microsoft made something like this available in the base app we could achieve our objectives with no dependencies at all
  • Complexity. Conceptually this is harder to follow than just using Codeunit.Run although once in place I don’t think the file handlers are any more difficult to write

Example

If none of that made much sense then fear not. I’ll show some example code and a calculator implementation next time.

Extensible Enums in Dynamics 365 Business Central

Option fields: great for scenarios where you want to provide a fixed, predefined list of values. Only a single value can apply and the user gets a convenient dropdown to select from. Perfect, until you want to extend the list of values.

Enter enums.

Documentation is here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-extensible-enums

The Theory

Enums are object types in their own right, not merely data types you can assign to fields or variables.

Let’s have a quick look at how it works. Who doesn’t love a calculator example?

Define a new enum like this:

enum 50100 Operator
{
  Extensible = true;
  value(0; Addition)
  {
    Caption = 'Addition';
  }
  value(1; Subtraction)
  {
    Caption = 'Subtraction';
  }
}

Notice the Extensible property. You need to explicitly decide that other apps can extend your values, which seems sensible. Use that enum as the data type for a table field or variable as you see fit.

Operator : Enum Operator;

As with options you’ll typically handle enums with a case statement. Also use the same double-colon syntax you use for options.

case Operator of
  Operator::Addition:
    exit(a + b);
  Operator::Subtraction:
    exit(a - b);
  else
    begin
      OnCalculate(a, b, Operator, Result, Handled); //event publisher
      if Handled then
        exit(Result);
    end;
end;

Notice the else in the case block. There isn’t much point making the enum extensible if you don’t have a way to handle the extended values. We’re throwing an event for any Operator values that we don’t recognise. Perhaps we ought to also throw an error if the event is not handled as that would indicate someone has added an enum value without handling the calculation – but you get the idea.
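That guard might look something like this (a sketch; the error text is mine):

```al
else begin
    OnCalculate(a, b, Operator, Result, Handled);
    if not Handled then
        Error('No handler implemented for operator %1.', Operator);
    exit(Result);
end;
```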

Now another app developer can extend your calculator with some new operators in a dependent app.

enumextension 50100 OperatorExt extends Operator
{
  value(50100; Sin)
  {
    Caption = 'Sin';
  }
  value(50101; Cos)
  {
    Caption = 'Cos';
  }
  value(50102; Tan)
  {
    Caption = 'Tan';
  }
}

The enumextension adds new values to the Operator enum. These values are not handled by the case statement above so the event is called. Subscribe to the OnCalculate event to provide the result and set the Handled flag.

The Practice

Three scenarios spring to mind where extending an enum could be particularly useful.

Adding On-premise Support

As a rule we try to write our extensions so that they can target either Business Central platform (SaaS or on-premise). The target property in app.json is set to “Extension” (or just omitted).

[Image: calculator]

Let’s imagine that you want to use the .Net System.Math library to calculate the results of sin, cos and tan. You can’t use .Net in an app with a target of Extension.

What you could do instead is build your base calculator functionality in a SaaS-friendly, target-Extension app and add your .Net functionality in a dependent on-prem, target-Internal app instead.

I know, in the real world there are probably a bajillion free web services that could provide the result or you could use .Net in an Azure Function. Heck, you could even calculate the result manually if you really wanted (but seriously, don’t). Then again, in the real world you’re probably not making a calculator app.

You might want to handle things differently depending on whether you’re running on-premise or in SaaS though. For example, you might need to use .NET or interact with local resources like printers or file shares. Those are off-limits to SaaS apps. Rather than making your whole app target-Internal you could have a base app that you extend with your on-prem functionality.

Adding Additional Providers

Another model might be where you need several codeunits to provide some common functionality. Let’s say you have some integration with shipping agents – submitting consignment details, retrieving tracking numbers and label details etc.

You could create an enum with the name of the shipping agents that you integrate with in your app, but make allowance for that enum to be extended by other apps and throw appropriate events for them to handle integration with different agents.

Reusability

Finally, and perhaps most obviously, is reusability. How many times have you copied option fields with the same option string and captions from one table to another? For instance, how many different places in the standard application does a “Document Type” field with an identical set of options occur? (I started to go through but quickly realised it was more than I could be bothered to count.)

Instead of doing that you can just define the enum and its values once and reuse it – even if you don’t plan on making it extensible. You know it makes more sense.