Stop Writing Automated Tests and Get On With Some Real Code

To be fair, these weren’t the exact words that were used, but a view along these lines was expressed from the keynote stage at Directions last week: frustration that developers now have to concern themselves with infrastructure, like Docker, and with writing automated tests rather than “real” code.

I couldn’t resist a short post in response to this view.

If It Doesn’t Add Value, Stop Doing It!

First, no one is forcing you to write automated tests – apart from Microsoft, who want them with your AppSource app submission. Even then, I haven’t heard of Microsoft rejecting an app because it wasn’t accompanied by enough automated tests.

I’m an advocate of developers taking responsibility for their own practices. Don’t follow a best practice simply because someone else tells you it’s a best practice. You know your scenario, your team, your code and your customers better than anyone else. You are best placed to judge whether implementing a new practice is worth the cost of getting started with it.

AppSource aside, if you are complaining about the amount of time you have to spend on writing tests then you have no one to blame but yourself. Or maybe your boss. If you don’t see the value in writing automated tests then you probably should stop wasting your time writing them!

Automated Tests vs “Real” Code

Part of the frustration with tests seemed to be that they aren’t even “real” code. If by “real” code we are referring to the code that we deliver and sell to customers then no, tests aren’t real code.

But what are we trying to achieve? Surely working, maintainable code that adds value for our customers.

We might invest in lots of things in pursuit of that goal. Time spent manually testing, sufficient hardware to develop and test the code on, an internet connection to communicate with each other and the customer, office space to work in, training courses and materials, coffee. We’re not selling these things to the customer either but no one would question that they are necessary to achieve the goal of delivering working software. Especially the coffee.

Whether or not automated tests are “real” code is the wrong question. The important judgement is whether the time spent on writing them makes a big enough contribution to the quality of the product that you eventually ship.

I won’t make the case for automated testing here. That’s for a different post. Or a different book. Suffice to say, I do think it is worth the investment.

But We’ve Got a Backlog of Code Not Covered By Tests

One problem you might have is that you’ve got a backlog of legacy code that isn’t covered by any automated tests. Trying to write tests to cover it all will take ages. The speaker at Directions seemed to express this frustration too – it even got a round of applause from some of the audience.

My response would be the same – you are best placed to make a pragmatic judgement. Of course it would be nice to have 100% code coverage of your tens of thousands of lines of legacy code – but if you’ll have to stop developing new features for six months to achieve it, is it worth it? Probably not.

Automated tests should give you confidence that the code works as expected. If you are already confident that your existing code works then there might be limited value in writing a suite of tests to prove it.

Try starting with tests to cover new features that you develop or bug fixes. With these cases you’ve got some code that you aren’t confident works as expected – or that you know doesn’t. Take the opportunity to document and prove the expected behaviour with some tests. Over time you’ll build a valuable suite of tests that you can run to demonstrate that each new release of your product works and that bugs haven’t been reintroduced.

With some practice you’ll find that you can use the library codeunits to create scenarios with very little test code, e.g. you can create a customer, item and sales order, post it and get the posted sales invoice in 2 lines of code.
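As a sketch of what that can look like (the object ID and names here are invented, and it assumes the standard test library apps – such as the “Library - Sales” codeunit – are available to your test app):

codeunit 50100 "Sales Posting Tests"
{
    Subtype = Test;

    var
        LibrarySales: Codeunit "Library - Sales";

    [Test]
    procedure PostedInvoiceExistsAfterPostingSalesOrder()
    var
        SalesHeader: Record "Sales Header";
        SalesInvoiceHeader: Record "Sales Invoice Header";
    begin
        // [GIVEN] a sales order - the library creates the customer and item for us
        LibrarySales.CreateSalesOrder(SalesHeader);

        // [WHEN] the order is shipped and invoiced
        // [THEN] Get fails the test if no posted sales invoice was created
        SalesInvoiceHeader.Get(LibrarySales.PostSalesDocument(SalesHeader, true, true));
    end;
}

The body really is just the two lines – create and post – with the library codeunits doing all the record creation behind the scenes.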

Interested? More here

Debugging the Next Session in Business Central

Business Central v15 includes some good new stuff for developers. Access modifiers for objects, smarter code analysis, background page tasks – there is a full list here: https://docs.microsoft.com/en-us/dynamics365-release-plan/2019wave2/dynamics365-business-central/developer-tools

I’ve just been trying out the new debugger capability, specifically being able to attach the debugger to a service and debug the next session to hit a breakpoint or error.

A Brief Nostalgia Trip…

Excuse me if I indulge in a little nostalgia. If you don’t care about this and just want to know how it works then you can skip to “Spare me the History Lesson”.

The Classic Client Years

Still here? Then maybe you have been around NAV long enough to remember the introduction of the RoleTailored Client. We’d been used to having the Classic Client debugger for years. It wasn’t perfect, but we knew our way round it. We could easily switch between writing and debugging code, debug an application server or even debug a posting routine in live and lock the whole system – anyone else do that when they first started in support? Life was good.

The RoleTailored Client Years

Then the RoleTailored Client was introduced and it felt like we were developing with one arm tied behind our backs. No debugger. You could still debug in Classic Client but the clients weren’t necessarily even running the same C/AL code – thanks to the ISSERVICETIER keyword.

I know you could find the source that the service tier was actually running, attach Visual Studio to the Server.exe process and debug the C#, but not many people wanted to do that. MESSAGE debugging was far more common. Especially entertaining if someone left a message box in live and you got a call from the customer wondering what some mysterious pop-up was about. Connoisseurs wrapped their MESSAGEs in

IF USERID = 'sa' THEN...

By NAV 2013, RTC was the only client customers could use and we had to be able to debug it. To be fair, Microsoft came up with the goods and the new debugger was better than what we used to have in the Classic Client. Especially because we could debug other sessions connected to the same service tier or the next session to connect. Ask the user to repeat the steps that led to the error and debug their session – perfect. Also great for debugging web service calls.

The Business Central Years

And then along came Business Central. The RoleTailored Client, complete with debugger, is going to be removed and we don’t quite have a replacement for everything we rely on it for. Sound familiar?

Don’t get me wrong, I love VS Code. I love the VS Code debugging experience. But how can I debug other user sessions? How can I debug web service calls?

Spare me the History Lesson, How Does it Work?

Open up launch.json and hit the Add Configuration button in the bottom right hand corner and you’ll notice a couple of new options:

  • Attach to the next client on the cloud sandbox
  • Attach to the next client on your server

Pick one of those and you’ll notice that the configuration it creates has a request value of attach.

breakOnNext determines the type of session that the debugger will be attached to: Background, WebClient or WebServiceClient.

Give the configuration a sensible name so that you’ll be able to refer to it when you attach the debugger. Attach the debugger by opening the debug pane, selecting your configuration and clicking the Start Debugging button.
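Putting that together, a minimal sketch of an attach configuration for an on-premises server might look like this – the server URL, instance name and authentication values are placeholders for whatever matches your environment:

{
  // the server, serverInstance and authentication values below are placeholders
  "name": "Attach to next web client session",
  "type": "al",
  "request": "attach",
  "server": "http://bcserver",
  "serverInstance": "BC150",
  "authentication": "UserPassword",
  "breakOnError": true,
  "breakOnNext": "WebClient"
}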

Set some breakpoints in your code and hit them, either with some activity in the web client or with a web service call.

BreakOnNext Support

Note: the help for breakOnNext states “The sandbox version only supports attaching to a WebService Client”. This seems to apply to sandbox Docker containers (e.g. from mcr.microsoft.com/businesscentral/sandbox) as well as to cloud sandboxes. You can, however, use the other breakOnNext options with an on-premises Docker image (mcr.microsoft.com/businesscentral/onprem).

Using Templates in YAML Pipelines in Azure DevOps

So far we’ve been considering how you can write a YAML pipeline that defines the steps required to build the code in a single repository. Create a .azure-pipelines.yml file, add the stages, jobs and steps and away you go. Cool.

What if you’re building multiple apps with the source code in multiple repositories though? You could just copy your pipeline definition from repo to repo. What happens when you want to make changes to the pipeline? Are you going to copy the changes here, there and everywhere?

No. You’ve got more self-respect than that. You want a single pipeline definition that is shared across the repos that need it. In which case, templates will be of interest.

Create a Template File

If you’ve got a YAML pipeline definition that already works for you, you’re probably going to want to use that as the basis of your template. Copy and paste your pipeline into a new YAML file. You’ll probably want to create a new project or repo to hold this template file.

Remove Trigger

If you’ve got a trigger section in the pipeline you’re copying from (to trigger the pipeline when changes are pushed to certain branches) you can remove that from the template file.

Convert Variables to Parameters

If you have any variables in the pipeline you will need to convert them to parameters. Use the parameters keyword…simple enough. Notice that you can still provide default values for the parameters. If parameter values are not supplied by the pipeline that is using the template, these default values will be used. For example:

parameters:
  image_name: mcr.microsoft.com/businesscentral/sandbox
  container_name: Build
  company_name: My Company
  user_name: admin
  password: P@ssword1

Any references to variables in the steps will need to be changed to refer to the parameters instead. Rather than this:

- task: PowerShell@1
  displayName: Create build container
  inputs:
    scriptType: inlineScript
    inlineScript: >
      Import-Module navcontainerhelper;
      New-NavContainer -containerName $(container_name)...

Use ${{parameters.[parameter_name]}} like this:

- task: PowerShell@1
  displayName: Create build container
  inputs:
    scriptType: inlineScript
    inlineScript: >
      Import-Module navcontainerhelper;
      New-NavContainer -containerName ${{parameters.container_name}}...

I’ve called my template file build-template.yml and the first few lines look like this:

parameters:
  image_name: mcr.microsoft.com/businesscentral/sandbox
  container_name: Build
  company_name: My Company
  user_name: admin
  password: P@ssword1
  license_file: C:\Users\james.pearson\Desktop\Licence.flf

stages:
- stage: build
  displayName: Build
  jobs:
  - job: Build
    pool:
      name: Default
    steps:
      - task: PowerShell@1    
        displayName: Create build container
        inputs:
          scriptType: inlineScript
          inlineScript: > 
            Import-Module navcontainerhelper;
            $Credential = [PSCredential]::new('${{parameters.user_name}}',(ConvertTo-SecureString '${{parameters.password}}' -AsPlainText -Force));
            ...

Change the Pipeline to Use the Template

Now you want to change the pipeline definition to use the template YAML file that you have created. Include a repository resource, specifying the name with the repository key.

The type key refers to the host of the git repo. Confusingly, ‘git’ refers to an Azure DevOps project (use ‘github’ for templates held in GitHub repos). Name is in the format Project/Repository – in my example both are called ‘Templates’. Define a ref (generally a branch or tag) in the template repo that specifies the version of the template you want.

trigger:
  - '*'

resources:
  repositories:
    - repository: templates
      type: git
      name: Templates/Templates
      ref: refs/heads/master

stages:
- template: build-template.yml@templates
  parameters:
    image_name: mcr.microsoft.com/businesscentral/sandbox
    company_name: My Company 

Templates can be used at different levels in the pipeline to specify stages, jobs, steps or variables – see here for more info. In my example the template file is specifying stages to use in the pipeline.

My pipeline simply becomes a template key beneath the stages key. The value is in the format [filename]@[repository]. The repository value here is taken from the repository key specified above. Supply parameter values with the parameters key. Any parameter values that are not supplied will take the default values from the template file.

And there you have it. A single template file that you can reuse across your different repos. Make changes to your pipeline once and have them used wherever the template is used.

YAML Multi-Stage Pipelines in Azure DevOps, Stage 2

In the previous post I introduced you to multi-stage YAML pipelines: Build/Release pipelines vs. a multi-stage pipeline, enabling the preview feature (it’s still in preview at the time of writing) and an overview of the structure of the file.

Now we’ll take a more detailed look at an example multi-stage YAML file. This is geared at building apps for Business Central but the principles are transferable to any other application you are targeting. This is the example that I talked through in my recent webinar. If you’d rather view it as a gist you can see that here.

trigger:
- '*'

pool:
  name: Default

variables:
  image_name: mcr.microsoft.com/businesscentral/sandbox
  container_name: Build
  company_name: My Company
  user_name: admin
  password: P@ssword1
  license_file: C:\Users\james.pearson.TECMAN\Desktop\Licence.flf

stages:
- stage: build
  displayName: Build
  jobs:
  - job: Build
    pool:
      name: Default
    steps:
      - task: PowerShell@1    
        displayName: Create build container
        inputs:
          scriptType: inlineScript
          inlineScript: > 
            Import-Module navcontainerhelper;
            $Credential = [PSCredential]::new('$(user_name)',(ConvertTo-SecureString '$(password)' -AsPlainText -Force));
            New-NavContainer -accept_eula -accept_outdated -containerName '$(container_name)' -auth NavUserPassword -credential $Credential -image $(image_name) -licenseFile $(license_file) -doNotExportObjectsToText -restart no -shortcuts None -useBestContainerOS -includeTestToolkit -includeTestLibrariesOnly -updateHosts
      - task: PowerShell@1
        displayName: Copy source into container folder
        inputs:
          scriptType: inlineScript
          inlineScript: >
            $SourceDir = 'C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\Source';
            New-Item $SourceDir -ItemType Directory;
            Copy-Item '$(Build.SourcesDirectory)\*' $SourceDir -Recurse -Force;
      - task: PowerShell@1
        displayName: Compile app
        inputs:
          scriptType: inlineScript
          inlineScript: >
            Import-Module navcontainerhelper;
            $SourceDir = 'C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\Source';
            $Credential = [PSCredential]::new('$(user_name)',(ConvertTo-SecureString '$(password)' -AsPlainText -Force));
            Compile-AppInNavContainer -containerName '$(container_name)' -appProjectFolder $SourceDir -credential $Credential -AzureDevOps -FailOn 'error';
      - task: PowerShell@1
        displayName: Copy app into build artifacts staging folder
        inputs:
          scriptType: inlineScript
          inlineScript: >
            $SourceDir = 'C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\Source';        
            Copy-Item "$SourceDir\output\*.app" '$(Build.ArtifactStagingDirectory)'
      - task: PowerShell@1
        displayName: Publish and install app into container
        inputs:
          scriptType: inlineScript
          inlineScript: >
            Import-Module navcontainerhelper;        
            Get-ChildItem '$(Build.ArtifactStagingDirectory)' | % {Publish-NavContainerApp '$(container_name)' -appFile $_.FullName -skipVerification -sync -install}
      - task: PowerShell@1
        displayName: Run tests
        inputs:
          scriptType: inlineScript
          inlineScript: >
            $Credential = [PSCredential]::new('$(user_name)',(ConvertTo-SecureString '$(password)' -AsPlainText -Force));
            $BuildHelperPath = 'C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\My\BuildHelper.app';
            Download-File 'https://github.com/CleverDynamics/al-build-helper/raw/master/Clever%20Dynamics_Build%20Helper_BC14.app' $BuildHelperPath;
            Publish-NavContainerApp $(container_name) -appFile $BuildHelperPath -sync -install;
            $Url = "http://{0}:7047/NAV/WS/{1}/Codeunit/AutomatedTestMgt" -f (Get-NavContainerIpAddress -containerName '$(container_name)'), '$(company_name)';
            $AutomatedTestMgt = New-WebServiceProxy -Uri $Url -Credential $Credential;
            $AutomatedTestMgt.GetTests('DEFAULT',50100,50199);
            Import-Module navcontainerhelper;
            $ResultPath = 'C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\my\Results.xml';        
            Run-TestsInBcContainer -containerName '$(container_name)' -companyName '$(company_name)' -credential $Credential -detailed -AzureDevOps warning -XUnitResultFileName $ResultPath -debugMode
      - task: PublishTestResults@2
        displayName: Upload test results    
        inputs:
          failTaskOnFailedTests: true
          testResultsFormat: XUnit
          testResultsFiles: '*.xml'
          searchFolder: C:\ProgramData\NavContainerHelper\Extensions\$(container_name)\my

      - task: PublishBuildArtifacts@1
        displayName: Publish build artifacts
        inputs:
          ArtifactName: App Package
          PathtoPublish: $(Build.ArtifactStagingDirectory)

      - task: PowerShell@1
        displayName: Remove build container
        inputs:
          scriptType: inlineScript
          inlineScript: >
            Import-Module navcontainerhelper;
            Remove-NavContainer $(container_name)
        condition: always()
- stage: Release
  displayName: Release
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment:
    displayName: Release
    pool:
      name: Default
    environment: Release
    strategy:
      runOnce:
        deploy:
          steps:               
            - task: PowerShell@1
              displayName: Copy artifacts to release directory                  
              inputs:
                scriptType: inlineScript
                inlineScript: >
                  $Path = Split-Path '$(System.ArtifactsDirectory)' -Parent;
                  $Artifact = "$Path\App Package\*.app";
                  Copy-Item $Artifact 'C:\Release\';

Build

I’ve got a simple Business Central app. I want to use the Build stage to take my source AL code and:

  • Compile it into an .app file
  • Publish and install it into a container
  • Load the test codeunits and methods into the DEFAULT test suite
  • Run the tests
  • Upload the test results to the build
  • Upload the .app file as a build artifact

Almost all of these steps are performed with a PowerShell task. I won’t talk you through the PowerShell; you can read the code in the steps and read more about the use of the navcontainerhelper module on Freddy’s blog or dig around the PowerShell posts on my blog. I will just mention that loading the test codeunits relies on our AL Build Helper app as described previously.

The final step to run is a PowerShell script to remove the container that has been created for the build. I always want this step to run, even if another step above it has failed. See the condition: always() line that takes care of that.

Release

So far this is fairly familiar territory and stuff that we’ve been through before. The more interesting part for the purposes of this post is having a second stage to run.

First notice the condition attached to the Release stage:

and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))

build.SourceBranch is one of the built-in variables that you can access in the pipeline; it holds the name of the branch that triggered the build. This condition means the Release stage will only run if the pipeline has succeeded up to this point and the pipeline was triggered by the master branch.

This is useful for a CI/CD scenario where you want to trigger the pipeline for changes to any branch (and why would you not?) but only want to Release the code when it is merged back into the master branch.

My pipeline uses a deployment job, a specific type of job which allows you to target a particular environment. The ‘Environments’ menu item is displayed when you enable the multi-stage pipelines preview feature. Including a deployment job in your pipeline will instruct Azure DevOps to automatically download the artifacts from the Build stage to the agent (see here).

The Release stage can include as many steps as you need depending on your definition of ‘releasing’ your software. It might include steps to:

  • Use PowerShell to publish the .app file to an on-prem instance of Business Central
  • Use the admin API to publish the .app file to a Business Central SaaS tenant
  • Upload the .app file to some other location: FTP, SharePoint, network path

As a very simple example my Release stage is simply going to copy the .app file that was downloaded as an artifact from the Build stage into a C:\Release folder on the build agent.

Environments

One of the good things about creating an environment and targeting it with a deployment job is that you can see at a glance which version of your software the environment is hosting.

Drill down on the environment to see details of the current and previous versions that have been deployed and all the relevant corresponding detail – the pipeline that deployed the software, the logs, commits and work items. Beautiful.

YAML Multi-Stage Pipelines in Azure DevOps, Stage 1

Let’s return to the subject of pipelines, and this time let’s talk multi-stage. What are multi-stage pipelines and why might you want to implement one in your YAML file?

Builds/Releases

With the approach that Microsoft are now calling “classic” pipelines there was a definite division between a build pipeline and a release pipeline.

A build starts with a given version of your source code (a particular commit in your git repository, say) and proceeds to define the steps that should be performed on that code to “build” it.

You decide what “build” means and define the steps as you need them. In a Business Central AL extension context we’re probably talking: compile the extension into an app file, publish and install it into a container and run some tests.

A release takes artifact(s) that have been created by a particular build and/or code from a particular repository and “releases” them. Again, you define whatever “release” actually means to you. Publish an app file into a Business Central database, upload it to SharePoint, decompress the app file and send the source code to a printer – whatever you want.

Build pipelines can still be defined in the classic, visual editor or in a YAML file. The Azure DevOps interface makes it pretty obvious which way they recommend you do this. It took me a second to spot the discreet “use the classic editor” link when creating a new pipeline.

Clicking that link, and successfully avoiding the top option on the following page (which still creates a YAML file anyway), will get you to the classic, visual editor. Select the agent that is going to run this pipeline, add one or more jobs and add one or more tasks to each job.

Even now, you’ll notice a “View YAML” link in the top right hand corner of the screen. Subtle. The term “classic” usually means something different when we’re talking about software than when we’re talking about novels. Less “masterpiece, will still be appreciated a century from now” and more “outdated, will be made obsolete and removed a few months from now”.

It’s probably a safe bet that the “classic” editor is going to go the same way as NAV’s “classic client” with its “classic reports”.

For completeness, there is still a separate classic editor for Release pipelines.

I’ve defined the build pipeline that provides the artifacts that will be released and can now define the stages and tasks involved in releasing it. At the time of writing you will still get this editor when creating a new pipeline from the Releases menu.

Multi-Stage Pipelines

Enter multi-stage pipelines. Rather than defining your build and your release tasks in separate editors you can define them in a single YAML pipeline definition.

You’ll need to enable the preview feature (from your profile menu in the top right hand corner). You’ll notice that the “Builds” option disappears from the Pipelines menu and is replaced with two new options “Pipelines” and “Environments”. Intriguing.

Now we can work with pipelines that look something like this:

trigger:
- '*'
parameters:
  image_name: a.docker.image
  container_name: my_container
stages:
- stage: build
  jobs:
  - job: Build
    pool:
      name: Default
    steps:
      (definition of the steps that are included in the build stage)
- stage: release
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment:
    pool:
      name: Default
    environment: QA
    (further definition of the steps involved in the release stage)

We’ll go into the details of a complete multi-stage YAML pipeline in another post. For now I just want to outline the structure of the file:

stages (1) -> stage (1..*) -> jobs (1) -> job (1..*) -> steps/deployment/tasks

You can include as many stages as you need to effectively manage the build and deployment of your software. Each stage can evaluate a condition expression which decides whether the stage should be run or not. In my case I only want to run the release stage if the pipeline has succeeded up to this point and the pipeline has been triggered by the master branch.

By default, stages will be run in series and will be dependent upon the previous stage. You can mix this up by defining the dependsOn key for each stage.
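For example, in this sketch (stage names invented) both release stages depend only on the build stage, so they run in parallel once build has finished rather than one after the other:

stages:
- stage: build
  jobs:
    (build jobs as before)
- stage: release_test
  dependsOn: build
  jobs:
    (steps to release to a test environment)
- stage: release_live
  dependsOn: build
  jobs:
    (steps to release to a live environment)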

Environments

You’ll also notice that my deployment job includes the environment to which I’m going to deploy my software. This will correspond to an environment that you have created from the Environments option of the Pipelines menu in Azure DevOps.

You can control how and when code is released to a given environment with stage conditions (as above) or with manual approvals.

Open the environment and select ‘Checks’ from the menu. All approvers that are entered on the following page must approve a pipeline before the deployment to the environment will proceed. The pipeline will be paused in the meantime.

Next time…

I hope that’s enough to whet your appetite to go and investigate the possibilities for yourself and see if/how you can start making use of this in your own development team.

Next time we’ll go through a more complete example of a multi-stage YAML pipeline and how it is put together and works. Until then you might like to check out the recording of the webinar that I did for Areopa webinars. If you like it, do them a favour and subscribe to the channel, thanks.