Getting Started with the Azure DevOps API

Azure DevOps is pretty sweet. Manage your code, backlog, sprints, builds – the whole caboodle. Also, it has a comprehensive REST API so you can access your data and integrate with DevOps from anywhere you like.

Ever since we started with DevOps (or VSTS, or TFS, as it was) we’ve been writing PowerShell scripts to integrate it with our Dynamics NAV development. They’ve become an indispensable part of a developer’s day.

(I’m going to refer to “Azure DevOps” as ADO from now on. I know ADO is an already familiar acronym in software development but I don’t want to type “Azure DevOps” every time and just referring to it as “DevOps” makes no sense. I’m not sure “Azure DevOps” makes much sense as a name anyway. Surely DevOps refers to the practice, not the tool you use to achieve it? Anyway…digression over.)

Authentication

The first thing you’re going to need to do is authenticate with your instance of ADO. You can create a Personal Access Token to authenticate your requests.

[Screenshot: the ADO profile menu]

Sign in to your ADO instance, click on your profile (top right) and select Security from the menu.

Click “New Token” to create a new Personal Access Token. Give it a name, I’ll call mine “Azure Barbara” (only marginally sillier than “Azure DevOps”).

There are a bunch of “scopes” (25, at the time of writing) to which you can grant this token access. You can define which scopes an API call authorised with this token should have access to. For the sake of this example, I’ll choose “Full access”. Choose an expiration date for this key and hit Create.

Your token will be created and displayed. You need to copy this token somewhere safe. This is the only opportunity you will have to view the token. If you lose it you’ll need to create a new one.

Calling the API

Now that you’ve got an access token you can go ahead and call the API. The API is well documented here: https://docs.microsoft.com/en-us/rest/api/azure/devops/core/?view=azure-devops-rest-5.0

As a test, I’ll list the team projects in my instance. Open up PowerShell…

function Create-BasicAuthHeader {
  Param(
    [Parameter(Mandatory=$true)]
    [string]$Name,
    [Parameter(Mandatory=$true)]
    [string]$PAT
  )

  # Base64-encode "name:PAT" and wrap it in a Basic authorisation header
  $Auth = '{0}:{1}' -f $Name, $PAT
  $Auth = [System.Text.Encoding]::UTF8.GetBytes($Auth)
  $Auth = [System.Convert]::ToBase64String($Auth)
  $Header = @{Authorization = ("Basic {0}" -f $Auth)}
  $Header
}

Invoke-WebRequest -Uri 'https://dev.azure.com/<ADO organisation name>/_apis/projects' -Headers (Create-BasicAuthHeader 'Azure Barbara' '<personal access token>') -Method Get

Replace <ADO organisation name> with the name of your organisation in ADO. Also put your token name and value into the script in place of Barbara.

The Create-BasicAuthHeader function creates an authentication header which is passed to Invoke-WebRequest. If all is well you’ll get some JSON back, something like this. I’ve got one project in my ADO instance called “Hello World”.

{"count":1,"value":[{"id":"<GUID>","name":"Hello World","url":"https://dev.azure.com/<my ADO instance>/_apis/projects/<project GUID>","state":"wellFormed","revision":471004199,"visibility":"private","lastUpdateTime":"2019-02-28T16:21:42.417Z"}]}

Nice. Next time we can set about something that is actually useful. To whet your appetite, these are some of the things that we use it for.

  • Finding the latest successful build for a given project and Git repo and downloading the build artefacts (the .app files that were created)
  • Reading a given file from a given project and Git repo – we use it to find app.json to download dependency apps recursively in the build process (more on that later)
  • Retrieving CAL objects that were modified by a given changeset #
  • Creating work items, iterations and other ADO entities

Working with Version Numbers in Dynamics Business Central / NAV

Specifically I’m talking about assigning version numbers to your own code and manipulating those versions in CAL / AL and PowerShell.

Version Numbering

There are lots of different systems for assigning a version number to some code. Some incorporate the date or the current year and day number within the year. Loads of background reading here if you’re interested.

The system we typically follow is:

Version number = a.b.c.d where:

  • a = major version – this is only incremented for a major refactoring or complete rewrite of the software
  • b = minor version – incremented when a significant new feature is implemented
  • c = fix – incremented for small changes and bug fixes
  • d = build – set to the ID of the build that created it in Azure DevOps

This system isn’t perfect and we don’t always follow it exactly as written. The line between what is just a fix and what is a new feature is a little blurry. We don’t run CAL code through our DevOps build process, so CAL objects don’t get a build ID in the way AL apps do. Hit the comments section and tell me how and why you version differently.

Regardless, the important thing is that you give some consideration to versioning. Above all, two different copies of your code must never go out to customers with the same version number. That matters most for AL apps: if you want to publish an updated version of an app it must have a higher version number than the one you are replacing.

Automation

There are several situations where we need to work with version numbers in code and in scripts.

  • In the build process – reading the current version from app.json and setting the last element to equal the build ID
  • In our PowerShell script that creates a new navx package from CAL code (yes, we use v1 extensions. Not now, let’s go into that some other time)
  • In upgrade code – what was the previous version of the app? Was it higher or lower than a given version?

If you are considering, like we used to, just treating version numbers as strings…don’t. Think about it:

Treated as versions, 1.10.0 is greater than 1.9.0, but compared as strings it isn’t. That led us to split the versions into two arrays and compare each element. It worked, but it was convoluted. And completely unnecessary.
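You can see the problem in a line of PowerShell:

# Strings are compared character by character, so '1' sorts before '9'
'1.10.0' -gt '1.9.0'   # False, even though 1.10.0 is the newer version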

Some bright spark in our team wondered why we can’t just use .Net’s version type. We can.

CAL

Use a DotNet variable of type Version. Construct it with the version number string. NAVAPP.GETARCHIVEVERSION returns a string that can be used.

You can use the properties of the variable to access the individual elements of the version and its methods to compare to another string (less than, less than or equal to, greater than, greater than or equal to).

Version : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
Version2 : DotNet System.Version.'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

Version.Version('1.10.0');
Version2.Version(NAVAPP.GETARCHIVEVERSION);

IF Version2.op_LessThan(Version) THEN BEGIN
  //some upgrade code that must be run when coming from an older version than 1.10.0
END;

PowerShell

Declare a variable of a given DotNet type using square brackets. Create a new version with new, Parse or TryParse. The latter expects a version variable passed by reference and returns a Boolean indicating whether a value could be assigned.

Access the elements of the version through the properties of the variable.

C:\> $Version1 = [Version]::new(1,10,0)
>> $Version2 = [Version]::new('1.9.0')
>> $Version1.CompareTo($Version2)
1

C:\> $Version = [Version]::new(1,10,0)
>> $Version.Minor
10

C:\> $Version = [Version]::new()
>> [Version]::TryParse('monkey',[ref]$Version)
False

AL

AL has a native Version datatype. As above, create a new version either from its elements or from a string. NavApp.GetArchiveVersion returns a string that can be used (for migration from v1).

To get the version of the current module (app) or of another app use NavApp.GetCurrentModuleInfo or NavApp.GetModuleInfo.

var
  Ver : Version;
  Ver2 : Version;
  DataVer : Version;
  AppVer : Version;
  ModInfo : ModuleInfo;
  ModInfo2 : ModuleInfo;
begin
  Ver := Version.Create(1,10,0);
  Ver2 := Version.Create(NavApp.GetArchiveVersion());

  if Ver > Ver2 then begin
    //some upgrade code
  end;

  //version of the current app
  NavApp.GetCurrentModuleInfo(ModInfo);
  DataVer := ModInfo.DataVersion();
  AppVer := ModInfo.AppVersion();

  //app version of the first dependency
  NavApp.GetModuleInfo(ModInfo.Dependencies().Get(1).Id(),ModInfo2); //dependencies is 1 based, not 0 based
  AppVer := ModInfo2.AppVersion();
end;

VS Code, PowerShell & Git: 5 Things

Visual Studio Code has moved quickly from “what’s that? Part of Visual Studio? No? Then why did they call it that?” to become the hub of much of my daily work. This post contains a few of the things (5 to be precise) that I’ve done to make it work better for me. Maybe you can glean something useful. Maybe you can teach me something about how you use it – post a comment.

Extensions

You can use VS Code to write JavaScript, C#, CSS, HTML and a raft of other languages, use its native support for Git and install extensions for AL (obviously), developing Azure Functions, integrating with Azure DevOps, managing Docker, writing PowerShell, adding support for TFVC…

Beautiful.

Having said that, I’m not a big fan of having lots of extensions that I only occasionally use. I’m pretty ruthless in uninstalling stuff I’m not using in Chrome and Android. VS Code is the same. If I don’t use it all the time I generally go without it. (For those of us that make apps for a living it’s a sobering thought that our prospective users are likely to be the same).

Right now I’ve got these extensions installed:

  • AL Language – every so often I need an upcoming version or a NAV 2018 version but most of the time I’ve got the one from the marketplace installed
  • Azure Account – provides some sign in magic required by other extensions
  • Azure Functions – like it sounds
  • Azure Pipelines – intellisense for YAML build definitions
  • CRS AL Language Extensions – for renaming AL files to follow the naming best practices, even though I’m not sold on the convention itself. Including the object type when the files are already in subfolders by object type, and including object IDs when we all agree we want to get rid of them and don’t care what they are as long as they’re unique, seems pretty redundant to me…but I digress
  • GitLens – add blame annotations i.e. “how did this line of code get here”, file history, compare revisions, open the file in Azure DevOps
  • PowerShell – like it sounds
  • Night Owl – a theme. Because we can! Having suffered for years with an IDE that didn’t even highlight keywords I took my time trying out different themes. I like a dark theme but didn’t quite get on with the one that comes with VS Code.

Terminal

VS Code has a built-in terminal. I use PowerShell a lot during the day to manage containers (with the navcontainerhelper module), manage Git and perform various tasks with our own module that calls the Azure DevOps REST API. It’s nice to be able to do all of that from within VS Code as well.

These ideas aren’t strictly to do with VS Code, but tweaking PowerShell and Git to make them more efficient for you.

Run as Administrator

If you’re going to use the terminal to manage Docker containers you’re going to want to run VS Code (and therefore the terminal) as administrator.

You can set this in the Advanced section of the properties of the shortcut. This will force VS Code to always open as admin.

[Screenshot: the VS Code shortcut properties dialog]

I believe Freddy K is working on some changes to the navcontainerhelper module that will remove the requirement to run the cmdlets as admin. That would be nice.

PowerShell Profile

Have PowerShell automatically execute some script on loading by editing your profile. PowerShell has a built-in $profile variable which points to the location of your .ps1 profile file.

I use that file to import the posh-git module (below) and our own TFS Tools module. You could create the file with something like this (sc is an alias for the Set-Content command):

sc $profile 'Import-Module posh-git
Write-Host "PowerShell Ready" -ForegroundColor Green'

Opening a new terminal will look like this:

[Screenshot: a new VS Code terminal showing “PowerShell Ready”]

Note: PowerShell ISE has a different profile file to PowerShell.

Posh-Git

I mostly use Git from the command line. I started using the command line rather than a GUI as I found it helped me understand what commands are actually being used – how fetch is different to pull, how to set tracking information for a branch or edit a remote.

And yes, perhaps there is a small part of it that boosts my shallow sense of “I’m a real developer, I type weird commands into a prompt rather than clicking a button on a GUI”. It’s OK to admit that. I draw the line at Vim though.

Anyway.

If you’re planning on using Git in PowerShell you’re going to want to install the posh-git module.

Install-Module posh-git

It adds some details into the prompt (see above): the branch that you are on, how it compares to the remote branch that it is tracking and the status of your index. It adds tab completion all over the place as well – indispensable.

Git Aliases

If you do start using Git from the terminal you’re probably going to find typing some of the longer commands quite tedious. For instance, git log --graph is great to get an overview of your project and has loads of switches to alter its output. I tend to use:

git log --graph --oneline --all

To show a graph of all the branches (remote as well as local) with commit details on a single line each.

[Screenshot: output of git log --graph --oneline --all]

It gets you something like the above. You can see the commits that each of the branches is pointing at, which branches each commit is included in and how work has been merged over time.

I don’t want to type the full command out each time though. Fortunately, Git doesn’t force you to. You can create an alias (see the example after this list). I have:

  • git lol – to show the above graph
  • git fap – to fetch all changes from the remote and prune any references to remote branches that no longer exist (I’ve never understood why Git doesn’t automatically remove references to remote branches that no longer exist)
  • git pff – pull and merge changes from the remote branch, as long as your branch can be fast-forwarded
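Aliases live in your Git config. Something like the following would set these up – the lol alias matches the command above, while the other two are my best guesses at the underlying commands based on the descriptions, so adjust to taste:

git config --global alias.lol "log --graph --oneline --all"
git config --global alias.fap "fetch --all --prune"
git config --global alias.pff "pull --ff-only"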

Conclusion

There are lots of opportunities – more than 5 – to enhance and tune VS Code and PowerShell to make your daily work more efficient. Check it out.

Business Central Development With CI/CD

If you follow blogs about Dynamics 365 Business Central / NAV development, attended development sessions at Directions or have seen the schedule for NAVTechDays then you may have noticed the terms “CI/CD” or “pipeline” being thrown around.

What do those terms actually refer to? And how does it affect the way we approach development?

Definitions

CI = “continuous integration”
CD = “continuous delivery” (or “continuous deployment”, if you prefer)

These are pretty old development concepts. Check out the Wikipedia entry if you want an overview and some of the history. I would summarise it like this.

Continuous integration: incorporate new development into your main development branch as soon as possible.

Continuous delivery: get that development in front of your end users as quickly as possible.

The concept of a pipeline is having a defined series of steps that new development goes through: build, test, publish and install into target environment(s), automated as much as possible.

Why?

All this talk of “as soon as possible” sounds a little reckless. Is this really a good idea?

In a nutshell, we’re trying to minimise the time between identifying some changes that the customer needs (some new feature or bug fix) and those changes actually being deployed onto the customer’s system.

We want to avoid work in progress changes hanging around for ages. You’ve probably experienced the problems that come with that:

  • The work becomes harder to merge back into the master branch as time goes by
  • Future development dependent on these changes is held up or goes ahead with the worry it will clash with work in progress
  • People start to forget, or lose interest in, why the changes were required in the first place, making testing and code review harder or less effective
  • The customer loses interest in the development and is less inclined to test or use the new development

How?

Integration

All my experience is with Azure DevOps (what used to be called Visual Studio Team Services and used to be called Team Foundation Server) but other platforms provide similar functionality.

We start by defining small, discrete work items. I don’t have a fixed rule, but if the work can’t be completed in a single sprint (say, 2 weeks) then it’s probably too big and you should split it into smaller chunks.

The developer gets to work and puts their changes in for review. Pushing those changes up to the server triggers the build pipeline. Typically this is a series of tasks performed by a build agent running on a server that you control. Azure DevOps provides several options for agents hosted by Microsoft but for now they don’t provide the option we need to build AL packages.

I won’t go into detail about our build pipeline now (there’s a rough PowerShell sketch after this list) but it includes:

  • Creating a Docker container
  • Compiling the AL source with the compiler included in the container
  • Running the automated tests (the developer should have included new tests to cover their changes)
  • Uploading the test results and the .app files (we split the product and its tests into two separate apps) as build artefacts
  • Notifying the developer of the build result
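To give a flavour of what the build agent actually runs, here is a heavily simplified sketch of those steps. It assumes the navcontainerhelper module; the container name, image and folder paths are made up, and our real pipeline does rather more (publishing artifacts, notifications and so on):

Import-Module navcontainerhelper

$ContainerName = 'build'
$Credential = Get-Credential

# Create a Docker container to build and test in
New-NavContainer -accept_eula -containerName $ContainerName `
  -imageName 'mcr.microsoft.com/businesscentral/sandbox' `
  -auth NavUserPassword -Credential $Credential -includeTestToolkit

# Compile the AL source with the compiler included in the container
Compile-AppInNavContainer -containerName $ContainerName `
  -appProjectFolder 'C:\Build\App' -credential $Credential

# Run the automated tests and keep the results to upload as a build artifact
Run-TestsInNavContainer -containerName $ContainerName `
  -credential $Credential -XUnitResultFileName 'C:\Build\TestResults.xml'

# Clean up
Remove-NavContainer -containerName $ContainerName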

By the time any of the reviewers comes to look at the code review we should already know that:

  • All the tests have passed
  • The changes can be merged into the master branch without any conflicts

Nice. We can be much more confident hitting the Approve button knowing it passes the tests and will merge neatly with master. We get the changes incorporated back into the product quickly and have a clean starting point for the next cycle.

Delivery

Delivery is a different story. At the time of writing our release process is to make the new .app package available on SharePoint. We don’t automate that.

With Dynamics NAV / BC on-premise there is scope for automating the publish & install of the new app package into target environments and tenants. That would involve the definition of a release pipeline. An agent on the target environment could collect the app package (or fob, or text file) created by the build pipeline and use PowerShell to import/compile/publish/install into one or more databases.
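For an on-premise Business Central target, the release step itself could be as small as the sketch below, assuming the standard app management cmdlets and a made-up server instance, app name and package path:

# Publish, synchronise and install the package produced by the build pipeline
Publish-NAVApp -ServerInstance BC140 -Path 'C:\Drop\MyApp_1.10.0.123.app' -SkipVerification
Sync-NAVApp -ServerInstance BC140 -Name 'My App' -Version '1.10.0.123'
Install-NAVApp -ServerInstance BC140 -Name 'My App' -Version '1.10.0.123'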

We don’t attempt this as in many cases we don’t control the environments that our apps are installed into. The servers are not ours to install agent software onto and be responsible for.

This is especially true of Business Central SaaS as we are developing apps for AppSource. No app package* makes it onto the platform until it has passed the AppSource validation process and been deployed by Microsoft on their own schedule.

*unless it is developed in the 50,000 – 99,999 object range and uploaded.

Getting Started

I hope that’s whetted your appetite to go and investigate some more. Before you do, you’ll need to be up and running with source code management and automated tests (perhaps more of that another time).

Source Code Management: Migrating to Git

This is the third post in a series about source code management. You can start here if you haven’t read the others in the series.

There we were, happy as the proverbial Larry, checking our code into TFVC, requesting code reviews, branching, merging, viewing file history, comparing versions, annotating and writing a lot of PowerShell to automate tasks with the VSTS API. We were feeling pretty great about our source code management.

What could possibly induce us to move away from this system?

Developer Isolation

Our standard practice had always been for developers to work in the same development database. We rarely needed to have multiple developers working on the same project at the same time, or at least not on the same objects.

As we invested in our products and grew the team we found ourselves overlapping a lot more than had been the case with customer specific development. Dynamics NAV isn’t equipped to handle this particularly well:

  • If two developers have the same object open at the same time, both make changes and then save, whoever saves last wins. There is potential for development to be lost as soon as it has been written
  • NAV does allow you to lock objects – sort of like checking them out in source code management – but we weren’t in the habit of doing it. We wouldn’t consistently lock them when we were working on them or unlock them when we had finished working on them
    • Hardly a fair criticism of NAV you might say – we just didn’t have a discipline for using the lock functionality properly. You may well be right
    • Even so, locking an object prevents anyone else from designing it. What is the point of me carefully splitting some development into discrete tasks so that different developers can work on them in parallel if they are going to trip over each other because they happen to be modifying the same objects?
  • Only one developer can debug on a service tier at any time
    • So add more service tiers to the development server. You could, but it was already becoming a headache trying to manage all the development databases and service tiers that we’d got on our dev server. I didn’t fancy throwing any more fuel on that particular fire
  • When you export the objects from NAV to check in to source code management you increase the likelihood that mistakes will be made
    • I export objects that I’ve changed which may include changes that you’ve made. If I’m not careful then I’ll check-in your changes as part of my changeset
    • That can be complicated to fix and then merge into another branch. Or I’ll have to revert the changeset and have a redundant pair of changesets etched into the history of the project

The solution to all of these problems was to isolate the developers from each other. We’d each create a local development environment where we could work, test and debug safe in the knowledge that no one else is monkeying with the objects.

We invested time in PowerShell scripts to automate the creation and management of these environments. Once again, the VSTS API and tf.exe were our friends. As a side benefit we’d also limited our reliance on our development server and the single point of failure danger that it had posed.

Life was good again. We could work on separate features and share the code through shelvesets and code review before checking-in. We could create a new environment of a given product for development or testing in a few minutes with our automation.

Branching

Once we’d isolated developers I was more confident defining separate tasks for different developers to work on in parallel. So I did, but as we were still sharing the same branch in TFVC we started to run into a different set of problems.

  • What if the same developer wanted to work on multiple work items at the same time?
    • This was particularly true when they’d finish the first work item and put it in for review. While they were waiting for review they’d want to crack on with the next task
    • Managing their local development environment became difficult – when they start the second task ideally they should work in a database that doesn’t include the changes from the first task
    • Creating an environment per work item – while feasible – isn’t very attractive.
  • Having several code reviews open against the same branch becomes difficult to follow.
    • While we’d try to review and give feedback/approve as quickly as possible there are inevitably some that stick around for a while
    • The reviewing developer wants an environment with the latest stable code and the code under review. When there is an update to the code under review the shelveset must be replaced and downloaded and applied to the database again (a challenge in TFVC in general)

A sensible step to take is to introduce a branch per work item in TFVC. This allows unrelated changes to be isolated from each other and merged into a production branch once the code has been reviewed. I wasn’t thrilled at this prospect.

Branching in TFVC is expensive – in the sense that it is a full copy of the folder that it has been branched from. Even if you’ve got an entire branch downloaded into your workspace, when you create another branch from it the new branch is created on the server and you must download it separately. If you want to delete the branch – which we’d want to do once the work item was finished – you need to download the entire branch, then delete your local folder to tell the server to delete the branch.

I now know that we’d stumbled over two of the most compelling reasons to use Git rather than TFVC (or another distributed version control system – but as we were already using VSTS, Git was the natural alternative).

In Git:

  • Isolated developers are a given. Everyone has their own copy of the whole repository
  • Branching is cheap and easy. So cheap and easy that you are encouraged to do it all the time. It is very simple to isolate changes from each other and merge them back together again at a later date

Git

It is beyond the scope of this post to compare TFVC and Git in any depth – although there will be more in the final post – but these are the key points that led us to trial it and ultimately move all our code to Git repositories.

  • Now that we were working in separate NAV databases our source code was effectively distributed among us – as opposed to centralised in a single development database.
    • This fits the ethos of Git (as a distributed version control system) much better than TFVC (as a centralised version control system)
    • Without realising it at the time we had effectively already moved to distributed version control
  • Git maintains a single working directory with the contents of the current branch – as opposed to a separate folder with a copy of all objects per branch
    • This principle is what makes branching so cheap in Git. Creating a new branch requires no more than the creation of a single text file with a pointer to the contents of that branch
    • This is a far more attractive proposition when it comes to maintaining your local development database. Don’t create a database per branch or confuse yourself trying to work in multiple branches on the same database. Instead create a single database whose objects are updated to reflect the current contents of Git’s working directory, with PowerShell doing the work (see below)
  • Code is shared through branches which are pushed to the central repository
    • Rather than through shelvesets which are necessarily a single, self-contained set of changes which are difficult to update and re-share.
    • Code reviews (pull requests) compare the changes between two branches rather than the changes contained in a single shelveset. I am constantly delighted with the power of this concept and the tools that Azure DevOps (VSTS) provides to support it. Perhaps more of that in another post one day. Pull requests are awesome

Synchronising with Git’s Working Directory

The most important jigsaw piece in our puzzle of adopting Git was to find a smooth way to keep the NAV database and Git’s working directory in sync with one another. If not, we were going to see some unexpected differences when exporting from NAV.

PowerShell came to the rescue and I added a bunch of functions to the module that we develop and use internally. We also use some functions from Cloud Ready Software’s modules – mainly their wrappers for standard functions that make them a little easier to call by just supplying the NAV service tier. The main functions are listed below (with a stripped-down sketch of the export step after the list):

  • Build-DevEnvironmentFromGit – to create the NAV database and service tier. I won’t go into the details of how that works now. Typically we’d do this once per product as we’re going to reuse the same database from now on rather than constantly building and deleting them
  • Start-GitDev – to start the NAV service tier and import the NAV PowerShell modules from the correct build
  • Export-ModifiedObjectsToWorkingTree
    • Export objects (Modified = true) from the NAV database to individual text files in the Git directory
    • Set Modified to false and DateTime to 1st January of the current year (to minimise conflicts on the DateTime)
  • Apply-CommitsToServiceTier (-Top / -Since) – find top X commits or commits since a point in the log and apply (using Apply-CommitToServiceTier) them to the service tier
  • Apply-CommitToServiceTier – identify the objects modified by this commit and import to / delete from the NAV database as appropriate
  • Checkout-GitBranch
    • Pop a list of branches (including remote) using Out-GridView for the developer to select the target branch
    • Identify the objects that are different between the current branch/commit and the branch/commit to checkout. Import to / delete from the NAV database as appropriate
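To make that a little more concrete, here is a stripped-down sketch of what the export step might look like. It leans on the standard NAV model tools (Export-NAVApplicationObject, Split-NAVApplicationObjectFile, Set-NAVApplicationObjectProperty); the database name and working tree path are placeholders, the date format may need adjusting for your locale, and our real function wraps this in rather more error handling:

function Export-ModifiedObjectsToWorkingTree {
  Param(
    [string]$DatabaseName = 'MyProduct-DEV',
    [string]$WorkingTree = 'C:\Git\MyProduct\Objects'
  )

  # Export all objects with Modified = Yes to a single temporary text file
  $TempFile = Join-Path $env:TEMP 'modified.txt'
  Export-NAVApplicationObject -DatabaseName $DatabaseName -Path $TempFile -Filter 'Modified=Yes' -Force

  # Split that file into one text file per object in the Git working tree
  Split-NAVApplicationObjectFile -Source $TempFile -Destination $WorkingTree -Force

  # Reset Modified and normalise the DateTime to 1st January to minimise conflicts
  Set-NAVApplicationObjectProperty -TargetPath (Join-Path $WorkingTree '*.txt') `
    -ModifiedProperty No -DateTimeProperty ('01-01-{0}' -f (Get-Date).Year)
}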

Using an appropriate combination of the above we can always keep the NAV database and the Git working directory in sync. This provides some really powerful flexibility:

  • If I want to test the code someone is including in a pull request I can just checkout that remote branch and test in my local database. I can then easily switch back to what I was doing on my own branch
  • I can create as many local branches as I like for separate tasks and easily flick between them knowing that the NAV database doesn’t have any unrelated changes hanging around

Dynamics 365 Business Central

A lot of the above has now been rendered obsolete with Dynamics 365 Business Central as we are moving away from working with NAV databases.

The learning curve has, however, been invaluable as we continue to rely heavily on Git and Azure DevOps in our development of v2 apps for Business Central.

We’ll wrap up this series with some concluding thoughts in the next post.