Source Code Management: Migrating to Git

This is the third post in a series about source code management. You can start here if you haven’t read the others in the series.

There we were, happy as the proverbial Larry, checking our code into TFVC, requesting code reviews, branching, merging, viewing file history, comparing versions, annotating and writing a lot of PowerShell to automate tasks with the VSTS API. We were feeling pretty great about our source code management.

What could possibly induce us to move away from this system?

Developer Isolation

Our standard practice had always been for developers to work in the same development database. We rarely needed to have multiple developers working on the same project at the same time – or, at least, not on the same objects.

As we invested in our products and grew the team we found ourselves overlapping a lot more than had been the case with customer-specific development. Dynamics NAV isn’t equipped to handle this particularly well:

  • If two developers have the same object open at the same time, both make changes and then save, whoever saves last wins. There is potential for development to be lost as soon as it has been written
  • NAV does allow you to lock objects – sort of like checking them out in source code management – but we weren’t in the habit of doing it. We wouldn’t consistently lock them when we were working on them or unlock them when we had finished working on them
    • Hardly a fair criticism of NAV you might say – we just didn’t have a discipline for using the lock functionality properly. You may well be right
    • Even so, locking an object prevents anyone else from designing it. What is the point of me carefully splitting some development into discrete tasks so that different developers can work on them in parallel if they are going to trip over each other because they happen to be modifying the same objects?
  • Only one developer can debug on a service tier at any time
    • So add more service tiers to the development server. You could, but it was already becoming a headache trying to manage all the development databases and service tiers that we’d got on our dev server. I didn’t fancy throwing any more fuel on that particular fire
  • When you export the objects from NAV to check-in to source code management you increase the likelihood that mistakes will be made
    • I export objects that I’ve changed which may include changes that you’ve made. If I’m not careful then I’ll check-in your changes as part of my changeset
    • That can be complicated to fix and then merge into another branch. Or I’ll have to revert the changeset and have a redundant pair of changesets etched into the history of the project

The solution to all of these problems was to isolate the developers from each other. We’d each create a local development environment where we could work, test and debug safe in the knowledge that no one else is monkeying with the objects.

We invested time in PowerShell scripts to automate the creation and management of these environments. Once again, the VSTS API and tf.exe were our friends. As a side benefit we’d also limited our reliance on our development server and the single point of failure danger that it had posed.

Life was good again. We could work on separate features and share the code through shelvesets and code review before checking-in. We could create a new environment of a given product for development or testing in a few minutes with our automation.


Once we’d isolated developers I was more confident defining separate tasks for different developers to work on in parallel. So I did, but as we were still sharing the same branch in TFVC we started to run into a different set of problems.

  • What if the same developer wanted to work on multiple work items at the same time?
    • This was particularly true when they’d finish the first work item and put it in for review. While they were waiting for review they’d want to crack on with the next task
    • Managing their local development environment became difficult – when they started the second task they should ideally work in a database that didn’t include the changes from the first task
    • Creating an environment per work item – while feasible – isn’t very attractive.
  • Having several code reviews open against the same branch becomes difficult to follow.
    • While we’d try to review and give feedback/approve as quickly as possible there are inevitably some that stick around for a while
    • The reviewing developer wants an environment with the latest stable code and the code under review. When there is an update to the code under review the shelveset must be replaced and downloaded and applied to the database again (a challenge in TFVC in general)

A sensible step to take is to introduce a branch per work item in TFVC. This allows unrelated changes to be isolated from each other and merged into a production branch once the code has been reviewed. I wasn’t thrilled at this prospect.

Branching in TFVC is expensive – in the sense that a branch is a full copy of the folder that it has been branched from. Even if you’ve got an entire branch downloaded into your workspace, when you create another branch from it the new branch is created on the server and you must download it separately. If you want to delete the branch – which we’d want to do once the work item was finished – you need to download the entire branch, then delete your local folder to tell the server to delete the branch.

I now know that we’d stumbled over two of the most compelling reasons to use Git rather than TFVC (or other distributed version control systems – but as we were already using VSTS, Git was the natural alternative).

In Git:

  • Isolated developers are a given. Everyone has their own copy of the whole repository
  • Branching is cheap and easy. So cheap and easy that you are encouraged to do it all the time. It is very simple to isolate changes from each other and merge them back together again at a later date
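To illustrate just how cheap: creating a branch in Git writes a single small file under `.git/refs/heads` containing the SHA of the commit it points at – nothing is copied and no server round-trip is needed. A minimal sketch in a throwaway repository (all names here are invented for the demo):

```shell
# Throwaway repository with a single (empty) commit
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# A new branch is just a pointer file under .git/refs/heads
git branch feature/isolated-change
cat .git/refs/heads/feature/isolated-change   # the SHA of the commit
git rev-parse HEAD                            # the same SHA
```

Compare that with TFVC, where the equivalent operation creates a full server-side copy of the source folder.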


It is beyond the scope of this post to compare TFVC and Git in any depth – although there will be more in the final post – but these are the key points that led us to trial it and ultimately move all our code to Git repositories.

  • Now that we were working in separate NAV databases our source code was effectively distributed among us – as opposed to centralised in a single development database.
    • This fits the ethos of Git (as a distributed version control system) much better than TFVC (as a centralised version control system)
    • Without realising it at the time we had effectively already moved to distributed version control
  • Git maintains a single working directory with the contents of the current branch – as opposed to a separate folder with a copy of all objects per branch
    • This principle is what makes branching so cheap in Git. Creating a new branch requires no more than the creation of a single text file with a pointer to the contents of that branch
    • This is a far more attractive proposition when it comes to maintaining your local development database. Don’t create a database per branch or confuse yourself trying to work in multiple branches on the same database. Instead create a single database whose objects are updated to reflect the current contents of Git’s working directory (see below) – with a little help from PowerShell
  • Code is shared through branches which are pushed to the central repository
    • Rather than through shelvesets, which are necessarily a single, self-contained set of changes and are difficult to update and re-share.
    • Code reviews (pull requests) compare the changes between two branches rather than the changes contained in a single shelveset. I am constantly delighted with the power of this concept and the tools that Azure DevOps (VSTS) provides to support it. Perhaps more of that in another post one day. Pull requests are awesome
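The single-working-directory point is worth seeing in action: checking out another branch rewrites the files in place, which is exactly why one local NAV database kept in step with the working directory is enough. A sketch with invented file names:

```shell
# Throwaway repository: one object file, changed on a second branch
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name Demo
git config user.email demo@example.com

echo "OBJECT Codeunit 50000 Demo" > COD50000.TXT
git add COD50000.TXT
git commit -qm "baseline object"

git checkout -qb feature
echo "OBJECT Codeunit 50000 Demo (changed)" > COD50000.TXT
git commit -qam "change under development"

# Switching back rewrites the file in the same working directory -
# there is no second folder to maintain
git checkout -q -
cat COD50000.TXT
```

After the final checkout the file holds the baseline version again, in the same directory on disk.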

Synchronising with Git’s Working Directory

The most important jigsaw piece in our puzzle of adopting Git was finding a smooth way to keep the NAV database and Git’s working directory in sync with one another. Otherwise we were going to see unexpected differences when exporting from NAV.

PowerShell came to the rescue and I added a bunch of functions to the module that we develop and use internally. We also use some functions from Cloud Ready Software’s modules – mainly their wrappers for standard functions that make them a little easier to call by just supplying the NAV service tier. The main functions are:

  • Build-DevEnvironmentFromGit – to create the NAV database and service tier. I won’t go into the details of how that works now. Typically we’d do this once per product as we’re going to reuse the same database from now on rather than constantly building and deleting them
  • Start-GitDev – to start the NAV service tier and import the NAV PowerShell modules from the correct build
  • Export-ModifiedObjectsToWorkingTree
    • Export objects (Modified = true) from the NAV database to individual text files in the Git directory
    • Set Modified to false and DateTime to 1st January of the current year (to minimise conflicts on the DateTime)
  • Apply-CommitsToServiceTier (-Top / -Since) – find top X commits or commits since a point in the log and apply (using Apply-CommitToServiceTier) them to the service tier
  • Apply-CommitToServiceTier – identify the objects modified by this commit and import to / delete from the NAV database as appropriate
  • Checkout-GitBranch
    • Pop a list of branches (including remote) using Out-GridView for the developer to select the target branch
    • Identify the objects that are different between the current branch/commit and the branch/commit to checkout. Import to / delete from the NAV database as appropriate
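The object-level comparison that Checkout-GitBranch (and Apply-CommitToServiceTier) needs can be had from git itself: the set of objects to import into or delete from the database is just the name-status diff between the current checkout and the target branch or commit. A sketch with plain git commands – the repository contents and branch name are invented, and the real functions wrap this logic in PowerShell:

```shell
# Throwaway repository of exported objects, with a branch under review
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name Demo
git config user.email demo@example.com

echo "OBJECT Codeunit 50000 Demo" > COD50000.TXT
echo "OBJECT Table 50000 Demo"    > TAB50000.TXT
git add -A
git commit -qm "baseline objects"

git checkout -qb review-me
echo "OBJECT Codeunit 50000 Demo (changed)" > COD50000.TXT  # modified
rm TAB50000.TXT                                             # deleted
echo "OBJECT Page 50000 Demo" > PAG50000.TXT                # added
git add -A
git commit -qm "changes under review"
git checkout -q -

# A(dded) and M(odified) files need importing into the NAV database,
# D(eleted) files need deleting from it
git diff --name-status HEAD review-me
```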

Using an appropriate combination of the above we can always keep the NAV database and the Git working directory in sync. This provides some really powerful flexibility:

  • If I want to test the code someone is including in a pull request I can just checkout that remote branch and test in my local database. I can then easily switch back to what I was doing on my own branch
  • I can create as many local branches as I like for separate tasks and easily flick between them knowing that the NAV database doesn’t have any unrelated changes hanging around

Dynamics 365 Business Central

A lot of the above has now been rendered obsolete with Dynamics 365 Business Central as we are moving away from working with NAV databases.

The learning curve has, however, been invaluable as we continue to rely heavily on Git and Azure DevOps in our development of v2 apps for Business Central.

We’ll wrap up this series with some concluding thoughts in the next post.

Source Code Management: Adopting TFVC

This is the second post in a series about source code management. If you haven’t already read the beginning of the story you can find it here.

We’d realised that we’d outgrown our system of one-developer-per-customer and ad-hoc communication between teams about ongoing changes to objects. We needed some more structure and somewhere safe to keep previous versions of objects.

Selecting a System

We evaluated a couple of systems and decided to go with Team Foundation Version Control (TFVC), hosted in Visual Studio Team Services (recently renamed Azure DevOps). The key factors:

  • It’s hosted in the cloud and therefore always available – including for developers who were working on site (something we tend not to do now but happened a lot at the time)
  • Familiar tools – developers were already using Visual Studio for RDLC report development and occasional C# development
  • Source code management is independent of NAV development
    • It doesn’t matter what version of NAV you are developing in as they all support export to/import from text files
    • It doesn’t matter where the NAV database is, you only need to be able to bring the text files back to the local workspace on your laptop
    • We did evaluate a product where previous versions of objects are saved in the NAV development database – but the thought of recreating a development database from a SQL backup and losing the history made it a non-starter
  • We could see there was a lot of scope for future improvements with work item tracking, testing and a comprehensive API
  • We’d already got some in-house experience with Team Foundation Server in our .Net development team

Our philosophy was:

  • We’d have a VSTS project for each customer project
  • A branch in that project to represent each NAV database
  • We’d check-in all vanilla objects at the start of the project and all modifications thereafter

Implementing TFVC

It was pretty hard.

The benefits are an easy sell – we’ll be able to rollback objects, we’ll know what was changed, when, by whom and link to the change request that also tells us why. We could even go live with some changes and not others, in the same file. What’s not to love about that? The practice is a little trickier.

The mindset change was the key thing to try and get right. We were so used to being able to make quick changes to objects. Some consultants were comfortable making minor changes – adding fields to forms and pages for example. Support would also make minor changes and put bug fixes in. Some customers would make their own changes as well.

For source code management to be of any use you have to be able to trust the data. For the data to be trustworthy all code changes must be checked-in, no matter how small and seemingly insignificant.

We didn’t want to put too many obstacles in the way of non-developers customising objects for our customers, after all, most likely the customer bought NAV on the strength of our ability to quickly customise it for them in the first place. However, one way or another, all code changes must make their way to TFVC.

The obvious place to start then was to incorporate TFVC into the development process as smoothly as possible:

  1. Write code in NAV test database
  2. Export object(s) to text
  3. Copy into local workspace and check-in to development branch
  4. Repeat 1-3 as necessary
  5. Merge changeset(s) from test branch to live branch
  6. Download merge changeset(s) from live branch
  7. Import into NAV live database, compile, synchronise (comments about how we ought to have been building fobs and importing them instead can be posted below if you really feel the need)

The main challenge where NAV is concerned is keeping the objects in the database and the contents of the workspace in sync. Fortunately, this was also the time that PowerShell support was being added and we were realising its power for other applications.

For step 2 we allow the developers to export all the relevant objects to a single text file, which we then split into separate files – one per object, appropriately named – in a new folder. Identical to Split-NAVApplicationObjectFile, but as a custom function so that we can support forms and dataports.
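The splitting step can be sketched in a few lines of shell: NAV object exports begin each object with a header line like `OBJECT Table 18 Customer`, so a combined file can be cut on those lines. (The object contents below are heavily abbreviated and invented, and the Type+Id file-naming scheme is an assumption for the demo – our real function names the files differently.)

```shell
tmp=$(mktemp -d)
cd "$tmp"

# An abbreviated combined export containing two objects
cat > objects.txt <<'EOF'
OBJECT Table 50000 Demo Setup
{
  PROPERTIES
  {
  }
}
OBJECT Codeunit 50001 Demo Mgt
{
  PROPERTIES
  {
  }
}
EOF

# Start a new output file at each OBJECT header, named Type + Id
mkdir objects
awk '/^OBJECT / { if (out) close(out); out = "objects/" $2 $3 ".TXT" }
     { if (out) print > out }' objects.txt

ls objects   # one file per object: Codeunit50001.TXT, Table50000.TXT
```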

We also have a PowerShell function for step 6. The developer enters the required changeset ID and PowerShell uses the API to collect all the files modified in that changeset, with contents as at that changeset version, and join them into a single text file to import into the target database. Deleted files are included with a Version List of DELETEME and a message is popped to the developer asking them to manually delete those objects from Object Designer.


Some of the concepts and terms were new to most of us and took a while to get the hang of – branching, merging, changesets, conflicts, server version, workspace version and so on – but nothing that the team couldn’t get used to with some patience and practice.

Once we were comfortable with the development process and used to crafting sensible, self-contained changesets we had the foundation to start reviewing some code. What a good day that was.

Over time we’ve used the VSTS API and tf command line commands to automate many tasks:

  • Building an environment (database and service tier) for a given TFVC branch
  • Creating deltas of changes between objects in one branch and applying those deltas to objects in another branch
  • Getting the latest version of objects in a given folder from a given branch
  • Creating deltas of the changes between two branches and packaging them into a (v1) extension
  • …a bunch of other cool stuff that isn’t really the point of this post

Suffice to say I can’t imagine us not using source code management now. All of the above only becomes possible once your code is checked-in and you can trust its content.


Eventually we reached a point where we started coming up against shortcomings of TFVC. We were struggling to use TFVC to accommodate changes in the way that we wanted to work as a team and started to learn about Git.

I’ll discuss why we decided to move to Git and why it might (or might not) be right for you in the next post…

Source Code Management: A Trilogy in Four Parts

It seems hard to believe now that we ever developed working software without using any source code management system – but we did. For a long time. And judging by the straw polls taken in sessions at NAV conferences lots of partners still do.

In this series of posts I am not intending to dispense any advice based on my own meandering experiences. I’m not Baz Luhrmann. I’ll also try to avoid the slightly self-righteous tone which often seems to accompany advice about source code management and unqualified assertions like, “If you’re using TFVC then you should migrate to Git.”

Instead, I’ll share some of our experiences and reasons for making the decisions about SCM that we did at the time. Hopefully there will be something of use in there for other dev teams at different stages in the same journey.

The stages are these:
  1. Decision to adopt an SCM system
  2. Adopting Team Foundation Version Control
  3. Switching to Git
  4. Conclusions

No Source Code Management?

It isn’t fair to say that we didn’t have any source code management in the old days. We did. The system was composed of certain expectations about how development was done and some manual checks that they were followed.

  • Development was done in the test/dev database before being moved to live
  • You could trust the Modified flag – any objects that had been changed from standard would be ticked as Modified
  • The date and time on the object would represent the last time the object was changed (therefore you could compare object datetimes between databases to judge which objects were different)
    • Developers would often check that the datetime in live was the same as test before starting development in test
  • Dated and initialled comments were added to the documentation trigger and modified lines in the code

In addition to these assumptions, generally:

  • The original developer would also put their changes live – they knew which object(s) to move
  • The developer would take a backup of the objects from the live database before importing from test
  • We were a small enough development team that we could talk to each other about changes that had been made
  • The same developer would work on the same customers
  • Support would refer cases back to developers, especially if issues arose after some objects had been moved into live

All of this is, in itself, a system. Sure, there are problems with these assumptions, the steps weren’t always followed and it doesn’t scale too well but this is a source code management system. Of sorts.

Adopting an SCM System

So why did we choose to adopt a new SCM system? The main reasons were these:

  • We’d had instances of SQL databases being deleted which contained the only copy of some work. Lots of hours’ worth of work
  • We often had occasions where it was difficult to piece together what changes had been made and why – particularly if the documentation in the objects was light/absent
    • This was particularly problematic for support developers landed with a case that starts “we’ve had some changes go live recently and now xyz isn’t working”
    • While we’d try to go back to the original developer (see above), if this wasn’t possible the support developer would have the difficult task of trying to work out what had actually changed
  • We didn’t have a reliable, single place to go for previous versions of objects – either to refer or rollback to
    • Developers would take ad-hoc backups of objects before moving them live
    • We’d have a support copy database of most customers’ objects on an internal SQL server
    • We experimented with a homemade backup system that pulled copies of objects from support databases – but not from databases hosted on-premise where development was often done
  • The team was growing – both in terms of staff and number of projects – to the point where it wasn’t practical to rely on the same developer always working on a particular project or on ad-hoc communication between developers, support and consultants to manage the objects

It became clear that we needed a central system in which to manage our development. We’d need to:

  • Back up copies of our objects
  • Track what changes had been made, when and by whom
  • Roll back objects to previous versions

Fortunately this realisation coincided with Luc van Vugt running a workshop at NAVTechdays about source code management with TFVC…