This is the second post in a series about source code management. If you haven’t already read the beginning of the story you can find it here.
We’d realised that we’d outgrown our system of one-developer-per-customer and ad-hoc communication between teams about ongoing changes to objects. We needed some more structure and somewhere safe to keep previous versions of objects.
Selecting a System
We evaluated a couple of systems and decided to go with Team Foundation Version Control (TFVC), hosted in Visual Studio Team Services (recently renamed Azure DevOps). The key factors:
- It’s hosted in the cloud and therefore always available – including for developers who were working on site (something we tend not to do now but happened a lot at the time)
- Familiar tools – developers were already using Visual Studio for RDLC report development and occasional C# development
- Source code management is independent of NAV development
- It doesn’t matter what version of NAV you are developing in as they all support export to/import from text files
- It doesn’t matter where the NAV database is, you only need to be able to bring the text files back to the local workspace on your laptop
- We did evaluate a product where previous versions of objects are saved in the NAV development database – but the thought of recreating a development database from a SQL backup and losing the history made it a non-starter
- We could see there was a lot of scope for future improvements with work item tracking, testing and a comprehensive API
- We’d already got some in-house experience with Team Foundation Server in our .Net development team
Our philosophy was:
- We’d have a VSTS project for each customer project
- A branch in that project to represent each NAV database
- We’d check-in all vanilla objects at the start of the project and all modifications thereafter
Putting it into practice was pretty hard.
The benefits are an easy sell – we’ll be able to roll back objects, and we’ll know what was changed, when, by whom, with a link to the change request that also tells us why. We could even go live with some changes and not others, in the same file. What’s not to love about that? The practice is a little trickier.
The mindset change was the key thing to try and get right. We were so used to being able to make quick changes to objects. Some consultants were comfortable making minor changes – adding fields to forms and pages for example. Support would also make minor changes and put bug fixes in. Some customers would make their own changes as well.
For source code management to be of any use you have to be able to trust the data. For the data to be trustworthy all code changes must be checked-in, no matter how small and seemingly insignificant.
We didn’t want to put too many obstacles in the way of non-developers customising objects for our customers, after all, most likely the customer bought NAV on the strength of our ability to quickly customise it for them in the first place. However, one way or another, all code changes must make their way to TFVC.
The obvious place to start then was to incorporate TFVC into the development process as smoothly as possible:
1. Write code in the NAV test database
2. Export the object(s) to text
3. Copy into the local workspace and check-in to the development branch
4. Repeat 1-3 as necessary
5. Merge changeset(s) from the test branch to the live branch
6. Download the merge changeset(s) from the live branch
7. Import into the NAV live database, compile, synchronise (comments about how we ought to have been building fobs and importing them instead can be posted below if you really feel the need)
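The TFVC side of steps 3, 5 and 6 can be sketched with tf command line commands along these lines. The project paths, changeset numbers and comments are made up for illustration – substitute your own branches:

```shell
# Step 3: check the exported text files in to the development (test) branch
tf add *.TXT
tf checkin /comment:"CR123: add freight fields to sales order"

# Step 5: merge a specific changeset from the test branch to the live branch
tf merge /version:C4711~C4711 $/CustomerProject/TEST $/CustomerProject/LIVE /recursive
tf checkin /comment:"Merge changeset 4711 to LIVE"

# Step 6: bring the merged versions down into the live branch workspace
tf get $/CustomerProject/LIVE /recursive
```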
The main challenge where NAV is concerned is keeping the objects in the database and the contents of the workspace in sync. Fortunately, this was also the time that PowerShell support was being added to NAV and we were realising its power for other applications.
For step 2 we let the developers export all the relevant objects to a single text file, then split it into separate files – one per object, named appropriately – in a new folder. It’s identical to Split-NAVApplicationObjectFile, but written as a custom function so that we can support forms and dataports.
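Our function is PowerShell, but the splitting logic itself is simple text processing. As an illustration only, here is a minimal Python sketch of the same idea – each object in a NAV export starts with a header line like `OBJECT Table 18 Customer`, so you slice the file at those headers. The file-naming convention and type abbreviations here are assumptions, not the post’s actual code:

```python
import re
from pathlib import Path

# NAV object type abbreviations used for the per-object file names
# (this naming convention is an assumption; adjust to your own standard)
TYPE_ABBREVIATIONS = {
    "Table": "TAB", "Form": "FOR", "Report": "REP", "Dataport": "DAT",
    "Codeunit": "COD", "XMLport": "XML", "MenuSuite": "MEN", "Page": "PAG",
    "Query": "QUE",
}

def split_nav_export(export_file: str, target_dir: str) -> list:
    """Split a single NAV object export into one text file per object.

    Each object in the export starts with a header line such as
    'OBJECT Table 18 Customer'; everything up to the next header line
    belongs to that object.
    """
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    # NAV exports objects in an OEM codepage; cp437 is assumed here
    text = Path(export_file).read_text(encoding="cp437")
    headers = list(re.finditer(r"(?m)^OBJECT (\w+) (\d+)", text))
    written = []
    for i, match in enumerate(headers):
        obj_type, obj_id = match.group(1), match.group(2)
        # The object's text runs from this header to the next one (or EOF)
        end = headers[i + 1].start() if i + 1 < len(headers) else len(text)
        abbrev = TYPE_ABBREVIATIONS.get(obj_type, obj_type.upper()[:3])
        out_path = target / f"{abbrev}{obj_id}.TXT"
        out_path.write_text(text[match.start():end], encoding="cp437")
        written.append(out_path)
    return written
```

The point of splitting one-object-per-file is that TFVC can then track history, diffs and merges per object rather than per monolithic export.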
We also have a PowerShell function for step 6. The developer enters the required changeset ID and PowerShell uses the API to collect all the files modified in that changeset, with their contents as at that changeset version, and joins them into a single text file to import into the target database. Deleted files are included with a Version List of DELETEME, and a message prompts the developer to manually delete those objects from Object Designer.
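Again the real function is PowerShell against the VSTS API, but the join-and-flag step can be sketched in a few lines. Assuming you have already retrieved the file contents for the changeset (and, for deletions, the last version before deletion), a Python illustration might look like this – the function name and parameters are hypothetical:

```python
import re

def join_changeset_files(objects, deleted_objects):
    """Join per-object text files back into one NAV import file.

    `objects` and `deleted_objects` are lists of object-file contents
    (strings). Deleted objects are still included in the import file,
    but their Version List property is overwritten with DELETEME so the
    developer can spot and remove them in Object Designer after import.
    """
    parts = list(objects)
    for content in deleted_objects:
        # Rewrite the Version List line inside the OBJECT-PROPERTIES block
        flagged = re.sub(r"(?m)^(\s*Version List=).*;$", r"\1DELETEME;", content)
        parts.append(flagged)
    if deleted_objects:
        print(f"{len(deleted_objects)} object(s) flagged DELETEME - "
              "delete them manually from Object Designer after import.")
    return "\n".join(parts)
```

The DELETEME flag works because NAV’s text import cannot delete objects for you; importing the old object with a conspicuous Version List is a cheap way to hand the developer an unmissable to-do list.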
Some of the concepts and terms were new to most of us and took a while to get the hang of – branching, merging, changesets, conflicts, server version, workspace version and so on – but it was nothing the team couldn’t get used to with some patience and practice.
Once we were comfortable with the development process and used to crafting sensible, self-contained changesets we had the foundation to start reviewing some code. What a good day that was.
Over time we’ve used the VSTS API and tf command line commands to automate many tasks:
- Building an environment (database and service tier) for a given TFVC branch
- Creating deltas of changes between objects in one branch and applying those deltas to objects in another branch
- Getting the latest version, from a given branch, of the objects that exist in a given folder
- Creating deltas of the changes between two branches and packaging them into a (v1) extension
- …a bunch of other cool stuff that isn’t really the point of this post
Suffice it to say, I can’t imagine us not using source code management now. All of the above only becomes possible once your code is checked-in and you can trust its content.
Eventually we reached a point where we started coming up against the shortcomings of TFVC. We were struggling to make TFVC accommodate changes in the way we wanted to work as a team, and started to learn about Git.
I’ll discuss why we decided to move to Git and why it might (or might not) be right for you in the next post…