
MultiGitRepository


Using a single Git repository is certainly more comfortable than working with multiple Git repositories. On the other hand, distributed development can hardly be performed in a single repository (unless you believe in a single Blockchain for all the source code on the planet). How does one orchestrate multiple Git repositories to work together? That is the thousand-dollar question various teams seek an answer to! For example, Robert Munteanu gave a talk about it at GeeCON 2017 in Prague. Let's assume we have a project split into multiple Git repositories. What are the options?


Remember non-Distributed Version Control Systems?

There used to be a time when people were afraid of distributed version control systems like Mercurial or Git. Users of CVS or Subversion couldn't understand how one could develop and commit in parallel without integrating into the tip of the development branch! If each developer or team of developers has its own tip, where is the truth?

These days we know where the truth is: there is a master (integration) repository somewhere out there and whatever its tip is, that is the truth. There can of course be multiple repositories, people are free to fork GitHub repositories like crazy, and some may even agree that one of the forks is the important one. Yet, unless the fork overtakes the original repository in the minds of the majority of developers, the truth will always remain in the original repository.

The situation with multiple repositories isn't that different. History repeats itself on a new level. It is just necessary to explain, even to users of a single Mercurial or Git repository, that there is nothing to be afraid of!

Gates for Correctness

A typical GitHub workflow uses pull requests and some integration with Travis or another form of ContinuousIntegration, usually well integrated with the review tool. As soon as one creates a PR, the continuous builder runs the tests and marks the PR as valid or broken. This greatly contributes to the stability of the master branch - it is almost impossible to break it by merging in PRs.

On the other hand, please note that before your PR gets merged it may contain as many broken (e.g. not fully correct) commits as you wish. It is quite common that one makes changes to the system, pushes them to a branch of one's own repository fork and creates a PR, just to find out that while the functionality is OK, there are other things that need to be polished (formatting and proper spacing being my favorite). One then adds a few more commits to polish the non-semantical problems of the code.

What I'd like to point out is: it is absolutely OK to have commits which are broken, as long as they get fixed before merging into the master branch. Now we are going to transplant this observation to the MultiGitRepository case.

Single Integration Repository

Just like there is the master branch in a classical Git repository where all the commits ultimately have to end up (be merged into), there has to be such an integration point in the MultiGitRepository scenario as well. That means there has to be a single integration repository which references all the other repositories and identifies the exact commits at which they were integrated together.

One can use Git submodules for that, but other approaches that uniquely identify the changesets work as well (GraalVM is using a tool called MX which keeps these references in a special file called suite.py). All that matters is to have a single version of the truth - a single place that uniquely and completely identifies all the source code spread among all the repositories.
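
For illustration, here is a minimal sketch of how an integration repository could pin exact commits of the other repositories with Git submodules. The repository names and URLs are hypothetical; the sketch only shows the principle of recording one unique reference per repository.

 # Sketch: an integration repository that pins two hypothetical
 # slave repositories ("core" and "tools") at exact commits via Git submodules.
 import subprocess

 def git(*args, cwd="."):
     # Run a git command and fail loudly if it does not succeed.
     subprocess.run(["git", *args], cwd=cwd, check=True)

 git("init", "integration")
 # Each submodule records its URL in .gitmodules and its exact commit
 # in the integration repository's tree - a single version of the truth.
 git("submodule", "add", "https://example.com/core.git", "core", cwd="integration")
 git("submodule", "add", "https://example.com/tools.git", "tools", cwd="integration")
 git("commit", "-m", "Pin core and tools at their current commits", cwd="integration")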

As in the single repository case, it is good to have a gate: an automated check that verifies, for every PR to be merged into the master branch of the integration repository, that everything is still OK and consistent. Such a Travis or other ContinuousIntegration test checks out all the dependent repositories at their appropriate revisions (which are stored somewhere in the integration repository) and runs the tests. If they pass, the PR is eligible for being merged. That guarantees the master branch of the integration repository is always correct.
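
Roughly, such a gate can be sketched like this (it is not the actual setup of any project; the submodule names and the run_tests.sh script are assumptions): check out every referenced repository at its recorded commit and run its tests.

 # Sketch of a gate executed for every PR against the integration repository.
 import subprocess
 import sys

 # Bring all submodules to exactly the commits recorded in the integration repository.
 subprocess.run(["git", "submodule", "update", "--init", "--recursive"], check=True)

 # Run the test suite of every referenced repository; any failure marks the PR as broken.
 for repo in ["core", "tools"]:  # hypothetical submodule names
     result = subprocess.run(["./run_tests.sh"], cwd=repo)
     if result.returncode != 0:
         print("Gate failed in " + repo + "; this PR must not be merged.")
         sys.exit(1)

 print("Gate passed - the PR is eligible for merging.")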

"What happens in the individual repositories meanwhile?" may be your question. Well, anything. Things may even get broken there, but please note that this was also the case in the single repository setup. There could be broken commits in the meantime - all that mattered was to fix them before integrating. The same applies to the MultiGitRepository case: all that matters is that before the changes from a single repository get integrated (which means updating the appropriate commit references in the integration repository, creating a PR and merging it into the master branch of the integration repository), they are correct. And they have to be correct, as we have a gate in the integration repository which would refuse our PR otherwise!
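
Assuming submodules are the chosen reference mechanism (and with hypothetical names), integrating a change from a slave repository boils down to moving its recorded commit forward in the integration repository and sending that as a PR:

 # Sketch: update the integration repository's reference to the "core"
 # submodule so that it points at a newer, already pushed commit.
 import subprocess

 def git(*args, cwd="."):
     subprocess.run(["git", *args], cwd=cwd, check=True)

 git("checkout", "-b", "update-core")          # branch in the integration repository
 git("fetch", "origin", cwd="core")            # fetch the new commits of the slave repository
 git("checkout", "origin/master", cwd="core")  # the commit we want to integrate
 git("add", "core")                            # record the new reference (gitlink)
 git("commit", "-m", "Update core to its latest master")
 git("push", "origin", "update-core")          # then open a PR against master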

Of course individual teams working on the non-integration slave repositories are encouraged to run tests and have their own gates. However, such tests give just a hint; they aren't the ultimate source of truth. Just like developers working on a branch of a single repository are advised to execute tests before making commits, yet cannot expect such tests to guarantee their code will merge into the master branch without any changes, so too in the MultiGitRepository case: regardless of what happens in your slave repository, nothing can be guaranteed with respect to the integration repository.

Only when the final PR in the integration repository gets merged can one claim that we have a new version of the truth, which has just moved forward.

Always Correct vs. Eventually Correct

For a long time I was proposing usage of a lazy MultiGitRepository scheme: let anything happen in the individual repositories and concentrate only on the final gate check in the integration repository. Once the changes pass the gate and get merged, everything has to be OK. Clearly a win-win situation, I thought. However, there must be some cultural aspect of this lazy verification which prevented my colleagues from accepting it as a solution. I am still not sure what the problem was, as in my opinion it mimics the single repository behavior - anything can happen on branches and forks, even broken commits are allowed - all that matters is that the problems get fixed before the PR that contains them gets merged into master.

Possibly the biggest psychological problem is that one can integrate into the master of one of the non-integration slave repositories and then find out that such a change cannot get into the integration repository. There is no rational explanation why that should be a problem: the master branch in Git is just a name for a commit that most users of the repository treat as the tip of development. There is nothing special about it. If it is broken, you can add more commits on top of it to fix whatever needs to be fixed, or you can ignore the last few commits and assign the name master to some other commit and try again. In the end (e.g. when some new commit from your slave repository successfully passes the integration repository gate) it has to be correct - i.e. the model leads to eventually correct code.
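
If moving the name feels scary, remember how cheap it is in Git (a sketch only; whether you may force push a published master is a matter of team policy):

 # Sketch: "master" is just a movable pointer. If its tip turns out to be
 # unintegrable, either fix it with more commits on top, or point the name
 # at some other commit and try again.
 import subprocess

 def git(*args):
     subprocess.run(["git", *args], check=True)

 git("checkout", "master")
 git("reset", "--hard", "HEAD~2")   # move master e.g. two commits back
 # (followed by a force push if the branch was already published - which is
 #  exactly the step people find psychologically uncomfortable)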

There is, however, a fix for this psychological problem that has recently been implemented in one of the GraalVM teams and which seems to overcome the barrier. It modifies the ContinuousIntegration builder of the slave repository to take the tip of the integration repository, update it with the commit from the slave and perform all the gate tests necessary for the integration. Only if these tests are OK can the PR be integrated - this time into both repositories at once - i.e. the master branches are always correct. This kind of eager check for correctness seems to be more acceptable among my colleagues.
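
A rough sketch of such an eager check (the names, the PR_BRANCH variable and run_gate.sh are assumptions, not the real GraalVM setup): the slave repository's CI clones the tip of the integration repository, substitutes the PR's commit for the recorded one and runs the full integration gate before anything gets merged.

 # Sketch of the eager check run by the slave repository's CI for every PR.
 import os
 import subprocess

 def run(*args, cwd="."):
     subprocess.run(args, cwd=cwd, check=True)

 pr_branch = os.environ["PR_BRANCH"]  # provided by the CI system (assumption)

 run("git", "clone", "https://example.com/integration.git")
 run("git", "submodule", "update", "--init", "--recursive", cwd="integration")
 # Point the "core" submodule (the repository this PR belongs to) at the PR's commit.
 run("git", "fetch", "origin", pr_branch, cwd="integration/core")
 run("git", "checkout", "FETCH_HEAD", cwd="integration/core")
 # Run the same gate the integration repository itself would run.
 run("./run_gate.sh", cwd="integration")  # hypothetical gate script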

However, there is a scalability problem. When you sign up for the always correct approach you need to run the integration repository tests for every PR in each slave repository. Soon you may find out that you are running out of computation capacity. That isn't really surprising: most of the commits are just locally important and going through overall testing for each of them is clearly a waste of resources. As such you are likely to seek policies that slowly turn always correct checks towards eventually correct ones. The eventually correct checks can certainly scale way better: for example you may run them just once a day, or once a week - of course under the assumption that you can culturally accept the uncertainty that before your changes reach the integration repository, they may be completely broken...
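
One such policy can be sketched as a scheduled (say nightly) batch integration instead of a per-PR eager check: move every reference to the latest tip of its slave repository, run the gate once and integrate the whole batch only if it passes (names and scripts are again hypothetical).

 # Sketch of a nightly batch integration in the integration repository.
 import subprocess

 def git(*args, cwd="."):
     subprocess.run(["git", *args], cwd=cwd, check=True)

 for repo in ["core", "tools"]:                  # hypothetical submodule names
     git("fetch", "origin", cwd=repo)
     git("checkout", "origin/master", cwd=repo)  # latest tip of the slave repository
     git("add", repo)                            # record the new reference

 if subprocess.run(["./run_gate.sh"]).returncode == 0:
     git("commit", "-m", "Nightly update of all slave repositories")
     git("push", "origin", "HEAD:master")        # or open a PR, depending on policy
 else:
     print("Nightly gate failed - the integration repository keeps its previous, correct references.")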

Single vs. Multi: Where's the difference?

It is 2018 and we, developers, have already learned to work with distributed version control systems like Git. Time to learn to work in a MultiGitRepository setup too! It is not that hard; in fact, it perfectly matches what we have been taught so far. We just need an integration repository and to bring our expectations to a new level:


 Action                   | Single Git Repository                       | MultiGitRepository
 final commit destination | master branch                               | master branch in the integration repository
 request to integrate     | PR targeting master branch                  | PR updating the reference to a slave repository in the integration one
 temporary work           | done on branches or in forks                | anything done in slave repositories before the integration repository references it
 ultimate gate            | runs before a PR is merged to master branch | runs before the reference to a slave is updated in the integration repository

Don't be afraid to work in a MultiGitRepository setup. With a single integration repository it is not complicated at all!
