Who we are

We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't cringe with huge binaries and repos. We also develop the GUIs, mergetools and everything needed to give you the full version control stack.

If you want to give it a try, download it from here.

We also code SemanticMerge, and the gmaster Git client.

P4 break

Friday, January 27, 2012 Ma Nu 3 Comments

Are you confined in Perforce? Do you want to escape from it?

Materials

We need some tools to succeed in our adventure!

  1. Python 2.7 (http://www.python.org/getit/releases/2.7/)
  2. P4PythonLib 2010.1 for Python 2.7 (http://public.perforce.com:8080/guest/sven_erik_knop/P4Pythonlib/bin/?ac=83)
  3. Bzr Standalone (http://launchpad.net/bzr/2.4/2.4.2/+download/bzr-2.4.2-1-setup.exe)
  4. Bzr Python Based (http://launchpad.net/bzr/2.4/2.4.2/+download/bzr-2.4.2-1.win32-py2.7.exe)
  5. Git for windows (http://msysgit.googlecode.com/files/Git-1.7.8-preview20111206.exe)
  6. P4-fast-export.py (bzrp4) (http://dl.dropbox.com/u/2974293/trunk.rar)
  7. I assume you have the P4 client and server.

Dangers

It’s not going to be a Boy Scout trip… some brave people failed.

  1. If the Bzr and P4PythonLib installers are not able to find the Python installation directory, please review the following link: (http://selfsolved.com/problems/setuptools-06c11-fails-to-instal/s/63)
  2. If p4-fast-export.py fails with a Git error, follow this: (http://stackoverflow.com/questions/5299122/unable-to-import-with-git-p4-on-windows)
The plan

  1. Install all the materials; just install them, you don’t need to open or configure anything yet.
  2. Place the http://dl.dropbox.com/u/2974293/trunk.rar content inside a directory called “bzrp4” under the "C:\Program Files\Bazaar\plugins" directory; the path to the Python migration file should look like this: “C:\Program Files\Bazaar\plugins\bzrp4\p4-fast-export.py”.
  3. Customize the “setup_env.bat” parameters with your environment info, adapting the Git path, P4PORT, and the Perforce server path. Finally, run it.
  4. Open a command line window and run: “python p4-fast-export.py //my/repo/path@all > p4fe.dat”. If you are strong enough you can review the command help. Make sure you are using the Python 2.7 executable.
  5. Create a temporary directory, change your command line directory to it and run: “git init .”, “type p4fe.dat | git fast-import” and then “git fast-export --all --tag-of-filtered-object=drop --signed-tags=strip > gitFe.dat”.
  6. The last step is “cm fast-import p4ImportedRep gitFe.dat”. The whole sequence is consolidated in the sketch below.
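
Putting the whole plan together, here is a consolidated sketch of the command sequence (the migration directory, the //my/repo/path spec and the p4ImportedRep name are illustrative; remember to run setup_env.bat first):

    :: Work from a temporary directory (path is illustrative)
    mkdir C:\temp\p4migration
    cd /d C:\temp\p4migration

    :: 1) Export the Perforce history (use the Python 2.7 executable)
    python "C:\Program Files\Bazaar\plugins\bzrp4\p4-fast-export.py" //my/repo/path@all > p4fe.dat

    :: 2) Replay it into a temporary Git repository and re-export it
    git init .
    type p4fe.dat | git fast-import
    git fast-export --all --tag-of-filtered-object=drop --signed-tags=strip > gitFe.dat

    :: 3) Import the Git stream into Plastic SCM
    cm fast-import p4ImportedRep gitFe.dat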

If everything works fine you will have your repository inside Plastic SCM!

Survivors?


Manuel Lucio
I'm in charge of the Customer Support area.
I deal with complex setups, policies and working methodologies on a daily basis.
Prior to taking full responsibility for support, I worked as a software engineer. I have been in charge of load testing for quite some time, so if you want to know how well Plastic compares to SVN or P4 under a really heavy load, I'm your guy.
I like to play with Arduino GPS devices, mountain biking and playing tennis.
You can find me hooked to my iPhone, skate-boarding or learning Korean... and also here @mrcatacroquer.



CmdRunner power!

Wednesday, January 25, 2012 Ma Nu 0 Comments

The CmdRunner library is a tiny C# project that allows you to create porcelain applications on top of the strength of the cm shell.

What is the "cm shell" feature?
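
The post is cut short here, so as a quick, illustrative answer (a sketch, not part of the original post): “cm shell” starts a single long-lived cm process and lets you type commands interactively, avoiding the client start-up cost on every call; this is what CmdRunner drives from C# code. A session looks roughly like this (the prompt and output are illustrative):

    $ cm shell
    status
    ... pending changes listed here ...
    checkin -c "fix null reference in merge dialog"
    exit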




More on recursive merge strategy

Friday, January 20, 2012 Pablo Santos 10 Comments

NB: This article has been updated throughout the years. It's the continuation of this initial post explaining how recursive merge works. The last update was done in December 2018.

Example: Only one ancestor is the right option

Recursive merge is not only good for “criss-cross” merge situations but also for more “regular” ones where merge history simply gets complicated. I’d like to go back to the original example and explain it now in greater detail:
We’re going to merge from changeset 5 to changeset 6. In this diagram (another explanation using code will come later) we simply say the “content” of the file is A, B or C, but it applies to lines of code (containing important changes, bug fixes and so on).

DAG format

Just in case anyone has trouble understanding that the green lines are merges and they’re rendered from source to destination, here’s an alternative diagram:

Common ancestors

In order to run a merge, you need to identify the “source”, the “destination” and the nearest common ancestor. In this case the “src” and “dst” are clear but the problem is the “common ancestor” because there’s more than one candidate.
If you look carefully you’ll see that both changesets “2” and “3” can be taken as common ancestors.
A non-recursive merge algorithm will only use one of them, and depending on the “choice”, the result can be quite different.

Selecting “3” as common ancestor

If the algorithm chooses “3” (as we explained in our previous post, this is the choice taken by Hg, and it would also be the one taken by Plastic’s former algorithm, prior to the “merge recursive” implementation) then the following situation will happen:
It will lead to a manual merge, where the user will be forced to choose.

Selecting “2” as common ancestor – the lucky shot

If the algorithm uses “2” as common ancestor, then the merge will be between contents “B”, “B” and “C”, so, since two contributors are the same, the result will be “C”.
It is the expected result, as we will see later, and it is a perfectly valid choice. The problem is that there’s no good, consistent way to choose “2”. This time the result matches the expected one, but sometimes this choice simply leads to an incorrect automatic result, which is the real problem and the real reason why “recursive” is implemented by Git and Plastic.
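
You can reproduce this three-way result with any textual merge tool. For instance, with GNU diff3 (available in Git Bash, for instance; a minimal sketch, the file names are illustrative):

    $ printf 'B\n' > src ; printf 'B\n' > base ; printf 'C\n' > dst
    $ diff3 -m src base dst
    C

Since the source and the base are identical, the destination’s change wins and the merge resolves automatically to “C”.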

The recursive option

Again, here’s how “recursive” works: since there are two ancestors at the same distance, it will “merge them”, creating a new virtual ancestor that will be used as the “base” for the final merge between the merge candidates “5” and “6”:
The following image depicts the first step, the calculation of the “virt” common ancestor (the intermediate one):
And then the second step, where the result is calculated automatically, this time without user intervention:
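
Git works the same way. When two branches have more than one best common ancestor you can list them with git merge-base, and the default recursive strategy merges them into a virtual base on its own (a sketch, assuming a repo where branchA and branchB have a criss-cross history; the hashes are illustrative):

    $ git merge-base --all branchA branchB
    2f9c1a...   (candidate ancestor "2")
    8d3e7b...   (candidate ancestor "3")
    $ git checkout branchA
    $ git merge branchB
    Merge made by the 'recursive' strategy.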

Where is the huge difference?

Comparing the recursive and non-recursive algorithms in this example, you might only see that the recursive one is able to avoid a manual conflict because it can find out how it was solved before. Is it such a big difference? Just avoiding one manual merge? The real world has the answer: try a "real" merge involving several hundred files and the answer is clear: usable if you don't have to solve the conflicts vs. not usable.

Wrap up

We still have a ton of posts to write to share the particular cases where having recursive merge saves the day, so we’ll try to move forward one by one in the coming weeks.

Pablo Santos
I'm the CTO and Founder at Códice.
I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
I had the opportunity to see teams from many different industries at work while I helped them improve their version control practices.
I really enjoy teaching (I've been a University professor for 6+ years) and sharing my experience in talks and articles.
And I love simple code. You can reach me at @psluaces.



VMware issues when copying virtual machines

Wednesday, January 11, 2012 Luix 0 Comments

At Codice Software we use VMware internally for testing purposes; specifically, we use VMware Server 2 to run automated tests (smoke, GUI and integration tests) and VMware Workstation for manual testing, support, and trying out new opportunities, platforms, applications and so on.

We've been using this software for several years and we are very happy with it: it performs great, and it is fast, secure and flexible. Nevertheless, sometimes we've encountered problems. My work as the company's builder includes setting up and maintaining testing virtual machines, so I'm working with VMware almost every day.

Recently I created a cloned virtual machine from a snapshot in VMware Workstation 7. After setting it up I moved it to our automated testing hosts (FYI, we have seven hosts that handle between 5 and 20 virtual machines each, all of them for testing purposes). When I tried to start up the virtual machine in VMware Server 2 I got a crash in the web application that I could only solve by restarting the service on the host machine!

Finally, I worked around the problem by doing the following:

  1. First, create a new virtual machine in VMware Server with the same hardware features as the original one.
  2. Edit the .vmx file of the new virtual machine, changing it to use the disk files (.vmdk) of the virtual machine you want to register in VMware Server. You can do this from the "New virtual machine" wizard of VMware Server or afterwards by editing the .vmx file.
  3. Finally, start the new virtual machine.
This ensures compatibility with VMware Server.
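
For reference, these are the kind of lines you need to touch in the new machine's .vmx file; a minimal sketch assuming a single SCSI disk (the file name is illustrative):

    scsi0.present = "TRUE"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "original-machine.vmdk"

Point fileName at the .vmdk of the virtual machine you want to register in VMware Server.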

I hope you'll find this helpful in the future.

Keep in touch.



The history/story behind transparent scm

Tuesday, January 10, 2012 Pablo Santos 3 Comments

I'd like to tell the story of how Plastic evolved towards "transparent scm". What you're seeing today in 4.0 is the result of quite an evolution since we first wanted to walk the transparent path.

Inspiration

Believe it or not, our original inspiration for transparent SCM was good-ol' ClearCase. CC implemented MVFS, a dynamic file system able to show you the full file tree of a given version, capture all IO calls and map them to SCM operations.

Sample: a call like cat foo.c@main@4 would be captured by the CC FS driver, and instead of issuing a "file not found" it would be able to get the right version of the file to be dumped by "cat".

Likewise, it was able to intercept "move" operations so you didn't need to run the "cleartool mv" command. What does that mean? You don't need to tell the SCM that you modified a file or that you moved it: it intercepts all your IO actions and makes the right ops at the FS level!

So, definitely, we always wanted to have some sort of "move interception" to avoid having to "tell" Plastic a file or dir was moved... it would simply know.

First attempts

We code-named "glass" the first version of what we called "plastic made transparent" (hence glass).

Glass was able to use some underlying FS layer (third party FS driver code to simplify FS creation) to intercept all ops and:

  • do a checkout automatically when you modify a file (remember, checkout in the Plastic, Perforce or ClearCase sense, not SVN or Git: checkout means "create a new version" instead of "download the code")
  • do move ops for you (you just move the files or dirs and glass issues a plastic mv command for you)
  • delete and add elements for you

Well, writing your own FS is always a cool thing, but there were caveats:

  • An issue in a FS can tear down the entire system (blue screen of death! :P)
  • Intercepting all the IO ops impacts performance

So, while we used glass internally, we never moved it to production mode.

Xdiff

Ok, you might think: what does Xdiff have to do with transparency? Well, Xdiff implements an algorithm to find moved fragments of text even when they’ve been modified afterwards. It is able to calculate “similarities”.

Xdiff opened up a new door to make Plastic transparent.

The current version

When we started the development of 4.0 we knew we wanted a new way to deal with local changes:
  • Work in the workspace making changes, deleting, adding or moving files
  • Have Plastic detect what happened.

And we applied the Xdiff technology. How?

  • Finding changed, deleted and added elements is easy (just compare the disk with the loaded tree)
  • Each added/deleted pair is run through a similarity algorithm to check whether it is the same file... moved! The same holds true for directories, with a more advanced algorithm.

This way the “pending changes view” is now able to figure out what you did in your workspace, including modified and moved files and directories (remember, DVCSs like Git or Mercurial are not able to track directory renames or moves).
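
In practice it means you can do something like this (a sketch; the status output shown is illustrative, not the exact format):

    $ move src\foo.c src\netclient.c
    $ cm status
    Moved: src\foo.c -> src\netclient.c (similarity: 94%)

The move is done with plain OS tools; no cm command is involved until you review the pending changes.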

New way of working

Now it is straightforward to work in your favorite editor, doing refactors (renames and moves), and simply switch to Plastic to checkin after it detects your changes.

Future

Check http://www.youtube.com/watch?v=cnJ5UgJJSkU. It is a dynamic file system based on Dokan.Net which is able to show a given configuration on a virtual drive.


The evolution of Plastic SCM branches

Sunday, January 08, 2012 Pablo Santos 0 Comments

A long time ago in a galaxy far, far away... first level branches were the kings. If you weren’t a plastiker yet, keep on reading, because the true story about the empire of selectors is about to be revealed.

All branches were born equal

When the first version of Plastic SCM was up and running (circa February 2006) all branches were “first level branches”. (If you’re interested in Plastic archaeology, check this: http://youtu.be/bn9vc7MRkSQ). There was one branch to “rule them all”, called “the main branch”, which still survives as the ruler of the entire Plastic universe.

The evil empire of selectors

All branches were created equal for a reason: you had your “main” branch, then you added content to it, and later you were able to create a new, empty “first level branch”. Then, based on selector rules, you were able to specify how to put content on it.

During this old period developers were forced to manually specify how the checkout process was going to happen. Look at the following selector:

    repository "coolcode"
      path "/"
        branch "/task001" checkout "/task001"
      path "/"
        branch "/main" checkout "/task001"

Read through it in detail. What does it mean? There are two rules. The first one says: try to load the item from “/task001”, and if you’ve loaded the item with this rule and you need to checkout, checkout to “/task001”.

Then there’s a second rule, which is the one performing the branching. This rule will only load items that were not loaded by the first rule (the selector rules are evaluated in priority order, top to bottom). So, at the beginning, items will be on the “/main” branch only, so all of them will go through this rule.

If you look carefully you’ll see the second rule loads from “/main” but checks out to “/task001”! This is how branching happened: on checkout, new revisions are placed on the new branch, and then the first rule is able to load them.

Child branches to the rescue

As you can see, it all started in really dark days, when just checking out a file forced all poor programmers to remember that “selectors rule”.

Then child branches were born, and the following selector:

    repository "coolcode"
      path "/"
        branch "/main/task001" checkout "/main/task001"

was equivalent to the previous one. So, only one rule was needed to specify the “branching behavior” defined before.

A branch like “/main/task001” means: “get the content from task001, but if it doesn’t exist take it from main, and place all checkouts on task001”. It was an important step ahead back in the day.

The age of smart branches

Later on (circa mid 2008) we launched “smart branches”. Basically, to work on a branch you needed to “remember” its “starting point”. If “/main/task001” started from label BL130 then you needed something like:

    repository "coolcode"
      path "/"
        branch "/main/task001"
          label "BL130"
        checkout "/main/task001"

Otherwise “task001” would “inherit” from LAST on “main”, which was the basis of the powerful “dynamic inheritance” feature.

Smart branches made things simpler because you were able to “set a base for a branch”, and then your selector turned into something like the following:

    repository "coolcode"
      path "/"
        smartbranch "/main/task001"

No need to specify the “label” or the “checkout branch”.

Branching in 4.0

Branches in 4.0 inherit from “smart branches” but simplify the whole thing: a branch starts from a changeset, and upon checkin it adds changes on the new branch to that initial configuration. And that’s all.

Basically, in 4.0 a branch that starts from changeset 1024 (on “main”) can be named “main/importantfix” if you want to highlight that it is a “child” of the “main” branch, but you can just as easily create “importantfix” directly from cset 1024, and they’ll behave exactly the same.

We kept the “child branch” concept in 4.0 because we think it introduces a “branching namespace”, which is very good for existing Plastic SCM users (used to it for years) and also good for organizing development for newcomers.

But remember that now all branches are equal, independently of whether they’re children or “first level” branches, unlike what happened in Plastic prior to 4.0.
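
In cm terms, both flavors are created the same way; a sketch (the branch names are illustrative, and cm mkbranch is the standard command to create branches):

    $ cm mkbranch br:/main/importantfix
    (creates a "child" of main)
    $ cm mkbranch br:/importantfix
    (creates a top level branch; in 4.0 both behave exactly the same)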

An image is worth…



Listening to the user voice

Wednesday, January 04, 2012 Pablo Santos 0 Comments

Happy 2012 everyone!

You know we listen to you, so here’s a blog post focused on explaining what’s new in our latest BL237.7 build (version 4.0.237.7) and where it came from. Remember we’re using “user voice” and we’re more than happy to receive your feedback there.

The top one request: create branch from my checkouts

You wanted it, and here is the result:
(The entry was introduced in “uservoice” only 20 days ago!)

Gamer’s choice: undo unchanged

I got this one (again) during my “US Tour” about one month ago. Gaming companies seem to dramatically need it, and here it is! :)

Never miss the switch

Another “classic request”: you create a new branch, then you want to switch to it. Why doesn’t Plastic have an option to do it in one step? Ok, here it goes.

More requests

If you check the BL237.7 release notes, you’ll find it is filled with requests and fixes. Just to mention a few:

  • New: Xlink creation from a label. Example: cm xlink codesecondrepo / lb:LB001@second@localhost:8084. I promised this one a few days ago on the forum, didn’t I?
  • Provide a way to “ignore changes” so that they’re only considered if they’re checked out! (Reminds me of Salt Lake City! :P)
  • Branch explorer label menu: if there’s only one label, don’t show a sub-menu
  • Merge: able to specify the “merge contributor” from a context menu
  • Merge: able to correctly merge (and visualize!) chmod protection changes
  • Enhanced top level branch creation (top level branches were almost evil in 3.0, but they’re your friends now… I’ll blog about it soon)
  • New “rm-rep” triggers: before-rmrep and after-rmrep, executed when deleting an existing repository

And fireworks to finish

This one deserves a post on its own: did you know that Visual Studio “move tracking” is greatly limited? If you move a file from one project to another within the same solution… it won’t work. Visual Studio can’t handle this and will incorrectly notify an add/delete pair instead of a move operation to the underlying version control. It was broken with the infamous SCC and it is still broken now, including with TFS.

Well… we fixed it! The new 237.7 is able to deal with files moved between projects!! Cool, isn’t it?



Plastic SCM 3.0 to 4.0 migration notes

Wednesday, January 04, 2012 Ruben 3 Comments

This guide will give you some hints about how to perform a migration from Plastic SCM 3.0 to version 4.0.

Background

Plastic SCM 1.0 was first released in November 2006, and since then the basic underlying design has been evolving incrementally. During the development of version 4.0 we had to make some modifications to the core server structure in order to enable better distributed support, increased performance and an improved merge mechanism.

These core changes didn’t have a huge impact on the underlying database structure (if you look carefully you’ll note that the only difference is the usage of the old “childrenitem” table, whose entries were modified (visible in the beta) and have now disappeared completely).

This important change made the direct database upgrade (as we had always done so far: just run the server and during start-up it upgrades the database) no longer feasible.

On the other hand, we have changed our “import strategy”: instead of developing specific “importers” (Subversion, CVS, Visual SourceSafe) we’ve now standardized on the “fast-import / fast-export” command suite. This new “fast-export/import” allows us to provide import capabilities from a wider set of SCMs and also enables an easier “way out” for teams who need to feel they have a “way back” in case something fails during the move to a new SCM. It also allows us to provide a much better Git interop.

We implemented the “fast-export” support for Plastic SCM 3.0 too (available in the latest releases) and we’ve decided to stick to this migration path to enable the move to 4.0.

In short:

  • Upgrade your 3.0 installation to the latest release (3.0.187.33; for migration purposes only, don't use it for normal production).
  • Upgrade your 4.0 installation to 4.0.237.7 (or newer).
  • Fast-export your repository from 3.0.
  • Fast-import it into 4.0.

How to migrate from 3.0 to 4.0

  • Perform the fast-export of the 3.0 target repository. This command will create a file that will be used by the fast-import. The first argument is the spec of the target repository; the second one is the output file.

    $ cm fast-export repo@localhost:8084 repo.fast-export

  • Perform the fast-import of the exported repository. This command will create a 4.0 database with the imported data (previously exported from 3.0). The first argument is the spec of the target repository (this should be a non-existing repository or a recently created, empty one); the second one is the file with the 3.0 data.

    $ cm fast-import repo@localhost:8087 repo.fast-export

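Once the import finishes you can run a quick sanity check; a sketch (cm lrep lists the repositories on a server; the port follows the example above, and the output format is illustrative):

    $ cm lrep localhost:8087
    1 repo localhost:8087
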
Restrictions due to branch inheritance

There are some important changes between the 3.0 and 4.0 underlying structures which will force the 3.0 fast-export command to “discard” some branches.

The reason is tightly related to the way in which 3.0 dynamic branch inheritance worked. In 4.0 each branch points to a changeset “head”, the latest changeset on the branch. So, switching to a branch is always equivalent to switching to the last changeset on the branch. Each changeset is statically resolved, which means it simply points to a given source code tree, starting at the root directory of your repository and including all its content (it doesn’t mean each changeset contains an entire copy of the repo!! Don’t worry about storage!).

Things were a little bit different in 3.0, where the selector rules were necessary in order to resolve a given tree. Remember you could have branches pointing to “last”, which means switching to one would load your content on the branch and “combine it” (not merge, just combine the non-overlapping entries) with the latest on the parent branch.

There is a big difference between the changesets in 3.0 and 4.0. In 4.0, each changeset points to an entire, statically resolved tree. This wasn’t true in 3.0 for the changesets on branches inheriting from “last”, since “last” would dynamically move.

What does it mean? Basically: branches inheriting from “last” won’t be exported by the 3.0 fast-export command and hence won’t be imported into 4.0. If this is a problem for you and prevents you from migrating to 4.0, please let us know.

Merge tracking changes

Merge tracking has been deeply modified in 4.0. Merge tracking was implemented at the “item” level prior to 4.0 and has now been moved to the “changeset” level.
  • Item level merge tracking means that each time you merge a file or directory, a new merge link is created between the specific file or directory revisions. It provides some advanced functionality (like partial merging: you can merge just part of a changeset and then merge the rest later, keeping full tracking) but pays a high price for it: performance is poorer. Each item has its own “merge tree” that needs to be calculated and checked individually, so merging a big number of items tends to be slow.
  • Changeset level merge tracking means that the merge links are created between changesets instead of items. (Note: this is the way in which all the new DVCSs work: Git, Hg, Plastic SCM…). Performance is much higher since the merge calculation doesn’t depend on the number of items to be merged. Understanding the evolution is also simpler because you only have to take a look at the branch explorer to understand what was merged and what was not, instead of having to check each item individually. In 4.0 we not only modified the “tracking” but also greatly improved the underlying merging mechanism, supporting many merge cases that were simply not doable before (divergent moves, change-delete conflicts and many more).

These important structural changes (tree + merge tracking) are the reason why a “direct” migration wasn’t feasible, and also the reason why we decided to go through an “export-import” path.

The main risk when migrating from 3.0 to 4.0 (and we’re currently working on it) is that a “partial merge link” could be wrongly migrated as a full merge link (the only kind supported in 4.0), leading to potential merge calculation errors later on.

Merge between branches with different bases

Check the following merge scenario created with 3.0:

The merge from /main/task001 to /main/task002 will only propose /src/foo.c to be merged, because it was the only item modified on the branch. It means that “/doc/plan.pdf” won’t be proposed to be merged to “task002”. This was an important limitation in 3.0 and has been fixed in 4.0. But, if the merge was performed in 3.0, the merge link will be there even though “task002” won’t contain the modified “plan.pdf”.

Now you migrate from 3.0 to 4.0 and decide to merge “task002” to “main”. The new merge system will check the differences between the source changeset and the base changeset, and it will find that “plan.pdf” was modified, because “task002”, despite the 3.0 merge, will still contain the old “plan.pdf”, and 4.0 doesn’t have (so far) a good way to know whether it was incompletely merged in 3.0 or really modified.

At the end of the day, merging task002 will put the old “plan.pdf” on “main”, incorrectly, because the merge link between task001 and task002 was set by 3.0, and 4.0 can’t handle the merge differently.

(The case is a little bit dense, so feel free to contact us if you need further info.) This is not a problem for the following types of merges:

  • It has been detected as a branch rebase (the base of the branch was changed by the switchbranchbase command).
  • The source and destination of the merge are on the same branch.
  • The source of the merge is the main branch.
  • The destination branch of the merge is the branch base of the source branch.
  • The source branch of the merge is the branch base of the destination branch.
  • The source and destination branches have the same changeset/label as branch base.