Who we are
We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't choke on huge binaries and repos. We also develop the GUIs, merge tools and everything needed to give you the full version control stack.
If you want to give it a try, download it from here.
We also code SemanticMerge, and the gmaster Git client.
CmdRunner power!
Wednesday, January 25, 2012
Ma Nu
0
Comments
The CmdRunner library is a tiny C# project that lets you build porcelain applications on top of the strength of the cm shell.
What is the "cm shell" feature?
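In short, "cm shell" keeps a single cm process alive and feeds it commands through its standard input, so each command skips the client start-up cost. As a rough sketch of the idea (in Python, not CmdRunner's actual API), driving such an interactive process looks like this:

```python
import subprocess

def run_in_shell(shell_cmd, commands):
    """Feed a list of commands to one long-running interactive process
    and return everything it printed. This mirrors what "cm shell"
    offers: a single live process instead of one start-up per command."""
    proc = subprocess.Popen(
        shell_cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    out, _ = proc.communicate("\n".join(commands) + "\n")
    return out
```

With a real Plastic install you would pass something like `["cm", "shell"]` and commands such as `status` (an assumption for illustration; CmdRunner wraps this plumbing for you in C#).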
Manuel Lucio
I'm in charge of the Customer Support area. I deal with complex setups, policies and working methodologies on a daily basis.
Prior to taking full responsibility for support, I worked as a software engineer. I was in charge of load testing for quite some time, so if you want to know how well Plastic compares to SVN or P4 under a really heavy load, I'm your guy.
I like to play with Arduino GPS devices, mountain biking and playing tennis.
You can find me hooked to my iPhone, skate-boarding or learning Korean... and also here @mrcatacroquer.
More on recursive merge strategy
Friday, January 20, 2012
Pablo Santos
merging
10
Comments
Example: Only one ancestor is the right option
Recursive merge is not only good for “criss-cross” merge situations but also for more “regular” ones where the merge history simply gets complicated. I’d like to go back to the original example and explain it now in greater detail.

We’re going to merge from changeset 5 to changeset 6. In this diagram (another explanation using code will come later) we simply say the “content” of the file is A, B or C, but the same applies to lines of code (containing important changes, bug fixes and so on).
DAG format
Just in case anyone has trouble understanding the diagram: the green lines are merges, and they’re rendered from source to destination. Here’s an alternative diagram.

Common ancestors
In order to run a merge, you need to identify the “source”, the “destination” and the nearest common ancestor. In this case the “src” and “dst” are clear, but the “common ancestor” is a problem because there’s more than one candidate. If you look carefully you’ll see that both changesets “2” and “3” can be taken as common ancestors.
A non-recursive merge algorithm will only use one of them, and depending on the “choice”, the result can be quite different.
Selecting “3” as common ancestor
If the algorithm chooses “3” (as we explained in our previous post, this is the choice taken by Hg, and it would also be the one taken by Plastic’s former algorithm, prior to the recursive merge implementation) then the following situation will happen: it will lead to a manual merge, where the user will be forced to choose.
Selecting “2” as common ancestor – the lucky shot
If the algorithm uses “2” as the common ancestor, then the merge will be between contents “B”, “B” and “C”, so, since two contributors are the same, the result will be “C”. This is the expected result, as we will see later, and it is a perfectly valid choice; the problem is that there’s no good, consistent way to choose “2”. This time the result matches the expected one, but sometimes this shortcut simply leads to an incorrect automatic choice, which is the real problem and the real reason why “recursive” is implemented by Git and Plastic.
The recursive option
Again, here’s how “recursive” works: since there are two ancestors at the same distance, it will merge them, creating a new virtual ancestor that will be used as the “base” for the final merge between the merge candidates “5” and “6”. The following image depicts the first step, the calculation of the “virt” common ancestor (the intermediate one):
And then the second step, where the result is calculated automatically, this time without user intervention:
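The two steps above can be sketched for a single piece of content. The contents assigned to each changeset below are illustrative assumptions (the diagrams don’t name them explicitly), chosen so the outcomes match the article: picking changeset 3 alone conflicts, while the recursive virtual ancestor merges cleanly.

```python
CONFLICT = object()

def three_way(base, src, dst):
    # Classic three-way merge of one piece of content.
    if src == dst:
        return src
    if src == base:
        return dst      # only dst changed the base
    if dst == base:
        return src      # only src changed the base
    return CONFLICT     # both sides changed the base differently

# Assumed contents: cset1 = "A" (ancestor of both candidates),
# cset2 = "B", cset3 = "A", cset5 = "B" (src), cset6 = "C" (dst).

# Non-recursive, picking cset3 as the base: manual conflict.
assert three_way("A", "B", "C") is CONFLICT

# Recursive: first merge the two candidate ancestors into a virtual one...
virt = three_way("A", "B", "A")   # base cset1, sides cset2 and cset3

# ...then use it as the base for the real merge: fully automatic.
assert three_way(virt, "B", "C") == "C"
```

The point is that the virtual ancestor already “remembers” how the earlier conflict was solved, so the final three-way merge resolves automatically.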
Where is the huge difference?
Comparing the recursive and non-recursive algorithms in this example, you might only see that the recursive one avoids a manual conflict because it can find out how it was solved before. Is that such a big difference? Just avoiding one manual merge? The real world has the answer: try a "real" merge involving several hundred files and the answer becomes clear: usable if you don't have to solve the conflicts manually versus not usable.

Wrap up
We still have a ton of posts to write to share the particular cases where having recursive merge saves the day, so we’ll try to move forward one by one in the coming weeks.

Pablo Santos
I'm the CTO and Founder at Códice. I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
I had the opportunity to see teams from many different industries at work while helping them improve their version control practices.
I really enjoy teaching (I've been a University professor for 6+ years) and sharing my experience in talks and articles.
And I love simple code. You can reach me at @psluaces.
VMware issues when copying virtual machines
Wednesday, January 11, 2012
Luix
0
Comments
We at Codice Software use VMware internally for testing purposes; specifically, we use VMware Server 2 to run automated tests (smoke, GUI and integration tests) and VMware Workstation for manual testing, support, and trying out new opportunities, platforms, applications and so on.
We've been using this software for several years and we are very happy with it: it performs great, and it is fast, secure and flexible. Nevertheless, we've run into some problems from time to time. My work as the company's build engineer includes setting up new testing virtual machines and maintaining them, so I work with VMware almost every day.
Recently I created a cloned virtual machine from a snapshot in VMware Workstation 7. After setting it up, I moved it to our automated testing hosts (FYI, we have seven hosts that handle between 5 and 20 virtual machines each, all of them for testing purposes). When I tried to start up the virtual machine in VMware Server 2, I got a crash in the web application that I could only solve by restarting the service on the host machine!
I finally worked around the problem as follows:
- First, create a new virtual machine in VMware Server with the same hardware features as the original one.
- Then, edit the .vmx file of the new virtual machine so that it uses the disk files (.vmdk) of the virtual machine you want to register in VMware Server. You can do this from the "New virtual machine" wizard of VMware Server or afterwards by editing the .vmx file by hand.
- Finally, start the new virtual machine.
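The .vmx edit in the second step boils down to rewriting the disk-file entries. A minimal sketch (the `scsi0:0.fileName` key is the usual one for the first SCSI disk, but your machine may list several `scsiN:M.fileName` entries, so adjust accordingly):

```python
import pathlib
import re

def repoint_disk(vmx_path, vmdk_name):
    """Rewrite a .vmx so its first SCSI disk entry points at an
    existing .vmdk, instead of the empty disk the wizard created."""
    path = pathlib.Path(vmx_path)
    text = path.read_text()
    # Replace whatever file the entry currently names with vmdk_name.
    text = re.sub(r'(scsi0:0\.fileName\s*=\s*)"[^"]*"',
                  lambda m: m.group(1) + '"' + vmdk_name + '"',
                  text)
    path.write_text(text)
```

Shut the VM down before editing, and keep a backup copy of the original .vmx in case the config gets mangled.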
I hope you'll find this helpful in the future.
Keep in touch.
The history/story behind transparent scm
Tuesday, January 10, 2012
Pablo Santos
scm
3
Comments
I'd like to tell the story of how Plastic evolved towards "transparent scm".
What you're seeing today in 4.0 is the result of quite an evolution since we first wanted to walk the transparent path.
Inspiration
Believe it or not, our original inspiration for transparent scm was good ol' ClearCase. CC implemented MVFS, a dynamic file system able to show you the full file tree of a given version, capture all I/O calls and turn them into SCM operations.
Sample: cat foo.c@main@4 would be captured by the CC fs driver, and instead of issuing a "file not found" it would fetch the right version of the file for "cat" to dump.
Likewise, it was able to intercept "move" operations so you didn't need to run the "cleartool mv" command. What does this mean? You don't need to tell the SCM that you modified a file or that you moved it: it intercepts all your I/O actions and performs the right operations at the FS level!
So, definitely, we always wanted to have some sort of "move interception" to avoid having to "tell" Plastic a file or directory was moved... it would simply know.
First attempts
We code-named "glass" the first version of what we called "plastic made transparent" (hence glass).
Glass was able to use an underlying FS layer (third-party FS driver code to simplify FS creation) to intercept all operations.
Well, writing your own FS is always a cool thing, but there were caveats.
So, while we used glass internally, we never moved it to production mode.
Xdiff
Ok, you might think: what does Xdiff have to do with transparency? Well, Xdiff implements an algorithm to find moved fragments of text even when they've been modified afterwards. It is able to calculate "similarities". Xdiff opened up a new door to make Plastic transparent.
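To make "similarities" concrete, here is a toy similarity score using Python's difflib. This is a stand-in for illustration only: difflib's ratio is not Xdiff's algorithm, just the same kind of measure.

```python
import difflib

def similarity(a, b):
    """Score in [0, 1] of how alike two text fragments are; the kind
    of measure that lets a tool match a moved-and-edited fragment to
    its original instead of reporting a delete plus an add."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# A renamed-and-edited method still scores high against its original,
# so it can be recognized as "the same code, moved and modified".
moved = similarity("int GetTotal(int a) { return a; }",
                   "int GetTotal(int a, int b) { return a + b; }")
```

With a score like this, a scanner can pair files (or fragments) in the workspace with entries in the repository tree, which is exactly what makes move detection possible without telling the tool anything.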
The current version
When we started the development of 4.0 we knew we wanted a new way to deal with local changes, and we applied the Xdiff technology. How?
This way the "pending changes" view is now able to figure out what you did in your workspace, including modified and moved files and directories (remember that DVCSs like Git or Mercurial are not able to track directory renames or moves).
New way of working
Now it is straightforward to work in your favorite editor, doing refactors (renames and moves), and simply switch to Plastic to checkin after it detects your changes.

Future
Check http://www.youtube.com/watch?v=cnJ5UgJJSkU. It is a dynamic file system based on Dokan.Net which is able to show a given configuration on a virtual "unit drive".
The evolution of Plastic SCM branches
Sunday, January 08, 2012
Pablo Santos
branching
0
Comments
A long time ago in a galaxy far, far away... first-level branches were the kings. If you weren’t a plastiker back then, keep on reading, because the true story of the empire of selectors is about to be revealed.
All branches were born equal
When the first version of Plastic SCM was up and running (circa February 2006) all branches were “first level branches”. (If you’re interested in Plastic archeology, check this: http://youtu.be/bn9vc7MRkSQ). There was one branch to “rule them all”, called “the main branch”, which still survives as the ruler of the entire Plastic universe.

The evil empire of selectors
All branches were created equal for a reason: you had your “main” branch, then you added content to it, and later you were able to create a new, empty “first level branch”. Then, based on selector rules, you were able to specify how to put content on it. During this old period, developers were forced to manually specify how the checkout process was going to happen. Look at the following selector:
repository "coolcode"
  path "/"
    branch "/task001"
    checkout "/task001"
  path "/"
    branch "/main"
    checkout "/task001"
Read through it in detail. What does it mean? There are two rules. The first one says: try to load the item from “/task001”, and if you’ve loaded the item with this rule and you need to check out, check out to “/task001”.
Then there’s a second rule, which is the one performing the branching. This rule will only load items that were not loaded by the first rule (the selector rules are executed in priority order, top to bottom). So, at the beginning, items will be on the “/main” branch only, so all of them will go through this rule.
If you look carefully you’ll see the second rule loads from “/main” but checks out to “/task001”! This is how branching happened: on checkout, new revisions are placed on the new branch, and then the first rule is able to load them.
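The top-to-bottom rule evaluation described above can be sketched in a few lines. The branch and item names are made up for the example; this is a model of the selector semantics, not Plastic’s actual implementation.

```python
def resolve(rules, revisions):
    """For each item, the first rule whose branch holds a revision of
    it wins (rules apply in priority order, top to bottom), and that
    rule also decides which branch receives checkouts."""
    result = {}
    for item, branches in revisions.items():
        for rule in rules:
            if rule["branch"] in branches:
                result[item] = {"load_from": rule["branch"],
                                "checkout_to": rule["checkout"]}
                break
    return result

selector = [
    {"branch": "/task001", "checkout": "/task001"},
    {"branch": "/main",    "checkout": "/task001"},
]

# foo.c only exists on /main so far; bar.c was already checked in
# on /task001, so the first rule picks it up.
where = resolve(selector, {"foo.c": {"/main"},
                           "bar.c": {"/main", "/task001"}})
```

Here `foo.c` falls through to the second rule, so it loads from “/main” but any checkout lands on “/task001”: exactly the branching trick the selector performs.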
Child branches to the rescue
As you can see, it all started in really dark days, when just checking out a file forced all poor programmers to remember that “selectors rule”. Then child branches were born, and the following selector:
repository "coolcode"
  path "/"
    branch "/main/task001"
    checkout "/main/task001"

was equivalent to the previous one. So only one rule was needed to specify the “branching behavior” defined before.
A branch like “/main/task001” means: “get the content from task001, but if it doesn’t exist there, take it from main, and place all checkouts on task001”. It was an important step ahead back in the day.
The age of smart branches
Later on (circa mid 2008) we launched “smart branches”. Basically, to work on a branch you needed to “remember” its “starting point”. If “/main/task001” started from label BL130, then you needed something like:

repository "coolcode"
  path "/"
    branch "/main/task001" label "BL130"
    checkout "/main/task001"
Otherwise, “task001” would “inherit” from LAST on “main”, which was the basis of the powerful “dynamic inheritance” feature.
Smart branches made things simpler because you were able to “set a base to a branch” and then your selector turned out to be something like the following:
repository "coolcode"
  path "/"
    smartbranch "/main/task001"
No need to specify the “label” or the “checkout branch”.
Branching in 4.0
Branches in 4.0 inherit from “smart branches” but simplify the whole thing: a branch starts from a changeset and, upon checkin, adds changes on the new branch to its initial configuration. And that’s all.

Basically, in 4.0 a branch that starts from changeset 1024 (on “main”) can be named “main/importantfix” if you want to highlight that it is a “child” of the “main” branch, but you can just as easily create “importantfix” directly from cset 1024, and they’ll behave exactly the same.
We kept the “child branch” concept in 4.0 because we think it introduces a “branching namespace”, which is very good for existing Plastic SCM users (used to it for years) and also helps newcomers organize development.
But remember that now all branches are equal, regardless of whether they’re children or “first level” branches, unlike what happened in Plastic prior to 4.0.
An image is worth…
Listening to the user voice
Wednesday, January 04, 2012
Pablo Santos
0
Comments
Happy 2012 everyone!
You know we listen to you, and here’s a blog post focused on explaining what’s new in our latest BL237.7 build (version 4.0.237.7) and where it came from. Remember we’re using “user voice”, and we’re more than happy to receive your feedback there.
The top one request: create branch from my checkouts
You wanted it, and here is the result. (The entry was introduced in “uservoice” only 20 days ago!)

Gamer’s choice: undo unchanged
I got this request (again) during my “US Tour” about one month ago. Gaming companies seem to need it dramatically, and here it is! :)

Never miss the switch
Another “classic request”: you create a new branch, then you want to switch to it. Why doesn’t Plastic have an option to do it in one step? Ok, here it goes.

More requests
If you check the BL237.7 release notes, you’ll find they are filled with requests and fixes.

And fireworks to finish
This one deserves a post of its own: did you know that Visual Studio “move tracking” is greatly limited? If you move a file from one project to another within the same solution, it won’t work: Visual Studio can’t handle it and will incorrectly notify an add/delete pair instead of a move operation to the underlying version control. It was broken for the infamous SCC and it is still broken now, including in TFS. Well… we fixed it! The new 237.7 is able to deal with files moved between projects! Cool, isn’t it?
Plastic SCM 3.0 to 4.0 migration notes
Wednesday, January 04, 2012
Ruben
migration
3
Comments
This guide will give you some hints about how to perform a migration from Plastic SCM 3.0 to version 4.0.
Background
Plastic SCM 1.0 was first released in November 2006, and since then the basic underlying design has evolved incrementally. During the development of version 4.0 we had to make some modifications to the core server structure in order to enable better distributed support, increased performance and an improved merge mechanism.

These core changes didn’t have a huge impact on the underlying database structure: if you look carefully you’ll note that the only difference is the usage of the old “childrenitem” table, whose entries were modified (visible in the beta) and have now disappeared completely.
This important change made the direct database upgrade (the way we always did it so far: just run the server and let it upgrade the database during start-up) no longer feasible.
On the other hand, we have changed our “import strategy”: instead of developing specific “importers” (Subversion, CVS, Visual SourceSafe) we’ve now standardized on the “fast-import / fast-export” command suite. This new “fast-export/import” allows us to provide import capabilities from a wider set of SCMs and also gives an easier “way out” for teams who need to feel they have a “way back” in case something fails during the move to a new SCM. It also allows us to provide much better Git interop.
We implemented the “fast-export” support for Plastic SCM 3.0 too (available in the latest releases), and we’ve decided to stick to this migration path to enable the move to 4.0.
In short:
How to migrate from 3.0 to 4.0
$ cm fast-export repo@localhost:8084 repo.fast-export
$ cm fast-import repo@localhost:8087 repo.fast-export
Restrictions due to branch inheritance
There are some important changes between the 3.0 and 4.0 underlying structures which will force the 3.0 fast-export command to “discard” some branches.
The reason is tightly related to the way 3.0 dynamic branch inheritance worked. In 4.0 each branch points to a changeset “head”, the latest changeset on the branch, so switching to a branch is always equivalent to switching to its last changeset. Each changeset is statically resolved, which means it simply points to a given source code tree, starting at the root directory of your repository and covering all its content (it doesn’t mean each changeset contains an entire copy of the repo! Don’t worry about storage!).
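Why a full tree per changeset doesn’t blow up storage can be shown with a tiny content-addressed tree store. This is a sketch of the general idea (as in Git-style tree objects), not Plastic’s actual storage format:

```python
import hashlib
import json

def store_tree(objects, tree):
    """Store a directory tree content-addressed: unchanged subtrees
    hash to the same id, so a changeset can point to a complete tree
    while sharing everything that didn't change."""
    entries = {}
    for name, child in sorted(tree.items()):
        entries[name] = (store_tree(objects, child)
                         if isinstance(child, dict) else child)
    payload = json.dumps(entries, sort_keys=True).encode()
    oid = hashlib.sha1(payload).hexdigest()
    objects[oid] = payload
    return oid

objects = {}
cset_a = store_tree(objects, {"src": {"foo.c": "rev1"},
                              "doc": {"plan.pdf": "rev1"}})
cset_b = store_tree(objects, {"src": {"foo.c": "rev2"},
                              "doc": {"plan.pdf": "rev1"}})
# Two complete trees, but the unchanged "doc" directory is stored once:
# 5 objects instead of 6 (shared doc, two src versions, two roots).
```

So each changeset gets its own root tree id, yet the second tree only adds the directories along the path of the change.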
Things were a little different in 3.0, where the selector rules were necessary in order to resolve a given tree. Remember you could have branches pointing to “last”, which meant switching to one would load your content on the branch and “combine” it (not merge, just combine the non-overlapping entries) with the latest on the parent branch.
There is a big difference between the changesets in 3.0 and 4.0. In 4.0, each changeset points to an entire, statically resolved tree. This wasn’t true in 3.0 for the changesets on branches inheriting from “last”, since “last” would move dynamically.
What does this mean? Basically: branches inheriting from “last” won’t be exported by the 3.0 fast-export command and hence won’t be imported into 4.0. If this is a problem for you and prevents you from migrating to 4.0, please let us know.
Merge tracking changes
Merge tracking has been deeply modified in 4.0: it was implemented at the “item” level prior to 4.0 and has now been moved to the “changeset” level. These important structural changes (tree + merge tracking) are the reason why a “direct” migration wasn’t feasible, and also why we decided to go through an “export-import” path.
The main risk when migrating from 3.0 to 4.0 (and we’re currently working on it) is that a “partial merge link” may be wrongly migrated as a full merge link (the only kind supported in 4.0), leading to potential merge calculation errors later on.
Merge between branches with different bases
Check the following merge scenario created with 3.0: the merge from /main/task001 to /main/task002 will only propose /src/foo.c to be merged, because it was the only item modified on the branch. This means that “/doc/plan.pdf” won’t be proposed to be merged to “task002”. This was an important limitation in 3.0 and has been fixed in 4.0. But if the merge was performed in 3.0, the merge link will be there, yet “task002” won’t contain the modified “plan.pdf”.
Now you migrate from 3.0 to 4.0 and decide to merge “task002” to “main”. The new merge system will check the differences between the source changeset and the base changeset, and it will find that “plan.pdf” was modified (because “task002”, despite the 3.0 merge, will still contain the old “plan.pdf”, and 4.0 doesn’t have (so far) a good way to know whether it was incorrectly merged in 3.0 or really modified).
At the end of the day, merging task002 will put the old “plan.pdf” on “main”, incorrectly, because the merge link between task001 and task002 was set by 3.0, and 4.0 can’t handle that merge differently.
(The case is a little dense, so feel free to contact us if you need further info.) This is not a problem with the following types of merges: