Who we are

We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't cringe with huge binaries and repos. We also develop the GUIs, mergetools and everything needed to give you the full version control stack.

If you want to give it a try, download it from here.

We also code SemanticMerge, and the gmaster Git client.

Plastic SCM Release 4.1.10.361 is out!

Wednesday, October 31, 2012, by Amalia

A new release is out: Plastic SCM 4.1.10.361, or BL361 as we name it internally.

The features of each Plastic SCM release, up to and including the latest version, can be found on the release notes page.

Besides the full release notes, this blog post will highlight some of them. In case you haven't downloaded this version yet, you can do that here.

Bugs

Branch view saves the custom query

The Branch view, which supports querying, lost the custom query when Plastic SCM was restarted. Fixed: now, when Plastic SCM is restarted, it loads the most recently saved query.

Wrong Merge File Resolution

When the destination revision was checked out, the destination's parent revision was the ancestor revision, and the destination content size was the same as the ancestor content size (even though the destination content differed from the ancestor content), the source contributor was wrongly selected as the merge result. Fixed!


Enjoy it!


SCM: Continuous vs. Controlled Integration

Tuesday, October 23, 2012, by Amalia

Agile methodologies. What do you know about them? If you run a development environment with hundreds of developers, this may not be for you, but if you run a tight-knit group, then agile programming is the way to go. Agile programming means frequently starting new iterations of the development cycle. That means every so often – maybe every two weeks – you’ll start a new cycle of planning, development, testing, and, most importantly, release.

This methodology arose at the start of the new century and has introduced a new vision and spirit into software development. Concepts like refactoring, pair programming, and collective code ownership meant nothing twenty years ago. Now, though, those buzzwords are everywhere, and if you haven't yet, you'd better sit up and pay attention.

Agile methodologies have not only influenced the way software is analyzed, developed, and written, they have also changed the way it’s assembled, or integrated.

Today, just about everyone in the software industry has at least heard about continuous integration tools and techniques.

This article will analyze the pros and cons of continuous integration and discuss whether there might still be opportunities for even more agile processes on the horizon.


Free-ride Software Development

Agile software development brings many relevant features to the scene. Different people would stress different features in different situations, but here, I’ll highlight a subset of agile programming features that I’ll call the free-ride methodology. Imagine a dirt biker catching some serious air. Now THAT’S a free ride! And that’s the spirit of agile software development that we’re trying to capture. Free-ride programmers …

  • Enforce change. Whether you’re refactoring code or participating in collective code ownership, the message is clear: you need to be adaptable, you need to be able to change anything – and everything – in order to create better code and better serve your clients.
  • Create a real team. Free-ride methods put people first, as opposed to traditional project management and software engineering techniques, which fiercely aim to reduce staff dependency. Our method encourages interpersonal communication, with a team being people actively working together towards a common goal – not just a bunch of geeks sitting in the same room.
  • Have fun. By the way, this won’t ever fit into every kind of organization. If you need to compete in the global market, though, getting the best out of talented individuals is imperative. Achieving such a goal involves motivating the team members, and you have to admit, a fun work environment really helps motivate people.

Not all organizations and not all projects will benefit from using agile techniques or even be able to adopt them. Projects with hundreds of developers and high personnel rotation aren't normally good candidates for these programming practices. In fact, the standard way to achieve agility in environments like these is to split larger teams into smaller ones, which is not always possible; when it is not, a tall hierarchy chain is required, which is incompatible with agile techniques.

Even in environments where full-on agile programming methods won't work, there are still ways of introducing more agile working methods, such as those I listed above. Those same techniques will benefit almost all development teams and help them overcome many of the problems derived from some widespread software configuration management (SCM) practices.


SCM’s Role in Agile Development

What is the role of software configuration management in agile processes? SCM used to be perceived just as a commodity, a service to be used by developers. However, SCM can play a key role in creating the right environment to achieve the desired agile development goals. The problem is that not every version control or SCM tool allows you to reach those goals. Most of them fail, forcing developers to follow the process best suited to the tool's capabilities – not the developers'.

Agile programming is all about changing code in a safer way, adapting to requirements faster, and listening to the customer better. Some of the most widespread SCM agile practices fail at this, giving developers the freedom to perform changes without putting in place all the mechanisms needed to ensure maximum codebase stability.


From Big Bang to Frequent Integration

As I mentioned before, continuous integration is one of the core practices in agile programming methods. Continuous integration is the extreme response to big bang integration (working in a silo for a long time and then putting all the pieces together at the end), which has been the root cause behind a huge number of failed and delayed projects.

Figure 1 shows a typical development cycle in which integration is done at the end of the project. With only one line of development going on, there’s absolutely no problem.

Figure 1. A regular development process


The problems start flowing freely, however, when you use big bang integration in a real-world situation, like the one depicted in Figure 2. The integration is delayed until the end of the project, and when the integrators try to make all the code and components work together, it becomes a real nightmare. The problem is not only caused by the code itself – which needs to be re-worked – it's also that personnel are not used to running integrations, since they're done so rarely.

Figure 2. Big bang integration, big trouble at the end


So this is where frequent integration enters the scene: What if your team integrates their changes on a regular basis? Then, instead of having big trouble at the end of the project, the team will have more frequent, but smaller troubles, reducing the risk and making it much more manageable. Figure 3 depicts a frequent integration process.

Figure 3. Frequent integration


Now the question is: How frequently should I run the integration process? Should I run it once a month, once a week, or twice a day?


Non-stop integration

Agile programming methods clearly enforce frequent build and release cycles, but many development groups have ended up implementing what has been called non-stop integration. What does that mean? Instead of running integrations frequently, developers integrate all the time. A developer makes a change, checks all the code in, and the build system runs all the available test suites. If the build gets broken (it doesn't compile correctly or not all the tests pass), the developer receives a warning notifying him that he has to fix the problem. So, in fact, integrations are now continuous because they occur all the time.

The key difference between continuous integration and the evil code-and-fix cycle is the presence of a well-defined test suite, plus the developers' firm commitment to run it all the time (or enforcement by the build software).
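
As an illustration, that commitment can be automated with a server-side trigger that launches the test suite after every checkin. This is a hedged sketch: the trigger name and script path are made up, and the exact syntax may differ between Plastic SCM versions.

    # Register a script to run after every checkin (names are illustrative)
    cm mktrigger after-checkin "run test suite" "/opt/ci/run-tests.sh"

    # /opt/ci/run-tests.sh would then build the code, run the available
    # test suites, and warn the author if the build breaks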

But is continuous integration the solution to all version control headaches… or does it introduce more problems?

In a perfect world, the test suite would be almost perfect, so if it ran correctly, no problem would ever occur. In reality, though, test suites are far from complete, and it's not hard to imagine a problem introduced by a developer reaching the main code line immediately without being correctly checked. Once it is detected it will be fixed, but in the meantime many other developers could have been affected. Figure 4 illustrates a bug spreading scenario.

Figure 4. Bug spreading and mainline instability, aftermath of continuous integration


Imagine the following situation: A developer finishes a given task and wants somebody from testing to check whether it's correct or not. To deliver the code, she will check it into the version control system, trigger the build scripts, and then notify her colleague to get the code and verify that everything is correct. The only reason to submit the code at that point was to make it available in a managed way. If the code has a problem or doesn't implement the feature correctly, the mainline will already be infected by the mistake. And because all the team members are basically doing the same thing, in a short period there will be a lot of code built upon the unstable baseline.

Figure 5 shows a set of tasks being directly integrated into the mainline, as would happen with the continuous integration working pattern. There is only one way for developers to deliver code: merging it into the mainline. In the figure, after tasks 1098, 1099, 1100, and 1104 have been delivered, what would happen if task 1098 is detected as defective? The answer is that it has to be fixed. But what if we need to release the code stat to a client, or even just to the testing group, and we already know some changes introduced by 1098 are wrong?

Most likely, the features introduced by tasks 1099, 1100, and 1104 are totally independent from 1098 (task independency becomes more common after the initial phase of a project, during which tasks tend to be extremely dependent on each other), and they could have been properly delivered if another working pattern had been used.

Figure 5. A task introducing a problem and all the rest building on top of it


Table 1. Continuous integration drawbacks


Controlled integration

By the way, I don't mean to say that continuous integration isn't controlled; it's just that when I refer to controlled integration as opposed to continuous, I want to highlight that the former occurs frequently, but not all the time, and it is normally run only when a certain milestone is reached (the milestone could perfectly well be a weekly or daily planned integration).

There is also another difference between continuous and controlled integration, and it refers to the roles involved in the process. In a regular continuous integration scenario, all developers perform integrations and solve merge conflicts in code, which is perfectly acceptable on small, well-trained teams. Even agile teams can be affected by personnel rotation, though, or just new members joining, and it's usually not a good idea to have a brand new developer merging code he doesn't yet understand.

In controlled integration a new role shows up: the integrator. The integrator is usually a seasoned team member who is familiar with the bulk of the code, the version control system, and the build and release process. The most important feature of the integrator is not that she knows all the code, which is not even necessary, but that she takes responsibility for the integration process. The integrator’s primary goal is creating a new stable baseline to serve as the base for development during the next iteration.

Figure 6 shows a scenario in which well defined integration points are not present. In such an environment, you’re likely to experience mainline instability and developers are likely to get stuck in a code and fix rut since they’ll be starting their iterations from unstable baselines.

Figure 6. Development cycle with no defined integration points


A controlled integration process will introduce a set of well-known starting points that developers will use to work against. So, as Figure 7 shows, now developers will always start working against a well-known and stable baseline. This way, for instance, between BL004 and BL005, everyone will start their new tasks with BL004 code, so no unnecessary dependencies or unstable developer workspaces will affect the development process.

Figure 7. Well-defined baselines in a controlled integration process


As a side effect of controlled integration, task-oriented development can be introduced. Now, each task a developer works on will be handled independently by the version control system, implementing full parallel development and giving the team more maneuverability during release creation.

Figure 8 highlights the differences between task-oriented parallel development and serialized development processes. When task-oriented patterns are supported by the version control tool, a look at the branching hierarchy will show how the development was, indeed, parallel – something that can only be imagined, but not traced, with serial development.

Figure 8. Differences between parallel and serial development


Parallel development and branching patterns

This brings us to the crux of real SCM-powered parallel development: branches. Unfortunately, branches are normally viewed as a necessary evil by many developers. This is mostly because many version control tools systematically discourage branch usage – not because branches are evil, but because most of the available tools are extremely bad at dealing with them.

In fact, when most of us think about branches, we consider them just from the project management perspective: you create a branch to split a project, to start maintenance, or to support different code variants, and you use just a few branches for very specific tasks.

I'm here to tell you that branches can also be used in a more tactical way to isolate changes and create detailed traceability. Branching patterns like branch-per-task or branch-per-developer (also known as private branches or workspace branches) open a new world of possibilities in the integration process. Commits no longer have to be associated with delivering; they can also be used as checkpoints (we call them changesets), creating a real safety net around developers and boosting both productivity and change freedom – practices totally aligned with agile programming goals.

Achieving task independency through branching

Using mainline development on a single branch, as it is usually encouraged by continuous integration practitioners, ends up, as mentioned before, in situations like the one depicted in Figure 9, where all the tasks are inexorably linked.

Figure 9. Task dependency forced by construction


What if each task were developed on its own branch? Then developers get not one, but two added services from the version control system. First, they get to create intermediate checkpoints (changesets) when they need to, instead of being forced to wait until the code is finished. Second, they also gain the ability to decide what goes into a given release or not, while still retaining changes under source control.

Figure 10 shows a scenario where the developer switches from one branch to another, thereby avoiding unnecessary task dependency.

Figure 10. Task independency achieved by branching patterns
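
In command-line terms, the branch-per-task pattern sketched above might look like this. Treat it as a hedged sketch: the task number mirrors the figures, and the exact command names and flags may differ between Plastic SCM versions.

    # Create a branch for the task and point the workspace at it
    cm mkbranch br:/main/task1098
    cm switchtobranch br:/main/task1098

    # ...edit code, then create an intermediate checkpoint (a changeset)
    cm ci -c="checkpoint: half done, safe to experiment from here"

    # ...more edits, then the final changeset that closes the task
    cm ci -c="task 1098 finished"

Nothing forces task 1098 into the mainline yet: whether it goes into the next release is decided later, at integration time.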


Table 2. Controlled integration and branching benefits


Controlled integration cycle

So far I have introduced the concept of controlled integration, but how does it happen in practice? The answer is simple: Once a day, once a week, or at most, every couple of weeks. It really depends on the work volume. When the stack of finished tasks reaches a certain quota, they get integrated by the integrator; a new release is created, tested, and then marked as the baseline for the next iteration. Figure 11 shows the full cycle. Notice that testing (unit testing, automated GUI testing, manual checks, etc.) plays a key role in the process. If there were no tests, the cycle wouldn’t make any sense.

Figure 11. Controlled integration cycle
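
A single integrator pass could look roughly like this from the command line (a hedged sketch; the merge and label syntax are approximate, and the branch and baseline names are illustrative):

    # Work on the mainline and merge the finished task branches into it
    cm switchtobranch br:/main
    cm merge br:/main/task1099 --merge
    cm merge br:/main/task1100 --merge

    # Build and run the test suite here; reject any task that breaks it

    # Check in the merge result and mark it as the next stable baseline
    cm ci -c="integrate tasks 1099 and 1100"
    cm mklb BL005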


Are there any drawbacks to using the controlled integration cycle? No method is perfect. The following drawbacks are worth mentioning:

  • If no build and test server is in place (something quite common where continuous integration is practiced), developers run the test suites on their own workstations. This is normally time-consuming and can have an impact on productivity. If automated GUI testing is used, developers' workstations will be blocked until the tests finish.
  • Results are not published. With an integration and build server, there will usually be a way to publish the test results, but when tests are run on developers' workstations, publishing them is more difficult.

Getting the best of both worlds: controlled + continuous tools

What about mixing the build and test tools normally used in continuous integration with the controlled integration best practices? This way we would still get the best out of the branching patterns and the added control introduced into the process, while benefiting from the build technology popularized by continuous integration practitioners. Figure 12 shows a mixed process: each time a developer finishes a task, the integration server triggers a build, getting the code from the associated branch. All the available tests get run, and then the results are published and made available to the whole team. Now developers can continue working while the tests run, and they get feedback once the whole test suite has finished.

Figure 12. Mixing controlled and continuous techniques


Integration alternatives

When a regular controlled integration is performed, the integrator runs a subset of the complete test suite (smoke tests) for each integrated branch. This practice allows the integrator to reject an offending task if it breaks the build or doesn't pass the tests. The integrator is the one responsible for merging the code, running the tests, labeling the results, packaging, and so on. Figure 13 illustrates the process. The problem is that this task can be very time-consuming. Normally, if the right tool with good merging support is used, the merge process is extremely fast, but running all the tests again and again is CPU-intensive.

Figure 13. Centralized controlled integration


Are there any other options to solve the problem? Consider the proposal depicted in Figure 14. Developers merge their development branches into the mainline, and then the integration server triggers the build and test cycle. When a number of tasks have been integrated, the integrator checks the mainline's stability and decides to create a new baseline. Notice that this proposal is quite close to continuous integration, but it has the following differences:

  • Developers still count on their own versioned sandboxes.
  • All tasks start from a well-known baseline point, which is known to be stable, so bug spreading is still avoided.

Figure 14. Developers integrate against the mainline; integrators are in charge of the baselines


Figure 15 introduces a new variation on the same theme, which mixes controlled and continuous integration. Developers continue integrating their changes when they finish a task, but this time they do it against an integration branch. The integrator is in charge of promoting the changes to the mainline when needed, also creating new baselines. The mainline is kept clean and contains only correct, finished code.

Figure 15. Controlled integration + integration branch


Conclusions

SCM tools and practices can play an important role both in the transition to agile practices and enhancing the current ones. Both small and large teams can benefit from better isolation, task independency, and better release assembly.

Isolating tasks and changes in branches introduces an added layer of safety and traceability, boosting the freedom to perform changes and increasing both stability and productivity.

The right choice will heavily depend on the company's situation, but deploying version control systems that are agile and handle branches well will give the group the freedom to choose the right pattern for the right stage of the project's lifecycle.



Plastic SCM 4.1.10.359 is out!!!

Monday, October 22, 2012, by Amalia

A new week, a new release: this time we're announcing 4.1.10.359 (Build 359, as we name it internally), which includes a number of bug fixes and new features!! While you can read the detailed release notes, this blog post will highlight the most relevant features and bug fixes.

Remember you can find the latest installer here.

Features

Apply local changes to CLI

So far, the 'transparent scm' (link) was better handled by the GUI than by the CLI. Now the ci (checkin), unco (undo checkout), and shelve commands can look for local changes by specifying the --all option (-a).

In order to convert local changes to checkouts, the co (checkout) command now supports an --applylocal modifier, which detects the local changes and turns them into checkouts.

Remarks:

Only locally moved, deleted, and changed items will be handled this way. Private items must be added using the add command.

For example:

  • Check in all the workspace changes (local and controlled): cm ci -a
  • Undo all the workspace changes (local and controlled): cm unco -a
  • Shelve all the workspace changes (local and controlled): cm shelve -a
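
As a quick sketch of how these options combine (the file path, comment flag, and comment are made up for illustration):

    # Edit a file directly on disk, without checking it out first
    echo "quick fix" >> src/foo.c

    # Turn the local change into a regular checkout...
    cm co --applylocal

    # ...or simply check in everything, local and controlled changes alike
    cm ci -a -c="apply local edits"
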
SQL Server backend performance boost

One of the key database operations Plastic SCM performs is reading "changeset trees". A changeset tree is the directory structure associated with a given checkin. Each time you diff a changeset, switch to a branch, merge, or browse a label, Plastic SCM has to load a changeset tree (in case it wasn't cached before).

This performance improvement reduces the changeset tree load time by about 70%, which will have a big impact, especially on big repositories.

Bugs

MergeReplay: Duplicated Key

The error "An element with the same key already exists in the dictionary", which occurred under certain special conditions during the merge process, has been fixed.


Enjoy it!!


Direct push/pull from Plastic SCM to Git

Thursday, October 18, 2012, by Pablo Santos

Plastic SCM is now able to speak the git protocol language! What does that mean? Basically, Plastic can now push and pull directly to remote git servers using both the native git and https protocols, which includes well-known sites such as GitHub, Bitbucket, CodePlex, and many others!

The problem

When we started developing the git-bidirectional synchronization with Plastic SCM we had the following scenarios in mind:

  • Developers already using Plastic SCM who want to contribute to projects on sites like GitHub, CodePlex, Bitbucket, and others.
  • Developers working on teams using Git as the primary server who prefer to use Plastic SCM but need to contribute changes back to the main server.
  • Teams gradually adopting Plastic SCM who need to contribute to other teams on Git.

The solution

We went the hard way: we didn't come up with some sort of intermediate script to convert changes from one system to the other, imposing a ton of limitations (you might have read about solutions like this in recent weeks, from other vendors :P). Instead, we actually implemented the git network protocols in Plastic, so it can directly pull and push to git.
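
From the command line, a rough sketch of the result could look like this (the repository spec and URL are made up, and the exact sync syntax should be checked against the documentation):

    # Hypothetical example: push and pull directly between a Plastic
    # repository and a remote git server over https
    cm sync default@localhost:8087 git https://github.com/user/project.git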

Pablo Santos
I'm the CTO and Founder at Códice.
I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
I had the opportunity to see teams from many different industries at work while I helped them improve their version control practices.
I really enjoy teaching (I've been a University professor for 6+ years) and sharing my experience in talks and articles.
And I love simple code. You can reach me at @psluaces.


New Webinar is out: The task per branch cycle

Tuesday, October 16, 2012, by Amalia

Plastic SCM is hosting a new webinar this October.

To learn more about DVCS, we invite you to attend the following FREE webinar.
Register for an upcoming webinar: "The task per branch cycle".
Join us!




2012/10/17 - Task Per Branch Cycle

The goal of this webinar is to explain how the branch-per-task pattern works and why it is so good at keeping projects under control. Branching and merging is a key topic nowadays, especially due to the rise of DVCS, so we expect you'll find this webinar worthwhile. Join the webinar and learn!

When: 2012/10/26 5:00 PM – CEST

Register to The Branch Per Task Cycle Webinar


Follow or tweet this webinar with #taskbranchcycle

The Plastic SCM team is preparing the next webinars. Stay tuned!!



Plastic SCM WEB UI

Monday, October 15, 2012, by Amalia

We're glad to announce Plastic SCM 4.1.10.357. The most important feature in this new release is the new WebUI beta! The WebUI is finally here and ready to install! But remember, it is still in beta, so it will have some rough edges. We'll be more than happy to hear your suggestions.

We published it as part of the 4.1 series instead of waiting for the next major release, so you can try it out in your current environment and we can get your feedback ASAP.

Remember you can find the latest installer here.

What is the Plastic SCM WebUI?


Plastic SCM 4.1.10.355 is out!!!

Monday, October 08, 2012, by Amalia

A new release of the 4.1 version is out: Plastic SCM 4.1.10.355!!!

The features of each Plastic SCM release, up to and including the latest version, can be found on the release notes page.

Besides the full release notes, this blog post will highlight some of them. In case you haven't downloaded this version yet, you can do that here.

Features

Tree Mode in branches view

This is a fantastic new feature! Now you can choose how you want to see your branches.

The Plastic SCM team has developed a mechanism to show branches as a tree: child branches now appear under their parent branches in a TreeView.


This new option is in the Branches view.


There are two options to show branches:

  • As a list
  • As a tree

Features that have been implemented:

  • When branches are shown as a tree, double-clicking an item with the plus/minus expander symbol does not execute the default action
  • The filter is preserved when you change between the tree view and the list view
  • The currently selected branch is preserved between the tree view and the list view
  • The tree view has short names, and the list view has long names
  • The preference (selected view) is stored
  • When refreshing the branch list on tree mode, the expanded branches are remembered
  • When creating a new branch, the recently created branch is focused
  • When refreshing the branch list on both modes, the focused branch is remembered
  • Changing between tree and list mode does not require a view reload (the server is not queried).

More options to enjoy in the Plastic SCM GUI!

Bugs

Merge: Index out of range

The "Index out of range" error, that was happening while all merge was processed, has been fixed. This error occurred when all the modified on source files are locked by an external application.

Checkin error

The checkin operation error "The changed node (ItemId:) cannot be null" has been fixed.

For example, this error could happen on a checkin operation when an Xlinked repository has only unnecessary changes (same-data changes) while its parent repository has needed changes.

Xlinks and empty items

Under some special conditions, if the checkin operation failed but some of the repositories had already been committed, the committed files would not have their data. Fixed!


Enjoy it!!


Distributed and proxy servers

Friday, October 05, 2012, by Pablo Santos

Plastic SCM supports both distributed development (it is a DVCS – a distributed version control system) through distributed servers, and also proxy servers.

Distributed servers and proxy servers are not the same, and they serve very different purposes. But sometimes users get really confused; that's why I'm going to try to explain what we had in mind when we designed our proxy and distributed servers.

A proxy server story

Proxy servers in version control jargon (including Plastic’s) are just “cache servers” able to cache data and optionally metadata to reduce the number of requests to the central server.

In the old days of central servers (still the present for some products out there :P), proxy servers were the only way to speed things up for developers working at a remote location.

Proxy servers are read-only, which means you can read from them, but if you need to execute a “write” operation you need to connect to the central server.

Now, what happens if the network connection goes down (or it is extremely slow)? Then the developers can’t work with the version control.

That’s why “proxy servers” are not very good for distributed development. Even when the connection is reliable, there’s a real risk it won’t be fast enough for high demand users like software developers.

Many vendors, though, have been presenting proxy servers as a “distributed solution” when they don’t have a real DVCS offering.

Distributed means no cable

This is what a distributed setup (a multi-site setup, to be more precise) looks like: two servers pulling and pushing changes over a network, potentially the internet.

In a pure distributed scenario each developer will be able to host his own distributed repository so there won’t be just two “servers”.

In a “multi-site” setup like the one described above, one server will be at each side of the “cable”.

Plastic SCM is actually able to support the two scenarios: multi-site and fully distributed.

Now, what if the network goes down?

As the following diagram shows, each server is able to work independently of its peers. It won’t be able to exchange data with the disconnected server, but it will be able to continue serving the developers, so that broken connections or slow networks won’t force them to halt.

The ability to run fully distributed servers is what actually enables true distributed development and real DVCS.

It is not an easy task, and that's what makes Plastic SCM so unique: the only commercial DVCS (is BitKeeper still there?) and the only one able to work in both central and distributed modes.
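
For example, pulling and pushing a branch between two servers could look roughly like this (a hedged sketch: the server names, ports, and repository are made up, and the replication syntax may differ between versions):

    # Pull the main branch from the london server into the madrid one
    cm replicate br:/main@project@london:8087 project@madrid:8087

    # Push local changes back the other way
    cm replicate br:/main@project@madrid:8087 project@london:8087 --push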

Use proxy servers at home – world upside down

Every Plastic SCM server is a distributed server: it is able to push and pull changes from remote ones.

So, why did we implement a proxy server if we already have a distributed one?

The first thing is flexibility: we love to give our users all the possible options, so they can choose the best one for them. They can host repositories on each developer machine (pure git style) or they can have huge central servers using SQL Server or Oracle backends. That's why we have both: some teams will prefer to go fully distributed, others will stay with a multi-site approach with a single server at each location and clients directly connecting to them without local repos, and others will prefer to have proxy servers at different locations.

But, to be honest, the main reason why we support proxy servers is to enable high-performance setups behind the firewall: teams with huge numbers of developers (several hundreds or even thousands) working at the same physical location but with separate teams on different network segments. To speed things up, we like to install a proxy server on each segment to reduce data transfer bottlenecks on the main server.

Conclusion and disclaimer

I hope this was a better explanation of proxies and DVCS, although to be really honest I just learnt to draw the beautiful blue cloud combining circles in Visio and wanted to share it with the world somehow :)

Enjoy.


Plastic SCM 4.1.10.350 is out!!

Monday, October 01, 2012, by Amalia

A new release of the 4.1 version is out: Plastic SCM 4.1.10.350 has been released.

All the new features and bug fixes of each Plastic SCM release can be found on the release notes page.

Besides the full release notes, this blog post will highlight some of them. In case you haven't downloaded this version yet, you can do it here.

Features

Database Integrity

There's a new command in the CLI: checkdatabase.

This command checks the database integrity of a Plastic SCM repository.

Usage:

"cm checkdatabase [spec]".

The "spec" argument is a repository specification it could be a server or repository (rep:default@localhost:8084)

Remarks: If the [spec] parameter is not specified, the checkdatabase command will use the default property defined inside the "client.conf" file.

Examples:

  • cm checkdatabase repserver:localhost:8084
  • cm checkdatabase rep:default@localhost:8084

Integrate Atmel Studio 6 with Plastic SCM

Added support for Atmel Studio 6.0 in the installer. This IDE is based on Visual Studio 2010. The package will be installed in Atmel Studio by means of the VSIX installer used for Visual Studio 2012.


Bugs

Export View Data in History View

When exporting view data from the History view, the file to export was created, but no data was exported.

Fixed!! The problem was caused by the Labels column: if this column had null values, an exception was thrown when trying to get the string from the object. The Labels column has been removed; it's no longer necessary.

Move changes dependencies to CLI

When a full checkin was done from the CLI using the "--applychanged" option, only the changed items under the current directory were committed instead of all the changed items in the workspace. Fixed!!


Enjoy it!!
