How to share an engine repository between different video games

August 5, 2014

If you have ever wondered how to set up dependencies between different Plastic SCM repositories, or you have been struggling to share common repositories using Xlinks, you will probably find this post helpful.

Initial setup

Let's review a scenario where a video game company has a shared engine that is used at the same time by two different game studios (in different locations).

Each game studio will have an "engine" repository and a "game" repository.

  • The "engine" repository will be synchronized to the remote main "engine" repository in New York.
  • The "game" repository will be local an independent per game studio.
  • The goal is to share the engine code between the two studios and, at the same time, the engine developers internally continue evolving it in New York.

    What does a workspace look like?

    This is the workspace of a developer who works on Game A at the Vancouver studio. There is a folder containing the game code and also an Xlink pointing to the local "engine" repository.

    But let's now review in depth how the engine repository should be configured to succeed in this scenario.

    Engine repository configuration:

    As the engine might diverge in each game studio, the "engine" repository has internal branches to handle the specializations: we create a new branch per game. This setup allows each game studio to use the engine code and modify it independently.

    In the Vancouver studio they create a new branch for this purpose.

    The advantage of using a specific branch per game is that interesting fixes or new features can be merged or cherry-picked between branches and then easily replicated to the remote sites.

    In our example, the studio located in Vancouver develops a fix in the engine that is considered indispensable for their colleagues in London (and other future studios). In order to share the fix, they perform a merge from their game-specific branch to the "main" branch of the "engine" repository.

    Finally, they just push the newly created changesets to the central engine repository in New York. Now the new engine version (including the fix) is available to all the studios, so the studio in London just needs to pull the new stable version of the engine and rebase their working branch onto it (remember that every game studio has a specific branch in the "engine" repository).
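    This push/pull workflow can be sketched from the command line. The server names and ports below are made up for the example, and the exact replication syntax may vary between Plastic SCM versions (older releases use "cm replicate"), so treat this as a sketch rather than a recipe:

```shell
# Assumed setup: the local engine replica runs on localhost:8087 and the
# central engine repository lives on a hypothetical server newyork:8087.

# Vancouver pushes the merged fix on /main to the central server:
cm push br:/main@engine@localhost:8087 engine@newyork:8087

# London pulls the new stable engine version into its local replica:
cm pull br:/main@engine@newyork:8087 engine@localhost:8087
```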

    Games repositories configuration:

    The game studio locations also need to configure their local setups. There will be a "gameX" repository containing the local game source code and a local "engine" repository, which will be a replica of the New York engine repository.

    Plastic handles links between repositories through Xlinks, so we create an Xlink in each "game" repository pointing to the local "engine" repository.

    Notice that the Xlinks point to the branches specifically created for each game.

    When the Xlinks are created, we also have to properly adjust the branch auto-expansion rules. This way, we can keep the two games independent although they are actually using the same Xlinked "engine" repository.

  • Xlink branch expansion rule on the GameA repository --> source branch: "/main", destination branch: "/main/GameA".
  • Xlink branch expansion rule on the GameB repository --> source branch: "/main", destination branch: "/main/GameB".

    Note that we select both of the following options:

  • Writable: because we want to perform changes in the Xlinked "engine" repository.
  • Use relative server: the Xlink tries to get the repository from our local server. This option is necessary because we are working on a replicated local repository and we don't need to access the remote server.
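    For reference, the Xlink can also be created from the command line. This is only a sketch: the link name, mount path, changeset number and server are examples, and the exact flags for "writable" and "relative server" may differ between versions, so check "cm xlink --help" before relying on it:

```shell
# Hypothetical example: create a writable, relative-server Xlink named
# "engine", mounted at /engine, pointing to changeset 1 of the local
# "engine" repository replica.
cm xlink -w -rs engine /engine 1@engine@localhost:8087
```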
    Share the engine changes between games:

    This game independence doesn't limit the propagation of changes performed in one game to the other. It can be done in two easy steps:

  • Cherrypick/merge the desired changesets from one game branch to the other in the "engine" repository.
  • Update the Xlink target to the merge result changeset.
    Finally, it's interesting to realize that, using this workflow, we can avoid replicating the entire "engine" repository per game studio location. We can just sync the desired branches on demand, avoiding unnecessary disk and network usage.

    It means that the Vancouver studio doesn't need to sync the "/main/GameB" branch because they are only working on GameA.
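    The two propagation steps above could look roughly like this from the command line. The branch names follow the post's example, the changeset number is invented, and the merge flags are assumptions (verify them with "cm merge --help"); the Xlink retarget is usually done from the GUI:

```shell
# Working in a workspace of the "engine" repository:

# 1. Switch to GameB's engine branch and cherry-pick the fix made on
#    GameA's branch (cs:42 is a hypothetical changeset number):
cm switch br:/main/GameB@engine
cm merge cs:42@engine --cherrypicking --merge
cm ci -m "Cherry-picked GameA fix into /main/GameB"

# 2. In the GameB repository, update the Xlink so it targets the
#    changeset created by the cherry-pick (typically done from the GUI).
```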

    Plastic SCM server address changed, what now?

    July 22, 2014

    Sometimes you just need to change the server IP or port for security reasons; sometimes the server is moved somewhere else and the address changes. Ideally the IT team will notify you about the change and you’ll have time to prepare for it.

    Now we’ll review the steps you will need to follow to achieve a smooth transition.

    Notify the developers about the change

    It’s important to share the news of the server address change with the developers using Plastic, since they’ll need to carry out some actions. Don’t worry, everything is pretty easy. The only action they need to accomplish prior to the address change is the following one:
    • Checkin all the pending changed files/directories inside the workspaces before the server address changes. Basically, leave the “Pending changes view” clean.

    Why? The Plastic SCM changes metadata is internally stored inside a binary file called “plastic.changes”; you’ll find it in the hidden “.plastic” directory at your workspace root path. This is the key reason finding checkouts in a huge workspace (>300K files) is so fast. The pending changes are stored along with the Plastic SCM repository and the server they belong to, so keeping this file through the address change will lead to issues when committing the changes, since the stored server address will no longer be valid.

    Reconfigure the client connection info

    When the server address changes, the Plastic SCM client will not detect the new address; the client will keep trying to connect with the old configuration over and over again, and you will most likely see something like the image below.

    Don’t worry; it’s just the Plastic SCM client complaining about the server not answering the start-up queries. That’s why we need to run the Plastic SCM client configurator, to tell the client the new server address to work with.
    Remember that you’ll find the client configurator in the Windows start menu programs, or by simply running “plastic --configure”.

    Once the client is configured to use the new server address and port the Plastic SCM Client will start.
    We are not done yet, but we are almost there.

    Metadata references matter

    The Plastic SCM client still has a file storing references to the old server address; the name of the file is “plastic.wktree”. The workspace tree file, unlike the “plastic.changes” file, can be easily updated to fetch the new server address, so it’s safe to preserve it during the transition.

    The “plastic.wktree” file is a really important file for the Plastic SCM client: it stores the loaded changeset metadata, that is, each file’s name, size, hash, owner, repository and so on. It also stores the server address the workspace is working with. In order to update it, you will need to open a command line window, change the directory to the Plastic SCM workspace path and run a “cm update .” operation.
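    In other words, this step boils down to two commands run from the workspace root (the path below is just an example):

```shell
# Move into the workspace whose metadata still references the old server:
cd /path/to/my/workspace

# Update the workspace; this also rewrites the server address stored in
# the .plastic/plastic.wktree metadata file:
cm update .
```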

    That will update not only your workspace but also the metadata referencing the old server address. Now the Plastic SCM client can continue working, although you may still see the following error message:

    No channel found trying to connect to [colada:8090 (unreachable)]

    That means your Plastic SCM selector is hardcoded to work with a certain repository spec, “code@colada:8090” for example. If that’s the case, issue the “cm sts” command and choose one of the following two alternatives: you can remove the absolute server spec from the selector, preserving only the repository name, or you can replace the old server address with the new one.

    Old selector:
    repository "code@colada:8090"
    path "/"
    smartbranch "/main"

    New selector (Option 1, relative repo spec):
    repository "code"
    path "/"
    smartbranch "/main"

    New selector (Option 2, full repo spec):
    repository "code@tizona:9091"
    path "/"
    smartbranch "/main"

    As an alternative to the “cm sts” command, you can issue a switch operation with an absolute repository spec, providing the new server address, for example:

    cm switch br:/main@code@tizona:9091

    Assuming “colada:8090” is your old Plastic SCM address and “tizona:9091” is the new one.

    Securing a Plastic SCM system

    July 15, 2014


    The two main motivations behind the Plastic SCM security system are:

  • Provide a mechanism to control access to the repositories and restrict certain operations.
  • Define custom policies for both development and deployment. Even in widely open organizations, the access to certain parts of a repository can be restricted, not only for security related reasons but to prevent mistakes.
    After installing Plastic SCM, you can check that any authenticated user of the system has full access granted.

    The first step should be to assign a specific user (or group) as the repository server owner, and then, you can start defining your custom security restrictions. The repository server owner will be the “root” Plastic server user.

    After that, it doesn't matter if you misconfigure a permission for some reason, because this user will always be able to restore it.

    Whether your rules are meant to prevent unwanted access or to enforce certain development policies, you should consider the following:

  • Define the different users and groups that will have access to the Plastic SCM system, and give them the right access on the repository server. Later on, you can customize specific privileges to repositories, branches and even items if required.
  • The next step should be to change the repository server permissions. Changing the permissions to the top level element in the security hierarchy will ensure that all the rest of the objects get secured.
    ACLs and permission inheritance:

    Each user or group (SEcured IDentifier, or SEID, in Plastic terms) with granted or denied permissions will have an entry in a given ACL. Each entry holds allowed and denied permissions.

    Permissions in Plastic SCM are inherited from the top object in the hierarchy (the server) down to the lower ones.

    This way, it is very simple to set a group of permissions at the server level that will apply to all the inheriting objects: repositories, branches...

    Disable or deny a permission?

    This is a question that you will probably need to answer when defining your permission policies. You need to take into account that disabling a permission is not the same as denying it.

    Consider the following example: a certain user is a member of the “developers” group and, at the same time, of the “integrators” group. Developers are not allowed to commit to the “main” branch, but integrators certainly are.

    If we actively deny the checkin permission to the “developers” group on the main branch, this specific user won't be allowed to perform the operation. The permission is granted because he is an integrator, but it is denied because he is also a developer, and a denied permission prevails over a granted one.

    In contrast, if we disable the permission (but do not deny it) for the “developers” group, he will be allowed to perform the checkin operation on the main branch because he also belongs to the “integrators” group, and combining a disabled permission with an allowed one results in an allowed permission.

    Case studies:

    After this brief introduction to the Plastic SCM permissions system, I would like to review some examples that may help you to implement different permission policies based on your organizational needs.

    Case 1: Restrict permissions for integration branches

    Imagine you are developing your project using a branch per task branching pattern. Developers create task branches to perform their work.

    There are two different groups of people involved in the development: integrators and developers (but certain developers can act as integrators too).

    The recommended workflow is the following: developers can’t "checkin", "applylabel" or "rmchangeset" on the integration branches; only integrators can perform these operations.

    To handle this scenario, the proposed permissions configuration is:

    Disable the "checkin", "applylabel" and "rmchangeset" permissions to developers (not deny but disable). Integrators will have these permissions allowed, but the developers won´t be able to execute the mentioned operations.

    Remember that a user belonging to both groups will have the operations allowed (combining a disabled permission with an allowed one results in an allowed permission), while users belonging only to “developers” won’t be able to perform the operations.
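    As a sketch, Case 1 could be configured from the command line with "cm acl". The group and repository names below are invented, and the exact flag grammar for disabling versus denying a permission may differ by version (the permissions view in the GUI is the usual way to do this), so check "cm acl --help" first:

```shell
# Disable (remove, not deny) the three permissions for developers on the
# integration branch of a hypothetical "code" repository:
cm acl -group=developers -allowed=-checkin,-applylabel,-rmchangeset br:/main@code

# Keep the same permissions explicitly allowed for integrators:
cm acl -group=integrators -allowed=+checkin,+applylabel,+rmchangeset br:/main@code
```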

    Case 2: Restrict access to branches:

    In this second scenario, we have a development team that has the following roles: a system administrator group, a development group and an integration group.

  • Developers can only work on development branches.
  • Integrators can only make changes on integration branches.
  • Administrators have full access.
  • If a user belongs to several groups, he will have all the combined benefits (if a user is both a developer and an administrator, he should be given full access).

    Steps to set up the scenario:

  • The "mkbranch" permission will be disabled at the repository level for both integrators and developers.
  • Development branches: integrators will have the "checkin" and "applylabel" permissions disabled. Developers will have the permissions allowed.
  • Integration branches: developers will have the "checkin" and "applylabel" permissions disabled. Integrators will have all permissions allowed.
    Case 3: Restrict access to directories:

    In this case study, we have two different user profiles working on a branch: developers and artists. And we want to restrict the access to specific directories per user profile.

    A possible solution for this scenario could be:

  • Developers have all the path permissions enabled.
  • Artists have all the path permissions enabled, except for the "src" folder.
    Note: if you need to completely hide a directory from a user, the best solution is to create a new repository per user profile using Xlinks and then customize the "view" permissions per repository.


    We have reviewed the main Plastic SCM features to secure a server, and also some classic scenarios where you can customize the rules depending on your requirements.

    What are the permission policies that you are using in your company? Could you share your configuration with us?

    How to undo a merge

    July 8, 2014

    Reasons to undo a merge

    There’s no software free of bugs, which means that sooner or later you’ll deal with a branch that introduces one.

    Hopefully your automatic test suite will detect it; otherwise an unhappy customer will do it for you. In either case, you will need to revert a merge to remove the functionality that is causing the issue, and then you’ll have to make the release stable again.

    There’s also another reason to undo a merge that doesn’t mean you’ve a nasty bug: sometimes you simply need to remove a task branch because the team changed their mind and decided to postpone it to a future release.

    How to undo a merge (subtract a merge) with Plastic

    Plastic SCM provides an easy way to revert a merge operation while keeping the code history. We call it “subtractive merge”, and it is much easier than it sounds.
    I’ll explain how it works with an example. Consider the scenario displayed in the image below: I've just merged the “/main/issue32” and “/main/issue33” task branches into the “/main/Release_500” integration branch.

    We strongly recommend running your test suite (or at least part of it, depending on how fast it is) after each task branch is merged. This way you can check that everything still works once merged together.

    Sadly it wasn’t that easy with the “issue32” branch; this branch introduced an issue that only showed up when it was combined with the branch “issue33”.

    Now that we know where the issue is coming from, we need to move forward with the release. The whole “/main/Release_500” branch is affected by the issue, so we need to take “issue32” out of the release being built.

    In order to do that, we will subtract changeset 178, the one that introduced the changes of “issue32” into the “Release_500” branch.

    A subtractive merge from the changeset #178 does the job (red merge link), returning the branch to a stable status.
    The changeset #180 is now ready to be integrated into the “/main” branch in order to generate a new release.

    Merging back from a branch that has been subtracted

    Sometimes the release can’t continue once the faulty bugfix has been removed. Sometimes we can proceed with the release and simply merge the “reopened” task later on.
    In both cases we’ll need to merge “issue32” again.

    The developer continued working on “issue32” and created a new changeset to solve the problem.

    Once the “issue32” task branch is ready again to be reintegrated you will need to perform the following actions:
    • Cherry-pick (ignoring traceability) the subtracted changeset (purple merge link).
    • Merge from the task branch again (green merge link).

    You’ll need the cherry-pick operation to bring back all the changes removed by the subtractive merge, and then the second merge from the “issue32” branch will work as the antidote to the issue. The cherry-pick has to ignore traceability because the source and destination changesets are already connected, so a regular cherry-pick would find nothing to merge.

    Finally the release branch is fully tested and it needs to be merged into “main” to label the new “BL500” release.


    Steps summary

    • Subtractive merge from the failing changeset or branch.
    That will remove the feature or bug from the destination branch. If you also want to re-integrate the branch later, you will need to:
    • Cherry-pick (ignoring traceability) from the changeset or branch subtracted.
    • Merge the source task branch again to get the latest changes.

    Here you have a gif summarizing all the steps together:
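    From the command line, the first step could look like the following sketch. The changeset number comes from the example above, while the flag names are assumptions, so verify them with "cm merge --help" in your version:

```shell
# In a workspace switched to the release branch:
cm switch br:/main/Release_500

# Subtractive merge: remove the changes introduced by cs:178 while
# keeping the full history:
cm merge cs:178 --subtractive --merge
cm ci -m "Subtract issue32 (cs:178) from Release_500"

# The later cherry-pick (ignoring traceability) and the final re-merge
# are typically driven from the Branch Explorer.
```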

    Merge refactored code with ReSharper and SemanticMerge

    June 17, 2014

    Hi there!

    Most of you probably already know, but we're running a webinar today to explain how to deal with merges involving code being refactored using ReSharper.

    I'll be hosted by the folks at JetBrains and will cover a set of different refactors in C# + Visual Studio + ReSharper. Then I'll be showing how to get the code merged with SemanticMerge.

    If you're interested, it will start at 16:00 CEST (or 10:00 AM EDT), and you can register here.

    New Plastic is now out

    May 23, 2014

    We're releasing a couple of new versions today:

    • A new release of SemanticMerge, this time 1.0.65
    • And a new 5.0 Plastic release including a few bug fixes

    New features

    The new release includes a few new features, requested both in UserVoice and by support.

    Pending changes - "moved file detection" no longer limited to 50%

    This one closes a UserVoice request that has been around for a while.

    Now, when you need to match similar files manually in 'pending changes', you can detect files with a similarity level below 50%, something that wasn't possible before. As you might guess, not a difficult change, but it still closes a user request :P.

    Improvements in 'cm find review'

    The 'cm find review' command is now able to filter the output by the following fields:

    • title: The title of the code review.
    • targettype: 'branch' or 'changeset'.
    • target: The element being reviewed, i.e. a branch spec, a changeset spec, or an object id.

    A few examples:

     cm find review where targettype = 'branch'
     cm find review where targettype = 'branch' and target = 'br:/main/br1'
     cm find review where title like '%br2%'

    listlocks improvement

    Now the 'cm listlocks' command shows the user name instead of the SID.

    And bug fixes

    • This is one of the key fixes included in the release, related to the checkin operation: fixed an issue when performing a checkin involving items in an xlinked repository and its parent repository. If the checkin operation finishes successfully in the xlinked repository but the checkin in the parent repository fails, the xlink target is now properly updated to the newly created changeset in the xlinked repository.
    • When the 'listlocks' command was used with the '--onlycurrentuser' option under an authentication mode that uses SIDs, no locks were returned even though the current user had locked items.
    • The 'cm find revs' command was printing unexpected paths in the output with certain 'where' clause filters.
    • Merge view: The 'open contents' and 'diff' actions weren't available for xlink conflicts.
    • GUI: Merge view: Improved 'view contributors' dialog. Added scroll to the 'ancestors' textbox when the merge operation detected more than one ancestor (recursive merge).
    • The 'cm ls' command was failing if the workspace contained xlinks and the '--tree' or '--selector' options were used together with the '--xml' or '--format' options while the '-R' option was not used. Not an easy one to reproduce, definitely!
    • Fixed a 'CmItemLoadedTwiceOnTreeException' when performing a checkin after a merge.

    Please find more details reading the release notes and do not hesitate to contact us.

    Plastic SCM new releases

    May 11, 2014

    It’s been a while since my last blog post about a new Plastic release although we’ve been quite busy releasing new versions on a really frequent basis.

    Here goes a diagram showing the new releases we’ve created since February 25th:

    We’re putting a lot of focus on the 5.4 release (available for download in the ‘labs’ section, and *really* pretty stable), which is where we’re merging most of the new features. We are also making some maintenance releases for installations still using 4.1 (basically improvements for *big* servers under heavy load, which were merged into 5.0 and 5.4 too). And of course we also created a few 5.0 releases, the ‘official’ version, which gets bug fixes and non-disruptive changes (frozen API).

    Overview of the 5.0 releases

    Here goes an overview of the key features released in the last 5.0 versions:


    • Lots of improvements in the Eclipse plugin (and more to come).
    • Improved CLI arguments and help. We’re improving the CLI to ease automation. Much more coming in the next releases, together with a full review of the help.
    • Improved JIRA extension.
    • New “plasticlogstats” options. It is now able to print XML output for easier parsing. More about the tool here.
    • Performance and memory usage improvements (check the release notes for details).
    • Improvements and added features in the Maven plugin.
    • New Polarion plugin.

    What are we doing in 5.4?

    Well, we’ll be publishing soon a section on the website about ‘what is new in 5.4’, but before that let me highlight a few interesting features we’ve been working on.

    We’re working side by side with some game studios to make Plastic *the* version control for game developers, and in fact 5.4 is already the version we recommend to game teams.

    It is not an easy task since game teams have quite particular versioning requirements, but it is a really exciting challenge for the entire team and the results so far are promising.

    Many teams in the game industry are moving away from previous-generation version control systems to Plastic, which they find a good option since it brings together distributed version control, centralized workflows, and support for big binaries.

    Here is the list of some of the features that you can already see in 5.4:

    • WAN optimized network transfer: 3 times faster than TCP when latency is high. Designed to connect teams on different continents.
    • Submodules: a more flexible way to manage thousands of repositories (not only for games but big teams in general).
    • Horizontal scalability: cloud-based technology (available on-premise) lets you add more servers to handle load (requests are evenly distributed among the available workers).
    • Transformable workspaces: client specs to locally transform the project structure in the workspace.
    • P4Sync: bi-directional sync with Perforce repos to ease transition and migration.
    • Fast-update: advanced update with minimal disk interaction. For game developers on huge projects.
    • Multi-core checkin: makes better use of fast disks and networks when uploading (and compressing) huge amounts of data. Can be up to 3 times faster than regular checkin. Beats competitors.
    • Multi-core update: takes advantage of multi-core workstations during big data download to workspaces. Uses more cores to uncompress and write to disk, so the only limit will be the available network bandwidth.
    • Improved blob storage: when a single working copy is bigger than 150 GB, the underlying repository needs to be heavily optimized for large data. This is what improved blob storage is all about, both on the server side (including replicas) and on the update cache (proxy server) side.

    Does it mean 5.4 is *only for game devs*? Definitely not. Every feature in the list benefits all kinds of users and there’s also a big number of usability improvements included.

    And, BTW, one of the key efforts in 5.4 is the development of native Mac (Cocoa) and Linux (GTK) GUIs: they will replace the current X11-based multi-platform GUI, creating a richer native experience.

    Stay tuned!

    The Plastic SCM Team
