Plastic meets Docker

January 22, 2015

Docker seems to be the new trend in application virtualization, and it is growing so fast that even Microsoft is getting ready to run Docker containers on Azure. They are also getting Windows ready to be dockerized.

This blogpost explains how to run a pre-built Plastic server Docker image that we have published at https://hub.docker.com. It explains the container structure we’ve prepared and how to isolate the server container from the data container to ease upgrades.

Meeting Docker

I’ve been playing with Docker these days, studying the integration possibilities it offers and finding an initial approach to wrap our beloved Plastic SCM server in a Docker container. And after some hours toying with our own Docker containers, I must say that I’m very excited about its potential!

But first things first: what actually is Docker?

Docker: Sharing standardized module containers

Docker, as they define themselves, is an open platform for developers and sysadmins to build, ship and run distributed applications. It allows developers to create their own isolated environments according to their needs, define communication points (disk volumes or TCP/UDP ports) and save these settings into a Docker image.

Once the image has been set up, any other developer or sysadmin can run a new instance of the image, instantiating it into a Docker container.

Each one of those containers is run and virtualized on top of the Docker server, ensuring there’s no interference between them or the host OS.

Also, any change made in a container is persistent: each container has its own local filesystem, managed by Docker, which won’t be accessed by any other container unless explicitly shared.
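The server/data separation mentioned in the intro can be sketched with the Docker CLI as follows. Note that the image name plasticscm/server, the volume path and the port are hypothetical placeholders for illustration, not the actual published image:

```shell
# Data-only container: owns the volume where the server keeps its databases.
# Image name, volume path and port below are illustrative placeholders.
docker create --name plastic-data -v /opt/plasticscm/db busybox

# Server container: mounts the data container's volume, so upgrading the
# server is just a matter of replacing this container.
docker run -d --name plastic-server -p 8087:8087 \
       --volumes-from plastic-data plasticscm/server
```

With this layout you can remove and recreate plastic-server without touching the repositories stored in plastic-data.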



Native Linux GUI – gtkplastic

December 24, 2014

It is finally here! We went all the way from our old portable user interface (MonoForms, anyone?) to the brand new, GTK-based, native Linux GUI. And here is how it looks:

The more I get used to the GTK look and feel, the more I like it. Needless to say, it is the easiest to program for compared to Mac and WinForms (and WPF!).
This new “go native” GUI initiative is not only for Linux: as you all know, we’ve released a Cocoa-based Mac OS graphical user interface which is under heavy development now.
The goal is to get rid of “one size fits all” solutions and provide the native look and feel on Windows, Linux and Mac.


SQLite 3.8.x with WAL support

December 3, 2014

Since Plastic 5.4.16.625 (Nov 21st, 2014) we support the new 3.8.x SQLite series, which adds WAL (Write-Ahead Logging).

In short: if you’re using SQLite and Plastic, it will now work much better than before. The database supports several simultaneous reads, and there are fewer chances for write operations to get blocked waiting for readers to finish (and vice versa). SQLite is a good option to store your repos on your laptop, and this improvement means Plastic will work more smoothly than before. You can be refreshing your Branch Explorer while the SyncView is still recalculating, something that was not possible before this version.
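WAL is a standard SQLite journal mode, so you can see the switch in action outside Plastic. Here is a minimal sketch using Python’s built-in sqlite3 module (the file name is just an example):

```python
import sqlite3

# Open (or create) a file-backed database and switch it to WAL mode.
# In WAL mode readers don't block the writer, and the writer doesn't
# block readers, which is what enables simultaneous reads.
con = sqlite3.connect("example.db")
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal"
con.close()
```

The mode is persistent: once a database file is switched to WAL, subsequent connections keep using it.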



Configuring ignored items on your workspace

November 27, 2014

There are files that you don’t want to submit to version control: private IDE settings, intermediate build results, binaries and so on.
This blog post explains how to handle this in Plastic by manually editing the ignore.conf file. Alternatively, you can ignore files using the GUI, as explained in the User Guide. Whether you end up using the GUI or editing the file, this blog post will help you understand how the ignore rules work.

Why do you need to ignore files

This is a familiar scenario that every developer can relate to. You have your workspace nicely set up and linked to its appropriate version control repository. All source files are checked in and organized.
But what happens after you build your project?
Lots of binary files will be created and the version control system will detect them as new!
You might experience the same issue with any other kind of non-permanent file: IDE settings files, temporary files created while you are testing your code, etc.
Obviously, you don’t want these files to be checked in; it could mean you have to review an unnecessarily long list of changes every time you want to check in source code modifications.
Or worse: if you decide to forget about these files and just check them in, your changesets will be cluttered with irrelevant information, making traceability more difficult and allowing errors to slip through.
But don’t worry! This is where the Plastic SCM ignored items come into play.

What is an ignored file

In Plastic SCM, an ignored item is just a private item which will not be added to the pending changes list unless you explicitly tell Plastic to do so. This is achieved by placing rules inside a file called ‘ignore.conf’, located at the workspace root. Each private item whose path matches one of those rules will be ignored.

How ignored files work

To better understand how this works, let’s have a look at an example.
If you use Microsoft Visual Studio as your IDE, binary files are placed by default at directories called ‘bin’ and ‘obj’.
As we discussed above, you usually wish to prevent those files from being detected as private/new. It’s as simple as adding the following lines to the ‘ignore.conf’ file:
bin
obj
This means that any directory called ‘bin’ or ‘obj’ -along with all of their children, recursively- will be matched and, therefore, excluded from the private objects list.
We can see the result on the next two figures: at first we have almost 100 private files created by the compiler, whereas once we add the rules we see just some IDE settings files (and the ‘ignore.conf’ file, of course!).
These are the pending changes after the build:

And then the pending changes view after the binary directories are configured to be ignored:

Advanced ignore rules

Naturally, the rule format is not limited to directory names. You can also choose to ignore all .suo files by just adding a new rule:
*.suo
This will remove the three remaining private files in our example (see next image), leaving our pending changes view with the ‘ignore.conf’ file alone. Now we would like that file to be ignored too, but without affecting any other item. We want to ignore that specific file, rather than every .conf file or every file called ‘ignore.conf’ (we might have more in our workspace tree that we need included at some point). The appropriate rule is:
/ignore.conf

We can see the results on the next figure: the main ignore.conf has disappeared but some new .conf files haven’t been ignored.

Also, we can decide to remove the AssemblyInfo.cs files from version control. We would just need to delete them on Plastic SCM and then add a line like this in our ‘ignore.conf’ file:
AssemblyInfo.cs
Please note that the rules are case sensitive, so we have to type them accordingly.
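Putting together the rules discussed so far, the ‘ignore.conf’ for this example workspace now reads:

```
bin
obj
*.suo
/ignore.conf
AssemblyInfo.cs
```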

Exception rules

But we’re not done yet! Plastic SCM supports exception rules, too. Exception rules are regular rules, preceded by the ‘!’ symbol, that have the opposite effect: they force private items to be always present as new items. To illustrate this, let’s say that there’s a specific binary directory that we want to include if it appears. We only need to specify it as an exception rule:
!/LibGit2Sharp/bin
This will bring back the bin/ results directory of the LibGit2Sharp project.

Wildcard expansion

Both kinds of rules support wildcard expansion. For example: we wish to ignore all files and directories under directories called ‘temp’, but we don’t want .log files directly under them to be ignored. This behavior is achieved with the following rules:
temp/**
!temp/*.log
In the previous rules, the “**” string means “any sequence of characters”, whereas a single ‘*’ means “any sequence of characters excluding path separators”.
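As an illustration of these two wildcard semantics (this is a sketch of the matching rules, not Plastic’s actual implementation), the rules can be translated into regular expressions:

```python
import re

# Sketch of the two wildcards: "**" matches any characters, while "*"
# matches any characters except the path separator "/".
def rule_to_regex(rule):
    pattern = ""
    i = 0
    while i < len(rule):
        if rule.startswith("**", i):
            pattern += ".*"      # "**": any characters, including "/"
            i += 2
        elif rule[i] == "*":
            pattern += "[^/]*"   # "*": any characters except "/"
            i += 1
        else:
            pattern += re.escape(rule[i])
            i += 1
    return re.compile(pattern + "$")

ignored = rule_to_regex("temp/**")
kept    = rule_to_regex("temp/*.log")

print(bool(ignored.match("temp/a/b.o")))   # True: ignored by temp/**
print(bool(kept.match("temp/build.log")))  # True: rescued by the exception
print(bool(kept.match("temp/a/x.log")))    # False: "*" does not cross "/"
```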

Conclusion

So, this is it! In this post we’ve covered the different possibilities of ignoring private items that Plastic SCM offers, as well as some combinations of them. Feel free to experiment with your own workspace, and don’t hesitate to share your thoughts in our forum!


Orchestrate your development with exclusive checkouts

November 4, 2014

Over the last months, more and more videogame companies have begun to use Plastic SCM. One of the most important requests from this kind of company is the ability to perform exclusive checkouts on certain controlled files.

When an artist is modifying a texture or a character, he wants to be sure that the file is locked for him and that other artists will not touch it at the same time. There are also files that can’t be easily merged, or can’t be merged at all: images, animations, simulation data…

In this blog post, I will explain how to configure this scenario with Plastic SCM in both centralized and distributed working environments.

How does it work?

Each time the checkout operation is about to be performed, the client asks the server whether the file needs to be exclusively checked out or not.

  • If the file was already locked by a different user, it can’t be checked out.
  • If it’s not locked, Plastic checks whether the file matches any of the rules defined in a “lock.conf” file. If it matches, the file will be locked.
Configure exclusive checkout in a centralized environment

    We just need to create a “lock.conf” file and store it in the server folder. As we only have one centralized Plastic server, it will also act as the lock server.

    rep:assets lockserver:localhost:8084
    *.obj
    *.fbx
    *.png
    Character_1.vcs
    

    The “lock.conf” file format is very simple:

  • “rep:assets” is the name of the repository where we want to configure exclusive checkouts.
  • “lockserver:localhost:8084” is the name (or IP) and port of the lock server.
  • “*.obj” is a rule matching the elements we want to lock. Both complete paths and patterns are supported, with one rule per line. Each time we check out a path that matches any of the rules, that path is exclusively checked out so that no one else can perform a checkout on it at the same time.

Configure exclusive checkout in a distributed environment

    In a distributed scenario, we have different Plastic servers spread in different locations.

    In that case, to configure an exclusive checkout mechanism, we have to select one server to be the lock server. The “lock.conf” file needs to be stored on all the Plastic servers, but there will be only one lock server. Let’s make it clearer with an example:

    E.g.:

    UserA -> ServerA, with a "lock.conf" file having the lockserver configured to itself (localhost:8084):

    rep:assets lockserver:localhost:8084

    UserB -> ServerB, with a "lock.conf" file having the lockserver configured to ServerA:

    rep:assets lockserver:ServerA:8084

    This way, ServerA works as the central node that manages the locks. ServerB will ask ServerA whether a given file is exclusively checked out or not.

Requiring a head changeset

    To ensure that the exclusive checkout has the head changeset of the working branch as its base, we need to add the "requirehead" keyword to each rule in the "lock.conf" file.

    rep:assets lockserver:localhost:8084
    requirehead *.obj
    

    During the checkout, on the client side, if the rule says "requirehead", the client checks whether the working changeset is the head of the branch.

    If it isn’t, the exclusive checkout is cancelled and the user gets a message asking them to update the workspace to the last changeset of the branch.
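The client-side decision described above can be sketched like this (an illustration only, not Plastic’s actual code; Python’s fnmatch is used to approximate the pattern matching, and requirehead handling is left out):

```python
from fnmatch import fnmatch

# lock.conf rules for the "assets" repository, as in the example above.
LOCK_RULES = ["*.obj", "*.fbx", "*.png", "Character_1.vcs"]

def needs_exclusive_checkout(path, locked_paths):
    """Return (allowed, exclusive): whether the checkout may proceed,
    and whether it must take an exclusive lock."""
    if path in locked_paths:
        return False, False          # already locked by another user
    exclusive = any(fnmatch(path, rule) for rule in LOCK_RULES)
    return True, exclusive

print(needs_exclusive_checkout("scene/tree.fbx", set()))   # (True, True)
print(needs_exclusive_checkout("src/main.c", set()))       # (True, False)
print(needs_exclusive_checkout("logo.png", {"logo.png"}))  # (False, False)
```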

Command line operations

    There are also three interesting commands that may help you improve your workflow in exclusive checkout scenarios:

    cm listlocks server:port

    This command shows the locked items on a server. You can filter the locked items using the following options:

  • --onlycurrentuser: Filters the results showing only the locks performed by the current user.
  • --onlycurrentworkspace: Filters the results showing only the locks performed on the current workspace (matching them by name).
    E.g.:

    cm listlocks localhost:8084

    cm unlock server:port guid

    This command lets you undo item locks on a lock server.

    It’s important to note that only the administrator of the server is able to run the 'cm unlock' command.

    To specify a GUID, use the 32-digit, hyphen-separated format (optionally enclosed in braces):

    {00000000-0000-0000-0000-000000000000}
    or 00000000-0000-0000-0000-000000000000
    

    E.g.:

    cm unlock localhost:8084 2340b4fa-47aa-4d0e-bb00-0311af847865 bcb98a61-2f62-4309-9a26-e21a2685e075

    cm fileinfo item_path --format=str_format

    The fileinfo command provides detailed information about items in the workspace, and makes it possible to check whether an item is locked or not.

    E.g.:

    cm fileinfo character1.png --format={IsLocked}
    True
    



    Plastic SCM email notifications!

    October 28, 2014

    People love notifications! So let’s add some to Plastic SCM.

    This is a DIY project; it’s not something you’ll find built into Plastic SCM. I came up with this quick solution to get email notifications when certain Plastic SCM operations are triggered. If you want built-in notifications in Plastic, you can use the uservoice page to vote for them.
    Now, you can download the tool from here.

    Triggers

    The entry point for third party tools is the trigger system. Using triggers you can hook into important operations and customize their behavior.

    I’ll take the following ones to start coding a simple notification center:

    • After-checkin
    • After-mkreview
    • After-editreview
    • After-mklabel

    With the four triggers above, I will be able to get an email when a checkin is performed, when a code review is created or edited, and finally when a new label is created.

    The Plastic SCM triggers provide extra information both before the operation is run (before triggers) and after it has finished (after triggers). You can review all the details here. Consuming the standard input or reading the trigger environment variables will help you create smarter, more customizable notifications, such as getting notified only when the “README.txt” file has been changed on the release branch by a certain user.

    This is a high level diagram explaining how the tool works:

    The tool

    In order to attach the tool to a Plastic SCM trigger you will need to use the “cm mktrigger” command as follows:

    cm mktrigger after-mklabel "mklabelnotifier" "C:\triggers\plasticnotifier.exe aftermklabel C:\triggers\mklabel.txt"

    You need to specify the trigger type, a name to easily recognize the new trigger and the tool you want to run when the operation is triggered.

    The “plasticnotifier.exe” tool only needs two parameters: the first one is the trigger type and the second is the configuration file for that trigger type.

    Configuration files

    The source code understands three different file formats, which serve as examples you can use to create additional ones.

    Mail list

    The most basic configuration file is the list of email recipients the trigger will use when fired. It looks as follows:
    developer1@yourCompany.com
    developer2@yourCompany.com
    admin1@yourCompany.com
    This format is valid for the “aftermklabel” trigger. The “AftermklabelTrigger” class reads the entire list and starts sending emails.

    Translation file

    For some triggers you will need a mechanism to translate the Plastic SCM user name into an email address. This file is used by the “AfteReviewTrigger” class to translate the code review assignee user and obtain the email address to send the message to.

    This is what the config file looks like:
    dev1;developer1@yourCompany.com
    dev2;developer2@yourCompany.com
    dev3;developer3@yourCompany.com
    Each line has two fields separated by a semicolon: the first one is the Plastic SCM user ID and the second one is the user email. The trigger will create an environment variable with the code review assignee’s user ID. The “plastic notifier” uses this file to obtain the email address to send notifications to.
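Parsing this translation file is straightforward; here is a sketch in Python (the real tool is written in C#, so this is just to illustrate the format):

```python
# Sketch of parsing the user-to-email translation file described above:
# one "user;email" pair per line, semicolon-separated.
def load_user_emails(lines):
    emails = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        user, email = line.split(";", 1)
        emails[user] = email
    return emails

table = load_user_emails([
    "dev1;developer1@yourCompany.com",
    "dev2;developer2@yourCompany.com",
])
print(table["dev1"])  # developer1@yourCompany.com
```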

    Complex file

    The last file format has several fields and subfields, take the following content as an example:

    %_%message%_%
    Changeset {0} created with the "{1}" comment.\r\n
    Changeset content:\r\n{2}
    %_%subscribers%_%
    developer1@yourCompany.com;br:/main/experimental;br:/main
    admin1@yourCompany.com;*
    This file format is used by the “AfterCheckinTrigger” class and has two parts: the message and the subscribers.

    The message is the email body that you can customize.

    Three extra fields are available for the message: “{0}”, “{1}” and “{2}” will be automatically replaced by the changeset specification, the changeset comment and the files changed. Again, this information is provided by the trigger through environment variables and the standard input.

    The subscribers part is similar to the “Mail list” format explained above, but with extra information. After the user email you can write, separated by semicolons, the branches you want notifications for. Using a star (*) you will get notifications for all the checkin operations done on the server. In the example above, the user “developer1” will only get notifications for the /main and /main/experimental branches, while the user “admin1” will get an email for every single checkin done on the server.
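The subscriber matching described above can be sketched in Python like this (an illustration of the format only, not the actual C# implementation):

```python
# Sketch of the subscriber matching rule described above: each line is
# an email followed by branch filters, "*" meaning every branch.
def subscribers_for(branch, config_lines):
    recipients = []
    for line in config_lines:
        email, *filters = line.strip().split(";")
        if "*" in filters or branch in filters:
            recipients.append(email)
    return recipients

lines = [
    "developer1@yourCompany.com;br:/main/experimental;br:/main",
    "admin1@yourCompany.com;*",
]
print(subscribers_for("br:/main", lines))
# ['developer1@yourCompany.com', 'admin1@yourCompany.com']
print(subscribers_for("br:/task001", lines))
# ['admin1@yourCompany.com']
```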

    Further work

    With the tool provided, you can get notifications when:
    • A new label is created.
    • A new code review has been assigned to you.
    • A new checkin in a certain branch has been performed.
    But this is just a small preview of what you can get if you keep working on it. Here are some suggestions:
    • Get an email when permissions are changed.
    • Get an email when somebody tries to remove a repository.
    • Get an email when certain files are changed.
    The source code is written in C#, but you can grab it and translate it into any other language. You can improve and complete the code, and, if you do, please don’t forget to share it!

    Happy notifying!





    How the 2d version tree works

    October 27, 2014

    This article explains how the 2d version tree works and what it exactly renders.

    We realized the 2d version tree is one of the least understood features in Plastic, so a detailed explanation is definitely worth it.

    Item-level trees vs changeset trees

    Plastic displays the evolution of a repository by rendering the Branch Explorer. It is an overall view of what happened at a global level, instead of going file by file.

    While this is very useful in almost all cases, there are users who miss the old days of individual version trees. Maybe they have a pre-Plastic 4.0 background, maybe they come from good ol’ ClearCase, or maybe they just find it more natural.

    Plastic SCM works on a changeset basis: changesets are the core of the system and not individual file histories. The reason is merge tracking: merges are tracked at the changeset level and not the individual file revision level.

    We actually changed this when we moved to version 4 a few years ago. Before that, merges were tracked at the individual item (directory/revision) level.

    What does this mean? Well, when you had to merge, let’s say, 1200 files (or directories), Plastic 3.0 had to walk 1200 different history trees finding merge links and ancestors. In 4.0 and beyond, it only walks one single tree. There’s no way for 3.0 to outperform 4.x (and later) in merge speed, because the old version simply had to do tons of extra work. I won’t cover all the details, but this radical change didn’t only benefit merge performance: it also improved overall system speed and the distributed features.

    A simple 2d tree scenario

    Let’s go through a very simple branch/merge scenario and let’s follow the history of a single file inside our repository. The following figure shows the Branch Explorer of our repo and where the file “foo.c” was modified.

    As you can see the file was added in changeset 1, later branched and changed on changeset 5, and this change was merged back to “main” in changeset 7.

    What does the version tree of the “foo.c” file look like?

    Look at the following figure: you probably expect something like the graphic on the right, but this is not how Plastic works. Plastic actually created only 2 revisions of “foo.c” so far: one created on changeset 1 and the second one created on changeset 5.

    You may wonder what happened during the merge: well, changeset 7 simply includes the revision loaded by changeset 5, because there is no merge conflict and hence no need to create an extra revision for the file. This is what we call a “revision replacement”: changeset 7, which is a child of changeset 4, simply replaces the loaded revision of “foo.c” as the result of the merge.

    You probably expected something like the graphic on the right of the figure above, and in fact this is how things worked in Plastic 3 and before, but the underlying merge tracking mechanism changed in 4.0 and beyond. There’s no need to create extra revisions of the file for trivial merges, which greatly reduces the number of operations to be performed.

    Think about it: suppose you added 10k files on a branch and later merged them back to main: 3.0 was actually creating another 10k revisions of the files on main, while 4.x and beyond simply say “hey, load them on the main branch, that’s all” saving precious time.

    So, how does the 2d-version-tree render the previous case? Check the following figure:

    As you can see the 2d-version-tree decorates the “real” version tree of the file with information from the Branch Explorer (changeset history) so you can better understand what is going on with the file.

    The changesets marked as “U” mean the file was unchanged in those changesets, but they are still rendered so we can understand how the file evolved through the repo history. Looking at this diagram you can understand that the revision changed on branch1 was the one finally labelled as BL001. Looking at the “raw” tree (or the history of the file) you wouldn’t have enough information to understand this.

    A slightly more complex 2d-tree scenario

    Look now at the following Branch Explorer:

    It is slightly more complex than the previous one, since it involves 3 branches and a couple of merges. Our file “foo.c” was simply added in changeset “1” and changed in “9”. This is what the “real” version tree looks like:

    Looking at this tree you’d never understand what actually happened to the file! How did it end up in branch2? Was it ever merged? You can’t tell.

    Now, let’s look at the 2d-version-tree:

    It still shows there are only two revisions of the file, but by rendering the “unchanged changesets” you can now understand how the file evolved and how it ended up being labelled in BL001.

    A 2d-tree with a file concurrently changed

    The cases so far didn’t run into merge conflicts: foo.c wasn’t modified in parallel and involved in a real merge.

    The following Branch Explorer renders a third scenario where foo.c is finally modified in parallel and merged:

    Now foo.c is added in 1 as before but changed both on 4 and 9.

    This is what the raw version tree looks like:

    Whenever we have a *real* merge we’ll be able to render a merge link between two revisions, which greatly helps in understanding the scenario. But still, the graphic above falls short of explaining what actually happens to the file, doesn’t it?

    You didn’t do a merge from “branch2” to “main”, so why do you have such a merge link?

    That’s why the 2d-version-tree solves the scenario as follows:

    Conclusion

    I hope that reading through the previous cases helps you understand how the 2d-version-tree works and gives you a better idea of why it explains the history the way it does.

    Don’t hesitate to reach us if you have any questions.


