Who we are

We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't choke on huge binaries and repos. We also develop the GUIs, merge tools and everything else needed to give you the full version control stack.

If you want to give it a try, download it from here.

We also code SemanticMerge, and the gmaster Git client.

Plastic 2 on Solaris

Thursday, January 17, 2008 · Pablo Santos · 1 Comment

I've just finished setting up a Plastic BL081 (preview 2) server on Solaris.

$ uname -a
SunOS atenea 5.11 snv_34 sun4u sparc SUNW,Sun-Blade-1000
$


I downloaded the latest 2.0.3 Firebird server for Solaris 10 SPARC, installed it, and ran into some minor trouble getting it to run.

I tried to start the plasticd server, but it failed, complaining that it wasn't able to create the databases. Odd.

Once installed, Firebird was refusing all connections. The Classic server was listening (you could connect with telnet to port 3050), but it was giving errors: it couldn't find the libfbembed.so library!

Well, it looked like an LD_LIBRARY_PATH issue.
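
A quick way to confirm that kind of suspicion is to list the dynamic libraries the binary depends on; any "not found" entries point at libraries that are missing or outside the search path:

$ ldd /opt/firebird/bin/fb_inet_server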

Looking into /etc/inetd.conf (which is actually a link to /etc/inet/inetd.conf), I found a line like the following:

gds_db stream tcp nowait root /opt/firebird/bin/fb_inet_server fb_inet_server

Well, it seemed that nothing was telling fb_inet_server where its libraries were located.

I tried to run fb_inet_server manually and it failed because the libgcc-3.4.6-sol9-sparc-local package wasn't installed. A quick visit to sunfreeware and a pkgadd were enough this time.
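
For reference, that install step is just a pkgadd of the package downloaded from sunfreeware (assuming it has already been uncompressed into the current directory):

# pkgadd -d ./libgcc-3.4.6-sol9-sparc-local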

Then came the LD_LIBRARY_PATH problem itself: I wrote the following script, named fbserver, in /opt/firebird/bin:


#!/bin/sh
# set the library path and launch the real server (all on one line)
LD_LIBRARY_PATH=/usr/local/lib:/opt/firebird/lib /opt/firebird/bin/fb_inet_server


And then I modified inetd.conf to use the following line:

gds_db stream tcp nowait root /opt/firebird/bin/fbserver fbserver

And finally restart inetd.

But, hey, on Solaris you have to take some intermediate steps:

  • run inetconv to convert the inetd.conf entry into the internal SMF XML format. It will complain if a previous manifest is already there (mine was at /var/svc/manifest/network/gds_db-tcp.xml; remove it and run inetconv again)
  • then disable and re-enable the gds_db service (the Firebird server) with inetadm
  • finally restart inetd


    # inetconv
    # inetadm -d svc:/network/gds_db/tcp:default
    # inetadm -e svc:/network/gds_db/tcp:default
    # svcadm disable inetd
    # svcadm enable inetd
    #
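
    To double-check that everything came back, you can verify that the converted SMF service is online again and that Firebird answers on port 3050 (the service FMRI is the same one used in the inetadm commands above):

    # svcs svc:/network/gds_db/tcp:default
    # telnet localhost 3050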


    And you're done. I guess there must be an easier way to set up a Firebird server on Solaris, but...

    I used the following db.conf to configure the Plastic database backend:


    <DbConfig>
    <ProviderName>firebird</ProviderName>
    <ConnectionString>ServerType=0;Server=localhost;Port=3050;
    User=SYSDBA;Password=masterkey;Database={0};
    Pooling=true;Connection Timeout=120;</ConnectionString>
    <DatabasePath></DatabasePath>
    </DbConfig>


    Well, remember that the whole connection string goes on a single line.


    Hope it helps!
    Pablo Santos
    I'm the CTO and Founder at Códice.
    I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
    I had the opportunity to see teams from many different industries at work while helping them improve their version control practices.
    I really enjoy teaching (I've been a University professor for 6+ years) and sharing my experience in talks and articles.
    And I love simple code. You can reach me at @psluaces.


    New Release BL063.11!

    Tuesday, January 15, 2008 · mdepedro · 0 Comments


    We are working hard on the new Plastic SCM 2.0, but we are also releasing a new version of Plastic SCM 1.5! You can now download Plastic SCM 1.5 Build 63.11 (internally BL063.11) from http://www.codicesoftware.com/opdownloads2.aspx.
    As a maintenance release of the 1.5 line, it includes bug fixes that improve the usability of the product. The bugs we have fixed are:
  • An error when setting a workspace in the Eclipse plugin.
  • An error when synchronizing checkouts.
  • An error when merging from certain branches.

    We keep on working to give you the best SCM option!



    Real world performance

    Friday, January 11, 2008 · Pablo Santos · 0 Comments

    In my previous post about performance I forgot to mention one important issue. It is an obvious one, but often misunderstood: testbots are not real users. Even if you randomize their behaviour and try to make them look like humans... it is not possible, at least not in a scenario as simplified as the one I described. I mean, we could probably consider (hey, maybe we will in the coming months!) writing some AI code, but that wasn't the case yet.

    Why am I concerned about it? Well, a few months ago I had the chance to read an interesting book on software performance: Release It!. It is interesting and easy to read, and it gives some good ideas about how to survive all sorts of race and performance conditions. Well, it even has a promotional video!

    What did I like best? Well, it is hard to say because, as I said, I liked the whole book, but I found the case studies really interesting, along with topics like: limit your caches, bots are not users (don't develop for QA but for real users), and all the patterns about performance, stability and so on. Yes, you probably think you already know all the tricks in the book, but I think it is good to read about them and learn a little more about topics that are not always so obvious.

    And finally some good news about Plastic: we're currently in the Top Ten list of Component Source best sellers, competing with well-known companies and products like SlickEdit, Telerik or DXperience... And still with 1.5! We're excited about the upcoming 2.0 release. Part of the team is already working on the post-2.0 code, including a new Eclipse integration...


    Plastic stress testing

    Tuesday, January 08, 2008 · Pablo Santos · 0 Comments

    I've been focusing on performance for the last month or so. Basically I've been checking how Plastic works under heavy load.

    It is very interesting to note how behavior differs between a single user accessing the server and several users running operations at the same time. I've always been concerned about speeding up Plastic operations (as you've probably seen from my previous posts), and simulating hundreds of clients against a single server has been a good experience.

    Ok, what was the testing scenario and how did we set it up? Well, we have several "independent" testing agents performing the following operations:

  • Create a workspace
  • Download the latest on the main branch
  • Create a branch (*)
  • Switch to the branch
  • Modify up to 10 files (3 changes each), checking in intermediate changes
  • Check in everything
  • Go to (*)

    The whole process is repeated 5 times (see the wrapper sketch after the listing). From the command line, the operations look like this:


    $ cm mkwk -wk /tmp/-wk
    $ cd /tmp/-wk
    $ cm update .
    $ cm mkbr br:/main/task[ITERATION NUMBER]
    $ cm co file01.c
    # modify the file
    $ cm ci file01.c
    $ cm co file01.c
    # modify the file again
    $ cm ci file01.c
    $ cm co file01.c
    # modify the file (last time)
    # (no check in is done this time)
    # go to another file and repeat the process

    # check in all the checked out files
    $ cm fco --format={4} | cm ci -
    $
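
    To make the "repeat 5 times" part concrete, here is a minimal sketch of how one of these testbots could be driven from a shell script. It only reuses the cm commands from the listing above; the workspace name, the file list and the change contents are illustrative placeholders rather than the ones from our actual suite, and the switch-to-branch step is assumed to be cm switchtobranch (check the exact command name in your cm version):

    #!/bin/sh
    # hypothetical testbot driver; names and file list are placeholders
    WKNAME=testbot01
    WKPATH=/tmp/$WKNAME-wk

    cm mkwk $WKNAME $WKPATH
    cd $WKPATH
    cm update .

    for i in 1 2 3 4 5
    do
        cm mkbr br:/main/task$i
        cm switchtobranch br:/main/task$i   # assumed command name, see note above
        for f in file01.c file02.c file03.c
        do
            cm co $f
            echo "change $i.1" >> $f        # modify the file
            cm ci $f
            cm co $f
            echo "change $i.2" >> $f        # modify it again
            cm ci $f
            cm co $f
            echo "change $i.3" >> $f        # last change is left checked out
        done
        # check in everything still checked out
        cm fco --format={4} | cm ci -
    done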



    Well, a really simple "testbot" which lets you measure how well the server scales up. We have another, more complex scenario with a more complete "testbot" which is able to mark a branch as finished (using an attribute) and jump to the "recommended" baseline. Another "bot" plays the integrator role: it waits until at least 10 branches have been finished, then integrates everything into main (of course it doesn't make clever decisions when a manual conflict arises) and moves the "recommended" baseline forward. This way we can simulate heavy load with very simple and independent test bots.

    How do we launch the tests and gather the results? Well, using PNUnit, the NUnit extension we developed a long time ago. PNUnit is totally open source and will be integrated into the next NUnit release, so maybe it will become better known.

    So basically we're using one machine which runs the Plastic SCM server and the PNUnit launcher. The launcher reads an XML configuration file and tells the agents which tests they should run. This way, using different XML files, we can define different testing scenarios.

    As I mentioned above, it is very interesting to study the server's performance under heavy load. Code that works fast with only a few users tends to be horribly slow when hundreds of clients make requests at the same time. I won't say anything really new here, but the main things we've changed are:

  • Replacing lock statements with ReaderWriterLocks. It can have an impact under heavy load. The problem here is that some profilers tend to identify lock statements as the root of the problem, and this is not always the case. We're currently using AQTime, an excellent product, but I've run into the following problem repeatedly: it tells you a method is eating 12% (for instance) of the time due to a lock. You get rid of the lock and the method is no longer eating 12% but just 0.08%. Sometimes the overall time running inside the profiler improves, but normally you don't get a benefit when the test is run against the standalone server. Ok, fortunately this is not always the case, and you can find lots of performance bottlenecks using the profiler. We're also starting to use the one included in Mono.

  • Reducing remoting operations. We make extensive use of .NET/Mono remoting, and normally reducing the round trips leads to a worse design but better performance, especially when lots and lots of calls are being made.

  • Trying to reduce the number of database operations. Things get really interesting here: some optimizations which make no difference with 1 client can save lots of seconds with a large number of client testbots.

    And what about the numbers? Well, we've tried with up to 40 machines (clients) and 1 server so far. All the clients were running Linux and had exactly the same hardware configuration. We have tried up to 200 simultaneous testbots, but the regular test suite runs 1, 20, 35, 40 and 80 testbots against a single server. It is important to note that a testbot is not the same as a user; it stands for a number of them. I mean, a regular user doesn't create a branch, modify 30 files and check all the changes back in within 6 seconds, which is basically what a testbot does. So in reality we're simulating hundreds (even thousands) of simultaneous users.



    How good are the results? And compared to what? Well, my intention is to publish the entire test suite in a few weeks, so Plastic users can set up the testing environment and check how it performs in their own environments before they make the buying decision, and then you'll be able to really check our numbers. What I can say right now is that we've created exactly the same tests for Subversion and some other SCM products (which I won't disclose yet), and right now (using BL079 as the code base; preview 1 is BL081) we're faster than any of them in the specific scenario I described above. How much faster? Well, from 3 to 6 times depending on the load if you compare us with SVN, for instance. But we still have to run other scenarios to get a more complete view and provide better results.


    Plastic 2.0 preview, step by step tour

    Thursday, January 03, 2008 · Pablo Santos · 3 Comments

    We've just introduced the Plastic 2.0 preview, so now it is time to go through some of the new features, step by step.

    The first thing you've probably noticed is the totally new GUI. We've rewritten the 1.x desktop tool, aiming for a platform-neutral look and feel.



    The image above shows the Plastic SCM GUI introduced with the 2.0 preview. Today I'll just focus on what's new at the GUI level.

    If you download the VMware openSUSE machine, you'll get a preinstalled Plastic client and server, which will help you follow this tour if you're not familiar with Plastic.

    The first and foremost new feature is the ability to handle multiple views. The former interface was constrained to the workspace tree on the left and only one view on the right, so we decided to change it in favour of a multi-view layout, as shown in the following screenshot.

    Our intention? Well, now it is a lot easier to keep the context while navigating through the information. Think about listing the available branches of a project and querying their contents. Previously you would jump from the branch list to the branch content; now both views stay on your screen.

    Now let's have a look at how to navigate the views. Well, to show the main ones you just have to click on the buttons at the top: items, branches, changesets, labels, checkouts and checkouts all users.

    Suppose you're already on the branch view, displaying the following content (as you would see using the sample included in the VMware image).



    Right-clicking on a branch pops up the context menu, and you can view the revisions on the branch, as displayed in the following image.



    Another new feature is the changeset view. Changesets have existed since release 1.0, but they were never first-class citizens in the GUI. Now there is a specific view to list them.



    Do you want to see what has been modified on a certain changeset? Ok, it is quite easy: pop up the context menu and click on show changeset content.



    A new view will be opened displaying the files and directories modified in the selected changeset.



    And now let's introduce what I consider the most valuable new feature from a developer's point of view: the code review helper tool. There are a handful of books and essays written about the subject, and all of them agree to some extent about the benefits of reviewing or inspecting code. From formal inspections to peer review or even pair programming, they all seem to agree that code review is a great technique. I totally agree with them, but normally the lack of proper tools makes reviewing code such a boring process that we try to avoid it by all means. Well, we tried to make it easier, faster and less boring with this new review tool. Of course, if you're looking for a great book focused on code inspections, take a look at Karl E. Wiegers' book. Or if you prefer a lighter but still effective read, I would go for the classic Code Complete.

    Ok, so, how do you launch the code review tool? Well, it is available from the changeset, branch and label views. From the changeset view you just have to select a changeset and click on compare changeset content.



    And the code review tool will let you scroll through all the changed files or directories.



    And the last (but not least) feature I'll be introducing today is the query system. Most of the new views are based on it, and it makes it easy to select only the information you're interested in.

    The sample below shows how to locate only the changesets created by user hank.
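
    For command-line fans, the same kind of filter can also be expressed through the query system with cm find; a rough sketch (the exact object and attribute names may differ between releases):

    $ cm find changesets "where owner = 'hank'"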



    Well, and that's all for today. There are many, many other features in this new Plastic release, and I'll be introducing them in the coming weeks. Meanwhile feel free to try the new release, and please let us know what you think about the new GUI.


    Plastic 2.0 preview is out!

    Thursday, January 03, 2008 · Pablo Santos · 0 Comments

    We've just released a first preview of the upcoming Plastic SCM 2.0! You can download it here for both Windows and Linux, including a VMware machine.

    We'll be posting much more about what's new in Plastic 2.0, but right now we're interested in your opinions about the new user interface. Many already say it is one of the most beautiful Linux GUI tools ever, but now we need to be sure it is also easier to use and more effective than the previous one.

    Enjoy!