Who we are

We are the developers of Plastic SCM, a full version control stack (not a Git variant). We work on the strongest branching and merging you can find, and a core that doesn't cringe with huge binaries and repos. We also develop the GUIs, mergetools and everything needed to give you the full version control stack.

If you want to give it a try, download it from here.

We also code SemanticMerge, and the gmaster Git client.

Agile Retrospectives (III): Activities (I)

Tuesday, September 28, 2010, by Luix

Activities to set the stage

Check-in

Ask a question and have everybody answer it in a couple of words. Example: how do you feel about the last sprint? Happy, angry, sad, hopeful...

Focus on / Focus off

Prepare interesting pairs of words or concepts, such as communication patterns and antipatterns. Form groups and have each group discuss a different pair.

ESVP

Each member anonymously writes down his/her attitude towards the retrospective, expressed as one of the following roles: Explorer (high interest), Shopper (interested, but let's finish as fast as possible, I have a lot of work to do!), Vacationer (I'm here because I have nothing better to do right now) or Prisoner (I'm here because I have to; I'd rather be working...).

Working agreements

Form groups. Each group suggests a list of ideas that could lead to effective behaviour and improved productivity. When everybody has finished, the lists are shared and the team chooses 3-7 suggestions. The team as a whole is responsible for making sure everyone commits to the selected ideas.

Activities to gather data

Timeline

In groups, identify the events and activities that were important during the last sprint: key moments, relevant moments, good moments and bad moments. In the end, gather all the data and draw a common timeline.

Triple Nickels

Form groups. Each member has five minutes to write down improvement actions on a piece of paper (say, a sticky note): feelings, events, important reactions and moments from the last sprint. Then each member passes the note to the person on his/her right, who adds to it. In the end, gather all the notes and share them with the whole team. The aim is to collect as many ideas as possible. If the team is smaller than 7-8 people, do it as a single group.

Color code dots

It's a good idea to combine this activity with Timeline. Once the timeline has been drawn, each member draws a red or green dot (depending on his/her general feelings) near each remarkable event in the timeline. In the end, analyze which activities worked best for the team and why.

Mad Sad Glad

Use it together with Timeline. Using colours, identify which periods of the timeline made people feel mad, sad or glad. Draw conclusions and analyze the good and bad moments.

Locate Strengths

In pairs, ask each other which were the most important moments of the sprint. The aim is not to argue, but to interview each other.

Satisfaction Histogram

Create a histogram that gathers every member's overall impression of each sprint. It's a good idea to define a scale, so that everyone picks the number that best reflects his/her impression. Analyze both the results for this sprint and the evolution of the histogram over time.
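As a sketch of what the tally can look like (the 1-5 scale and the sample scores below are made up, not prescribed by the activity):

```python
# Satisfaction histogram sketch: count how many members picked each score.
# Assumes a 1-5 scale; the scores are made-up sample data.
from collections import Counter

scores = [4, 3, 5, 4, 2, 4, 3, 5]  # one score per team member
histogram = Counter(scores)

for value in range(1, 6):
    print(f"{value}: {'#' * histogram[value]}")
```

Keeping each sprint's histogram around is what makes the "evolution" part of the activity possible: the team can compare sprint after sprint at a glance.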

Team radar

Establish a list of factors or objectives to analyze; every member rates each of them from the team's point of view: how good are we at this factor/activity? Discuss the results at the end.
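Tallying the radar is just averaging each member's rating per factor; a minimal sketch, where the factor names, the 0-10 scale and the scores are all invented for illustration:

```python
# Team radar sketch: average each member's rating per factor.
# Factors, the 0-10 scale and the scores are made up for illustration.
from statistics import mean

ratings = {
    "testing":       [7, 8, 6, 9],
    "communication": [5, 4, 6, 5],
    "code reviews":  [8, 8, 7, 9],
}

radar = {factor: mean(scores) for factor, scores in ratings.items()}
for factor, avg in sorted(radar.items(), key=lambda kv: kv[1]):
    print(f"{factor:14s} {avg:4.1f}")
```

Sorting by average puts the weakest factors first, which is usually where the discussion should start.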

Like to Like

Each member points out the events that went particularly well in his/her opinion. It is very important to consider everybody's opinion: each member makes a list with 3 things the team should stop doing, 3 things the team does well and must keep doing, and 3 things the team should start doing. Then the ideas are shared with the rest of the team one at a time; the rest of the people rate every idea using game cards (say, numbered or coloured cards). To speed things up, the last quality card uncovered is automatically removed. Take a couple of minutes to think about each idea before starting every round.

Upcoming…

Activities to generate insights, to decide what to do and to close the retrospective meeting.


Bibliography:

Esther Derby and Diana Larsen, Agile Retrospectives: Making Good Teams Great (Pragmatic Bookshelf, 2006)



GUI testing Plastic SCM

Friday, September 17, 2010, by Pablo Santos

Each time I have the chance to run the GUI test suite we’ve developed for Plastic SCM I find it amazing. The amount of work done by the team is simply overwhelming. It hasn’t been a weekend-long project, that’s pretty clear, but a continuous effort over the last years of development.
This video shows one of the tests checking the code review functionality under Windows XP (it's recorded in HD, so switch to full screen if you want).



Overall, the automated test suite can take almost 24 hours to run (as of release BL187.3). That doesn’t mean it is perfect, but it gives us a lot of advantages. The ones I consider most important are:


  • The basics are really well covered: sure, we can still break something, but the basic operation cycle is pretty well covered, so it’s very unlikely that we break something that prevents users from working.
  • As a consequence of the previous point, we’re pretty free to move fast and make a lot of changes, even deep modifications to the core. This is very good for overall product evolution and maintenance. If something is outdated, doesn’t look good anymore, or we have a better idea for implementing it, we just do it, because we know our test suite will cover the change and detect issues. This is extremely important for a codebase that will last for years: otherwise the code tends to get older and older, and you end up with few chances to make deep changes because they’re too risky, and the product can’t keep the pace anymore.

As a side note: the test suite doesn’t really take 24 hours of wall-clock time, because we split it into chunks and run them in parallel on different machines, so the real time is just a fraction of that. Run on a single machine, though, it would take very close to one day to finish.
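The splitting itself can be as simple as a round-robin distribution. This is only an illustrative sketch, not our actual scheduler; the test names, the suite size and the machine count below are all invented:

```python
# Round-robin split of a test suite across N machines (illustrative sketch).
def split_into_chunks(tests, n_machines):
    """Distribute tests across n_machines buckets, round-robin."""
    chunks = [[] for _ in range(n_machines)]
    for i, test in enumerate(tests):
        chunks[i % n_machines].append(test)
    return chunks

tests = [f"gui_test_{i:03d}" for i in range(240)]  # hypothetical suite
chunks = split_into_chunks(tests, n_machines=8)

# If the whole suite takes ~24 hours, 8 machines bring the wall-clock time
# down to roughly 24 / 8 = 3 hours, ignoring setup overhead and uneven tests.
longest = max(len(c) for c in chunks)
print(f"{len(chunks)} machines, {longest} tests on the busiest one")
```

Round-robin only balances well when tests have similar durations; with very uneven tests, sorting by historical run time before distributing gives tighter chunks.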

Different testing layers


How exactly do we test? I think I already covered it in previous blog posts, but basically we have three different testing tiers: unit tests, smoke tests and GUI tests, as the following picture shows:



The unit tests are regular NUnit tests. We have tests for both the client and the server and, like the other suites, the set is always growing. It is smaller than it should be, though. It covers core functionality at the method level.
The GUI test suite, the one I’m focusing on today, starts up the GUI client and a server and performs different actions on the GUI. Every test focuses on a different piece of functionality, and new tests are added when new features are developed or to cover specific bugs. The suite is thicker than it should be: first, it takes a long time to run, which is far from perfect, and second, it probably overlaps too much with the other suites. We’re working on making it thinner and more focused, in order to make it faster. The great thing is that the basic suite (which every developer runs, together with the NUnit and smoke suites, once a task is finished, to check the task is ok) is run during release testing on all the supported Windows flavors: W2K, W2K3, XP, Vista and W7, using different backends (SQL Server, MySQL and so on), service packs, .NET frameworks and so on. In the end it needs a lot of CPU time to finish (too much is not good), but it is able to give us a very good view of where the potential issues are, if any.
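The combinatorial growth of that release matrix is easy to see. A sketch using only the OS and backend axes explicitly named above (the real matrix also multiplies in service packs and .NET framework versions, which are left out here):

```python
# Release-testing matrix sketch: every Windows flavor times every backend.
# Only values named in the post are listed; the real matrix is larger.
from itertools import product

windows_flavors = ["W2K", "W2K3", "XP", "Vista", "W7"]
backends = ["SQL Server", "MySQL"]  # the post adds "and so on"

matrix = list(product(windows_flavors, backends))
print(f"{len(matrix)} configurations before service packs and .NET versions")
```

Each extra axis multiplies the count, which is why the CPU time adds up so quickly during release testing.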
Finally, the smoke tests are probably the most extensive ones. They’re similar in concept to the GUI ones, but they take advantage of PNUnit to automate the Plastic command line, focusing on the CLI instead of the GUI. They’re also thicker than they should be.
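In the same spirit (though not our actual PNUnit code), a CLI smoke test boils down to launching the client binary and asserting on its output. The `clitool` binary, its `status` subcommand and the `Workspace:` output format below are placeholders for illustration, not the real Plastic SCM `cm` commands:

```python
# CLI smoke-test sketch: run a hypothetical client binary and check output.
import subprocess

def parse_workspace(output):
    """Extract the workspace name from a hypothetical 'Workspace: name' line."""
    for line in output.splitlines():
        if line.startswith("Workspace:"):
            return line.split(":", 1)[1].strip()
    return None

def smoke_test_status():
    """Launch the (placeholder) CLI and assert it reports a workspace."""
    result = subprocess.run(["clitool", "status"],
                            capture_output=True, text=True)
    assert result.returncode == 0, result.stderr
    assert parse_workspace(result.stdout) is not None
```

Keeping the output parsing in a pure function means the slow subprocess part stays thin, and the parsing logic can be unit tested on its own.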
In the end, the testing pyramid should look like this:


But in our case it is a little bit inverted: the smoke and GUI suites are much bigger than the unit tests, both in coverage and in required run time.
The automated test suite gives us a lot of confidence and avoids lots of regressions, and we’re constantly working on making it better, so I expect it to look like the proper pyramid sooner rather than later…
Pablo Santos
I'm the CTO and Founder at Códice.
I've been leading Plastic SCM since 2005. My passion is helping teams work better through version control.
I had the opportunity to see teams from many different industries at work while I helped them improve their version control practices.
I really enjoy teaching (I've been a University professor for 6+ years) and sharing my experience in talks and articles.
And I love simple code. You can reach me at @psluaces.