Saturday, 9 February 2013

Back to Basics: One-to-Ones


One-to-ones for me are the number one way to build relationships with the members of my team.  Everybody is different; an individual.  We all respond to situations differently and we all have different motivations towards work.  It is therefore essential that we treat everybody as an individual and don't try to work with or manage them all in the same way.  To help us understand the individual: one-to-ones.

One-to-ones are also a great opportunity to get into the detail of your team's work or issues.  This can actually be a huge time saver for you as a manager, especially if you hold them weekly, as your team will be more inclined to wait for their one-to-one to discuss less urgent issues rather than requiring regular and immediate face time with you.

So how do we roll these out?

1. Do them weekly.
You cannot hope to build a relationship with somebody by spending 30 minutes with them just 12 times a year.  If this is a team that is new to you and you are concerned there will not be enough content to fill the 30 minutes (which incidentally is rarely the case; you are more likely to have the opposite problem), start with fortnightly meetings.  I do, however, encourage moving to weekly meetings as soon as you can.

2. Have a balanced agenda.
Aim to have half the time dedicated to them, the other half for you.  Don't be too concerned if they over-run slightly on their side.  Remember, it's possibly a lot easier for you to arrange further time with them than it is for them to arrange time with you.

3. Let them talk about what is important to them.
Preferably this will be about work, but if they want to spend a few minutes talking about their weekend, cars, cats etc., let them.  The primary purpose of this meeting is to build a relationship, and you can't build a relationship with somebody if you dismiss what interests them and what is important to them.  Believe me, if they are having issues with their work and they really want your direction, they will soon bring it up.

4. Let them go first.
If you don't, you will set the tone of the meeting from the outset and they may not open up thereafter.

5. Don't allow external interruptions.
If you do, you risk sending a negative message that your meeting with them isn't as important as the interruption.  You cannot hope to build a motivated team if you allow that perception to manifest.  It's therefore a good idea to turn off all phones, email and IM when entering these meetings.  If anybody attempts to interrupt in person, politely ask them to come see you afterwards.  You have no idea how much this point alone contributes to building a relationship with somebody until you have actually had to do it.  It's very motivating.  Of course, apply some common sense here.  If the building is on fire then you probably want to know about it.  Just avoid everything other than the most essential interruptions.  It's only for 30 minutes after all.

6. Never cancel a meeting.
Again, this sends the same negative message as when you accept interruptions.  If you genuinely can't make your normal scheduled appointment, reschedule immediately for another time in the same week.  This will at least reaffirm that meeting with them is important to you and it is simply a scheduling issue.

7. Do them in private.
You can't expect somebody to open up about delicate or confidential issues if there is a concern they may be overheard.  If you can't find a private location in the office, a public location can still be private.  Coffee houses, for example, can make excellent locations for one-to-ones in the absence of a meeting room back at the office.

8. Always start and finish on time.
This one is a general meeting rule and should be no different for one-to-ones.  It helps form a mutual respect for each other's time and provides coaching by example on this particular facet of meeting etiquette.

9. Determine who owns any resulting actions.
Just like any meeting, determine who is responsible for completing any actions and decide by what date they should be completed.

10. Follow up on any actions you received in the last meeting.
It is of course vital that your team has confidence in you as a leader.  If you yourself received any actions, follow through on them.  If you were unable to fulfil an action from the last meeting, be open and honest and add it to your list of actions for this week.  Try not to let this happen for too many consecutive weeks as this in itself sends an implicit message that their request isn't that important to you or the business.  If it was important enough to be added as an action to begin with, follow it through to completion.

That's it!  Just ten simple rules to help ensure the effective roll-out of one-to-ones within your team.  What else do you do to ensure these meetings are effective and run smoothly?

Tuesday, 22 January 2013

Improving estimates with Planning Poker


I once overheard a comment regarding a last minute request to estimate a piece of work in an area of the code base the developer knew nothing about: "Whatever you estimate, you'll be held to it."  It's no wonder developers are often reluctant to provide estimates when there is an unrealistic expectation that estimates are always spot on.  An estimate, by its very definition, is an approximation, and we should therefore anticipate and plan for the possibility that it will be incorrect.  What we can do, however, is employ techniques that improve the quality and accuracy of our estimates so that we can have a higher level of confidence in them.  One such technique is "Planning Poker".

Let's say we've been asked to estimate a particular user story in a large area of the system.  Different developers have had different experiences within this area, so their opinions of the complexity of implementing the user story differ.  Planning poker allows us to surface those differences and provides a forum for short, focused discussion, by the end of which we should reach a more accurate estimate that everybody buys into.  Let's play a short hand to see this by example.

We have five stakeholders playing: two DBAs and three developers.  They all hold the following standard hand, which contains numbers from the Fibonacci sequence.

[Image: the Planning Poker card deck, courtesy of Mountain Goat Software]
We have been asked to estimate two user stories, and we'll play a hand for user story 1 first.  Each stakeholder selects a card that represents their estimate of that story.  After the first round we see the following cards: two 1s, two 2s and a 3.  As all the estimates were relatively close, with no wildly differing opinions, the group decides to take the (roughly) average estimate of 2 and move on.

Next we look at the second user story and get the following hand: one 3, three 8s and a 13.  This time opinion is quite split, so it is now up to the lowest and highest scoring stakeholders to justify their estimates.  Once each has presented, we play another hand, this time considering what we have just heard.  We repeat this cycle until we reach a hand like the first, where the estimates are close enough that the group feels comfortable taking the average, if not the same estimate across the board.
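
To make the mechanics concrete, here is a minimal sketch of that consensus loop in C#.  It is purely illustrative: the deck values are the standard planning poker cards, but the "close enough" spread rule and all of the names are my own assumptions rather than part of any real tool.

using System;
using System.Collections.Generic;
using System.Linq;

class PlanningPokerRound
{
    // The card values found on the standard planning poker decks.
    static readonly int[] Deck = { 0, 1, 2, 3, 5, 8, 13, 20, 40, 100 };

    // Keep playing hands until the spread between the lowest and highest
    // card is small enough for the group to settle on a single value.
    static int Estimate(Func<IReadOnlyList<int>> playHand)
    {
        while (true)
        {
            var hand = playHand();                // one card per stakeholder
            if (hand.Max() - hand.Min() <= 2)     // "close enough" (assumed rule)
                return Deck.First(card => card >= hand.Average());
            // Otherwise the lowest and highest estimators justify their cards
            // and the delegate is asked to play another hand.
        }
    }

    static void Main()
    {
        // The first hand from the example above: two 1s, two 2s and a 3 converges on 2.
        Console.WriteLine(Estimate(() => new[] { 1, 1, 2, 2, 3 }));
    }
}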

As you can see, planning poker has the potential to be an effective tool for collaborative estimation.  It won't guarantee that every estimate is perfect, but it will improve their accuracy and reliability and give you every possible chance of avoiding those unfortunate occasions where an estimate is wildly wrong.

Friday, 4 January 2013

The var keyword in C#. What's in your coding standards?

So this was a subject that led to much debate in the office this week.  When is it acceptable to use the var keyword in C#?  Well, let's start with the obvious situation where everybody agrees it is necessary and we have no other choice: anonymous types.

Take the following example, where we have a LINQ projection to an anonymous type...

var employeeDetails = from employee in employees
                      where employee.County == "Kent"
                      select new { employee.ForeName, employee.Surname };

foreach (var item in employeeDetails)
{
    Console.WriteLine("Name={0} {1}", item.ForeName, item.Surname);
}


var is used twice here.  Once to hold a sequence of anonymous types, then again to iterate through the sequence of anonymous types; but where else is it appropriate for use?  Well actually, most people agreed that it shouldn't be used anywhere else.  It certainly shouldn't be used where it would cause ambiguity, like in the following example... 

var products = _repository.GetProductsByCategory(ProductCategory.Bike);

What is the type of the variable products in the example above?  It's not immediately obvious and requires you to drill down into the method GetProductsByCategory to determine the return type.  Instead, the use of an explicitly typed variable is preferred.  Here is the same example typed correctly. 

IList<IProduct> products = _repository.GetProductsByCategory(ProductCategory.Bike);

Much better.  Also, I don't think var should be used where the type is inferred from a literal on the right side of the assignment, i.e...

var age = 31;

Sure, we know the inferred type of the variable age through our own experience, but what about junior members of your team?  Will they know?  Perhaps this was written by a junior.  Did they know the type they would get when they wrote it?  Were they expecting something else?  Its use here raises too many questions and is best avoided.
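
For comparison, the explicitly typed equivalents leave nothing to guess at, which matters most with numeric literals where the inferred type may not be the one the author intended.  A small, hypothetical illustration...

int age = 31;          // unambiguous
var price = 31;        // inferred as int - was a decimal intended here?
decimal total = 31;    // explicit, so the literal is treated as a decimal as intended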

I think the biggest split in opinion was around using var when instantiating an object.  To be honest, I've always found C# rather verbose when it comes to this.  For example...

List<Bike> bikes = new List<Bike>();

Isn't there quite a bit of redundancy there?  We explicitly type the variable bikes, then we immediately instantiate it, specifying the type a second time.  Personally, I fall into the camp that says if the type is specified on the right side of the assignment, use the more concise syntax of...

var bikes = new List<BikeDTO>();

To me, that reads much better.  It's concise and there isn't any ambiguity over the type.  This point, however, seems to split the room, and there seems to be no way to convince either side otherwise.  I guess this one comes down to what you are used to and what other languages you have been exposed to.  The equivalent verbose declaration in VB, for example, would look something like...

Dim bikes As List(Of BikeDTO) = New List(Of BikeDTO)()

However, nobody would ever use that syntax.  Instead, the norm would be to use the more concise form of...

Dim bikes = New List(Of BikeDTO)()

...and move on.

Ultimately, the standard you choose should be consistent and shouldn't cause the developer to think too much, either at the point of variable declaration or at the point of variable discovery.  It's important to settle on something that everybody will buy into and adhere to.  I think it's impossible to please everybody on this one, so if permitting its use for the sake of reducing redundancy is going to cause debate and inconsistency within your code base, perhaps go with the safest option and don't permit it.

What do your coding standards say about the use of var?

Thursday, 20 December 2012

Historical Debugging (Part 3) - IntelliTrace

When I first saw IntelliTrace introduced in Visual Studio 2010, I was initially disappointed and passed it by.  Sure, it was pretty cool to make it part of your ALM ecosystem.  For example, whenever a test within a Microsoft Test Manager session or a unit test during a CI build failed, an iTrace file could be stored on the TFS server against the failure and pulled down by a developer to investigate.  For me though, this wasn't where the functionality had the most potential benefit.  With internal testing, we already know the boundaries and the parameters that cause a test to fail; they are written right there in the test.  The real benefit for me is in identifying the unknown cause of a problem within a production environment, and this is exactly what Microsoft gave us with Visual Studio 2012 and the new IntelliTrace Collector.

So what is IntelliTrace?  Well, it's like memory dump analysis on steroids.  Unlike dump analysis, where you can only see the state for frames within the stack, i.e. method entry and exit points, IntelliTrace can be configured to collect data at far greater verbosity, right down to the individual call level.  This provides a debugging experience much closer to that of live debugging, except that the state information has already been collected and you can't purposely, or indeed accidentally, modify it.

Another great benefit of IntelliTrace, when you have it configured to do so, is the ability to collect gesture information.  This means you no longer need to ask the tester or client for reproduction steps as all the information is right there within the .iTrace file.

IntelliTrace can also be incredibly useful for debugging applications that have a high cyclomatic complexity due to how configurable the product is for each client.  No longer do you need to spend an enormous amount of time and expense trying to set up an environment that matches the client's configuration, if that is even possible.  As we are performing historical debugging against an iTrace file that has already collected the information pertaining to the bug, we can jump straight in and do what we do best.

One of the biggest criticisms of IntelliTrace, however, is the initial cost.  The ability to debug against a collected iTrace file is only available in the Ultimate edition of Visual Studio.  True, this is an expensive product, but I don't think it takes long before you see a full return on that investment and indeed become better off, both financially and through a strengthened reputation for responding quickly to difficult production issues.  At the very least, I think it makes perfect economic sense to kit out your customer-facing maintenance teams with this feature.

Now that IntelliTrace can be used to step through production issues, it has become a formidable tool for the maintenance engineer.  Alongside dump file analysis, we have the potential to make the "no repro" issue a thing of the past, only rearing its head in non-deterministic or load-based issues.

Historical Debugging (Part 2) - Visualised Dump File Analysis

As I mentioned previously, "live debugging", i.e. debugging a system as you press the buttons and pull the levers, doesn't always allow you to reproduce the issue.  Much time can be wasted building what you believe to be a representative environment, only to find the issue still doesn't occur.  This is where "historical debugging" has so many advantages.

One of my favourite historical debugging additions to Visual Studio 2010 was the ability to debug against a user-mode memory dump.  These dump files have always been creatable using a tool such as ADPlus, WinDbg or, more recently, ProcDump, but since the launch of Windows Vista/Server 2008 their creation has been even easier.  Simply right-click on the process in Windows Task Manager and select the "Create dump file" option from the context menu.  The dump file is then placed in your %temp% directory, i.e. {SystemDrive}:\Users\{username}\AppData\Local\Temp.

Note that you can only use the above method if your application is running in the same address space as the underlying OS architecture.  If you attempt to dump a 32-bit application running on 64-bit Windows, you'll just be dumping the WoW64 process that your application is running within.  In this case, use one of the aforementioned utilities to create the dump.
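
If you're unsure which situation you are in, the process can tell you itself.  Here is a small sketch using the standard Environment properties available since .NET 4.0 (the class and variable names are mine)...

using System;

class BitnessCheck
{
    static void Main()
    {
        // True when a 32-bit process is running under WoW64 on 64-bit Windows.
        // In that case, prefer ADPlus, WinDbg or ProcDump over Task Manager.
        bool runningUnderWow64 = Environment.Is64BitOperatingSystem
                                 && !Environment.Is64BitProcess;
        Console.WriteLine(runningUnderWow64);
    }
}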

Once you have a *.dmp file, and as long as you have the associated PDBs for the process you dumped, you can simply drag it into your IDE and begin a historical analysis of the process as of the moment the snapshot was taken.  You'll be able to see the call stack of each thread in the dump and switch to any frame within each stack, so that you can interrogate and analyse the variables within the context of the source code and debugger visualisation tools you are already familiar with.  This is a vast improvement over the CLI-based tools, such as WinDbg with the SOS extensions, used up until this point.

So let's see this in action.  I have put together a simple WinForms application and put something in there to cause it to hang.
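
Something along these lines is enough to block the UI thread and produce the hang (an illustrative sketch rather than the actual application)...

using System;
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        var button = new Button { Text = "Hang" };
        // Spinning forever on the UI thread stops the message pump,
        // so Windows reports the application as "Not Responding".
        button.Click += (sender, e) => { while (true) { } };
        Controls.Add(button);
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new MainForm());
    }
}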


Step 1: Run the application and trigger the action that will cause it to hang.


Step 2: Open Task Manager and observe the application is reporting as not responding.


Step 3: Right click on the process and select "Create dump file".


Step 4: Drag the created .dmp file into Visual Studio and perform the action "Debug with Mixed".


Step 5: Use the debugger tools to perform a historical analysis of the variables.


As you can see from the last screenshot, the hang was simply caused by an infinite loop, and Visual Studio took us straight to it because that's the point in the code the process had reached when we created the memory dump.  This strategy is just as applicable to thread contention issues, or indeed any other issue where analysis of a running production environment is required.

Dump file analysis used to be hard going and was often avoided.  Now that we have the ability to import these files into Visual Studio and interrogate the state using the tools we are already familiar with, it once again becomes a very valuable and powerful technique which can save an enormous amount of time and money diagnosing those "no repro" issues.

Friday, 14 December 2012

Historical Debugging (Part 1) - The end of the "No repro" problem?

If you think about it, with all the advancements that have been made within the software engineering industry over the last 40 years, debugging tools and strategies have not really advanced at the same pace.  Sure, parallelism has received a lot of focus in recent years, and as such we have seen new debugging visualisers like the "Parallel Stacks" and "Parallel Tasks" windows in Visual Studio 2010 and the new "Parallel Watch" window in Visual Studio 2012, but by and large debugging strategies have remained the same: examine a stack trace, place some breakpoints in the associated source and try to reproduce the issue.

How many times have we, or our teams, tried to reproduce an issue using the reproduction steps provided by QA, or worse still the client, and not been able to do so?  More times than I care to remember.  The problem becomes even harder to diagnose when the issue is data driven or caused by a data inconsistency and requires setting up an environment that closely represents the environment at fault.  We shouldn't be asking our clients for copies of their production data, especially if it contains sensitive customer or patient information.  So how do we debug issues we can't reproduce in a controlled development environment?  The answer: "historical debugging".

Historical debugging is a strategy whereby a faulting application can be debugged after the fact, using data collected at the time the application was at fault.  This strategy negates the need to spend time and money configuring an environment and reproducing the issue in order to perform traditional or "live" debugging.  Instead, the collected information allows us to dive straight into the state of the application and debug the issue as if we had reproduced it ourselves in our own development environment.

There have been two major Visual Studio advancements in historical debugging over the last few years: "Visualised Dump File Analysis" and "IntelliTrace".  Both have made debugging production issues simpler than ever for maintenance and development teams.  Do they represent the end of the "no repro" issue?  Between them, quite possibly, and I'll discuss each in turn in the following two posts.

Monday, 26 November 2012

Using metrics to improve product quality

Metrics. Managers love them. Developers hate them. But why is that? In a word, "trust". If we don't make the intention of these metrics clear from the beginning, distrust and resentment will set in and they will have a counter-productive influence as developers begin to look for ways to "beat the system". The funny thing is, whether you're a manager or a developer, we all have one thing in common: we all want to improve the quality of the product, and metrics, when applied correctly, help us all to achieve that. So how do we make metrics work?
  • Be transparent. Ensure you make clear the purpose of these metrics is to improve the quality of the product. Not to beat developers with a stick.
  • Focus on product quality, not individual performance. We all share that common goal, so establish and capitalise on that alignment of interest. Do measure things like cyclomatic complexity and afferent/efferent coupling (see the sketch after this list). Don't measure "most check-ins a week" or "most lines of code".
  • Create buy-in. Don't force these down. Identify meaningful metrics by utilising the experiences of those closest to the issues.
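
As a rough illustration of the kind of thing a metric like cyclomatic complexity measures, consider the hypothetical method below. Its complexity is five (the single straight-through path plus four decision points), and a rising trend in numbers like that is an impersonal, product-focused signal that code is becoming harder to test and maintain.

using System;
using System.Collections.Generic;

// Hypothetical types, purely for illustration.
class Order
{
    public decimal Total;
    public bool IsTradeAccount;
    public List<string> Items = new List<string>();
}

static class DiscountCalculator
{
    // Cyclomatic complexity of 5: one straight-through path plus the four
    // 'if' decision points below.
    public static decimal CalculateDiscount(Order order)
    {
        if (order == null) throw new ArgumentNullException("order");
        if (order.Total > 1000m) return 0.10m;
        if (order.IsTradeAccount) return 0.05m;
        if (order.Items.Count > 20) return 0.03m;
        return 0m;
    }
}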
If we focus on the wrong metrics, we risk creating a culture where we write code whose purpose is simply to fool a rule. This is entirely different from writing clean, maintainable code, which is where the real benefit lies. Worse still, if metrics are used incorrectly as developer KPIs, no developer is ever going to undertake that much-needed refactor of difficult-to-maintain code for fear that it will hurt their "score" or "ranking". When we get to that point, it's the beginning of the end.

What other ways can you ensure metrics succeed within your team?  What metrics do you capture?