
This seems like a stupid thing for me to be struggling with after a dozen years in software development.

I need to do a better job measuring performance, and I feel like I can't even describe performance right now.

I'm a software development manager in a small software development shop.

We have two challenges, like most organizations. The first is how quickly we can get features implemented in our software application. The second is how easily we can deal with diversions like customer support issues, non-feature-oriented software changes (infrastructure, documentation, and testing), and bugs.

At the core of this, I keep coming back to the idea that we should focus on performance and results rather than just following a procedure.

At the moment, when I think of performance I think of responding relatively quickly to customer needs, reducing the churn spent on rework, and setting a reasonable pace for feature development so that bugs and other defects don't creep into our process.

But I'm having a tough time communicating to my directs and to my manager what kinds of goals/objectives our team should have and how we can turn them into measurable results.

Help!

awalters

Greg -

I'm in a similar situation. I just listened to the 'How to Set Annual Goals' podcasts and am trying to apply the concepts to software development. Some ideas:

I can handle responsiveness to bugs with a few metrics. For example, my organization can report on average length of time to close a bug report. And as Mike said in the podcast, it is easier to use an existing metric than to create a new one.
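For what it's worth, here's a rough sketch of that calculation in Python, assuming you can export bug reports to a CSV with opened/closed timestamps -- the file name and column names are just placeholders for whatever your tracker produces:

[code]
# Average time-to-close from a bug tracker export (placeholder file/columns).
import csv
from datetime import datetime

def avg_days_to_close(path):
    durations = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["closed"]:          # skip bugs that are still open
                continue
            opened = datetime.fromisoformat(row["opened"])
            closed = datetime.fromisoformat(row["closed"])
            durations.append((closed - opened).days)
    return sum(durations) / len(durations) if durations else 0.0

print(f"Average days to close: {avg_days_to_close('bugs.csv'):.1f}")
[/code]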

Another way to measure responsiveness might be to conduct surveys. On a scale of 1-10, how well do the customer support, documentation and testing groups think my team is supporting them?

For new development, I struggle. We have a process that calls for design reviews, for example, but they are often skipped. I've had more than one developer ask me "do you want me to spend time in a design review, or do you want me to get the job done?" But design reviews are part of our process because we believe they allow us to produce higher quality software with less rework. So maybe I can measure how well the process was followed, using that measure as one proxy for the quality of our deliverables.

If your process isn't meaningful to you or your team, then you have a different problem to deal with. If you believe in the process, you can use O3's and feedback to align behaviors with the process. Or, if you have ownership of the process, you can brainstorm with your team to arrive at a process that has their buy-in - and then you reinforce it through O3's and feedback.

The biggest problem I face is time estimation. We break assignments down into tasks, estimate the time required, and never meet those estimates. Support and bugs can be somewhat unpredictable, but even accounting for those we're never close on our milestones, and that has a big negative impact on the company's overall delivery schedule. I think I can write a goal around meeting estimates, and use the advice from the 'Develop a Sense of Urgency in Your Team' podcasts, but I'm curious to know if anyone has additional advice.
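For concreteness, here's a rough sketch of the estimate-vs.-actual arithmetic with made-up numbers -- in practice the pairs would come from whatever tracker holds your task estimates and actuals:

[code]
# How far off are our estimates, per task and overall?
# Each tuple is (estimated_hours, actual_hours) -- made-up numbers.
tasks = [(8, 13), (4, 4), (16, 30), (2, 5)]

ratios = [actual / estimate for estimate, actual in tasks]
overall = sum(actual for _, actual in tasks) / sum(est for est, _ in tasks)

print("Per-task actual/estimate ratios:", [round(r, 2) for r in ratios])
print(f"Overall schedule slip factor: {overall:.2f}x")
[/code]

A goal could then be written against that overall ratio, e.g. bringing it down toward 1.0 over a few releases.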

drinkcoffee

Greg --

I feel your pain. I too am a software development manager in a small company. I spend a lot of time thinking about what I should be measuring and what I can do to improve, and this is what I've come up with -- maybe you're already doing some of these, maybe not:

1. Track how much time is spent on new feature development and bug fixes vs. handling support, infrastructure, and other non-development issues. Since it is your job to write quality software (right?), that's what you should be spending the most time on. PLUS -- context switching (going from developing some cool new feature to getting pulled into a support issue) is a widely known productivity drain. Start measuring that and come up with target percentages: something like everyone spending 50% of their time on product development, 10% on support, 20% on overhead, etc. (there's a rough sketch of the arithmetic after this list).

2. Come up with some measurements of quality, like the number of automated unit tests, unit test coverage, the number of unit tests passing in the nightly build, cyclomatic complexity, dependencies, etc. These are great numbers to have and useful proxies for software quality.

3. Depending on your methodology, you probably have some kind of feature backlog that you're tracking per release (in Scrum, this would be user stories per sprint). Measure that!

4. How many bugs do you still have open at the point of release? How many are show-stoppers vs. trivial? How many were reopened, and was it because of a misunderstanding of the requirements or because the fix didn't actually work? I think there are some CMMI measurements out there that will help -- defects per thousand lines of code is one example (see the sketch after this list).

5. Standards adherence -- come up with a document detailing your coding standards and have everyone follow it. Do regular code reviews and offer feedback. Measure everyone's compliance.
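To make #1 and #4 a bit more concrete, here's a rough sketch with made-up numbers -- the categories, targets, and counts are placeholders for whatever your time-tracking and bug systems actually report:

[code]
# Time split vs. target percentages, plus defect density (all numbers made up).
hours = {"features": 62, "support": 25, "overhead": 33}        # hours this month
targets = {"features": 0.50, "support": 0.10, "overhead": 0.20}

total = sum(hours.values())
for category, spent in hours.items():
    print(f"{category}: {spent / total:.0%} of time (target {targets[category]:.0%})")

open_defects_at_release = 42      # from the bug tracker
lines_of_code = 180_000           # e.g. from a line-counting tool
print(f"Defect density: {open_defects_at_release / (lines_of_code / 1000):.2f} per KLOC")
[/code]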

These are all behaviors. They give you data for offering feedback and doing coaching. Performance is aggregated behavior. Once you start seeing trends (hopefully upward), this is performance that nobody can argue with.

I hope this is helpful. Best of luck!

Regards,
Bill

akinsgre

I've got little to work with right now. There were no processes or measurements before I got here.

I'm starting to keep track of defect counts, and have implemented a "story board" to track workload (not time-boxing yet).

I've had some initial talks about simple metrics like "Days since a customer feature was implemented in QA" and "Days since a customer-reported defect".

Hoping that helps get everyone focused on performance and makes the trade-offs visible.
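A rough sketch of the date math behind those "days since" metrics, with placeholder dates (they'd really come from the issue tracker):

[code]
# "Days since" metrics computed from the most recent events (placeholder dates).
from datetime import date

last_feature_in_qa = date(2010, 3, 1)       # most recent feature promoted to QA
last_customer_defect = date(2010, 3, 10)    # most recent customer-reported defect

today = date.today()
print("Days since a customer feature reached QA:", (today - last_feature_in_qa).days)
print("Days since a customer-reported defect:", (today - last_customer_defect).days)
[/code]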

The point about task-switching is well-taken. I'd just had that discussion with the rest of the management team; might take some repetitions before that sinks in.

akinsgre

[quote="awalters"]Greg -

I'm in a similar situation....[/quote]

Andy - I sent you a PM.