Fit for Purpose

What do we mean when we say software quality? How can we think about it effectively and practically? Quality is an elusive topic, and so is product quality for that matter. I have caught myself saying things like "we are not compromising on quality" and, when pressed to describe what quality I meant, what exactly it was that we were not to compromise on, I found myself in real difficulties.

I am not writing in this blog post about the true nature of quality or other philosophical disquisitions; I am focusing on a practical approach that enables a team to make decisions and move forward in their product development journey.

A typical scenario for such an exploration is receiving feedback describing our product not behaving the way customers expect it to. Is the customer describing a defect in the system or requesting new functionality? Depending on the context of our product development efforts, the question may have practical implications: How fast should we react? Should our customer support team work on this? Does the solution get released in the next version of the product or as a hot-fix to the previous version?

A traditional way to answer that question has been to refer to the specification of the product: if the behavior described by the customer does not comply with our specification it is a defect, otherwise it is just an additional requirement. That is a useful distinction sometimes, as it helps us discern whether our design team foresaw a particular aspect of the solution, but it is not very helpful when it comes to understanding whether we are serving our customers well. In particular, there are many cases in which the customer is trying to do something which is reasonable, useful, even the obvious thing to do, but which we did not imagine when writing the specification. Sometimes our specs are completely right in reflecting our wrong assumptions about what the customers want.

For a product development organization there is a line of questioning which in my opinion is more useful: Is the product we have released to the market making it possible for our users and customers to achieve what they expect to be able to do with it? Are we satisfying our customers? That can help us identify quality, but those questions are difficult to answer. Customer feedback is always telling us something important, but sometimes we need help understanding what it means precisely.

Fit for purpose is a thinking razor that helps us move in the right direction. These are some of the lines of questioning I use to facilitate the team's inquiry into the products we create.

  • Is the usage the customer is describing covered by our goals when we released the product?
  • Is the customer in a demographic that we are not focusing on?
  • If the problematic case was not to be covered by our current version, does the product still achieve something meaningful for its intended users?
  • Does the product resolve a problem for the user from beginning to end?
  • Can we consider the missing functionality as an add-on or is it part of what the current release of the product is aimed at solving?
  • Does it sound like something is broken or forgotten? Are we comfortable stating that the requested function is something our product does not do yet?

Let me give an example (I know, I know, I should have included one earlier…): Melomaniac Bit has just released the second version of RareRecordsHoarder (RRH), a great piece of software that allows record collectors to catalog any vinyl record, using the data collected by the company about almost any vinyl pressing out there. One of the most applauded features in version 1 was the capability to scan a barcode to add any modern vinyl record. For older editions the user can type a few details and the matches in the database will be offered as options; when everything else fails, all details can be added manually and the user will be asked for permission to contribute those details to the shared database. After all the hard work the reward is superb: you can brag on Facebook about your collection and share as much or as little as you prefer. I know… record porn sharing. Version 2 introduced a great new feature the team expects will double their customer base: using the user's self-appraised state of each owned vinyl pressing and a database of vinyl sales, the program calculates the estimated value of the user's collection.

Immediately after releasing RRH v2 the team received feedback from Sheila, an upset customer. She had just bought RRH v2 and it still does not allow her to get information from the database for her shellac 78 RPM records; what kind of an upgrade was that?
Tanner comments that he owns two copies of the same rare 50s single, both in mint condition; one of the pressings has an (accidental) tan coloration and the other is just plain black. The two copies are not valued the same: the rare colored one sells for 50 times the price of the black one.

Let's analyze the two situations. Sheila is trying to catalog her 78s, and even though shellac records are an expansion area for the future, the current target is vinyl only, which according to the team's research accounts for 85% of the record collectors market. The team decides to explain that to Sheila, reinforcing how much they value her business. Tanner is really trying to achieve what the team attempted to release; they even think that without an accurate estimate of the value of the collection there is no market for v2. The team never thought about that, and those are unique cases… adding a new field will not do it, as each unique record is unique for its own reason… Panic! The team knows their software is not fit for purpose; it is a defective product. Suddenly Marie, kind of a quiet genius, says: "What if, when there is such a disparity in prices, we simply ask the user which one is theirs? They are self-appraising their collection after all. That is easy to implement and it can be online in two days. The data is ready for it too." Saved. Email to Tanner, solution in two days. (I know, I could not resist the simplistic happy ending…)
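For the programmers in the room, Marie's fix is small enough to sketch. Here is a minimal, hypothetical take in Java; RRH's real data model, the price source and the 10x threshold are all my inventions. The idea is simply that a wide spread in the known sale prices for a pressing is the trigger to ask the owner which variant they have, instead of guessing.

```java
import java.util.Collections;
import java.util.List;

public class PressingValuation {

    // Hypothetical threshold: when the most expensive known sale is more
    // than 10x the cheapest, variants of this pressing differ enough that
    // we should ask the owner which variant they have.
    private static final double DISPARITY_RATIO = 10.0;

    static boolean shouldAskUserWhichVariant(List<Double> knownSalePrices) {
        if (knownSalePrices.size() < 2) {
            return false; // nothing to disambiguate
        }
        double min = Collections.min(knownSalePrices);
        double max = Collections.max(knownSalePrices);
        return min > 0 && max / min > DISPARITY_RATIO;
    }

    public static void main(String[] args) {
        // Tanner's single: the tan mispress sells for 50x the black one.
        System.out.println(shouldAskUserWhichVariant(List.of(10.0, 500.0))); // true
        System.out.println(shouldAskUserWhichVariant(List.of(10.0, 12.0)));  // false
    }
}
```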

What about your team? How do you look at the quality of your product? Do you find fit for purpose a useful concept? How do you explore it?

Drucker, Agile and the Management Revolution

I picked up Drucker's The Effective Executive again a few days back, with the purpose of casually re-reading just a few pages, and this time I got hooked. I have been thinking a lot about what "management" means today in practical terms, in a business context influenced by Lean, Agile and many other paradigm-changing theories. The Drucker of the 1960s had something to say about it.

In Chapter 3, "What Can I Contribute?", Drucker writes:

The man who focuses on efforts and who stresses his downward authority is a subordinate no matter how exalted his title and rank. But the man who focuses on contribution and who takes responsibility for the results, no matter how junior, is in the most literal sense of the phrase, “top management.” He holds himself accountable for the performance of the whole.

That simple phrase hit home. Drucker most likely never intended to define management with it, but just to express that the highest functions of management can happen at any level. Hierarchy cannot and does not dictate where top management happens.

The Agile Manifesto seems to go in this direction when it articulates its focus on people and their interactions, building projects around motivated individuals and relying on self-organization to let the best architectures and designs evolve. Of course the manifesto is focused on software development and not on general management, but there is in it, or at least in my reading of it, a similar sense of direction: moving management down to each motivated individual.

The concept of "respect for people (humanity)" is key in the Toyota Production System. That includes not only treating people well, but a deeper meaning: challenging people to perform at their best, think at their best, engaging them in problem solving and teaching them to see the whole system, transforming by this act any blue-collar worker into a knowledge worker. By Drucker's phrase, that assigns top management to each assembly-line worker. Lean implementations in Western companies, evolved from the TPS, include practices to make each worker in a business process or value stream a participant in the governance and improvement of said process. Holacracy does a similar thing with its governance circles.

There seems to be a tendency to push "top management", as Drucker used the term, down to the lowest level of the knowledge workers. Is there such a trend? If so, what does it imply for management? There have been a few proposals to articulate answers to the question of what management means in the twenty-first century, each putting emphasis on different aspects of it, like Jurgen Appelo's Management 3.0 and Steve Denning's Radical Management. Denning went as far as describing a new management canon being created just now. The topic is far from exhausted, and I feel it is extraordinarily important for all of us to be a part of it as knowledge workers, that is, as explained above, as management practitioners or, as Drucker wrote, as executives. It may well be a revolution of the largest magnitude, transitioning the way we organize ourselves in businesses, governments and associations from the industrial age into the information age, into the age of the creative economy.

High Touch Retrospectives for Distributed Teams Using Trello

Some of us in the Agile community are not fortunate enough to work within co-located teams all the time (is it most of us already?). Some of us may spend a significant part, or even all, of our effort as part of geographically distributed or dispersed teams. Just one of the myriad difficulties when working in such teams is replicating the high-touch techniques that increase participation and collaboration in team events, such as planning and retrospectives. Recently, plenty of online tools have reached maturity, allowing teams to collaborate in real time in simple and effective new ways, that is, closer to the same-room experience.

This post describes in detail a real-life example of using Trello to run a retrospective. Please note that no technique or tool is universally applicable, and these are no exception. You will need to check the circumstances and forces influencing your own problem before applying any specific tool to solve it. Expect some storytelling ahead.
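As a taste of the mechanics, here is a minimal sketch of preparing a retrospective board through Trello's classic v1 REST API, in Java. The board name, the three columns and the key/token placeholders are all assumptions of mine, and a real script would parse the JSON response properly instead of using a regex.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RetroBoardSetup {

    // Personal API credentials (placeholders) from https://trello.com/app-key
    static final String KEY = "YOUR_KEY";
    static final String TOKEN = "YOUR_TOKEN";

    static final HttpClient HTTP = HttpClient.newHttpClient();

    static String post(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Create the board; defaultLists=false so we can add our own columns.
        String board = post("https://api.trello.com/1/boards/?name=" + enc("Sprint 12 Retro")
                + "&defaultLists=false&key=" + KEY + "&token=" + TOKEN);

        // Crude id extraction; use a JSON library in real code.
        Matcher m = Pattern.compile("\"id\":\"(\\w+)\"").matcher(board);
        if (!m.find()) throw new IllegalStateException("no board id in: " + board);
        String boardId = m.group(1);

        // One list per retrospective column, ready for the team to fill in.
        for (String column : List.of("Went well", "To improve", "Actions")) {
            post("https://api.trello.com/1/lists?name=" + enc(column)
                    + "&idBoard=" + boardId + "&pos=bottom&key=" + KEY + "&token=" + TOKEN);
        }
    }
}
```

Preparing the board ahead of time keeps the session itself about the conversation, not the tooling.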

Continue Reading

Finding Done

How can a facilitator help a team find what done means for them, considering their environment and their work? How do I facilitate Definition of Done (DoD) discovery workshops? What you can find below is my current standard practice. There might be better options for your teams, and I'm sure I will keep evolving it and will uncover a better way, but it has worked for me a few times and I don't know better as of today. I hope it can be helpful to some of you.

How to Run a Definition of Done Workshop

Continue Reading

FlowchainSensei, Deming and the Coach’s Role

Bob Marshall, the FlowchainSensei, has written another interesting post today. Not that this is surprising, as I really like his blog a lot, but this one's title contains two words I cherish: Coaching and Deming.

Thanks, Bob, for bringing Deming into the conversation about coaching. His work changed, and is still changing, how I see mine. I can understand how revolutionary (even weird) it was to the Japanese leaders who were listening to him decades ago. It still is.

Let me try to define how I see Agile coaching: helping teams, the individuals that form them, and the organizations and individuals that interact with them to improve the system we all form together, so we can stay in business longer and produce better results for everyone involved.

It's admittedly still green, and the idea is not new at all. The simplest proof is that there have been a number of ways to refer to the changes required in the system beyond the team's autonomy, like _organizational impediments_. I am trying to articulate it well and let it drive my work as a coach.
A practical implication of this mindset when coaching is that I ask teams not to focus on improving the personal 5% span of influence, or even the team's x%, but to look at the overall system we form and identify the factor with the biggest impact on our work (it may be impacting the value we can produce, our capability to produce, or our wellness as teams or individuals). Once it is identified, we should establish a theory of how that factor works, i.e. how it influences the overall system. Then we put it to the test in some small experiments and see if we can improve the overall state of our system. If the theory doesn't hold… well, you know the method.

As already mentioned by Bob in his answer to a comment, the coach is one more part of the system and as such cannot be isolated from it when establishing a working theory. We are part of the system we want to improve, and that is just one more reason not to buy the metaphor of an engineer improving an external process. A more organic metaphor may be more useful, such as a network of relationships (i.e. a family) in which you are one of the parties. If you want to help your family, or marriage, or team improve, you have to keep in mind that you are still an interested party.

Agile Test Expertise Roadmap

My Intent

I have been asked by a friend and colleague to help her define the transformation plan for the team of testers in a software development organization. The goal is to help the testers learn agile testing techniques, so they can better help the agile development teams they belong to. As generalizing specialists, the organization wants them to be the agile testing experts within their teams.
I am by no means an agile testing guru, even though I have done my fair share of testing in agile teams, but I have seen a few repeating scenarios in the teams I have coached, and my purpose is to document them in this post, in case they are useful to somebody in a situation similar to my friend's and in the hope of receiving useful feedback.

Structure of this Post

  1. Starting Points: a description of different roles and stereotypes frequently found in development teams. It will help us understand where we are, that is, the beginning of the trip, and what the next steps could be.
  2. An Agile Tester…: tries to define what characterizes an agile tester and provide some guidance on the destination of this trip.
  3. Agile Testing BOK: a simple list with short descriptions of the different disciplines included in agile testing, structured around the world-famous _agile testing quadrants_.
  4. Some Frequent Journeys: describes some example trips to agile testing mastery, starting from a few different realities.
  5. Learning Resources: compiles links to books and articles that can help in understanding agile testing and its practices.

Starting Points

It all starts where you are now. My intention in this section is to acknowledge that there are as many different starting points as people willing to start a journey. Even seeing such wild diversity, I think we can still recognize some common patterns that can be useful, as you, or the people you are trying to help find their path, may be close to what is described here.

Traditional Senior Tester
  • Proficient at defining test specifications based on requirements or design specifications
  • Performs ad-hoc exploratory testing looking for bugs
  • Defines errors as software behaviors that fall outside the specification
  • In borderline cases, identifies together with other team members whether there's a specification or implementation defect
  • May have some automation experience for repetitive tests (e.g. regression, performance measurement…)
Traditional Junior Tester
  • Completes test specifications by filling in details like in/out parameters, developing additional test cases, etc.
  • Executes the test cases defined in the specification, manually or using automated scripts
  • Raises issues when the observed behavior does not conform to the specification
Traditional Programmer
  • Tests and debugs component code informally as it gets created, normally in isolation from other components.
  • Creates and executes Unit Test specifications at module / class / component level.
Traditional Architect / Senior Developer
  • Defines interfaces for system components and assigns responsibilities to them.
  • Creates Integration Test Specifications together with Senior Testers.
  • Creates and executes difficult performance / load test specifications using highly specialized tools.
Business Analyst
  • Defines the business model and requirements for the system. Sometimes he/she participates in the component breakdown definition, providing specific requirements for each component.
Traditional Product Manager
  • Defines the product roadmap.
  • Participates in the requirements breakdown at the system level.
Agile Coder Mid-journey
  • Practices TDD, creating test specifications for each component / class / method before it gets implemented; the tests are executed continuously to ensure no regression errors are introduced.
  • Co-creates, together with Business Analysts and Agile Testers, the system-wide requirements specified in the User Stories, sometimes even creating tests as the specification (BDD or Spec By Example).
  • Co-responsible for the quality of the final product, together with the rest of the Agile team.

An Agile Tester…

  • focuses on delivering business value.
  • is an Agile team member who knows and has applied the basics of Agile software development:
    • The Manifesto
    • The Principles
    • A simple framework such as Scrum and / or Kanban
  • knows the rudiments of the XP technical practices and the synergies / relationships between them.
  • practices Agile testing, i.e. ATDD, Specification by example, BDD.
  • leverages automation as much as it’s practical.
  • is flexible, takes a whole team approach to software development.
  • is a generalizing specialist, so she practices or learns from:
    • POs and BAs: what the customers need, domain knowledge, how to write good specifications using tests.
    • Developers: TDD, white-box testing, architecture and design rudiments, and some coding techniques.
    • Other quality experts: the non-testing aspects of software quality.
  • teaches others in the team about Agile testing, her specialty.

Agile Testing BOK

This section is based on the Agile Testing Quadrants, first introduced by Brian Marick here http://www.exampler.com/old-blog/2003/08/22/#agile-testing-project-2 and later refined by Lisa Crispin here http://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants and in her Agile Testing book http://techbus.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321616944.
If you are not yet familiar with Lisa's book, do yourself a favor and read it now; it's a great use of your time even if you are already on your agile testing trip.

Another great resource for acquiring insights that can change your view of testing in record time is Scott Ambler's magnificent article Agile Testing and Quality Strategies: Discipline Over Rhetoric http://www.ambysoft.com/essays/agileTesting.html. Go read it, now… really!
This is my take on the agile testing quadrants. It contains a combination of ideas obtained from the aforementioned sources plus my own material (of course, all errors are only my own).

Q1: Technology-Facing Tests that Support the Team

Purpose
  • Go faster, do more
  • Create flexible code that adapts gracefully
  • Shortest feedback cycle
Types
  • Automated unit tests
  • Automated component tests
Toolset
  • xUnit frameworks
  • Mock objects
  • Build automation tools
  • Source code control
Audience
  • Developers: create, use and maintain test suites
  • Test experts: assist in defining functional content and data sets to be used
Risks / difficulties
  • Technical debt: legacy code with low (even zero) unit test density
  • Complex / heavy / slow dependencies (e.g. databases)
  • Execution of the test suite takes longer than acceptable to the team, so it's not run often
Frequent Practices
  • TDD (a minimal sketch follows this list)
  • Object mocking
  • Design for testability
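Below is a minimal sketch of what a Q1 test looks like in practice, using JUnit 5 and Mockito as stand-ins for the xUnit frameworks and mock objects listed above. The PriceEstimator and SalesHistoryRepository names are hypothetical, loosely inspired by the RRH example from an earlier post.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class PriceEstimatorTest {

    interface SalesHistoryRepository {          // hypothetical collaborator
        List<Double> recentSalePrices(String recordId);
    }

    static class PriceEstimator {               // hypothetical unit under test
        private final SalesHistoryRepository sales;
        PriceEstimator(SalesHistoryRepository sales) { this.sales = sales; }
        double estimate(String recordId) {
            // Average of recent sales; 0.0 when there is no history.
            return sales.recentSalePrices(recordId).stream()
                        .mapToDouble(Double::doubleValue)
                        .average().orElse(0.0);
        }
    }

    @Test
    void estimatesTheAverageOfRecentSales() {
        // The mock isolates the test from a real database, one of the
        // "complex / heavy / slow dependencies" named as a Q1 risk.
        SalesHistoryRepository sales = mock(SalesHistoryRepository.class);
        when(sales.recentSalePrices("rrh-42")).thenReturn(List.of(10.0, 20.0));

        assertEquals(15.0, new PriceEstimator(sales).estimate("rrh-42"));
    }
}
```

The mock is what keeps the feedback cycle short: the test never touches a real database, so the suite stays fast enough to run constantly.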
Q2: Business-Facing Tests that Support the Team

Purpose
  • Provide the big picture and enough detail to guide coding
  • Clarify and specify requirements
  • Identify / mitigate risk in obscure or risky areas of the product
  • Detail conditions of satisfaction
  • Increase the domain knowledge of the team
  • Provide a functional regression safety net
Types
  • Automated acceptance tests
  • Story tests
  • Functional tests
  • Simulations
  • Manual prototypes
  • Examples
Toolset
  • Eliciting requirements
    – Flow diagrams
    – Mock-ups (e.g. paper prototypes)
    – Wireframes
  • Testing behind the GUI
    – xUnit frameworks
    – BDD tools (Cuke4Duke, Cucumber-JVM, Concordion, easyb and JBehave for Java, NBehave and NSpec for .NET, and Cucumber and RSpec for Ruby)
    – FIT / FitNesse / SLiM
    – CrossCheck, Ruby Test::Unit, soapUI for testing Web Services
  • Testing through the GUI
    – Record and playback tools
    – Scripting tools (Watir for Ruby, Selenium, Canoo WebTest)
  • Test management tools
    – Geminy
    – HP Quality Centre
    – IBM Rational Quality Manager
    – TestLink
    – Wiki
    – FitNesse
Audience
  • Test experts: define the tests together with BAs; implement automated tests together with developers
  • BAs: define the tests together with test experts
  • Developers: support the test automation effort; consume the tests as specification and verification tools
Risks / difficulties
  • Tests take too long to specify during the sprint. Build tests incrementally to feed the development team high-level acceptance tests early on.
  • Tests are not maintained after the sprint in which they were introduced. To avoid this, make sure all tests always pass in your system.
  • Execution of the test suite takes longer than acceptable to the team, so it's not run often
  • Technical debt: legacy code with low (even zero) unit test density
  • Complex / heavy / slow dependencies (e.g. databases)
Frequent Practices
  • BDD
  • ATDD (a minimal sketch follows this list)
  • Specification by example
  • Data-driven testing
  • Testing behind the UI
  • Testing through the UI
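To make the Q2 idea of tests-as-specification tangible, here is a minimal ATDD sketch using Cucumber-JVM, one of the BDD tools listed above (recent versions use the io.cucumber packages; older ones used cucumber.api). The scenario, the step wording and the step class are all hypothetical.

```java
// Feature file (e.g. src/test/resources/valuation.feature), exercised by
// the step definitions below:
//
//   Feature: Collection valuation
//     Scenario: Valuing a small collection
//       Given a record worth 10.0
//       And a record worth 500.0
//       When I estimate the collection value
//       Then the estimate is 510.0

import static org.junit.jupiter.api.Assertions.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import java.util.ArrayList;
import java.util.List;

public class ValuationSteps {

    private final List<Double> collection = new ArrayList<>();
    private double estimate;

    @Given("a record worth {double}")
    public void aRecordWorth(double price) {
        collection.add(price);
    }

    @When("I estimate the collection value")
    public void iEstimateTheCollectionValue() {
        // In real code this would call the production valuation service;
        // summing here keeps the sketch self-contained.
        estimate = collection.stream().mapToDouble(Double::doubleValue).sum();
    }

    @Then("the estimate is {double}")
    public void theEstimateIs(double expected) {
        assertEquals(expected, estimate, 0.001);
    }
}
```

The point of the quadrant is that BAs, testers and developers co-own the scenario text, while the automation underneath it doubles as the functional regression safety net.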
Q3: Business-Facing Tests that Critique the Product

Purpose
  • Critique the actual product from the business and user perspective
Types
  • Exploratory testing
  • Scenarios
  • Usability testing
  • UAT (user acceptance tests)
  • Alpha / beta usage
Toolset
  • Test setup automation using any of the tools mentioned in Q1 and Q2
  • Test data generation tools (e.g. PerlClip)
  • Log file monitoring tools (e.g. Unix's tail, LogWatch)
  • Simulators
  • Emulators
  • Scenarios and workflows
Audience
  • Test experts: perform exploratory testing; collaborate with developers on step automation; define domain-relevant scenarios; perform UAT
  • BAs: check the product is fit for the purpose it was built for; define domain-relevant scenarios; perform UAT
  • Technical writers: may do exploratory testing to learn about the product when writing user-facing documentation
  • Developers: assist with the automation of steps
  • CX, UX and usability experts: help define and sometimes perform usability testing
Risks / difficulties
  • Difficult to automate, as it needs "a brain" to critique the product
  • If Q1 and Q2 tests don't leverage automation properly there will be no time for Q3 testing
  • Sometimes it's difficult to engage the relevant stakeholders in this kind of testing, especially when done in a rolling-wave iterative approach, as some of them are accustomed to an end-of-project testing stage
Frequent Practices
  • Demonstrations
  • End-to-end system testing
  • Testing behind the UI
  • Soap opera testing, a term coined by Hans Buwalda [2003]: take a scenario that is based on real life, exaggerate it in a manner similar to the way TV soap operas exaggerate behavior and emotions, and compress it into a quick sequence of events. Think about questions like, "What's the worst thing that can happen, and how did it happen?"
  • Using automation to help in exploratory testing, e.g. setup and frequently performed sequences (a minimal sketch follows this list)
  • User needs and persona testing for usability testing
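The "automation to help exploratory testing" practice deserves a tiny illustration. Here is a minimal sketch of PerlClip-style test data generation: a length-marked string that lets a tester read off where a field silently truncated input. It is a simplified take on James Bach's counterstring idea, and all names are mine.

```java
public class ExploratoryData {

    // Builds a string like "1*3*5*7*9*11*14*..." of exactly the requested
    // length; each number records the 1-based position where it starts,
    // so a truncated paste reveals roughly how many characters survived.
    static String lengthMarkedString(int length) {
        StringBuilder sb = new StringBuilder();
        while (sb.length() < length) {
            sb.append(sb.length() + 1).append('*');
        }
        sb.setLength(length);
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. probe a "record title" field that claims to accept 40 chars
        System.out.println(lengthMarkedString(40));
    }
}
```

Nothing here replaces the tester's brain; it just frees exploration time from clerical work, which is the whole point of the Q3 toolset.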

Q4: Technology-Facing Tests that Critique the Product

Purpose
  • Verify nonfunctional requirements, including configuration issues, security, performance, memory management, the "ilities" (e.g. reliability, interoperability and scalability), recovery, and even data conversion
Types
  • Performance and load testing
  • Stress testing
  • "ilities" testing: security, reliability, stability, maintainability, compatibility
Toolset
  • Performance and load testing tools
    – Unit-level perf tools: JUnitPerf, httperf
    – Open source: Apache JMeter, The Grinder, Pounder, ftptt, and OpenWebLoad
    – Commercial: NeoLoad, WebLoad, eValid LoadTest, LoadRunner, and SOATest
  • OS profiling tools
  • Ethical hacking tools
Audience
  • Developers: develop and maintain all or parts of the tests to be performed; assist external experts in evaluating the testing needs, as the product tech experts
  • Test experts: assist external experts in evaluating the testing needs, as the product behaviour experts; may create the non-functional test specs and run them when they don't require fully dedicated experts
  • Security expert (ethical hacker): may be requested to analyze or try to break a system's security; typically not focused on a single project
  • Database experts: sometimes called in to design DB load tests or data conversion tests; typically not focused on a single project
  • Performance test expert: normally a senior developer with a specialized skillset and tools who designs and runs performance tests for a number of projects and systems; typically not focused on a single project
Risks / difficulties
  • The team may focus on the business requirements and forget the non-functional ones; these may even be perceived as something to be dealt with by developers only
  • When they are perceived as low-risk for a project, they may be missing completely from the test plan
  • They need special knowledge and expensive tools, so they are "faked" or skipped altogether
  • Access to the required experts is difficult, i.e. there's a lead time to reach them, and projects are asked to batch their testing, add it to a queue and wait for results
  • Cross-functional tests are expensive and hard to do in small chunks
Frequent Practices
  • Incremental nonfunctional testing (from the start, building upon it)
  • Baseline performance before tuning (a minimal sketch follows this list)
  • Test environments (simulating, emulating or replicating production environments)
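As a minimal illustration of "baseline performance before tuning", here is a sketch of a unit-level timing check using only the JDK; the tools listed above (JUnitPerf, JMeter, etc.) do this far more rigorously. The searchCatalog call, the 200 ms budget and the run counts are all hypothetical.

```java
import java.util.concurrent.TimeUnit;

public class SearchBaseline {

    // Stand-in for the real operation under test.
    static void searchCatalog(String query) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(5);
    }

    public static void main(String[] args) throws InterruptedException {
        // Warm-up runs so JIT compilation doesn't distort the numbers.
        for (int i = 0; i < 100; i++) searchCatalog("warmup");

        int runs = 50;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) searchCatalog("miles davis");
        long avgMs = TimeUnit.NANOSECONDS.toMillis((System.nanoTime() - start) / runs);

        System.out.printf("average search time: %d ms (budget 200 ms)%n", avgMs);
        if (avgMs > 200) {
            throw new AssertionError("search exceeded its 200 ms budget");
        }
    }
}
```

Run from the first sprint onward, even a crude check like this catches performance regressions while they are still cheap to fix, instead of at an end-of-project testing stage.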

Some Frequent Journeys

Traditional Programmer
  1. Q1
    1. Code along a TDD intro book
    2. Practice TDD pairing with experts and novices
    3. Read on advanced techniques
    4. Practice TDD pairing with experts and novices
    5. Help others learn TDD
  2. Q2
    1. Prototyping
      • Wireframes
      • Mock-ups
    2. ATDD + Test Automation
      1. Read intro book
      2. Guided by an expert, implement one automated test suite behind-the-GUI
      3. Experiment with different automation tools
      4. Work through-the-GUI
      5. Learn how to write AT specifications
      6. Use the automation tools beyond AT
  3. Q4
    1. Support an expert by providing application specific info
    2. Implement Q4 Testing for a simple low-risk case
  4. Q3
    1. Support role: work alongside the test experts in the team, providing them with the support they need (e.g. white-box test design)
Traditional Senior Tester
  1. Read Agile Testing intro book
  2. Q2
    1. ATDD, Test Automation
      1. Read ATDD intro book
      2. Learn to write AT specs by pairing with an agile expert
      3. Get some hands-on practice using some BDD / ATDD automation tools
      4. Pair with experts and novices on automation tasks
      5. After TDD learning, pair with experts on test harness implementation tasks
  3. Q3
    1. Agile Exploratory Testing
      1. Pair with an Agile expert to learn timeboxing skills
      2. Learn how to use log watching tools
      3. Learn how to use Test Data Generators
      4. Design, tool and implement automation that reduces time-consuming tasks (e.g. setup)
    2. Scenario Development
      1. Pair with a senior test expert learning to execute already defined Test Scenarios
        • Workflow
        • Persona-based
        • Soap Opera
      2. Try developing new Scenarios including
        • Workflow
        • Persona-based
        • Soap Opera
  4. Q1 – TDD basics
    1. Acquire basic coding skills and TDD understanding
    2. Pair with programmers on project tasks (focus on learning xUnit tools and coding)
  5. Q4 – Support role
    1. Support an expert by providing application specific info
    2. Implement Q4 Testing for a simple low-risk case
Traditional Junior Tester
  1. Read Agile Testing intro book
  2. Q1 – TDD basics
    1. Acquire basic coding skills and TDD understanding
    2. Pair with programmers on project tasks (focus on learning xUnit tools and coding)
  3. Q2 – ATDD, Test Automation
    1. Read ATDD intro book
    2. Pair with experts on automation tasks
    3. Pair with experts on test harness implementation tasks
    4. Learn to write AT specs by pairing with an ATDD expert
    5. Pair with experts and novices on Q2 tasks
  4. Q3
    1. Pair with an Agile test expert learning how to do Agile exploratory testing
    2. Pair with a senior test expert learning to execute already defined Test Scenarios
      • Workflow
      • Persona-based
      • Soap Opera
Business Analyst
  1. Q2 – ATDD, Test Automation
    1. Read ATDD intro book
    2. Learn to write AT specs by pairing with an agile expert
  2. Q3
    1. Pair with an Agile test expert learning how to do Agile exploratory testing
    2. Pair with a senior test expert learning to execute already defined Test Scenarios
      • Workflow
      • Persona-based
      • Soap Opera
Agile Developer Mid-journey
  1. Q1
    1. Read on advanced techniques
    2. Practice TDD pairing with experts and novices
    3. Help others learn TDD
  2. Q2
    1. Prototyping
      • Wireframes
      • Mock-ups
    2. ATDD + Test Automation
      1. Read intro book
      2. Guided by an expert, implement one automated test suite behind-the-GUI
      3. Experiment with different automation tools
      4. Work through-the-GUI
      5. Learn how to write AT specifications
      6. Use the automation tools beyond AT
  3. Q4
    1. Support an expert by providing application specific info
    2. Implement Q4 Testing for a simple low-risk case
  4. Q3
    1. Support role: work alongside the test experts in the team, providing them with the support they need (e.g. white-box test design)

Learning Resources

Lost in Time Tracking

Let me be direct and confess my opinion and intention first: I see time reporting as just waste from an engineering organization's point of view. Even though I honestly expect to be proven wrong any day, considering how many intelligent people consider time tracking to be really important, I haven't been provided with any solid proof or evidence yet. I am writing this entry to help clarify my own thinking, and with the hope of getting some comments back and learning from them.

I expressed my opinion to an informally gathered group of engineering managers. They strongly disagreed with my point of view, so I asked them to present the group with a change or improvement performed in their teams during the last year based on time usage reports or statistics. Unsurprisingly, there was none. Nobody in the group had ever used the reports or data coming from the enterprise time reporting systems to actually manage their teams. They agreed this was partly due to the reports not representing the complex reality of time usage within a product development or engineering team. If it's really important and can provide insight into our teams and processes, then we are missing the opportunity. It's practically useless even if it could have some theoretical potential.

Continue Reading

Boiling More Than One Ocean (At the Same Time)

Note on the publishing date: this post first appeared on 2013/02/25 as part of the now defunct leannovation blog, which died because its name happened to coincide with that of an unrelated existing company.

Since I joined my first non-corporate software development team a few years back, I have seen plenty of examples of product development organizations trying to do too much in parallel. I'm not talking about the usual peak of activity here or there, but about a constant stretch toward higher utilization levels or a bigger number of projects crawling through the development life-cycle.

I am bringing up one particular example as an illustration. It was perhaps an exaggerated case which, fortunately enough, I have only witnessed and never been a part of. The organization in question was a software development department within the IT division of a mid-sized corporation. At the time they usually bought most of their software "in a box" or through externally contracted development projects, so the department was not very big. Most of their developers were allocated to multiple projects, the most in-demand ones to as many as 10. Teams didn't last long enough to gel, and individual allocations were "calculated" through some pretty sophisticated resource management techniques, considering several variables like title, occupation, availability (in chunks of 5%) and even cost.

Continue Reading