3 Risks to Every Team’s Progress and How to Mitigate Them

When looking at improving performance, the first thought is often to increase the size of the development team; however, a larger group is not necessarily the only or the best solution. In this piece, I suggest several reasons to keep teams small, as well as reasons to stop them from getting too tiny. I also look at several types of risk to consider when choosing a team size: how team size affects communication, individual risk, and systematic risk.

Optimal Team Size for Performance

The question of optimal team size is a perpetual debate in software organizations. To adjust, grow and develop different products we must rely on various sizes and makeups of teams.

We often assume that fewer people get less done, which results in the decision of adding people to our teams so that we can get more done. Unfortunately, this solution often has unintended consequences and unforeseen risks.

When deciding how big of a team to use, we must take into consideration several different aspects and challenges of team size. The most obvious and yet most often overlooked is communication.

Risk #1: Communication Costs Follow Geometric Growth

The main argument against big teams is communication. Adding team members results in geometric growth of the number of communication paths, and with that growth come problems. This increase in communication pathways is most easily illustrated by a visual representation of team members and communication paths.

Geometric Growth of Communication Paths

Bigger teams increase the likelihood that we will have a communication breakdown.
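The growth behind the diagram follows a simple formula: a team of n members has n(n-1)/2 potential pairwise communication paths, so every new member adds a path to each existing member. A quick sketch:

```python
# Potential pairwise communication paths in a team of n members:
# each member can talk to each of the other n - 1 members, and each
# path is counted once, giving n * (n - 1) / 2.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for size in (3, 5, 8, 12):
    print(f"{size:>2} members -> {communication_paths(size):>2} paths")
```

Going from 5 members to 12 takes you from 10 paths to 66, which is why adding people tends to add coordination overhead faster than it adds capacity.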

From the standpoint of improving communication, one solution that we commonly see is the creation of microservices to reduce complexity and decrease the need for constant communication between teams. Unfortunately, the use of microservices and distributed teams is not a “one size fits all” solution, as I discuss in my blog post on Navigating Babylon.

Ultimately, when it comes to improving performance, keep in mind that bigger is not necessarily better. 

Risk #2: Individual Risk & Fragility

At first glance, a larger team seems less fragile: after all, a bigger team should be able to handle one member winning the lottery and walking out the door reasonably well. This assumption is partially correct, but a lottery ticket is usually an individual risk (unless people pool tickets, something I have seen in a few companies).

When deciding how small to keep your team, make sure that you build in consideration for individual risk and be prepared to handle the loss of a team member.

Ideally, we want the smallest team possible while limiting our exposure to any risk tied to an individual. Unfortunately, fewer people tend to get less work done than more people (leaving skill out of it for now).

Risk #3: Systematic Risk & Fragility

Systematic risk relates to events that affect multiple people in the team. Fragility is the concept of how well a structure or system can handle hardship (or change in general). Systematic risks stem from aspects shared across the organization: leadership, shared space, or shared resources.

Let’s look at some examples:

  • Someone brings the flu to a team meeting.
  • A manager/project manager/architect has surprise medical leave.
  • An affair between two coworkers turns sour.

All of these events can grind progress to a halt for a week (or much more). Events that impact morale can be incredibly damaging as lousy morale can be quite infectious.

In the Netherlands, we have the concept of a Baaldag (roughly translated as an irritable day), where team members limit their exposure to others when they know they won’t interact well. In the US, with its stringent sick-day and holiday limits, this is rare.

Solutions to Mitigate Risk 

Now there are productive ways to minimize risk and improve communication. One way to do this is by carefully looking at your structure and goals and building an appropriate team size while taking additional actions to mitigate risk. Another effective technique for risk mitigation is through training. You shouldn’t be surprised, however, that my preferred method to minimize risk is by developing frameworks and using tests that are readable by anyone on your team.

The case for continuing education

Do you have job security? The surprising value in continuing education.

If the last 2 decades have taught us anything about change, they’ve shown that while software development may be one of the most rapidly growing and well-paid industries, it can also be highly unstable.

You may already invest in professional development in your free time. In this piece, I’ll show you how to convince your employer to invest in professional development as part of your job.

My Personal Story

I started my first software job at the height of the dot-com boom. I’d yet to finish my degree, but this didn’t matter because the demand for developers meant that just about anyone who merely knew what HTML stood for could get hired. Good developers could renew contract terms 2 or 3 times per year. Insanity reigned and some developers financially made out like bandits.

Of course, then the crash came. The first crash happened just about the time I finished my degrees; by the time I graduated, I’d gone through three rounds of layoffs during my internships. By the time I actually started full-time work, things had stabilized a bit, with layoffs settling down to once-a-year events in most companies. In 2007 we saw an uptick, with a twice-a-year layoff habit at some companies, but then it quieted down again.

Of late, in most companies and industries software developer layoffs are less frequent. The more significant problem is, in fact, finding competent brains and bodies to fill open positions adequately.

My initial move into consultancy stemmed from a desire to take success into my own hands. Contracting and fulfilling specific project needs leaves me nimble and in control of my own destiny. My success is the happiness of my customer, and that is within my power.  Indeed, I am not immune to unexpected misfortune, but I rarely risk a sense of false security.  And I particularly enjoy the mentoring aspect of working as a consultant.

Despite the growth, I’d say software is still a boom and bust cycle.

Despite the relative calm (for the developers, not the companies), I think that as a software developer it is wise to accept that our work can vanish overnight or our salaries cut in half next month. Some people even leave the industry in hopes of better job security, while others deny the possibility that misfortune will ever knock on their door.

Not everyone has the desire, the personality, or the aptitude to be a consultant. However, everyone does have the ability to plan for and expect change. I wager that in any field it is wise to always have your next move in the back of your mind. This need to be prepared is particularly true in software development. While some people keep their resume fresh and may even make a habit of annual practice interviews, others have no idea which steps they would need to take to land their next job.

Landing that next job has some steps, and while the most straightforward step may be to make sure your resume and your LinkedIn profile are fresh with the right keywords (and associated skills) sprinkled throughout, it is even more important to stay on top of your game professionally.

Position yourself correctly, and you will fly through the recruiters’ hands into your next company’s lap. For many companies, keywords are not enough; they also need to know that you have experience with the most current versions and recent releases. Recruiters may not be able to tell you the difference between .Net 3.5 and 4.0, but if their client asks for only 4.0, they will filter out the 3.5 candidates. Versions are tricky: Angular 1 to 2 is a pretty big change, while Angular 2 to 4 is tiny (and no, there is no Angular 3); it is not reasonable to expect recruiters to make heads or tails of these versions.

Constant Change Means Constant Learning

So how do you position yourself to leap if and when you need to? In the field of software development, new tools, methods, and practices are continually appearing. Software developers frequently work to improve and refine the trade and their products.

The result of this constant change is that software engineers who maintain legacy products are at risk of losing their competitive edge. Staying at one job often results in developers becoming experts in software that will eventually be phased out.

Not surprisingly, the companies that rely on software to get their work done, but that are not actually software companies by trade tend to overlook professional development for their employees. The decision makers at these companies concern themselves with their costs more than the competitiveness of their employees and so they often remain entirely ignorant of the realities for their software engineers.

In some companies, the decision makers see no logic in investing in training their employees or upgrading their software when what they have works just fine. It’s easy to budget for a software upgrade; what is less evident is the cost of the reduced marketplace competitiveness of their employees. Even worse, in some companies, there is an expectation that instead of investing in training, they’ll simply hire new people with the skills they need when their existing staff’s skills get dated.

I once met a brilliant mathematician in Indianapolis who had worked on a legacy piece of software. One day, after 40 years of loyal employment, he found himself without a job due to a simple technology upgrade. With a skill set frozen circa 1980, he ended up working the remainder of his career at his neighborhood church, doing administrative tasks and errands. Most people do not want to find themselves in that position; they want to keep their economic prospects safe.

Maintain Your Own Competitive Edge

Another reason that many software engineers (and developers) move jobs every few years is to maintain their competitive edge and increase their pay. Indeed, last year Forbes published a study showing that employees who stay longer than two years in a position tend to make 50% less than their peers who hop jobs.

“There is often a limit to how high your manager can bump you up since it’s based on a percentage of your current salary. However, if you move to another company, you start fresh and can usually command a higher base salary to hire you. Companies competing for talent are often not afraid to pay more when hiring if it means they can hire the best talent.”

More Important than Pay is the Software Engineer’s Fear of Irrelevance

As a software engineer working for a company that uses software (finance, energy, entertainment, you name it) there is nothing worse than seeing version 15 arrive on the scene when your firm remains committed to version 12.

Your fear is not that tech support for version 12 will be phased out, as these support windows are often a good decade in length. You fear that this release means your expertise continues to become outdated, and that the longer you stay put, the harder it will be to get an interview, let alone snag a job. You feel a sinking dread that your primary skill-set is suddenly becoming irrelevant.

Your dated skill-set has real financial implications and will eventually negatively impact your employability.

A Balancing Act

For companies, the incentive is to develop software cheaply, and cheap means that it is easy to use, quick to develop, and, let’s be realistic here, that you can Google the error message and copy your code from Stack Exchange.

A problem in software can often gobble up a few days when you are on the bleeding edge. All too often I stumble upon posts on Stack Exchange where people answer their own question, often days later; or, even worse, I see questions answered months after the original request for help. It makes sense that companies want to avoid the costs of implementing new releases.

Why would companies jump on the latest and greatest when the risk of these research problems is amplified in the latest version?

Companies Are Motivated to Maintain Old Software, While Employees Are Motivated to Remain Competitive

This balancing act is a cost transfer problem; the latest framework is a cost to companies due to the research aspect, whereas an older framework is a cost to developers by reducing their marketability. At the moment where it is hard to hire good people, it will be hard to convince developers to bear the costs of letting their skills fall out of date.

New language and framework features can add value, but the gains are often minor: frequently they are just new ways to do something people can already do, promising better and faster results (and even that is only true after the learning curve; even then, the benefits rarely live up to expectations; see No Silver Bullet). Chances are that the costs of learning a new version of a framework will outweigh its benefits, especially for existing code bases.

It seems like there should be some room for the middle ground; in the past, there was a middle ground. This was called the training budget.

Corporate Costs

With software developers jumping ship every few years to maintain their competitive edge, it is understandable that some management might find it difficult to justify or even expect a return on investment on training staff. In many cases, you’d need to break even on your training investment in less than a year.

At the same time, the need for developers to keep learning will never go away. Developers are acutely aware that having out of date skills is a direct threat to their economic viability.

For the near future, developers will remain in high demand, and refusing to provide on-the-job continuing education will only backfire. Developers are in demand, and they want to learn on the job. Today we do our learning on the production code, and companies pay the price (quite likely with interest). Whereas before, developers were shipped off to conferences once a year; now they Google and read through the source code of the new framework on Stack Overflow for months as they try to solve a performance issue.

In Conclusion: Investing in Continuing Education Pays Off

The industry has gone through a lot of changes. In the dot-com boom, developers were hopping jobs at an incredible speed, and companies reacted by changing how they treated developers, cutting back on training as they saw tenures drop. This all makes perfect sense. Unfortunately, it has led to developers promoting aggressive and early adoption of frameworks so that they keep their skills up to date with the market. And as more and more companies adapt to frequent updates, the pressure to do so will only increase.

Training provides a way to break the cycle and establish an unspoken agreement that companies will leave developers as competitive as they were when they were hired by regular maintenance through training. So how to support continuing education and maintain a stable and loyal development pool? Send your developers to conferences, host in-house training, lunch and learns, and so on to ensure that they feel both technically competitive and financially secure.  

Despite their reluctance, in the end, there is a real opportunity and a financial incentive for companies to go back to the training budget approach. Companies want efficient development; developers want to feel economically secure. If developers are learning, they feel like they are improving their economic prospects and remaining competitive. Certainly, some will still jump ship when it suits their professional goals, but many will choose to stay put if they feel they remain competitive.

“Advancement occurs through the education of practitioners at least as much as it does by the advancement of new technologies.” Improving Software Practice Through Education

A Crucial Look at the Unethical Risks of Artificial Intelligence

Artificial Intelligence Pros and Cons:

As much as we wonder at the discoveries of AI and prediction engines and their benefits to society, we also recoil at some of their findings. We can’t make the correlations that this software discovers go away, and we can’t stop the software from re-discovering the associations in the future. As decent human beings, we certainly wish to avoid our software making decisions based on unethical correlations.

Ultimately, what we need is to teach our AI software lessons to distinguish good from bad…

Unintended results of AI: an example of the disadvantage of artificial intelligence.

A steady stream of findings already makes it clear that AI efficiently uses data to determine characteristics of people. Simplistically speaking, all we need is to feed a bunch of data into a system and then that system figures out formulas from that data to determine an outcome.

For example, more than a decade ago in university classes, we ran some tests on medical records, trying to find people who had cancer. We coded the presence of the disease into our training data, which we then scanned for correlations with other medical codes present.

The algorithm ran for about 26 hours. In the end, we checked the results for accuracy, and needless to say, the system returned fantastic results. It reliably homed in on a medical code that predicted cancer: more specifically, the presence of tumors.
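As a toy illustration of this kind of correlation mining (the records and codes below are invented for the example; the real system was far more involved), we can score each medical code by how often its presence or absence agrees with the cancer label:

```python
# Invented coded medical records: each has a set of codes and a label.
records = [
    {"codes": {"tumor", "fatigue"}, "cancer": True},
    {"codes": {"tumor", "anemia"}, "cancer": True},
    {"codes": {"fatigue"}, "cancer": False},
    {"codes": {"anemia", "flu"}, "cancer": False},
    {"codes": {"tumor"}, "cancer": True},
    {"codes": {"flu"}, "cancer": False},
]

def predictive_power(code: str) -> float:
    # Fraction of records where presence/absence of the code
    # agrees with the cancer label.
    hits = sum((code in r["codes"]) == r["cancer"] for r in records)
    return hits / len(records)

all_codes = set().union(*(r["codes"] for r in records))
best = max(all_codes, key=predictive_power)
print(best, predictive_power(best))  # the "tumor" code agrees with every label
```

Nothing in the scoring loop mentions tumors; the data alone surfaces the correlation, which is the whole point.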

Of course, at the outset, we’d like to assume this data will go to productive, altruistic uses. At the same time, I’d like to emphasize that even though the algorithm delivered a “well, of course, that is the case” response, it substantially demonstrated that such a program can discover correlations without being explicitly told what to look for…

Researchers might develop such a program with the intention to cure cancer, but what happens if it gets into the wrong hands? As we well know, not everyone, especially when driven by financial gain, is altruistically motivated. Realistically speaking, if we use a program looking for correlations to guide research leading to scientific discoveries for good intent, it can also be used for bad.

The Negative Risk: Unethical Businesses

By function and design, algorithms naturally discriminate. They distinguish one population from another. The same basic principles can be used to determine a multitude of characteristics: sick from healthy, gay from straight, black from white.

Periodically the news picks up an article that illustrates this facility. Lately, it’s been a discussion of facial recognition. A few years ago the big issue revolved around Netflix recommendations.

The risk is that this kind of software can likely figure out, for example, whether you are gay, with varying levels of certainty. Depending on the data available, AI software can figure out all sorts of other information that we may or may not want it to know, or that we may not intend for it to understand.

When it comes to the ethics and adverse effects of artificial intelligence, it’s all too easy to toss our hands in the air and have excited discussions around the water cooler or over the dinner table. What we can’t do is simply make it go away. This concern is a problem that we must address.

Breakthrough: The Problem is its own Solution

Up to this point, my arguments may sound depressing. The good news is that the source of the problem is also the source of the solution.

If this kind of software can determine from data sets the factors (such as the presence of tumors) that we associate with a given classification (such as the presence of cancer), then we can take these same algorithms and tell our software to ignore those results.

If we don’t want to know this kind of information, we can simply ignore this type of result. We can then test to verify that our directives are working and that our software is not relying on the specified factors in our other algorithms.

For instance, say we know that, as part of determining the risk of delinquent payment on a mortgage, our algorithm can also determine gender, race, or sexual orientation. Rather than using this data, which would make the calculation a wee bit racist, sexist, and bigoted, we could ask it to ignore said data when calculating a mortgage rate recommendation.
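A minimal sketch of that idea, with hypothetical feature names and an invented scoring rule: protected attributes are stripped before they reach the model, and an assertion verifies they cannot leak back in.

```python
# Hypothetical set of attributes the model must never see.
PROTECTED = {"gender", "race", "sexual_orientation"}

def sanitize(features: dict) -> dict:
    # Drop protected attributes before scoring.
    return {k: v for k, v in features.items() if k not in PROTECTED}

def mortgage_rate(features: dict) -> float:
    features = sanitize(features)
    assert PROTECTED.isdisjoint(features), "protected data leaked into the model"
    # Invented scoring rule based only on permitted data.
    rate = 5.0
    if features.get("credit_score", 0) >= 700:
        rate -= 0.5
    if features.get("missed_payments", 0) > 2:
        rate += 0.5
    return rate

applicant = {"credit_score": 720, "missed_payments": 1, "race": "example-only"}
print(mortgage_rate(applicant))  # 4.5
```

The key design point is that the check lives inside the scoring path, so a test suite can verify the directive holds for every algorithm that calls it, not just the one we happened to audit.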

In fact, we could go even further. Just as we have equal housing and equal employment legislation, we could carry over to legislate that if a set of factors can be used to discriminate, then software should be instructed to disallow the combining of those elements in a single algorithm.

Discussion: Let’s look at an analogy.

Generally speaking, US society legislates that methamphetamine is bad and that people should not make it; but at the same time, the recipe is known, and we can’t uninvent meth.

An unusual tactic is to publicise the formula and tell people not to mix the ingredients into their bathtub “accidentally.” If we find people preparing to combine the known ingredients, we can then, of course, take legal action.

For software, I’d recommend that we take similar steps and implement a set of rules. If and when we determine the possible adverse outcomes of our algorithms, we can require that the users (business entities) cannot combine the said pieces of data into a decision algorithm, of course making an exception for those doing actual constructive research into data-ethical issues.

The Result: Constructing and/or Legislating a Solution

Over time our result would be the construction of a dataset of ethically sound and ethically valid correlations that could be used to teach software what it is allowed to do. This learning would not happen overnight, but it also might not be as far down the line as we first assume.

The first step would be to create a standard data dictionary where people and companies would be able to share what data they use, similar to elements on the chemical periodic table. From there we would be ready to look for the good and the bad kinds of discrimination. We can take the benefits of the good while removing the penalties from the bad.

This process might mean that some recommendation engines would have to ask whether they are allowed to utilize data that could be used to discriminate based upon an undesirable metric (like race). And it might mean that in some cases it would be illegal to combine specific pieces of data, such as in a mortgage rate calculation.

No matter what we choose to do, we can’t close Pandora’s box. It is open; the data exists, and the algorithms exist; we can’t make that go away. Our best bet is to put in the effort to teach software ethics, first by hard rules, and then hopefully let it figure some things out on its own. If Avinash Kaushik’s predictions are anywhere near accurate, maybe we can teach software to actually be better than humans at making ethical decisions. Only the future will tell!

If you’re curious about the subject of AI and Big Data read more in my piece Predicting the Future.

How to Effortlessly Take Back Control of Third Party Services Testing

Tools of the Trade: Testing with SaaS subsystems.

For the last few years, the idea has been floating around that every company is a software company, regardless of its actual business mission. Concurrently, ever more companies are dependent upon 3rd party services and applications. From here it is easy to extrapolate that the more we integrate, the more likely it is that, at some point, every company will experience problems with these services: downtime, breaking updates, and so on.

Predicting when downtime will happen, or forcing 3rd party services to comply with our needs and wishes, is difficult if not impossible. One solution to these challenges is to build a proxy. The proxy allows us to regain a semblance of control and to test 3rd party failures. It won’t keep the 3rd parties up, but it lets us simulate failures whenever we want to.

As an actual solution, this is a bit like building a chisel with a hammer and an anvil. And yet, despite the rough nature of the job, it remains a highly useful tool that facilitates your work as a Quality Assurance Professional.

The Problem

Applications increasingly use and depend upon a slew of 3rd party integrations. In general, these services tend to maintain decent uptime and encounter only rare outages.

The level of reliability of these services leads us to continue to add more services to applications with little thought to unintended consequences or complications. We do this because it works: it is cheap, and it is reliable.

The problems that arise stem from the simple nature of combining all of these systems. Even if each service maintains good uptime, independent errors and uncoordinated downtime may result in conditions where the fraction of time that all your services are concurrently up is not good enough.

The compounding uptimes

Let’s look at a set of services that individually boast at least 95% uptime. Say we have a service for analytics, another for billing, another for logging, another for maps, another for reverse IP, and yet another for user feedback. Individually, each may be up 95% of the time, but if their outages are independent, the odds of all six being up at the same time are 0.95⁶, which is less than 75%.
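The arithmetic is simple compounding: assuming independent outages, the chance that every service is up at once is the product of the individual uptimes.

```python
# Probability that all services are up at once, assuming their
# outages are independent: the product of the individual uptimes.
def compound_uptime(*uptimes: float) -> float:
    total = 1.0
    for u in uptimes:
        total *= u
    return total

# Six services, each boasting 95% uptime.
print(round(compound_uptime(*[0.95] * 6), 3))  # 0.735
```

Each additional 95%-uptime dependency knocks another 5% off the time your whole stack is healthy, which is how an application full of reliable services ends up unreliable in aggregate.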

As we design our service, working with an assumption of around a 95% uptime scenario feels a lot better than working with a chance of only 75% uptime. To exacerbate the issue, what happens when you need to test how these failures interact with your system?

Automated Testing is Rough with SaaS

Creating automated tests around services being down is not easy. Even if the services consumed are located on site, it is likely difficult to stop and start them from tests. Perhaps you can write some PowerShell and make the magic work. Maybe not.

But what happens when your services are not even located on site? The reality is that a significant part of the appeal of third-party services is that businesses don’t really want onsite services anymore. The demand is for SaaS services that remove us from the maintenance and upgrade loop. The downside to SaaS is that suddenly turning a service off becomes much more difficult.

The Solution: Proxy to the Rescue

What we can do is to use proxies. Add an internal proxy in front of every 3rd party service, and now there is an “on / off switch” for all the services and a way to do 3rd party integration testing efficiently. This proxy set-up can also be a way to simulate responses under certain conditions (like a customer Id that returns a 412 error code).

Build or buy a proxy with a simple REST API for administration and it should be easy to run tests that simulate errors from 3rd party providers. Now we can simulate an entire system outage in which the entire provider is down.

By creating an environment isolated by proxies, test suites can be confidently run under different conditions, providing us with valuable information as to how various problems with an application might impact our overall business service.

Proxy Simulations

Upon building a proxy in between our service and the 3rd party service, we can also put in the service-oriented analog of a mock. This arrangement means we can create a proxy that generates response messages for specific conditions.

We would not want to do this for all calls, but for some. For example, say we would like to tell it that user “Bob” has an expired account for the next test. This specification would allow us to simulate conditions that our 3rd party app may not readily support.
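A minimal in-process sketch of such a rule-based stub (the class and rule format are invented for illustration; a real proxy would sit on the network and forward unmatched requests to the actual provider):

```python
# Sketch of a stubbing proxy: rules match a request and short-circuit
# it with a canned response; everything else passes through.
class StubbingProxy:
    def __init__(self):
        self.rules = []  # list of (predicate, status, body) tuples

    def stub(self, predicate, status, body):
        # Register a canned response for requests matching predicate.
        self.rules.append((predicate, status, body))

    def handle(self, request: dict):
        for predicate, status, body in self.rules:
            if predicate(request):
                return status, body
        return self.forward(request)

    def forward(self, request: dict):
        # Placeholder for the real pass-through to the 3rd party service.
        return 200, {"ok": True}

proxy = StubbingProxy()
# Simulate an expired account, but only for user "Bob".
proxy.stub(lambda r: r.get("user") == "Bob", 412, {"error": "account expired"})

print(proxy.handle({"user": "Bob"}))    # (412, {'error': 'account expired'})
print(proxy.handle({"user": "Alice"}))  # (200, {'ok': True})
```

Wrap something like this in a small REST admin API, and test suites can flip rules on and off per scenario without the 3rd party ever knowing.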

Customer specific means better UX

By creating our own proxy, we can return custom responses for specific conditions. Most potential conditions can be simulated and tested before they happen in production. And we can see how various situations might affect the entire system. From errors to speed, we can simulate what would happen if a provider slows down, a provider goes down, or even a specific error, such as a problem that arises when closing out the billing service every Friday evening.

Beyond Flexibility

Initially, the focus might be on scenarios where you simulate previous failures of your 3rd party provider; but you can also test conditions that your 3rd party may not offer you in a test environment, for instance, expiring accounts in mobile stores. With a proxy, you can do all of this by merely keying off of the specific data that you know will come in the request.

In Conclusion: Practical Programming for the Cloud

This proxy solution is likely not listed in your requirements. At the same time, it is a solution to a problem that in reality is all too likely to arise once you head to production.

In an ideal world, we wouldn’t need to worry about 3rd party services.

In an ideal world, our applications would know what failures to ignore.

In the real world, the best course is preparation: this allows us to test and prevent outages.

In reality, we rarely test for the various conditions that might arise and cause outages or slowdowns. Working with and through a 3rd party to simulate an outage is likely very difficult, if not impossible. You can try and call Apple Support to request that they turn off their test environment for the App store services, but they most likely won’t.

This is essentially a side-effect of the Cloud. The Cloud makes it easy to add new and generally reliable services, which also happen to be cheap, and that makes the Cloud an all-around good business decision. It should not be surprising, then, that when you run into testing problems, an effective solution will also come from the Cloud. Spinning up small, lightweight proxies for a test environment is a practical solution for a problem in the Cloud.

Unrealistic Expectations: The Missing History of Agile

Unrealistic expectations, or “why we can’t replicate the success of others…”

Let’s start with a brain teaser to set the stage for questioning our assumptions.

One day a man visits a church and asks to speak with the priest. He asks the priest for proof that God exists. The priest takes him to a painting depicting a group of sailors, safely washed up on the shore following a shipwreck.

The priest tells the story of the sailors’ harrowing adventure. He explains that the sailors prayed faithfully to God and that God heard their prayers and delivered them safely to the shore.

Therefore God exists.

This is well and good as a story of faith. But what about all the other sailors who have prayed to God, and yet still died? Who painted them?

Are there other factors that might be at play?

When we look for answers, it’s natural to automatically consider only the evidence that is easily available. In this case, we know that the sailors prayed to God. God listened. The sailors survived.

What we fail to do, is look for less obvious factors.

Does God only rescue sailors that pray faithfully? Surely other sailors that have died, also prayed to God? If their prayers didn’t work, perhaps this means that something other than faith is also at play?

If our goal is to replicate success, we also need to look at what sets the success stories apart from the failures. We want to know what the survivors did differently from those that did not. We want to know what not to do, what mistakes to avoid.

In my experience, this is a key problem in the application of agile. Agile is often presented as the correct path; after all, lots of successful projects use it. But what about the projects that failed? Did they use Agile, or did they not implement Agile correctly? Or maybe Agile is not actually that big a factor in the success of a project?

Welcome to the history of what is wrong with Agile.

Consider this: a select group of Fortune 500 companies, including several technology leaders, decides to conduct an experiment. They hand-pick people from across their organizations to complete a very ambitious task, one an order of magnitude different from anything they’d previously attempted, with an aggressive deadline.

Question 1: How many do you think succeeded?

Answer 1: Most of them.

Question 2: If your team followed the same practices and processes that worked for these teams, do you think your team would succeed?

Answer 2: Probably not.

The Original Data

In 1986, Hirotaka Takeuchi and Ikujiro Nonaka published a paper in the Harvard Business Review titled "The New New Product Development Game." In this paper, Takeuchi and Nonaka tell the story of businesses that conducted experiments with their personnel and processes to find new ways of doing product development. The paper introduces several revolutionary ideas and terms, most notably the ideas that developed into the practices we now know as agile (and scrum).

The experiments, run by large companies and designed for product development (not explicitly intended for software development), addressed common challenges of the time regarding delays and waste in traditional methods of production. At the root of the problem, the companies saw the need for product development teams to deliver more efficiently.

The experiment and accompanying analysis focused on a cross-section of American and Japanese companies, including Honda, Epson, and Hewlett-Packard. To maintain their competitive edge, each of these companies wished to rapidly and efficiently develop new products. The paper looks at commonalities in the production and management processes that arose across each company's experiment.

These commonalities coalesced into a style of product development and management that Takeuchi and Nonaka compared to the rugby scrum. They characterized this "scrum" process with a set of 6 holistic activities. Taken individually, these activities may appear insignificant and may even be ineffective. However, when they occur together in cross-functional teams, they result in a highly effective product development process.

The 6 Characteristics (as published):

  1. Built-in instability;
  2. Self-organizing project teams;
  3. Overlapping development phases;
  4. Multilearning;
  5. Subtle control;
  6. Organizational transfer of learning.

What is worth noting is what is NOT pointed out in great detail.

For instance, the companies hand-picked these teams from a pool of, most likely, above-average talent. These were not random samples; these were not even companies converting their existing process. These were experiments with select teams inside companies. Nor did the companies bet the farm on these projects: the projects were large, but if they failed, the company would not go under.

If we implement agile, will we be guaranteed success?

First, it is important to note that all the teams discussed in the paper delivered positive results. This means that Takeuchi and Nonaka did not have the opportunity to learn from failed projects. With no failures in the data set, they could not compare failures with successes to see what might have separated the two.

Accordingly, it is important to consider that the results of the study, while highly influential and primarily positive, can easily deceive you into believing that if your company implements the agile process, you are guaranteed to be blessed with success.

After years in the field, I think it is vitally important to point out that success with an agile implementation is not guaranteed. I've seen too many project managers, team leads, and entire teams banging their heads against brick walls, trying to figure out why agile just does not work for their people or their company. Unlike these experiments, you start with whatever people you have, and agile might not be suited to them.

To put the logical problem simply: if all marbles are round, are all round things marbles? The study shows that these successful projects used these practices; it does not claim that these practices caused the success.

What is better: selecting the right people or the right processes for the people you have?

Consider that your company may not have access to the same resources available to the companies in this original experiment. These experiments took place in large companies with significant resources to invest. Resources to invest in their people. Resources to invest in training. Resources to invest in processes. Resources to cover any losses.

At first glance, it looks like the companies profiled by Takeuchi and Nonaka took big gambles that paid off as a result of the processes they implemented. However, it is very important to realize that they, in fact, took very strategic and minimal risks: they made sure to select the best people, and they did not risk any of their existing units. They spun up an isolated experiment at arm's length.

Look at it this way: most large multinational companies already employ above-average people, and then they cherry-pick those best suited for the job. This is not your local pick-up rugby team; it is a professional league. The strategic risks these large, well-resourced companies took may not be realistic for the average small or medium-sized organization.

The companies profiled selected teams that they could confidently send to the Olympics or World Cup. How many of us have Olympians and all-star players on our teams? And even if we have one or two, do we have enough to complete a team? Generally, no.

The Jigsaw Puzzle: If one piece is missing, it will never feel complete.

Takeuchi and Nonaka further compare the characteristics of their scrum method to a jigsaw puzzle. They acknowledge that having only a single piece of the puzzle, or missing a piece, means that your project will likely fail. You need all the pieces for the process to work. What they neglect to emphasize is that you also need the right people to correctly assemble the puzzle.

The only mention they make regarding the people you have is the following:

“The approach also has a set of ‘soft’ merits relating to human resource management. The overlap approach enhances shared responsibility and cooperation, stimulates involvement and commitment, sharpens a problem-solving focus, encourages initiative taking, develops diversified skills, and heightens sensitivity toward market conditions.”

In other words, the solution to the puzzle is not only the six jigsaw puzzle pieces, but it is also your people. These “soft merits” mean that if your people are not able to share responsibility and cooperate, focus, take the initiative, develop diverse skills and so on, they aren’t the right people for an agile implementation.

If you don’t have all the pieces, you can’t complete the puzzle. And if you don’t have the right people, you can’t put the pieces together in the right order. Again, you might be round, but you might not be a marble.

Human-Centered Development for the People You HAVE

As with any custom software development project, the people who implement it are key to your project's success. Implementing agile changes the dynamics of how teams communicate and work. It changes the roles and expectations across all aspects of your project, from executive management to human resources and budgeting.

Agile may work wonders for one company or team, but that success doesn't mean it will work wonders for YOUR team, especially if stakeholders do not understand the implications and needs of the process, or if they lack the appropriate aptitudes and skills.

In other words, if these methods don't work for your people, don't beat yourself or everyone else up. Instead, focus on finding a method that works for you and for your people.

Agile is not the only solution …

Why do people select agile? People implement agile because they have a problem to solve. However, with the agile approach managers need to step back and let people figure things out themselves. And that is not easy. Especially when managers are actively vested in the outcome. Most people are not prepared to step back and let their teams just “go.”

Maybe you have done the training, received the certifications, and theoretically "everyone" is on board. And yet, your company has yet to see all-star success. Are you the problem? Is it executive management? Is it your team? What is wrong?

I cannot overemphasize that the answer may be as simple as the people you have. Consider that the problem is unrealistic expectations. The assumption when using agile and scrum is that it is the best way to do development, but what if it is not the best way for you?

If you don’t have the right people or the right resources to implement agile development correctly, then you should probably do something else. At the same time, don’t hesitate to take the parts of agile that work for you. 

Citations:

Takeuchi, H., &amp; Nonaka, I. (1986, January). The New New Product Development Game. Harvard Business Review. Retrieved July 19, 2017, from https://hbr.org/1986/01/the-new-new-product-development-game

Correction vs. Prevention

Correction vs Prevention in Software Development

The desire to prevent adversity is a natural instinct.

As humans, as individuals, we generally do what we can to avoid something going wrong. This is especially true when we invest a lot of our time and effort in a project.

Say you spend 40 hours per week on a particular project for a year. The project may become part of your identity. At the least, you will be personally vested in a positive outcome.

The more time we invest, the greater our fear that something might go wrong. In this way, investing time and resources in a project is almost like raising a kid.

We do everything we can to set our project "children" up to avoid problems, with the intent that they achieve success. Similarly, we do everything within reach of our finances and power (and sometimes beyond), from schooling to extracurricular activities, to ensure that our kids have the best possible chance in life.

Defect Prevention

Investing in our kids is a bit like defect prevention in software process improvement. Just as many theories of parenting exist, many methods of defect prevention exist. Some of them, such as Six Sigma, look for the root causes of failures with the intention of preventing failures before they occur.

In prevention, we must continually ask why something might break and then fix the underlying causes. On one hand, this is very much like parenting.

A challenge in prevention work is that, to be effective, it requires detailed, high-quality requirements. For example, if 6 months ago someone rushed the requirements and left out a key step or valuable information, you will likely soon discover a slew of bugs in your project.

This kind of noise easily distracts us from the underlying issue that the problem came from faulty requirements. Either way, your project is delayed. And, if you have to hand over issues with requirements to another department, your work on a feature may suddenly grind to a halt.

At this point, the best you can do is note why the problem happened and resolve to do better next time. Prevention work is not always timely.

Appropriate applications for prevention work…

Prevention work generally means lessons learned for the next iteration. We might learn how to prepare requirements more accurately during implementation. Or we might learn to pay closer attention to our code after we've already deployed.

In prevention, we learn from problems we encounter so that we can prevent those problems in the future. By paying more attention to difficult steps or stages, we learn what to avoid next time. Effective prevention work creates a system that can produce successful future projects, and, it follows, a company that can consistently launch successful products.

Shortcomings of Defect Prevention: Rare Events

Let’s return to the parenting analogy. Defect prevention, when applied to parenting, shifts our target goal away from our current child so that our end goal becomes setting up a system to be better parents to our future children.

I’ve chosen the parenting analogy because it clearly highlights a shortcoming of the defect prevention method. In real life, humans are highly invested in carrying out tasks that will ensure the success of already existing offspring. Unlike in prevention work, most parents don’t (intentionally) use their first child as a test case, to learn how to be successful with their future offspring.

What’s more, many families may choose only to have a single child or they may space their children 5 or 10 years apart. Defect prevention is a waste of resources if you lack a future (and immediate) application.

Prevention work is not appropriate for rare events.

Lots of small and medium-sized companies are not software companies, but companies that also do software. Software for them is a necessity, but not a business. For these companies, a software project is likely a rare event.

In prevention the entire feedback mechanism is reactive. If you want to use prevention, you will make your next project better with the lessons you learn, not your current project.

As we know, many companies support only one or two software products, or they may take on a software development project only once every few years. Defect prevention that demonstrates how the requirements could have been better 6 months ago may give comfort in the form of an explanation, but it will not solve existing and relevant problems.

Please, have a seat on the couch: explanations vs. solutions

Prevention methods may help you change your perception by providing explanations, but they won't give you solutions to an active problem. Let's say the parent of a college student visits a psychologist to discuss problems they encountered raising their now young adult. The discussion and insight may help the parent to understand and accept what they did right or wrong. But this explanation and acceptance will not fix a problem; it will simply change the parent's perception of the problem. Of course, perception is significant, but it's not a solution.

Resilience through rapid corrections…

An alternative is to simply pursue resilience through rapid corrections. Think of this like driving. Seated behind the wheel of a moving car you constantly process multiple pieces of information and feedback, adjusting your speed, direction, and so on. Driving requires constant attention and focus.

It’s a given that the closer we pay attention when driving, the better our results. Paying less attention, such as texting while driving, often results in the need for bigger or last-minute corrections. Sometimes the information arrives too late or not at all, and the result is an accident.

Attention + Discipline = Positive Results

This method again applies to raising children. Children and custom software development projects both require close attention. In child rearing there is a saying that “discipline is love.” Pay attention to your children, apply the rules consistently and thoughtfully, and generally speaking, they will grow up without too many bugs!

In correction (versus prevention) you pay constant attention to your custom software development projects. This focus allows you to react to problems and correct in a timely manner. Consistent attention and disciplined application of requirements result in a better end product. Rapid correction builds resilience.

Focus on reactive resiliency…

How we change as we go through production is a function of lessons learned, but also of our ability to adapt: turning an outage into a new test case, for example, or adding automated alerts for failures, so that we have the tools to pay closer attention and fine-tune our responses as needed.
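As a minimal sketch of this idea, the snippet below (Python, purely illustrative; the `notify` callback and the `risky_import` job are hypothetical names, not from any specific library) wraps a production job so that a failure surfaces the moment it happens, which is the raw material for rapid correction:

```python
# Hypothetical sketch: wrap a job so failures trigger an immediate
# notification, supporting correction (react now) over prevention
# (analyze later). In practice `notify` might send an email or post
# to a chat channel; here it just records the message.

def with_failure_alert(job, notify):
    """Run `job`; on any exception, notify immediately and re-raise."""
    def wrapped(*args, **kwargs):
        try:
            return job(*args, **kwargs)
        except Exception as exc:
            notify(f"{job.__name__} failed: {exc}")
            raise  # still propagate so the failure is not swallowed
    return wrapped

alerts = []  # stand-in for an email or chat notification channel

def risky_import():
    # a job that fails mid-run, as production jobs sometimes do
    raise ValueError("malformed input row 42")

guarded = with_failure_alert(risky_import, alerts.append)
try:
    guarded()
except ValueError:
    pass  # the caller still sees the error; the alert already went out

print(alerts[0])  # → risky_import failed: malformed input row 42
```

The point of the sketch is the feedback loop, not the mechanism: the alert arrives while the problem is current, so the team can correct today's run rather than only improving the next project.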

In correction, we maintain the goal of improving for future projects. We will still change systems and processes to fix defects, but we are less interested in learning how and why our problems exist. Instead, we focus on continually improving our near future. In correction, our immediate goal is to make today better than yesterday, and then to make tomorrow's production a little better than today's. The ability to react quickly and appropriately builds resilience.

Deciding on Prevention or Correction

Know your company, know your goals: are you in the business of software development or are you a company that sometimes develops software?

In custom software development we come across different types of companies and goals. Some companies, say a mortgage company or an insurance company, require software to function, but their business is not software. If your team is only involved in the occasional software project, then it is likely more efficient for you to focus on resilience through correction rather than prevention.

From an industry perspective, it would be awesome if we could benefit from group intelligence and expect continuous improvement on every custom software project. Prevention across teams and companies is an ideal goal. Unfortunately, for the moment, lessons learned on unique and rare projects cannot easily be shared. Concrete results and lessons learned are seldom shared across companies by anything other than word of mouth.

For infrequent projects, correction is best …

Until there is a method to consistently record and move beyond oral histories of software projects, the parties involved are most likely better off focusing on correcting problems in current projects rather than preventing problems in future ones.

You can do both, but neither is free, and prevention is most effectively applied to future projects. Decision makers must therefore be careful to prioritize the solutions best suited to their particular situation, project, and company.

If you are interested to know more about how we use correction at Possum Labs please feel free to contact me or start a discussion below.

Making the Abstract Tangible

Ideas to create excitement and add a tangible, physical, nature to software development

Humans do best when dealing with tangible, concrete ideas.

We like things that we can hear or see, feel and touch. Entire industries revolve around souvenirs. Across all religions, we see the senses engaged through both simple and complex symbolism.

The use of idols, incense, and liturgy anchors us and provides direction for us to focus our energy, make connections and find meaning. Humans readily associate meaning to the tangible items in daily life.

Software, by its nature, is intangible. Unlike hardware, we can’t pick it up and pass it around. We can’t drop it or toss it out the window when it makes us mad. And we can’t take it apart, piece by piece, to see what is missing when it breaks.

This intangible nature of software development is further compounded by the often abstract language and goals set forth on our projects. All of this combines to make it very easy to lose our way.

It’s easy to get lost when working with abstract concepts

Pragmatic software development focuses on simplifying development practices to make everyone’s life easier. Ideally, we want to deliver projects on time, as efficiently as possible, and with high levels of confidence.

And yet, this human need for the tangible is all too easily overlooked and ignored. Too often we turn a blind eye to the problem, despite the fact that this oversight costs both time and money.

Defect Prevention is Abstract

We know that defect prevention is less expensive and less time consuming than defect detection. Unfortunately, in defect prevention, the goal is to prevent something from existing that hasn’t yet been written. Extracting the tangible from the intangible is a daunting task for even the most experienced business analyst or senior QA engineer.

In defect prevention, the mere avoidance of creating something that doesn’t even exist is infuriatingly abstract.

We should then not be surprised that the effort to rally people around defect prevention often fails. In defect prevention, engineers are challenged to find something on which to anchor and direct their efforts. With experience, some of us can maintain focus by sheer force of will, but even then, we often waste enough time and effort willing ourselves into new habits that the opportunity cost of the challenge remains high.

Anchoring Practices

Unlike many people, I enjoy and care about the abstract deeply. And yet, despite my natural affinity for the abstract, I must still acknowledge that it is difficult for me to stay focused and find my balance when I am presented with certain abstract situations and problems.

When managing the abstract, personal awareness is key. I must use various tricks and tips to remind myself to implement practices so that I can overcome the challenge to stay motivated on elusive projects.

Techniques to Maintain Awareness and Find Focus

Prioritizing and maintaining awareness is indeed exceptionally difficult to do when the object of our focus lacks a physical aspect. There are a few simple solutions and techniques that can be successfully implemented in software development.

A simple list makes the abstract tangible

One of the most effective things I do to maintain my awareness and find my focus is to create to-do lists that break my goals and responsibilities into small, manageable tasks. As I accomplish tasks, I check them off, which is surprisingly satisfying.

A simple list makes the abstract tangible and creates an efficiency where before there was none. Before I started to use checklists, I wasted a ridiculous amount of time reassessing and trying to remember what I had done (or hadn’t done), so that I could determine my next actions.

Tangible Project Management

By now you probably recognize that some of the development practices gaining traction over the last few years are attractive because they grant physical aspects to the abstract.

In Agile development, for instance, there is the board that holds the tasks, and the physical act of moving a task is quite gratifying.

In Kanban, we have cards that must be physically walked through the company. The act of handing a card to the next person effectively demonstrates that something is complete, so that something else can start.

Even in the least “Agile” implementation of Agile, we will most often see the physical artifacts survive, whereas the more abstract elements tend to become warped or be flat-out forgotten.

The Power of Tangibility

Physical, tangible objects, are powerful. Finding ways to invest in and harvest that power will drive your project forward. As discussed above, both Agile and Kanban show that we can impart physicality and improve focus on a project by giving it a physical identity and meaning.

But there are other simple, concrete ways to create excitement and add a tangible, physical, nature to software development. Creating a project logo or mascot may appear as a fun distraction, but in fact, by creating an emotional and physical connection, a mascot can genuinely help people focus more easily on intangible project tasks.

Physical Tokens: Mascots & Other Symbols

A mascot can be kept on a desk. A logo can be posted to a monitor or cork board as a physical reminder of the task at hand and the ultimate project goal.

Similarly, rewarding team members with physical tokens, something small and tangible that can be displayed, makes project goals and achievements more palpable and real.

In an age where money is also increasingly abstract, and engineers tend to be well paid, a small physical token with little cash value can have more power than a financial reward.

Using Symbols to Create Value When Working with the Abstract

Measuring the effect of symbols on our lives is difficult; however, if you take a look around your desk or those of your co-workers, most people, even the minimalists among us, have a symbol or two on their desk: a picture of a kid, a gift from a client, a token from an alma mater, a hobby, or a favorite team.

All of these symbols (and idols) allow us, as humans, to channel our energy more easily and in a valuable direction. Symbols are concrete, rational pieces of motivation to which we can also form an emotional connection. All of this makes symbols a pragmatic and effective tool to rein in the abstract and bring it down to earth.

At one time or another, your team or your organization will have a short or long period where motivation and focus go astray. Creating a concrete symbol or mascot, and an associated system of physical rewards, may help bring your team back down to earth and help them find their focus.

Get your project on people’s minds by making an emotional, rational connection that is physically in front of them as they work.

What kind of symbols have you used or seen used to represent abstract ideas and to get various stakeholders to find their focus and to drive project success?

If you enjoyed this article, please share it!