Business Irrationality: Why Smart Managers Don't Think Straight
Omar Mahmoud
13 June 2001


Summary

Most new product launches fail. With the benefit of hindsight, failure is attributed to a fault in the proposition (weak copy, parity product, high price, etc.) or the marketing plan (not enough media spending, low distribution, lack of promotions, etc.). This interpretation is correct, but it does not explain why failures continue despite the growing knowledge about factors driving success and failure, and despite rigorous product launch processes with clear success criteria.

We argue that new products fail because of our irrationality and our inability, as individuals and groups, to avoid many of the common thinking errors that affect our decision making process. This paper reviews a sample of about 10 such thinking errors.

These errors can be grouped under 3 broad headings: Cognitive errors (treating information irrationally either by distorting it, ignoring it, putting different weights on different pieces of data, misunderstanding probabilities and other statistical concepts), Emotional errors (feeling that we are special and somehow protected from failure, that we and our initiatives are above average), and Group errors (thinking that if we agree, we can't all be wrong, that the decision responsibility is not really ours, or that we are only obeying orders).

Organizational limitations such as defective reward systems, global or standardization considerations, and conflicting functional priorities often interact with the above thinking errors, creating a negative multiplier effect.

This paper concludes by recommending actions to address these errors including rational thinking training, creating a culture that encourages dissenting opinions, allowing objective, anonymous, and external assessments of initiatives, and making changes to the decision making process and reward system.

Omar Mahmoud


Business Irrationality: Why Smart Managers Don't Think Straight

"Everyone complains about the badness of his memory, nobody about his judgment." - La Rochefoucauld

"Every year, thousands of new products and brands appear on store shelves. In 1995, 22,572 new consumer goods items were introduced in the U.S., up from 10,588 in 1988; a twofold increase in 7 years. A typical US supermarket now carries about 30,000 items compared to 10,000 items in 1980. The same acceleration of product introduction is taking place in other industrialized countries, at varying paces." - Wilkie

The problem is that most of the items introduced fail in the market and many even end up being discontinued, costing manufacturers the millions of dollars they spent on making, distributing and marketing their products. Most estimates of new introduction success rates revolve around the 30% figure, while others are as low as 10%. [Clancy and Shulman] Business literature that addresses this issue tends to focus on the product, marketing and organizational aspects of the business, citing examples of what are considered the reasons for failure and giving managers advice for success. But most of the advice is not of much help to managers. Managers of large and mid-sized corporations are aware of, and operate by, sound management practices. They study their markets, they listen to consumers, they test their products before launching them and they support the launch with strong marketing plans and heavy spending. Many companies even have new product launch systems in place. But new products continue to fail, most of the time. Why? Common reasons usually revolve around issues related to the lack of distinctiveness of the advertising, modest product performance or sub-optimal marketing. While we agree with most of these assessments, they nevertheless require deeper explanations.

We think that a critical and common reason for new product failure lies in managers' inability to think clearly and logically throughout the product launch cycle. It is taken for granted that all managers are smart and think straight. But, as thinking guru E. De Bono explains, there is a difference between intelligence and thinking. Most managers have a good car motor (brain) but they don't always drive it well (think straight). Managers are assumed to make smart, cold decisions based on clear and adequate data. But, in reality, the data are often incomplete, gray and messy. Moreover, cognitive limitations, biases, emotions, interests and wishful thinking often prevent managers from making sound business decisions. They cease to think rationally. Rational thinking is defined as the thinking that leads to the conclusion that is most likely to be correct given the available knowledge at the time. [Sutherland p4]

While managers' irrationality applies to many aspects of running the business, this paper focuses on business irrationality as it applies to new product introductions.

The Availability Error

"Man is a credulous animal and must believe something. In the absence of good grounds for belief, he will be satisfied with bad ones." -- Bertrand Russell

The availability error occurs when managers base their decisions on the most available information and not on all the evidence.

Information may be available because it is widely publicized, recent, dramatic, or emotional. Examples of the availability error in our everyday lives include slowing down after seeing a car collision, fearing to fly after an air crash, or thinking that one is more likely to die in an accident than from a stroke. The availability error often leads us to draw illogical conclusions, confidently. When asked whether there are more words ending with "ing" or more words with "n" as the letter before last, the majority of respondents select the logically impossible answer of "ing," given that words ending in "ing" are a subset of the words with "n" as the before-last letter. The reason for this irrationality is obvious: words with "ing" are more available to the mind than words with "n" before the last letter.

In business, the availability error occurs when decisions are based on the most recent focus group or a manager's spouse's opinion of a new product as they may be more available to the manager's mind at the time of decision making, than tons of research data.  [Sutherland 16-32 and Kahneman 163-178]

The "penny wise" mentality is another example of the availability error in business. A company in a cost saving mode may slash all budgets, including its market research budget, by 20%. This reduction in a company's already tiny research budget of a few thousand dollars may result in taking undue risks on investments of millions of dollars. But this error occurs because the 20% savings are immediate or "available" while the high risk associated with less research is not.

The impact of managers' turnover on a company's performance is another case in point. Judgmentally, a high rate of regretted resignations must affect a company's performance in general, and its ability to launch products successfully in particular. But because data on this matter is difficult to quantify, or is "unavailable," this critical issue is seldom addressed by management.

A variant on the availability error is to think that the available information is all we need to solve the problem at hand. In a school experiment, students were given a number of math problems, including one that gave them the numbers of different animals on a ship and asked them to calculate the age of the captain (e.g. there are 12 cows, 16 sheep and 24 goats on a ship; how old is the captain?). About 70% of the students gave numerical answers and justified their response on the grounds that when you have a math problem you use the available numbers and mathematical signs to come up with the solution. While business cases may be less extreme, they often involve a similar mechanism. More data may be needed to answer a business question, but managers use what they have and think it is adequate. In everyday life, people have difficulty saying "I don't know." In the business world, many managers are reluctant to say "I need more data." In a macho-marketing environment, it is often considered more important to make a decision than to assess whether one has enough information to make such a decision. It is, however, rational behavior to seek more information when data is inadequate for decision making.

A team that launched a new fabric care brand successfully in their market put much more weight on data from their own market than on contradictory data from the other markets they planned to expand to.

A team that tested a new concentrated detergent in an underdeveloped market, qualitatively, had a vivid picture of consumers' excitement about product performance in in-home visits.  This had a stronger impact than negative quantitative data that arrived in reports and forecasts later on.

First Impressions: The Anchoring Effect

Managers must learn to avoid the anchoring effect by relying on more "zero-based" thinking.

The Anchoring Effect is another faulty form of reasoning which occurs when our thinking is influenced by the first piece of information given to us. Two groups of respondents were asked to estimate the percentage of African countries which are members of the UN and to indicate whether it is higher or lower than a given figure. That figure was 10% for one group and 65% for the second group. The resulting estimates were 25% for the first group and 45% for the second group. When people are asked to indicate the frequency of performing a certain task, say brushing their teeth, they report higher numbers if given a scale of 0-40 than if they are given a range of 0-15. In many aspects of life, we may change our initial judgments and perceptions but only within the boundaries set by those initial experiences.

In business, when managers are told that initial results of a test-market or of a product test are positive, they push the project ahead even if subsequent and more comprehensive results contradict earlier data. When sales of a new product are aggressively estimated at around $100 million, "conservative" revisions may bring it down to a lower number, but that number may still be much higher than a more realistic estimate had the original forecast been lower than $100 million. In other words, managers are often aware of the anchoring effect and make adjustments, but still adjust insufficiently. What managers need to do is to evaluate the new product economics using objective criteria and ignoring previous estimates.

The anchoring effect is a common tactic that managers use when requesting funds for their project's or department's budgets. If a manager wants to increase his budget from $1 million to $2 million, he asks for $4 million. The manager approving the budget assesses the budget from a base of $4 million rather than $1 million or $2 million. An approved budget of $2 million then looks like a budget cut compared to the requested $4 million, in line with the company's cost saving mode.

The Anchoring Effect is often present in employment interviews, rendering this form of recruitment procedure highly subjective. An interviewer may be anchored by the interviewee's physique, mannerisms, accent or by remarks of another interviewer.

A new product launch team tested a product at 3 prices -- 45 p, 55 p, and 65 p. They got fairly positive results and a payout proposition at 65 p. However, just before launch, economics dictated an 85 p pricing. There was no time to test the new price. The team went ahead driven by the initial, but now wrong, estimates.

It is too dark inside

"It used to be that people needed products to survive. Now products need people to survive." -- Nicholas Johnson

Clear thinking requires that managers seek to find out the facts, not to confirm their judgment.

In one of Nasrudin's pleasantries, the following conversation takes place between the Middle-Eastern sage, who was seen looking for something under the street light, and his neighbor:

"Neighbor: What are you looking for?

Nasrudin: My house key.

Neighbor: Where did you lose it?

Nasrudin: Under the bed.

Neighbor: So, why aren't you looking for it inside the house?

Nasrudin: Oh! It's too dark inside."

In business, experts and managers agree in theory that products should be made based on consumer needs. In practice, however, companies often make what they are good at and then try to convince consumers that that's what consumers need. Why? Because it is not easy to identify and make what consumers really need. It is easier for companies to make what they are good at, regardless of what consumers want.

Managers then seek the information they need to prove that they are meeting consumer needs. Surveys are designed to confirm what has been decided, not to find out what needs to be done. This reality is often obscured by using marketing doublespeak. Instead of saying that management is ignoring consumer needs, it is said that the company is "capitalizing on its core competencies," and rather than say that a decision is plain wrong, it is said to be a "strategic decision" that is hard for young managers to understand.

A team invented a dishwashing gel to replace the soaps and washing powders already available at home and used for laundry, household cleaning, and dishwashing. It offered new technology and good aesthetics. But the new technology was not addressing a real consumer need and required new usage and purchase habits. It failed.

Bull's Eye Objectives

Managers should make an effort to meet objectives and not create objectives that fit their efforts.

A man walked into a bar and noticed a bull's face drawn on the wall with an arrow in the middle of its eye. Impressed by the precision of the shot, the man asked the bar owner, "How did you get the arrow exactly in the middle of the bull's eye?" The bar owner replied, "I first shot the arrow, then I drew the bull's eye around the arrow."

In business, companies start by setting aggressive objectives, such as having superior products or best-value brands. Then, they translate those objectives into specific measures like obtaining a higher product reading against the market leader in a consumer product test. It is often in this translation from objectives to measures that things go wrong. After failing to obtain a product advantage for a new soft drink in a blind in-home use test, managers may start trying different measures. They may test the product identified (rather than blind), add biasing statements to the test such as "super-refreshing drink," test among morning drinkers only, or run a taste test in a central location instead of an in-home use test. Through trial and error, the product is likely to achieve an advantage among some consumer group in some kind of testing on some test measures. The product may have met its action standards or measures, but has it really met its objective?

A team tested a new personal cleansing product providing better moisturization. It tested it blind against existing products and got parity results. It repeated the testing, aiding the context with different kinds of statements, tested among demographic sub-groups, and finally found a small advantage among moisturizer users. The launch was mediocre.

Non-Stick (Teflon) Facts

"So, we see best what we are supposed to see. We see poorly, or not at all, that data that does not fit into our paradigm." -- Joel Barker

Breakthrough changes come from facts managers don't like and often ignore.

While the availability error produces a bias toward the most available data, non-stick or Teflon facts are those data available to managers which are completely overlooked. Most managers talk about the importance of being first to introduce a new product. The data say otherwise. A published survey of 500 cases shows that only 11% of pioneering brands were market leaders a few years later, and about 47% of first entrants fail. [Golder and Tellis] This fact, however, is completely ignored by managers. In most corporations, a sense of urgency is considered a business virtue, but one seldom hears of a sense of quality. There is always time to do it over but no time to do it right. Interestingly, research has shown that companies that introduce a non-pioneering market leader usually talk about their brand as the market pioneer. The first entrant is usually discontinued and so its voice is not heard anymore.

Companies that pride themselves on the superiority of their products and advertising ignore or deny external evidence suggesting that their products or advertising are below average, and tend to manufacture their own "facts."

A team tested a new initiative in a Concept & Use test. After-use reaction was positive, but concept scores were poor. Management was not discouraged and said "we can fix the concept when we develop the copy." The learning that it is extremely difficult to develop strong advertising out of a weak concept was not internalized. Copy results came back poor, as would have been predicted by the concept scores. After several rounds of testing, the ad agency was changed. A new copy was developed, deviating from the original concept and copy. The launch failed.

Lake Wobegon Effect

"The average person thinks he isn't."

Managers need to see the similarity in different things as much as they see the differences in similar things.

In his fictional community of Lake Wobegon, where 'the women are strong, the men are good looking, and all the children are above average,' Garrison Keillor draws our attention to a peculiar form of irrationality: that of everyone thinking he or she is above average in favorable characteristics or skills. Research suggests that we all live in Lake Wobegon. In a survey among one million high school students, 70% thought they were above average on leadership ability and only 2% thought they were below average. On getting along well with others, all students thought they were above average and 60% thought they were in the top 10%. Guess whom students learn from? A survey among university professors showed that 90% of them thought they had above average teaching capabilities. This Lake Wobegon effect applies to the future as much as it applies to the present. Although only 25% of the total population thought the US as a whole would be financially better off in the following year, 54% thought they themselves would be better off. [Gilovich 77,78] The Lake Wobegon effect also seems to be a global phenomenon. In a survey among British motorists, 95% thought they were above average drivers. [Sutherland 240]

In business, the simple laws of probability suggest that most new product introductions will fail, given the large number of such introductions into predominantly stable markets. This is the same Murphy's Law logic that explains why the other line in a supermarket seems to move faster than our own line, simply because there are more "other" lines. [Matthews]
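To make the supermarket-line intuition concrete, here is a small sketch (in Python, with arbitrary assumptions about service times; not from Matthews or any cited source). When all lines are statistically identical, "my" line finishes first only about one time in k for k lines; the rest of the time, some other line really does move faster.

import random

def my_line_is_fastest(num_lines=5, trials=100_000):
    # All lines are identical: each finishes after a random service time
    # drawn from the same distribution. I always join line 0.
    wins = 0
    for _ in range(trials):
        finish_times = [random.expovariate(1.0) for _ in range(num_lines)]
        if finish_times[0] == min(finish_times):
            wins += 1
    return wins / trials

print(f"My line finishes first about {my_line_is_fastest():.0%} of the time")
# With 5 equally fast lines the answer is roughly 20%: four times out of
# five, one of the "other" lines wins.

The same arithmetic applies to launches: with many comparable entrants chasing a stable market, being the exception is the improbable outcome, whatever we feel about our own initiative.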

On a more personal level, young managers come up with the same ideas and products their predecessors unsuccessfully tested and expect different results. When queried about their rationale for expecting success, they usually refer to changing circumstances, while secretly thinking that they will make the proposition work because they are different. Every manager thinks his company, product category, brand or situation is different. True, we are all unique, just like everyone else.

A respectable business forecasting firm mentioned to me that when one of their clients did not like the volume forecast for their new lemon squared soap, they challenged the forecasting firm asking, "Have you ever done any forecasts for lemon squared soap in Bingoland before?" The forecasting firm explained that they had done many forecasts for similar products but the client insisted, "No, lemon squared soap is different."

We like to think that we are advertising experts. But our suppliers tell us our copy scores below average in validated copy testing. We often pride ourselves that we have the best people in the world. Maybe, maybe not. We need to have the facts and benchmarks to verify such a statement.

Objectivity

"Most men, when they think they are thinking, are merely rearranging their prejudices." -- Knute Rockne

Objectivity means giving equal weight to information that confirms our hypothesis and to information that contradicts it.

In reality, we often assign excessive weight to confirmatory data at the expense of contradictory data. This usually involves two mechanisms: first, we often quote anecdotal cases that support our beliefs and ignore those that don't, and, second, we exercise 'optional stopping' or "satisficing" in the pursuit of data when the early data supports our convictions, but continue the search for more data when early indicators do not support our predictions. [Sutherland p 259 and Janis p 25] If results of a first test are positive, we move ahead with our project. If they are not, we run more tests hoping to obtain better results. Thus, our biases influence the quality and the quantity of information we seek. In everyday life, this is often known as 'Self-Fulfilling Prophecy.' We seek the quantity and quality of information that confirm our pre-conceived ideas: "Seek and ye shall find."

One way of slanting the data to support our point of view is to seek the opinion of experts and consultants who agree with our point of view and disregard those who don't. If we are interested in stressing the importance of building brand loyalty or niche brands we would probably not invite a marketing consultant who does not believe in the viability of niche brands or increasing brand consumption among current brand users. We want to hear what we want to hear.

Darwin '...followed a golden rule, namely that whenever a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones.' [Gilovich p 62]

When looking at equity or image information, we often look for the significance flags (-s-) and draw firm conclusions around those differences. We ignore the lack of significant differences on most equity and image attributes: our brands are often "significantly similar" to other brands. The same is true of trend data: we focus on change and ignore the fact that in many categories things are amazingly stable.

Asymmetric Evaluation

'It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than negatives.' -- Francis Bacon

Managers should always check if there is evidence to the contrary of what they are advocating, and address it.

In a research seminar I gave my audience a popular objectivity test.

You are shown 4 cards that have letters on one side and numbers on the other side, and are asked to determine the minimum number of cards you need to turn over to test this statement:

ALL CARDS WITH VOWELS ON ONE SIDE HAVE ODD NUMBERS ON THE OTHER SIDE

The 4 cards are: A B 3 4

The most common answer, usually from marketing folks, is that you only need to turn the A card to "confirm" that this vowel A has an odd number on the other side. Very few realize that we also need to turn the 4 card to see if it has a vowel on the other side, thus 'dis-confirming' our hypothesis. It is a habit of the mind to seek information that confirms our beliefs but not that which denies it. Incidentally, a few managers, usually from product development, also suggest we should turn the 3 card to see if it has a vowel or a consonant. However, the 3 card is irrelevant since we are not making any statements about consonants. [Wujec p 168, Sutherland 137-138, Vos Savant 84-86, Paulos, 1995, p 73]
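The card puzzle can also be checked mechanically. The sketch below is ours, not from any of the cited sources; it simply enumerates, for each visible face, whether any possible hidden face could violate the rule, and only such cards are worth turning.

VOWELS = set("AEIOU")
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
NUMBERS = list(range(10))

def worth_turning(visible):
    # A card is worth turning only if some possible hidden face would
    # falsify the rule "vowel on one side implies odd number on the other".
    if isinstance(visible, str):                              # we see a letter
        if visible in VOWELS:
            return any(n % 2 == 0 for n in NUMBERS)           # an even number would falsify
        return False                                          # consonant: the rule is silent
    if visible % 2 == 0:                                      # we see an even number
        return any(letter in VOWELS for letter in LETTERS)    # a vowel would falsify
    return False                                              # odd number: cannot falsify

for card in ["A", "B", 3, 4]:
    print(card, "worth turning:", worth_turning(card))
# Only A and 4 can disconfirm the statement; B and 3 tell us nothing.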

Asymmetric evaluation is a special form of the lack of objectivity: observing and citing occasions when an event occurred and a specific outcome resulted, while overlooking all other event/outcome combinations.

A believer in astrology will cite cases of correct predictions as evidence of the validity of astrology but will ignore or even forget false predictions. Lottery organizers give wide publicity to the few lucky winners and say nothing about the vast majority that gain nothing. Similarly, a manager will refer to the one example where his strategy worked and ignore all examples when it didn't.

A rational way to address the asymmetric evaluation bias is to build a two-by-two table to count the incidents when an input was used, or not used, and the incidents when an outcome was obtained or not obtained. 

                        Result Obtained    Result Not Obtained
Event Happened
Event Did Not Happen

In everyday life, we often 'feel' that the telephone only rings when we are in the shower. In reality, we are more likely to recall times when the phone rang while we were showering than times when it rang when we were not showering or did not ring while we were showering.

                        Phone Rang    Phone Did Not Ring
I Was In Shower
I Was Not In Shower

In business, an advertiser may insist on using a celebrity in a TV commercial because he recalls a successful ad that had a celebrity. He is unlikely to recall successful commercials that had no celebrities or unsuccessful commercials with a celebrity.

                                Successful    Not Successful
Commercial With Celebrity
Commercial Without Celebrity
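A small tally makes the point. The counts in the sketch below are purely hypothetical; what matters is the mechanics of filling all four cells and comparing the two conditional success rates rather than recalling only the memorable hits.

# Hypothetical counts of commercials, filled into the two-by-two table.
counts = {
    ("celebrity", "success"): 12,
    ("celebrity", "failure"): 28,
    ("no celebrity", "success"): 30,
    ("no celebrity", "failure"): 70,
}

def success_rate(condition):
    wins = counts[(condition, "success")]
    losses = counts[(condition, "failure")]
    return wins / (wins + losses)

for condition in ("celebrity", "no celebrity"):
    print(f"P(success | {condition}) = {success_rate(condition):.0%}")
# Recalling only the 12 celebrity successes suggests celebrities work;
# comparing the two rates (30% vs. 30% in this made-up table) says otherwise.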

Even professionals are influenced by how options are framed to them. Doctors opt for different routes, operation vs. medication, depending on whether the probabilities of success are framed positively (80% will survive), or negatively (20% will die).

We should also provide future predictions in both positive and negative terms. For example, a sales forecast should indicate the likely revenue in case of in-market success, and also the likely loss in case of not meeting objectives. Put differently, for our internal transactions we should frame our propositions in both positive and negative terms.

False Dilemma

"The absence of alternatives clears the mind marvelously." -- Henry Kissinger

Managers need to seek new directions beyond either/or options.

"Are you for everyone's right to own a gun or do you want to have criminals run the country?" This is how the National Rifle Association frames the issue of gun control. This is a flagrant example of a false dilemma because only two unfavorable alternatives are presented when in reality there is a variety of policy options for gun control, ranging from stricter ownership controls to tougher penalties for illegal ownership.  [Gilbert 130-133]

Companies often set 'Go/No Go' decision stops for product initiatives. For example, a company's action standards for an initiative may stipulate that if the new product or advertising obtains a specific score in testing, it would be introduced. Otherwise, the project is shelved. This puts a manager in a situation where he has to make a tough call: "Should I approve the heavy investment or kill the project?" Moreover, when people's careers are at stake, every effort is made to lead top management to a 'Go' decision, even if the facts suggest otherwise. This situation often represents a false dilemma because in reality there are more than two choices. There is usually a continuum of decision making with "Go" and "No Go" only being the extreme points. Managers may opt to step back and refine the product idea or formula, do more consumer work, pilot the plan in a test-market, combine the initiative with another one, launch it on a smaller scale, etc.

Managers need to think black, white and gray.

Pennsylvania Dutch

"It ain't so much what people don't know that hurts as what they know that ain't so." -- Artemus Ward

Managers should know the origins of their beliefs, rules and practices so they can use them appropriately.

Large numbers of people in the US believe, erroneously, that the founders of Pennsylvania came from Holland, as they hear references to the Pennsylvania Dutch. The truth is that it was the Germans who settled in Pennsylvania in great numbers. Due to the difficulty many have in pronouncing "Deutsch" (German), the term was distorted to "Dutch" (from Holland). This is an extreme form of a communication phenomenon known as "Sharpening & Leveling," in which the speaker emphasizes what he considers to be the key points of the message and ignores the details or caveats. As kids, we encountered this strange phenomenon when we played the telephone game, in which we sat in a circle and each one whispered a sentence in his neighbor's ear. By the time the circle was completed, the message was unrecognizable to its author.

In business, top management, the ultimate decision maker, is often exposed to heavily "sharpened and leveled" recommendations. Results of a small scale test are often 'sharpened' by omitting to mention the base size or ignoring the fact that the differences were not statistically significant. Worse, non-significant results of several small scale tests are lumped together to produce an artificially significant win. Product test wins among current brand users in a directed interest test are quoted out of context, giving the impression that the test was run among a representative sample in a blind context. A manager in charge of a project to re-stage an existing brand with new advertising reports that the plan will increase sales by 30% and neglects to say that this requires increased spending, and that at spending equal to current levels the sales increase would be less than 10%. Management is often given the good news only; the truth, but neither the whole truth nor nothing but the truth. This sharpening and leveling effect is often illustrated by the popular business anecdote about workers who evaluated a new product and judged it to be a crock of shit; they took it to their supervisors, who considered it a pail of dung; the managers assessed it as a container of excrement, but their directors thought it was a vessel of fertilizer. By the time the news reached top management, the product was considered to be an object that promotes growth.

The product test showed a non-significant advantage for the new formula over its main competitor, among a small group of past 3 months' users of a related product category, in a copy aided blind test. That's what the official summary said. The next piece of communication ignored the "non-significant" part. The next presentation left out the breakout part. What top management heard was that the new formula had a great win against competition in product testing.

The Past Is History

Managers should assess the soundness of their financial decisions ignoring past investments.

A common form of financial irrationality is known as the Sunk Cost error. In its simplest form, a person who buys a theater ticket for an extremely boring show will torture himself by watching it to its very end. Instead of minimizing his losses by making better use of his time, the person thinks he is "getting his money's worth." In business, managers will refuse to abandon a failing initiative because the company has already invested so much in it. This is a thinking error because what counts is the future gains and losses. The past is irrelevant.

Interestingly, the Sunk Cost error has its reverse side. A person who buys a theater ticket for $10 and loses it will refuse to buy a new ticket for $10. Again, whether he had bought a ticket in the past or not is irrelevant. If the show was worth paying $10 for in the first place, it should be worth paying the same amount again, unless, of course, one runs out of money, which is rarely the case. In business, whether money has already been spent on a project or not is irrelevant to an assessment of a project's future cost/benefit analysis. [Sutherland 99-101]
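The arithmetic of the sunk cost argument fits in a few lines. The figures in the sketch below are invented purely for illustration; the point is that money already spent appears on both sides of the comparison and therefore cannot change the decision.

already_spent = 5_000_000     # sunk cost: identical whichever way we decide
remaining_cost = 2_000_000    # what it still takes to launch
expected_revenue = 1_500_000  # best current estimate of future sales

net_future_if_continue = expected_revenue - remaining_cost  # -500,000
net_future_if_stop = 0

decision = "continue" if net_future_if_continue > net_future_if_stop else "stop"
print(f"Decision on future cash flows alone: {decision}")
# Adding already_spent to both options changes nothing: "we have already
# invested 5 million" is not a reason to spend 2 million more.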

The attempt to salvage a historically strong but now declining brand made little progress. The concept was not above average, the product failed to score higher than competition, and the price was higher. The decision to launch was difficult given the test results. But the decision to stop the project was equally difficult given the past spending. Eventually, the "doing something is better than doing nothing" logic prevailed, and the product was launched unsuccessfully.

Hypotheses and Conclusions

"Our brains are designed to be brilliantly uncreative. They are designed to form patterns on every possible occasion on the future. '' -- Edward De Bono

Managers should not confuse hypotheses with conclusions.

As thinking expert Edward De Bono suggests, our brain is not a thinking organ. It is a pattern-forming organ, from an evolutionary perspective -- man sees beast, man runs away (or beast eats man). There is no need to break such a pattern from a survival point of view. Even in our current day-to-day life, routine action is more common and necessary than conscious thinking. We find a route to get to work and follow it for years. The tendency to look for order and to spot patterns is a natural and often a healthy one. However, when data are ambiguous, we should suspend judgment and treat our newly formed patterns as hypotheses and not as conclusions or facts. Such a distinction is often blurred as we seek to use whatever information we have at hand to make decisions. Tolerating ambiguity is often a key step towards breakthrough changes. Unfortunately, tolerating ambiguity is not a business virtue, and what starts as a hypothesis becomes, if unchallenged, a conclusion.

'The Initiative Has No Clothes' or Business Group Think

"Where all men think alike, no one thinks very much.' -- Walter Lippman

Managers should encourage individual thinking and voicing of unpopular opinions.

Disastrous decisions are often made by teams concerned with maintaining consensus and taking action to such an extent that individuals do not voice their concerns. As each person silences his reservations, the group as a whole adopts a false feeling of the correctness of its decisions. Group think is characterized by an illusion of invulnerability, discounting early warnings, extreme optimism, putting pressure on those who express opposing opinions, suppressing one's own doubts to conform with the group, and unquestioned belief in the group's morality. [Janis pp 130. 131] The main danger of group think is that, by diffusing responsibility among group members, it allows teams to make irrational and hasty decisions which they would not make as individuals. People assume that with a group of smart managers, good decisions follow automatically. After all "we can't all be wrong." This, in turn, leads to a de-individuating effect on team members that inhibits them from expressing their opinions or trusting their judgment. As the cynical creator of the Dilbert character put it, "Remember, you can't be wrong unless you take a position. Don't fall into that trap." [Adams, 36] A serious consequence of this attitude is known as the "Bystander Effect," whereby individuals refrain from taking an action in the presence of others which they would take if they were alone. In its extreme form, it was found that individuals are more likely to come to the aid of victims of accidents or crimes when they are the only witnesses than when others are watching. The same phenomenon seems to be true of business accidents and crimes. [Sutherland 59, 60]

Research shows that people give more wrong answers when exposed to previous wrong answers by others than when they answer individually. This tendency is aggravated when a group is promised a monetary reward linked to the number of group members who answer correctly. In business, lower ranking managers are influenced by comments made by their superiors, and often suppress or change their opinions.

Piccadilly Explanations

"I can't think now, I am working." -- Garfield cartoon

Managers should resist superficial bases for decisions, especially under time pressure.

Many companies attempt to study their past successes and failures to draw lessons for future application. Results of these analyses are often at best superficial, and, at worst, misleading, though for different reasons. For successes, it is usually the manager in charge who conducts the analysis. The purpose is usually to sell to top management the person's marketing or product development genius with the hope of obtaining a promotion and salary increase. The analysis usually provides good material for an annual meeting presentation, but it hardly ever provides adequate perspective for reapplication. No wonder very few companies manage to re-apply their in-market successes. Extraneous factors for success such as government regulations as well as non-dramatic marketing variables like distribution or pricing are often ignored and emphasis is put on the person's or team's marketing genius. In cases of failure, if any post-mortem analysis is done at all, the person who conducts the analysis is not the one who was in charge of the initiative, simply because that person is usually not around anymore. The review is again superficial and focuses on the previous manager's stupidity, wrapped in business jargon. The new manager is more interested in selling his new plans to management than in providing an objective assessment of the failing project. He is unlikely to conclude that the project was bound to fail because it was not based on a real market need or due to internal company politics.

Importantly, the root causes of success or failure are never explored. Detailed descriptions of what happened are given instead of explanations. This is known as "the Piccadilly explanation," after the famous road and circus in London. The "explanation" given for giving that road such an un-English name was that a tailor who lived on that road used to make collars called piccadills. No explanation was given as to why the collars were called piccadills in the first place. [Jones p 110] Similarly, internal company analyses conclude that the key to success was offering a product that consumers truly wanted and supporting it with persuasive advertising. But don't all marketing recommendations say that consumers truly want the product and that testing shows that the advertising is persuasive? What's different this time? For failures, it is a similarly simple explanation, such as "We did not have deep understanding of the consumer," or "The advertising did not give consumers a good reason to buy the product." Why not?

There are Lies, Damned Lies, and Market Research

'Market research can be conducted and interpreted to prove any desired conclusion.'  --The Law of Predicted Results

Managers should be trained on the basic principles of market research before they use its findings to make business decisions.

Market research is a critical element in the qualification of new introductions to the market place. Million dollar decisions are based on consumer research findings. Unfortunately, many managers lack a deep understanding of the basic concepts underlying consumer research methods and interpretation. But, very few have the audacity of David Stockman, Reagan's Budget Director, who admitted in 1981 that "None of us really understands what's going on with all these numbers." [Dewdney 95]

Below are a few observations on some of the most common errors in using market research findings.

Representativeness: Managers are often given research data showing that "consumers" gave the product high ratings, but are not always told that the test was run among a narrow group that represents only 20% of the population and is difficult to reach in reality (e.g. users of a certain product sub-segment who think that a particular product attribute is important).

Validation: An advertising agency may play a piece of advertising and present data that shows that 80% of consumers said they would buy the advertised product, without mentioning that no correlation has ever been found between results of such a question and in-market results. That is, the technique and question asked are not validated. The research world is over-flowing with consumer research techniques that measure the effects of advertising, packaging, pricing and promotions on brand sales. But very few of these techniques are predictive of market results.

Statistical Significance: Consumer research data is based on samples of the total population, ranging between 100 and 1,000 people in most cases. Thus, unlike results based on the total population, consumer research numbers carry with them a margin of error. This is analogous to the difference between results of opinion polls and those of actual elections. Statistical significance testing is conducted to indicate a level of confidence in the data. Results are declared significant if the probability that they occurred by chance is extremely low (e.g. 5%-10%). Unfortunately, many managers quote research numbers without indicating whether the higher rating for product A versus product B is statistically significant or not. Moreover, significance, because of its literal meaning, is often confused with importance. Significance, statistically speaking, only means that the difference between two numbers is likely to be real, but not necessarily important. The total population may indeed prefer a red pack to a green one, but this does not mean that more of them will buy the product in the red pack.
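As a rough illustration of what a significance check involves, the sketch below runs an approximate two-proportion z-test on invented survey numbers; a 6-point preference gap on samples of 250 turns out to be well within sampling error.

import math

def two_proportion_z(x_a, n_a, x_b, n_b):
    # Approximate z statistic for the difference between two sample
    # proportions (e.g. % preferring product A vs. product B).
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical data: 130 of 250 respondents prefer A, 115 of 250 prefer B.
p_a, p_b, z = two_proportion_z(130, 250, 115, 250)
print(f"A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}")
# |z| is about 1.34, below the 1.96 cut-off for 95% confidence, so the
# apparent "win" for product A could easily be sampling noise.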

Benchmarking: In consumer research, all numbers are relative. To say that a product received an average rating of 65 or that 40% said they would buy a product is meaningless. Are those numbers high or low? Compared to what? Good research design and interpretation require reliable benchmarks. Benchmarks could be another product in the same study or historical norms within a category or country. Failure to define a benchmark, especially for a new category, is one reason for misreading consumer reaction. A company that develops a low-caffeine coffee may not know whether to test it against regular coffee or decaffeinated coffee. What would you test Post-It pads, a 2-in-1 shampoo and conditioner, or a notebook computer against? Another common misleading benchmark is to compare the same product among different consumer groups and conclude that it is more accepted by one group than by the others. For example, results may show higher ratings for a new toothpaste among older consumers versus younger consumers, leading to the conclusion that the old like it more than the young. In reality, however, it could be the case that older consumers tend to rate any product higher than younger consumers do. The right benchmark would require testing another product among both old and young consumers.

Ceteris Paribus: Ensuring that "everything else is equal" is a vital requirement of sound research design and interpretation. Measuring single elements of an initiative as single variables is the only way to analyze a proposition's strengths and weaknesses. Testing two products that have different formulae and perfumes does not allow us to understand the differences in acceptance, due to the "halo effect" which results when one variable influences consumer acceptance of other variables. Ceteris Paribus also applies to business analyses. An analysis of product acceptance compared to market share may suggest that better product acceptance leads to higher market share. This may be true, but not necessarily. A third factor may be at play. For example, it is possible that better product acceptance encourages a company to spend more money behind the new initiative and, hence, the new brand achieves higher awareness and distribution which, in turn, lead to a higher market share. A single variable test would compare different products at similar spending levels, or vice-versa.

Correlation: A simple look at advertising spending compared to a brand's dollar sales may show a very strong correlation, suggesting that by increasing its advertising spending a brand can grow its dollar sales. The reality of the matter is that many companies allocate their advertising spending as a percentage of their dollar sales -- the higher a brand's sales are, the more money it gets to support its advertising. In other words, the causation is reversed -- high dollar sales cause higher advertising spending. [Clancy and Schulman p 135]
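The spending-follows-sales mechanism is easy to reproduce. In the sketch below (a toy simulation, not real data), sales are driven entirely by chance, yet because each brand's budget is set at roughly 10% of its sales, spending and sales end up almost perfectly correlated.

import math
import random

random.seed(1)
sales = [random.uniform(10, 200) for _ in range(50)]          # $MM, set by chance alone
ad_spend = [0.10 * s + random.gauss(0, 1.0) for s in sales]   # budget rule: ~10% of sales

def correlation(x, y):
    # Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"corr(ad spend, sales) = {correlation(ad_spend, sales):.2f}")
# Prints a correlation close to 1.0 even though, by construction,
# advertising has no effect whatsoever on sales in this toy world.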

The above are just a few examples of how and why business thinking goes wrong. In reality, different types of thinking errors take place simultaneously and are aggravated by emotions, reward systems, work-load pressure, urgency, massive firings, business politics and inefficient organizational systems and processes.

What can we do?

"The ideal business person is a realist when making a decision but an optimist when implementing it." -- J.E. Russo and P.J.H. Schoemaker

It is bad news that we are not the rational beings, or managers, we like to think we are. The good news is that there is something we can do about it. Here are a few suggestions for managers:

  • Provide formal training in critical thinking, covering the areas of logic, fallacies of thought, data analysis, statistics and probabilities.

  • Create a Devil's Advocate rotating position for new product evaluation, assigned with the task of identifying all weaknesses in a new initiative without being responsible for addressing them. The assigned person should be changed regularly to avoid labeling him and eventually dismissing his input.

  • Run formal "post-mortems" on failed initiatives, by outside consultants or a special group within the company to maximize the company's ability to learn from its mistakes without pointing fingers. Minimize "Hindsight simplifications" and "In the past we were stupid, now we are smart" conclusions by studying launch recommendations carefully.

  • Run objective assessments of success models by outside consultants or managers not involved in the project to ensure objectivity.

  • Run double-blind tests to guarantee managers' objectivity in interpreting research results.

  • Allow managers to provide anonymous written assessments on new initiatives before launching them.

  • Create an environment that encourages rapid reporting of bad news as much as good news.

  • Improve our decision making process, not just our new product launch process.

  • Ensure our rewards and promotions encourage in-market success, not just initiative launches, and long-term commitment to the brand.

  • Keep this topic of initiative success a priority, seek feedback, and share learning regularly.

  • Develop new day-to-day work habits: In meetings, let the lowest ranking managers speak first, then the next lowest, and so on. Ask questions to disconfirm your hypothesis. Frame proposals positively and negatively. Use intuition for hypothesis and framing, not for decision making.

REFERENCES

Books:

Adams, Scott. The Dilbert Principle. HarperBusiness, USA, 1996.

Barker, Joel Arthur. Paradigms- The Business of Discovering the Future. Harper Business, USA, 1993.

Boyer, Pascal. Ceteris Paribus (All Else Being Equal), in Brockman, John and Matson, Katinka. How Things Are- A Science Tool Kit for the Mind. William Morrow and Company Inc. New York, 1995.

Cialdini, Robert B. Influence- The Psychology of Persuasion. Quill, William Morrow, New York, 1993.

Clancy, Kevin J. & Shulman, Robert S. Marketing Myths That Are Killing Business. McGraw Hill, Inc. USA, 1995.

Cooper, Robert G. Winning at New Products. Addison-Wesley Publishing Company, USA, 1993.

De Bono, Edward. The Mechanism of Mind. Penguin Books, Great Britain, 1986.

De Bono, Edward. The Use of Lateral Thinking. Penguin Books, Great Britain, 1988.

Dewdney, A.K. 200% of Nothing. John Wiley & Sons, Inc, USA, 1993.

Gilbert, Michael A. How To Win An Argument. John Wiley & Sons, Inc, USA, 1996.

Gilovich, Thomas. How We Know What Isn't So. The Free Press, NY, 1993.

Gray, William D. Thinking Critically About New Age Ideas. Wadsworth Publishing Company, Belmont, California, 1991.

Janis, Irving L. and Mann, Leon. Decision Making- A Psychological Analysis of Conflict, Choice, and Commitment. The Free Press, New York, 1979.

Jones, Steve. Why Are Some People Black? In Brockman, John and Matson, Katinka. How Things Are- A Science tool-Kit for the Mind. William Morrow and Company Inc, NY, 1995.

Kahneman, Daniel, Slovic, Paul, and Tversky, Amos. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press, USA, 1993.

Lin, Lynn Ying-Shiang. BASES- New Product Sales Forecasting Model. National Chung-Hsing University- Research Institute of Agricultural Economics. Taichung, Taiwan, (no year of publication).

Macrone, Michael. Eureka! What Archimedes Really Meant. Cader Books, HarperCollins Publishers, NY, 1994.

Paulos, John Allen. Innumeracy- Mathematical Illiteracy And Its Consequences. Viking, Great Britain, 1989.

Paulos, John Allen. A Mathematician Reads The Newspaper. Basic Books, HarperCollins, NY, 1995.

Russo, J. Edward and Schoemaker, Paul J.H. Decision Traps. A Fireside Book, New York, 1990.

Sutherland, Stuart. Irrationality- The Enemy Within. Constable, London, 1992.

Vos Savant, Marilyn. The Power of Logical Thinking- Easy Lessons in the Art of Reasoning…and Hard Facts About Its Absence in Our Lives. St. Martin's Press, New York, 1996.

Vos Savant, Marilyn and Fleischer, Leonore. Brain Building In Just 12 Weeks. Bantam Books, USA, 1991.

Wujec, Tom. The Complete Mental Fitness Book. Aurum Press, Great Britain, 1989.

Articles:

Golder, Peter N and Tellis, Gerard J. Pioneer Advantage: Marketing Logic or Marketing Legend? Journal of Marketing Research, Vol XXX (May 1993), 158-70.

Matthews, A.J. The Science of Murphy's Law. Scientific American, April, 1997.

Wilkie, Joseph and LeComte, Muriel. The Effect Of Brand Proliferation On Consumer Purchasing Of Fast-Moving Consumer goods (FMCG): How The Changing Rules Of The Game Affect STM Models. Unpublished Paper, 1997.

Endnotes

1. Wilkie, Joseph and LeComte, Muriel. The Effect Of Brand Proliferation On Consumer Purchasing Of Fast-Moving Consumer goods (FMCG): How The Changing Rules Of The Game Affect STM Models. Unpublished Paper, 1997.

2. Clancy, Kevin J. & Shulman, Robert S. Marketing Myths That Are Killing Business. McGraw Hill, Inc. USA, 1995, pp. 81-83.

3. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, p. 4.

4. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, pp. 16-31 and

Kahneman, Daniel., Slovic, Paul., and Tversky, Amos. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press, USA, 1993, pp. 163-178.

5. Golder, Peter N and Tellis, Gerard J. Pioneer Advantage: Marketing Logic or Marketing Legend? Journal of Marketing Research, Vol XXX (May 1993), pp. 158-70.

6. Gilovich, Thomas. How We Know What Isn't So. The Free Press, NY, 1993, pp. 77,78.

7. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, p. 240.

8. Matthews, A.J. The Science of Murphy's Law. Scientific American, April, 1997.

9. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, p. 259, and,

Janis, Irving L. and Mann, Leon. Decision Making- A Psychological Analysis of Conflict, Choice, and Commitment. The Free Press, New York, 1979, p. 25.

10. Gilovich, Thomas. How We Know What Isn't So. The Free Press, NY, 1993, p. 62.

11. Wujec, Tom. The Complete Mental Fitness Book. Aurum Press, Great Britain, 1989, p. 168,

Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, pp. 137, 138,

Vos Savant, Marilyn and Fleischer, Leonore. Brain Building In Just 12 Weeks. Bantam Books, USA, 1991, pp. 84-86, and,

Paulos, John Allen. A Mathematician Reads The Newspaper. Basic Books, HarperCollins, NY, 1995, p. 73.

12. Gilbert, Michael A. How To Win An Argument. John Wiley & Sons, Inc, USA, 1996, pp. 130, 131.

13. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, pp. 99-101.

14. Janis, Irving L and Mann, Leon. Decision Making- A Psychological Analysis of Conflict, Choice, and Commitment. The Free Press, New York, 1979, pp 130, 131.

15. Adams, Scott. The Dilbert Principle. HarperBusiness, USA, 1996, p. 36.

16. Stuart Sutherland. Irrationality- The Enemy Within. Constable, London ,1992, pp. 59, 60.

17. Jones, Steve. Why Are Some People Black? In Brockman, John and Matson, Katinka. How Things Are- A Science tool-Kit for the Mind. William Morrow and Company Inc, NY, 1995, p. 110.

18. Dewdney, A.K. 200% of Nothing. John Wiley & Sons, Inc, USA, 1993, p. 95.

19. Clancy, Kevin J. & Shulman, Robert S. Marketing Myths That Are Killing Business. McGraw Hill, Inc. USA, 1995, p. 135.