When I started my consulting practice TeamTrainers™, I wanted to make sure I was only telling my clients what really worked. I’d attended some trainings on team development and read some books in the six years I’d been doing it at my work. Few spoke of the actions I was finding most effective, and some of the suggestions seemed really dubious based on what I had learned of small group psychology. So I fell back on my early career as a journalist and “hit the stacks.” Every week for six months, I spent a day in the libraries at the Univ. of New Mexico. I read books by scientists for scientists. Starting with those leads, I went through peer-reviewed journals reading reports of studies. (“Peer-reviewed” means an article must pass inspection by several anonymous scientists who agree the study was performed according to the quality standards for scientific research.) In those days, I was able to go through a half-dozen articles a week, focusing on scientific studies, and a book every couple of weeks. I think I had around 350 sources when I finished the first draft of my training method, The SuddenTeams™ Program. Mind you, not all of my sources are scientific. I have to create practical applications from the science, and examples from the real world help me teach them.
After moving to Seattle, for three years I published an e-newsletter, hitting the Univ. of Washington libraries for a full day once a month. TeamResearch News morphed into a collection of study summaries arranged by topic on the TeamTrainers™ Web site. Along with other sources picked up over time, I had more than 450 sources when I restarted the business after a move to Raleigh, NC. There I walked to the library at North Carolina State Univ. once a month, though usually only for half a Saturday. (Fortunately, it’s easier to find studies on the Internet these days.) With the typical human penchant for nice, round numbers, I yearned to top the 500 mark, and eventually passed 1,000!
Most people seem impressed when they hear about my research into “The Science of Teams™,” but I have run into skepticism. A meeting of potential referral partners in Seattle fell apart when one person took an anti-science stance. Speaking as a former reporter, I put a chunk of the blame on the media. When they report on studies without putting them into the bigger context, make one study appear to contradict the next by not reporting the different methods, and hype books by people on the fringe of scientific thought as if those theories had been proven, the average reader is understandably confused.
But science learns the same way you do: reading and conferences plus trial and error. Scientists do this in a very controlled way, however. Their trials eliminate other factors that could have caused the result they saw; they try the same test again with slight changes to see if those make a difference; and they invite others to try it. They pore over other scientists’ work to get ideas and avoid others’ mistakes. And when they’re done with their trial, they have to submit the report to an anonymous team of colleagues who critique the article, questioning the scientist’s methods and conclusions (hence the term “peer-reviewed journal”). Then the journal editors take a crack.
Tiny differences in how studies are put together can cause very different findings. Over time, however, a trend will develop until most of the scientists in a particular field of study agree on some basic truths. Sometimes new evidence causes a huge shift in thinking. But more often, especially in the behavioral sciences, consensus develops in a slow, methodical way over many years, and proves able to predict results. They’ll still call it a “theory” though, as in “the theory of gravity.” And there are always “outliers,” exceptions to the rule.
But the media do not report all this. There have been countless “shifts” reported that from a scientific standpoint are relatively minor. Whether you eat a lot or a little salt, or go on a high-fat diet to shed some pounds quickly, is almost irrelevant. The basic truths of nutritional science have held accurate through countless studies over decades. You have a much better chance of being healthier than the average person if over the long term you:
- eat a variety of food, including fresh fruits and vegetables.
- limit your fat intake, especially saturated fats.
- eat no more calories than you expend through exercise and daily activity.
The same is true in teamwork science. Sometimes the latest fad or buzzword flies in the face of science, with no studies supporting it. It’s just an idea somebody has. These eventually disappear, but not before wasting some teams’ time, money, and goodwill. Other popular teambuilding solutions are like diets: they might have a short-term, temporary effect, but as soon as you go off the diet/activity, the bigger, underlying issues are still there—and the pounds or problems return.
Winston Churchill famously said that “democracy is the worst form of government except all those other forms that have been tried.” Scientists make mistakes, have egos, hang onto theories longer than they should, and otherwise show the same foibles as the rest of us. I rejected at least 100 studies for various reasons, including my belief that some were poor science. But scientists follow a process, the scientific method, and subject themselves to checks and balances the rest of us would find highly irritating, for a simple reason: they want the truth.
I’ll take that over some consultant pushing his or her latest Big Idea, or popular but unproven practices, any day. And today, I have more than 1,000 reasons supporting me.
You pull your coffee out of the microwave oven and settle down for a few moments with a book on persuasion from a psychologist. Into your smartphone goes an idea to try on your boss. Turning to your computer, you use the ergonomic mouse that has saved your aching thumb to drill through some e-mails, thankful the allergy pill you took has kicked in.
In every case, science has made your life better. The electronics, pills, Internet, and ergonomics started with basic research whose practical uses were unknown at the time. You’re more likely to get the boss seeing things your way because of time-consuming studies comparing messages with and without a persuasive technique. Scientists doing tedious experiments, conducting carefully designed surveys, and using tightly controlled methods during case studies contribute to nearly every moment of your life in ways you will never know.
Today I hope to get you connecting these efforts to your role with teams. More to the point, I hope to get you thinking about where you get your information about teams, and more importantly, where those sources get their information. All I’m asking you to do is think.
Do you recall learning about the “scientific method” in school? “Scientists use the scientific method to search for cause and effect relationships in nature,” the student site Science Buddies explains. There are many versions of the method, but I’ll go with their simple one:
- “Ask a Question”
- “Do Background Research”
- “Construct a Hypothesis” (a possible answer to the question)
- “Test Your Hypothesis by Doing an Experiment”
- “Analyze Your Data and Draw a Conclusion”
- “Communicate Your Results”
You can do all this, and should when making major changes. The devil is in the details. Teamwork and other scientists do exhaustive reviews of previous research before proposing their hypotheses. They go out of their way to rule out other possible causes of the results, including sheer luck, and recognize possible downsides. Legitimate studies are reviewed by other anonymous experts who might point out mistakes and alternate explanations. Ethical scientists report all the results, even the ones showing their hypotheses were wrong. Contrast that with the companies in one study that refused to release information on any failed projects to the scientists, even though the companies’ names were kept anonymous!
Scientists provide enough details about their methods that others can question what their results really show, or even retry the experiment. Private companies trumpet interesting (and self-serving) “study” reports without that information. A report about talent retention that was splashed across Twitter made a number of claims about “employers” based on a survey of human resources (HR) professionals. But the report doesn’t say how people were chosen. If the company simply sent an invitation to members of an HR organization, that leaves out a large percentage of HR people—and their employers. If the people who responded are different from other HR people, the study probably does not provide an accurate picture of all HR people, much less all employers.
The report mentions that 31% of the respondents came from companies with more than 10,000 employees. According to data from the U.S. Census Bureau, only 0.015% of American companies had that many employees in 2004. The survey is worldwide, but there’s no way the global figure is that much higher. You can’t honestly make any general statement about companies based on this group of people. As always in these cases, I contacted the company’s public relations department four days before posting the original Teams Blog version of this topic for more information. As usually happens, they ignored me. Since they’re hiding the information, I assume they know their methods, and therefore their data, are questionable.
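The back-of-the-envelope math here is worth making explicit. Using only the two figures quoted above (the exact worldwide firm-size distribution is unknown, so this is an illustration, not a precise claim):

```python
# Rough check of the survey's representativeness using the figures
# quoted in the text. These are the only two numbers we have; the
# comparison is illustrative, not a formal statistical test.
share_in_survey = 0.31        # respondents from firms with >10,000 employees
share_in_population = 0.00015 # U.S. firms that large (Census Bureau, 2004)

overrepresentation = share_in_survey / share_in_population
print(f"Firms with >10,000 employees are roughly "
      f"{overrepresentation:,.0f}x overrepresented in the survey")
```

Giant companies appear in this sample on the order of two thousand times more often than they appear among U.S. firms, which is why no honest generalization about “employers” can be drawn from it.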
Another questionable source is stories about business successes. Before making a change based on them, you should ask whether there are:
- Other possible causes for the success than those the story suggests.
- Special circumstances in the company that allowed the technique to work there, or caused another technique they tried not to work.
- Companies built on less flashy but better-proven techniques that would be less risky for you to adopt.
- Companies that succeeded despite using, or even because of using, a technique that caused a failure elsewhere.
- Other reasons the people quoted in the story focused on the techniques mentioned (public relations, taking undue credit, etc.).
A common mistake bloggers make is to confuse “correlation” with “causation.” The fact that one trait changes with another doesn’t mean either caused the other. Many business gurus claim trust is the foundation of good team performance, and it is true that high-performing teams usually report high levels of trust. But my research and teambuilding experiences tell me trust grows with performance. Basic psychology research suggests it is a waste of time to try to make people trust each other, because real trust requires time and repeated positive experiences. Instead, put in a system that makes it obvious when people are and aren’t doing what they said they would do, so members don’t have to trust right away.
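The trust example above can be demonstrated with a toy simulation. In this sketch a third factor (call it “performance”) drives both reported trust and team output, so the two correlate strongly even though neither causes the other. All the numbers are made up for illustration:

```python
# Correlation without causation: "performance" is a hidden common
# cause of both trust and output, so the two correlate even though
# neither directly causes the other. All data here are simulated.
import random

random.seed(42)  # make the sketch reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# The hidden common cause.
performance = [random.gauss(50, 10) for _ in range(500)]
# Both measures rise with performance, plus independent noise.
trust = [p + random.gauss(0, 5) for p in performance]
output = [p + random.gauss(0, 5) for p in performance]

print(f"correlation(trust, output) = {pearson(trust, output):.2f}")
```

The printed correlation is strong, yet by construction trust never influenced output at all: a blogger looking only at the correlation would draw exactly the wrong conclusion.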
These articles on trust are a sample of the huge amount of guesswork in the blogosphere put forth as established fact. Along with the facts that Dan Rockwell’s instincts are usually right on target and he writes with heart, I like his Leadership Freak blog because he presents his opinions as such and welcomes debate about them. Contrast that with other bloggers who report on “The Science of x” without reading a single scientific textbook, much less a peer-reviewed study. Data analysis from your company is not science, nor is a review of popular business sources.
Scientists are human. They make mistakes, their egos lead them astray, they sometimes hide failures, and they can fall victim to greed. The difference between them and most business gurus is that they work in a formal system where scrutiny of their conclusions and methods greatly increases the odds of their frailties getting noticed, their mistakes being admitted, and their results matching the way the business world really works.
Researchers have been looking into manufacturing, supply chain, and project management for years, often making recommendations to managers. Maybe you have followed their advice on the job. But there’s a problem, according to professors Francesca Gino and Gary Pisano: “Most formal analytical models in operations management (OM) assume that the agents who participate in operating systems or processes—as decision makers, problem solvers, implementers, workers, or customers—are either fully rational or can be induced to behave rationally.” In other words, scientific theories about how to run a plant or project assume that people:
- identify and react only to relevant information;
- have the same preferences in every situation;
- consider all options before making a decision; and
- make those decisions without emotion.
Oh, yeah, sounds like every workplace I’ve been in.
Gino, of the Univ. of North Carolina-Chapel Hill at the time but later at Harvard Univ., and Pisano of Harvard, point out in a 2008 paper that the study of how people affect a system’s performance has permeated other fields of research, from economics and accounting to law. But “a ‘behavioral perspective’ has largely been absent in the field of operations,” they write. Because of this, current OM models can’t really explain the difference between one firm’s performance and another’s. In turn, managers don’t find OM theories very useful. Especially relevant is the authors’ point that “behavioral operations” researchers need to look into how individual cognition and “social norms and systems affect operations.” Based on previous research (cited below), Gino and Pisano say this will lead to very different predictions about what will fix specific issues.
In an interview I asked Gino, an assistant professor of organizational behavior, why OM scientists resist research on the effect of human behavior. She said there is “skepticism from some people that maybe it is not dramatic or very significant…” She noted that her co-author’s interest was sparked by going into organizations and seeing what worked, which suggests that other academics have not done that. However, she said, “The researchers, the more they hear, the more they understand that it is important to study the psychology of people.”
Human-behavior effects have explained results that defied scientific theories in other fields. In the OM world, this could explain “the tendency of projects to run late and over budget or the tendency of organizations to over commit their R&D resources,” the article says. Researchers have identified many biases and questionable rules of thumb that affect our decision-making. Gino and Pisano provide a somewhat depressing list of 19 shortcuts humans take in their decision-making that can mess up the results:
- “Information avoidance—People’s tendency to avoid information that might cause mental discomfort…”
- “Confirmation bias—People’s tendency to seek information consistent with their own views or hypotheses”
- “Availability heuristic—People’s tendency to judge an event as likely or frequent depending on the ease of recalling or imagining it”
- “Salient information—People’s tendency to weigh more vivid information (e.g., based on prior experience/incidents) than abstract information (e.g., statistical base rates)”
- “Illusory correlation—People’s tendency to believe that two variables covary when they do not”
- “Procrastination—People’s tendency to defer actions or tasks to a later time”
- “Anchoring and adjustment heuristic—People’s tendency to rely too heavily, or ‘anchor,’ on one trait or piece of information when making decisions”
- “Representativeness heuristic—People’s tendency to assume commonality between objects of similar appearance”
- “Law of small numbers—People’s tendency to consider small samples as representative of the (entire) populations from which they are drawn”
- “Sunk costs fallacy—People’s tendency to pay attention to information about costs that have already been incurred and that cannot be recovered… when making current decisions”
- “Planning fallacy—People’s tendency to underestimate task-completion times”
- “Inconsistency—People’s inability to use a consistent judgment strategy across a repetitive set of cases or events”
- “Conservatism—People’s failure to update their opinions or beliefs when they receive new information…”
- “Overconfidence—People’s tendency to be more confident in their own behaviors, opinions, attributes, and physical characteristics than they ought to be”
- “Wishful thinking—People’s tendency to assume that because one wishes something to be true or false then it is actually true or false”
- “Illusion of control—People’s tendency to believe they can control, or at least influence, outcomes that they demonstrably have no influence over”
- “Fundamental attribution error—People’s tendency to overemphasize dispositional or personality-based explanations for behaviors observed in others while underemphasizing situational explanations”
- “Hindsight bias—People’s tendency to think of events that have occurred as more predictable than they in fact were before they took place”
- “Misperception of feedback—People’s tendency to misperceive dynamic environments that include multiple interacting feedback loops, time delays, and nonlinearities”
Take the “anchoring and adjustment” bias. People often start their thinking from a particular point, sometimes without a good reason for it, and then stay too close to that point. In one study, software developers given a higher anchor to start with ended up with higher final estimates than when they were given lower or no “anchors,” the article says. Sales forecasts are often off because they start with the previous year’s sales instead of an unbiased analysis of this year’s market.
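A simple way to picture insufficient adjustment is as an estimator who starts at the anchor and closes only part of the gap to the unbiased value. This toy model (the adjustment fraction and the person-day figures are assumptions for illustration, not data from the study) shows how different anchors pull the final estimate in different directions:

```python
# Toy model of anchoring and insufficient adjustment: the estimator
# starts at the anchor and moves only a fraction of the way toward
# the unbiased estimate. Values are illustrative, not study data.
def anchored_estimate(anchor, true_value, adjustment=0.6):
    """Close only `adjustment` of the gap from anchor to true value."""
    return anchor + adjustment * (true_value - anchor)

true_effort = 100  # hypothetical unbiased estimate, in person-days
print(anchored_estimate(anchor=200, true_value=true_effort))  # high anchor pulls the estimate up
print(anchored_estimate(anchor=50,  true_value=true_effort))  # low anchor pulls it down
```

With the same underlying project, the high anchor leaves the estimate well above the unbiased figure and the low anchor leaves it well below, which is the pattern the software-estimation study found.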
Of course, behavioral operations researchers and managers can’t erase human bias. However, Gino and Pisano write, “operating systems can be designed in such a way that systematic errors are eliminated, or at least their negative consequences reduced.”
I asked Gino what advice she would give, for instance, a chief operations officer whose IT planner tends to anchor too closely to industry averages. “First, you need to be aware of the bias, which is a very simple lesson, but it is hard to recognize,” she said. Have someone act as a devil’s advocate, she suggested, asking the planner to bring alternatives to the table and posing questions like, “How did you come up with this number?” I would add, based on something else she said, that you cannot push the person for a certain number and then be surprised when it turns out wrong. In the IT scenario, don’t anchor yourself to industry averages if the planner offers good reasons not to.
- Bendoly, E., K. Donohue, and K. Schultz (2006), “Behavior in Operations Management: Assessing Recent Findings and Revisiting Old Assumptions,” Journal of Operations Management 24(6):737.
- Boudreau, J., W. Hopp, J. McClain, and L.J. Thomas (2003), “On the Interface Between Operations and Human Resources Management,” Manufacturing & Service Operations Management 5(3):179.
- Gino, F., and G. Pisano (2008), “Toward a Theory of Behavioral Operations,” Manufacturing & Service Operations Management 10(4):676.