- Busting The Myth of Nonverbal Communication in Business
- A Good Speech Goes to Pot
- The Birth of a Myth: Niceness Does Not Pay?
- The Sorry State of Surveys
- On Surveys, Bias, and Admitting You’re Wrong
- “Brain Porn” Hides Real Leadership Lessons from Neuroscience
- The Useful Limitations of Scientific Thinking
It happened yet again. I was in a seminar, this one on sales, by a respected speaker on the subject whom I won’t name, to protect the guilty. His reputation is well deserved: His information on sales is practical, well presented, and in line with my own grad-school research into persuasion. But then he repeated what I call “The Mehrabian Myth.” He said only 35% of communication is through words. The rest, he said, was through body language and voice tone. To his credit, his number was the same as the most reliable figure in the scientific literature, but his mistake was in not qualifying the word “communication,” as I’ll explain in a moment. Then he made it worse, saying he had just read about two studies that said the figure was “as low as 7%.”
Unfortunately, what his source did not tell him is that those studies are nearly 50 years old… and didn’t really say that. I don’t really blame this gentleman. This idea is repeated so often by consultants and business speakers that everyone assumes it must be true. That’s why I and others call it “The Mehrabian Myth,” after the UCLA psychology professor Albert Mehrabian who did those two studies. But the myth is not his fault: it’s about him, not created by him. Mehrabian has stated that the figure, which actually came from a later estimate, has been misquoted. And the figure was never about all communication. It was actually about the communication of emotion.
What Mehrabian first showed, and other researchers have confirmed, is a common-sense notion. When someone is talking about an emotion-raising topic, if their words do not match their body language, facial expression, and voice tone, the other person is going to believe those nonverbal cues. If I tell you I am doing fine, but you hear tension in my voice and I’m not smiling, you’re going to know I’m not actually fine.
In the everyday exchange of information and routine stories of daily life, words are doing all or almost all of the work. And even in honest emotional conversations, the receiver may draw some conclusions about the degree of emotion you are feeling through body language, but most of the information they receive will still be carried by the speaker’s words.
After I was divorced, someone pointed out to me that my “nonverbals” did not match my verbals when I tried to explain nicely my wife’s actions during our marriage. I was mad, and it showed, even if intellectually I understood her motives. No doubt this mismatch contributed to our problems. In the workplace, the lesson is to be honest with yourself and your team members about what you are feeling. If you’re ticked off, don’t try to hide it. And because your words have more weight than you’ve been led to believe, monitor them to make sure you don’t make things worse.
Otherwise, relax. In most of your communications, people aren’t monitoring your every move, despite what the purveyors of The Mehrabian Myth would have you believe.
Source: Mehrabian, A. (1981), Silent Messages: Implicit Communication of Emotions and Attitudes. Wadsworth, Inc.: Belmont, CA.
It happened again.
I attended a speech by a consultant with an Amazon bestseller, who has consulted with an impressive roster of companies and government agencies. Most of what she had to say was scientifically accurate. Much overlapped with my own preachings in her area of expertise. Then, she reported on a study.
She said Hewlett-Packard, a past client of mine, wanted to find out the impact of digital distractions on productivity. So HP contracted with a researcher at a college in England to conduct a study of 300 people. All were given a standard intelligence test. One-third were told to focus on the test. A second group was allowed to check social media and e-mail. The last group, she said, was also asked to focus on the test—after smoking marijuana. She reported that the first group did the best, of course. The surprise was that the pot smokers did better than the multitaskers. As she said that, titters of laughter spread through the room.
I wanted to believe this. But something smelled, and there wasn’t a haze of pot smoke at the luncheon.
Social science researchers do not typically encourage subjects to break the law, much less publicly admit they conspired to do so. (Medical studies are different, and are tightly controlled.) Even if the researchers wanted to, every respectable university has a human research ethics panel that would have balked, and HP’s public relations department would have had a fit. After the talk I asked the speaker privately if the study was published, so I could get the citation from her. She said she wasn’t sure, that she had found it on the Internet. My heart sank. To her great credit, she took the time to find a couple of links—not the ones she found earlier, though—and send them to me. One was to a BBC report (see “Sources” below). As you can read for yourself, however, it reports on a survey, not a lab study, and does not say pot was involved. Strike one.
The article lists the university the researcher worked for. I went to its site and plugged in his name. Nothing came up. As someone who did this all the time for Teams Blog, I can tell you that is unusual. Even if he left six years earlier, around the time this story came out, if he was a regular faculty member he would have left a trace. Strike two.
Then I conducted another search on his name and a keyword from the study. Almost every researcher has a list of publications on a Web page somewhere. Instead what popped up, along with links repeating versions of the speaker’s report, were a couple of respectable science-writing blogs slamming the coverage of the study. One included a link to the researcher’s personal site, specifically to a rebuttal he felt compelled to write debunking the lies that had grown up around the study. Strike three. This myth is outed.
There is a kernel of truth in it. Dr. Glenn Wilson, an accomplished psychologist, did run a tiny study at the request of HP’s London publicists. Apparently they wanted additional information to support a survey of 1,100 people. Wilson’s project involved eight people, not 300. They were split into two groups, not three. They were employees of the publicity firm, not a representative sample, and the study was not associated with the college, at which Wilson served in a junior role. Finally, it did not involve pot.
“This study was widely misrepresented in the media,” Wilson writes, “with the number of participants for the two aspects of the report being confused and the impression given that it was a published report (the only publication was a press release from Porter-Novelli). Comparisons were made with the effects of marijuana and sleep loss based on previously published studies not conducted by me. The legitimacy of these comparisons is doubtful…”
As politely as I could, I e-mailed the bad news to the speaker along with the relevant links, suggesting she “might want to change” her presentation. You now know why I am not identifying her. She graciously thanked me and said she was motivated to go find her “original research.” A doubly ironic phrase, that. In the scientific world, “original research” refers to an actual study, which even I don’t do, or at least the first study on a particular topic, which this certainly would have been if the story had been true!
In summary, a former executive with a top-school MBA and best-selling book who gets paid big bucks by big companies repeated a myth that the slightest skepticism would have debunked. The advice columnist Ann Landers said it best: “If it sounds too good to be true, it probably is.” When something seems improbable or questionable, question. You can’t just ask about the source’s background or character or popularity. You also have to ask where the person got their information.
As you know, the same thing that makes the Internet so handy increases the problem. The Web makes bad information just as easy to spread as good information. Newspapers can be nearly as guilty, and I say that as a former newspaper editor. Especially in these days of newsroom cutbacks, the reporter asked to cover a new study may have no prior experience with science writing, much less the specific topic. Often they do not know how to judge the quality or significance of a study. And reporters don’t write the headlines, which are all that many readers will read.
At least in this case, unlike other work myths I debunk in my classes, the basic lesson is accurate. There is plenty of research evidence that multitasking harms the quantity and quality of whatever you are trying to do, whether it is getting a task done at work or driving while yakking on a cell phone. You don’t need a hit of weed to get the same results.
- BBC News, “Infomania Worse than Marijuana,” 4/22/2005, http://news.bbc.co.uk/2/hi/uk_news/4471607.stm.
- Wilson, G. (2010), “The ‘Infomania’ Study,” http://www.drglennwilson.com/Infomania_experiment_for_HP.doc.
I caused a change on a major news site in August of 2011. Too bad the change only made the post slightly more accurate. The story behind this news story illustrates one way business myths are born.
Browsing MSNBC.com, my eyes stopped at the headline, “Life isn’t fair: Play nice, get paid less.” The short piece opened with, “Here’s another reason to hate the stereotypical ‘jerk at work’: He or she may also be earning more money than those of you who choose to be nice.” It went on to say a study reported in the Wall Street Journal had found “people who are less agreeable tend to earn more.” There was a gender difference, too, in that “being less agreeable paid off more for men, who earned about 18 percent more… Women who were rude saw a smaller salary bump of around 5 percent.”
I downloaded a draft of the study article to be published in the Journal of Personality and Social Psychology. The basic facts in the news story are correct. However, the terms “jerk” and “rude” are wildly misleading.
The research team led by management professor Timothy Judge of the Univ. of Notre Dame actually conducted four studies. Three analyzed data from research projects that have been following large groups of people over a number of years. The project data included questions on “agreeableness,” which is one of the five factors of personality, as well as income and other personality and work-related measures. The project used in the first study, for example, is interviewing 9,000 people who were aged 12–16 when it started in 1997. Among that group, in 2008, people rated as more agreeable earned less than those rated less agreeable, with the effect stronger for men than for women.
I can’t resist noting that “women earned, on average, $4,787 less than men, even controlling for education, marital status, hours worked per week, and work force continuity.” Men who claim the persistent wage gap between men and women is because women take time out to raise babies are undermined by those last two items.
The second study used the National Survey of Midlife Development out of Harvard, which included a follow-up ten years later. It came up with basically the same results regarding agreeableness, gender, and income. Same for the third, using The Wisconsin Longitudinal Study, which is following people who graduated high school in that state in 1957.
To get a better sense of how this income disparity developed, the scientists came back to the lab. They devised a study in which college students were asked to play the role of human resource managers deciding which employees “should be placed on a fast-track to management.” Each fake candidate “was described, in some way, as conscientious, smart and insightful.” But their apparent agreeableness was manipulated using various words (see below). Sure enough, candidates who seemed less agreeable were more likely to be fast-tracked, and being less agreeable helped men more than women.
Now peruse the terms used to ask people how agreeable they thought they were or describe the fast-track candidates:
- Study 1—”Agreeable” versus “quarrelsome,” cooperative/difficult, flexible/stubborn.
- Study 2—Helpful, friendly, warm, caring, softhearted, outspoken, and sympathetic.
- Study 3—”(1) has a forgiving nature; (2) tends to find fault with others; (3) is sometimes rude to others; (4) is generally trusting; (5) can be cold and aloof; (6) is considerate to almost everyone; and (7) likes to cooperate with others.”
- Study 4—”trust, straightforwardness, modesty and compliance.”
Out of 21 phrases, only one used “rude,” and none used a definite opposite such as “polite.”
One can be disagreeable without being a rude jerk. I will dig in my heels when people challenge me in an area I’ve researched heavily, until they provide verifiable facts from objective sources. But I remain polite. I don’t yell. I don’t question their intelligence or motives or the legitimacy of their births.
Judge and the other authors of the study, Beth Livingston of Cornell Univ. and Charlice Hurst of the Univ. of Western Ontario, state that rudeness is “the least likely” explanation for their findings. They suggest that assertiveness and a willingness to take a strong stand in pay negotiations—the willingness to literally disagree—are probably more the point. “Also, as suggested in Study 3, disagreeable people may value money more highly and, thus, make higher investments in their extrinsic success,” they write. “For instance, a disagreeable individual might choose to move for a promising promotion that will put him at a distance from extended family while an agreeable man might choose to stay put, concerned with balancing the desire for career advancement with the motivation to maintain strong familial ties.” In Study 3, agreeable people reported higher life satisfaction, lower stress, and greater involvement with their community and friends.
The most practical takeaway from these studies may be that disagreeableness pays off so much less for women. The authors suggest this is because women are expected to be nice. Being disagreeable goes against that stereotype, turning off more people when it appears in women than it does in men. “Nice girls might not get rich, but ‘mean’ girls do not do much better,” the study says. (Don’t be offended by the use of “girls”; they are playing off the title of their study, “Do Nice Guys—and Gals—Really Finish Last?”)
I tweeted the reporter of the news article to suggest a correction. “Good point. I tweaked the post,” she replied.
The change was to say, “Women who were rude or in other ways disagreeable…” I thanked her by tweet for her integrity. However, I have to say that while the change is technically more accurate, the story still badly misses the gist of the findings. Those who read it will come away believing something the study did not say—a myth. There are exceptions to everything, but if you think you are going to “rude” your way to the upper class, you are probably in for a rude awakening.
- Linn, A. (2011), “Life isn’t fair: Play nice, get paid less,” Today.com, 8/17/11.
- Judge, T., B. Livingston, and C. Hurst (in press), “Do Nice Guys—and Gals—Really Finish Last? The Joint Effects of Sex and Agreeableness on Income,” Journal of Personality and Social Psychology.
Do you think business news stories based on surveys are useful? Please answer “Yes” or “No.”
“But,” you may object, “what if I think, ‘It depends?’” Ah-HA! What then, indeed?
While doing consulting work, I received a survey from one of my service providers. It asked how I felt about social media and different relevant services. My answer choices were, “Like it” and “Hate it.” That’s all.
I started to simply “abandon” the survey—the scientific term for quitting—and go on to something worth my time. But I want this company to do well. It is a good-sized employer in my area up against a heavyweight competitor based elsewhere. I have met a few people who work there and interviewed the CEO, who is a nice guy.
Their survey had an e-mail address. So first I wrote a paragraph trying to make clear I was not flaming them. Then I said, “I appreciate that you are trying to keep the survey short, but it takes only a little longer to select from among five or more choices on a scientifically valid Likert scale than it does to choose from two options that may not be true. These ‘false choices’ are well proven to raise abandonment rates, which skews your data. Also, when you force someone to treat their 51% Hate the same as another item they 100% Hate, those responses are highly inaccurate—by definition, they can be off by as much as 49% from the person’s true feelings.” The company has not responded.
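The arithmetic behind that complaint can be sketched in a few lines of Python. This is a toy simulation with made-up numbers, not data from any real survey: it assumes each respondent holds a true feeling somewhere on a 0–100 scale and measures how far a forced Like/Hate answer can land from that true feeling.

```python
import random

def binary_error(true_scores, cutoff=50):
    """Collapse 0-100 attitude scores into Hate (0) or Like (100) and
    measure how far each forced choice sits from the true feeling."""
    errors = []
    for score in true_scores:
        forced = 100 if score >= cutoff else 0  # the survey's only two options
        errors.append(abs(score - forced))
    return max(errors), sum(errors) / len(errors)

random.seed(1)
# Hypothetical respondents whose true feelings are spread across the scale.
scores = [random.uniform(0, 100) for _ in range(10_000)]
worst, avg = binary_error(scores)
# A respondent at 51 is recorded as 100, an error of 49 points.
print(f"worst-case error: {worst:.1f}, average error: {avg:.1f}")
```

Under these assumptions the forced choice misstates the average respondent by about 25 points; a five-point scale would cap the worst case near 12 points instead.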
A few weeks back an e-mail request arrived for input on a story to be published by a large professional group. The story was based on a survey from a consulting firm which reported a number of conclusions about “employee” attitudes. I looked up the survey report and was appalled. As a member of the group, I wrote the requester and suggested she run the survey by a university expert before basing a story on it. “The white paper on (the company’s) site indicates they only surveyed ‘business leaders.’ By definition, we cannot draw accurate conclusions about employees without surveying employees. The samples are too small to represent the larger population of business leaders. The paper presents no information on methodology or the demographics of the sample.” She wrote back and said she would get it reviewed.
The latter criticisms have been applicable to almost every report I’ve checked that came out on business blogs or sites. One I wanted to report on in Teams Blog. But then I looked at the “Methodology” section. In a scientific journal, this would explain how the study was designed; how the people being tested (the “subjects”) were chosen; how the survey was developed and tested to ensure, for example, that people answered it the same way each time; and how the results were collected and analyzed. This information allows other scientists to make sure the survey was done correctly and repeat it to see if they get the same results (“replicate the findings”).
But reports from consulting firms offer limited details. Some leave out the methodology altogether. This one I had wanted to cover was nowhere near scientific standards, but better than most. The organization had a database of people interested in a particular profession, sent the survey to all of them who held certain titles, and got back thousands of responses. This method made it plausible that the answers would closely match the answers everyone holding those titles would give if you could survey them all. In scientific terms, it would be a “representative sample.”
Looking more carefully, however, I realized the database only included members of the organization. Next I looked at the demographics (the characteristics) of the subjects. About 70% of them came from large employers. As politicians hammer home all the time, correctly for once, most jobs are in small businesses. (The U.S. Census Bureau says only 48% of Americans work in large organizations). The survey’s sample was probably representative of members of that profession who were members of the organization and worked for large companies.
The report did not say that, however. It acted as if the results were representative of everyone in the profession. It’s like someone surveyed the Florida Marlins baseball team, then kept saying, “baseball players believe” rather than “Florida Marlins believe.” The Marlins’ answers might be very different from those of other Major Leaguers, not to mention those of minor leaguers and amateurs.
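The Marlins problem is easy to demonstrate with a toy simulation. Everything below is invented for illustration (the attitude scores, the 70/30 sampling split, the assumption that large-firm and small-firm workers feel differently); the point is only that a sample drawn mostly from one subgroup tracks that subgroup, not the whole population:

```python
import random

random.seed(7)

# Hypothetical 0-100 attitude scores for one profession, split by employer size.
# Assume large-firm workers feel differently than small-firm workers.
large_firm = [random.gauss(70, 10) for _ in range(480)]   # ~48% of workers
small_firm = [random.gauss(50, 10) for _ in range(520)]   # ~52% of workers
population = large_firm + small_firm

def mean(xs):
    return sum(xs) / len(xs)

# A sample drawn 70% from large firms, like the survey described above,
# over-represents large-firm views relative to the true workforce mix.
biased_sample = random.sample(large_firm, 70) + random.sample(small_firm, 30)

print(f"population mean: {mean(population):.1f}")
print(f"biased-sample mean: {mean(biased_sample):.1f}")
```

With these made-up numbers the biased sample overshoots the population average by several points, which is exactly the gap between “Florida Marlins believe” and “baseball players believe.”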
Now to another survey problem I will illustrate with an extreme example. Please answer on a 1 to 5 scale, from “Completely Disagree” to “Completely Agree,” the following: “I like football and needlepoint.”
I know there are people who like both, and some like neither, but I’m guessing most folks only like one or the other to some degree. So how would those folks accurately answer the question? Making things worse, do I mean viewing football and needlepoint, or doing them?
Conjunctions (and, but, or) in survey questions are a no-no according to most experts, yet I have seen a survey questionnaire created by organizational psychologists in a consulting firm that broke the rule repeatedly. The second problem above is less common, but it occurs in popular business assessments. If you cannot be reasonably sure what the respondent was responding to, or how they interpreted the question, your data is useless.
The point of this topic is that survey design is not for amateurs. I had the honor of studying the topic with a professor who wrote a widely used textbook on research methods. Yet if I were planning to invest time or money based on the results of a survey I created, I would pay an expert to review it. Your team deserves better than to make decisions on inaccurate data or on poorly researched news stories about poorly done surveys. So yet again I warn you: Question your sources.
Daniel Klein is an economics professor at George Mason Univ. and a self-described libertarian. Though some of his libertarian beliefs would be considered politically “liberal”—ending all narcotics laws, for example—most are more on the conservative side, such as ending the income tax. Klein definitely energized conservatives when he published an opinion piece in the Wall Street Journal declaring American liberals ignorant about economics. “Responding to a set of survey questions that tested people’s real-world understanding of basic economic principles, self-identified progressives and liberals did much worse than conservatives and libertarians,” he says in a follow-up in The Atlantic. The Journal piece quickly spread across the Internet, leading to 10,000 downloads of the study upon which it was based. Klein had co-authored that with Zeljka Buturovic, a psychology Ph.D. from Columbia Univ. who works for an opinion poll company Klein doesn’t name.
The follow-up’s title explains why it was necessary: “I was Wrong, and So are You.” The catalyst was a new study the pair did that checked the possibility that unintended bias in the first survey’s questions had caused the one-sided results. Indeed it had. The new study made clear that on average, U.S. conservatives, liberals, and libertarians are equally dumb about economics.
To understand how the results could be so different, read two example statements from the studies:
- “Restrictions on housing development make housing less affordable.”
- “A dollar means more to a poor person than it does to a rich person.”
The first statement came from the first study. If you are a conservative or libertarian, you are more likely to want to agree with it, and you will if you do not know the objective facts. If you are liberal, you probably want to agree with the second instead, which came from the second study; conservatives and libertarians would tend to disagree with it, just as liberals would with the first. But both statements are factually correct, based on significant research.
“You may have noticed that several of the statements we analyzed (in the first survey) implicitly challenge positions held by the left, while none specifically challenge conservative or libertarian positions,” Klein says in the Atlantic article. “A great deal of research shows that people are more likely to heed information that supports their prior positions, and discard or discount contrary information.” (This is the “Confirmation Bias” mentioned in “How Your Brain is Fooling You.”) That means liberals were more likely to discount the right-leaning statements of the first survey, and thus more likely to give wrong answers. Buturovic had been researching this bias effect, which led the pair to do the second survey using statements mostly challenging conservative or libertarian positions. This time, people who identified themselves as liberal were right more often than those in the other camps. Studies show that a dollar is more precious to a poor person, but more than 30 percent of libertarians and 40 percent of conservatives disagreed with that statement, versus only 4 percent of progressives. Klein chides his fellow libertarians: “c’mon, people!” But the bottom line comes in his next sentence. “Consistently, the more a statement challenged a group’s position, the worse the group did,” he says.
Klein then chides himself. “Shouldn’t a college professor have known better? Perhaps. But adjusting for bias and groupthink is not so easy,” he says. He points out that education level had no impact on accuracy, which means it had no impact on confirmation bias, either.
This article bolsters several of my topics in this hypertext. The last one warned of the dangers of do-it-yourself surveys. In this case, even “a college professor” who was not an expert on surveys had to be corrected on his conclusions by a specialist. I have also explained my own bias toward science-based team leadership partly because researchers go to great lengths to overcome bias and are more likely than consultants to admit when they are wrong. Klein provides a very public, honorable example of that effort and that willingness. Finally, speaking as an ex-journalist, I have made the point that the public perception of scientific flip-flopping is more the fault of poor journalism than of science. The first article “set off fireworks,” Klein reports, and was picked up by major U.S. newspapers. The second received relatively little notice. Everyone who read the first stream of articles and blog posts will go through life believing Klein’s first position was his final one.
There are plenty of lessons in this for the workplace, besides the one about surveys:
- If something seems to completely support your position, demand more proof to make sure you are not just viewing it the way you want to. I actively seek studies that differ with my current knowledge.
- If something seems too simple an answer, seek out alternate explanations. I refuse to report study conclusions that support (or refute) my beliefs if I think the evidence from the study is unclear.
- If you find out you were wrong, say you were wrong. Admitting his error to the 450,000 paying customers of The Atlantic could not have been easy for Klein, but doing so gives those readers the chance to open their minds.
To check your understanding of your world, open it to the scrutiny of others. You might know your job or your team better than many people, but you do not know it better than all people. Because you are human, you are biased. Whether you let this fact mislead your organization into wasting time, money and stress, or admit that you are human and seek out the facts wherever they lead, is your choice.
Source: Klein, D. (2011), “I was Wrong, and So are You,” The Atlantic 308(5):66.
You can’t read about human behavior these days without seeing the word “neuroscience.” Many writers use it as shorthand for “how the brain works,” though it is actually the study of the entire nervous system. That lack of precision is part of why I am wary of their conclusions. Thankfully, two management professors provided context in Harvard Business Review. Adam Waytz and Malia Mason work at the Kellogg School of Management and Columbia Business School, respectively. But they are also postdoctoral fellows in neuroscience at Harvard Univ.
The New York Times claimed in one editorial that brain scans showed people loved their iPhones like lovers, and in another, were disgusted by the term “Republican.” But the same part of the brain was stimulated in each case. Waytz and Mason write, “The two op-eds are examples of something scientists call ‘brain porn’: mainstream media reports that vastly oversimplify neuroscience research—and fuel a burgeoning industry of neuroconsultants who suggest that they can unlock the secrets of leadership and marketing from the brain.”
Most of these claims are based on magnetic resonance imaging (MRI) studies showing what parts of the brain are being used in a situation. “But the problem is, MRI doesn’t necessarily show causation. What’s more, thinking and behavior don’t map onto brain regions one-to-one,” Waytz and Mason say. So far, scientists have identified 15 networks of brain sectors that work together in certain patterns.
The default network keeps running when you are not concentrating on anything. Its existence means the brain continues to work on current knowledge even when not taking in new information. The authors say, “The capacity to envision what it’s like to be in a different place, a different time, a different person’s head, or a different world altogether is unique to humans and most potent when the default network is highly engaged.” This is valuable in innovation. Google’s famous policy of allowing one day a week for projects of personal interest is a good step toward engaging it, but that kind of program still focuses on problem solving. Waytz and Mason argue the quality of detachment may be more important than the quantity. “Companies could turn off employees’ e-mail and calendars; take away their phones; send them on a trip, away from all offices and other staff members; and take all other job duties off their plates,” they write, also mentioning meditation.
The reward network “reliably activates in response to things that evoke enjoyment and deactivates in response to things that reduce enjoyment.” Neuroscientists know it can be stimulated by rewards not necessary for survival. “That idea is consistent with a 2009 McKinsey survey of executives and managers who reported nonfinancial incentives to be as effective as financial ones in motivating employees—and at times more effective,” the article says. Brain studies suggest these include status, social approval, and fairness. One found that “when people are allowed to divide up small amounts of money between themselves and others, the reward network responds much more when they make generous, equitable choices.” Fairness also includes equal distribution of information and opportunities to provide input, Waytz and Mason say.
Opportunities to learn, goal achievement and interesting work stimulate the reward network, too. The authors say goals should not be too specific because they can stifle curiosity and flexibility. (Don’t include the “how” in a goal, I recommend, just the “what.”) A key takeaway is that money is not a top motivator. “Any number of things employers can do ‘on the cheap’—fostering a culture of fairness and cooperation, offering opportunities for people to engage their curiosity, and providing plenty of social approval—will motivate employees as much if not more,” Waytz and Mason conclude.
The professors say neuroscience has no answer yet on the correct balance between fact-based and gut-instinct decisions. What we call “feelings” are merely a set of physical responses in the body that can come from vastly different causes. We sometimes mis-identify the true cause, or get those feelings for irrational reasons. After giving a bad presentation, Waytz and Mason say, you will be nervous next time even when better prepared.
On the other hand, the subconscious mind apparently can figure things out before we are aware of them. In one study, people drew cards from four decks, two of which were rigged for bad results. After choosing 10 cards, normal people could get bad feelings when their hands hovered over bad decks. Yet it took 40 to 50 cards before they started actively choosing from good ones. People with damage in the “affect network” never had the stress response and continued making bad choices. The subconscious and conscious must work together, it seems. If you get a hunch, don’t ignore it. But seek both supporting and contradictory facts before making your decision.
The control network is the means by which you make yourself do things. It must maintain a balance between competing interests. “It tilts the scales in favor of actions compatible with our goals but not to such an extent that our resources are overcommitted,” Waytz and Mason write. To help the control network, reduce your team’s multitasking. “Asking people to pursue numerous goals fragments their attention and makes engaging in any mindful work difficult. With too many objectives to maintain and monitor, the control network spreads its limited resources thin, and we struggle to give enough attention to any of our responsibilities,” they say. In another study, the control networks of multitaskers “failed to allocate resources in a way that matched their priorities (and) these people struggled to filter out irrelevant information.”
The professors say, “Success as a leader requires, first and foremost, creating just a few clear priorities and gathering the courage to eliminate or outsource less important tasks and goals.” They add, “The more leaders ask their workers to focus on, the worse those employees will perform. Though in the short term it’s cost-effective to keep staffs thin, brain science suggests many modern workers have already been pushed far beyond the point where their goals and tasks are manageable.”
What is your control network telling you now?
Source: Waytz, A., and M. Mason (2013), “Your Brain at Work,” Harvard Business Review, July-August, online: http://hbr.org/2013/07/your-brain-at-work/ar/1.
A study about HR practices points out how scientists think a little differently from most of us, and thus why I put a lot more stock in what they have to say than in most writers of business stories in popular publications.
This study was based on the Workplace and Employee Survey (WES), which is sponsored by the Canadian government. The WES is a powerful tool because those contacted are legally required to respond, making its results more likely to reflect what happens in most Canadian businesses than surveys where only a small fraction respond. Right off the bat, this makes the study different: its response rate of around 96% is nearly four times the typical survey response rate of roughly 25%. Scientists without a government mandate behind them go to great lengths to make sure their results are not too badly skewed by those low response rates, and thus resemble what they would have gotten had they reached everybody.
Scientists are also careful about drawing conclusions from their work. A major mistake most people make when reading about studies—including most journalists and consultants—is to confuse correlation with causation. Simply put, just because two measurements are linked, that doesn’t mean one caused the other. In the HR study, for example, higher levels of training at a workplace were linked to higher levels of people quitting. Is this because a better-trained worker has more skills they can use to get a job elsewhere, as the scientists suggest based on other research? Probably. But it also could be that companies with higher “quit rates” have to provide more training because they have to hire more people to backfill those positions. A simple correlation does not show whether the training came first or the quitting did, and the article’s authors say so. (Their study design provides some evidence, though.)
Scientists will point out where their data are lacking. In this study, the authors note the WES data are not very detailed. It is possible that classroom training leads to higher quit rates, but on-the-job training leads to lower ones. You can’t draw a conclusion about all training from this gross figure (gross as in “general,” not as in “yucky”).
Scientists also are pretty quick, at least in publications, to point out where they were wrong. In part this is because they know that in peer-reviewed journals, where other anonymous scientists critique articles before publication, the reviewers will say it if the authors don’t admit it themselves. In this study, some of the researchers’ hypotheses turned out to be wrong, and they state that.
Finally, scientists are careful to limit their conclusions to what they actually investigated and found. For example, these authors point out the study was only about voluntary turnover (versus firings), and there are likely to be compelling reasons for a company to offer training despite it harming this one metric. (If you doubt that, I refer you to the powerful evidence in the book The Fifth Discipline.)
Contrast all this to stories in popular business publications. They are not usually reviewed by other experts on the topic before publication. They assert positions without offering hard data to back them up. Their language is imprecise. I once commented on another writer’s blog that the best “practices” a post claimed for teams were not “practices” at all, but descriptions of well-performing teams.
These stories also make claims they can’t support. A press release that got coverage from a national professional organization estimated the financial losses due to workers who avoid conflict at work. But when I contacted the firm that put out the study, they admitted the sample was just anybody who responded to an online poll, and the demographics showed that the respondents in no way represented the common worker. Sixty-seven percent were female, for example. Two-thirds worked in companies of 750 or more employees, and 71 percent had college degrees. Most people work for smaller firms, and only around 25% of Americans have degrees. Yet the release claimed, “New research reveals employees waste an average of $1,500 and an 8-hour workday for every crucial conversation they avoid.” No, it doesn’t. It says workers who use the Internet, happened to see an ad for the survey, and were interested enough to respond gave that as the average answer—an answer that probably would not hold up if an outside observer actually measured the time.
In short: reader beware.
Source: Haines, V., P. Jalette, and K. Larose (2010), “The Influence of Human Resources Management Practices on Employee Voluntary Turnover Rates in the Canadian Non Governmental Sector,” Industrial & Labor Relations Review 63(2):228.