Book Reviews
The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective
Wisdom Shapes Business, Economies, Societies and Nations by James Surowiecki

Rating: ••••• (Outstanding book: read it now)
Would you
believe that groups are often smarter than the smartest people in them? I
didn’t until I read James Surowiecki’s new book, The
Wisdom of Crowds. My experience has been that groups acting in concert
tend to “dumb down” the results. Surowiecki lays
out a different case, backed up by lots of facts and examples. Do you think
compromise and consensus produce the best results? According to Surowiecki, “…the best way for a group to be smart is for
each person in it to think and act as independently as possible.” There are
four conditions that characterize wise crowds: diversity of opinion,
independence, decentralization, and aggregation. In The Wisdom of Crowds, Surowiecki explores each of these characteristics, and
explains how they can work best, and how they can lead to trouble. Here’s an
excerpt from Chapter 2,
Section III that may revise your thinking about experts:

The fact that cognitive diversity matters does not mean that if you
assemble a group of diverse but thoroughly uninformed people, their
collective wisdom will be smarter than an expert’s. But if you can assemble a
diverse group of people who possess varying degrees of knowledge and
insight, you’re better off entrusting it with major decisions rather than
leaving them in the hands of one or two people, no matter how smart those
people are. If this is difficult to believe—in the same way that March’s assertions
are hard to believe—it’s because it runs counter to our basic intuitions
about intelligence and business. Suggesting that the organization with the
smartest people may not be the best organization is heretical, particularly
in a business world caught up in a ceaseless “war for talent” and governed
by the assumption that a few superstars can make the difference between an
excellent and a mediocre company. Heretical or not, it’s the truth: the
value of expertise is, in many contexts, overrated. Now, experts obviously
exist. The play of a great chess player is qualitatively different from the
play of a merely accomplished one. The great player sees the board differently,
he processes information differently, and he recognizes meaningful patterns
almost instantly. As Herbert A. Simon and W. G. Chase demonstrated in the
1970s, if you show a chess expert and an amateur a board with a chess game in
progress on it, the expert will be able to re-create from memory the layout
of the entire game. The amateur won’t. Yet if you show that same expert a
board with chess pieces irregularly and haphazardly placed on it, he will not
be able to re-create the layout. This is impressive testimony to how
thoroughly chess is imprinted on the minds of successful players. But it also
demonstrates how limited the scope of their expertise is. A chess expert
knows about chess, and that’s it. We intuitively assume that intelligence is
fungible, and that people who are excellent at one intellectual pursuit would
be excellent at another. But this is not the case with experts. Instead, the
fundamental truth about expertise is that it is, as Chase has said,
“spectacularly narrow.” More important, there’s
no real evidence that one can become expert in something as broad as
“decision making” or “policy” or “strategy.” Auto repair, piloting, skiing,
perhaps even management: these are skills that yield to application, hard
work, and native talent. But forecasting an uncertain future and deciding
the best course of action in the face of that future are much less likely to
do so. And much of what we’ve seen so far suggests that a large group of
diverse individuals will come up with better and more robust forecasts and
make more intelligent decisions than even the most skilled “decision maker.”

We’re all familiar with
the absurd predictions that business titans have made: Harry Warner of
Warner Bros. pronouncing in 1927, “Who the hell wants to hear actors talk?,” or Thomas Watson of IBM declaring in 1943, “I think
there is a world market for maybe five computers.” These can be written off
as amusing anomalies, since over the course of a century, some smart people
are bound to say some dumb things. What can’t be written off, though, is the
dismal performance record of most experts. Between 1984 and 1999,
for instance, almost 90 percent of mutual-fund managers underperformed the
Wilshire 5000 Index, a relatively low bar. The numbers for bond-fund managers
are similar: in the most recent five-year period, more than 95 percent of all
managed bond funds underperformed the market. After a survey of expert
forecasts and analyses in a wide variety of fields, Wharton professor J. Scott Armstrong wrote, “I could
find no studies that showed an important advantage for expertise.” Experts,
in some cases, were a little better at forecasting than laypeople (although a
number of studies have concluded that nonpsychologists,
for instance, are actually better at predicting people’s behavior than psychologists
are), but above a low level, Armstrong concluded, “expertise and accuracy
are unrelated.”

James Shanteau is one of the
country’s leading thinkers on the nature of expertise, and has spent a great
deal of time coming up with a method for estimating just how expert someone is.
Yet even he suggests that “experts’ decisions are seriously flawed.” Shanteau recounts a series of studies that have
found experts’ judgments to be neither consistent with the judgments of other
experts in the field nor internally consistent. For instance, the
between-expert agreement in a host of fields, including stock picking,
livestock judging, and clinical psychology, is below 50 percent, meaning
that experts are as likely to disagree as to agree. More disconcertingly, one
study found that the internal consistency of medical pathologists’ judgments
was just 0.5, meaning that a pathologist presented with the same evidence
would, half the time, offer a different opinion.

Experts are also
surprisingly bad at what social scientists call “calibrating” their
judgments. If your judgments are well calibrated, then you have a sense of
how likely it is that your judgment is correct. But experts are much like
normal people: they routinely overestimate the likelihood that they’re right.
A survey on the question of overconfidence by economist Terrance Odean found that physicians, nurses, lawyers, engineers, entrepreneurs,
and investment bankers all believed that they knew more than they did.
Similarly, a recent study of foreign-exchange traders found that 70 percent
of the time, the traders overestimated the accuracy of their exchange-rate
predictions. In other words, it wasn’t just that they were wrong; they also
didn’t have any idea how wrong they were. And that seems to be the rule among
experts. The only forecasters whose judgments are routinely well calibrated
are expert bridge players and weathermen. It rains on 30 percent of the days
when weathermen have predicted a 30 percent chance of rain.
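To make the book’s notion of calibration concrete, here is a small Python sketch of my own; the forecast records in it are invented for illustration and are not data from the book. It groups a forecaster’s predictions by the probability they stated and checks how often the event actually happened in each group:

    # Hypothetical example: checking whether a forecaster is well calibrated.
    # The forecast records below are invented for illustration.
    from collections import defaultdict

    # (stated probability of rain, whether it actually rained)
    forecasts = [
        (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, True),
        (0.3, False), (0.3, False), (0.3, True), (0.3, False), (0.3, False),
        (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    ]

    outcomes_by_stated_prob = defaultdict(list)
    for stated_prob, it_rained in forecasts:
        outcomes_by_stated_prob[stated_prob].append(it_rained)

    for stated_prob in sorted(outcomes_by_stated_prob):
        outcomes = outcomes_by_stated_prob[stated_prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"said {stated_prob:.0%} -> rained {observed:.0%} of {len(outcomes)} days")

A well-calibrated forecaster’s “30 percent” days really do see rain about 30 percent of the time; an overconfident expert’s “90 percent” calls come true noticeably less often than that.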

Armstrong, who studies expertise and forecasting, summarized the case this way: “One would expect experts to have
reliable information for predicting change and to be able to utilize the
information effectively. However, expertise beyond a minimal level is of
little value in forecasting change.” Nor was there evidence that even if most
experts were not very good at forecasting, a few titans were excellent.
Instead, Armstrong wrote, “claims of accuracy by a single expert would seem
to be of no practical value.” This was the origin of Armstrong’s
“seer-sucker theory”: “No matter how much evidence exists that seers do not
exist, suckers will pay for the existence of seers.”

Again, this doesn’t mean that well-informed,
sophisticated analysts are of no use in making good decisions. (And it certainly
doesn’t mean that you want crowds of amateurs trying to collectively perform
surgery or fly planes.) It does mean that however well-informed and
sophisticated an expert is, his advice and predictions should be pooled with
those of others to get the most out of him. (The larger the group, the more
reliable its judgment will be.) And it means that attempting to “chase the
expert,” looking for the one man who will have the answers to an
organization’s problem, is a waste of time. We know that the group’s decision
will consistently be better than most of the people in the group, and that it
will be better decision after decision, while the performance of human
experts will vary dramatically depending on the problem they’re asked to
solve. So it is unlikely that one person, over time, will do better than the
group.

Now, it’s possible that a small number
of genuine experts—that is, people who can consistently offer better
judgments than those of a diverse, informed group—do exist. The investor
Warren Buffett, who has consistently outperformed
the S&P 500 Index since the 1960s, is certainly someone who comes to
mind. The problem is that even if these superior beings do exist, there is no
easy way to identify them. Past performance, as we are often told, is no
guarantee of future results. And there are so many would-be experts out
there that distinguishing between those who are lucky and those who are
genuinely good is often a near-impossible task. At the very least, it’s a job
that requires considerable patience: if you wanted to be sure that a
successful money manager was beating the market because of his superior
skill, and not because of luck or measurement error, you’d need many years,
if not decades, of data. And if a group is so unintelligent that it will
flounder without the right expert, it’s not clear why the group would be
intelligent enough to recognize an expert when it found him.

We think that experts
will, in some sense, identify themselves, announcing their presence and
demonstrating their expertise by their level of confidence. But it doesn’t
work that way. Strangely, experts are no more confident in their abilities
than average people are, which is to say that they are overconfident like
everyone else, but no more so. Similarly, there is very little correlation
between experts’ self-assessment and their performance. Knowing and knowing
that you know are apparently two very different skills.

If this is the case, then
why do we cling so tightly to the idea that the right expert will save us?
And why do we ignore the fact that simply averaging a group’s estimates will
produce a very good result? Richard Larrick and
Jack B. Soll suggest that the answer is that we
have bad intuitions about averaging. We assume averaging means dumbing down or compromising. When people are faced with
the choice of picking one expert or picking pieces of advice from a number of
experts, they try to pick the best expert rather than simply average across
the group. Another reason, surely, is our assumption that true intelligence
resides only in individuals, so that finding the right person—the right consultant,
the right CEO—will make all the difference.
In a sense, the crowd is blind to its own wisdom.

Finally, we seek out experts
because we get, as the writer Nassim Taleb asserts, “fooled by randomness.” If there are
enough people out there making predictions, a few of them are going to
compile an impressive record over time. That does not mean that the record
was the product of skill, nor does it mean that the record will continue into
the future. Again, trying to find smart people will not lead you astray.
Trying to find the smartest person
will.

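That closing claim, that a simple average of many independent guesses usually lands closer to the truth than most of the individual guessers, is easy to try for yourself. The short Python sketch below is my own illustration with invented numbers, not code from the book; it assumes the guesses are independent and unbiased, which is the very condition Surowiecki says a wise crowd needs:

    # Hypothetical example: the average of many noisy, independent guesses
    # versus the accuracy of the individual guessers.
    import random

    random.seed(1)
    TRUE_VALUE = 1000        # e.g., the number of jelly beans in a jar
    GROUP_SIZE = 200

    # Each guesser is individually unreliable: unbiased but very noisy.
    guesses = [TRUE_VALUE + random.gauss(0, 300) for _ in range(GROUP_SIZE)]

    crowd_estimate = sum(guesses) / len(guesses)
    crowd_error = abs(crowd_estimate - TRUE_VALUE)
    individual_errors = [abs(g - TRUE_VALUE) for g in guesses]

    beaten = sum(1 for err in individual_errors if crowd_error < err)
    print(f"crowd average: {crowd_estimate:.0f} (off by {crowd_error:.0f})")
    print(f"the average beats {beaten} of {GROUP_SIZE} individual guessers")

Because the errors are independent, they largely cancel when averaged, so the group estimate typically beats the large majority of its members; if everyone copied the same loud voice, the errors would correlate and the advantage would shrink. That is the aggregation and independence argument in miniature.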
I’ve enjoyed Surowiecki’s articles in The New Yorker and Slate
for several years, so I was predisposed to listen with an open mind to what he had to
say in The
Wisdom of Crowds. Beyond the good writing, Surowiecki
brings some new thinking to disrupt my entrenched opinions and attitudes, and
I’m open to the possibility that my thinking about group processes may be
flawed.

I’ve awarded The
Wisdom of Crowds our top rating for several reasons: the premises
require thinking and reflection; Surowiecki’s
premises are supported with facts and examples; the notes disclose ample
sources for further investigation; the writing is good; and the material covers a
wide array of applications.

The challenge for managers and leaders of groups
small and large is how to foster diversity of opinion, independence, and
decentralization, and how to aggregate the group’s knowledge into better decisions and
better action. Reading The
Wisdom of Crowds and thinking about these issues provide a good
beginning. Steve
Hopkins, June 25, 2004
© 2004 Hopkins and Company, LLC

The recommendation rating for this book appeared in the July 2004 issue of Executive Times.

URL for this review: http://www.hopkinsandcompany.com/Books/The Wisdom of Crowds.htm

For Reprint Permission, Contact: Hopkins & Company, LLC • E-mail: books@hopkinsandcompany.com