This is a continuation of my series reflecting on Jacques Ellul's The Technological Society. I promised to write up what I felt to be some of the weaknesses of his method; this is the second.
"Wicked problems" are variously defined, but in effect, they're problems that you can understand completely only after you've solved them. You can get the idea by pondering what it would take to solve the Israeli-Palestinian conflict, or to create a genuine artificial intelligence. There are two key things to note about wicked problems: (1) they're everywhere, and (2) pretty much by definition, Technique doesn't work for them. This necessarily limits the power and scope of Technique, and carves out a realm where other phenomena can still function with some degree of autonomy. You can see how this works if you look at a classic wicked problem, how to write the best tax code. On p. 269, Ellul claims that "there is an optimum tax structure which can be completely determined," and that Technique will invariably force us to find and implement this structure. Yet 50 some years later, politicians in the US are still arguing about taxes – and any tax structure which might conceivably be viable in the US is dramatically different than what you would find in Europe. Indeed, due to the wildly differing philosophical presuppositions which underlie the debate, this is an area where there is very little agreement at all.
Similarly, Ellul describes (pp. 341ff) with great confidence the growing ability of psychology to accurately predict human behavior and, more disturbingly, to allow technicians to control it. However, my experience with ad campaigns and, specifically, with the use of multivariate testing algorithms to select the best landing pages, leads me to believe that both Ellul's confidence and his worries are excessive. Determining the landing page that will give you the most conversions is a fairly simple and very well-defined problem, even if it has certain "wicked" elements to it. The most sophisticated approach involves assessing the attractiveness of different options within different page elements, using complex multivariate statistics to overcome the astronomical number of combinations involved and predict the best combination of elements. At Zango, over the years, we tried this approach with at least three different companies, with precisely zero success. People simply didn't behave the way the statistical models told us they'd behave; and even when they did, for a given page, it was nearly impossible to translate those learnings to the hundreds of other landing pages we needed to optimize. And this was for a very simple, very localized, very well-defined problem, with millions of data points available for analysis. Certainly advertisers, television executives, and movie producers have found a myriad of ways to manipulate us, and they're reasonably good at it. But it's still more art than science, more gut than technique. Who could have predicted the success of Nike's "Just Do It" slogan? Or Apple's astonishingly simple "Hi, I'm a Mac" ads? There's Technique there, sure, but there's a whole lot more creativity than Technique.
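To give a feel for what these systems actually do, here's a minimal sketch of the element-level approach in Python. Everything in it is hypothetical (the element names, the effect sizes, the simulated traffic), and the vendors' real models were considerably more elaborate, if no more successful:

```python
import random

random.seed(0)

# Hypothetical page elements and the options being tested for each.
ELEMENTS = {
    "headline": ["A", "B", "C"],
    "image": ["photo", "illustration"],
    "button": ["green", "red", "blue"],
}

def simulate_visit(combo):
    """Stand-in for real traffic: each option nudges conversion probability."""
    effects = {"A": 0.00, "B": 0.02, "C": 0.01,
               "photo": 0.01, "illustration": 0.00,
               "green": 0.00, "red": 0.02, "blue": 0.01}
    p = 0.05 + sum(effects[opt] for opt in combo)
    return random.random() < p

# Serve random combinations and tally conversions per option.
# (Real systems use fractional designs to cope with the huge number
# of combinations; random assignment keeps this sketch short.)
stats = {opt: [0, 0] for opts in ELEMENTS.values() for opt in opts}
for _ in range(50_000):
    combo = tuple(random.choice(opts) for opts in ELEMENTS.values())
    converted = simulate_visit(combo)
    for opt in combo:
        stats[opt][0] += int(converted)
        stats[opt][1] += 1

# Predict the best page: the top-performing option within each element.
# This assumes the elements' effects are additive -- exactly the
# assumption that fails when elements interact.
best = {name: max(opts, key=lambda o: stats[o][0] / stats[o][1])
        for name, opts in ELEMENTS.items()}
print("Predicted best combination:", best)
```

The whole exercise hinges on that final, additive step, and in our experience real visitors simply declined to be additive.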
Here's another example. When an Internet company wants to maximize lifetime revenue from its audience, the standard technique is to split the audience into "sample groups" and treat each of those groups differently (say, by showing them a different page when they visit your site). You then measure the "lifetime revenue per user" from each group, and once you determine which sample group has the highest lifetime revenue, you begin treating all your users the way you treated that group. Google and Microsoft, those ancient adversaries, use this technique all the time. Back around 2000, when Google was just beginning its rise to power, both MSN and Google were trying to figure out how best to monetize their users. An insider from MS told me that the folks over at MSN decided to test showing ads on the MSN search page, dividing users into sample groups and showing each group a different number of ads. It turned out that the sample groups showed almost no difference in user lifetime, but the group that saw the most ads had the best lifetime revenue. So MSN started showing a whole bunch of ads on their home page. And of course, why not? The interesting thing is that Google ran the same tests, with the same kind of sample groups, and came up with the same results. But Google recognized that a sample group couldn't test everything: for instance, it couldn't test whether a user ended up referring friends to the site because it was so cool. So Google made the choice – against all the advice that Technique could give them – not to show ads on their home page. In other words, unlike Microsoft, Google recognized that user retention was a "wicked problem", with counter-intuitive solutions. Of course, there are many reasons why Google has beaten Microsoft at search, but this recognition that not everything can be solved with Technique is a very big part of it.
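Mechanically, such a test is simple; here's a rough Python sketch of the hashing-and-measuring machinery I mean, with hypothetical treatments and placeholder revenue numbers:

```python
import hashlib
from collections import defaultdict

AD_COUNTS = [0, 2, 4, 8]  # one treatment per sample group (made up)

def group_for(user_id: str) -> int:
    """Deterministically hash a user into one of the sample groups."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % len(AD_COUNTS)

# Revenue events collected over the test window: (user_id, dollars).
# In a real test these would come from logs; these are placeholders.
revenue_events = [
    ("alice", 0.12), ("bob", 0.00), ("carol", 0.40),
    ("alice", 0.08), ("dave", 0.05), ("erin", 0.33),
]

totals = defaultdict(float)
users = defaultdict(set)
for user_id, revenue in revenue_events:
    g = group_for(user_id)
    totals[g] += revenue
    users[g].add(user_id)

# Lifetime revenue per user, by group; the "winning" treatment is
# whichever group comes out on top.
for g, ads in enumerate(AD_COUNTS):
    if users[g]:
        print(f"group {g} ({ads} ads): {totals[g] / len(users[g]):.2f} per user")
```

The mechanics aren't the hard part; the hard part is that the metric only sees what the groups can measure, and the referral effect Google worried about never shows up in these numbers at all.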
Ellul reviews the various forces that might stand in the way of Technique (morality, popular opinion, social structure, and the state), and concludes that nothing in contemporary society is likely to resist it (pp. 301-318). But he ignores the entire class of problems that Technique simply can't address, and that omission calls his fundamental thesis into question. If Technique, by definition, has nothing to say to huge areas of human experience, it seems less of a threat than Ellul makes it out to be.
I suspect that it's only been in the last few decades – well after Ellul wrote – that we've come to recognize the nature and existence of wicked problems. The many futile attempts to create a general-purpose artificial intelligence have been highly enlightening in this regard. (See Hubert Dreyfus' What Computers Still Can't Do.) So it's perhaps understandable that Ellul could have repeated this claim:
"Jungk even claims that in the United States, on very advanced technical levels, unchallengeable decisions have already been made by 'electronic brains' in the service of the National Bureau of Standards; for example, by the EAC, surnamed the 'Washington Oracle'. The EAC is said to have been the machine which made the decision to recall General MacArthur after it had solved equations containing all the strategic and economic variables of his plan. This example, which must be given with all possible reservations, is confirmed by the fact that the American government has submitted to such computing devices a large number of economic problems that border on the political." (p. 259)
In the 1950s and 1960s, there was a fairly widespread assumption that the problems of artificial intelligence would quickly be solved, as evidenced by the tendency to call computers 'electronic brains'. Still, this perspective seems absurdly naïve, and even though Ellul repeats it with "all possible reservations", the fact that he thought it worthy of repetition in any form shows just how badly he misunderstood the limitations of Technique. Certain problems are simply not susceptible to technical solutions.