You don’t hear (or at least I don’t hear) people talking about our current epoch as an “Information Age” anymore. That might be because it’s taken for granted, just as I doubt Sumerians a few thousand years ago were walking around talking about their “Tablet Age”, but I suspect that there’s also a certain embarrassment about claiming the contemporary world to be one that’s distinguished by its respect for information. The novelty of the Internet has worn off, replaced by the grim realization that we are (barring catastrophe) probably in the least technologically mediated era any of us will live in ever again. What had once seemed like liberation now seems like a sort of home confinement: it’s not prison, exactly, but it’s also not exactly the freedom we once had. (No, really, remember the techno-optimism of the Nineties?)
Yet it is still an Information Age. Our society is distinguished from earlier decades, much less centuries, by the ease and affordability of accumulating, transforming, and producing vast amounts of data. That marks us just as surely as Sumeria was defined by the tilling of soils and the harvesting of grains. And if it is slowly dawning on us that the reaping of data may not emancipate us all, well, so too was it true that the average Mesopotamian might not have benefitted from the advent of centralized state bureaucracies to manage those harvests.
That the manipulation of data is so much cheaper now than it was when it required an array of workers (RIP, Dabney) with typewriters and Wite-Out, much less expensively trained scribes to craft warehouses full of clay pictograms, has conjured an expectation that the creation of information is similarly cheap and productive. Note, by the way, that “data” changed to “information” in that sentence not because of editorial errors (although Enlil knows this newsletter has enough typos) but because I do want to draw a distinction between a mass of observations and a worthwhile interpretation. Setting aside questions of validation for a second, the shock of falling prices in the data-gathering and manipulation business has been to raise the relative costs of interpretation substantially.
When I do work in the 1920s and thereabouts, which I do surprisingly frequently, I’m reminded that the massive tabulations that I browse are the results of punchcard manipulations done by computers back when that was a gendered term (and click the link if you don’t know what I mean). Simply computing an average or median was a major task in 1925; from there, providing an interpretation was comparatively less demanding. I can assure you that the ratio of interpretations to data was skewed well in favor of the former as a result.
Now, the ratio is, if not reversed, certainly rebalanced: asking Excel to compute an average takes less time than describing it does, but explaining the significance of that average still requires the same hardware—a Mark 1 brain—and about the same amount of time it always has.
Even in that simple example, then, there’s an upper limit to the number of informed observations any analyst can make, even with all of the advances in computation that make the back-breaking part of analysis now as easy as magic. (I’m just imagining going back in time to a temple or counting-house in Uruk and showing them how easy it is to keep records on a spreadsheet—those guys would learn Visual Basic tomorrow and be fighting about R versus Python within a week, so happy would they be to no longer have to keep records by scratching them into clay tablets.)
That, in turn, means that the relatively more expensive product of an Information Age is interpretation, especially reliable interpretation. For producers of interpretations, this is stunningly obvious: it’s still hard to write a newspaper column, journal article, or trade presentation. For consumers, however, this may be less so. Interpretations appear as easily as anything else does. If Amazon can provide same-day shipping, why can’t writers provide informed assessments on demand?
Yet by now the fallacy should be obvious. Even if an expert can turn around an assessment on a given issue quickly, that reflects the fact that experts are specialized tools, not general-purpose analysis engines. The cost of developing expertise has likely risen substantially relative to the marginal cost of data processing.
Why does this matter? Because expecting any individual writer, research team, or institution to provide interpretation on demand is commonplace, usually in the form of “Why haven’t you talked about X or Y?”1 Offering an interpretation always and everywhere entails a reputational risk, as there may be a flaw in one’s logic or evidence that, once found, can leave a mark. Reducing those risks requires investments in the production and validation of interpretations—and ultimately time and resources are scarce. And this is just the familiar logic of why specialization means giving up pretenses to generality.
The takeaway, then, is that informed opinions are expensive. Someone with an answer for every situation is apt to be unreliable in at least most of them. And demanding a position from everyone about everything is ultimately not only futile but self-defeating.
1 I don’t get this very often! Which does make me wonder if that’s a backhanded compliment.