
Malcolm Jolley: Why (and for how long) you should cellar your wine

Commentary

The word cellar is both a noun and a verb. The latter means simply to put something in a cellar, though it usually refers to a bottle, or more, of wine. In modern usage, one doesn’t have to cellar wine in an actual cellar. Wine can be put down in a purpose-built fridge, or in a corner that’s consistently cool and dark enough to allow the wine to age gracefully.

Merely having a cellar (or cellar-like storage place) is not enough for a successful venture in cellaring, though. As the Cambridge historian Helen Bettinson writes, one also requires a wine that’s suited to being aged, and “some form of capital investment is usually necessary”. Actually, it’s worse than that: cellaring wine doesn’t just require the willingness to pay for wine today that won’t be drunk for many tomorrows; it also requires the willingness to assume the risk that, after several years of waiting, what comes out of the bottle might not be any good.

Of course, the wines that are, by conventional wisdom, worth cellaring are almost always more expensive by orders of magnitude than those that aren’t. So, it’s little wonder that the great majority of the bottles of wine bought in the world are consumed within days, if not hours, of being purchased. And these wines are accordingly made to be enjoyed young. (For the purposes of this column “wine” refers to red wine, though there are many white wines that are suited to ageing.)

In fact, it’s reasonable to wonder why any wine is made to be cellared at all. Household wine cellars, as opposed to those in wineries or other buildings of commerce, are largely the invention of the wine trade from the 17th century on. Wine shipped from the port of Bordeaux to lucrative markets in London, Amsterdam, and Edinburgh would have had to be stabilized, perhaps with sulphur compounds but surely with the astringent tannins from grape skins (especially Cabernet Sauvignon) and the oak barrels in which it was contained.

Stabilization is a fancy way of saying the wine was treated to inhibit living things from growing in it and spoiling it. Most wines now contain trace amounts of sulphites because their molecules bind with oxygen, which many spoilage microbes need to thrive and which by itself causes oxidation, a form of chemical degradation. So-called “natural wines” that are made without sulphites typically rely on tannins, drawn from long contact with the grape skins, or even the stems for extra astringency.

Very tannic wines are unpleasant to drink young; they dry out the mouth, like sucking on a teabag. This effect is commonly referred to in tasting notes as “gripping”. Before modern winemaking technology, like temperature-controlled fermentation tanks, tannins were a valuable and sought-after tool to keep wine from spoiling on its way to market. (Hops in beer perform an analogous role and were widely developed as stabilizers by the traders of the Hanseatic League.)

The trade-off for a London wine consumer, whose odds of receiving an unspoilt case of Claret were improved by the tannins in the wine, was that he would have to age the wines for some years before they were pleasant to drink. If you were (or are) the owner of a large house with a cellar and had (have) the capital to routinely buy wine en primeur, or just by the case, then these wines would suit you fine, since one could manage a rolling inventory of young wines in and old wines out. (Restaurants with big wine lists work this way.)

Does this make sense in the modern era? As with most wine questions, the answer is “it depends”. And, as with most wine questions, the follow-up reason for why it might make sense is “because it’s fun”.

For most of my career as a wino, I did not cellar much wine since I lacked the simultaneous possession of Bettinson’s three criteria. What capital I had to spend on wine I tended to use to buy as widely as I could, eager to try and enjoy many different styles of the stuff. I had a small fridge where a few prize bottles would sit for a while, but even then the temptation to open them would eventually become too strong to resist, and the rate of depletion would sometimes exceed the rate of acquisition. Finally, as a journalist, I was fortunate enough to get in front of a fair number of older wines at tastings and events, so at least I had some “lived experience”.

This began to change in the last few years. First, my wife and I renovated and expanded our house, and a storage area was added to the basement, under the new stairs. I had room. Then, the lockdowns of the pandemic changed the way I bought wine so that I bought a lot more of it, including “age-worthy” more expensive bottles, by the case. Since the cases of wine had to be stored in the short term in the cellar, I began to segregate bottles to different shelves and would put a few in the ones I set up for keeping.

Apart from the metaphysical thrill of drinking something old (what were you doing in 2012?), there are physical benefits too. In my experience, mid-range reds ($20–$40 a bottle) hit a kind of sweet spot at three to five years from vintage, though they are often sent to market at two. Even if they are not particularly tannic, they seem to settle, and some extra fruit is released.

As for the expensive wines that really are made to be cellared, the classified growths and luxury brands, I try hard to keep them down until they are a decade from being grown and picked. What I hope for is that they will have opened up, softened their grip without losing all their structure, and will reveal the core of their fruit character as well as the so-called tertiary characteristics that come with ageing, like earthiness, cedar, tobacco, or whatever.

In a conversation earlier this year, the Australian-based Master of Wine Neil Hadley remarked in passing that he thought no wine benefited from more than 15 years of ageing. He didn’t mean that wine couldn’t or shouldn’t be aged for longer, just that whatever chemical magic happened in a cellared bottle would likely have run its course by then. After the peak might come a plateau, though the risk of decline would also be present.

I have tasted just enough really old wine, say 30 or more years old, to know that at some point it all starts to taste the same. The flavours that made each wine distinct melt into a raisiny, rusty blur in the glass, which is best drunk quickly before exposure to oxygen kills what’s left to savour. It’s not unpleasant, but when a really old wine, like a 50-year-old Bordeaux or Barolo, still sings with fruit, it’s a very special thing, and I am always grateful to whoever spent their capital on the investment.

Howard Anglin: Three cheers for the notwithstanding clause

Commentary

There are times when I read the news and wonder who is feeding everyone their lines. How did the commentariat settle so quickly on a common script and circulate it to everyone who pops up in my news feed? It happened this week in response to the Ontario government’s decision to use section 33 of the Charter to immunize its legislation to protect students from the threat of further disruption to their schooling (in this case, pre-empting a threatened strike by the union representing 55,000 educational support workers). Everywhere I looked I was assured that the Ford government’s use of section 33, the “notwithstanding clause,” was an abuse of a power that was only ever intended to be used in exceptional circumstances.

So we have Seamus O’Regan explaining that “the notwithstanding clause is meant to be used in the most extreme of circumstances,” Scott Reid tweeting that “its use was always imagined to be rare and accompanied by great stigma,” and Andrew Coyne intoning that “[t]he clause was to be deployed in the most rare and urgent crises, if at all—not in response to every provincial hangnail” and calling on the federal government to invoke its even more rarely used power to disallow the provincial law.

Much as I always enjoy Coyne’s view of Canada from the terrace of the Price Street Terroni, protecting students who have lost the better part of a year of formal education from further learning loss (not to mention the havoc that closing schools at this time would bring to parents’ lives and a precarious economy) is something more than a hangnail. In fact, managing the difficult balance between the competing interests of teachers, students, and parents in a case like this is precisely the sort of thing that we elect representatives to the legislature to do. 

Which would not be controversial except that, beginning in 2007, the Supreme Court of Canada began discovering rights in section 2(d) of the Charter that protected first collective bargaining (in the BC Health Services case, 2007) and then the right to strike (in the Saskatchewan Federation of Labour case, 2015). This was noteworthy because, in a trilogy of cases decided just five years after the Charter was enacted, the same court had ruled that these rights were not protected under that same section of the Charter.

When I referred to the Supreme Court’s “invention” of a right to strike in 2015, I received furious assurances on Twitter that the court had not “invented” a right, its thinking had merely evolved incrementally, or something (honestly, I couldn’t keep the various rationalizations straight). So let’s have a look at how the Court justified reversing 28 years of its own Charter jurisprudence: “It seems to me to be the time to give this conclusion [that the Charter includes a right to strike] constitutional benediction.”

Huh. The reversal of established precedent with far-reaching consequences for public policy rested on…a feeling. The time just seemed right, apparently; gut instinct as constitutional principle. Yes, the majority pastes together an impressive collage of evidence based in “history…jurisprudence, and…Canada’s international obligations,” but nothing that wasn’t known by the Court in 1987. As the dissent points out, most of the evidence “existed at the time this Court rendered its decisions in the Labour Trilogy” and “[c]ontrary to the majority’s approach, international law provides no guidance to this Court…for at least one key reason: the current state of international law on the right to strike is unclear.”

Nothing had fundamentally changed about the relations between workers and employers since 1987—if anything, workers’ conditions and bargaining rights had improved since then. But five judges of the Supreme Court wanted the law to change, and so the federal and provincial legislatures lost a power they had had since Confederation. This whimsical decision is the sacred principle that the Ontario government is now offending. 

Is Ontario’s policy a wise one? A desirable one? Honestly, I have no idea. I don’t follow Ontario politics that closely and am not in a position to judge. Which is why the decision should be left to the representatives Ontario voters elected to make it. But the idea that our politicians should be responsible for policy-making is apparently now a controversial, even extreme, view among Canadian elites. Bizarrely, Seamus O’Regan even called the exercise of democratic law-making “an affront to democracy.”

More plausibly, Coyne called the use of section 33 of the Charter a violation of “the 1982 bargain,” referring to the inclusion of the clause as part of “a careful balancing of concessions, not only between those who wanted ironclad rights guarantees and those who preferred parliamentary supremacy, but also between the federal and provincial governments.” And here we have the root of the problem. I agree that “the 1982 bargain is now off,” but I differ in that I lay the blame not with Ford but where it belongs, with the courts.

When the Charter was mooted in the early 1980s, it provoked alarm on both the Left and the Right. Both sides had had front-row seats for a display of just how aggressive judges can be when empowered by an American-style bill of rights and freed from the English (and, it was thought, Canadian) tradition of restraint, with its institutional allergy to judges mixing law and politics. Both looked with trepidation at the experience of the United States in the 20th century.

On the Left, there was concern that the Charter’s individualistic liberalism would be an obstacle to governments making decisions that put collective interests and the communal goods above private interests. For proof, they pointed to the American experience of the Lochner era. On the Right, there was concern that the privileging of individual autonomy rights would lead to courts striking down laws rooted in a transcendent vision of the public good, tradition, or moral reasoning. They pointed to the American experience of the Warren and Burger courts.

The drafters of the Charter tried to assuage these concerns by drafting the text to avoid these problems. They included an explicit limitations clause up front in section 1, which made it clear that the rights set out in general and abstract terms were not absolute, but could (and should) be limited in the public interest under certain (vaguely stated) conditions. They also avoided using the words “due process” to describe the protection of certain rights in section 7, in part because U.S. judges had read these words (against their common and ordinary meaning) to empower themselves as a super-legislature. Instead, the Charter used the term “principles of fundamental justice.”

Finally, the drafters preserved the principle of parliamentary supremacy by providing in the Charter itself that a legislature is not bound by judicial interpretations of the text. It can, if it chooses to, immunize its legislation (subject to certain limitations) from judicial review using the notwithstanding clause. Coyne refers to this safeguard as “the product of some particularly grubby last-minute bargaining,” which is a strange way to refer to accommodating the principle that, for more than three centuries, has been the soul of the Westminster constitutional system. 

One by one the Supreme Court of Canada, with the help of the same liberals who saw in the Charter the entrenchment of their preferred philosophy, removed these prophylactics. First, in 1985, the Court declared that the intention of the Charter’s drafters to provide only procedural protections to the rights set out in section 7 was irrelevant because it would unduly limit the Court’s power. Then, a year later, the Court imported the German principle of proportionality to inform its interpretation of the Charter. In practice, this gave the Court wide and often arbitrary power while rendering the limitations clause in section 1 an afterthought.

So far, the Court has not taken the drastic step of neutering section 33, though there are many in the legal academy calling on it to do so. In the meantime, our media and political class have done the work for them, mounting a sustained campaign to delegitimize its use. And for most of the last forty years, they have succeeded (outside Quebec, which, to be fair, never agreed to the Charter). Now that the taboo seems to be losing its power ever so slightly, their rhetoric has shifted from alarmist to apocalyptic.

But those hyperventilating about the more frequent use of the notwithstanding clause are directing their concern in the wrong direction. If the notwithstanding clause was supposed to be an exceptional power used only in rare cases of egregious judicial overreach, it is because it was thought that such cases would in fact be rare. But once judges began exercising the power to resolve contested questions of public policy—everything from whether prisoners should be allowed to vote to whether there is a right to assisted suicide to the appropriate sentences for mass murderers—they violated “the 1982 bargain.” 

Sometime in the first two decades of the Charter, all bets should have been off. Once the courts made exceptional cases the norm, governments were not only justified but practically compelled to establish a counter-norm of using the notwithstanding clause to restore balance. The only surprise is that it took them so long, but better late than never.

Now, I don’t want to be too hard on our elites. They have good reasons to want to restrict the scope of legislative policymaking by empowering judges to remove quintessentially political questions from the political forum. At least that’s what they tell us. I’m sure it’s just a coincidence that the courts’ choices about when to step into the political fray and when to hold back tend to align with their own policy preferences. In 1968, JAG Griffith described this as a “conjuring trick” in which judges “render to Caesar the things that are Caesar’s and to themselves the things that are God’s—the ultimate values of justice, fair play, and holding the balance between the powers of the executive and individual rights.” 

Unlike most commentators in the Canadian media this week, I think judges make poor gods. Call me a stickler for democracy, but I prefer that the people wielding ultimate power in my society be accountable and, in a pinch, removable. You can have your serene and untouchable juristocracy. Give me the spirited, messy, agonistic struggle of representative and responsible government, in which our rulers are constantly reminded of what they owe to the ruled. And if it takes the Charter’s notwithstanding clause to protect it, then I hope our governments use it as often as is necessary to do so.