Lab test to check for beef works best if meat is uncooked

The technique used to check for a meat’s origins works best when the meat is raw and worst when it is pan-fried.

Featured image credit: snre/Flickr, CC BY 2.0.

Ahead of Eid al Adha celebrations on September 13, the police in Haryana’s Mewat district were tasked with sniffing through morsels of meat biryani sold by vendors to check for the presence of cow beef. Haryana has some of India’s strictest laws on the production and consumption of cow-meat. The state also receives the largest number of complaints against these acts after Uttar Pradesh, according to the National Crime Records Bureau. However, the human senses are easily waylaid, especially when the political climate is charged, allowing room for the sort of arbitrariness that had goons baying for the blood of Mohammad Akhlaq in Dadri in September 2015.

The way to check if a piece of meat is from a cow is to ascertain if it contains cow DNA. The technique used for this is called the polymerase chain reaction (PCR), which rapidly creates multiple copies of whatever sample DNA is available and then analyses them according to preprogrammed rules. However, the PCR method isn’t very effective when the DNA might be damaged – such as when the meat has been cooked at high temperatures for a long time.

The DNA molecule in most living creatures on Earth consists of a sequence of smaller molecules called nucleotides. This sequence, in its entirety, is unique to each individual creature. A smaller segment of these nucleotides also indicates which species the creature belongs to. It is this segment that a molecular biologist will hunt for using the physical and chemical tools at her disposal. The segment’s nucleotides and their ordering give away the DNA’s identity.

The Veterinary and Animal Sciences University in Hisar, Haryana, is one centre where these tests are conducted. NDTV reported on September 10 that the university had been authorised to do so only two days before it received its first test sample. The vice-chancellor subsequently clarified that two other centres in the state were being set up to conduct these tests – but until they were ready, the university lab would be it.

What would need to be set up? Essentially: an instrument called a thermal cycler to perform the PCR, and someone qualified to run it, usually at the postgraduate level or higher. The following is how PCR works.

Once some double-strands have been extracted from cells in the meat, they are heated to about 96 ºC for around 25 seconds to denature them. This breaks the bonds holding the two strands together, yielding single strands. Then, two molecules, a primer and a probe, are made to latch onto each single strand. Primers are small strands of DNA, typically a dozen nucleotides long, that bind complementarily to the single strand – i.e., the nucleotide adenine on one strand pairs with thymine on the other, and cytosine with guanine. Probes are also complementary strands of nucleotides, but their nucleotides are chosen such that the probe binds to sequences that identify the DNA as being from a cow. Probes also carry some fluorescent material.

To enable this latching, the reaction temperature is held at 50-65 ºC for about 30 seconds.

Next, an enzyme called a DNA polymerase is introduced into the reaction solution. The polymerase elongates the primer by weaving additionally supplied nucleotides along the single strand, making a double-strand all over again. When the polymerase reaches the probe, it cleaves the probe apart and releases the fluorescent material. The resulting glow in the solution signals to the researcher that a nucleotide sequence indicative of a cow is present in the DNA.

If Taq polymerase – extracted from a bacterium, Thermus aquaticus, that thrives in hot springs – is used, the reaction temperature is maintained at 72 ºC. In this scenario, the polymerase weaves in about 1,000 nucleotides per minute.

A molecular biologist repeats these three steps – denaturing the strands, latching on the primer and probe, and elongating the primer using the polymerase – in repeated cycles to make multiple copies of the DNA. If there is one double-strand to begin with, there are two at the end of the first cycle, four at the end of the second and eight at the end of the third. So n cycles produce 2^n DNA double-strands. After 20 cycles, the biologist will possess over a million double-strands; after 40 cycles, almost 1.1 trillion. Depending on the number of cycles, PCR could take between two and six hours.
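To make the arithmetic concrete, here is a minimal sketch in Python of how the copy number grows with the number of cycles; it assumes every cycle doubles the DNA perfectly, which real reactions only approximate, and the function name is my own:

```python
# Copy number after n PCR cycles, assuming perfect doubling in every cycle.
# Real reactions are less efficient, so these figures are upper bounds.
def copies_after(cycles, initial=1):
    return initial * 2 ** cycles

for n in (3, 20, 40):
    print(n, "cycles:", copies_after(n))
# 3 cycles: 8
# 20 cycles: 1,048,576 (over a million)
# 40 cycles: 1,099,511,627,776 (~1.1 trillion)
```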

This many DNA molecules are needed to amplify their presence and expose their nucleotides for detection. The heating cycles are performed in the thermal cycler. This instrument can be modified to track the rate of increase of fluorescence in the solution, and to check if that is in line with the rate at which new DNA double-strands are made. If the two readings line up, the molecular biologist will have her answer: the DNA identifies the meat as being from a cow.

The test gets trickier when the meat is cooked. The heat during preparation could damage the DNA in the meat’s cells, degrading it past the point where PCR can work with it. One biologist The Wire spoke to said that if the “meat is nicely overcooked at high temperature, you cannot PCR anything”. A study published in the journal Meat Science in 2006 attests to this: “… with the exception of pan frying for 80 min, beef was determined in all meat samples including the broth and sauce of the roasted meat” using PCR.

At the same time, a study published in Veterinary World in March 2016 claimed that PCR could check the origins of both cooked and raw meat, and could detect even a small amount of beef (as little as 1%) present in a larger amount of a different meat. The broader consensus among biologists seems to be that the rawer the meat, the easier it is to test. The meat starts to become untestable when cooked at high temperatures.

A PCR test costs anywhere between Rs 2,000 and Rs 7,000.

The Wire
September 15, 2016

Corrected: Environmental journalism in India and false balance

If only in Indian environmental journalism, there is no Other Side anymore for sure.

Featured image credit: mamnaimie/Flickr, CC BY 2.0

I’ve developed a lousy habit of publishing posts before they’re ready to go, and of not being careful enough about how I word things. It happened recently with the review of Matthew Cobb’s book and then last evening with the post about false balance in environmental journalism. I don’t think my blog is small enough any more for me to be able to set the record straight quietly (as evinced by the reader who pointed out some glaring mistakes). So this is me fixing the false balance post. Apologies; I’ll be more careful next time.

In the same vein, any advice/tips on how to figure out when an opinion piece is ready to go (and you’ve not forgotten something) would be much appreciated. What I usually do is take a break for 30 minutes after I’ve finished writing something, then return to it and read it out loud.


It’s no secret that the incumbent NDA government ruling in India has screwed over the country’s environmental protection machinery to such an extent that there remain few meaningful safeguards against corporate expansionism – especially of the rapacious kind. Everything – from land acquisition, tribal protection and coastal regulation to pollution control and assessment – has been systematically weakened. As a result, the government’s actions have become suspect by default.

For journalists in India, this has come with an obvious tilt in the balance of stories. Government actions and corporate interests have become increasingly indefensible. What redemption they may have been able to afford started to dissipate when both factions started to openly rub shoulders with each other, feeding off each others’ strengths: the government’s ability to change policy and modify legislation and the companies’ ability to… well, fund. Prime example: the rise of Gautam Adani.

In Indian journalism, therefore, representing all three sides in an article – the government, corporate interests and the environment – (taking a minimalist PoV for argument’s sake) is no longer the required thing to do. Representation is magnified for environmental interests while government and corporate lines are printed as a matter of courtesy, if at all. This has become acceptable – and I think it is.

Do I have a problem with this? No. This, in fact, is why doing things like asking corporate interests what they have to say, just so their side is represented, is called a false balance.

Is activist journalism equivalent to adversarial journalism simply because it assumes its purpose is to right a wrong? Recently, I edited an article for The Wire about how, despite the presence of dozens of regulatory bodies, nobody is sure who is responsible for conserving and bettering the status of India’s wetlands. The article was penned by an activist and was in the manner of an oped; all claims and accusations were backed up, and it wasn’t a rant. I think it speaks more to the zeitgeist of Indian environmental journalism than to the zeitgeist of journalism in general that opeds like that one have become news reports de jure. In other words: if only in Indian environmental journalism, there is no Other Side anymore for sure.

This advent of a ‘false balance’ recently happened in the case of climate change, where a scientific consensus was involved. That global warming is anthropogenic came to be treated as fact after scientific studies to assess its origins repeatedly reached that conclusion. Therefore, journalistic reports that quote climate-change skeptics are presenting a false balance of the truth. A decision to not quote the government or corporate interests in the case presented above, however, is more fluid, influenced not by the truth-value of a single parameter but by the interests of journalism itself.

Where this takes us isn’t entirely difficult to predict: the notion of balance itself has had a problematic history, and needs to be deprioritised. Its necessity is invoked by the perception among many that journalism is, or has to be, objective. Journalism may have been associated with objectivity at its birth, but today it mostly has no need to be. And when it doesn’t need to be, insisting on it anyway only produces false balances.

Lingua franca

Featured image credit: allypark/Flickr, CC BY 2.0

Guilt can be just as disabling as arrogance, however. The political good which Spivak has done far outweighs the fact that she leads a well-heeled life in the States. If complicity means living in capitalist society, then just about everyone but Fidel Castro stands accused of it; if it means ‘buying in’ (as the Americans revealingly phrase it) to something called Western Reason, then only those racist or non-dialectical thinkers for whom such reason is uniformly oppressive need worry about it. … In any case, Spivak is logically mistaken to suppose that imagining some overall alternative to the current system means claiming to be unblemished by it. To imagine that it would be nice to be in Siena is not necessarily to disavow the fact that I am in Scunthorpe.

These lines are from Terry Eagleton’s review of a book by Gayatri Chakravorty Spivak, called A Critique of Post-Colonial Reason: Toward a History of the Vanishing Present. I’ve heard of Spivak but the other two names in the previous sentence, I admit, ring no bells. More than that, the contents of the book (i.e. the lines quoted by Eagleton in his review) bounce off my head like raindrops off a Teflon boulder. To be sure, Eagleton’s review is about how Spivak is good at what she does but how her admiration of political writers past somehow overlooks the lucidity of their writing – she has written the book in “overstuffed, excessively elliptical prose”. (However, the word ‘unreadable’ doesn’t show up anywhere in the review.)

Anyway, the acknowledgment in the first half of the third line of the quote above was interesting to read. It’s something I’ve had trouble reconciling with, with Arundhati Roy as a popular example: how do you rail against the sort of passive injustice exemplified by oppressing the so-called ‘lower classes’ from the balcony of a palatial home? The second half of the same line is worse – I still don’t get it (although I am embarrassed by my ignorance as well as by my inability to surmount it). My problem is that the sentence overall seems to suggest that enjoying the fruits of a capitalist society is not complicity if only because it implicates a majority of the elite.

I’m wrong… right?

Thanks to Jahnavi for help untangling some of the lines.

ToI successfully launches story using image from China

Has the prime-mover of our space programme been taken hostage by the consequences of not separating space research from defence research?

It may not seem like a big deal – the sort of thing that happens often at the Times of India. After ISRO “successfully” tested its scramjet engine in what seem to have been the early hours of August 28, the Times of India published a story announcing the development. And for the story, the lead image was that of a Chinese rocket. No biggie, right? I mean, copy-editors AFAIK are given instructions not to reuse images, and in this case all the reader needed to be shown was a representative image of a rocket taking off.

The ToI story showing a picture of a Chinese rocket adjacent to the announcement that ISRO has tested its scramjet engine.

But if you look intently, it is a biggie. I’m guessing the Times of India used that image because it had run out of ISRO images to use, or even reuse. In the four days preceding the scramjet engine test, ISRO’s Twitter timeline was empty and no press releases had been issued. All that was known was that a test was going to happen. In fact, even the details of the test turned out to be different: ISRO had originally suggested that the scramjet engine would be fired at an altitude of around 70 km; sometime after, it seems this parameter was changed to 20 km. The test also happened at 6 am, which nobody knew was going to be the case (and which is hardly the sort of thing ISRO could have decided at the last minute).

Even ahead of – and during – the previous RLV-related test conducted on May 23, ISRO was silent on all of the details. What was known emerged from two sources: K. Sivan from the Vikram Sarabhai Space Centre in Thiruvananthapuram and news agencies like PTI and IANS. The organisation itself did nothing in its official capacity to publicly qualify the test. Some people I spoke to today mentioned that this may not have been something ISRO considered worth highlighting to the media. I mean, no one is expecting this test to be sensational; it’s already been established that the four major RLV tests are all about making measurements, and the scram test isn’t even one of them. If this is really why ISRO chooses to be quiet, then it is simply misunderstanding the media’s role and responsibility.

From my PoV, there are two issues at work here. First, ISRO has no incentive to speak to the media. Second, strategic interests are involved in ISRO’s developing a reusable launch vehicle. Together, the two keep the organisation immune to the consequences of zero public outreach. Unlike NASA, whose media machine is one of the best on the planet but which also banks on public support to secure federal funding, ISRO does not have to campaign for its money, nor does it have to be publicly accountable. Effectively, it is non-consultative in many ways and not compelled to engage in conversations. This is still okay. My problem is that ISRO is also caged as a result – the prime-mover of our space programme taken hostage by a system that lets ISRO work in the environment that it does instead of, as I often get the impression from speaking to people who have worked with it, being much more.

In the case of the first RLV test (the one on May 23), photos emerged a couple of days after the test had concluded, while no announcement, tweet or release was issued beforehand; it even took a while to ascertain the test’s success. In fact, after the test, Sivan had told Zee News that there may have been a flaw in one of ISRO’s calculations, but the statement was not followed up. I’m also told now that today’s scram test was something ISRO was happy with and that the official announcement will happen soon. These efforts, and this communication, even if made privately, are appreciated – but it’s not all that could have been done. One of the many consequences of this silence is that a copy-editor at the Times of India has to work with very little to publish something worth printing. And then gets ridiculed for it.

Corrected: ‘Life’s Greatest Secret’ by Matthew Cobb

‘Life’s Greatest Secret’ focuses on those efforts to explore the DNA that were only a sideshow in ‘The Gene’ but possess intrigues of their own.

An earlier version of this post was published by mistake. This is the corrected version. Featured image credit: amazon.in

When you write a book like Siddhartha Mukherjee’s The Gene: An Intimate History, the chance of a half-success is high. You will likely only partly please your readers instead of succeeding or even failing completely. Why? Because the scope of your work will be your biggest enemy, and in besting this enemy, you will at various points be forced to strike a fine balance between breadth and depth. I think the author was not just aware of this problem but embraced it: The Gene is a success for having been written. Over 490 pages, Mukherjee weaves together a social, political and technical history of the genome, and unravels how developments from each strain have fed into the others. The effect is that it has become a popular choice among biology beginners but a persistent point of contention among geneticists and other researchers. That it has been impactful, however, is incontestable.

At the same time, the flipside of such a book on anything is its shadow, where anything less ambitious or even less charming can find itself languishing. This I think is what has become of Life’s Greatest Secret by Matthew Cobb. Cobb, a zoologist at the University of Manchester, traces the efforts of scientists through the twentieth century to uncover the secrets of DNA. To be sure, this is a journey many authors have retraced, but what Cobb does differently are broadly two things. First: he sticks to the participants and the progress of science, and doesn’t deviate from this narrative, which can be hard to keep interesting. Second: he combines his profession as a scientist and his education as an historian to stay aware, and keep the reader aware, of the anthropology of science.

On both counts – of making the science interesting while tasked with exploring a history that can become confusing – Cobb is assisted by the same force that acted in The Gene’s favour. Mukherjee banked on the intrigues inherent in a field of study that has evolved to become extremely influential as well as controversial to get the reader in on the book’s premise; he didn’t have to spend too much effort convincing a reader why books like his are important. Similarly, Life’s Greatest Secret focuses on those efforts to explore the DNA that played second fiddle to the medicinal applications of genetics in The Gene but possess intrigues of their own. And because Cobb is a well-qualified scientist, he is familiar with the various disguises of hype and easily cuts through them – as well as teases out and highlights less well-known episodes.

For example, my favourite story is of the Matthaei-Nirenberg experiment in 1961 (chapter 10, Enter The Outsiders). Marshall Nirenberg was the prime mover in this story, which was pegged on the race to map the nucleotide triplets to the amino acids they coded for. The experiment was significant because it ignored one of Francis Crick’s theories, popular at the time, that a particular kind of triplet couldn’t code for an amino acid. The experiment roundly drubbed this theory, and in the process delivered a much-needed dent to the circle of self-assured biologists who took Crick’s words as gospel. Another way the experiment triumphed was by showing that ‘outsiders’ (i.e. non-geneticists like the biochemists Nirenberg and Heinrich Matthaei) could also contribute to DNA research, and how an acceptance of this fact was commonly preceded by resentment from the wider community. Cobb writes:

Matthew Meselson later explained the widespread surprise that was felt about Nirenberg’s success, in terms of the social dynamics of science: “… there is a terrible snobbery that either a person who’s speaking is someone who’s in the club and you know him, or else his results are unlikely to be correct. And here was some guy named Marshall Nirenberg; his results were unlikely to be correct, because he wasn’t in the club. And nobody bothered to be there to hear him.”

This explanation is reinforced by a private letter to Crick, written in November 1961 by the Nobel laureate Fritz Lipmann, which celebrated the impact of Nirenberg’s discovery but nevertheless referred to him as ‘this fellow Nirenberg’. In October 1961, Alex Rich wrote to Crick praising Nirenberg’s contribution but wondering, quite legitimately, ‘why it took the last year or two for anyone to try the experiment, since it was reasonably obvious’. Jacob later claimed that the Paris group had thought about it but only as a joke – ‘we were absolutely convinced that nothing would have come from that’, he said – presumably because Crick’s theory of a commaless code showed that a monotonous polynucleotide signal was meaningless. Brenner was frank: ‘It didn’t occur to us to use synthetic polymers.’ Nirenberg and Matthaei had seen something that the main participants in the race to crack the genetic code had been unable to imagine. Some later responses were less generous: Gunther Stent of the phage group implied to generations of students who read his textbook that the whole thing had happened more or less by accident, while others confounded the various phases of Matthaei and Nirenberg’s work and suggested that the poly(U) had been added as a negative control, which was not expected to work.

A number of such episodes studded throughout the book make it an invaluable addition to a science-enthusiast’s bookshelf. In fact, if something has to be wrong at all, it’s how the book finishes. In a move that is becoming custom, the last hundred or so pages are devoted to discussing genetic modification and CRISPR/Cas9, a technique and a tool that will surely shape the future of modern genetics but in a way nobody is quite sure of yet. This uncertainty is well established, in the sense that it’s okay to be confused about where the use of these entities is taking us. However, it also means that every detailed discussion about them has become repetitive. Neither Cobb nor Mukherjee is able to add anything new on this front that, in some sense, hasn’t already been touched upon. (Silver lining: the books do teach us to better articulate our confusion.)

Verdict: 4/5

Anthropocene

It’s an epoch whose centre of attention de facto is the human even as the attention makes us more conscious of the other multitudes with which we share this universe.

The Anthropocene is not simply an epoch. It comes with an attendant awareness of our environment, and of the environment we are for other creatures, that pervades our activities and thoughts. Humans of the Anthropocene have left an indelible mark on the natural world around them (mostly of carbon) even as they – as we – have embedded within ourselves the products of decades of technological innovation, even as we upload our memories into the cloud. Simultaneously, we’re also becoming more aware of the ‘things’ we’re made of: of gut bacteria that supposedly affect our moods and of what our genes tell us about ourselves. It’s an epoch whose centre of attention de facto is the human even as that attention makes us more conscious of the other multitudes with which we share this universe.

UCal Irvine’s ‘fifth force’ farce

If a journalist buys into a UCI press release about some kind of ‘confirmation’ of a fifth force – a claim subsequently found to be simply false – an editor won’t face a tough choice about which section to axe.

A screenshot of the UCI press release in question. Source: UCI

Michael Moyer just concluded a rant on Twitter (at the time of writing this) about how a press release on a recent theoretical physics result developed at the University of California, Irvine, had muddled up coverage on an important experimental particle physics result. I was going to write about this in detail for The Wire but my piece eventually took a different route, so I’m going to put some of my thoughts down on the UCI fuck-up here.

Let’s begin with some background: In April 2015, a team of nuclear physicists from the Hungarian Academy of Sciences (Atomki) announced that they had found an anomalous decay mode of an unstable beryllium-8 isotope. They contended in their paper, eventually published in Physical Review Letters in January 2016, that the finding had no explanation in nuclear physics. A team of American physicists – from the University of California, Irvine, and the University of Kentucky, Lexington – picked up on this paper and tried to draw up a theory that would (a) explain this anomaly while (b) remaining a derivative of existing theoretical knowledge (as is most theoretical work at the edge of physics). There are many ways to do this: the UCI-UKL conclusion was a theory that suggested the presence of a new kind of boson, hitherto undiscovered, which mediated the beryllium-8 decay to give rise to the anomalous result observed at Atomki.

Now, the foreground: A UCI press release announcing the development of the theory by its scientists had a headline saying the Atomki anomalous result had been “confirmed” at UCI. This kicked off a flurry of pieces in the media about how a ‘fifth force’ of nature had been found (which is what the discovery of a new boson would imply), that all of physics had been overturned, etc. But the press release’s claim was clearly stupid. It was published no more than a week after the particle physics community found out that the December 2015 digamma bump at the LHC had been a glitch in the data, when the community was making peace with the fact that no observation is final until it has been confirmed with the necessary rigour, even if physicists had come up with over 500 theoretical explanations for it. The release was also stupid because it blatantly defied (somewhat) common sense: how could a theoretical model built to fit the experimental data “confirm” the experimental data itself?

There’s even a paragraph in there that makes it sound like the particle’s been found! (My comments are in square brackets and all emphasis has been added:)

The UCI work demonstrates [misleading use] that instead of being a dark photon, the particle may be a “protophobic X boson.” While the normal electric force acts on electrons and protons, this newfound [the thing hasn’t been found!] boson [a boson is simply one interpretation of the experimental finding] interacts only with electrons and neutrons – and at an extremely limited range. Analysis co-author Timothy Tait, professor of physics & astronomy, said, “There’s no other boson that we’ve observed that has this same characteristic. [Does this mean UCI has actually observed this particular boson?] Sometimes we also just call it the ‘X boson,’ where ‘X’ means unknown.”

Moyer says in one of his tweets that PR machines will always try to hype results, outcomes, etc. – this is true, and journalists who don’t cut through this hype often end up writing flattering articles devoid of criticism (effectively missing the point of their profession, so to speak). However, as far as I’m concerned, what the UCI PR has done is not build hype as much as grossly mislead journalists, and I blame the machine in this case more than the journalists who wrote the “fifth force found” headlines. Journalism is already facing a credibility crisis in many parts of the world without having to look out for misguided press releases from universities of the calibre of UCI. Yes, easily disturbed qualities like credibility are often taken on trust by journalists, or anyone else, because we trust institutional authorities to take such qualities seriously themselves.

(Another such quality is ‘reputation’. Nicholas Dirks just quit because his actions had messed with the reputation of UCal Berkeley.)

This is a problem exacerbated by the fact that journalism also has a hard time producing – and subsequently selling – articles about particle physics. Everyone understands that the high-energy physics (HEP) community is significantly invested in maintaining a positive perception of their field, one that encourages governments to fund the construction of mammoth detectors and colliders. One important way to maintain this perception is to push for favourable coverage in the mainstream media of HEP research and keep the people – the principal proxy for government support – thinking about HEP activities for the right reasons. The media, in turn, can’t always commission pieces on all topics nor can it manufacture the real estate even if it has the perfect stories; every piece has to fight it out. And in crunch times, science stories are the first to get the axe; many mainstream Indian publications don’t even bother with a proper science section.*

If, in this context, a journalist buys into a UCI press release about some kind of ‘confirmation’ of a fifth force – a claim subsequently found to be simply false – an editor won’t face a tough choice about which section she has to axe.

What happens next? We wait for experimental physicists to try to replicate the Atomki anomaly in experiments around the world. If nothing else, this must happen because the Atomki team has published claims of having discovered a new particle at least twice before – in 2008 and 2012 – both at a significance upwards of 3 sigma (i.e., the chance of the result being a fluke is roughly 1 in 700). This is the statistical threshold accepted by the particle physics community as the point at which a piece of data counts as evidence. However, the problem with the Atomki results is that both papers announcing the discoveries were later retracted by the scientists, casting all their claims of statistical validity into doubt. The April 2015 result was obtained with a claimed significance of 6.8 sigma.
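To see how sigma values map onto ‘fluke’ odds, here is a small Python sketch using the one-sided Gaussian tail probability, which is the particle physics convention; the function name is mine and the numbers say nothing about the Atomki analysis itself:

```python
# Convert a significance quoted in sigma to the one-sided probability of a
# statistical fluke, assuming a Gaussian distribution (the usual convention).
from math import erfc, sqrt

def fluke_odds(sigma):
    p = 0.5 * erfc(sigma / sqrt(2))  # one-sided tail probability
    return p, round(1 / p)

for s in (3, 5, 6.8):
    p, one_in = fluke_odds(s)
    print(f"{s} sigma: p = {p:.3g}, about 1 in {one_in:,}")
# 3 sigma   -> about 1 in 740 (the 'evidence' threshold)
# 5 sigma   -> about 1 in 3.5 million (the 'discovery' threshold)
# 6.8 sigma -> about 1 in 190 billion
```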

*Even The Hindu’s science page that used to appear every Thursday in the main newspaper was shunted last year to appear every Monday in one of its supplements. It never carried ads.

Some notes and updates

Four years of the Higgs boson, live-tweeting and timezones, new music, and quickly reviewing an Erikson book.

Four years of the Higgs boson

Missed this, didn’t I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in January 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don’t exactly mark the occasion every year except to recap whatever’s been happening in particle physics. And this year: everyone’s still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV when data-taking began at the LHC in late May, but strong rumours from within CERN have it that such a particle probably doesn’t exist (i.e. it’s vanishing in the new data-sets). Pity. The favoured way to anticipate what might come, well before the final announcements are made in August, is to keep an eye out for conference announcements in mid-July. If they’re made, it’s a strong giveaway that something’s been found.

Live-tweeting and timezones

I’ve a shitty internet connection at home in Delhi, which means I couldn’t get to see the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA’s social media team had done such a great job of hyping up the insertion (deservingly so) that it seemed as if all the 480 accounts I followed were tweeting about it. I don’t believe I missed anything at all, except perhaps the sounds of applause. Twitter’s awesome that way, and I’ll say that even if it means I’m stating the obvious. One thing did strike me: all times (of the various events in the timeline) were published in UTC and EDT. This makes sense because converting from UTC to a local timezone is easy (IST = UTC + 5.30) while EDT corresponds to the US east coast. However, the thing about IST being UTC + 5.30 isn’t immediately apparent to everyone (at least not to me), and every so often I wish an account tweeting from India, such as a news agency’s, would use IST. As it stands, I end up doing the conversion every time.

New music

I don’t know why I hadn’t found Yat-kha earlier considering I listen to Huun Huur Tu so much, and Yat-kha is almost always among the recommendations (all bands specialising in throat-singing). And while Huun Huur Tu likes to keep their music traditional and true to its original compositional style, Yat-kha takes it a step further, banding its sound up with rock, and this tastes much better to me. With a voice like Albert Kuvezin’s, keeping things traditional can be a little disappointing – you can hear why in the song above. It’s called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it’s all the more sensual when its rendition is accomplished with human voice, rising and falling. Another example of what I’m talking about is called Yenisei punk. Finally, this is where I’d suggest you stop if you’re looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It’s terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third, from the denouement. If the second part has to benefit from anything at all, then it is the story itself, not the intensity of the stakes within its narrative. At least, that’s my takeaway from Fall of Light, the second book of Steven Erikson’s Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson’s epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain in the series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine has been wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short-stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It’s neither eager to shed its burden nor is it eager to take on new ones. If you’ve read the Malazan series, I’d say he’s written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.

Talking about science, NCBS

On June 24, I was invited to talk at the NCBS Science Writing Workshop, held every year for 10 days.

On June 24, I was invited to talk at the NCBS Science Writing Workshop, held every year for 10 days. The following notes are some of my afterthoughts from that talk.

Science journalism online is doing better now than science journalism in print, in India. But before we discuss the many ways in which this statement is true, we need to understand what a science story can be as it is presented in the media. I’ve seen six kinds of science pieces:

1. Scientific information and facts – Reporting new inventions and discoveries, interesting hypotheses, breaking down complex information, providing background information. Examples: first detection of g-waves, Dicty World Race, etc.

2. Processes in science – Discussing opinions and debates, analysing how complex or uncertain issues are going to be resolved, unravelling investigations and experiments. Examples: second detection of g-waves, using CRISPR, etc.

3. Science policy – Questioning/analysing the administration of scientific work, funding, HR, women in STEM, short- and long-term research strategies, etc. Examples: analysing DST budgets, UGC’s API, etc.

4. People of science – Interviewing people, discussing choices and individual decisions, investigating the impact of modern scientific research on those who practice it. Examples: interviewing women in STEM, our Kip Thorne piece, etc.

5. Auxiliary science – Reporting on the impact of scientific processes/choices on other fields (typically closer to our daily lives), discussing the economic/sociological/political issues surrounding science but from an economic/sociological/political PoV. Examples: perovskite in solar cells, laying plastic roads, etc.

6. History and philosophy of science – Analysing historical and/or philosophical components of science. Examples: some of Mint on Sunday’s pieces, our columns by Aswin Seshasayee and Sunil Laxman, etc.

Some points:

1. Occasionally, a longform piece will combine all six types – but you shouldn’t force such a piece without an underlying story.

2. The most common type of science story is 5 – auxiliary science – because it is the easiest to sell. In these cases, the science itself plays second fiddle to the main issue.

3. Not all stories fall cleanly into one bin or another. The best science pieces often straddle bins, but the worst pieces get 1 and 2 wrong, are misguided about 4 (usually because they get 1 and 2 wrong) or misrepresent the science in 5.

4. Journalism is different from writing in that journalism has a responsibility to expose and present the truth. At the same time, 1, 2 and 6 stories – presenting facts in a simpler way, discussing processes, and discussing the history and philosophy of science – can be as much journalism as writing because they increase awareness of the character of science.

5. Despite the different ways in which we’ve tried to game the metrics, one thing has held true: content is king. A well-written piece with a good story at its heart may or may not do well – but a well-packaged piece that is either badly written or has a weak story at its centre (or both) will surely flop.

6. You can always control the goodness of your story by doing due diligence, but if you’re pitching your story to a publisher on the web, you have to pitch it to the right publisher. This is because those who do better on the web only do better by becoming niche publications. If a publication wants to please everyone, it has to operate at a very large scale (>500 stories/day). On the other hand, a niche publication will have clearly identified its audience and will only serve that segment. Consequently, only some kinds of science stories – as identified by those niche publications’ preferences in science journalism – will be popular on the web. So know what editors are looking for.

Tom Kibble (1932-2016)

Kibble was one of the six theorists who, in 1964, came up with the ABEGHHK’tH mechanism to explain how gauge bosons acquired mass.

Featured image: From left to right: Tom Kibble, Gerald Guralnik, Richard Hagen, François Englert and Robert Brout. Credit: Wikimedia Commons.

Sir Tom Kibble passed away on June 2; I learnt this morning, with a bit of sadness, that I’d missed the news. It’s hard to write about someone in a way that prompts others either to find out more about that person or, if they knew him or his work, to recall their memories of him, when I myself would like only to do the former now. So let me quickly spell out why I think you should pay attention: Kibble was one of the six theorists who, in 1964, came up with the ABEGHHK’tH mechanism to explain how gauge bosons acquired mass. The ‘K’ in those letters stands for ‘Kibble’. However, we remember the mechanism only by the second ‘H’, which stands for Higgs; the other letters fell off for reasons not entirely clear – although convenience might’ve played a role. And while everyone refers to the mechanism as the Higgs mechanism, Peter Higgs, the man himself, continues to call it the ABEGHHK’tH mechanism.

Anyway, Kibble was known for three achievements. The first was to co-formulate – alongside Gerald Guralnik and Richard Hagen – the ABEGHHK’tH mechanism. It was validated in early 2013, earning only Higgs and ‘E’, François Englert, the Nobel Prize for physics that year. The second came in 1967: an explanation of how the mechanism accords the W and Z bosons, the carriers of the weak nuclear force, with mass but not the photon. The solution was crucial in validating the electroweak theory, whose three conceivers (Sheldon Glashow, Abdus Salam and Steven Weinberg) won the Nobel Prize for physics in 1979. The third was the postulation of the Kibble-Żurek mechanism, which explains the formation of topological defects in the early universe by applying the principles of quantum mechanics to cosmological objects. This work was done alongside the Polish-American physicist Wojciech Żurek.

I spoke to Kibble once, only for a few minutes, at a conference at the Institute of Mathematical Sciences, Chennai, in December 2013 (at the same conference where I met George Sterman as well). This was five months after Fabiola Gianotti had made the famous announcement at CERN that the LHC had found a particle that looked like the Higgs boson. I’d asked Kibble what he made of the announcement, and where we’d go from here. He said, as I’m sure he would’ve a thousand times before, that it was very exciting to be proven right after 50 years; that it’d definitively closed one of the biggest knowledge gaps in modern theoretical particle physics; and that there was still work to be done by studying the Higgs boson for more clues about the nature of the universe. He had to rush; a TV crew was standing next to me, nudging me for some time with him. I was glad to see it was Puthiya Thalaimurai, a Tamil-language news channel, because it meant the ‘K’ had endured.

Rest in peace, Tom Kibble.

‘Infinite in All Directions’, a science newsletter

The idea for the newsletter is a derivative of a reading challenge a friend proposed: wherein a group of us would recommend books for each other to read, especially titles that we might not come upon by ourselves.

At 10 am (IST) every Monday, I will be sending out a list of links to science stories from around the web, curated by significance and accompanied by a useful blurb, as a newsletter. If you’re interested, please sign up here. If you’d like to know more before signing up, read on.

It’s called Infinite in All Directions – a term coined by Freeman Dyson for nothing really except the notion behind this statement from his book of the same name: “No matter how far we go into the future, there will always be new things happening, new information coming in, new worlds to explore, a constantly expanding domain of life, consciousness and memory.”

I will be collecting the links and sending the newsletter out on behalf of The Wire, whose science section I edit. And so, you can trust the links to not be to esoteric pieces (which I’m fond of) but to pieces I’d have liked to have covered at The Wire but couldn’t.

More than that, the idea for the newsletter is essentially a derivative of a reading challenge a friend proposed a while ago: wherein a group of us would recommend books for each other to read, especially titles that we might not come upon by ourselves.

Some of you might remember that a (rather, the same) friend and I used to send out the Curious Bends newsletter until sometime last year. The Infinite in All Directions newsletter will be similarly structured but won’t necessarily be India-centric. In fact, a (smaller than half) section of the newsletter may even be consistently skewed toward the history and philosophy of science. But you can trust that the issues will all be contemporary.

Apart from my ‘touch’ coming through with the selection, I will also occasionally include my take on some topics (typically astro/physics). You’re welcome to disagree (just be nice) – all replies to the newsletter will land up in my inbox. You’re also more than welcome to send me links to include in future issues.

Finally: Each newsletter will not have a fixed number of links – I don’t want to link you to pieces I myself haven’t been able to appreciate. At the same time, there will be at least five or so links. I think The Wire alone puts out that many good stories each week.

I hope you enjoy reading the newsletter. As with this blog, Infinite in All Directions will be a labour of love. Please share it with your friends and anybody who might be interested in such a service. Again, here is the link to subscribe.

A universe out of sight

We’ve found that the universe is expanding faster than we thought. The LHC has produced more data in a single day than ever before. Good news, right?

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and the projections they dwell on might never come to be while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing: it is growing in volume like a balloon, continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects will never reach our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

How is the universe’s rate of expansion changing? There is a number for this, called the deceleration parameter:

q = – (1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is moving away. So, if q is positive, the universe’s expansion is slowing down. If q is zero, the universe is expanding at a constant rate and 1/H is the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dense, electron-degenerate core left behind by a dead star) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a fusion reaction. In the next few seconds, the reaction expels about 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles appear, you can tell how far away they are burning.

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. Hence the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed – λ_emitted)/λ_emitted

In English: the redshift is the factor by which the observed wavelength has changed from the emitted wavelength. If z = 1, then the observed wavelength is twice the emitted wavelength. If z = 5, then the observed wavelength is six times the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d0,

where d(t) is the distance between the two objects at time t and d0 is the distance between them at some reference time t0. Since the scale factor would be constant throughout the universe, d(t) and d0 can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
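Here is a minimal sketch in Python of the scale-factor arithmetic above, reproducing the two worked examples; the function name is my own:

```python
# Cosmological redshift z -> scale factor a(t) = 1/(1 + z), read here as the
# universe's size relative to today (i.e. taking d0 = 1 in d(t) = a(t) * d0).
def scale_factor(z):
    return 1 / (1 + z)

for z in (0.6, 10.7):
    print(f"z = {z}: the universe was {scale_factor(z):.4f} of its current size")
# z = 0.6  -> 0.6250, i.e. 5/8 (the supernova example)
# z = 10.7 -> 0.0855, about one-twelfth (MACS0647-JD)
```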

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t0, light will now need t > t0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t, because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness arriving sooner in the future, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did by Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of physics?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and more often will the Higgs boson decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/2ε₀hc;

and between quarks and the strong nuclear force is the running coupling that underlies asymptotic freedom:

α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
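As a quick sanity check on the first of these formulas, here is a short Python sketch that plugs in the SI values of the constants (the variable names are mine):

```python
# Numerical check of the fine-structure constant, alpha = e^2 / (2 * eps0 * h * c).
e = 1.602176634e-19      # elementary charge, in coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, in F/m
h = 6.62607015e-34       # Planck constant, in J s
c = 299792458.0          # speed of light, in m/s

alpha = e**2 / (2 * eps0 * h * c)
print(alpha, 1 / alpha)  # ~0.0072974 and ~137.04, the famous 1/137
```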

So, if the LHC’s experiments require some number P of Higgs bosons to make their measurements, and its detectors are tuned to detect that group of decay particles, then of the order of P times X times Y collisions ought to have happened. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes, such as particles that are formed very rarely. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that you roll the die six times and never get a four. The chance can just as well be represented as 10/60, and you could roll the die 60 times and still never get a four (though the odds of that happening are even lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have got some fours – but say you still didn’t get fours one-sixth of the time. So you take it up a notch: you make the machine roll the die a million times. The odds of a four should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
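A small simulation makes the point; this sketch assumes an unbiased six-sided die, and the seed and function name are my own choices:

```python
# Monte Carlo version of the die-rolling analogy: the observed fraction of fours
# converges to the true probability (1/6 ~ 0.1667) only as the rolls pile up.
import random

random.seed(42)  # fixed seed so the 'experiment' is reproducible

def fraction_of_fours(rolls):
    return sum(random.randint(1, 6) == 4 for _ in range(rolls)) / rolls

for n in (6, 60, 1000, 1_000_000):
    print(f"{n:>9} rolls: fraction of fours = {fraction_of_fours(n):.4f}")
# With 6 or 60 rolls the fraction fluctuates wildly; with a million rolls it
# settles near 0.1667 - the same logic behind piling up collisions at a collider.
```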

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger schemes of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC smashed two beams of billions of protons, each beam accelerated to 4 TeV and separated into 2,000+ bunches, head on at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These numbers have been heightened so that new physics has fewer places to hide; we are at the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread

Axiomatically, these are the desiderata at stake should the LHC find nothing, even more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that persistent day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

**I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.