The power of the Sun

This video was pretty cool until the end. “Harness the power of the Sun”? This torch emits 2,300 lumens from a battery that lasts 30 minutes. The Sun? Its light alone delivers about 98,000 lux at the Earth’s surface, the output of roughly 100 billion megatonnes of TNT blowing up every second for 9 billion years. This is also a chance to marvel at the hypernova, a.k.a. the superluminous supernova: a few billion billion megatonnes of TNT per second, a.k.a. 10 million Suns blowing up per second. It’s the hypernova that actually harnesses the power of the Sun.
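A back-of-the-envelope check of the "100 billion megatonnes per second" figure, using the standard value for the Sun's luminosity (the constants here are textbook values, not from the video):

```python
# Sketch: the Sun's power output expressed in megatonnes of TNT per second.
SOLAR_LUMINOSITY_W = 3.828e26  # watts (IAU nominal solar luminosity)
MEGATONNE_TNT_J = 4.184e15     # joules released by one megatonne of TNT

megatonnes_per_second = SOLAR_LUMINOSITY_W / MEGATONNE_TNT_J
print(f"{megatonnes_per_second:.3e}")  # ~9.149e10, i.e. roughly 100 billion
```

The result, about 9 x 10^10 megatonnes per second, squares with the figure quoted above.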

Aside | Posted on by | Tagged , , , , , , | Leave a comment

We did start the fire

This is one of the dumbest things I’ve seen done in a while:

The Asian News International network tweeted that a group of Indian priests had performed a long yagya in Tokyo for the express purpose of purifying the environment. A yagya (or a yagna, although I’m not sure if they’re the same) typically involves keeping a pyre of wooden logs lubricated with ghee burning for a long time. So a plea to the gods to clear the air was encoded in many kilograms of carbon dioxide? Clearly these god-fearing gentlemen insist that they will accept only the gods’ solutions to their problems – not anyone else’s, no matter how well-motivated. I dearly hope that, if nothing else, the event will create an ironic awareness of what’s at stake.

At least the other shit-peddlers back home have had the sense to not force the cow piss down our throats (ignoring the massive public healthcare and R&D funding cuts, of course). If my ire seems disproportionate to the amount of pollutants these yagnas will have released, it’s because I fear someone else will get ideas now, especially those aspiring to get into the record books. (How often have you heard the anchor on Sun TV news croon at the tail-end of the segment at 7 pm everyday, “XYZ கின்னெஸ் சாதனை படைத்தார்” – “XYZ set a Guinness world record”?)

Featured image credit: kabetojamaicafotografia/Flickr, CC BY 2.0.

Posted in Culture & Society, Opinions, Science & Technology | Tagged , , , , , , , | 1 Comment

Cybersecurity in space

Featured image: The International Space Station, 2011. Credit: gsfc/Flickr, CC BY 2.0

On May 19, 1998, the Galaxy IV satellite shut down unexpectedly in its geostationary orbit. Immediately, most of the pagers in the US stopped working even as the Reuters, CBS and NPR news channels struggled to stay online. The satellite was declared dead a day later but it was many days before the disrupted services could be restored. The problem was found to be an electrical short-circuit onboard.

Such are the effects of a single satellite going offline. What if satellites could be shut down en masse? The much-discussed consequences would be terrible, which is why satellite manufacturers and operators are constantly devising new safeguards against potential threats.

However, the pace of technological advancement, together with the proliferation of the autonomous channels through which satellites operate, has ensured that operators are perpetually playing catch-up. There’s no broader vision guiding how affected parties could respond to rapidly evolving threats, especially in a way that consistently protects the interests of stakeholders across borders.

With the advent of low-cost launch options, including from agencies like ISRO, since the 1990s, the use of satellites to prop up critical national infrastructure – including becoming part of the infrastructure themselves – stopped being the exclusive demesne of developed nations. But at the same time, the drop in costs signalled that the future of satellite operations might rest with commercial operators, leaving them to deal with technological capabilities that until then were being handled solely by the defence industry and its attendant legislative controls.

Today, satellites are used for four broad purposes: Earth-observation, meteorology and weather-forecasting; navigation and synchronisation; scientific research and education; and telecommunication. They’ve all contributed to a burgeoning of opportunities on the ground. But in terms of their own security, they’ve become a bloated balloon waiting for the slightest prick to deflate.

How did this happen?

Earlier in September, three Chinese engineers were able to hack into two Tesla electric cars from 19 km away. They were able to move the seats and mirrors and, worse, control the brakes. Fortunately, it was a controlled hack conducted with Tesla’s cooperation, after which the engineers reported the vulnerabilities they’d found to Tesla.

The white-hat attack demonstrated a paradigm: that physical access to an internet-enabled object was no longer necessary to mess with it. Its corollary was that physical separation between an attacker and the target no longer guaranteed safety. In this sense, satellites occupy the pinnacle of our thinking about the inadequacy of physical separation; we tend to leave them out of discussions on safety because satellites are so far away.

It’s in recognition of this paradigm that we need to formulate a multilateral response that ensures minimal service disruption and the protection of stakeholder interests at all times in the event of an attack, according to a new report published by Chatham House. It suggests:

Development of a flexible, multilateral space and cybersecurity regime is urgently required. International cooperation will be crucial, but highly regulated action led by government or similar institutions is likely to be too slow to enable an effective response to space-based cyberthreats. Instead, a lightly regulated approach developing industry-led standards, particularly on collaboration, risk assessment, knowledge exchange and innovation, will better promote agility and effective threat responses.

Then again, how much cybersecurity do satellites need really?

Because when we speak about cyber-anything, our thoughts hardly venture out to include our space-borne assets. When we speak about cyber-warfare, we imagine hackers at their laptops targeting servers in some other part of the world – but a part of the world, surely, and not a place floating above it. However, given how satellites are becoming space-borne proxies for state authority, they need to be treated as significant space-borne liabilities as well. There’s even precedent: in November 2014, an NOAA satellite was hacked by Chinese actors with minor consequences. But in the process, the attack revealed major vulnerabilities that the NOAA rushed to patch.

So the better question would be: what kinds of protection do satellites need against cyber-threats? To begin with, hackers have been able to jam communications and replace legitimate signals with false ones (called spoofing). They’ve also been able to invade satellites’ SCADA systems, introduce viruses to trip up software and mount denial-of-service (DoS) attacks. The introduction of micro- and nanosatellites has also provided hackers with an easier conduit into larger networks.

Another kind of protection that could be useful is from the unavoidable tardiness with which governments and international coalitions react to cyber-warfare, often due to over-regulation. The report states, “Too centralised an approach would give the illicit actors, who are generally unencumbered by process or legislative frameworks, an unassailable advantage simply because their response and decision-making time is more flexible and faster than that of their legitimate opponents.”

Do read the full report for an interesting discussion of the role cybersecurity plays in the satellite services sector. It’s worth your time.

Posted in Science & Technology | Tagged , , , , , , , , , , , , , , , , | Leave a comment

Lab test to check for beef works best if meat is uncooked

Featured image credit: snre/Flickr, CC BY 2.0.

Ahead of Eid al Adha celebrations on September 13, the police in Haryana’s Mewat district were tasked with sniffing through morsels of meat biryani sold by vendors to check for the presence of beef. Haryana has some of India’s strictest laws on the production and consumption of cow-meat. The state also receives the largest number of complaints against these acts after Uttar Pradesh, according to the National Crime Records Bureau. However, the human senses are easily waylaid, especially when the political climate is charged, allowing room for the sort of arbitrariness that had goons baying for the blood of Mohammad Akhlaq in Dadri in September 2015.

The way to check if a piece of meat is from a cow is to ascertain if it contains cow DNA. The chemical test used for this is called a polymerase chain reaction (PCR), which rapidly creates multiple copies of whatever sample DNA is available and then analyses them according to preprogrammed rules. However, the PCR method isn’t very effective when the DNA might be damaged – such as when the meat is cooked at high temperatures for a long time.

The DNA molecule in most living creatures on Earth consists of a sequence of smaller molecules called nucleotides. The sequence of nucleotides in its entirety is unique to each individual creature as long as its cells contain DNA. A segment of these nucleotides also indicates what species the creature belongs to. It is this segment that a molecular biologist will hunt for using the physical and chemical tools at her disposal. The segment’s nucleotides and their ordering will give away the DNA’s identity.

The Veterinary and Animal Sciences University in Hisar, Haryana, is one centre where these tests are conducted. NDTV reported on September 10 that the university had been authorised to do so only two days before it received its first test sample. The vice-chancellor subsequently clarified that two other centres in the state were being set up to conduct these tests – but until they were ready, the university lab would be it.

What would need to be set up? Essentially: an instrument called a thermal cycler to perform the PCR, and someone qualified to conduct it, usually at the postgraduate level or higher. The following is how PCR works.

Once some double-strands have been extracted from cells in the meat, they are heated to about 96 ºC for around 25 seconds to denature them. This breaks the bonds holding the two strands together, yielding single strands. Then, two molecules, a primer and a probe, are made to latch onto each DNA single-strand. Primers are small strands of DNA, typically a dozen nucleotides long, that bind complementarily to the single-strand – i.e., the nucleotide adenine on one strand with thymine on the other, and cytosine on one with guanine on the other. Probes are also complementary strands of nucleotides, but their nucleotides are chosen such that the probe binds to sequences that identify the DNA as being from a cow. They also contain some fluorescent material.
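The complementarity rule described above can be sketched in a few lines of Python. This is an illustration only; the function name and the example sequence are invented, not part of any lab protocol:

```python
# Toy sketch of complementary base pairing: given one single-strand,
# compute the sequence a complementary strand (e.g. a primer) must have.
# By convention the result is read in the reverse (5'->3') direction.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Pair A with T and C with G, then reverse to the conventional order."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCCGTA"))  # -> TACGGCAT
```

A primer binds where its sequence matches the reverse complement of a stretch of the single-strand; a probe is chosen the same way, against a cow-specific stretch.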

To enable this latching, the reaction temperature is held at 50-65 ºC for about 30 seconds.

Next, an enzyme called a DNA polymerase is introduced into the reaction solution. The polymerase elongates the primer by weaving additionally supplied nucleotides along the single-strand to make a double-strand all over again. When the polymerase reaches the probe, it degrades the probe and releases the fluorescent material. The resulting glow in the solution signals to the researcher that a nucleotide sequence indicative of a cow is present in the DNA.

If Taq polymerase, extracted from thermophilic microbes living in hot springs, is used, the reaction temperature is maintained at 72 ºC. In this scenario, the polymerase weaves in about 1,000 nucleotides per minute.

A molecular biologist repeats these three tasks – denaturing the strands, latching the primer and probe on, and elongating the primer using the polymerase – in cycles to make multiple copies of DNA. Starting from one double-strand, there are two at the end of the first cycle, four at the end of the second and eight at the end of the third. So n cycles produce 2^n DNA double-strands. When 20 cycles are performed, the biologist will possess over a million DNA double-strands. After 40 cycles, there will be almost 1.1 trillion. Depending on the number of cycles, PCR could take between two and six hours.
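The amplification arithmetic above is just repeated doubling, which a short sketch makes concrete (the function name is mine, for illustration):

```python
# Sketch of PCR amplification arithmetic: each cycle doubles the number of
# double-strands, so n cycles turn one template into 2**n copies.
def strands_after(cycles: int, initial: int = 1) -> int:
    """Number of DNA double-strands after the given number of PCR cycles."""
    return initial * 2 ** cycles

print(strands_after(3))   # 8
print(strands_after(20))  # 1048576 -- over a million
print(strands_after(40))  # 1099511627776 -- about 1.1 trillion
```

In practice the doubling is only approximate, since reaction efficiency falls below 100% in later cycles, but the exponential trend is what matters.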

So many DNA molecules are needed to amplify the target’s presence and expose its nucleotides for identification. The heating cycles are performed in the thermal cycler. This instrument can be modified to track the rate of increase of fluorescence in the solution, and check if that’s in line with the rate at which new DNA double-strands are made. If the two readings line up, the molecular biologist will have her answer: the meat is from a cow.

The test gets trickier when the meat is cooked. The heat during preparation could damage the DNA in the meat’s cells, degrading it beyond the point at which PCR can work. One biologist The Wire spoke to said that if the “meat is nicely overcooked at high temperature, you cannot PCR anything”. A study published in the journal Meat Science in 2006 attests to this: “… with the exception of pan frying for 80 min, beef was determined in all meat samples including the broth and sauce of the roasted meat” using PCR.

At the same time, in March 2016, a study published in Veterinary World claimed that PCR could check the origins of both cooked and raw meat, and also ascertain the presence of a small amount of beef (as little as 1%) in a larger amount of a different meat. The broader consensus among biologists seems to be that the rawer the meat, the easier it is to test. The meat starts to become untestable when cooked at high temperatures.

A PCR test costs anywhere between Rs 2,000 and Rs 7,000.

The Wire
September 15, 2016

Posted in Science & Technology | Tagged , , , , , , , , , | Leave a comment

Corrected: Environmental journalism in India and false balance

Featured image credit: mamnaimie/Flickr, CC BY 2.0

I’ve developed a lousy habit of publishing posts before they’re ready to go, and not being careful enough about how I’m wording things. It happened recently with the review of Matthew Cobb’s book and then last evening with the post about false balance in environmental journalism. I don’t think my blog is small enough any more for me to be able to set the record straight quietly (as evinced by the reader who pointed out some glaring mistakes). So this post fixes the false-balance post. Apologies; I’ll be more careful next time.

In the same vein, any advice/tips on how to figure when an opinion is ready to go (and you’ve not forgotten something) would be much appreciated. What I usually do is take a break for 30 minutes after I’ve finished writing something, then return to it and read it out loud.

It’s no secret that the incumbent NDA government ruling in India has screwed over the country’s environmental protection machinery to such an extent that there remain few meaningful safeguards against corporate expansionism – especially of the rapacious kind. Everything – from land acquisition, tribal protection and coastal regulation to pollution control and assessment – has been systematically weakened. As a result, the government’s actions have become suspect by default.

For journalists in India, this has come with an obvious tilt in the balance of stories. Government actions and corporate interests have become increasingly indefensible. What redemption they might have claimed started to dissipate when the two factions began to openly rub shoulders with each other, feeding off each other’s strengths: the government’s ability to change policy and modify legislation, and the companies’ ability to… well, fund. Prime example: the rise of Gautam Adani.

In Indian journalism, therefore, representing all three sides in an article – the government, corporate interests and the environment – (taking a minimalist PoV for argument’s sake) is no longer required. Representation is magnified for environmental interests while government and corporate lines are printed as a matter of courtesy, if at all. This has become acceptable, and rightly so.

Do I have a problem with this? No. It’s in cases like these that going through the motions of asking corporate interests what they have to say is called a false balance.

Is activist journalism equivalent to adversarial journalism simply because its aim is to right a wrong? Recently, I edited an article for The Wire about how, despite the presence of dozens of regulatory bodies, nobody is sure who is responsible for conserving and bettering the status of India’s wetlands. The article was penned by an activist and written in the manner of an op-ed; all claims and accusations were backed up, and it wasn’t a rant. I think it speaks to the zeitgeist of Indian environmental journalism, and not the zeitgeist of journalism in general, that op-eds like that one have become news reports de facto. In other words: in Indian environmental journalism at least, there is no Other Side any more.

This advent of a ‘false balance’ recently happened in the case of climate change, where a scientific consensus was involved. That global warming is anthropogenic came to be treated as fact after scientific studies to assess its origins repeatedly reached that conclusion. Therefore, journalistic reports that quote climate-change skeptics are presenting a false balance of the truth. A decision to not quote the government or corporate interests in the case presented above, however, is more fluid, influenced not by the truth-value of a single parameter but by the interests of journalism itself.

Where this takes us isn’t entirely difficult to predict: the notion of balance itself has had a problematic history, and needs to be deprioritised. Its necessity is invoked by the perception of many that journalism is, or has to be, objective. Journalism may have been associated with objectivity at its birth, but today it mostly has no need to be, and insisting on balance where it isn’t needed only gives rise to false balances.

Posted in Writing | Tagged , , , , , | Leave a comment

Lingua franca

Featured image credit: allypark/Flickr, CC BY 2.0

Guilt can be just as disabling as arrogance, however. The political good which Spivak has done far outweighs the fact that she leads a well-heeled life in the States. If complicity means living in capitalist society, then just about everyone but Fidel Castro stands accused of it; if it means ‘buying in’ (as the Americans revealingly phrase it) to something called Western Reason, then only those racist or non-dialectical thinkers for whom such reason is uniformly oppressive need worry about it. … In any case, Spivak is logically mistaken to suppose that imagining some overall alternative to the current system means claiming to be unblemished by it. To imagine that it would be nice to be in Siena is not necessarily to disavow the fact that I am in Scunthorpe.

These lines are from Terry Eagleton’s review of a book by Gayatri Chakravorty Spivak called A Critique of Post-Colonial Reason: Toward a History of the Vanishing Present. I’ve heard of Spivak, but the other two names in the previous sentence, I admit, ring no bells. And, more than that, the contents of the book (i.e. the lines quoted by Eagleton in his review) bounce off my head like raindrops off a Teflon boulder. To be sure, Eagleton’s review is about how Spivak is good at what she does but has somehow overlooked the lucidity of the political writers past she admires, writing her own book in “overstuffed, excessively elliptical prose”. (However, the word ‘unreadable’ doesn’t show up anywhere in the review.)

Anyway, the acknowledgment in the first half of the third line of the quote above was interesting to read. It’s something I’ve had trouble reconciling with, with Arundhati Roy as a popular example: how do you rail against the sort of passive injustice exemplified by oppressing the so-called ‘lower classes’ from the balcony of a palatial home? The second half of the same line is worse – I still don’t get it (although I am embarrassed by my ignorance as well as by my inability to surmount it). My problem is that the sentence overall seems to suggest that enjoying the fruits of a capitalist society is not complicity if only because it implicates a majority of the elite.

I’m wrong… right?

Thanks to Jahnavi for help untangling some of the lines.

Posted in Writing | Tagged , , , , , | Leave a comment

ToI successfully launches story using image from China

It may not seem like a big deal, and the sort of thing that happens often at Times of India. After ISRO “successfully” tested its scramjet engine in what seemed like the early hours of August 28, Times of India published a story announcing the development. And for the story, the lead image was that of a Chinese rocket. No biggie, right? I mean, copy-editors AFAIK are given instructions not to reuse images, and in this case all the reader needed to be shown was a representative image of a rocket taking off.

The ToI story showing a picture of a Chinese rocket adjacent to the announcement that ISRO has tested its scramjet engine.


But if you look closely, it is a biggie. I’m guessing Times of India used that image because it had run out of ISRO images to use, or even reuse. In the four days preceding the scramjet engine test, ISRO’s Twitter timeline was empty and no press releases had been issued. All that was known was that a test was going to happen. In fact, even the details of the test turned out to be different: ISRO had originally suggested that the scramjet engine would be fired at an altitude of around 70 km; sometime after, it seems this parameter was changed to 20 km. The test also happened at 6 am, which nobody knew was going to be the case (and which is hardly the sort of thing ISRO could decide at the last minute).

Even ahead of – and during – the previous RLV-related test conducted on May 23, ISRO was silent on all of the details. What was known emerged from two sources: K. Sivan from the Vikram Sarabhai Space Centre in Thiruvananthapuram and news agencies like PTI and IANS. The organisation itself did nothing in its official capacity to publicly qualify the test. Some people I spoke to today mentioned that this may not have been something ISRO considered worth highlighting to the media. I mean, no one is expecting this test to be sensational; it’s already been established that the four major RLV tests are all about making measurements, and the scram test isn’t even one of them. If this is really why ISRO chooses to be quiet, then it is simply misunderstanding the media’s role and responsibility.

From my PoV, there are two issues at work here. First, ISRO has no incentive to speak to the media. Second, strategic interests are involved in ISRO’s developing a reusable launch vehicle. Both together keep the organisation immune to the consequences of zero public outreach. Unlike NASA, whose media machine is one of the best on the planet but which also banks on public support to secure federal funding, ISRO does not have to campaign for its money nor does it have to be publicly accountable. Effectively, it is non-consultative in many ways and not compelled to engage in conversations. This is still okay. My problem is that ISRO is also caged as a result, the prime mover of our space programme taken hostage by a system that lets ISRO work in the environment that it does instead of – as I often get the impression from speaking to people who have worked with it – being much more.

In the case of the first RLV test (the one on May 23), photos emerged a couple of days after the test had concluded, while no announcement, tweet or release was issued beforehand; it even took a while to ascertain the test’s success. In fact, after the test, Sivan had told Zee News that there may have been a flaw in one of ISRO’s calculations but the statement was not followed up. I’m also told now that today’s scram test was something ISRO was happy with and that the official announcement will happen soon. These efforts, and this communication, even if made privately, are appreciated but it’s not all that could have been done. One of the many consequences of this silence is that a copy-editor at Times of India has to work with very little to publish something worth printing. And then get ridiculed for it.

Posted in Opinions, Psych of Science | Tagged , , , , , , , , , | Leave a comment

Corrected: ‘Life’s Greatest Secret’ by Matthew Cobb

An earlier version of this post was published by mistake. This is the corrected version. Featured image credit:

When you write a book like Siddhartha Mukherjee’s The Gene: An Intimate History, the chance of a half-success is high. You will likely only partly please your readers instead of succeeding or even failing completely. Why? Because the scope of your work will be your biggest enemy, and in besting this enemy, you will at various points be forced to find a fine balance between breadth and depth. I think the author was not just aware of this problem but embraced it: The Gene is a success for having been written. Over 490 pages, Mukherjee weaves together a social, political and technical history of the genome, and unravels how developments from each strain have fed into the others. The effect is that it has become a popular choice among biology beginners but a persistent point of contention among geneticists and other researchers. That it has been impactful, however, is incontestable.

At the same time, the flipside of such a book on anything is its shadow, where anything less ambitious or even less charming can find itself languishing. This I think is what has become of Life’s Greatest Secret by Matthew Cobb. Cobb, a zoologist at the University of Manchester, traces the efforts of scientists through the twentieth century to uncover the secrets of DNA. To be sure, this is a journey many authors have retraced, but what Cobb does differently are broadly two things. First: he sticks to the participants and the progress of science, and doesn’t deviate from this narrative, which can be hard to keep interesting. Second: he combines his profession as a scientist and his education as an historian to stay aware, and keep the reader aware, of the anthropology of science.

On both counts – of making the science interesting while tasked with exploring a history that can become confusing – Cobb is assisted by the same force that acted in The Gene‘s favour. Mukherjee banked on the intrigues inherent in a field of study that has evolved to become extremely influential as well as controversial to get the reader in on the book’s premise; he didn’t have to spend too much effort convincing a reader why books like his are important. Similarly, Life’s Greatest Secret focuses on those efforts to explore the DNA that played second fiddle to the medicinal applications of genetics in The Gene but possess intrigues of their own. And because Cobb is a well-qualified scientist, he is familiar with the various disguises of hype and easily cuts through them – as well as teases out and highlights less well-known episodes.

For example, my favourite story is of the Matthaei-Nirenberg experiment in 1961 (chapter 10, Enter The Outsiders). Marshall Nirenberg was the prime mover in this story, which was pegged on the race to map the nucleotide triplets to the amino acids they coded for. The experiment was significant because it ignored one of Francis Crick’s theories, popular at the time, that a particular kind of triplet couldn’t code for an amino acid. The experiment roundly drubbed this theory, and in the process delivered a much-needed dent to the circle of self-assured biologists who took Crick’s words as gospel. Another way the experiment triumphed was by showing that ‘outsiders’ (i.e. non-geneticists like the biochemists Nirenberg and Heinrich Matthaei) could also contribute to DNA research, and how acceptance of this fact was commonly preceded by resentment from the wider community. Cobb writes:

Matthew Meselson later explained the widespread surprise that was felt about Nirenberg’s success, in terms of the social dynamics of science: “… there is a terrible snobbery that either a person who’s speaking is someone who’s in the club and you know him, or else his results are unlikely to be correct. And here was some guy named Marshall Nirenberg; his results were unlikely to be correct, because he wasn’t in the club. And nobody bothered to be there to hear him.”

This explanation is reinforced by a private letter to Crick, written in November 1961 by the Nobel laureate Fritz Lipmann, which celebrated the impact of Nirenberg’s discovery but nevertheless referred to him as ‘this fellow Nirenberg’. In October 1961, Alex Rich wrote to Crick praising Nirenberg’s contribution but wondering, quite legitimately, ‘why it took the last year or two for anyone to try the experiment, since it was reasonably obvious’. Jacob later claimed that the Paris group had thought about it but only as a joke – ‘we were absolutely convinced that nothing would have come from that’, he said – presumably because Crick’s theory of a commaless code showed that a monotonous polynucleotide signal was meaningless. Brenner was frank: ‘It didn’t occur to us to use synthetic polymers.’ Nirenberg and Matthaei had seen something that the main participants in the race to crack the genetic code had been unable to imagine. Some later responses were less generous: Gunther Stent of the phage group implied to generations of students who read his textbook that the whole thing had happened more or less by accident, while others confounded the various phases of Matthaei and Nirenberg’s work and suggested that the poly(U) had been added as a negative control, which was not expected to work.

A number of such episodes studded throughout the book make it an invaluable addition to a science-enthusiast’s bookshelf. In fact, if something has to be wrong at all, it’s the book’s ending. In a move that is becoming customary, the last hundred or so pages are devoted to discussing genetic modification and CRISPR/Cas9, a technique and a tool that will surely shape the future of modern genetics, but in a way nobody is quite sure of yet. This uncertainty is well-established, in the sense that it’s okay to be confused about where the use of these entities is taking us. However, it also means that every detailed discussion of them has become repetitive. Neither Cobb nor Mukherjee is able to add anything new on this front that, in some sense, hasn’t already been touched upon. (Silver lining: the books do teach us to better articulate our confusion.)

Verdict: 4/5

Posted in Psych of Science, Writing | Tagged , , , , , , , , , , , | 1 Comment


The Anthropocene is not simply an epoch. It comes with an attendant awareness of our environment, of the environment we are for other creatures, that pervades our activities and thoughts. Humans of the Anthropocene have left an indelible mark on the natural world around them (mostly of carbon) even as they – as we – have embedded within ourselves the product of decades of technological innovation, even as we upload our memories into the cloud. Simultaneously, we’re also becoming more aware of the ‘things’ we’re made of: of gut bacteria that supposedly affect our moods and of what our genes tell us about ourselves. It’s an epoch whose centre of attention de facto is the human even as the attention makes us more conscious of the other multitudes with which we share this universe.

Aside | Posted on by | Tagged , , , , , | Leave a comment

UCal Irvine’s ‘fifth force’ farce

A screenshot of the UCI press release in question. Source: UCI


Michael Moyer just concluded a rant on Twitter (at the time of writing this) about how a press release on a recent theoretical physics result developed at the University of California, Irvine, had muddled up coverage on an important experimental particle physics result. I was going to write about this in detail for The Wire but my piece eventually took a different route, so I’m going to put some of my thoughts down on the UCI fuck-up here.

Let’s begin with some background: In April 2015, a team of nuclear physicists from the Hungarian Academy of Sciences (Atomki) announced that they had found an anomalous decay mode of an unstable beryllium-8 isotope. They contended in their paper, eventually published in Physical Review Letters in January 2016, that the finding had no explanation in nuclear physics. A team of American physicists – from the University of California, Irvine, and the University of Kentucky, Lexington – picked up on this paper and tried to draw up a theory that would (a) explain this anomaly even as it (b) would be a derivative of existing theoretical knowledge (as is the work of most theoretical physics operating at the edge). There are many ways to do this: the UCI-UKL conclusion was a theory that suggested the presence of a new kind of boson, hitherto undiscovered, which mediated the beryllium-8 decay to give rise to the anomalous result observed at Atomki.

Now, the foreground: A UCI press release announcing the development of the theory by its scientists carried a headline saying the Atomki anomaly had been "confirmed" at UCI. This kicked off a flurry of pieces in the media about how a 'fifth force' of nature had been found (which is what the discovery of a new boson would imply), how all of physics had been overturned, etc. But the press release's claim was clearly stupid. It was published barely a week after the particle physics community had learnt that the December 2015 digamma bump at the LHC was a glitch in the data – just as the community was making peace with the fact that no observation is final until it has been confirmed with the necessary rigour, even after physicists had come up with over 500 theoretical explanations for it. The release was also stupid because it blatantly defied (somewhat) common sense: how could a theoretical model built to fit the experimental data "confirm" the experimental data itself?

There’s even a paragraph in there that makes it sound like the particle’s been found! (My comments are in square brackets and all emphasis has been added:)

The UCI work demonstrates [misleading use] that instead of being a dark photon, the particle may be a “protophobic X boson.” While the normal electric force acts on electrons and protons, this newfound [the thing hasn’t been found!] boson [a boson is simply one interpretation of the experimental finding] interacts only with electrons and neutrons – and at an extremely limited range. Analysis co-author Timothy Tait, professor of physics & astronomy, said, “There’s no other boson that we’ve observed that has this same characteristic. [Does this mean UCI has actually observed this particular boson?] Sometimes we also just call it the ‘X boson,’ where ‘X’ means unknown.”

Moyer says in one of his tweets that PR machines will always try to hype results, outcomes, etc. – this is true, and journalists who don't cut through this hype often end up writing flattering articles devoid of criticism (effectively missing the point of their profession, so to speak). However, as far as I'm concerned, what the UCI PR has done is not build hype as much as grossly mislead journalists, and I blame the machine in this case more than the journalists who wrote the "fifth force found" headlines. Journalism is already facing a credibility crisis in many parts of the world without having to look out for misguided press releases from universities of the calibre of UCI. Credibility is an easily disturbed quality, and journalists – like anyone else – extend their trust because we expect institutional authorities to take such qualities seriously themselves.

(Another such quality is ‘reputation’. Nicholas Dirks just quit because his actions had messed with the reputation of UCal Berkeley.)

This is a problem exacerbated by the fact that journalism also has a hard time producing – and subsequently selling – articles about particle physics. Everyone understands that the high-energy physics (HEP) community is significantly invested in maintaining a positive perception of their field, one that encourages governments to fund the construction of mammoth detectors and colliders. One important way to maintain this perception is to push for favourable coverage in the mainstream media of HEP research and keep the people – the principal proxy for government support – thinking about HEP activities for the right reasons. The media, in turn, can’t always commission pieces on all topics nor can it manufacture the real estate even if it has the perfect stories; every piece has to fight it out. And in crunch times, science stories are the first to get the axe; many mainstream Indian publications don’t even bother with a proper science section.*

If, in this context, a journalist buys into a UCI press release about some kind of 'confirmation' of a fifth force, which is subsequently found to be simply false, an editor wouldn't face a tough choice whatsoever about which section to axe next.

What happens next? We wait for experimental physicists to try to replicate the Atomki anomaly in experiments around the world. If nothing else, this must happen because the Atomki team has published claims of having discovered a new particle at least twice before – in 2008 and 2012 – both at a significance upwards of 3 sigma (i.e., the chances of the result being a fluke being smaller than about 1 in 740). This is the statistical threshold accepted by the particle physics community as the point at which a piece of data becomes evidence (a 'discovery' demands 5 sigma, fluke odds of about 1 in 3.5 million). However, the problem with the Atomki results is that both papers announcing the discoveries were later retracted by the scientists, casting all their claims of statistical validity in doubt. The April 2015 result was obtained with a claimed significance of 6.8 sigma.
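For the record, converting a sigma level into fluke odds is just the one-tailed tail probability of a normal distribution. A minimal sketch in Python, using only the standard library (the function name is mine, not any physics package's):

```python
import math

def fluke_odds(sigma: float) -> float:
    """One-tailed p-value for a significance of `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3.0, 5.0, 6.8):
    p = fluke_odds(s)
    print(f"{s} sigma -> p = {p:.3g} (about 1 in {round(1 / p):,})")
```

Running this shows why 3 sigma (roughly 1 in 740) is only 'evidence' while 5 sigma (roughly 1 in 3.5 million) is a 'discovery' – and why a retracted 6.8-sigma claim is so jarring.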

*Even The Hindu’s science page that used to appear every Thursday in the main newspaper was shunted last year to appear every Monday in one of its supplements. It never carried ads.

Posted in Psych of Science | Tagged , , , , , , , , , | 3 Comments

Some notes and updates

Four years of the Higgs boson

Missed this didn’t I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in January 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don’t exactly mark the occasion every year except to recap on whatever’s been happening in particle physics. And this year: everyone’s still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV when data-taking began at the LHC in late May but strong rumours from within CERN have it that such a particle probably doesn’t exist (i.e. it’s vanishing in the new data-sets). Pity. The favoured way to anticipate what might come to be well before the final announcements are made in August is to keep an eye out for conference announcements in mid-July. If they’re made, it’s a strong giveaway that something’s been found.

Live-tweeting and timezones

I’ve a shitty internet connection at home in Delhi, which means I couldn’t watch the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA’s social media team had done such a great job of hyping up the insertion (deservingly so) that it seemed as if all 480 accounts I follow were tweeting about it. I don’t believe I missed anything at all, except perhaps the sounds of applause. Twitter’s awesome that way, and I’ll say that even if it means I’m stating the obvious. One thing did strike me: all the times (of the various events in the timeline) were published in UTC and EDT. This makes sense: converting from UTC to a local timezone is easy (IST = UTC + 5.30), while EDT corresponds to the US east coast. However, the fact that IST is UTC + 5.30 isn’t immediately apparent to everyone (at least it wasn’t to me), and every so often I wish an account tweeting from India, such as a news agency’s, would use IST. I do, every time.
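The arithmetic is trivial (add 5 hours 30 minutes), but as a sketch, Python 3.9+'s zoneinfo module will do the conversion for any timezone. The event time below is a placeholder I made up for illustration, not Juno's actual insertion time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# A placeholder event time published in UTC (not the actual insertion time)
utc_time = datetime(2016, 7, 5, 2, 30, tzinfo=timezone.utc)

# IST is UTC + 5:30; zoneinfo knows this via the Asia/Kolkata entry
ist_time = utc_time.astimezone(ZoneInfo("Asia/Kolkata"))
print(ist_time.strftime("%Y-%m-%d %H:%M IST"))  # 02:30 UTC becomes 08:00 IST
```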

New music

I don’t know why I hadn’t found Yat-kha earlier considering how much I listen to Huun Huur Tu, and Yat-kha is almost always among the recommendations (both bands specialise in throat-singing). And while Huun Huur Tu likes to keep its music traditional and true to its original compositional style, Yat-kha takes it a step further, blending its sound with rock – and that tastes much better to me. With a voice like Albert Kuvezin’s, keeping things traditional can be a little disappointing – you can hear why in the song above. It’s called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it’s all the more sensual when its rendition is accomplished with the human voice, rising and falling. Another example of what I’m talking about is called Yenisei punk. Finally, this is where I’d suggest you stop if you’re looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It’s terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third, from the denouement. If the second part has to benefit from anything at all, then it is the story itself, not the intensity of the stakes within its narrative. At least, that’s my takeaway from Fall of Light, the second book of Steven Erikson’s Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson’s epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain in the series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine has been wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short-stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It’s neither eager to shed its burden nor is it eager to take on new ones. If you’ve read the Malazan series, I’d say he’s written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.

Posted in Creative Stuff & Hobbies, Opinions, Psych of Science, Writing | Tagged , , , , , , , , , , , , , | Leave a comment

Talking about science, NCBS

On June 24, I was invited to talk at the NCBS Science Writing Workshop, held every year for 10 days. The following notes are some of my afterthoughts from that talk.

Science journalism online is doing better now than science journalism in print, in India. But before we discuss the many ways in which this statement is true, we need to understand what a science story can be as it is presented in the media. I’ve seen six kinds of science pieces:

1. Scientific information and facts – Reporting new inventions and discoveries, interesting hypotheses, breaking down complex information, providing background information. Examples: first detection of g-waves, Dicty World Race, etc.

2. Processes in science – Discussing opinions and debates, analysing how complex or uncertain issues are going to be resolved, unravelling investigations and experiments. Examples: second detection of g-waves, using CRISPR, etc.

3. Science policy – Questioning/analysing the administration of scientific work, funding, HR, women in STEM, short- and long-term research strategies, etc. Examples: analysing DST budgets, UGC’s API, etc.

4. People of science – Interviewing people, discussing choices and individual decisions, investigating the impact of modern scientific research on those who practice it. Examples: interviewing women in STEM, our Kip Thorne piece, etc.

5. Auxiliary science – Reporting on the impact of scientific processes/choices on other fields (typically closer to our daily lives), discussing the economic/sociological/political issues surrounding science but from an economic/sociological/political PoV. Examples: perovskite in solar cells, laying plastic roads, etc.

6. History and philosophy of science – Analysing historical and/or philosophical components of science. Examples: some of Mint on Sunday’s pieces, our columns by Aswin Seshasayee and Sunil Laxman, etc.

Some points:

1. Occasionally, a longform piece will combine all six types – but you shouldn’t force such a piece without an underlying story.

2. The most common type of science story is 5 – auxiliary science – because it is the easiest to sell. In these cases, the science itself plays second fiddle to the main issue.

3. Not all stories fall cleanly into one bin or another. The best science pieces often straddle several, but the worst pieces get 1 and 2 wrong, are misguided about 4 (usually because they get 1 and 2 wrong) or misrepresent the science in 5.

4. Journalism is different from writing in that journalism has a responsibility to expose and present the truth. At the same time, 1, 2 and 6 stories – presenting facts in a simpler way, discussing processes, and discussing the history and philosophy of science – can be as much journalism as writing because they increase awareness of the character of science.

5. Despite the different ways in which we’ve tried to game the metrics, one thing has held true: content is king. A well-written piece with a good story at its heart may or may not do well – but a well-packaged piece that is either badly written or has a weak story at its centre (or both) will surely flop.

6. You can always control the goodness of your story by doing due diligence, but if you’re pitching your story to a publisher on the web, you have to pitch it to the right publisher. This is because publications that do better on the web do so by becoming niche publications. A publication that wants to please everyone has to operate at a very large scale (>500 stories/day); a niche publication, on the other hand, will have clearly identified its audience and will serve only that segment. Consequently, only some kinds of science stories – as identified by those niche publications’ preferences in science journalism – will be popular on the web. So know what editors are looking for.

Posted in Writing | Tagged , , , | Leave a comment