In solidarity with Nautilus’s writers

In April this year, Undark published a piece that caught me by surprise: Nautilus magazine was going broke. Actually, the surprise didn't last long. Nautilus, to me, had been doing a commendable job of being 'the New Yorker version of Scientific American', in its own words, by publishing thought-provoking science writing. At the same time, it was an extravagant production: its award-winning website, the award-winning illustrations that accompanied every article and, of course, the award-winning writing itself must all have cost a lot to produce.

The Undark report confirmed it: Nautilus had burned through $10 million in five years.

But what had gone unsaid was that, in this time, Nautilus had also commissioned many pieces it knew it wouldn't be able to pay for. This is according to a group of science writers who have come together under the National Writers Union and asked that Nautilus settle their collective dues – a total of $50,000 – or face legal action. Before you think they're being rash, remember that many of them haven't been paid for over a year, that each is owed $2,500 on average, and that one among them is owed a staggering $11,000.

I laud these writers, 19 in all, for what they're doing. It can't have been easy to force a publication that's struggling financially to settle its bills – a publication that, while it functioned, offered a unique platform for ideas that wouldn't have found a home elsewhere. And – though I'm not sure what it's worth – I stand with the writers in solidarity: #paynautiluswriters. As The Wire's science editor, I've often had to turn down interesting pitches and submissions because I'd spent all my commissioning money for the month. It was painful to not be able to publish these pieces but it would have been indefensible to take them on anyway – yet that's what Nautilus seems to have done.

When Undark‘s report was published, I’d blogged about Nautilus‘s plight and speculated about where they could’ve gone wrong, assisted by my experience helping build The Wire. I’d like to reiterate what I’d written then. First: Nautilus may have taken on too much too soon. For example, the magazine may have put together awesome visuals to go with its stories but, from what we at The Wire have observed firsthand, readers are evaluating the writing above all else. So going easy on the presentation until achieving financial stability may not have been a bad idea. Second: In commissioning content it knew it couldn’t afford, Nautilus squandered any opportunity to build long-term relationships with the people whose words and ideas made it what it is.

The open letter penned by the science writers to Nautilus also brings another development to the fore. When John Steele, Nautilus's publisher, had been under pressure to pay his writers earlier this year, he had cleared some partial payments while simultaneously promising them that the remainder would come through once the American Association for the Advancement of Science (AAAS) had finished 'absorbing' Nautilus into itself. This didn't bode well then because it left unclear what the acquisition would mean for the magazine's editorial independence. Since then, the letter says, the acquisition has fallen through.

While I'm not unhappy that Nautilus isn't merging with the AAAS, I'm concerned about where this leaves Steele's promise to pay the writers. I'm also anxiously curious about where the money is going to come from. Think about it: a magazine that used up $10 million in five years is now struggling to put together $50,000. This is a sign of gross mismanagement and not something that could have caught the leadership at Nautilus by surprise. Someone there had to know the ship was sinking fast and, going by Steele's promise, put all their eggs in the AAAS basket. One way or another, this was never going to end well.

Featured image credit: NWU.


‘Lots of people don’t know lots of things’

You might have seen news channels on television (if you watch at all, in fact) flash a piece of information repeatedly on their screens. News presenters also tend to repeat things they said 10 or 15 minutes earlier, and on-screen visuals join in this marquee exercise. I remember being told in journalism school that this is done so that people who tune in shortly after a piece of news has been 'announced' can catch up quickly. So say some news item breaks at 8 pm; I can tune in at 8.10 pm and be all caught up by 8.15 pm.

Of course, this has become a vestigial practice in the age of internet archiving technologies and platforms like Facebook and Google ‘remembering’ information forever, but would’ve been quite useful in a time when TV played a dominant role in information dissemination (and when news channels weren’t going bonkers with their visuals).

I wonder if this '15 minutes' guideline – rather, a time-based offset in general – applies to reporting on science news. Now, while news is that which is novel, period, it's not clear for whom it must be novel. For example, I can report on a study that says X is true. X might have been true for a large number of scientists, and perhaps for people in a different country or region, for a long time, but not for the audience I'm writing for. Would this mean X is not news?

Ultimately, it comes down to two things.

First: Lots of people don’t know lots of things. So you can report on something and it will be news for someone, somewhere. However, how much does it cost to make sure what you’ve written reaches that particular reader? Because if the cost is high, it’s not worth it. Put another way, you should regularly be covering news that has the lowest cost of distribution for your publication.

Second: Lots of people don't know lots of things. So you can report on something and it will be news for someone, somewhere. And if the bulk of your audience is a subset of the group of people described above, then what you're reporting will always likely be new, and thus news. As things stand, most Indians still need to catch up on basic science. Scientists aren't off the hook either: many of them may know the divergence of a magnetic field is always zero but attribute this statement's numerous implications to a higher power.

So, through science journalism, there are many opportunities to teach as well as inform, particularly in that order. And a commitment to these opportunities implies that I will also be writing and publishing reports that are newsy to my readers but not to people in other parts of the world, of a different demographic, etc.

Featured image credit: mojzagrebinfo/pixabay.

Why a pump to move molten metal is awesome

The conversion of one form of energy into another is more efficient at higher temperatures.1 For example, one of the most widely used components of any system that involves the transfer of heat from one part of the system to another is a device called a heat exchanger. When it’s transferring heat from one fluid to another, for example, the heat exchanger must facilitate the efficient movement of heat between the two media without allowing them to mix.

There are many designs of heat exchangers for a variety of applications but the basic principle is the same. However, they're all limited by a basic thermodynamic condition: a given quantity of heat carries more entropy – "the measure of disorder" – at lower temperatures, so less of it can be converted into useful work. In other words, the lower the temperatures at which the exchanger operates, the less efficient the overall energy conversion. This is why it's desirable to have a medium that can carry a lot of heat per unit volume at as high a temperature as possible.
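To make the thermodynamic point concrete: the entropy carried by a stream of heat is the heat divided by the absolute temperature at which it crosses a boundary. A quick sketch (the temperatures below are illustrative round numbers, not values from any particular exchanger):

```python
def entropy_flow(q_watts, temp_kelvin):
    """Rate of entropy transfer (W/K) for heat crossing a boundary at temp_kelvin."""
    return q_watts / temp_kelvin

# The same 1 kW of heat...
hot = entropy_flow(1000.0, 1300.0)   # delivered at roughly 1,000 deg C
cold = entropy_flow(1000.0, 400.0)   # delivered at roughly 130 deg C

# ...carries over three times as much entropy at the lower temperature,
# which is why heat delivered hotter can be turned into more useful work.
print(round(hot, 2), round(cold, 2))
```

The asymmetry is the whole argument: hotter media carry 'higher quality' heat.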

But this is not always possible, for two reasons. First: there must exist a pump that can move such a hot medium from one point in the system to another. This pump must be made of materials that can withstand high temperatures during operation and that don't react with the medium at those temperatures. Second: among the more efficient media that can carry a lot of heat are liquid metals – but they're difficult to pump because of their corrosive nature and high density. Together, these two constraints have limited medium temperatures to around 1,000º C.

Now, engineers from the US have come up with a solution: they've constructed a pump using ceramics. This is really interesting because ceramics have a good reputation for being able to withstand extreme heat (they were part of the US Space Shuttle's heat shield, exposed during atmospheric reentry) but an equally bad reputation for being very brittle.2 So the ceramic composition gives the pump a natural ability to withstand heat.

In other words, the bigger problem the engineers had to solve was keeping the pump from breaking during operation.

DOI: 10.1038/nature24054


Their system consists of a motor (not visible in the image above but positioned to the right of the shaft, made of an insulating material), a gearbox, a piping network and a reservoir of liquid tin. When the motor is turned on, the pump receives liquid tin from the bottom of the reservoir. Two interlocking gears inside the pump (shown bottom left) rotate. As the tin flows between the blades, it is compressed into the space between them, creating a pressure difference that sucks in more tin from the reservoir. After the tin moves through the blades, it is let out into another pipe that takes it back to the reservoir.

The blades are made of Shapal, an aluminium nitride ceramic made by the Tokuyama Corporation in Japan with the unique property of being machinable. The pump seals and piping network are made of graphite. High-temperature pumps usually have pipes made of polymers. Graphite and such polymers are similar in that they’re both very difficult to corrode. But graphite has an upper hand in this context because it can also withstand higher temperatures before it loses its consistency.

Using this setup, the engineers were able to operate the pump continuously for 72 hours at an average temperature of 1,200º C. For the first 60 hours of operation, the flow rate varied between 28 and 108 grams per second (at a speed in the lower hundreds of rpm). According to the engineers' paper, this corresponds to an energy transfer of 5-20 kW for a vat of liquid tin heated from 300º C to 1,200º C. They extrapolate these numbers to suggest that if the gear diameter were enlarged from 3.8 cm to 17.1 cm and its thickness from 1.3 cm to 5.85 cm, and the pump operated at 1,800 rpm, the resulting heat transfer rate would be 100 MW – a jump of 5,000x from 20 kW and close to the requirements of a utility-scale power plant.
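These numbers can be sanity-checked with the sensible-heat formula Q = ṁ·c·ΔT. The specific heat of liquid tin used below is my own assumed round figure, not a value taken from the paper:

```python
CP_LIQUID_TIN = 0.24  # J/(g*K) -- assumed approximate value, not from the paper

def heat_rate_kw(mass_flow_g_per_s, t_in_c=300.0, t_out_c=1200.0):
    """Sensible heat carried by the tin stream, Q = m_dot * c_p * dT, in kW."""
    return mass_flow_g_per_s * CP_LIQUID_TIN * (t_out_c - t_in_c) / 1000.0

# The reported 28-108 g/s flow rates land in the same ballpark
# as the paper's 5-20 kW figure:
print(round(heat_rate_kw(28), 1), round(heat_rate_kw(108), 1))
```

Even this back-of-the-envelope version shows why a modest gram-per-second trickle of hot liquid metal moves kilowatts of heat.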

And all of this would be on a tabletop setup. This is the kind of difference having a medium with a high energy density makes.

The engineers say their choice of operating temperature – about 1,200º C – was limited by the heaters they had available in their lab. So future versions of this pump could run cheaper and at higher temperatures by using, say, molten silicon and higher-grade ceramics than Shapal. Such improvements could have an outsize effect in our world because of the energy capacity and transfer improvements they stand to bring to renewable energy storage.

1. I can attest from personal experience that learning the principles of thermodynamics is easier through application than theory – an idea that my college professors utterly failed to grasp.

2. The ceramics used to pave the floor of your house and the ceramics used to pad the underbelly of the Space Shuttle are very different. For one, the latter had a foamy internal structure and wasn’t brittle. They were designed and manufactured this way because the ceramics of the Space Shuttle wouldn’t just have to withstand high heat – they would also have to be able to withstand the sudden temperature change as the shuttle dived from the -270º C of space into the 1,500º C of hypersonic shock.

Featured image credit: Erdenebayar/pixabay.

A problem worth its weight in salt

Pictures of Jupiter's moon Europa taken by the Galileo space probe between 1995 and 2003 support the possibility that Europa's surface has plate tectonics. In fact, scientists think it could be one of only two bodies in the Solar System – the other being Earth – to display this feature. But it must be noted that Europa's tectonics is nothing like Earth's, if only because the materials undergoing the process are very different – compare the composition of Earth's crust with that of Europa's ice shell. There are also no arc volcanoes or continents on Europa.1 But this doesn't mean there aren't any similarities either. For example, scientists have acknowledged that shifting ice plates on the moon's surface, with some diving under others and pushing them down, could be a way for minerals on top to plunge deeper into the interior. Because Europa is suspected of harbouring a subsurface ocean of liquid water, such a mineral cycle could be boosting the chances of finding life there. Plate tectonics played a similar role in making Earth habitable.

The biggest giveaway is that the moon's surface is not littered with craters the way other Jovian moons' are. This meant that cratered patches of the ice shell were disappearing into somewhere, replaced by 'cleaner' patches. There are also kilometre-long ridges on the shell, suggesting that something had moved along that distance, and in some places they end abruptly. In 2014, a pair of geologists from Johns Hopkins and the University of Idaho used software like Photoshop to cut up Galileo's maps of Europa and stitch them back together such that the ridges lined up. They found some areas with a "big gap". One way to explain it was that the patch there had dived beneath a neighbouring one – a simple version of plate tectonics. But tantalising as the possibility is, more evidence is needed before we can be sure.

If we're hoping to find the first alien life inside a Jovian moon, we'll need good models that can help us predict how life might have evolved there. A new paper from researchers at Brown University helps by trying to figure out why the plates might be shifting (to argue that something could be happening, it helps to show a simple way it could happen with the available resources). On Earth, interactions between the crust and the mantle are driven, among other factors, by differences in temperature. The crust is cooler than the magma it 'slides' over, which means it's denser, which assists its subduction when that happens. Such differences aren't mirrored on Europa, where scientists think there's a thin, cold ice shell on top and a relatively warmer one below. When a patch of ice from the top slides down, it becomes warmer because the upper layer provides insulation – and that evens out the density difference and prevents the sliding layer from sinking any further.

Instead, the Brown University researchers think the density differences could arise from salt content. (This, by the way, is worth keeping in mind when reading their press release, which says, "A Brown University study provides new evidence that the icy shell of Jupiter's moon Europa may have plate tectonics similar to those on Earth." It's not similar, especially if left unqualified like that.) Salt is denser than water, so ice that contains more salt is denser. A 2003 study also suggested that warmer ice will have less salt because eutectic mixtures could be dissolving and draining it out. So using a computer model and making supposedly reasonable assumptions about the shell's temperature, porosity and salinity ranges, the Brown team calculated that ice slabs made up of 5% salt and saltier than their surroundings by 2.5% would be able to subduct. However, if the distribution of salt on Europa's surface was nearly uniform (varying by less than 1% from slab to slab, for example), then a subducting slab would have to contain at least 22% salt – a very high fraction.
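The buoyancy argument can be sketched in a few lines. All the numbers here are my own placeholder assumptions (a pure-ice density and a made-up salt-densification coefficient), not values from the Brown University model; the point is only that a modest salt excess is enough to tip the density balance:

```python
RHO_PURE_ICE = 917.0        # kg/m^3, water ice near 0 deg C (assumed)
SALT_DENSIFICATION = 8.0    # kg/m^3 added per wt% salt -- hypothetical coefficient

def ice_density(salt_wt_percent, porosity=0.10):
    """Bulk density of a porous, salty ice slab."""
    solid = RHO_PURE_ICE + SALT_DENSIFICATION * salt_wt_percent
    return solid * (1.0 - porosity)

def can_subduct(slab_salt_pct, surrounding_salt_pct):
    # A slab can founder only if it's denser than the ice it must sink through.
    return ice_density(slab_salt_pct) > ice_density(surrounding_salt_pct)

print(can_subduct(5.0, 2.5))   # a saltier slab is denser, so it can sink
print(can_subduct(2.5, 2.5))   # equal salt: no density contrast, no subduction
```

Swap in the study's actual temperature-porosity-salinity relationships and this toy criterion becomes their simulation in miniature.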

I said "supposedly reasonable assumptions" because we don't exactly know how salinity and porosity vary around and through Europa. In their simulations, the researchers assumed that the ice has a porosity of 10% (i.e. 10% of the material's volume is pores), which is considered to be on the higher side. But the study remains interesting because it establishes the big role salts can play in how the ice moves around. This is also significant because Galileo found the Europan magnetic field to be stronger than it ought to be, suggesting the subsurface ocean holds a lot of salt. So it's plausible that the cryomagma2 on which Europa's upper shell moves could be derived from the waters below.

The researchers also claim that if the subducting slab doesn't lose all its salt in about one million years, it will remain dense enough to go all the way down to the ocean, where it could be received as a courier carrying materials from the surface that help life take root.3 But if you think this might be too far out, look at it in terms of the planned ESA Jupiter Icy Moons Explorer (JUICE) and NASA Clipper missions for the mid-2020s. Both Cassini and Galileo data have shown that there's a lot going on with the icy moons of the gas giants Jupiter and Saturn, with observations of phenomena like vapour plumes pointing to heightened chances for the formation and sustenance of alien life. If JUICE and Clipper are to teach us something useful about these moons, they'll have to go in prepared to study the right things, the things that matter. The Brown University paper has shown that salt is definitely one of them. It was accepted for publication in the Journal of Geophysical Research: Planets on December 4, 2017. Full text here.

Featured image: An artist’s impression of water vapour plumes erupting from Europa’s south pole, with Jupiter in the background. Credit: NASA-ESA.

1Venus has two continent-like areas, Ishtar Terra and Aphrodite Terra, and also displays tectonic activity in the form of mountains and volcanoes, for example. But it does not have plate tectonics because its crust heals faster than it is damaged during tectonic activity.

2One of the more well known cryovolcanoes in the Solar System is Doom Mons on where else but Titan.

3 On Earth, tectonic plates that are pushed downward also take a bunch of carbon along, keeping the surface from accumulating the element in amounts that could be deleterious to life.

Ruins of the Sutlej avulsion paper’s coverage

Reporting on the new Indus civilisation study out of IIT-K and Imperial College London was an interesting experience because it afforded an opportunity to discover how the technical fields of sedimentology and hydrodynamics can help understand the different ways in which a civilisation can grow. And also how “fluviodeltaic morphodynamics” just rolls off the tongue.

In my report for The Wire, however, I stuck to the science for the most part because that in itself offered a lot to discover (and because you know I’m biased). For example, how the atomic lattices of quartz and feldspar played an important part in identifying that the Sutlej river had formerly occupied the Ghaggar-Hakra palaeochannel.

Audience response to the report was also along expected lines:

  • a fifth read it quietly, without much fanfare, asking polite questions (without notifying the authors, however) about various claims made in the article;
  • some two-fifths went to town with it, calling the Hindutva brigade’s search for the Saraswati a lost cause; and
  • another two-fifths also went to town with it, calling out The Wire‘s attempt to ‘disparage’ the Saraswati misguided.

I’ll leave you to judge for yourself.

What was not along expected lines, however, was the international coverage of the study. The BBC's and Axios's headlines on the topic were, in order: River departed 'before Indus civilisation emergence' and Indus Valley civilization may have arisen without a river. The Axios headline is just wrong. The BBC headline is fine but its article is wrong, stating:

The Indus society came to prominence in what is now northwest India and Pakistan some 5,300 years ago thanks in large part to the sustenance of a long-lost Himalayan river.

Or so it was thought.

New evidence now indicates this great water course had actually changed its path and disappeared before the Indus people had even settled in the region.

That they lacked the resource offered by a big, actively flowing river will come as a surprise to many; the other early urban societies of the time, in Egypt and Mesopotamia, certainly benefitted in this way.

The Daily Mail had an unsurprisingly garbage headline: Mysterious Indus Valley Civilisation managed to thrive without a river to provide flowing water 5,300 years ago. Newsweek's headline (Long-lost river discovered in the Himalayas may completely change what we know about early civilisations) and article were both sensational. Excerpt:

Scientists have found the ancient remains of the river that prove it did not exist at the same time as the Indus civilization. This means the civilization existed without a major active water source, something archaeologists did not believe was possible.

The common mistake in all these reports is that they either assume or suggest that the Indus valley civilisation was fed by one river – at least in its first half – and that the entire civilisation was centred on that river. On the contrary, the Indus valley civilisation was the largest of its time, over a million sq. km in area, and was fed by the Indus and its dozens of tributaries (only one of which was the Sutlej).

This in turn limits the extent to which claims about civilisations arising without perennial sources of water can be generalised. First: the prominent Indus valley settlements affected by the Sutlej's avulsion number just two (Banawali and Kalibangan), whereas the civilisation overall hosted over 1,000 such sites and, by one estimate, almost five million people. Second: to what extent would the Indus civilisation have been possible (relative to what actually was) if all of its settlements had been fed by gentler monsoonal rivers?

So yes, the study does provide a new perspective – a new possibility, rather – on the question of what resources are necessary to form a conducive natural environment for a proto-urban human settlement. But this is not a "revolutionary" idea, as many reports would have us believe, at least because other researchers have explored it before and at most because there is little data to run with at the moment. What we do know for sure is that the Sutlej avulsed 8,000 years ago and that, about 5,000 years ago, a part of the Indus valley civilisation took root in the abandoned valley.

Further, I’m also concerned the reports might overstate what “ancient Indians” (but for some reason not “ancient Pakistanis”) could have been capable of. This is a topic that the Hindutva brigade has refurbished with alarming levels of success to imply that the world should bow down to India. Archaeological surveys of the Indus valley region could definitely do with staying away from such problems, at least as much as they can afford to, and some of the language in the sites quoted above isn’t helping.

Featured image credit: Commons, CC BY-SA 3.0.

Similar DNA

From an article in Times Now News:

Comparing Prime Minister Narendra Modi with former prime minister Atal Bihari Vajpayee, Union Science and Technology Minister Harsh Vardhan on Wednesday said both have a similar “DNA” and share a passion for scientific research.

I'm sure I'm interpreting this too literally, but when the national science minister makes a statement saying two people share similar DNA, I can't help but wonder if he knows that the genomes of any two humans are 99.9% the same. The remaining 0.1% accounts for all the difference. Ergo, Prime Minister Narendra Modi has DNA similar to Rahul Gandhi's, mine and yours.

That said, I refuse to believe a man who slashed funding for the CSIR labs by 50% (and asked them to make up for it – a princely sum of Rs 2,000 crore – in three years by marketing their research), who claims ancient Indians surgically transplanted animal heads on humans, whose government passively condones right-wing extremism fuelled by irrational beliefs, whose ministries spend crores of rupees on conducting biased investigations of cow urine, and whose bonehead officials have interfered in the conduct of autonomous educational institutions even knows how scientific research works, let alone respects it.

Vardhan himself goes on to extol Vajpayee as the man who suffixed ‘jay vigyan‘ (‘Hail science’) to the common slogan ‘Jay jawan, jay kisan‘ (‘Hail the soldier, hail the farmer’) and, as an example of his contribution to the scientific community, says that the former PM made India a nuclear state within two months of coming to power. Temporarily setting aside the fact that it takes way more than two months to build and test nuclear weapons, it’s also disturbing that Vardhan thinks atom bombs are good science.

Additionally, Modi is like Vajpayee according to him because the former keeps asking scientists to “alleviate the sufferings of the common man” – which, speaking from experience, is nicespeak for “just do what I tell you and deliver it before my term is over”.

English as the currency of science’s practice

K. VijayRaghavan, the secretary of India's Department of Biotechnology, has written a good piece in Hindustan Times about how India must shed its "intellectual colonialism" to excel at science and tech – particularly by shedding its obsession with the English language. This, as you might notice, parallels a post I wrote recently about how English plays an overbearing role in our lives, and particularly in the lives of scientists, because it remains a language many Indians don't have access to as they go about their days. Having worked closely with the government in drafting and implementing many policies related to the conduct and funding of scientific research in the country, VijayRaghavan is able to take a more fine-grained look at what needs changing and whether that's possible. Most hearteningly, he says it is – if only we have the will to change. As he writes:

Currently, the bulk of our college education in science and technology is notionally in English whereas the bulk of our high-school education is in the local language. Science courses in college are thus accessible largely to the urban population and even when this happens, education is effectively neither of quality in English nor communicated as translations of quality in the classroom. Starting with the Kendriya Vidyalayas and the Nayodya Vidyalayas as test-arenas, we can ensure the training of teachers so that students in high-school are simultaneously taught in both their native language and in English. This already happens informally, but it needs formalisation. The student should be free to take exams in either language or indeed use a free-flowing mix. This approach should be steadily ramped up and used in all our best educational institutions in college and then scaled to be used more widely. Public and private colleges, in STEM subjects for example, can lead and make bi-lingual professional education attractive and economically viable.

Apart from helping students become more knowledgeable about the world through a language of their choice (for the execution of which many logistical barriers spring to mind, not the least of which is finding teachers), it's also important to fund academic journals that allow these students to express their research in their language of choice. Without this component, they will be forced to fall back on English, which is bound to be counterproductive to the whole enterprise. This form of change will require material resources as well as a shift in perspective that could be harder to attain. Additionally, as VijayRaghavan mentions, there also need to be good-quality translation services so that research in one language can be expressed in another and cross-disciplinary and/or cross-linguistic tie-ups are not hampered.

Featured image credit: skeeze/pixabay.

Why Titan is awesome #10


How much I've missed writing these posts since Cassini passed away. Unsurprisingly, it's after the probe's demise that we've really begun to realise how much of Cassini's images and data we were consuming on a daily basis – all of which is now gone. Gone is the steady stream of visuals of Saturn's rings, bands, storms and panoply of moons – replaced, thanks to Juno, by Jupiter's rings, bands, storms and panoply of moons. Nonetheless, one entire area of the Solar System has been darkened in my imagination. Until the next full mission to the Saturnian system (although nothing of the kind is in the works), we'll have to make do with what Cassini data trickles down through NASA's and ESA's data-processing sieves.

One such is a new study about the temperature of the air high above Titan's poles. Before Cassini's death-dive into Saturn, the probe spent some time studying the moon's polar atmosphere. Researchers from the University of Bristol who obtained this data noticed something odd: the part of the atmosphere over Titan's poles began to develop a warm spot in late 2009 but, by 2012, it had become a 'cold spot'. By 2015, the temperature at about 550 km above the surface had dropped to 120 K (a little below the temperature at which supercooled water turns into a glass).

On Earth, a warm spot forms over the poles for two principal reasons: the way Earth's wind circulates around the planet and the presence of carbon dioxide. During winter, air over the corresponding pole sinks down, becomes compressed and heats up. Moreover, the carbon dioxide present in the air also emits the heat it has trapped in its chemical bonds.
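The 'sinks down, becomes compressed and heats up' step is just adiabatic compression. A rough sketch with Earth-like values (the starting temperature and pressure levels below are illustrative, not measurements):

```python
def temp_after_descent(t_start_k, p_start_hpa, p_end_hpa, kappa=0.286):
    """Dry adiabatic compression: T2 = T1 * (p2/p1)**(R/cp)."""
    return t_start_k * (p_end_hpa / p_start_hpa) ** kappa

# Air at 220 K sinking from the 300 hPa level down to 700 hPa
# warms by tens of kelvin purely by being squeezed:
print(round(temp_after_descent(220.0, 300.0, 700.0)))
```

The same mechanism operates on Titan, just with that moon's own gas properties and much slower seasonal timescales.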

In 2012, astronomers using Cassini data found that Titan also exhibits a moon-wide wind circulation process. It can be understood as Titan having two atmospheres, or layers, one on top of the other. In the lower atmosphere, there are three Hadley cells; each cell represents a distinct air circulation system wherein air rises for 10 km or so near the equator, moves towards the subtropics, sinks back down and returns to the equator along the surface. In the upper atmosphere, air moves between the two poles directly, in a single, global Hadley cell.

Titan's south polar vortex.

Now, remember that Titan’s distance from the Sun means that one Titan-year is 29.5 Earth-years, that each Titanic season lasts over seven Earth-years and that seasonal shifts are much slower on the moon as a result. However, in 2012, scientists studying Cassini data found that the rate at which the air over one of Titan’s poles was sinking into the pole – like the air does on Earth – was happening really quickly: according to Nick Teanby, a researcher at the University of Bristol and also the lead author of the latest study, the rate of subsidence increased from 0.5 mm/s in January 2010 to 1.5 mm/s in June 2010. In other words, it was a shift that, unlike the moon’s seasons, happened rapidly (in just 12 Titanic days).

The same study concluded that Titan's atmosphere was thicker than previously thought because trace gases like ethane, hydrogen cyanide, acetylene and cyanoacetylene were found to be produced at altitudes over 500 km above the poles, thanks to photochemical reactions induced by ultraviolet radiation and high-energy electrons streaming in from the Sun. These gases would then subside into the lower atmosphere over the polar region – which brings us to the latest study. It says that, unlike carbon dioxide, which warms Earth's atmosphere, these (once) trace gases actually cool Titan's, resulting in the dreadfully cold spot over its poles. They also participate in the upper Hadley cell's circulation.

This is similar to a unique phenomenon observed over Saturn’s south pole in 2005.

Changes in trace gas abundances over Titan’s south pole. Credit: ESA

What a beauty you are, Titan. And I miss you, Cassini, more than I miss many other things in life.

I couldn’t find a link to the paper of the latest study; here’s the press release. Update: link to paper.

Links to previous editions:

  1. Why Titan is awesome #1
  2. Why Titan is awesome #2
  3. Why Titan is awesome #3
  4. Why Titan is awesome #4
  5. Why Titan is awesome #5
  6. Why Titan is awesome #6
  7. Why Titan is awesome #7
  8. Why Titan is awesome #8
  9. Why Titan is awesome #9

Featured image: Cassini’s last shot of Titan, taken with the probe’s narrow-angle camera on September 13, 2017. Credit: NASA.

What it takes to wash a strainer: soap, water and some wave optics

Whenever I come to Delhi, I stay over at a friend’s place and try to help around the house. But more often than not, I just do the dishes – often a lot of dishes. One item I’ve always had trouble cleaning is the strainer, whether a small tea strainer or a large but fine sieve, because I can never tell if the multicoloured sheen I’m seeing on the wires is a patch of oil, liquid soap or something else. The fundamental problem is that these items are susceptible to the quirks of the wave nature of light, as a result of which their surfaces display an effect called goniochromism, also known as iridescence.

At first (and over 12 years after high school), I suspected the wires on the sieve were acting as a diffraction grating. This is a structure with a series of fine, closely spaced ridges on its surface. When a wave of light strikes this surface, the ridges scatter different parts of the wave in different directions. When these scattered waves interact on the other side, they interfere constructively or destructively. Constructive interference produces a brighter band of colour; destructive interference, a darker band. How much the wave is bent is a function of its frequency: the lower the frequency (i.e. the redder the colour), the more the wave is bent around the grating.

As a result, white, continuous light appears to break down into its constituent colours when passed through a diffraction grating. But it must be noted that a diffraction grating useful in a visible-light experiment has something like 4,000-6,000 ridges every centimetre. The width of each ridge has to be comparable to the wavelength of visible light because only then can it scatter that portion of light. On the other hand, the sieve I was holding appeared to have only 6-8 ridges every centimetre, so the structure itself couldn’t have been what was effecting the sheen.
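To see why the sieve’s wires are far too coarse to act as a grating, here’s a rough sketch of the grating equation d·sin(θ) = mλ for first-order diffraction; the 550 nm wavelength and the specific ridge counts are illustrative values, not measurements:

```python
import math

def first_order_angle_deg(ridges_per_cm, wavelength_m=550e-9):
    """First-order diffraction angle for a grating with the given ridge density."""
    d = 0.01 / ridges_per_cm  # slit spacing in metres
    return math.degrees(math.asin(wavelength_m / d))

# A real grating spreads green light by a wide, easily visible angle...
print(f"{first_order_angle_deg(5000):.1f} deg")   # ~16 degrees
# ...while the sieve's wire spacing bends it by a vanishingly small one.
print(f"{first_order_angle_deg(7):.3f} deg")      # ~0.02 degrees
```

At a few hundredths of a degree, any colour separation from the sieve itself would be imperceptible – consistent with the conclusion that the grating hypothesis fails here.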

Goniochromism, or iridescence, is caused when two transparent or semi-transparent films – like liquid soap atop water – reflect the incident light multiple times. In fact, this is one type of iridescence, called thin-film interference. Here, imagine a thin layer of soap on the surface of a thin layer of water, itself sitting on the surface of a vessel you’re cleaning. (With a strainer, the water-soap liquid forms menisci between the wires.) When white light strikes the soap layer, some of it is reflected out and some is transmitted. The transmitted portion then strikes the surface of the water layer: some of it is sent through while the rest is reflected back out.

When the light reflected by each of the two layers interacts, the respective waves can interfere either constructively or destructively. Depending on the angle at which you’re viewing the vessel, bright and dark bands of light will be visible. The thickness of the soap film also decides which frequencies are intensified and which are subdued in this process. The total effect is that you see a rainbow-esque pattern of undulating brightness on the vessel.
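A minimal sketch of how film thickness picks out colours, assuming a soap refractive index of about 1.4 and a single half-wave phase flip at the air-soap surface (both assumptions of mine, not measured values) – constructive interference then occurs when 2·n·t = (m + ½)·λ:

```python
def bright_wavelengths_nm(thickness_nm, n=1.4):
    """Visible wavelengths reinforced by a thin film of the given thickness.

    Assumes one half-wave phase inversion, so the constructive condition
    is 2*n*t = (m + 0.5)*lambda for integer m.
    """
    out = []
    for m in range(10):
        lam = 2 * n * thickness_nm / (m + 0.5)
        if 380 <= lam <= 750:  # keep only visible light
            out.append(round(lam))
    return out

# A ~500 nm soap film would reinforce green (~560 nm) and violet (~400 nm):
print(bright_wavelengths_nm(500))
```

Because the meniscus thickness varies from wire to wire, each patch of the strainer reinforces different colours, which is exactly the undulating multicoloured sheen described above.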

So herein lies the rub. Either effect, although the second more than the first, produces what effectively looks like an oily sheen on the strainer in my hand no matter how many times I scrub it with soap and run it under the water. And ultimately, I end up doing a very thorough job of it if there was no oil on the strainer to begin with – or a very bad one if there was oil on it but I let it be, assuming it was soap residue. It’s a toss-up… so I think I’ll just follow my friend C.S.R.S.’s advice: “Just rub it a few times and leave it.”

Featured image credit: Lumix/pixabay.

Onto drafting the gravitational history of the universe

It’s finally happening. As the world turns, as our little lives wear on, gravitational wave detectors quietly eavesdrop on secrets whispered by colliding blackholes and neutron stars in distant reaches of the cosmos, no big deal. It’s going to be just another day.

On November 15, the LIGO scientific collaboration confirmed the detection of the fifth set of gravitational waves, made originally on June 8, 2017, but announced only now. These waves were released by two blackholes, of 12 and seven solar masses, that collided about a billion lightyears away – which is to say, about a billion years ago. The two blackholes together weighed about 19 solar masses but the merged blackhole weighed only 18, so about one solar mass’s worth of energy had been released in the form of gravitational waves.
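For a sense of scale, that one solar mass converts, via E = mc², into a staggering amount of energy (constants rounded; the one-solar-mass figure is from the detection itself):

```python
SOLAR_MASS_KG = 1.989e30  # mass of the Sun in kilograms
C = 2.998e8               # speed of light in m/s

# E = m * c^2 for one solar mass radiated away as gravitational waves
energy_j = SOLAR_MASS_KG * C**2
print(f"{energy_j:.2e} J")  # ~1.8e47 joules
```

That is more energy than every star in the observable universe radiates as light over the same fraction of a second.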

The announcement was delayed because the LIGO teams had to work on processing two other, more spectacular detections. One of them involved the VIRGO detector in Italy for the first time; the second was the detection of gravitational waves from colliding neutron stars.

Even though the June 8 detection is run-of-the-mill by now, it is unique because it involves the lightest blackholes eavesdropped on thus far by the twin LIGO detectors.

LIGO’s significance as a scientific experiment lies in the fact that it can detect collisions of blackholes with other blackholes. Because these objects don’t let any kind of radiation escape their prodigious gravitational pulls, their collisions don’t release any electromagnetic energy. As a result, conventional telescopes, which work by detecting such radiation, are blind to them. LIGO, however, detects the gravitational waves emitted by the blackholes as they collide. Whereas electromagnetic radiation moves over the surface of the spacetime continuum and is thus susceptible to being trapped in blackholes, gravitational waves are ripples of the continuum itself and can escape them.

Processes involving blackholes of lower mass have been detected by conventional telescopes because these processes typically involve a light blackhole (5-20 solar masses) and a second object that is not a blackhole, usually a star. Matter shed by the star is siphoned into the blackhole, and this movement releases X-rays that can be spotted by space telescopes like NASA’s Chandra.

So LIGO’s June 8 detection is unique because it signals a collision involving two light blackholes, until now the demesne of conventional astronomy alone. This also means that multi-messenger astronomy can join in on the fun should LIGO detect a collision of a star and a blackhole in the future. Multi-messenger astronomy uses up to four ‘messengers’, or channels of information, to study a single event: electromagnetic radiation, gravitational waves, neutrinos and cosmic rays.

The masses of stellar remnants are measured in many different ways. This graphic shows the masses for black holes detected through electromagnetic observations (purple); the black holes measured by gravitational-wave observations (blue); neutron stars measured with electromagnetic observations (yellow); and the masses of the neutron stars that merged in an event called GW170817, which were detected in gravitational waves (orange). GW170608 is the lowest mass of the LIGO/Virgo black holes shown in blue. The vertical lines represent the error bars on the measured masses. Credit: LIGO-Virgo/Frank Elavsky/Northwestern

The detection also signals that LIGO is sensitive to such low-mass events. The three other sets of gravitational waves LIGO has observed involved blackholes of masses ranging from 20-25 solar masses to 60-65 solar masses. The previous record-holder for the lowest-mass collision was a detection made in December 2015, of two colliding blackholes weighing 14.2 and 7.5 solar masses.

One of the bigger reasons astronomy is fascinating is its ability to reveal so much about a source of radiation trillions of kilometres away using very little information. The same is true of the June 8 detection. According to the LIGO scientific collaboration’s assessment,

When massive stars reach the end of their lives, they lose large amounts of their mass due to stellar winds – flows of gas driven by the pressure of the star’s own radiation. The more ‘heavy’ elements like carbon and nitrogen that a star contains, the more mass it will lose before collapsing to form a black hole. So, the stars which produced GW170608’s [the official designation of the detection] black holes could have contained relatively large amounts of these elements, compared to the stellar progenitors of more massive black holes such as those observed in the GW150914 merger. … The overall amplitude of the signal allows the distance to the black holes to be estimated as 340 megaparsec, or 1.1 billion light years.
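The quoted distance conversion checks out – a quick sketch using the standard parsec-to-light-year factor:

```python
LY_PER_PARSEC = 3.2616  # light-years per parsec (standard conversion)

# 340 megaparsec, as estimated from the signal's amplitude
distance_ly = 340e6 * LY_PER_PARSEC
print(f"{distance_ly / 1e9:.1f} billion light-years")  # ~1.1 billion
```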

The circumstances of the discovery are also interesting. Quoting at length from a LIGO press release:

A month before this detection, LIGO paused its second observation run to open the vacuum systems at both sites and perform maintenance. While researchers at LIGO Livingston, in Louisiana, completed their maintenance and were ready to observe again after about two weeks, LIGO Hanford, in Washington, encountered additional problems that delayed its return to observing.

On the afternoon of June 7 (PDT), LIGO Hanford was finally able to stay online reliably and staff were making final preparations to once again “listen” for incoming gravitational waves. As part of these preparations, the team at Hanford was making routine adjustments to reduce the level of noise in the gravitational-wave data caused by angular motion of the main mirrors. To disentangle how much this angular motion affected the data, scientists shook the mirrors very slightly at specific frequencies. A few minutes into this procedure, GW170608 passed through Hanford’s interferometer, reaching Louisiana about 7 milliseconds later.

LIGO Livingston quickly reported the possible detection, but since Hanford’s detector was being worked on, its automated detection system was not engaged. While the procedure being performed affected LIGO Hanford’s ability to automatically analyse incoming data, it did not prevent LIGO Hanford from detecting gravitational waves. The procedure only affected a narrow frequency range, so LIGO researchers, having learned of the detection in Louisiana, were still able to look for and find the waves in the data after excluding those frequencies.
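The 7-millisecond gap between the two sites is itself a consistency check: assuming a Hanford-Livingston baseline of roughly 3,000 km (my round figure, not from the release), a signal travelling at the speed of light can lag between the detectors by at most about 10 ms, depending on the direction it arrives from:

```python
BASELINE_M = 3.0e6  # approximate Hanford-Livingston separation, metres
C = 2.998e8         # speed of light in m/s

# Maximum possible arrival-time difference between the two detectors
max_delay_ms = BASELINE_M / C * 1000
print(f"max delay ~ {max_delay_ms:.1f} ms")  # observed: 7 ms, within bound
```

The actual delay for any event, like the 7 ms here, also encodes rough information about which part of the sky the waves came from.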

But what I’m most excited about is the quiet announcement. All of the gravitational wave detection announcements before this were accompanied by an embargo, lots of hype building up, press releases from various groups associated with the data analysis, and of course reporters scrambling under the radar to get their stories ready. There was none of that this time. This time, the LIGO scientific collaboration published their press release with links to the raw data and the preprint paper (submitted to the Astrophysical Journal Letters) on November 15. I found out about it when I stumbled upon a tweet from Sean Carroll.

And this is how it’s going to be, too. In the near future, the detectors – LIGO, VIRGO, etc. – are going to be gathering data in the background of our lives, like just another telescope doing its job. The detections are going to stop being a big deal: we know LIGO works the way it should. Fortunately for it, some of its more spectacular detections (colliding intermediate-mass blackholes and colliding neutron stars) were also made early in its life. What we can all look forward to now is reports of first-order derivatives from LIGO data.

In other words, we can stop focusing on Einstein’s theories of relativity (long overdue) and move on to what multiple gravitational wave detections can tell us about things we still don’t know. We can mine patterns out of the data, chart their variation across space, time and their sources, and begin the arduous task of drafting the gravitational history of the universe.

Featured image credit: Lovesevenforty/pixabay.