The blog and the social media

Because The Wire had signed up to be some kind of A-listed publisher with Facebook, The Wire‘s staff was required to create Facebook Pages under each writer/editor’s name. So I created the ‘Vasudevan Mukunth’ page. Then, about 10 days ago, Facebook began to promote my page on the platform, running ads for it that would appear on people’s timelines across the network. The result is that my page now has almost as many likes as The Wire English’s Facebook Page: 320,000+. Apart from sharing my pieces from The Wire, I now use the page to share my blog posts as well. Woot!

Action on Twitter hasn’t been far behind either. I’ve had a verified account on the microblogging platform for a few months now. And this morning, Twitter rolled out the expanded tweet character limit (from 140 to 280) to everyone. For someone to whom 140 characters was a liberating experience – a mechanical hurdle imposed on running your mouth, forcing you to think things through (though many choose not to) – the 280-char limit is even more so.

How exactly? An interesting implication discussed in this blog post by Twitter is that letting people think 280 characters at a time made them less anxious about how they composed their tweets. The fraction of tweets hitting the character limit dropped from 9% in the 140-char era to 1% in the newly begun 280-char era. At the same time, people have continued to tweet within the 140-char limit most of the time. So fewer tweets are being extensively reworked or abandoned, because people no longer compose them with the anxiety of staying within a smaller character limit.

But here’s the problem: most of my blog’s engagement had already been happening on the social media. As soon as I published a post, WordPress’s Jetpack plugin would send an email to 4brane’s 3,600+ subscribers with the full post, post the headline + link on Twitter and the headline + blurb + image + link on Facebook. Readers would reply to the tweet, threading their responses if they had to, and drop comments on Facebook. At the same time, the number of emails I’ve been receiving from my subscribers has been dropping drastically, as has the number of comments on posts.

I remember my blogging habit having taken a hit when I’d decided to become more active on Twitter because I no longer bore, fermented and composed my thoughts at length, with nuance. Instead, I dropped them as tweets as and when they arose, often with no filter, building them out through conversations with my followers. The 280-char limit now looks set to ‘scale up’ this disruption by allowing people to be freer and encouraging them to explore more complex ideas, aided by how (and how well, I begrudgingly admit) Twitter displays tweet-threads.

Perhaps – rather hopefully – the anxiety that gripped people when they were composing 140-char tweets will soon grip them as they compose 280-char tweets as well. I somehow doubt 420-char tweets will be a thing; that would make the platform non-Twitter-like. And hopefully the other advantages of having a blog, apart from the now-lost ‘let’s have a conversation’ part – such as organising information in ways other than Twitter’s sole time-based option – will remain relevant.

Featured image credit: LoboStudioHamburg/pixabay.


Confused thoughts on embargoes

Seventy! That’s how many observatories around the world turned their antennae to study the neutron-star collision that LIGO first detected. So I don’t know why the LIGO Collaboration, and Nature, bothered to embargo the announcement and, more importantly, the scientific papers of the LIGO-Virgo collaboration as well as those by the people at all these observatories. That’s a lot of people and many of them leaked the neutron-star collision news on blogs and on Twitter. Madness. I even trawled through arXiv to see if I could find preprint copies of the LIGO papers. Nope; it’s all been removed.

Embargoes create hype from which journals profit. Everyone knows this. Instead of dumping the data along with the scientific articles as soon as they’re ready, journals like Nature, Science and others announce that the information will all be available at a particular time on a particular date. And between this announcement and the moment at which the embargo lifts, the journal’s PR team fuels hype surrounding whatever’s being reported. This hype is important because it generates interest. And if the information promises to be good enough, the interest in turn creates ‘high pressure’ zones on the internet – populated by those people who want to know what’s going on.

Search engines and news aggregators like Google and Facebook are sensitive to the formation of these high-pressure zones and, at the time of the embargo’s lifting, watch out for news publications carrying the relevant information. And after the embargo lifts, thanks to the attention already devoted by the aggregators, news websites are transformed into ‘low pressure’ zones into which the aggregators divert all the traffic. It’s like the moment a giant information bubble goes pop! And the journal profits from all of this because, while the bubble is building, the journal’s name is everywhere.

In short: embargoes are a traffic-producing opportunity for news websites because they create ‘pseudo-cycles of news’, and an advertising opportunity for journals.

But what’s in it for someone reporting on the science itself? And what’s in it for the consumers? And, overall, am I being too vicious about the idea?

For science reporters, there’s the Ingelfinger rule promulgated by the New England Journal of Medicine in 1969. It states that the journal will not publish any papers whose results have previously been published elsewhere and/or whose authors have already discussed the results with the media. NEJM defended the rule by claiming it was to keep its output fresh and interesting as well as to prevent scientists from getting carried away by the implications of their own research (NEJM’s peer-review process would prevent that, they said). In the end, the consumers would receive scientific information that has been thoroughly vetted.

While the rule makes sense from the scientists’ point of view, it doesn’t from the reporters’. First: a good science reporter, having chosen to cover a certain paper, will present the paper to an expert unaffiliated with the authors but working in the same area for her judgment. This is a form of peer review extraneous to the journal publishing the paper. Second: a pro-embargo argument that’s been advanced is that embargoes alert science reporters to papers of importance as well as give them time to write a good story.

I’m conflicted about this. Embargoes, and the attendant hype, do help science reporters pick up on a story they might’ve otherwise missed, and to capitalise on the traffic potential of an announcement that wouldn’t have become as big without the embargo. Case in point: today’s neutron-star collision announcement. At the same time, science reporters constantly pick up on interesting research that is considered old/stale or that was never embargoed and write great stories about it. Case in point: almost everything else.

My perspective is coloured by the fact that I manage a very small science newsroom at The Wire. I have a very finite monthly budget (equal to about what someone working eight hours a day, five days a week, would make in two months on the US minimum wage) with which I have to ensure that all my writers – who are all freelancers – provide both the big picture of science in that month as well as the important nitty-gritties. Embargoes, for me, are good news because they help me reallocate human and financial resources for a story well in advance and make The Wire‘s presence felt on the big stage when the curtain lifts. And even if I can’t make it on time to the moment the curtain lifts, I’ve still got what I know for sure is a good story on my hands.

A similar point was made by Kent Anderson when he wrote about eLife‘s media policy, which said that the journal would not be enforcing the Ingelfinger rule, over at The Scholarly Kitchen:

By waiving the Ingelfinger rule in its modernised and evolved form – which still places a premium on embargoes but makes pre-publication communications allowable as long as they don’t threaten the news power – eLife is running a huge risk in the attention economy. Namely, there is only so much time and attention to go around, and if you don’t cut through the noise, you won’t get the attention. …

Like it or not, but press embargoes help journals, authors, sponsors, and institutions cut through the noise. Most reporters appreciate them because they level the playing field, provide time to report on complicated and novel science, and create an effective overall communication scenario for important science news. Without embargoes and coordinated media activity, interviews become more difficult to secure, complex stories may go uncovered because they’re too difficult to do well under deadline pressures, and coverage becomes more fragmented.

What would I be thinking if I had a bigger budget and many full-time reporters to work with? I don’t know.

On Embargo Watch in July this year, Ivan Oransky wrote about how an editor wasn’t pleased with embargoes because “staffers had been pulled off other stories to make sure to have this one ready by the original embargo”. That is, embargoes create deadlines that are not in your control; they create deadlines within which everyone, over time, tends to do the bare minimum (“as much as other publications will do”) so they can ride the interest wave and move on to other things – sometimes never revisiting the story at all. In a separate post, Oransky briefly reviewed a book against embargoes by Vincent Kiernan, a noted critic of the idea:

In his book, Embargoed Science, Kiernan argues that embargoes make journalists lazy, always chasing that week’s big studies. They become addicted to the journal hit, afraid to divert their attention to more original and enterprising reporting because their editors will give them grief for not covering that study everyone else seems to have covered.

Alice Bell wrote a fantastic post in 2010 about how to overcome such tendencies: by newsrooms redistributing their attention on science to both upstream and downstream activities. But more than that, I don’t think lethargic news coverage can be explained solely by the addiction to embargoes. A good editor should keep stirring the pot – should keep her journalists moving on good stories, particularly the kind no one wants to talk about, reporting on them and playing them up. So, while I’m hoping that The Wire‘s coverage of the neutron-star collision discovery is a hit, I’ve also got great pieces coming this week about solar flares, open-access publishing, the health effects of ******** mining and the conservation of sea snakes.

I hope time will provide some clarity.

Featured image credit: Free-Photos/pixabay.

The metaphorical transparency of responsible media

Featured image credit: dryfish/Flickr, CC BY 2.0.

I’d written a two-part essay (although they were both quite short; reproduced in full below) on The Wire about what science was like in 2016 and what we can look forward to in 2017. The first part was about how science journalism in India is a battle for relevance, both within journalistic circles and among audiences. The second was about how science journalism needs to be treated like other forms of journalism in 2017, and understood to be afflicted with the same ills that, say, political and business journalism are.

Other pieces on The Wire that had the same mandate, of looking back and looking forward, stuck to being roundups and retrospective analyses. My pieces were retrospective, too, but they – to use the parlance of calculus – addressed the second derivative of science journalism, in effect performing a meta-analysis of the producers and consumers of science writing. This blog post is a quick discussion (or rant) of why I chose to go the “science media” way.

We in India often complain about how the media doesn’t care enough to cover science stories. But when we’re looking back and forward in time, we become blind to the media’s efforts. And looking back is more apparently problematic than is looking forward.

Looking back is problematic because our roundup of the ‘best’ science (the ‘best’ being whatever adjective you want it to be) from the previous year is actually a roundup of the ‘best’ science we were able to discover or access from the previous year. Many of us may have walled ourselves off into digital echo-chambers, sitting within not-so-fragile filter bubbles and ensuring news we don’t want to read about doesn’t reach us at all. Even so, the stories that do reach us don’t make up the sum of all that is available to consume because of two reasons:

  1. We practically can’t consume everything, period.
  2. Unless you’re a journalist or someone who is at the zeroth step of the information dissemination pyramid, your submission to a source of information is simply your submission to another set of filters apart from your own. Without these filters, finding something you are looking for on the web would be a huge problem.

So becoming blind to media efforts at the time of the roundup is to let journalists (who sit higher up on the dissemination pyramid) who should’ve paid more attention to scientific developments off the hook. For example, assuming things were gloomy in 2016 is assuming one thing given another thing (like a conditional probability): “while the mood of science news could’ve been anything between good and bad, it was bad” GIVEN “journalists mostly focused on the bad news over the good news”. This is only a simplistic example: more often than not, the ‘good’ and ‘bad’ can be replaced by ‘significant’ and ‘insignificant’. Significance is also a function of media attention. At the time of probing our sentiments on a specific topic, we should probe the information we have as well as how we acquired that information.

Looking forward without paying attention to how the media will likely deal with science is less apparently problematic because of the establishment of the ideal. For example, to look forward is also to hope: I can say an event X will be significant irrespective of whether the media chooses to cover it (i.e., “it should ideally be covered”); when the media doesn’t cover the event, then I can recall X as well as pull up journalists who turned a blind eye. In this sense, ignoring the media is to not hold its hand at the beginning of the period being monitored – and it’s okay. But this is also what I find problematic. Why not help journalists look out for an event when you know it’s going to happen instead of relying on their ‘news sense’, as well as expecting them to have the time and attention to spend at just the right time?

Effectively: pull us up in hindsight – but only if you helped us out in foresight. (The ‘us’ in this case is, of course, #notalljournalists. Be careful with whom you choose to help or you could be wasting your time.)


Part I: Why Independent Media is Essential to Good Science Journalism

What was 2016 like in science? Furious googling will give you the details you need to come to the clinical conclusion that it wasn’t so bad. After all, LIGO found gravitational waves; an Ebola vaccine was readied; ISRO began tests of its reusable launch vehicle; the LHC amassed particle collisions data; the Philae comet-hopping mission ended; New Horizons zipped past Pluto; Juno is zipping around Jupiter; scientists did amazing (but sometimes ethically questionable) things with CRISPR; etc. But if you’ve been reading science articles throughout the year, then please take a step back from everything and think about what your overall mood is like.

Because, just as easily as 2016 was about mega-science projects doing amazing things, it was also about climate-change action taking a step forward but not enough; about scientific communities becoming fragmented; about mainstream scientific wisdom becoming entirely sidelined in some parts of the world; about crucial environmental protections being eroded; about – undeniably – questionable practices receiving protection under the emotional cover of nationalism. As a result, and as always, it is difficult to capture what this year was to science in a single mood, unless that mood in turn captures anger, dismay, elation and bewilderment at various times.

So, to simplify our exercise, let’s do that furious googling – and then perform a meta-analysis to reflect on where each of us sees fit to stand with respect to what the Indian scientific enterprise has been up to this year. (Note: I’m hoping this exercise can also be a referendum on the type of science news The Wire chose to cover this year, and how that can be improved in 2017.) The three broad categories (and sub-categories) of stories that The Wire covered this year are:

The good:

- Different kinds of ISRO rockets – sometimes with student-built sats onboard – took off
- ISRO decided to partially privatise PSLV missions by 2020
- LIGO-India collaboration received govt. clearance; Indian scientists of the LIGO collaboration received a vote of confidence from the international community
- We supported ‘The Life of Science’
- Many new species of birds/animals discovered in India
- New telescopes were set up, further boosting Indian astronomy; ASTROSAT opened up for international scientists
- Otters returned to their habitats in Kerala and Goa
- Janaki Lenin continued her ‘Amazing Animals’ series
- We produced monthly columns on modern microbiology and the history of science
- Indian physicists discovered a new form of superconductivity in bismuth

The bad:

- Big cats in general, and leopards specifically, had a bad year
- The JE/AES scourge struck again, their effects exacerbated by malnutrition
- PM endorsed BGR-34, an anti-diabetic drug of dubious credentials
- Govt. conceived misguided culling rules
- Ken-Betwa river linkup approved at the expense of a tiger sanctuary
- Many conservation efforts were hampered – while some were mooted that sounded like ministers hadn’t thought them through
- A politician beat a horse to its death
- Environmental regulations turned and/or stayed anti-environment
- We didn’t properly respond to human-wildlife conflicts
- GM tech continued to polarise scientists, social scientists and activists
- Conversations stuttered on eastern traditions of science

The ugly:

- Indian scientists continued to plagiarise and engage in other forms of research misconduct without consequence
- The INO got effectively shut down
- Antibiotic resistance worsened in India (and other middle-income nations)
- India succumbed to US pressure on curtailing generic drugs
- Important urban and rural waterways were disrupted, often to the detriment of millions
- Ministers made dozens of pseudoscientific claims, often derailing important research
- Fake-science-news was widely reported in the Indian media
- Socio-environmental changes resulting from climate change affected many livelihoods around the country
- Low investments in public healthcare, and focus on privatisation, short-changed Indian patients
- Space, defence-research and nuclear power establishments continued to remain opaque

I leave it to you to weigh each of these types of stories as you see fit. For me – as a journalist – science in the year 2016 was defined by two parallel narratives: first, science coverage in the mainstream media did not improve; second, the mainstream media in many instances remained obediently uncritical of the government’s many dubious claims. As a result, it was heartening on the first count to see ‘alternative’ publications like The Life of Science and The Intersection being set up or sustained (as the case may be).

On the latter count: the media’s submission paralleled, rather directly followed, its capitulation to pro-government interests (although some publications still held out). This is problematic for various reasons, but one that is often overlooked is that the “counterproductive continuity” that right-wing groups stress upon – between traditional wisdom and knowledge derived through modern modes of investigation – receives nothing short of a passive endorsement by uncritical media broadcasts.

From within The Wire, doing a good job of covering science has become a battle for relevance as a result. And this is a many-faceted problem: it’s as big a deal for a science journalist to come upon and then report a significant story as finding the story itself in the first place – and it’s as difficult to get every scientist you meet to trust you as it is to convince every reader who visits The Wire to read an article or two in the science section per visit. Fortunately (though let it not be said that this is simply a case of material fortunes), the ‘Science’ section on The Wire has enjoyed both emotional and financial support. To show for it, we have had the privilege of overseeing the publication of 830 articles, and counting, in 2016 (across science, health, environment, energy, space and tech). And I hope those who have written for this section will continue to write for it, even as those who have been reading this section will continue to read it.

Because it is a battle for relevance – a fight to be noticed and to be read, even when stories have nothing to do with national interests or immediate economic gains – the ideal of ‘speaking truth to power’ that other like-minded sections of the media cherish is preceded for science journalism in India by the ideals of ‘speaking’ first and then ‘speaking truth’ second. This is why an empowered media is as essential to the revival of that constitutionally enshrined scientific temperament as are productive scientists and scientific institutions.

The Wire‘s journalists have spent thousands of hours this year striving to be factually correct. The science writers and editors have also been especially conscientious of receiving feedback at all stages, engaging in conversations with our readers and taking prompt corrective action when necessary – even if that means a retraction. This will continue to be the case in 2017 as well, in recognition of the fact that the elevation of Indian science on the global stage, long hailed as overdue, will directly follow from empowering our readers to ask the right questions and be reasonably critical of all claims at all times, no matter who makes them.

Part II: If You’re Asking ‘What To Expect in Science in 2017’, You Have Missed the Point

While a science reporter at The Hindu, this author conducted an informal poll asking the newspaper’s readers to speak up about what their impressions were of science writing in India. The answers, received via email, Twitter and comments on the site, generally swung between saying there was no point and saying there was a need to fight an uphill battle to ‘bring science to everyone’. After the poll, however, it still wasn’t clear who this ‘everyone’ was, notwithstanding a consensus that it meant everyone who chanced upon a write-up. It still isn’t clear.

Moreover, much has been written about the importance of science, the value of engaging with it in any form without expectation of immediate value and even the usefulness of looking at it ‘from the outside in’ when the opportunity arises. With these theses in mind (which I don’t want to rehash; they’re available in countless articles on The Wire), the question of “What to expect in science in 2017?” immediately evolves into a two-part discussion. Why? Because not all science that happens is covered; not all science that is covered is consumed; and not all science that is consumed is remembered.

The two parts are delineated below.

What science will be covered in 2017?

Answering this question is an exercise in reinterpreting the meaning of ‘newsworthiness’ subject to the forces that will assail journalism in 2017. An immensely simplified way is to address the following factors: the audience, the business, the visible and the hidden.

The first two are closely linked. As print publications are shrinking and digital publications growing, a consideration of distribution channels online can’t ignore the social media – specifically, Twitter and Facebook – as well as Google News. This means that an increasing number of younger readers are available to target, which in turn means covering science in a way that interests this demographic. Qualities like coolness and virality will make an item immediately sellable to marketers whereas news items rich with nuance and depth will take more work.

Another way to address the question is in terms of what kind of science will be apparently visible, and available for journalists to easily chance upon, follow up and write about. The subjects of such writing typically are studies conducted and publicised by large labs or universities, involving scientists working in the global north, and often on topics that lend themselves immediately to bragging rights, short-lived discussions, etc. In being aware of ‘the visible’, we must be sure to remember ‘the invisible’. This can be defined as broadly as in terms of the scientists (say, from Latin America, the Middle East or Southeast Asia) or the studies (e.g., by asking how the results were arrived at, who funded the studies and so forth).

On the other hand, ‘the hidden’ is what will – or ought to – occupy those journalists interested in digging up what Big X (Pharma, Media, Science, etc.) doesn’t want publicised. What exactly is hidden changes continuously but is often centred on the abuse of privilege, the disregard of those we are responsible for and, of course, the money trail. The issues that will ultimately come to define 2017 will all have had dark undersides defined by these aspects and which we must strive to uncover.

For example: with the election of Donald Trump, and his bad-for-science clique of bureaucrats, there is a confused but dawning recognition among liberals of the demands of the American midwest. So to continue to write about climate change targeting an audience composed of left-wingers or east coast or west coast residents won’t work in 2017. We must figure out how to reach across the aisle and disabuse climate deniers of their beliefs using language they understand and using persuasions that motivate them to speak to their leaders about shaping climate policy.

What will be considered good science journalism in 2017?

Scientists are not magical creatures from another world – they’re humans, too. So is their collective enterprise riddled with human decisions and human mistakes. Similarly, despite all the travails unique to itself, science journalism is fundamentally similar to other topical forms of journalism. As a result, the broader social, political and media trends sweeping around the globe will inform novel – or at least evolving – interpretations of what will be good or bad in 2017. But instead of speculating, let’s discuss the new processes through which good and bad can be arrived at.

In this context, it might be useful to draw from a blog post by Jay Rosen, a noted media critic and professor of journalism at New York University. Though the post focuses on what political journalists could do to adapt to the Age of Trump, its implied lessons are applicable in many contexts. More specifically, the core effort is about avoiding those primary sources of information (out of which a story sprouts) whose persistent use has landed us in this mess. A wildly remixed excerpt:

Send interns to the daily briefing when it becomes a newsless mess. Move the experienced people to the rim. Seek and accept offers to speak on the radio in areas of Trump’s greatest support. Make common cause with scholars who have been there. Especially experts in authoritarianism and countries when democratic conditions have been undermined, so you know what to watch for— and report on. (Creeping authoritarianism is a beat: who do you have on it?). Keep an eye on the internationalization of these trends, and find spots to collaborate with journalists across borders. Find coverage patterns that cross [the aisle].

And then this:

[Washington Post reporter David] Fahrenthold explains what he’s doing as he does it. He lets the ultimate readers of his work see how painstakingly it is put together. He lets those who might have knowledge help him. People who follow along can see how much goes into one of his stories, which means they are more likely to trust it. … He’s also human, humble, approachable, and very, very determined. He never goes beyond the facts, but he calls bullshit when he has the facts. So impressive are the results that people tell me all the time that Fahrenthold by himself got them to subscribe.

Transparency is going to matter more than ever in 2017 because of how the people’s trust in the media was eroded in 2016. And there’s no reason science journalism should be an exception to these trends – especially given how science and ideology quickly locked horns in India following the disastrous Science Congress in 2015. More than any other event since the election of the Bharatiya Janata Party to the centre, and much like Trump’s victory caught everyone by surprise, the 2015 congress really spotlighted the extent of rational blight that had seeped into the minds of some of India’s most powerful ideologues. In the two years since, the reluctance of scientists to step forward and call bullshit out has also started to become more apparent, as a result exposing the different kinds of undercurrents that drastic shifts in policies have led to.

So whatever shape good science journalism is going to assume in 2017, it will surely benefit by being more honest and approachable in its construction. As will the science journalist who is willing to engage with her audience about the provenance of information and opinions capable of changing minds. As Jeff Leek, an associate professor at the Johns Hopkins Bloomberg School of Public Health, quoted statistician Philip Stark as saying on his blog: “If I say just trust me and I’m wrong, I’m untrustworthy. If I say here’s my work and it’s wrong, I’m honest, human, and serving scientific progress.”

Here’s to a great 2017! 🙌🏾

Curious Bends – big tobacco, internet blindness, spoilt dogs and more

1. Despite the deadly floods in Uttarakhand in 2013, the govt ignores grave environmental reports on the new dams to be built in the state

“The Supreme Court asked the Union environment ministry to review six specific hydroelectric projects on the upper Ganga basin in Uttarakhand. On Wednesday, the ministry informed the apex court that its expert committee had checked and found the six had almost all the requisite and legitimate clearances. But, the ministry did not tell the court the experts, in the report to the ministry, had also warned these dams could have a huge impact on the people, ecology and safety of the region, and should not be permitted at all on the basis of old clearances.” (6 min read, businessstandard.com)

2. At the heart of the global-warming debate is the issue of energy poverty, and we don’t really have a plan to solve the problem

“Each year, human civilization consumes some 14 terawatts of power, mostly provided by burning the fossilized sunshine known as coal, oil and natural gas. That’s 2,000 watts for every man, woman and child on the planet. Of course, power isn’t exactly distributed that way. In fact, roughly two billion people lack reliable access to modern energy—whether fossil fuels or electricity—and largely rely on burning charcoal, dung or wood for light, heat and cooking.” (4 min read, scientificamerican.com)

3. Millions of Facebook users have no idea they’re using the internet

“Indonesians surveyed by Galpaya told her that they didn’t use the internet. But in focus groups, they would talk enthusiastically about how much time they spent on Facebook. Galpaya, a researcher (and now CEO) with LIRNEasia, a think tank, called Rohan Samarajiva, her boss at the time, to tell him what she had discovered. “It seemed that in their minds, the Internet did not exist; only Facebook,” he concluded.” (8 min read, qz.com)

+ The author of the piece, Leo Mirani, is a London-based reporter for Quartz.

4. The lengths to which big tobacco industries will go to keep their markets alive are truly astounding

“Countries have responded to Big Tobacco’s unorthodox marketing with laws that allow government to place grotesque images of smoker’s lung and blackened teeth on cigarette packaging, but even those measures have resulted in threats of billion-dollar lawsuits from the tobacco giants in international court. One such battle is being waged in Togo, where Philip Morris International, a company with annual earnings of $80 billion, is threatening a nation with a GDP of $4.3 billion over their plans to add the harsh imagery to cigarette boxes, since much of the population is illiterate and therefore can’t read the warning labels.” (18 min video, John Oliver’s Last Week Tonight via youtube.com)

5. Hundreds of people have caught hellish bacterial infections and turned to Eastern Europe for a century-old viral therapy

“A few weeks later, the Georgian doctors called Rose with good news: They would be able to design a concoction of phages to treat Rachel’s infections. After convincing Rachel’s doctor to write a prescription for the viruses (so they could cross the U.S. border), Rose paid the Georgian clinic $800 for a three-month supply. She was surprised that phages were so inexpensive; in contrast, her insurance company was forking over roughly $14,000 a month for Rachel’s antibiotics.” (14 min read, buzzfeed.com)

Chart of the week

“Deshpande takes her dog, who turned six in February, for a walk three times every day. When summer is at its peak, he is made to run on the treadmill inside the house for about half an hour. Zuzu’s brown and white hair is brushed once every month, he goes for a shower twice a month—sometimes at home, or at a dog spa—and even travels with the family to the hills every year. And like any other Saint Bernard, he has a large appetite, eating 20 kilograms of dog food every month. The family ends up spending Rs5,000 ($80)-7,000 ($112) every month on Zuzu, about double the amount they spend on Filu, a Cocker Spaniel.” (4 min read, qz.com)


Ello! I love you, let me jump in your game!

This is a guest post contributed by Anuj Srivas. Formerly a tech reporter and writer for The Hindu, he’s now pursuing an MSc at the Oxford Internet Institute, and blogging for Sciblogger.

If there were ever an artifact to which Marshall McLuhan’s ‘the medium is the message’ best applied, it would be Ello. The rapidly growing social network – much like the EU’s ‘right to be forgotten’ – is quickly turning out to be something of a Rorschach test: people look at it and see what they wish to see.

Like all political slogans, Ello’s manifesto is becoming an inkblot onto which we can project our innermost ideologies. It is almost instructive to look at the wide range of reactions, if only for the fact that it tells us something about the way in which we will build the future of the Web.

Optimists and advocates of privacy take a look at Ello and see the start of something new, or view it as a chance to refresh the targeted-advertising foundations of our Web. The most sceptical of this lot, however, point towards the company’s venture capital funding and sneer.

Technology and business analysts look at Ello and see a failed business model; one that is doomed from the start. Feminists and other minority activists look at the company’s founders and notice the appalling lack of diversity. Utopian Internet intellectuals like Clay Shirky see Ello as a way to reclaim conversational discourse on the Internet, even if it doesn’t quite achieve it just yet.

What do I see in the Ello inkblot? Two things.

The first is that Ello, if it gains enough traction, will become an example of whether the free market is capable of providing a social network alternative that respects privacy.

For the last decade, one of the biggest debates among netizens has been whether we should take steps (legal or otherwise) to safeguard values such as privacy on the Internet. One of the most vocal arguments against this has been that “if the demand for the privacy is so great, then the market will notice the demand and find some way to supply it”.

Ello is seemingly the first proper, privacy-caring, centralized social network that the market has spat out (Diaspora was more of a social creation designed to radically change online social networks, which was in all likelihood what caused its stagnation). In this way, the VC funding gives Ello a greater chance to provide a better experience – even if it does prove to be the spark that leads to the company’s demise.

If Ello succeeds and continues to stick to its espoused principles, then that’s one argument settled.

The second point pertains to all that Ello does not represent. Sociologist Nathan Jurgensen has an excellent post on Ello where he lashes out at how online social networks are still being built by only technology geeks. He writes:

This [Ello] is yet another example of social media built by coders and entrepreneurs, but no central role for those expert in thinking about and researching the social world. The people who have decided they should mediate our social interactions and write a political manifesto have no special expertise in the social or political.

I cannot emphasize this point enough. One of the more prominent theories regarding technology and its implications is the ‘social shaping of technology’. It theorizes that technology is not born and developed in a vacuum – it is instead very much shaped and created by relevant social groups. There is little doubt that much of today’s technology and online services is skewed very disproportionately – the number of social groups that are involved in the creation of an online social network is minuscule compared to the potential reach and influence of the final product. Ello is no different when it comes to this.

It is a combination of these two points that sums up the current, almost tragic state of affairs. The technology and digital tools of today are very rarely created, or deployed, keeping in mind the needs of the citizen. They usually are brought to life from some entrepreneur’s or venture capitalist’s PowerPoint presentation and then applied to real world situations.

Is Ello the anti-Facebook that we need? Perhaps. Is it the one we deserve? Probably not.

The federation of our digital identities

Facebook, Twitter, email, WordPress, Instagram, online banking, the list goes on… Offline, you’re one person maintaining (presumably) one identity. On the web, you have many of them. All of them might point at you, but they’re still distinct packets of data floating through different websites. Within each site, your identity is unified, but between them, you’re different people. For example, I can’t log into Twitter with my Facebook username/password because Facebook owns them. When digital information becomes federated like this, it drives down cross-network accountability because my identity doesn’t move around.

However, there are some popular exceptions to this. Facebook and Twitter don’t exchange my log-in credentials – the keys with which I unlock my identity – because they’re rivals, but many other services and these sites are not. For example, I can log into my YouTube account using my GMail credentials. When I hit ‘Submit’, YouTube banks on the validity of my identity on GMail to log me in. Suddenly, GMail and YouTube both have access to my behavioral information through my username. In the name of convenience, my online visibility has increased and I’ve become exposed to targeted advertising, likely the least of ills.

The Crypto-Book

John Maheswaran, a doctoral student at Yale University, has a solution. He’s called it ‘Crypto-Book’, describing its application and uses in a pre-print paper he and his colleagues uploaded to arXiv on June 16.

1. The user clicks ‘Sign up using Facebook’ on StackOverflow.


2. StackOverflow redirects the user to Facebook to log in using Facebook credentials.

3. The user grants StackOverflow some permissions.


4. Facebook generates a temporary OAuth access token corresponding to the permissions.

5. Facebook redirects the user back to StackOverflow along with the access token.


6. StackOverflow can now access the user’s Facebook resources in line with the granted permissions.

Crypto-Book sits between steps 1 and 6. Instead of letting Facebook and StackOverflow talk to each other, it steps in to take your social network ID from Facebook, uses that to generate a username and password (in this context called a public and private key, respectively), and passes them on to StackOverflow for authentication.
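The six-step dance above is the standard OAuth authorization-code flow. Here is a minimal sketch of it in Python; the endpoint URL, parameter names and the simulated grant/exchange steps are illustrative assumptions, not Facebook’s actual API.

```python
import secrets
from urllib.parse import urlencode, parse_qs, urlparse

# Steps 1-2: the third-party site builds a redirect to the identity provider.
def build_authorize_url(client_id, redirect_uri, scope):
    params = {"client_id": client_id, "redirect_uri": redirect_uri,
              "scope": scope, "response_type": "code",
              "state": secrets.token_hex(8)}  # anti-CSRF nonce
    return "https://provider.example/oauth/authorize?" + urlencode(params)

# Steps 3-4: after the user grants permissions, the provider issues a
# short-lived authorization code (simulated; really an HTTP redirect back).
def grant(url, granted=True):
    q = parse_qs(urlparse(url).query)
    return {"code": secrets.token_hex(12), "state": q["state"][0]} if granted else None

# Steps 5-6: the site exchanges the code for an access token it can then
# use to read the user's resources within the granted scope (simulated).
def exchange_code_for_token(code):
    return {"access_token": secrets.token_hex(16), "token_type": "bearer"}

url = build_authorize_url("stackoverflow-app", "https://stackoverflow.example/cb", "email")
reply = grant(url)
token = exchange_code_for_token(reply["code"])
```

Crypto-Book’s intervention amounts to sitting between the `grant` and `exchange_code_for_token` steps, so the two sites never see each other’s tokens directly.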

OpenID and OAuth

It communicates with both sites using the OAuth protocol, which came into use in 2010. Five years earlier, the OpenID protocol had launched to some success. In either case, the idea was to reduce the multiplicity of digital identities; in a world where sites like Facebook and Twitter could own your identities themselves, these protocols let users wield more control over what information they shared, or at least keep track of it.

OpenID let users register with it, and then functioned as a decentralized hub. If you wanted to log into WordPress next, you could do so with your OpenID credentials; WordPress only had to recognize the protocol. In that sense, it was like, say, Twitter, but with the sole function of maintaining a registry of identities. Its use has since declined because of a combination of its security shortcomings and other sites’ better authentication schemes. OAuth, on the other hand, has grown more popular. Unlike OpenID, OAuth is an identity access protocol, and gives users a way to grant limited-access permissions to third-party sites without having to enter any credentials (a feature called pseudo-authentication).

So Crypto-Book inserts itself as an anonymizing layer to prevent Facebook and StackOverflow from exchanging tokens with each other. Maheswaran also describes additional techniques to bolster Crypto-Book’s security. For one, a user doesn’t receive his/her key pair from one server but many, and has to combine the different parts to make the whole. For another, the user can use the key-pair to log in to a site using a technique called linkable ring signatures, “which prove that the signer owns one of a list of public keys, without revealing which key,” the paper says. “This property is particularly useful in scenarios where trust is associated with a group rather than an individual.”
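The multi-server key issuance can be illustrated with the simplest possible sharing scheme – n-of-n XOR secret sharing, used here as a stand-in for whatever scheme the paper actually employs. No single server holds the whole key; only combining every share recovers it.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Split a secret into n shares; the secret is recoverable only when all
# n shares are XORed together, so any n-1 shares reveal nothing.
def split(secret: bytes, n: int) -> list:
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out
```

Real schemes (threshold secret sharing, for instance) tolerate some servers being offline; this all-or-nothing version is the minimal sketch of the idea.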

The cryptocurrency parvenu

Interestingly, the precedent for an equally competent solution was set in 2008, when the cryptocurrency called bitcoin came online. Bitcoins are bits of code generated by complex mathematical calculations, and each is worth about $630 today. Using my public and private keys, I can perform bitcoin transactions, the records of which are encrypted and logged in a publicly maintained registry called the blockchain. Once the blockchain is updated with a transaction, no other information except the value exchanged can be retrieved. In April 2011, this blockchain was forked into a new registry for a cryptocurrency called namecoin. Namecoins and bitcoins are exactly the same but for one crucial difference: while bitcoins make up a decentralized banking system, namecoins make up a decentralized domain name system (DNS), a registry of unique locations on the Internet.
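The blockchain’s tamper-evidence comes from hash chaining: each block commits to the hash of the previous one, so altering any past record invalidates every later block. A toy sketch (not Bitcoin’s actual data structures):

```python
import hashlib
import json

# Build a block that commits to the previous block's hash. The block's
# own hash covers both its transactions and the link backwards.
def make_block(prev_hash, transactions):
    block = {"prev": prev_hash, "txs": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("0" * 64, [{"from": "alice", "to": "bob", "value": 1}])]
chain.append(make_block(chain[-1]["hash"], [{"from": "bob", "to": "carol", "value": 1}]))

# Validation: recompute every hash and check every backward link.
def valid(chain):
    for i, b in enumerate(chain):
        body = {"prev": b["prev"], "txs": b["txs"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Rewriting one old transaction changes its block’s hash, which breaks the next block’s `prev` link, and so on down the chain – which is why a public, replicated registry is so hard to falsify quietly.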

The namecoin blockchain, as its website puts it, can “securely record and transfer arbitrary names,” or keys, an ability that lets programmers use it as an anonymizing layer to communicate between social network identities and third-party sites in the same way Crypto-Book does. For instance, OneName, a service that lets you use a social network identity to label your bitcoin address to simplify transactions, describes itself as

a decentralized identity system (DIS) with a user directory made of entries in a decentralized key-value store (the Namecoin blockchain).

Say I ‘register’ my digital identity with namecoin. The process of registration is logged on the blockchain and I get a public and private key. If Twitter is a relying party, I should be able to log in to it with my keys and start using it. Only now, Twitter’s server will log me in but not itself own the username with which it can monitor my behavior. And unlike with OpenID or OAuth, neither namecoin nor anyone on the web can access my identity because it has been encrypted. At the same time, like with Crypto-Book, namecoin will use OAuth to communicate with the social networking and third-party sites. But at the end of the day, namecoin lets me mobilize only the proof that my identity exists, and not my identity itself, in order to let me use services anonymously.
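A toy, first-come-first-served registry in the spirit of Namecoin’s key-value store can make the idea concrete. The hash-preimage ‘keys’ below are a deliberate simplification of the real public-key signatures; everything here is illustrative, not Namecoin’s protocol.

```python
import hashlib
import os

# The registry maps names to public keys. Namecoin stores this mapping
# on its blockchain; a dict stands in for it here.
registry = {}

def keypair():
    # "Private key" is a random secret; "public key" is its hash.
    priv = os.urandom(16).hex()
    return priv, hashlib.sha256(priv.encode()).hexdigest()

def register(name, pub):
    if name in registry:
        return False          # names are first come, first served
    registry[name] = pub
    return True

def prove(name, priv):
    # A relying party checks that the claimant holds the secret behind
    # the registered public key, without the registry learning anything
    # beyond the fact that the identity exists.
    return hashlib.sha256(priv.encode()).hexdigest() == registry.get(name)
```

In the real system the proof is a digital signature (reusable, unlike a revealed preimage), but the shape of the transaction – register once, prove ownership to any relying party later – is the same.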

If everybody’s wearing a mask, who’s anonymous?

As such, it enables one of the most advanced anonymization services today. What makes it particularly effective is its reliance on the blockchain, which is not maintained by a central authority. Instead, it’s run by multiple namecoin users lending computing resources that process and maintain the blockchain, so there’s a fee associated with staking and sustaining your claim of anonymity. This decentralization is necessary to dislocate power centers and forestall precipitous decisions that could compromise your privacy or shut websites down.

Services like IRC provided the zeroth level of abstraction to achieve anonymity in the presence of institutions like Facebook – by being completely independent and ‘unhooked’. Then, the OpenID protocol aspired, ironically, to some centrality by trying to set up one set of keys to unlock multiple doors. In this sense, the OAuth protocol was disruptive because it didn’t provide anonymity so much as an alternative route, by limiting the number of identities you had to maintain on the web. Then came the Crypto-Book and blockchain techniques, both aspiring toward anonymity, both reliant on Pyrrhic decentralization in the sense that the power to make decisions was not eliminated so much as extensively diluted.

Therefore, the move toward privatization of digital identities has been supported by publicizing the resources that maintain those identities. As a result, perfect anonymity becomes consequent to full participation – which has always been the ideal – and the size of the fee to achieve anonymity today is symptomatic of how far we are from that ideal.

(Thanks to Vignesh Sundaresan for inputs.)

Did Facebook cheat us?

'I don't want to live on this planet anymore' meme. Image: superbwallpapers.com

You might want to rethink that.

No.

There were some good arguments on this topic, ranging from aesthetic rebuttals to logical deconstructions. Here are four I liked:

1. Tal Yarkoni, Director of the Psychoinformatics Lab at University of Texas, Austin, writes on his blog,

“… it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways. Your mother wants you to eat more broccoli; your friends want you to come get smashed with them at a bar; your boss wants you to stay at work longer and take fewer breaks. We are always trying to get other people to feel, think, and do certain things that they would not otherwise have felt, thought, or done. So the meaningful question is not whether people are trying to manipulate your experience and behavior, but whether they’re trying to manipulate you in a way that aligns with or contradicts your own best interests. The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life. I’m not taking a stand one way or the other, mind you, but simply pointing out that without controlled experimentation, the user experience on Facebook, Google, Twitter, etc. would probably be very, very different–and most likely less pleasant.”

2. Yarkoni’s argument brings us to these tweets.

https://twitter.com/pmarca/status/483024580554932224

Didn’t get it? Chris Dixon explains.

I didn’t spot these tweets. TechCrunch did, and it brings up the relevant comparison with A/B testing. A/B testing is a technique whereby web designers optimize user experience by showing different layouts to different user groups, then settle on the best layout depending on which users responded to which layouts. Like Dixon asks, is it okay if it’s done all the time on sites that want to make money by giving you a good time?
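For the uninitiated, the mechanics of A/B testing are simple. A minimal sketch, assuming nothing about any particular site’s implementation: assign each user to a bucket stably (so they see the same variant on every visit), then compare a metric like conversion rate across buckets.

```python
import hashlib

# Hash each user id into one of two buckets. Hashing makes assignment
# deterministic: the same user always lands in the same bucket.
def bucket(user_id: str) -> str:
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "A" if h % 2 == 0 else "B"

def conversion_rates(events):
    # events: list of (user_id, converted?) pairs
    totals = {"A": [0, 0], "B": [0, 0]}
    for uid, converted in events:
        t = totals[bucket(uid)]
        t[0] += 1
        t[1] += int(converted)
    return {k: (c / n if n else 0.0) for k, (n, c) in totals.items()}
```

Whichever layout wins on the metric ships to everyone; the Facebook study did essentially this, with emotional content in the feed as the variant and users’ own posting behavior as the metric.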

You’d argue that we’ve signed up to be manipulated like that, not like this – see #4. Or you’d argue this was different because Facebook was just being Facebook – but the social scientists weren’t being ethical. This is true. To quote from the TechCrunch piece,

A source tells Forbes’ Kashmir Hill it was not submitted for pre-approval by the Institutional Review Board, an independent ethics committee that requires scientific experiments to meet stern safety and consent standards to ensure the welfare of their subjects. I was IRB certified for an experiment I developed in college, and can attest that the study would likely fail to meet many of the pre-requisites.

3. The study appeared in the Proceedings of the National Academy of Sciences, though it appears not many have read it. It reports a statistically significant result that emotions are contagious over Facebook. But as Yarkoni demonstrates, its practical significance is minuscule:

… the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches).
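Yarkoni’s height analogy is just effect-size arithmetic: a standardized mean difference (Cohen’s d) times the population standard deviation gives the raw shift, so a raw shift of one twentieth of an inch against an SD of ~2.8 inches corresponds to a tiny d.

```python
# Converting Yarkoni's analogy back into a standardized effect size:
# d = raw shift / standard deviation.
sd_height_inches = 2.8
raw_shift_inches = 1 / 20           # "one twentieth of an inch"
d = raw_shift_inches / sd_height_inches
print(round(d, 3))                   # prints 0.018
```

A d of ~0.02 is about a hundredth of what psychologists conventionally call even a “small” effect (d = 0.2), which is the sense in which the result is statistically significant but practically negligible.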

4. Facebook’s Terms of Service – to quote:

We use the information we receive about you in connection with the services and features we provide to you and other users like your friends, our partners, the advertisers that purchase ads on the site, and the developers that build the games, applications, and websites you use. For example, in addition to helping people see and find things that you do and share, we may use the information we receive about you:

… for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

IMO, the problems appear to be:

  1. The social scientists didn’t get informed consent from the subjects of their experiments.
  2. Facebook’s ToS doesn’t clearly define what a scientific experiment is – and defining such a thing will prove very difficult and is likely never to be implemented.

To-do: Find out more about the IRB and its opinions on this experiment.

Hey, is anybody watching Facebook?

The Boston Marathon bombings in April 2013 kicked off a flurry of social media activity that was equal parts well-meaning and counterproductive. Users on Facebook and Twitter shared reports, updates and photos of victims, spending little time on verifying them before sharing them with thousands of people.

Others on forums like Reddit and 4chan started to zero in on ‘suspects’ in photos of people seen with backpacks. Despite the distress and disruption these activities caused, the social media broadly also served to channel grief and help, and became a notable part of the Boston Marathon bombings story.

In our daily lives, these platforms serve as news forums. With each person connected to hundreds of others, there is a strong magnification of information, especially once it crosses a threshold. They make it easier for everybody to be news-mongers (not journalists). Add this to the idea that using a social network can just as easily be a social performance, and you realize how the sharing of news can also be part of the performance.

Consider Facebook: Unlike Twitter, it enables users to share information in a variety of forms – status updates, questions, polls, videos, galleries, pages, groups, etc – allowing the news to retain its many facets, and imposing no character limit on what you have to say about it.

Facebook v. Twitter

So you’d think people who want the best updates on breaking news would go to Facebook, and that’s where you might be wrong. ‘Might’ because, on the one hand, Twitter has a lower response time, keeps news very accessible, encourages a more non-personal social media performance, and has a high global reach. These reasons have also made Twitter a favorite among researchers who want to study how information behaves on a social network.

On the other hand, almost 30% of the American general population gets its news from Facebook, with Twitter and YouTube at par with a command of 10% each, if a Pew Research Center technical report is to be believed. Other surveys have also shown that there are more people from India on Facebook than on Twitter. And ignoring Facebook would seem remiss anyway when you realize it has 1.28 billion monthly active users from around the world.

A screenshot of Facebook Graph Search.


Since 2013, Facebook has made it easier for users to find news in its pages. In June that year, it introduced the #hashtagging facility to let users track news updates across various conversations. In September, it debuted Graph Search, making it easier for people to locate topics they wanted to know more about. Even though the platform’s allowance for privacy settings stunts the kind of free propagation of information that’s possible on Twitter (and only 28% of Facebook users made any of their content publicly available), Facebook’s volume of updates enables its fraction of public updates to rise to levels comparable with those of Twitter.

Ponnurangam Kumaraguru and Prateek Dewan, from the Indraprastha Institute of Information Technology, New Delhi (IIIT-D), leveraged this to investigate how Facebook and Twitter compared when sharing information on real-world events. Kumaraguru explained his motivation: “Facebook is so famous, especially in India. It’s much bigger in terms of the number of users. Also, having seen so many studies on Twitter, we were curious to know if the same outcomes as from work done on Twitter would hold for Facebook.”

The duo used the social networks’ respective APIs to query for keywords related to 16 events that occurred during 2013. They explain, “Eight out of the 16 events we selected had more than 100,000 posts on both Facebook and Twitter; six of these eight events saw over 1 million tweets.” Their pre-print paper was submitted to arXiv on May 19.
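The shape of that keyword query can be imitated offline. The sketch below is a toy reconstruction, not the authors’ code: given timestamped posts from two platforms, it finds how soon after an event each platform first mentioned any keyword. The timestamps are made up to match the 1 min 13 s and 2 min 44 s gaps reported later in this piece.

```python
from datetime import datetime

# Given a stream of (timestamp, text) posts, return the number of
# seconds after event_time that any keyword was first mentioned.
def first_mention(posts, keywords, event_time):
    hits = [t for t, text in posts
            if any(k in text.lower() for k in keywords)]
    return (min(hits) - event_time).total_seconds() if hits else None

event = datetime(2013, 4, 15, 14, 49, 0)   # illustrative event time
fb = [(datetime(2013, 4, 15, 14, 50, 13), "Explosion at the marathon finish line")]
tw = [(datetime(2013, 4, 15, 14, 52, 57), "Blast reported at the Boston Marathon")]
keywords = ["blast", "explosion", "marathon"]

fb_lag = first_mention(fb, keywords, event)   # 73.0 seconds
tw_lag = first_mention(tw, keywords, event)   # 237.0 seconds
```

The manual step the authors had to do – choosing the keywords per event – is exactly the `keywords` list here, which is why the paper notes it as a possible source of detection delay.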

An upper hand

In all, they found that an unprecedented event appeared on Facebook just after 11 minutes, while on Twitter, according to a 2014 study from the Association for the Advancement of Artificial Intelligence (AAAI), it took over ten times as long. Specifically, after the Boston Marathon bombings, “the first [relevant] Facebook post occurred just 1 minute 13 seconds after the first blast, which was 2 minutes 44 seconds before the first tweet”.

However, this order-of-magnitude difference could be restricted to Kumaraguru’s choice of events because the AAAI study claims breaking news was broken fastest during 29 major events on Twitter, although it considered only updates on trending topics (and the first update on Twitter, according to them, appeared after two hours).

The data-mining technique could also have played a role in offsetting the time taken for an event to be detected because it requires the keywords being searched to be keyed in manually. Finally, the Facebook API is known to be more rigorous than Twitter’s, whose ability to return older tweets is restricted. On the downside, the output from the Facebook API is restricted by users’ privacy settings.

Nevertheless, Kumaraguru’s conclusions paint a picture of Facebook being just as resourceful as Twitter when tracking real-world events – especially in India – leaving news discoverability to take the blame. Three of the 16 chosen events were completely local to India, and they were all accompanied by more activity on Facebook than on Twitter.


Even after the duo corrected for URLs shared on both social networks simultaneously (through clients like Buffer and HootSuite) – 0.6% of the total – Facebook had the upper hand not just in primacy but also origin. According to Kumaraguru and Dewan, “2.5% of all URLs shared on Twitter belonged to the facebook.com domain, but only 0.8% of all URLs shared on Facebook belonged to the twitter.com domain.”

Facebook also seemed qualitatively better because spam was present in only five events. On Twitter, spam was found to be present in 13. This disparity could be addressed by programs built to filter spam from social media timelines in real time – the sort of service that journalists will find very useful.

Kumaraguru and Dewan resorted to picking out spam based on differences in sentence styles. This way, they were able to avoid missing spam that was stylistically conventional but irrelevant in terms of content, too. A machine wouldn’t have been able to do this just as well and in real-time unless it was taught – in much the same way you teach your Google Mail inbox to automatically sort email.
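Picking out spam by sentence style, as the duo did, can be loosely sketched with surface features. The features and thresholds below are my own illustrative guesses, not those from the paper; a trained classifier would learn such weights from labeled examples instead of hard-coding them.

```python
import re

# Score a post on stylistic surface features rather than topic keywords:
# shouting (all-caps words), exclamation marks, and link density.
def style_features(text):
    words = text.split()
    caps = sum(w.isupper() and len(w) > 1 for w in words)
    urls = len(re.findall(r"https?://\S+", text))
    bangs = text.count("!")
    return {"caps_ratio": caps / max(len(words), 1),
            "urls": urls, "bangs": bangs}

# Crude rule-based decision; real systems would learn these thresholds.
def looks_spammy(text):
    f = style_features(text)
    return f["caps_ratio"] > 0.5 or f["bangs"] >= 3 or f["urls"] >= 2
```

The catch the authors point out is content-relevance: a post can be stylistically ordinary yet irrelevant, which rule sets like this one miss and a human reader does not – hence the comparison to training a mail inbox by example.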

Digital information forensics

A screenshot of TweetCred at work. Image: Screenshot of TweetCred Chrome Extension


Patrick Meier, a self-proclaimed – but reasonably so – pioneer in the emerging field of humanitarian technologies, wrote a blog post on April 28 describing a browser extension called TweetCred, which is just this sort of learning machine. Install it and open Twitter in your browser. Above each tweet, you will now see a credibility bar that grades each tweet out of 7 points, with 7 denoting the highest credibility.

If you agree with a rating, you can bolster it with a thumbs-up that appears on hover. If you disagree, you can give the shown TweetCred rating a thumbs-down and mark what you think is correct. Meier makes it clear that, in its first avatar, the app is geared toward rating disaster/crisis tweets. A paper describing the app was submitted to arXiv on May 21, co-authored by Kumaraguru, Meier, Aditi Gupta (IIIT-D) and Carlos Castillo (Qatar Computing Research Institute).

Between the two papers, a common theme is the origin and development of situational awareness. We stick to Twitter for our breaking news because it’s conceptually similar to Facebook, fast and, importantly, cuts to the chase, so to speak. In parallel, we’re also aware that Facebook is similarly equipped to reconstruct details because of its multimedia options and timeline. Even if Facebook and Twitter the organizations believe that they are designed to accomplish different things, the distinction blurs in the event of a real-world crisis.

“Both these networks spread situational awareness, and both do it fairly quickly, as we found in our analysis,” Kumaraguru said. “We’d like to explore the credibility of content on Facebook next.” But as far as establishing a mechanism to study the impact of Facebook and Twitter on the flow of information is concerned, the authors have exposed a facet of Facebook that Facebook, Inc., could help leverage.

A revisitation inspired by Facebook’s opportunities

When a habit forms – rather, becomes fully formed – it becomes difficult to recognize the drive behind its perpetuation. Am I still doing what I’m doing for the habit’s sake, or is it that I still love what I do and that’s why I’m doing it? In the early stages of habit-formation, the impetus has to come from within – let’s say as a matter of spirit – because it’s a process of creation. Once the entity has been created, once it is fully formed, it begins to sustain itself. It begins to attract attention, the focus of other minds, perhaps even the labor of other wills. That’s the perceived pay-off of persevering at the beginning, persevering in the face of nil returns.

But where the perseverance really makes a difference is when, upon the onset of that dull moment, upon the onset of some lethargy or the writer’s block, we somehow lose the ability to tell apart fatigue-of-the-spirit and suspension-of-the-habit. If I am no longer able to write, even if at least for a day or so, I should be able to tell the difference between that pit-stop and a perceived threat of the habit starting to become endangered. If we don’t learn to make that distinction – which is more palpable than fine or blurry most of the time – then we will have persevered for nothing but perseverance’s sake.

This realization struck me after I opened a Facebook page for my blog so that, given my incessant link-sharing on the social network, only the people who wanted to read the stuff I shared could sign up and receive the updates. I had no intention earlier to use Facebook as anything but a socialization platform, but after the true nature of my activity on Facebook was revealed to me (by myself), I realized my professional ambitions had invaded my social ones. So, to remind myself why the social was important, too, I decided to stop sharing news-links and analyses on my timeline.

However, after some friends expressed excitement – that I never quite knew was there – about being able to get my updates in a more cogent manner, I understood that there were people listening to me, that they did spend time reading what I had to say on science news, etc., not just on my blog but also wherever I decided to post it! At the same moment, I thought to myself, “Now, why am I blogging?” I had no well-defined answer, and that’s when I knew my perseverance was being misguided by my own hand, misdirected by my own foolishness.

I opened astrohep.wordpress.com in January, 2011, and whatever science- or philosophy-related stories I had to tell, I told here. After some time, during a period coinciding with the commencement of my formal education in journalism, I started to use isnerd more effectively: I beat down the habit of using big words (simply because they encapsulated better whatever I had to say) and started to put some effort in telling my stories differently, I did a whole lot of reading before and while writing each post, and I used quotations and references wherever I could.

But the reason I’d opened this blog stayed intact all the time (or at least I think it did): I wanted to tell my science/phil. stories because some of the people around me liked hearing them and I thought the rest of the world might like hearing them, too.

At some point, however, I crossed over into the other side of perseverance: I was writing some of my posts not because they were stories people might like to hear but because, hey, I was a story-writer and what do I do but write stories! I was lucky enough to receive no nasty responses to some absolutely egregious pieces of non-fiction on this blog, and, in parallel, I was unlucky enough to not understand that a reader, no matter how bored, never would want to be presented crap.

Now, where I used to draw pride from pouring so much effort into a small blog in one corner of WordPress, I draw pride from telling stories somewhat effectively – although still not as effectively as I’d like. Now, astrohep.wordpress.com is not a justifiable encapsulation of my perseverance, and nothing is or will be until I have the undivided attention of my readers whenever I have something to present them. I was wrong in assuming that my readers would stay with me and take to my journey as theirs, too: A writer is never right in assuming that.