UCL cancels homeopathy event by Indian docs

An India-based homeopathic organisation caused ripples in academic circles in the UK over the last few days after announcing it would conduct a conference on treating cancer at the University College London (UCL) premises – an appointment that has since been cancelled by UCL.

Although homeopathy has been widely dismissed as possessing zero curative potential, it continues to have an existence ranging from undemonstrative to unrestrained in many countries. In the UK, its practice is restricted by law; further compounding the issue is that the scheduled conference plans to discuss ways to manage cancer with homeopathy, the kind of advertising that’s barely legal in the country (see: Section 4, Cancer Act, 1939).

The website built for the event says that the Dr. Prasanta Banerji Homeopathic Research Foundation, based in Kolkata, will conduct the conference at the UCL Institute of Neurology, with an entry fee of £180. The two-day event will discuss the so-called Banerji Protocols, a set of methods developed by doctors Prasanta and Pratip Banerji to manage various ailments using only homeopathy and arrive at diagnoses quickly. However, their claims appear insufficiently backed up – a list of publications on the foundation’s page doesn’t contain any peer-reviewed studies or reports from randomised clinical trials.

Once the event’s details were publicised, the furore was centred on the Banerjis’ using UCL premises to promote their methods. As Andy Lewis wrote in The Quackometer, “[UCL’s] premises are being used to bring respectability to a thoroughly disturbing business.” However, after complaints lodged by multiple activists, researchers and others, UCL cancelled the event on February 1 and said, according to blogger David Colquhoun, that the booking was made by a “junior [secretary] unaware of issues”, that it had learnt its lesson, and that a process had been set up to prevent similar issues from recurring in the future.

The UCL clarification came close on the heels of the UK’s Medicines and Healthcare products Regulatory Agency approving five homeopathic combinations to make therapeutic claims. All combinations are made by a company named Helios and assure palliative and curative effects, including for hay fever. Edzard Ernst, noted for his vehement opposition to homeopathy, wrote in response on his blog, “If you look critically at the evidence, you are inevitably going to arrive at entirely different verdicts about the effectiveness of these remedies: they actually do nothing!”

It’s notable that the marketing practices that the Banerjis are following closely mimic those generally adopted by people selling dubious products, services or ideas:

  • Advertising methods through case studies instead of scientific details – Three items on the conference agenda read: “Evidence based management of cancer, renal failure and other serious illnesses with case presentations including radiology and histopathology images” and “Live case studies to demonstrate case taking for difficult conditions”
  • Conflating invitation from institutes with invitation from governments (the latter hardly ever happens) – From banerjiprotocols.in: “Under invitation from Spain, Portugal, Royal Academy of Japan, USA, Roswell park cancer centre at Buffalo, New York, Italy, Netherlands, Germany we have done workshops and teaching seminars and we received standing ovations in all the places.”
  • Citing alleged accreditation by prestigious institutions but of which no official record exists – Also from banerjiprotocols.in: “Our protocol for Brain cancer & Breast Cancer has been experimented by the scientist of the MD Anderson Cancer Centre, Houston, USA and found in vitro experiment that these medicines selectively kills cancer cells but not the normal cells. Joint paper by us and scientist, professor of cell biology and genetics has been published in International Journal of Oncology. Our work with National Cancer Institute, USA has been published in journal of Oncology Reports.” – The papers are not to be found.

Others include referring to essays and books of their own authorship; presenting their publication as validation of their methods; not participating in any collaborative work, especially with accredited research institutions; and often labouring unto not insubstantial commercial gains.

Despite a World Health Organisation directive in 2009 cautioning against the use of homeopathy to cure serious illnesses like malaria, it is officially counted among India’s national systems of medicine. Its research and practice receives support from the Department of Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homeopathy (AYUSH) under the Ministry of Health and Family Welfare. From 1980 to 2010, the number of homeopathic doctors in the country doubled while the number of dispensaries increased four-fold.

The Wire
February 2, 2016

Tabby’s star

Announcing the WTF star

This paper presents the discovery of a mysterious dipping source, KIC 8462852, from the Planet Hunters project. In just the first quarter of Kepler data, Planet Hunter volunteers identified KIC 8462852’s light curve as a “bizarre”, “interesting”, “giant transit” (Q1 event depth was 0.5% with a duration of 4 days). As new Kepler data were released in subsequent quarters, discussions continued on ‘Talk’ about KIC 8462852’s light curve peculiarities, particularly ramping up pace in the final observation quarters of the Kepler mission.

Umm, is there an alien megastructure around the star?

The most extreme case of a transiting megastructure would be a structure or swarm so large and opaque that it completely occults the star. In this case there might be a very small amount of scattered light from other components of a swarm, but for the most part the star would go completely dark at optical wavelengths. In the limit that such a structure or swarm had complete coverage of the star, one has a “complete Dyson sphere” (α = 1 in the AGENT formalism of Wright et al. 2014a). Less complete swarms or structures (as in the case of Badescu and Shkadov’s scenarios above) undergoing (perhaps non-Keplerian) orbital motion might lead to a star “winking out” as the structure moved between Earth and the star. In such scenarios, the occulting structure might be detectable at mid-infrared wavelengths if all of the intercepted stellar energy is ultimately disposed of as waste heat (that is, in the AGENT formalism, if ε ≈ α and α is of order 1).

If there are aliens around the star, they aren’t pinging us in radio

We have made a radio reconnaissance of the star KIC 8462852 whose unusual light curves might possibly be due to planet-scale technology of an extraterrestrial civilization. The observations presented here indicate no evidence for persistent technology-related signals in the microwave frequency range 1 – 10 GHz with threshold sensitivities of 180 – 300 Jy in a 1 Hz channel for signals with 0.01 – 100 Hz bandwidth, and 100 Jy in a 100 kHz channel from 0.1 – 100 MHz. These limits correspond to isotropic radio transmitter powers of 4 – 7 × 10^15 W and 10^20 W for the narrowband and moderate band observations. These can be compared with Earth’s strongest transmitters, including the Arecibo Observatory’s planetary radar (2 × 10^13 W EIRP). Clearly, the energy demands for a detectable signal from KIC 8462852 are far higher than this terrestrial example (largely as a consequence of the distance of this star). On the other hand, these energy requirements could be very substantially reduced if the emissions were beamed in our direction. Additionally, it’s worth noting that any society able to construct a Dyson swarm will have an abundant energy source, as the star furnishes energy at a level of ~10^27 watts. This report represents a first survey placing upper limits on anomalous flux from KIC 8462852. We expect that this star will be the object of additional observations for years to come.
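As a sanity check on the quoted narrowband figure – my arithmetic, not the paper’s – the isotropic power corresponding to a flux threshold is just the flux multiplied by the area of a sphere at the star’s distance, taken here to be roughly 450 parsecs (the estimate in use at the time):

```python
import math

PC_IN_M = 3.086e16          # metres per parsec
JY = 1e-26                  # W per sq. metre per Hz, i.e. one jansky

d = 450 * PC_IN_M           # assumed distance to KIC 8462852, ~450 pc
area = 4 * math.pi * d**2   # area of a sphere of that radius
delta_nu = 1.0              # 1 Hz channel, as quoted above

for s_jy in (180, 300):     # the quoted threshold sensitivities
    eirp = area * (s_jy * JY) * delta_nu
    print(f"{s_jy} Jy threshold -> isotropic transmitter power ~ {eirp:.1e} W")
# Prints values between ~4e15 and ~7e15 W, consistent with the 4-7 x 10^15 W quoted.
```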

There could be a comet swarm around the star

We find that a comet family on a single orbit with dense clusters can cause most of the observed complex series of irregular dips that occur after day 1500 in the KIC 8462852 light curve. However, the fit requires a large number of comets and is also not well constrained. We cannot limit the structure of the system much beyond the observational constraints and the dynamical history of the comet family is unknown, but if the comet family model is correct, there is likely a planetary companion forming sungrazers. Since the comets are still tightly clustered within each dip, a disruption event likely occurred recently within orbit, like tidal disruption by the star. This comet family model does not explain the large dip observed around day 800 and treats it as unrelated to the ones starting at day 1500. The flux changes too smoothly and too slowly to be easily explained with a simple comet family model.

Okay, the aliens won’t have it hard to ping us, but if they don’t know we’re listening, we might have to look pretty hard for them

If, however, any inhabitants of KIC 8462852 were targeting our solar system (Shostak & Villard 2004), the required energy would be reduced greatly. As an example, if such hypothetical extraterrestrials used a 10 m mirror to beam laser pulses in our direction, then using a 10 m receiving telescope, the minimum detectable energy per pulse would be 125,000 joules. If this pulse repeated every 20 minutes, then the average power cost to the transmitting civilization would be a rather low 100 watts. This would be a negligible cost for any civilization capable of constructing a megastructure large enough to be responsible for the dimming seen with KIC 8462852, particularly if that structure were used to capture a large fraction of the star’s energy (~10^27 watts). It would be considerably easier to detect such signals intentionally directed toward Earth than to intercept collimated communications between two star systems along a vector that accidentally intersects the Earth (Forgan 2014).
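The power figure is simple arithmetic, worth checking for yourself (my numbers, not anything from the paper’s analysis):

```python
energy_per_pulse = 125_000          # joules, the minimum detectable pulse energy quoted
pulse_interval = 20 * 60            # seconds between pulses

average_power = energy_per_pulse / pulse_interval
print(f"Average power at the transmitter: ~{average_power:.0f} W")   # ~104 W, the "rather low 100 watts"
```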

BTW, the star faded in brightness in the last 100 years

KIC 8462852 is suffering a century-long secular fading, and this is contrary to the various speculations that the obscuring dust was created by some singular catastrophic event. If any such singular event happened after around 1920, then the prior light curve should appear perfectly flat, whereas there is significant variability before 1920. If the trend is caused by multiple small catastrophic events, then it is difficult to understand how they can time themselves so as to mimic the trend from 1890-1989. In the context of the idea that the star is undergoing a Late Heavy Bombardment (Lisse et al. 2015), it is implausible that such a mechanism could start up on a time scale of a century, or that it would start so smoothly with many well-spaced collisions.

Wait, there’s a reason it might not have faded in the last 100 years

Assuming that all stars have been drawn randomly from the same sample, the chance of drawing 2 of 2 constant stars is 13%. It might be attributed to bad luck that these apparent data discontinuities were not seen in the first place. After visual inspection of all data, we favour the interpretation that both structural breaks, and long-term (decades) linear trends are present in these data. The structural breaks appear most prominent at the “Menzel gap”, but might also be present at other times. These issues might arise from changes in technology, and imperfect calibration.

32 days and counting

It’s been 32 days since I’ve been able to write in that half-ranting half-jargon-dropping style that’s always been the clearest indication that I’m excited about something (yes, my writing tells me things I can’t otherwise figure out). In the last three weeks, I’ve written three pieces for The Wire, and all of them were coughed-spluttered-staccatoed out. I dearly miss the flow and 32 days is the longest it’s been gone.

I’m not sure which ones are the causes and which the effect – but periods of the block are also accompanied by the inability to think things through, being scatter-brained and easily distracted, and a sense of general disinterestedness. And when I can’t write normally, I can’t read or ideate normally either; even the way I’ve ratified and edited submissions for The Wire took a toll.

I’ve tried everything that’s cleared the block in the past but nothing has worked. I tried writing more, reading more, cathartic music; moving around, meeting people, changes of scenery; a routine I’ve traditionally reserved for phases like this, a diet, some exercise. This is frightening – I need a new solution and I’ve no idea where to look. Do you just wait for your block to fade or do you have a remedy for it?

Featured image credit: Matthias Ripp/Flickr, CC BY 2.0.

Priggish NEJM editorial on data-sharing misses the point it almost made

Twitter outraged like only Twitter could on January 22 over a strange editorial that appeared in the prestigious New England Journal of Medicine, calling for medical researchers to not make their research data public. The call comes at a time when the scientific publishing zeitgeist is slowly but surely shifting toward journals requiring, sometimes mandating, the authors of studies to make their data freely available so that their work can be validated by other researchers.

Through the editorial, written by Dan Longo and Jeffrey Drazen, both doctors and the latter the chief editor, NEJM also cautions medical researchers to be on the lookout for ‘research parasites’, a coinage that the journal says is befitting “of people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited”. As @omgItsEnRIz tweeted, do the authors even science?

The choice of words is more incriminating than the overall tone of the text, which also tries to express the more legitimate concern of replicators not getting along with the original performers. However, by saying that the ‘parasites’ may “use the data to try to disprove what the original investigators had posited”, NEJM has crawled into an unwise hole of infallibility of its own making.

In October 2015, a paper published in the Journal of Experimental Psychology pointed out why replication studies are probably more necessary than ever. The misguided publish-or-perish impetus of scientific research, together with publishing in high impact-factor journals being lazily used as a proxy for ‘good research’ by many institutions, has led researchers to hack their results – i.e. prime them (say, by cherry-picking) so that the study ends up reporting sensational results when, really, duller ones exist.

The JEP paper had a funnel plot to demonstrate this. Quoting from the Neuroskeptic blog, which highlighted the plot when the paper was published, “This is a funnel plot, a two-dimensional scatter plot in which each point represents one previously published study. The graph plots the effect size reported by each study against the standard error of the effect size – essentially, the precision of the results, which is mostly determined by the sample size.” Note: the y-axis is running top-down.


The paper concerned itself with 43 previously published studies discussing how people’s choices were perceived to change when they were gently reminded about sex.

As Neuroskeptic goes on to explain, there are three giveaways in this plot. One is obvious – that the distribution of replication studies is markedly separated from that of the original studies. Second: among the original studies, the less precise the result (i.e. the smaller the sample), the larger the reported effect tended to be. Third: the original studies all seemed to “hug” the outer edge of the grey triangles, which mark the region where results are not statistically significant. The uniform ‘hugging’ is an indication that all those original studies were likely guilty of cherry-picking from their data to conclude with results that are just barely significant, an act called ‘p-hacking’.
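If you’d like to see what this looks like, here’s a minimal sketch of a funnel plot with simulated numbers (emphatically not the data from the JEP paper) – the red points ‘hug’ the edge of the grey non-significant region the way the original studies do in the plot described above:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated studies, purely for illustration:
# "originals" sit just past the p = 0.05 boundary; "replications" scatter around zero.
se_orig = rng.uniform(0.15, 0.45, 30)                   # standard errors of the original studies
eff_orig = 1.96 * se_orig + rng.normal(0, 0.03, 30)     # effects just barely significant
se_rep = rng.uniform(0.05, 0.20, 15)                    # replications with bigger samples, smaller SE
eff_rep = rng.normal(0, 1, 15) * se_rep                 # effects consistent with zero

# The grey funnel: |effect| < 1.96 * SE, i.e. not significant at p = 0.05.
se_grid = np.linspace(0, 0.5, 100)
plt.fill_betweenx(se_grid, -1.96 * se_grid, 1.96 * se_grid, color="0.85",
                  label="not significant (p > 0.05)")

plt.scatter(eff_orig, se_orig, color="crimson", label="original studies (simulated)")
plt.scatter(eff_rep, se_rep, color="navy", label="replications (simulated)")
plt.gca().invert_yaxis()               # y-axis runs top-down, as in the plot described above
plt.xlabel("effect size")
plt.ylabel("standard error of effect size")
plt.legend()
plt.show()
```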

A line of research can appear to progress rapidly but without replication studies it’s difficult to establish if the progress is meaningful for science – a notion famously highlighted by John Ioannidis, a professor of medicine and statistics at Stanford University, in his two landmark papers in 2005 and 2014. Björn Brembs, a professor of neurogenetics at the Universität Regensburg, Bavaria, also pointed out how the top journals’ insistence on sensational results could result in a congregation of unreliability. Together with a conspicuous dearth of systematically conducted replication studies, this ironically implies that the least reliable results are often taken the most seriously thanks to the journals they appear in.

The most accessible sign of this is a plot between the retraction index and the impact factor of journals. The term ‘retraction index’ was coined in the same paper in which the plot first appeared; it stands for “the number of retractions in the time interval from 2001 to 2010, multiplied by 1,000, and divided by the number of published articles with abstracts”.
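In code, the metric is a one-liner; the numbers below are made up, purely to show the units at work:

```python
def retraction_index(retractions_2001_2010: int, articles_with_abstracts: int) -> float:
    """Retractions over 2001-2010, multiplied by 1,000, divided by the number of
    published articles with abstracts - the definition quoted above."""
    return retractions_2001_2010 * 1000 / articles_with_abstracts

# A hypothetical journal with 25 retractions out of 30,000 abstract-bearing articles:
print(retraction_index(25, 30_000))   # ~0.83
```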

Impact factor of journals plotted against the retraction index. The highest IF journals – Nature, Cell and Science – are farther along the trend line than they should be. Source: doi: 10.1128/IAI.05661-11

Look where NEJM is. Enough said.

The journal’s first such supplication appeared in 1997, then writing against pre-print copies of medical research papers becoming available and easily accessible – à la the arXiv server for physics. Then, the authors, again two doctors, wrote, “medicine is not physics: the wide circulation of unedited preprints in physics is unlikely to have an immediate effect on the public’s well-being even if the material is biased or false. In medicine, such a practice could have unintended consequences that we all would regret.” Though a reasonable PoV, the overall tone appeared to stand against the principles of open science.

More importantly, both editorials, separated by almost two decades, make one reasonable argument that sadly appears to make sense to the journal only in the context of a wider set of arguments, many of them contemptible. For example, Drazen seems to understand the importance of data being available for studies to be validated but has differing views on different kinds of data. Two days before his editorial was published, another appeared co-authored by 16 medical researchers – Drazen one of them – in the same journal, this time calling for anonymised patient data from clinical trials to be made available to other researchers because it would “increase confidence and trust in the conclusions drawn from clinical trials. It will enable the independent confirmation of results, an essential tenet of the scientific process.”

(At the same time, the editorial also says, “Those using data collected by others should seek collaboration with those who collected the data.”)

For another example, NEJM labours under the impression that the data generated by medical experiments will not ever be perfectly communicable to other researchers who were not involved in the generation of it. One reason it provides is that discrepancies in the data between the original group and a new group could arise because of subtle choices made by the former in the selection of parameters to evaluate. However, the solution doesn’t lie in the data being opaque altogether.

A better way to conduct replication studies

An instructive example played out in May 2014, when the journal Social Psychology published a special issue dedicated to replication studies. The issue contained both successful and failed attempts at replicating some previously published results, and the whole process was designed to eliminate biases as much as possible. For example, the journal’s editors Brian Nosek and Daniel Lakens didn’t curate replication studies but instead registered the studies before they were performed so that their outcomes would be published irrespective of whether they turned out positive or negative. For another, all the replications used the same experimental and statistical techniques as in the original study.

One scientist who came out feeling wronged by the special issue was Simone Schnall, the director of the Embodied Cognition and Emotion Laboratory at Cambridge University. The results of a paper co-authored by Schnall in 2008 had failed to be replicated, but she believed there had been a mistake in the replication that, when corrected, would corroborate her group’s findings. However, her statements were quickly and widely interpreted to mean she was being a “sore loser”. In one blog, her 2008 findings were called an “epic fail” (though the words were later struck out).

This was soon followed by a rebuttal from Schnall, followed by a counter by the replicators, and then Schnall writing two blog posts (here and here). Over time, the core issue became how replication studies were conducted – who performed the peer review, the level of independence the replicators had, the level of access the original group had, and how journals could be divorced from having a choice about which replication studies to publish. But relevant to the NEJM context, the important thing was the level of transparency maintained by Schnall & co. as well as the replicators, which provided a sheen of honesty and legitimacy to the debate.

The Social Psychology issue was able to take the conversation forward, getting authors to talk about the psychology of research reporting. There have been few other such instances – of incidents exploring the proper mechanisms of replication studies – so if the NEJM editorial had stopped itself with calling for better organised collaborations between a study’s original performers and its replicators, it would’ve been great. As Longo and Drazen concluded, “How would data sharing work best? We think it should happen symbiotically … Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested.”

The mistake lies in thinking anything else would be parasitic. And the attitude affects not just other scientists but some science communicators as well. Any journalist or blogger who has been reporting on a particular beat for a while stands to become a ‘temporary expert’ on the technical contents of that beat. And with exploratory/analytical tools like R – which is easier than you think to pick up – the communicator could dig deeper into the data, teasing out issues more relevant to their readers than what the accompanying paper thinks is the highlight. Sure, NEJM remains apprehensive about how medical results could be misinterpreted to terrible consequence. But the solution there would be for the communicators to be more professional and disciplined, not for the journal to be more opaque.

The Wire
January 24, 2016

Parsing Ajay Sharma v. E = mc2

Featured image credit: saulotrento/Deviantart, CC BY-SA 3.0.

To quote John Cutter (Michael Caine) from The Prestige:

Every magic trick consists of three parts, or acts. The first part is called the pledge, the magician shows you something ordinary. The second act is called the turn, the magician takes the ordinary something and makes it into something extraordinary. But you wouldn’t clap yet, because making something disappear isn’t enough. You have to bring it back. Now you’re looking for the secret. But you won’t find it because of course, you’re not really looking. You don’t really want to work it out. You want to be fooled.

The Pledge

Ajay Sharma is an assistant director of education with the Himachal Pradesh government. On January 10, the Indo-Asian News Service (IANS) published an article in which Sharma claims Albert Einstein’s famous equation E = mc2 is “illogical” (republished by The Hindu, Yahoo! News, Gizmodo India, among others). The precise articulation of Sharma’s issue with it is unclear because the IANS article contains multiple unqualified statements:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

Sharma also claims Einstein’s work wasn’t original and only ripped off Galileo, Henri Poincaré, Hendrik Lorentz, Joseph Larmor and George FitzGerald.

The Turn

Let’s get some things straight.

Mass-energy equivalence – E = mc2 isn’t wrong but it’s often overlooked that it’s an approximation. This is the full equation:

E² = m₀²c⁴ + p²c²

(Notice the similarity to the Pythagoras theorem?)

Here, m₀ is the mass of the object (say, a particle) when it’s not moving, p is its momentum (at low speeds, simply mass times velocity – m*v) and c, the speed of light. When the particle is not moving, v is zero, so p is zero, and so the right-most term in the equation can be removed. This yields:

E² = m₀²c⁴ ⇒ E = m₀c²

If a particle were moving close to the speed of light, applying just E = m₀c² would be wrong without the rapidly growing p²c² component. In fact, the equivalence remains applicable in its most famous form only in cases where an observer is co-moving along with the particle. So, there is no mass-energy equivalence as much as a mass-energy-momentum equivalence.
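A quick numerical sketch makes the point – here using an electron’s rest mass as an arbitrary example (my illustration, not anything from Einstein’s paper):

```python
import math

C = 299_792_458.0            # speed of light, m/s
m0 = 9.109e-31               # electron rest mass in kg, chosen arbitrarily for illustration

def full_energy(v: float) -> float:
    """E = sqrt((m0*c^2)^2 + (p*c)^2), with relativistic momentum p = gamma*m0*v."""
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    p = gamma * m0 * v
    return math.sqrt((m0 * C**2) ** 2 + (p * C) ** 2)

rest_energy = m0 * C**2
for frac in (0.01, 0.5, 0.99):
    v = frac * C
    print(f"v = {frac:4.2f}c: E / (m0*c^2) = {full_energy(v) / rest_energy:.3f}")
# At 0.01c the momentum term is negligible and E is essentially m0*c^2; at 0.99c the
# full energy is about seven times m0*c^2, so the short famous form alone badly underestimates it.
```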

And at the time of publishing this equation, Einstein was aware that it was held up by multiple approximations. As Terence Tao sets out, these would include (but not be limited to) p being equal to mv at low velocities, the laws of physics being the same in two frames of reference moving at uniform velocities, Planck’s and de Broglie’s laws holding, etc.

These approximations are actually inherited from Einstein’s special theory of relativity, which describes the connection between space and time. In a paper dated September 27, 1905, Einstein concluded that if “a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“. ‘L’ was simply the notation for energy that Einstein used until 1912, when he switched to the more-common ‘E’.

The basis of his conclusion was a thought experiment he detailed in the paper, where a point-particle emits “plane waves of light” in opposite directions – viewed first from a frame in which the body is at rest and then from one in which it is in motion. He then calculates the difference between the body’s kinetic energy before and after the emission, as measured in the moving frame, accounting for the energy carried away by the radiated light:

K₀ − K₁ = ½ (L/c²) v²

Since the velocity v is unchanged and kinetic energy at these speeds goes as ½mv², the only way the kinetic energy could have dropped by this amount is if the mass itself dropped – by L/c².

This is what Sharma is referring to when he says, “Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.” Well… sure. Einstein’s was a gedanken (thought) experiment to illustrate a direct consequence of the special theory. How he chose to frame the problem depended on what connection he wanted to illustrate between the various attributes at play.

And the more attributes are included in the experiment, the more connections will arise. Whether or not they’d be meaningful (i.e. being able to represent a physical reality – such as with being able to say “if a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“) is a separate question.

As for another of Sharma’s claims – that the equivalence is “only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity”: Einstein’s theory of relativity is the best framework of mathematical rules we have to describe all these parameters together. So any gedanken experiment involving just these parameters can be properly analysed, to the best of our knowledge, with Einstein’s theory, and within that theory – and as a consequence of that theory – the mass-energy-momentum equivalence will persist. This implication was demonstrated by the famous Cockcroft-Walton experiment in 1932.

General theory of relativity – Einstein’s road to publishing his general theory (which turned 100 last year) was littered with multiple challenges to its primacy. This is not surprising because Einstein’s principal accomplishment was not in having invented something but in having recombined and interpreted a trail of disjointed theoretical and experimental discoveries into a coherent, meaningful and testable theory of gravitation.

As mentioned earlier, Sharma claims Einstein ripped off Galileo, Poincaré, Lorentz, Larmor and FitzGerald. For what it’s worth, he could also have mentioned William Kingdon Clifford, Georg Bernhard Riemann, Tullio Levi-Civita, Gregorio Ricci-Curbastro, János Bolyai, Nikolai Lobachevsky, David Hilbert, Hermann Minkowski and Fritz Hasenöhrl. Here are their achievements in the context of Einstein’s (in a list that’s by no means exhaustive).

  • 1632, Galileo Galilei – Published a book, one of whose chapters features a dialogue about the relative motion of planetary bodies and the role of gravity in regulating their motion
  • 1824-1832, Bolyai and Lobachevsky – Conceived of hyperbolic geometry (which didn’t follow Euclidean laws, like the sum of a triangle’s angles being 180º), which inspired Riemann and his mentor to consider if there was a kind of geometry to explain the behaviour of shapes in four dimensions (as opposed to three)
  • 1854, G. Bernhard Riemann – Conceived of elliptic geometry and a way to compare vectors in four dimensions, ideas that would benefit Einstein immensely because they helped him discover that gravity wasn’t a force in space-time but actually the curvature of space-time
  • 1876, William K. Clifford – Suggested that the forces that shape matter’s motion in space could be guided by the geometry of space, foreshadowing Einstein’s idea that matter influences gravity influences matter
  • 1887-1902, FitzGerald and Lorentz – Showed that observers in different frames of reference that are moving at different velocities can measure the length of a common body to differing values, an idea then called the FitzGerald-Lorentz contraction hypothesis. Lorentz’s mathematical description of this gave rise to a set of formulae called Lorentz transformations, which Einstein later derived through his special theory.
  • 1897-1900, Joseph Larmor – Realised that observers in different frames of reference that are moving at different velocities can also measure different times for the same event, leading to the time dilation hypothesis that Einstein later explained
  • 1898, Henri Poincaré – Interpreted Lorentz’s abstract idea of a “local time” to have physical meaning – giving rise to the idea of relative time in physics – and was among the first physicists to speculate on the need for a consistent theory to explain the consequences of light having a constant speed
  • 1900, Levi-Civita and Ricci-Curbastro – Built on Riemann’s ideas of a non-Euclidean geometry to develop tensor calculus (a tensor is a vector in higher dimensions). Einstein’s field-equations for gravity, which capped his formulation of the celebrated general theory of relativity, would feature the Ricci tensor to account for the geometric differences between Euclidean and non-Euclidean geometry.
  • 1904-1905, Fritz Hasenöhrl – Built on the work of Oliver Heaviside, Wilhelm Wien, Max Abraham and John H. Poynting to devise a thought experiment from which he was able to conclude that heat has mass, a primitive precursor of the mass-energy-momentum equivalence
  • 1907, Hermann Minkowski – Conceived a unified mathematical description of space and time that Einstein could use to better express his special theory. He said of his work: “From this hour on, space by itself, and time by itself, shall be doomed to fade away in the shadows, and only a kind of union of the two shall preserve an independent reality.”
  • 1915, David Hilbert – Derived the general theory’s field equations a few days before Einstein did but managed to have his paper published only after Einstein’s was, leading to an unresolved dispute about who should take credit. However, the argument was made moot by only Einstein being able to explain how Isaac Newton’s laws of classical mechanics fit into the theory – Hilbert couldn’t.

FitzGerald, Lorentz, Larmor and Poincaré all laboured assuming that space was filled with a ‘luminiferous ether’. The ether was a pervasive, hypothetical yet undetectable substance that physicists of the time believed had to exist so electromagnetic radiation had a medium to travel in. Einstein’s theories provided a basis for their ideas to exist without the ether, and as a consequence of the geometry of space.

So, Sharma’s allegation that Einstein republished the work of other people in his own name is misguided. Einstein didn’t plagiarise. And while there are many accounts of his competitive nature, to the point of asserting that a mathematician who helped him formulate the general theory wouldn’t later lay partial claim to it, there’s no doubt that he did come up with something distinctively original in the end.

The Prestige

Ajay Sharma with two of his books. Source: Fundamental Physics Society (Facebook page)

To recap:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Claims that Einstein’s equations are inadequate are difficult to back up because we’re yet to find circumstances in which they seem to fail. Theoretically, they can be made to appear to fail by forcing them to account for, say, higher dimensions, but that’s like wearing suede shoes in the rain and then complaining when they’re ruined. There’s a time and a place to use them. Moreover, the failure of general relativity or quantum physics to meet each other halfway (in a quantum theory of gravity) can’t be pinned on a supposed inadequacy of the mass-energy equivalence alone.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

That a gedanken experiment was limited in scope is a pointless accusation. Einstein was simply showing that A implied B, and was never interested in proving that A’ (a different version of A) did not imply B. And tying all of this to the adequacy (or not) of E = mc2 leads equally nowhere.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

From the literature, the change appears to be one of notation. If not that, then Sharma could be challenging the notion that the energy of a moving body is equal to the sum of the energy of the body at rest and its kinetic energy – which lets Einstein substitute L (or E) for the kinetic energy on the LHS of the equation once E₀ (the energy of the body at rest) is added to the RHS: E = E₀ + K. In which case Sharma’s challenge is even more ludicrous for calling one of the basic tenets of thermodynamics “illogical” without indicating why.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

The “the” before “new results” is the worrying bit: it points to claims of his that have already been made, and suggests they’re contrary to what Einstein has claimed. It’s not that the German is immune to refutation – no one is – but that whatever claim this is seems to be at the heart of what’s at best an awkwardly worded outburst, and which IANS has unquestioningly reproduced.

A persistent search for Sharma’s paper on the web didn’t turn up any results – the closest I got was in unearthing its title (#237) in a list of titles published at a conference hosted by a ‘Russian Gravitational Society’ in May 2015. Sharma’s affiliation is mentioned as a ‘Fundamental Physics Society’ – which in turn shows up as a Facebook page run by Sharma. But an ibnlive.com article from around the same time provides some insight into Sharma’s ‘research’ (translated from the Hindi by Siddharth Varadarajan):

In this way, Ajay is also challenging the great scientist of the 21st century (sic) Albert Einstein. After deep research into his formula, E=mc2, he says that “when a candle burns, its mass reduces and light and energy are released”. According to Ajay, Einstein obtained this equation under special circumstances. This means that from any matter/thing, only two rays of light emerge. The intensity of light of both rays is the same and they emerge from opposite directions. Ajay says Einstein’s research paper was published in 1905 in the German research journal [Annalen der Physik] without the opinion of experts. Ajay claims that if this equation is interpreted under all circumstances, then you will get wrong results. Ajay says that if a candle is burning, its mass should increase. Ajay says his research paper has been published after peer review. [Emphasis added.]
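As an aside, the candle is a good place to see how small the mass change implied by the equivalence actually is. Here is a rough sketch with assumed numbers – a candle radiating about 50 W of heat and light for an hour (my figures, not Sharma’s or Einstein’s):

```python
C = 299_792_458.0                          # speed of light, m/s

power = 50.0                               # assumed radiated power of a candle, watts
duration = 3600.0                          # one hour, in seconds

energy_radiated = power * duration         # E, in joules
mass_equivalent = energy_radiated / C**2   # delta-m = E / c^2, in kilograms

print(f"Energy radiated: {energy_radiated:.0f} J")
print(f"Mass carried away by that energy: {mass_equivalent:.1e} kg")   # ~2e-12 kg, i.e. ~2 nanograms
```

The wax the flame consumes leaves as gases and soot, which is ordinary chemistry; the energy radiated corresponds to a mass change of only a few nanograms, far too small to weigh.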

A pattern underlying some of Sharma’s claims has to do with confusing conjecturing and speculating (even perfectly reasonably) with formulating and defining and proving. The most telling example in this context is alleging that Einstein ripped off Galileo: even if they both touched on relative motion in their research, what Galileo did for relativity was vastly different from what Einstein did. In fact, following the Indian Science Congress in 2015, V. Vinay, an adjunct faculty member at the Chennai Mathematical Institute and a teacher in Bengaluru, had pointed out that these differences in fact encapsulated the epistemological attitudes of the Indian and Greek civilisations: the TL;DR version is that we weren’t a proof-seeking people.

Swinging back to the mass-energy equivalence itself – it’s a notable piece but a piece nonetheless of an expansive theory that’s demonstrably incomplete. And there are other theories like it, like flotsam on a dark ocean whose waters we haven’t been able to see, theories we’re struggling to piece together. It’s a time when Popper’s philosophies haven’t been able to qualify or disqualify ‘discoveries’, a time when the subjective evaluation of an idea’s usefulness seems just as important as objectively testing it. But despite the grand philosophical challenges these times face us with, extraordinary claims still do require extraordinary evidence. And at that Ajay Sharma quickly fails.

Hat-tip to @AstroBwouy, @ainvvy and @hosnimk.

The Wire
January 12, 2016

Ways of seeing

A lot of the physics of 2015 was about how the ways in which we study the natural world had been improved or were improving.

Colliders of the future: LHeC and FCC-he

In this decade, CERN is exploiting and upgrading the LHC – but not constructing “the next big machine”.


Looking into a section of the 6.3-km long HERA tunnel at Deutsches Elektronen-Synchrotron (DESY), Hamburg. Source: DESY

For many years, one of the world’s most powerful scopes, as in a microscope, was the Hadron-Elektron-Ringanlage (HERA) particle accelerator in Germany. Where scopes bounce electromagnetic radiation – like visible light – off surfaces to reveal information hidden to the naked eye, accelerators reveal hidden information by bombarding the target with energetic particles. At HERA, those particles were electrons accelerated to 27.5 GeV. At this energy, the particles can probe a distance of a few hundredths of a femtometer (earlier called fermi) – 2.5 million times better than the 0.1 nanometers that atomic force microscopy can achieve (of course, they’re used for different purposes).
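That scale comparison is easy to verify, taking 0.04 fm as a stand-in for “a few hundredths of a femtometer”:

```python
afm_resolution = 0.1e-9        # ~0.1 nanometres, in metres
hera_probe_scale = 0.04e-15    # ~0.04 femtometres, in metres (an assumed representative value)

print(f"Ratio: {afm_resolution / hera_probe_scale:.1e}")   # ~2.5e6, i.e. 2.5 million times finer
```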

The electrons were then collided head on against protons accelerated to 920 GeV.

Unlike protons, electrons aren’t made up of smaller particles and are considered elementary. Moreover, protons are approx. 2,000-times heavier than electrons. As a result, the high-energy collision is more an electron scattering off of a proton, but the way it happens is that the electron imparts some energy to the proton before scattering off (this is imagined as an electron emitting some energy as a photon, which is then absorbed by the proton). This is called deep inelastic scattering: ‘deep’ for high-energy; ‘inelastic’ because the proton absorbs some energy.

One of the most famous deep-inelastic scattering experiments was conducted in 1968 at the Stanford Linear Accelerator Center. Then, the perturbed protons were observed to ’emit’ other particles – essentially hitherto undetected constituent particles that escaped their proton formation and formed other kinds of composite particles. The constituent particles were initially dubbed partons but later found to be quarks, anti-quarks (the matter/anti-matter particles) and gluons (the force-particles that held the quarks/anti-quarks together).

HERA was shut down in June 2007. Five years later, the plans for a successor at least 100-times more sensitive than HERA were presented – in the form of the Large Hadron-electron Collider (LHeC). As the name indicates, it is proposed to be built adjoining the Large Hadron Collider (LHC) complex at CERN by 2025 – a timeframe based on when the high-luminosity phase of the LHC is set to begin (2024).

Timeline for the LHeC. Source: CERN

On December 15, physicists working on the LHC had announced new results obtained from the collider – two in particular stood out. One was a cause for great, yet possibly premature, excitement: a hint of a yet unknown particle weighing around 747 GeV. The other was cause for a bit of dismay: quantum chromodynamics (QCD), the theory that deals with the physics of quarks, anti-quarks and gluons, seemed flawless across a swath of energies. Some physicists were hoping it wouldn’t be so (because its flawlessness has come at the cost of being unable to explain some discoveries, like dark matter). Over the next decade, the LHC will push the energy frontier further to see – among other things – if QCD ‘breaks’, becoming unable to explain a possible new phenomenon.

Against this background, the LHeC is being pitched as the machine that could be dedicated to examining this breakpoint, and others like it, in more detail than the LHC is equipped to. One helpful factor is that when electrons are one kind of particle participating in a collision, physicists don’t have to worry about how the energy will be distributed among constituent particles, since electrons don’t have any. Hadron collisions, on the other hand, have to deal with quarks, anti-quarks and gluons, and are tougher to analyse.

An energy recovery linac (in red) shown straddling the LHC ring. A rejected design involved installing the electron-accelerator (in yellow) concentrically with the LHC ring. Source: CERN

So, to accomplish this, the team behind the LHeC is considering installing a pill-shaped machine called the energy recovery linac (ERL), straddling the LHC ring (shown above), to produce a beam of electrons that’d then take on the accelerated protons from the main LHC ring – making up the ‘linac-ring LHeC’ design. A first suggestion to install the LHeC as a ring, to accelerate electrons, along the LHC ring was rejected because it would hamper experiments during construction. Anyway, the electrons will be accelerated to 60 GeV while the protons, to 7,000 GeV. The total wall-plug power to the ERL is being capped at 100 MW.

The ERL has a slightly different acceleration mechanism from the LHC, and doesn’t simply accelerate particles continuously around a ring. First, the electrons are accelerated through a radio-frequency field in a linear accelerator (linac – the straight section of the ERL) and then fed into a circular channel, crisscrossed by magnetic fields, curving into the rear end of the linac. The length of the circular channel is such that by the time the electrons travel along it, they slip 180º out of phase with the radio-frequency field (i.e. they return to the linac when the field that accelerated them has swung to the opposite half of its cycle). And when the out-of-phase electrons reenter the linac, they decelerate. Their kinetic energy is lost to the RF field, which intensifies and so provides a bigger kick to the new batch of particles being injected into the linac at just that moment. This way, the linac recovers the kinetic energy from each circulation.

Such a mechanism is employed at all because the amount of energy lost as synchrotron radiation – emitted whenever charged particles are bent along a curved path by magnetic fields – increases drastically as the particle’s mass gets lower.
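How drastic? The energy radiated per turn scales as the fourth power of the Lorentz factor γ (the standard result for synchrotron radiation), and γ = E/mc², so at the same beam energy and bending radius the loss goes as 1/m⁴. A rough comparison of electrons with protons:

```python
# Rest energies in GeV; at a fixed beam energy and bending radius the synchrotron
# loss per turn scales as gamma^4, i.e. as 1/(rest mass)^4.
m_e = 0.000511      # electron
m_p = 0.938         # proton

ratio = (m_p / m_e) ** 4
print(f"Electrons radiate ~{ratio:.1e} times more than protons of the same energy")
# ~1.1e13 -- which is why 60 GeV electrons get an energy-recovery linac rather than
# being stored and re-accelerated around a large ring like the LHC's protons.
```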

The bluish glow from the central region of the Crab Nebula is due to synchrotron radiation. Credit: NASA-ESA/Wikimedia Commons

Keeping in mind the need to explore new areas of physics – especially those associated with leptons (elementary particles of which electrons are a kind) and quarks/gluons (described by QCD) – the energy of the electrons coming out of the ERL is currently planned to be 60 GeV. They will be collided with accelerated protons by positioning the ERL tangential to the LHC ring. And at the moment of the collision, CERN’s scientists hope that they will be able to use the LHeC to study:

  • Predicted unification of the electromagnetic and weak forces (into an electroweak force): The electromagnetic force of nature is mediated by the particles called photons while the weak force, by particles called W and Z bosons. Whether the scientists will observe the unification of these forces, as some theories predict, is dependent on the quality of electron-proton collisions. Specifically, if the square of the momentum transferred between the particles can reach up to 8-9 TeV, the collider will have created an environment in which physicists will be able to probe for signs of an electroweak force at play.
  • Gluon saturation: To quote from an interview given by theoretical physicist Raju Venugopalan in January 2013: “We all know the nuclei are made of protons and neutrons, and those are each made of quarks and gluons. There were hints in data from the HERA collider in Germany and other experiments that the number of gluons increases dramatically as you accelerate particles to high energy. Nuclear physics theorists predicted that the ions accelerated to near the speed of light at the [Relativistic Heavy Ion Collider] would reach an upper limit of gluon concentration – a state of gluon saturation we call colour glass condensate.”
  • Higgs bosons: On July 4, 2012, Fabiola Gianotti, soon to be the next DG of CERN but then the spokesperson of the ATLAS experiment at the LHC, declared that physicists had found a Higgs boson. Widespread celebrations followed – while a technical nitpick remained: physicists only knew the particle resembled a Higgs boson and might not have been the real thing itself. Then, in March 2013, the particle was most likely identified as being a Higgs boson. And even then, one box remained to be checked: that it was the Higgs boson, not one of many kinds. For that, physicists have been waiting for more results from the upgraded LHC. But a machine like the LHeC would be able to produce a “few thousand” Higgs bosons a year, enabling physicists to study the elusive particle in more detail, confirm more of its properties – or, more excitingly, find that that’s not the case – and look for higher-energy versions of it.

A 2012 paper detailing the concept also notes that should the LHC find that signs of ‘new physics’ could exist beyond the default energy levels of the LHeC, scientists are bearing in mind the need for the electrons to be accelerated by the ERL to up to 140 GeV.

The default configuration of the proposed ERL. The bending arcs are totally about 19 km long (three to a side at different energies). Source: CERN

The unique opportunity presented by an electron-proton collider working in tandem with the LHC goes beyond the mammoth energies to a property called luminosity as well. It’s measured in inverse femtobarn per second, denoting the number of events occurring per 10⁻³⁹ sq. cm per second. For example, 10 fb⁻¹ denotes 10 events occurring per 10⁻³⁹ sq. cm s⁻¹ – that’s 10⁴⁰ events per sq. cm per second. (The luminosity over a specific period of time, i.e. without the ‘per seconds’ in the unit, is called the integrated luminosity.) At the LHeC, a luminosity of 10³³ cm⁻² s⁻¹ is expected to be achieved and physicists hope that with some tweaks, it can be hiked by yet another order of magnitude. To compare: this is 100x what HERA achieved, providing an unprecedented scale at which to explore the effects of deep inelastic scattering, and 10x the LHC’s current luminosity.
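To get a feel for how luminosity turns into actual event counts, here’s a small sketch with an assumed (entirely made-up) one-picobarn process and a nominal 10⁷-second operating year:

```python
FB_IN_CM2 = 1e-39                      # one femtobarn, in sq. cm

luminosity = 1e33                      # cm^-2 s^-1, the LHeC figure quoted above
cross_section = 1e-36                  # sq. cm, i.e. 1 picobarn -- an assumed process, not a real one
seconds_per_year = 1e7                 # a typical accelerator operating year

events_per_second = luminosity * cross_section
integrated_luminosity = luminosity * seconds_per_year        # in cm^-2

print(f"Events per second: {events_per_second:.0e}")                                     # 1e-03
print(f"Events per year:   {events_per_second * seconds_per_year:.0e}")                  # 1e+04
print(f"Integrated luminosity per year: {integrated_luminosity * FB_IN_CM2:.0f} fb^-1")  # 10 fb^-1
```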

It’s also 100x lower than that of the HL-LHC, which is the configuration of the LHC with which the ERL will be operating to make up the LHeC. And the LHeC’s lifetime will be the planned lifetime of the LHC – till the 2030s, about a decade. In the same period, if all goes well, a Chinese behemoth will have taken shape: the Circular Electron-Positron Collider (CEPC), with a circumference 2x that of the LHC. In its proton-proton collision configuration – paralleling the LHC’s – China claims it will reach energies of 70,000 GeV (as against the LHC’s current 14,000 GeV) and a luminosity comparable to the HL-LHC’s. And when its electron-positron collision configuration, which the LHeC will be able to mimic, is at its best, physicists reckon the CEPC will be able to produce 100,000 Higgs bosons a year.

Timeline for operation of the Future Circular Collider being considered. Source: CERN

As it happens, some groups at CERN are already drawing up plans, due to be presented in 2018, for a machine dwarfing even the CEPC. Meet the Future Circular Collider (FCC), by one account the “ultimate precision-physics machine” (and funnily named by another). To be fair, the FCC has been under consideration since about 2013 and independent of the CEPC. However, in sheer size, the FCC could swallow the CEPC – with an 80-100 km-long ring. It will also be able to accelerate protons to 50,000 GeV (by 2040), attain luminosities of 10³⁵ cm⁻² s⁻¹, continue to work with the ERL, function as an electron-positron collider, and look for particles weighing up to 25,000 GeV (currently the heaviest known fundamental particle is the top quark, weighing 169-173 GeV).

An illustration showing a possible location and size, relative to the LHC (in white) of the FCC. The main tunnel is shown as a yellow dotted line. Source: CERN

And should it be approved and come online in the second half of the 2030s, there’s a good chance the world will be a different place, too: not just the CEPC – there will be (or will have been?) either the International Linear Collider (ILC) or the Compact Linear Collider (CLIC) as well. ‘Either’ because they’re both linear accelerators with similar physical dimensions, both planning to collide electrons with positrons, their antiparticles, to study QCD, the Higgs field and the prospects of higher dimensions – so only one of them might get built. And they will require a decade to be built, coming online in the late 2020s. The biggest difference between them is that the ILC will be able to reach collision energies of 1,000 GeV while the CLIC (whose idea was conceived at CERN) will reach 3,000 GeV.

FCC-he = proton-electron collision mode; FCC-hh = proton-proton collision mode; SppC = CEPC’s proton-proton collision mode.

Some thoughts on the nature of cyber-weapons

With inputs from Anuj Srivas.

There’s a hole in the bucket.

When someone asks for my phone number, I’m on alert, even if it’s so my local supermarket can tell me about new products on their shelves. Or for my email ID, so the taxi company I regularly use can email me ride receipts; or permission to peek into my phone, if only to see what music I have installed – all vaults of information I haven’t been too protective about but which have of late acquired a notorious potential to reveal things about me I never thought I could reveal so passively.

It’s not everywhere but those aware of the risks of possessing an account with Google or Facebook have been making polar choices: either wilfully surrender information or wilfully withhold information – the neutral middle-ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down so we make ourselves unavailable to them. And we succeed to different extents, but none completely – there’s a bit of our digital DNA in government files, much like with the kompromat maintained by East Germany and the Soviet Union during the Cold War.

In fact, is there an equivalence between the conglomerates surrounding nuclear weapons and cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:

When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

These words are still relevant but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”

An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 did seem to be an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:

The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.

This is akin to the multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, much more data than might be necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:

Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.

Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.

So, just as nuclear weapons seem to be plausible but improbable threats fashioned to fuel the construction of ever more nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling on companies to provide weakened encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.

However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:

  1. Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
  2. Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
  3. Computer scientists have been slow to recognise the moral character and political implications of their creations

That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss

In 1995, Joseph Rotblat won the Nobel Prize for peace for helping found the Pugwash Conference against nuclear weapons in 1955. In his lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking those words of Zuckerman quoted above as well as going on to add:

If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…

The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?

As John Arquilla writes on the CACM blog,

Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)

… the absence of “societal destruction” in cyber-warfare imposes less of a real burden upon its perpetrators and endorsers.

And records of such intangible devastations are preserved only in writing and in our memories, and can be quickly manipulated or supplanted by newer information and newer problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant, while the National Security Agency labours on, flitting between the shadows of SIPA, the Patriot Act and others like them. The violations creep in: easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted to be something they aren’t.

That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons

What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but which lawmakers turned to in 2013 to also prevent entities with questionable human-rights records from accessing “intrusion software”. In effect, the WA tells its 41 signatories what kinds of technology can be transferred among themselves, and what must not be transferred at all to non-signatories, based on the tech’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. Its problems are of three kinds:

1. In its language – Unreasonable absolutes

Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (not just of code) that separates a correct output from a flawed one, which hackers have become adept at exploiting. Think of it like this:

Say there’s a computer, called C, which Alice uses for a particular purpose (to withdraw cash, if C were an ATM). C accepts an input called I and spits out an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PINs used at ATMs). However, they end up designing the machine to operate safely for all known four-digit numbers while neglecting what would happen should I be a five-digit number. With some technical insight, a hacker could exploit this oversight and make C spit out all the cash it contains using a five-digit I.

In this case, a correct output from C is defined only for a fixed range of inputs; any output corresponding to an I outside this range is considered flawed. As far as the program is concerned, though, C has still computed a perfectly well-defined O for the five-digit I – the flaw exists only relative to what its designers intended. Bratus’s point is just this: we have no way to perfectly encode the intentions behind the programs we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe and unsafe?
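A minimal sketch of that idea in Python – the ATM, the PIN and the amounts are all hypothetical, purely to make the intent-versus-behaviour gap concrete:

```python
# A toy dispenser whose designers only ever anticipated 4-digit PINs.
# For any 4-digit PIN it behaves as intended; for a 5-digit input it still
# produces a perfectly well-defined output -- just not one anyone intended.

VALID_PINS = {"4321": 100}   # hypothetical account: PIN -> daily cash limit

def dispense(pin: str, requested: int) -> int:
    """Return the amount of cash to dispense."""
    limit = VALID_PINS.get(pin[:4], 0)   # silently truncates longer inputs
    if limit == 0:
        return 0
    # The designers assumed the PIN always has 4 digits, so the limit check
    # is only ever exercised on that path.
    if len(pin) == 4 and requested > limit:
        return 0
    return requested                      # a 5-digit PIN skips the check

print(dispense("4321", 50))      # intended use: dispenses 50
print(dispense("43217", 99999))  # unanticipated input: dispenses 99999
```

The function does exactly what its code says in both calls; only the second call violates what its designers meant it to do – and a definition written purely in terms of the code cannot tell the two apart.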

2. In its purpose – Sneaky enemies

Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy buff Katie Moussouris said the WA was underprepared to confront superbugs that target computers connected to the Internet irrespective of their geographical location, but whose fixes could have to be exported out of a WA signatory. Her case in point was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to exploit a flaw in the OpenSSL library – which servers use to encrypt personal information transmitted over the web – and trick it into divulging chunks of memory that could contain encryption keys. To protect against it, users had to upgrade OpenSSL with a software patch containing the fix. However, patches targeted against bugs of the future could fall under what the WA has loosely defined as “intrusion software”, for which officials administering the agreement would end up having to provide exemptions dozens of times a day. As Darren Pauli wrote in The Register,

[Moussouri] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
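To make the class of bug concrete, here is a rough, hypothetical sketch – in Python, not OpenSSL’s actual C code – of the pattern Heartbleed exploited: a length claimed by the sender is trusted over the length of the data actually sent.

```python
# Hypothetical 'heartbeat' handler. The client sends a payload and separately
# claims its length; the server is supposed to echo the payload back.

SERVER_MEMORY = bytearray(b"...client payload...SECRET-KEY-MATERIAL-0xDEADBEEF")

def heartbeat_vulnerable(offset: int, claimed_len: int) -> bytes:
    # Trusts claimed_len without checking it against what was actually
    # received, so the response can spill into adjacent 'memory'.
    return bytes(SERVER_MEMORY[offset:offset + claimed_len])

def heartbeat_patched(offset: int, claimed_len: int, actual_len: int) -> bytes:
    # The fix: never echo back more bytes than the client actually sent.
    return bytes(SERVER_MEMORY[offset:offset + min(claimed_len, actual_len)])

payload = b"...client payload..."
print(heartbeat_vulnerable(0, 50))             # leaks the 'secret' material
print(heartbeat_patched(0, 50, len(payload)))  # echoes only the payload
```

A fix like the patched version above is exactly the kind of globally coordinated vulnerability research Moussouris worried the WA’s language could entangle.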

3. In presenting an illusion of sufficiency

Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some flak after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would’ve been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or to haul the authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.

Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – controls that have been widely panned by security experts for being too broadly defined. Over December, however, those experts have begun to hope that National Security Adviser Susan Rice can persuade the State Department to push for more specific language in the WA at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already on board.

That computer scientists have been slow to recognise the moral character and political implications of their creations

Phillip Rogaway, a computer scientist at the University of California, Davis, published an essay on December 12 titled ‘The Moral Character of Cryptographic Work’. Rogaway’s thesis centres on the growing social responsibility of the cryptographer – the same responsibility Zuckerman invoked – as he writes,

… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.

And maybe cryptographers have missed the wood for the trees until now but times are a’changing.

On December 22, Apple publicly declared it was opposing a new surveillance bill that the British government is attempting to fast-track. The bill, should it become law, will require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities can access them if they need to but not anyone else – a fallacious presumption that Apple has called out as being impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.

Similarly, in November this year, Microsoft resisted an American warrant to hand over some of its users’ data acquired in Europe by entrusting its servers to a German telecom company. As a result, any requests for data about German users who use Microsoft to make calls or send emails, and originating from outside Germany, will now have to go through German lawmakers. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.

Apple’s and Microsoft’s are welcome and important changes of tack. Both companies featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies had little alternative at the time was glossed over by the scale of the NSA’s violations. In 2015, however, a clear moral as well as economic high-ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and occupying that high-ground has become a practical option. After Snowden, not taking that option when there’s a chance to has come to mean passive complicity.

But apropos Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer-scientists afford to occupy the same ground as larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but the business model is what provides for the individuals.

Take the case of Stuxnet, the virus believed to have been unleashed by agents of the US and Israel in 2009-2010 to destroy Iranian centrifuges suspected of being used to enrich uranium to explosive-grade levels. How many computer scientists spoke up against it? To date, no institutional condemnation has emerged*. It could be that, with neither the US nor Israel publicly acknowledging its role in developing Stuxnet, it was hard to judge who had crossed a line – but that a deceptive bundle of code was used as a weapon in an unjust war was obvious.

Then again, can all cryptographers be expected to comply? One of the threats the 2013 amendments to the WA attempt to tackle is dual-use technology (of which Stuxnet is an example, because the virus took advantage of its ability to mimic harmless code). Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have only tenuous control over these boundaries owes itself to imperfect yet blameless programming languages, as Bratus also asserts, and not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. In cyber-warfare, by contrast, the agents are trapped in ambiguities about what a cyber-weapon even is, with what intent and for what purpose it was crafted, allowing its repercussions to seem anything from rapid to evanescent.

Or, as it happens, the agents are liberated.

*That I could find. I’m happy to be proved wrong.

Featured image credit: ikrichter/Flickr, CC BY 2.0.

Hopes for a new particle at the LHC offset by call for more data

At a seminar at CERN on Tuesday, scientists working with the Large Hadron Collider provided the latest results from the particle-smasher at the end of its operations for 2015. The results make up the most detailed measurements of the properties of some fundamental particles made to date at the highest energy at which humankind has been able to study them.

The data discussed during the seminar originated from observations at two experiments: ATLAS and CMS. And while the numbers were consistent between them, neither experimental collaboration could confirm any of the hopeful rumours doing the rounds – that a new particle might have been found. However, they were able to keep the excitement going by not denying some of the rumours either. All they said was they needed to gather more data.

One rumour that was neither confirmed nor denied was the existence of a particle at an energy of about 750 GeV (roughly 800 times the mass of a proton, which weighs just under 1 GeV). That’s a lot of mass for a single particle – the heaviest known elementary particle is the top quark, weighing about 175 GeV. As a result, it’d be extremely short-lived (if it existed at all), rapidly decaying into a combination of lighter particles, which are then logged by the detectors.

When physicists find such groups of particles, they use statistical methods and simulations to reconstruct the properties of the particle that could’ve produced them in the first place. The reconstruction shows up as a bump in the data where otherwise there’d have been a smooth curve.
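A hedged toy of what that reconstruction looks like in the diphoton channel (every number below is invented): compute the invariant mass of the two photons in each event and histogram it; a real parent particle piles events up near its mass, on top of a smoothly falling background.

```python
import math, random

def diphoton_mass(e1, e2, angle):
    """Invariant mass (GeV) of two massless photons with energies e1, e2
    separated by `angle` radians: m^2 = 2*e1*e2*(1 - cos(angle))."""
    return math.sqrt(2 * e1 * e2 * (1 - math.cos(angle)))

random.seed(1)
# Toy background: a smoothly falling spectrum of diphoton masses.
masses = [random.expovariate(1 / 300.0) for _ in range(10000)]
# Toy signal: a hypothetical 750 GeV parent decaying to two ~375 GeV photons
# emitted nearly back-to-back.
masses += [diphoton_mass(375, 375, math.pi - abs(random.gauss(0, 0.01)))
           for _ in range(80)]

# Histogram in 50 GeV bins: the bin just below 750 GeV carries a small excess.
counts = {}
for m in masses:
    edge = int(m // 50) * 50
    counts[edge] = counts.get(edge, 0) + 1
for edge in range(600, 901, 50):
    print(f"{edge:3d}-{edge + 50:3d} GeV: {counts.get(edge, 0)}")
```

In the real analyses the background shape is modelled far more carefully and the excess is then quantified in standard deviations, which is where the significance figures below come from.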

This is the ATLAS plot displaying said bump (look in the area over 750 GeV on the x-axis):

ATLAS result showing a small bump in the diphoton channel at 750 GeV in the run-2 data. Credit: CERN

It was found in the diphoton channel – i.e. the heavier particle decayed into two energetic photons which then impinged on the ATLAS detector. So why aren’t physicists celebrating if they can see the bump?

Because it’s not a significant enough bump. Its local significance is 3.6σ – that is, the excess stands 3.6 standard deviations above what background fluctuations alone would typically produce – which is pretty significant by itself. But the more important number is the global significance, which accounts for the look-elsewhere effect. As experimental physicist Tommaso Dorigo explains neatly here,

… you looked in many places [in the data] for a possible [bump], and found a significant effect somewhere; the likelihood of finding something in the entire region you probed is greater than it would be if you had stated beforehand where the signal would be, because of the “probability boost” of looking in many places.

The global significance is calculated after discounting this boost. In the case of the 750-GeV particle, it stood at a dismal 1.9σ. A minimum of 3σ is required to claim evidence and 5σ for a discovery.
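A back-of-the-envelope way to see why the boost matters, under the simplifying (and made-up) assumption that the search was equivalent to looking in N independent mass bins:

```python
from math import erf, sqrt

def one_sided_p(z):
    """Probability of an upward fluctuation of at least z standard deviations."""
    return 0.5 * (1 - erf(z / sqrt(2)))

local_p = one_sided_p(3.6)         # chance of a 3.6-sigma blip at one fixed mass
N = 80                             # hypothetical number of independent places searched
global_p = 1 - (1 - local_p) ** N  # chance of such a blip appearing *somewhere*

print(f"local p-value:  {local_p:.1e}")   # roughly 1.6e-4
print(f"global p-value: {global_p:.1e}")  # a couple of orders of magnitude larger
```

With these toy numbers the global probability works out to about 1%, i.e. a global significance in the low-2σ range – the same ballpark as the figure ATLAS quoted.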

A computer’s reconstruction of the diphoton event observed by the ATLAS detector. Credit: ATLAS/CERN

Marumi Kado, the physicist who presented the ATLAS results, added that when the bump was studied across a 45 GeV swath (on the x-axis), its significance went up to 3.9σ local and 2.3σ global. Kado is affiliated with the Laboratoire de l’Accelerateur Lineaire, Orsay.

A similar result was reported by James Olsen, of Princeton University, speaking for the CMS team with a telltale bump at 745 GeV. However, the significance was only 2.6σ local and 1.2σ global. Olsen also said the CMS detector had only one-fourth the data that ATLAS had in the same channel.

Where all this leaves us is that the Standard Model, the prevailing theory + equations used to describe how fundamental particles behave, isn’t threatened yet. Physicists would much like it to be: though it’s been able to correctly predict the existence of many particles and fundamental forces, it’s been equally unable to explain some findings (like dark matter). And finding a particle weighing ~750 GeV, which the model hasn’t predicted so far, could show physicists what might be broken about the model and pave the way for a ‘new physics’.

However, on the downside, some other new-physics hypotheses didn’t find validation. One of the more prominent among them is supersymmetry, SUSY for short, which requires the existence of some heavier fundamental particles. Kado and Olsen both reported that no signs of such particles had been observed, nor of heavier versions of the Higgs boson, whose discovery was announced in mid-2012 at the LHC. Thankfully, they also added that the teams weren’t done with their searches and analyses yet.

So, more data FTW – as well as looking forward to the Rencontres de Moriond (conference) in March 2016.

Physicists could have to wait 66,000 yottayears to see an electron decay

The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called a kalpa: 4.32 billion Earth-years. In 51 years, he will actually have lived for almost 80 trillion Earth-years. In a hundred, he will have lived 157 trillion Earth-years.

157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.
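The arithmetic behind those figures, as a quick sketch (counting one kalpa per Brahma-day and 360 such days per Brahma-year, and ignoring the nights, which is how the numbers above work out):

```python
kalpa = 4.32e9                 # Earth-years in one Brahma-day
brahma_year = 360 * kalpa      # ~1.56 trillion Earth-years

print(f"51 Brahma-years  = {51 * brahma_year:.3g} Earth-years")    # ~7.9e13, i.e. ~80 trillion
print(f"100 Brahma-years = {100 * brahma_year:.3g} Earth-years")   # ~1.6e14, the '157 trillion' above
```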

On December 3, scientists announced that we’ve all known something that will live for even longer: the electron.

Yup, the same tiny lepton that zips around inside atoms with gay abandon, that’s swimming through the power lines in your home, has been found to be stable for at least 66,000 yottayears – yotta- being the largest prefix available in the metric system.

In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.

But why were scientists making this measurement in the first place?

Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer some of the more important questions. For example, why is gravity so much weaker than the other fundamental forces? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?

Silence.

The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists were able to find that the electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to catch such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server, dated November 11). It was made at the Borexino neutrino experiment, buried under the Gran Sasso mountain in Italy. The value itself hinges on a simple idea: the conservation of charge.

If an electron were to decay, it would have to break down into a photon and a neutrino – there are almost no other options, because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino carries an electric charge, so the decay would violate a fundamental law of nature – and definitely overturn the Standard Model.

The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks out an electron in a tiny flash of energy, which some 2,210 photomultiplier tubes surrounding the tank amplify for examination. The tell-tale sign of an electron decaying, on the other hand, would be a photon of about 256 keV – roughly half the electron’s rest-mass energy and, by the mass-energy equivalence, about a 4,000th the mass of a proton.

However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).

The answer: once every 66,000 yottayears at the least (that’s about 420 trillion of Brahma’s full lifetimes).
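For a sense of how a null result becomes a lifetime limit, here is a hedged sketch of the counting logic (the electron count and the signal limit below are invented placeholders, not Borexino’s actual numbers):

```python
# If you watch N_electrons for T years and can exclude more than N_max
# decay-like events above background, the mean lifetime must satisfy
#     tau > N_electrons * T / N_max

N_electrons = 1e32        # hypothetical number of electrons being watched
T_years = 408 / 365.25    # the 408 days of data mentioned above, in years
N_max = 2000              # hypothetical upper limit on unexplained 256-keV photons

tau_limit = N_electrons * T_years / N_max
print(f"tau > {tau_limit:.1e} years")   # of order 1e28 years with these placeholders
```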

Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the electron’s lifetime could probably be increased by a thousand times. But there’s historical precedent that to some extent encourages stronger probes of the humble electron’s properties.

In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violate a defining rule in particle physics called Pauli’s exclusion principle. Every electron can be described by four distinct attributes called its quantum numbers, and the principle holds that no two electrons in a system can have the same four numbers at any given time.

The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured the energy released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared with the Borexino measurement of the electron’s stability. In March 2013, the team behind DEAR submitted a proposal to build an instrument that would improve the measurement 100-fold, and in May 2015 it reported that such an instrument was under construction.

Here’s hoping they don’t find what they were looking for?

New LHC data has more of the same but could something be in the offing?

Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

The LHC reopened earlier this year after receiving multiple performance-boosting upgrades over the preceding 18 months. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice those of its first run. By Albert Einstein’s mass-energy equivalence (E = mc²), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, by comparison, was 3,500 GeV in the first run and is now 6,500 GeV.

At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

This is CMS’s plot showing the same result:

Source: CMS/CERN

When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of jets produced by quarks, antiquarks or gluons scattered out of the proton-proton collisions (these are in fact the smaller particles that make up protons).

The sequence of black dots in the ATLAS plot shows the number of dijet events (i.e. pairs of jets) observed at different masses. The red line shows the predicted number of events. They match, which is good… to some extent.
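A toy version of that comparison (all numbers invented): observed counts per dijet-mass bin set against a predicted spectrum, with the deviation expressed in standard deviations assuming simple Poisson statistics – roughly what the lower panel of such plots conveys.

```python
# Invented counts per dijet-mass bin, purely for illustration.
predicted = {2000: 5200.0, 3000: 610.0, 4000: 84.0, 5000: 11.0, 6000: 1.6}
observed  = {2000: 5170,   3000: 640,   4000: 79,   5000: 15,   6000: 2}

for mass in sorted(predicted):
    exp = predicted[mass]
    dev = (observed[mass] - exp) / exp ** 0.5   # sigma, for Poisson-ish counts
    print(f"{mass} GeV bin: observed {observed[mass]:5d}, "
          f"expected {exp:7.1f}, deviation {dev:+.1f} sigma")
```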

One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

Maintenance work underway at the CMS detector, the largest of the five that straddle the LHC. Credit: CERN

The annoying problem at its heart

The LHC chips in with two kinds of opportunities: extremely sensitive particle-detectors that can make precise measurements of fleeting readings, and extremely high collision energies, so physicists can explore how some particles behave in thousands of scenarios in search of a surprising result.

So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It’s small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

But this is the most important result (an extension of the first): the predicted event-count and the observed event-count are a match all the way up to 6,000 GeV. In other words, physicists are seeing no cause for joy, and every cause for revalidating a section of the Standard Model, across a wide swath of scenarios.

The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

… from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD’s prediction is this: the physics we are currently probing is essentially scale-invariant.

Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.
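A toy illustration of what scale-invariance means here (the exponent is made up): a spectrum that falls as a pure power law has no preferred mass scale, so rescaling every mass by the same factor changes only the overall normalisation, never the shape.

```python
k = 5.0                                  # made-up spectral index
spectrum = lambda m: m ** (-k)           # dN/dm ~ m^-k, a pure power law

for m in (1000.0, 2000.0, 4000.0):       # GeV, arbitrary reference points
    ratio = spectrum(2 * m) / spectrum(m)
    print(f"m = {m:6.0f} GeV  ->  N(2m)/N(m) = {ratio:.5f}")   # always 2**-k

# A new particle would introduce a preferred mass: the ratio would then
# depend on m, which is the kind of departure ATLAS and CMS looked for.
```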

Something in the valley

The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that’ll be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:

 

Source: ATLAS/CERN

That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it’s more of a smear than a spike. For heavier particles, that smear’s going to be wider, with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing up in the lower-energy LHC data. But no such tails have been found, ruling out heavier stuff.
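A hedged sketch of why heavy resonances leak into lower-energy data – a simple non-relativistic Breit-Wigner shape with made-up widths, ignoring detector resolution and other effects:

```python
import math

def fraction_below(delta, width):
    """Fraction of a Breit-Wigner (Cauchy) resonance reconstructed more than
    `delta` GeV below its nominal mass."""
    return 0.5 - math.atan(2 * delta / width) / math.pi

narrow = 4.0     # GeV: a narrow, light resonance (made-up width)
broad = 200.0    # GeV: a made-up width for a ~2,000 GeV particle

for width in (narrow, broad):
    frac = fraction_below(500, width)
    print(f"width {width:5.0f} GeV: {frac:.1%} of events land 500+ GeV below the peak")
```

Even a few per cent of a heavy resonance’s events spilling that far down would have left a trace in the earlier data, which is the logic behind ruling out heavier stuff.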

And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

Source: arXiv:1512.03371v1

The ones in the middle and on the right are particularly relevant. They each show the probability of the occurrence of an event (observed as a bump in the data, not shown here) in which something of heavier mass decays into one of two final states: a W and a Z boson (WZ), or two Z bosons (ZZ). Bosons are a type of fundamental particle and carry forces.

The middle chart implies that the mysterious event is at least 1,000 times less likely to occur than normal, and the other implies it is at least 10,000 times less likely to occur than normal. And both readings are at more than 3σ significance, so people are excited.

The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

There’s a seminar due to happen today at the LHC Physics Centre at CERN, where data from the upgraded run will be presented. If something really did happen in those ‘valleys’ – which were teased out of data taken at a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons) – then those events would have occurred in larger numbers during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.