AI can play the sleuth to expose scientific fraud

TWO prominent cases of data fraud, both in the academic domain, have been in the news in the US. Ten research publications by Johns Hopkins professor and 2019 medicine Nobel Prize winner Gregg Semenza were retracted due to falsified data and images. Retraction of research papers isn't a new phenomenon, even among Nobel laureates. The 2018 chemistry Nobel laureate, Frances Arnold, retracted a 2019 paper when she was unable to replicate its results. Linda Buck, the 2004 Nobel laureate in medicine, retracted papers published in 2005 and 2006. However, ten retractions of papers published within a 15-year period is unusual.

Prof Francesca Gino of Harvard Business School was placed on administrative leave following allegations that she had systematically manipulated data and falsified results in four papers she co-authored. The group behind the Data Colada blog sent Harvard University a dossier detailing the anomalies in 2021. An examination of the version history (the record of changes tracked in a file) of a Microsoft Excel spreadsheet suggested that rows of data had been manipulated. According to the experts, the data after the alleged manipulation showed the effect the researchers had hoped to find, while the data before it did not. Gino, however, filed a multimillion-dollar lawsuit against the university and her accusers.

Accusations of fraud, involving everything from economic to medical data, understandably garner a great deal of public attention, but scientific fraud is surprisingly endemic. Selective use and publication of data are also serious forms of misconduct. Robert Andrews Millikan's oil drop experiment (1909) is well known for yielding the charge of the electron. After his death, it was discovered that Millikan had examined 140 data points, each recorded in his notebook, but chose only 58 'good' ones that supported his theory.

In the 1980s, the academic community was rocked by the John Darsee case. Darsee was a young clinical investigator with a bright future in cardiological research and a long list of publications in prestigious journals. At the age of 33, he got the opportunity to join the faculty at Harvard Medical School. However, his career soon started to unravel. By May 1981, colleagues who suspected regular and systematic falsification were raising allegations. According to investigators, Darsee had 'expanded' data to report more significant results and had reported data from experiments that were never performed. Over 80 of his papers were removed from the literature. He eventually apologised for disseminating 'inaccuracies and falsehoods'.

How widespread are data fraud and allegations of it? Around the time Darsee was exposed, William Broad and Nicholas Wade, two former news reporters for Science, painted an intriguing and unsettling picture of scientific fraud by compiling case studies of research misconduct in their 1982 book Betrayers of the Truth: Fraud and Deceit in the Halls of Science. They asserted that the practice is, and has always been, pervasive.

Galileo's results on falling bodies, which lacked experimental evidence, and Ptolemy's observations of the stars, which were made in the great library of Alexandria instead of beneath the night sky, are two such examples. And there is the case of Gregor Mendel's work on genetics, which is statistically too perfect. Indeed, the eminent British geneticist and statistician Sir Ronald Fisher, after reconstructing Mendel's experiments in the 1930s, found that the reported ratio of dominant to recessive phenotypes was implausibly close to the expected ratio of 3:1. Fisher concluded that "the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel's expectations".
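To see how such a 'too good to be true' check works, here is a minimal sketch in Python, in the spirit of Fisher's analysis; the counts are made-up illustrations, not Mendel's actual figures. A combined chi-square value far smaller than chance would plausibly produce (a left-tail probability near zero) indicates data that agree with the 3:1 ratio suspiciously closely.

from scipy.stats import chi2

# (dominant, recessive) counts from several hypothetical experiments
experiments = [(74, 26), (151, 49), (222, 78)]

total_chi_sq, total_df = 0.0, 0
for dom, rec in experiments:
    n = dom + rec
    exp_dom, exp_rec = 0.75 * n, 0.25 * n          # expected counts under a 3:1 ratio
    total_chi_sq += (dom - exp_dom) ** 2 / exp_dom + (rec - exp_rec) ** 2 / exp_rec
    total_df += 1                                   # one degree of freedom per experiment

# Probability of a fit at least this close if the data were genuine random samples;
# a very small value flags results that match expectations implausibly well.
p_too_good = chi2.cdf(total_chi_sq, df=total_df)
print(f"chi-square = {total_chi_sq:.3f}, P(fit this close or closer) = {p_too_good:.3f}")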

These are but a few high-profile instances. Daniele Fanelli of the University of Edinburgh, UK, in a 2009 paper published in PLOS ONE, reported that an average of 2 per cent of scientists admitted to fabricating, falsifying or altering data at least once, a serious misconduct by any measure, and up to 34 per cent admitted to engaging in other dubious research practices. One may wonder how many more never admitted to it.

Can statistics really help identify data fraud? Not always; it is rarely that simple. The book Fraud and Misconduct in Biomedical Research, edited by Frank Wells and Michael Farthing, examines the roles of statistical analysis, peer review and routine enhanced audit in this context. However, the available techniques for detecting data fraud still cannot handle every scenario. Moreover, statistical methods frequently produce inconclusive results that, at best, only cast doubt on the data.

And then there is the extraordinary example of a 'data detective', Dr Elisabeth Bik, a microbiologist by training and a scientific misconduct hunter by passion. She took to uncovering the dark side of science by independently detecting thousands of studies containing potentially doctored scientific images, using only her eyes and memory. The story of this Stanford microbiologist shows how an astute scientist evolved into biology's 'image detective'. After scrutinising over 1,00,000 papers in her areas of expertise, she found apparent image falsification in 4,800 of them and other indications of fabrication in 1,700 more. So far, her reports have resulted in about 950 retractions, as well as corrections in numerous other publications. All this suggests that manipulation of data and images in scientific publications is quite common.

As we live in the age of artificial intelligence (AI), can AI play the 'data detective' for scientific research as well, potentially bringing about a revolution on the ethical front? Big data analytics can already detect plagiarism to a considerable extent. Similarly, by swiftly performing statistical tests and statistical pattern matching against a strong database, a powerful generative AI could help detect various types of image and data fraud. Consequently, it may become easier to spot fraud in scientific research, especially data fraud. The same holds for examining fraud in social, economic and medical datasets.
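As an illustration, here is a minimal sketch in Python of one screen such a system might run automatically: comparing the first-digit distribution of a numeric dataset against Benford's law, a check long applied to accounting and economic figures. The sample values and the significance threshold are assumptions for illustration; a real screen would need far larger samples, and a flag is only a prompt for further scrutiny, never proof of fraud.

import math
from collections import Counter
from scipy.stats import chisquare

def first_digit(x: float) -> int:
    # scientific notation puts the leading significant digit first
    return int(f"{abs(x):.6e}"[0])

def benford_screen(values, alpha=0.01):
    digits = [first_digit(v) for v in values if v != 0]
    observed = [Counter(digits).get(d, 0) for d in range(1, 10)]
    n = sum(observed)
    # Benford's law: P(first digit = d) = log10(1 + 1/d)
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    stat, p_value = chisquare(observed, expected)
    return {"chi_square": stat, "p_value": p_value, "flagged": p_value < alpha}

# illustrative call with made-up values
print(benford_screen([132.5, 18.2, 47.9, 210.0, 96.1, 11.3, 154.7, 29.8, 63.4, 88.0]))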
