3R in preclinical research
2. Harnessing the potential of big data and artificial intelligence for the 3Rs in preclinical research
Utilising existing and future data
Recognition of alternative methods is one thing, but what opportunities does big data offer for giving extra impetus to the 3Rs? Big data holds as yet untapped potential to accelerate and further improve biomedical research. How would this work in practice? Research findings, including the results of animal experiments, are published in scientific journals. The underlying study data, however – for example the methodology used, the dose at which a medicine caused side effects, or the conditions under which it was effective against a disease – is not recorded anywhere in a form that lets researchers use and compare the results of previous studies. To learn from animal experiments that have already been carried out, the available test data would have to be collected, for instance via a digital platform, and linked so that researchers could access the information quickly and easily – regardless of whether the results were positive or negative.

A concrete example: tablets consist of an active substance that acts against a disease, plus substances that bulk out the tablet and allow it to be compressed. These bulking agents must not trigger any toxic effects and therefore also have to be tested in animal experiments. Since they are widely used in tablet manufacturing, it would be useful to record all toxicological tests in a database so that other researchers, at universities or in research-based pharmaceutical companies, could reuse the results for their own applications without repeating the tests.

Big data is therefore only relevant if the data can be linked and made accessible. Making this data collectively usable also requires initiating a cultural change and breaking down existing silos within institutions and between research institutions and companies.
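The shared-database idea above can be sketched in miniature. This is a hypothetical illustration only: the record fields, substance names and values are invented for the example and do not come from any real registry. The point is the lookup step, where a second lab checks for prior results before planning new tests.

```python
from dataclasses import dataclass

# Hypothetical record of one toxicology test result for a tablet bulking agent.
@dataclass
class ToxTest:
    substance: str        # excipient name
    species: str          # test species
    dose_mg_per_kg: float # administered dose
    toxic_effect: bool    # whether a toxic effect was observed

# A shared registry keyed by substance, so earlier results can be reused.
registry: dict[str, list[ToxTest]] = {}

def record(test: ToxTest) -> None:
    """Add a test result to the shared registry."""
    registry.setdefault(test.substance, []).append(test)

def prior_results(substance: str) -> list[ToxTest]:
    """Return all recorded tests for a substance, avoiding repeat testing."""
    return registry.get(substance, [])

# One lab records its results (illustrative values).
record(ToxTest("lactose", "rat", 500.0, False))
record(ToxTest("lactose", "rat", 2000.0, False))

# Another lab queries the registry before planning new animal tests.
print(len(prior_results("lactose")))            # 2 prior tests found
print(len(prior_results("magnesium stearate"))) # 0 -> no prior data
```

A real platform would of course need provenance, standardised ontologies for substances and endpoints, and access control, but the core value is exactly this query: "has this already been tested?"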
The use of artificial intelligence requires large datasets
If large volumes of data are available, the next question is whether artificial intelligence (AI) should be used. Linking animal experiment data could do more than prevent experiments from being repeated: it could also enable entirely new techniques and applications. For instance, AI could be used to model animal experiments and predict how effective or how safe a medicine would be, without the need for animal studies. In conventional drug development, "drug discovery" – the phase in which a new active substance is identified and tested "in vitro" (e.g. in cell cultures) – makes up the largest proportion of the development process in terms of the number of tests conducted on potential candidate substances. A potential medicinal product is then tested in animal experiments before being used on humans in clinical trials. AI could play a crucial role in expanding this "in vitro" research and making some of the animal experiments in the preclinical phase obsolete, in other words further reducing them. AI also has the potential to speed up drug development from concept through to approval, and to detect failures in an earlier phase so that only highly promising candidate substances reach the clinical phase. However, using AI presupposes a large volume of data and access to that data, while also ensuring that intellectual property is protected. Above all, it requires researchers with the expertise to use AI in a targeted manner, evaluate the findings obtained and make decisions based on them.
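To make the prediction idea concrete, here is a deliberately tiny sketch of learning from previously recorded in vitro results: a nearest-neighbour rule that predicts toxicity for a new candidate from the most similar known assay. The feature names, training values and the choice of a 1-nearest-neighbour model are all hypothetical illustrations; real models in this area use far larger datasets and more sophisticated methods.

```python
import math

# Hypothetical training data from past in vitro assays:
# (cell_viability, membrane_damage) -> toxic effect observed?
# All values are illustrative, not real measurements.
train = [
    ((0.95, 0.05), False),
    ((0.90, 0.10), False),
    ((0.30, 0.80), True),
    ((0.20, 0.90), True),
]

def predict_toxic(features: tuple[float, float]) -> bool:
    """1-nearest-neighbour prediction: label of the closest known assay."""
    _, label = min(train, key=lambda item: math.dist(item[0], features))
    return label

# A new candidate substance with high viability and low membrane damage
# resembles the non-toxic assays, so the model predicts non-toxic.
print(predict_toxic((0.85, 0.15)))  # False
print(predict_toxic((0.25, 0.85)))  # True
```

The sketch shows the logic, not the scale: the value of AI here comes precisely from the large, linked datasets discussed above, without which no such model can generalise reliably.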