Q&A Report: Streamlining Health Technology Assessments by Automating Literature Reviews

Experts answer top questions from a recent webinar, “Streamlining Health Technology Assessments by Automating Literature Reviews.”

How is confidentiality preserved? How much data needs to be uploaded for the Artificial Intelligence (AI) function to work effectively?

Only the people you assign permissions to are on a project, and they are the only ones with access to its data. DistillerSR staff can view your project for troubleshooting only if they are asked and granted permission; in our experience, when they help troubleshoot an issue, they evaluate the settings rather than the content. Additionally, projects are not public, not even within your organization. The administrative account owner has automatic access to all projects, but otherwise only those assigned to a project can see it, so confidentiality can be maintained even within your organization if needed.

Is there any automation for the extraction/validation phase or quality appraisal? Does the PRISMA flow diagram generate automatically? Is there a simple way to export lists of excluded records by reason for exclusion?

It is important to distinguish between the different types of automation. There is AI automation, which builds algorithms for decision making during the screening process, such as evaluating the probability of inclusion and error checking. Then there is automation applied by the human user and carried systematically through the project: changes in criteria, updates to the screening form, isolating certain groups of papers (e.g., includes vs. excludes, or conflicts for conflict resolution), and running a variety of reports (e.g., excludes and reasons for exclusion). DistillerSR automates all of these, so you are not going through each individual citation to group, modify, and sort this information.

Creating forms for data extraction or quality appraisal introduces consistency in how that data is reported. This reduces the opportunity for conflicts and allows you to export the data to your data extraction workbook. You can build form templates that suit your project or organizational needs, or use popular forms that DistillerSR has created (e.g., Cochrane RoB). DistillerSR will not automatically assess your full texts and complete these forms; humans assess and populate them. The process is automated in the sense that conflicts are readily visible and easy to resolve, and modifications can be applied to all forms at once without opening each one individually. It is a big time saver.
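To make the first kind of automation concrete, below is a minimal, generic sketch of how a screening classifier can estimate a citation's probability of inclusion from human-labeled examples. It uses scikit-learn with hypothetical abstracts and labels; DistillerSR's actual algorithms are proprietary, and this illustrates only the general technique, not the product's implementation.

```python
# A generic sketch of title/abstract screening prioritization, NOT
# DistillerSR's proprietary algorithm. Assumes scikit-learn is installed;
# the abstracts and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Records already screened by human reviewers: 1 = include, 0 = exclude.
labeled_abstracts = [
    "RCT of drug X vs placebo in adults with condition Y",
    "Narrative commentary on health policy trends",
]
labels = [1, 0]

# TF-IDF features plus logistic regression: a common baseline for
# estimating inclusion probability from screening decisions.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(labeled_abstracts, labels)

# Rank unscreened records by predicted probability of inclusion so
# reviewers see the most likely includes first.
unscreened = ["Randomized trial of drug X dosing in condition Y"]
prob_include = model.predict_proba(unscreened)[:, 1]
print(prob_include)
```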

One of the main barriers to adoption is the absence of a sufficient evidence base on the effectiveness of AI techniques. This is a barrier to Health Technology Assessment (HTA) agencies providing guidelines. Does the panel have a view of how this evidence base, through empirical methodological research, might be generated?

There are some published, publicly available studies on the performance of DistillerSR. However, it is important to note that projects are quite heterogeneous, so it is difficult to come up with metrics that apply to all situations. We do have very well-established guidelines for how to conduct literature reviews, and using a platform like DistillerSR does not deviate from those guidelines; if anything, it elevates the robustness of the review and better documents how you have followed them. The automation applied to the processes of a literature review does not take away from procedure. It enhances it, so that elements of the review process that used to take days and weeks can now be done more efficiently. The accuracy still lies in the hands of the screeners and whoever operates DistillerSR.

Where we lack guidance is in AI automation, where AI is used as a decision-making tool, and there is uncertainty about how its use will be reported. The development of AI algorithms is typically proprietary, and the technology likely differs across platforms. In the future there may be standard data sets and performance thresholds that a platform must meet to be deemed acceptable, without necessarily disclosing the technology behind it. That being said, humans need to train classifiers within their projects, so proper training and understanding of these tools will be critical to optimal performance.
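As one illustration of how such an evidence base might be quantified, below is a short sketch of two metrics that benchmarking studies of screening automation commonly report: recall and Work Saved over Sampling (WSS). The choice of metrics and the confusion-matrix counts here are illustrative assumptions, not anything the panel specified.

```python
# A sketch of metrics often reported when benchmarking screening
# automation on reference data sets. The counts below are hypothetical.

def recall(tp: int, fn: int) -> float:
    """Proportion of truly relevant records the tool flagged for inclusion."""
    return tp / (tp + fn)

def wss(tp: int, fp: int, tn: int, fn: int) -> float:
    """Work Saved over Sampling (Cohen et al., 2006): screening effort
    saved relative to screening everything, at the achieved recall."""
    n = tp + fp + tn + fn
    return (tn + fn) / n - (1 - recall(tp, fn))

# Hypothetical confusion-matrix counts from one benchmark data set:
tp, fp, tn, fn = 95, 400, 4500, 5
print(f"recall = {recall(tp, fn):.2f}")        # 0.95
print(f"WSS@95 = {wss(tp, fp, tn, fn):.2f}")   # 0.85
```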

Can DistillerSR be used only for HTA systematic literature reviews (SLRs), or can it also be used in disciplines and research fields other than health?

You can use DistillerSR for any kind of research that involves reviewing literature, including grayer, Real World Evidence (RWE)-type reviews. It is flexible enough to accommodate a variety of methodological approaches.