Q&A Report: Achieving Consensus When Everyone is an Expert, but No One Agrees
How well accepted are the published results of the Delphi panel? Have you seen many examples where it really changed how physicians thought about a particular topic?
At PHAR, we have been able to publish the results of our Delphi panels at both scientific conferences and in journals. If the results address an important clinical issue, clinically focused journals have accepted them. If the focus is less obviously a key clinical problem (e.g., estimating utilization or cost), HEOR-focused journals can be a better venue. We have also seen results affect clinical practice. In 2019, we conducted a Delphi panel to develop an order set for managing pain from sickle cell crises in New York City ERs. That order set was published, presented at a local ER physician conference, and is currently being implemented in the region.
Can you speak to your experience conducting a probabilistic Delphi in which one seeks to identify distributional estimates (e.g., a lower bound, first quartile, median, third quartile, and upper bound for each question)? How well would an expert panel composed of physicians and patients be able to provide such estimates?
We have used the Delphi panel method to develop estimates of cost and utilization. We use the same steps, including summarizing existing evidence and developing a rating form, but tailor our questions to ask physicians to estimate utilization.
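To illustrate the distributional question above, one simple way to pool per-panelist quantile estimates is to take the median of each quantile across panelists, which is robust to a single outlying expert. The sketch below is a hypothetical illustration of that pooling step, not PHAR's actual analysis; the question topic (e.g., annual ER visits per patient) and the pooling rule are assumptions.

```python
from statistics import median

# Each panelist supplies five quantile estimates for one question:
# (lower bound, Q1, median, Q3, upper bound) -- e.g., annual ER visits
# per patient. Pooling by the median of each quantile across panelists
# is one simple, outlier-robust choice (an illustrative assumption).
QUANTILE_LABELS = ["lower", "q1", "median", "q3", "upper"]

def pool_estimates(panelist_estimates):
    """Combine per-panelist quantile estimates into one pooled set."""
    pooled = {}
    for i, label in enumerate(QUANTILE_LABELS):
        pooled[label] = median(est[i] for est in panelist_estimates)
    return pooled

# Three panelists' (lower, Q1, median, Q3, upper) estimates:
ratings = [(0, 1, 2, 4, 8), (1, 2, 3, 5, 10), (0, 1, 2, 3, 6)]
print(pool_estimates(ratings))
# {'lower': 0, 'q1': 1, 'median': 2, 'q3': 4, 'upper': 8}
```

Between rounds, the pooled quantiles (and the spread of panelists' answers around them) can be fed back to the panel in the same way first-round ratings are.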
What are some of the challenges that you foresee with a manufacturer conducting an expert panel for clinical practice guidelines and how to overcome them?
We have worked with manufacturers on Delphi panels that develop guidelines. If the panel is product-specific (versus above-brand, for example), we recommend blinding both the sponsor and the panelists to each other’s identities to reduce potential bias. With this approach, the sponsor has no interaction with panelists and provides no input on the rating form or the interpretation of results. Only at publication are the panelists and sponsor unblinded. Other ways to address concerns of bias include funding the project as a grant rather than a contract or involving a specialty society as a trusted intermediary.
For blinded ones, how to handle the publication part if the results are set to be in the public domain?
We recommend the sponsor and panelists remain blinded until we are required to disclose financial conflicts upon manuscript or abstract submission. All results and interpretation of results are therefore finalized in a blinded state.
Would the authors be disclosed by publication, therefore defeating the purpose of blinding?
Not at all. The point of blinding is so that the process is unaffected by who is paying for it. The identities of the panelists, their ratings, and the development of the final output are all written and finalized prior to unblinding. The unblinding only happens when the manuscript is submitted to the journal.
Do you have/have you had any challenges with using the chair to recruit through recommendations from an ethics/compliance perspective?
We have not had any challenges using a chair to recruit other panelists. We recommend recruiting a diverse panel. The chair can be helpful with identifying panelists with specific characteristics (e.g., do you know a female oncologist who practices in a community setting in the West?).
What are additional ways you remove biases in the Delphi? Are participants (including the patient you mentioned who was part of the panel) paid for their time?
We recommend blinding the sponsor and panelists to each other’s identities to reduce bias. Yes, panelists are still paid for their time by the sponsor. However, they are paid through us (or another third party) and therefore can be blinded to the sponsor throughout the project. We disclose identities only upon publication, so results and interpretation of results are finalized in a blinded state.
How do you deal with consensus feedback that is not in line with labeled indications?
We ask panelists to consider the available evidence when completing rating forms. Clinicians are not restricted to labeled indications and therefore may recommend things outside of those parameters. We have not seen problems with this. We could imagine a conservative sponsor not wanting to address any off-label indications, and this could be handled during the rating form development process.
How do you use these panels within your medical affairs teams and interactions with healthcare professionals (HCPs)?
We often work with medical affairs on these projects. These teams often help us recruit HCPs, including a panel chair, with whom they may have existing relationships.
What would be the maximum number of panelists you'd recommend for a Delphi Panel?
The RAND/UCLA modified Delphi panel method recommends 9-12 participants. For an unmodified Delphi, more may be better, but for a panel discussion, more can be unwieldy. We have run panels with dozens of members but would not recommend it.
Could you please give a bit of insight when no consensus is achieved on one or two key criteria?
We usually describe that disagreement remains after the Delphi panel meeting and may provide our assumptions on why (e.g., not enough data exists yet to support a decision). Journal reviewers are understanding of this and like to see additional details on areas where consensus was not achieved.
Any experience on how evidence generated from a Delphi panel led to a successful regulatory or Health Technology Assessment (HTA) submission?
Yes, we have used Delphi panels to generate inputs for a model (e.g., cost, QALY, and utilization estimates) that can then be used in HTA submissions. One such submission is currently under review at a regulatory agency.
Have you used “Decision Conferencing” for the in-person meetings? And have you used Delphi software like “e-delphi”, “Welphi”, or others for the online panels?
We have not. In the RAND/UCLA modified Delphi panel method, most of the meeting is spent discussing the rating form and panelists’ first-round ratings. We prefer panelists do not complete their second-round ratings until the end of the meeting and do not ask for votes or tally responses while the meeting is ongoing. Our rating form results are analyzed using a programmed dashboard in Microsoft Excel.
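The rating-form analysis mentioned above can be sketched in a few lines. In the RAND/UCLA method, panelists rate each item on a 1-9 appropriateness scale; the panel median determines the category, and a disagreement rule flags items where substantial minorities rate at both extremes. The thresholds and the specific disagreement rule below (at least a third of panelists in the 1-3 range and at least a third in the 7-9 range) are illustrative assumptions, not PHAR's exact dashboard logic.

```python
from statistics import median

def summarize_item(ratings):
    """Classify one rating-form item from panelists' 1-9 ratings."""
    med = median(ratings)
    third = len(ratings) / 3
    low = sum(1 for r in ratings if 1 <= r <= 3)    # extreme-low raters
    high = sum(1 for r in ratings if 7 <= r <= 9)   # extreme-high raters
    # One common disagreement rule: a third or more of the panel at each extreme.
    disagreement = low >= third and high >= third
    if disagreement:
        category = "uncertain (disagreement)"
    elif med >= 6.5:
        category = "appropriate"
    elif med <= 3.5:
        category = "inappropriate"
    else:
        category = "uncertain"
    return {"median": med, "category": category, "disagreement": disagreement}

# A 9-member panel's first-round ratings for one item:
print(summarize_item([8, 9, 7, 8, 6, 9, 8, 7, 8]))
# {'median': 8, 'category': 'appropriate', 'disagreement': False}
```

Summaries like these, computed per item after the first round, are what the panel reviews during the meeting before completing second-round ratings.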