Clifton Chow, Ph.D., Consultant - Anthony Petrosino, Ph.D., WestEd
The move towards evidence-based policy has focused on generating trustworthy evidence upon which to base decisions. For example, a critical component of the evidence-based policy movement has been the support for more rigorous primary studies with strong "internal validity" such as randomized experiments and well-controlled quasi-experiments. Another critical component has been the increased importance of transparent and explicit methods for synthesizing research such as systematic reviews and meta-analyses.
But once the scientific community has generated trustworthy research findings, how can this research be used to generate policy and practice guidance? Processes for doing this in a systematic, defensible, and accessible way have presented a challenge across different disciplines and fields. This briefing summarizes some examples from different fields of the development of such processes.
The importance of guidelines development was articulated by the U.S. Centers for Disease Control and Prevention (CDC):
Guidelines affect practice, policy, and the allocation of resources on a wide scale. Thus, it is critical that recommendations included in guidelines documents are based on an objective assessment of the best available evidence. Systematic literature reviews and rigorous methods used to rate the quality of evidence can assist in reducing scientific bias by increasing the probability that high‐quality, relevant evidence is considered. However, guideline development involves more than assessing scientific evidence. Developers also use expert opinion to interpret research and offer insights from practice. If not gathered carefully, expert opinion has the potential to bias the evidence synthesis and decision‐making process (see Appendix A for further elaboration by the CDC on selecting work group experts and the consensus approach).
To rapidly identify some different approaches, we solicited information from over 50 colleagues across fields of education, justice, public health, psychology, and sociology. We received 15 documents and several electronic exchanges from these colleagues to identify how research evidence has been used to create public policy and practice guidelines. These are common approaches, but no systematic search of the literature was undertaken to identify all relevant approaches. Of the 15 documents,i seven provided sufficient information to understand the framework proposed to move from research to policy recommendation. We begin with a discussion of six prominent guidelines creation documents.
In this section, we summarize six approaches to creating guidelines: (1) The National Academies Study Process; (2) The Institute for Education Sciences Practice Guide; (3) Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations (GRADE); (4) British Columbia Handbook on Developing Guidelines and Protocols; (5) International Standards for Clinical Policy Guidelines; and (6) The National Institute for Health and Care Excellence (NICE) Guidelines process.
National Academies Study Process
National Academies reports are often viewed as being valuable and credible because of checks and balances that are applied at every step in the study process to protect the integrity of the reports and to maintain public confidence in them. Experts are selected to serve on a pro bono basis (travel expenses are reimbursed) on committees to produce study reports. The Academies provide independent advice; external sponsors have no control over the conduct of a study once the statement of task and budget are finalized. Study committees gather information from many sources in public meetings but they carry out their deliberations in private in order to avoid political, special interest, and sponsor influence.
The process for producing a study report involves the following:
Defining the study. Academies' staff and members of their boards work with sponsors to determine the specific set of questions to be addressed by the study in a formal "statement of task," as well as the duration and cost of the study. The statement of task defines and bounds the scope of the study, and it serves as the basis for determining the expertise and the balance of perspectives needed on the committee.
Committee Selection and Approval. All committee members serve as individual experts, not as representatives of organizations or interest groups. Each member is expected to contribute to the project on the basis of his or her own expertise and good judgment. A committee is not finally approved until a thorough balance and conflict-of-interest discussion is held at the first meeting, and any issues raised in that discussion or by the public are investigated and addressed. Full details on how committee members are selected can be found in Appendix B.
Committee Meetings, Information Gathering, Deliberations, and Drafting the Report. Study committees typically gather information through: 1) meetings that are open to the public and that are announced in advance through the Academies' website; 2) the submission of information by outside parties; 3) reviews of the scientific literature; and 4) the investigations of the committee members and staff. In all cases, efforts are made to solicit input from individuals who have been directly involved in, or who have special knowledge of, the problem under consideration.
Report Review. As a final check on the quality and objectivity of the study, all Academies reports, whether products of studies, summaries of workshop proceedings, or other documents, must undergo a rigorous, independent external review by experts whose comments are provided anonymously to the committee members. The Academies recruit independent experts with a range of views and perspectives to review and comment on the draft report prepared by the committee. The review process is structured to ensure that each report addresses its approved study charge and does not go beyond it, that the findings are supported by the scientific evidence and arguments presented, that the exposition and organization are effective, and that the report is impartial and objective.ii
Institute for Education Sciences Practice Guides
A panel composed of research experts, policymakers, and practitioners is convened (generally around five persons). The panel works with staff from the IES What Works Clearinghouse (WWC) to cull through the research evidence. The WWC standards divide studies into those that meet minimum standards (either randomized controlled trials or quasi-experiments with strong evidence of group equivalence at baseline) and those that do not. The studies are ranked according to these WWC evidence standards. The panel meets several times (sometimes over several days) to weigh the evidence and structure recommendations. Each recommendation comes with a weak-to-strong qualifier that signals the extent to which its conclusions were based on studies meeting WWC standards. When panels are not able to come to consensus, WWC management gets involved to determine resolutions and to push the process forward.
Grading of Recommendations Assessment, Development and Evaluation (GRADE)
In 2000, a working group began as an informal collaboration of people with an interest in addressing the shortcomings of grading evidence in health care. The working group developed an approach to grading quality of evidence and strength of recommendations. Many international organizations have provided input into the development of the approach and have started using it (e.g., the World Health Organization, the Cochrane Collaboration and more than 25 other organizations). GRADE was not developed for generating specific guidelines but is a process that was developed so that any organization can use it to create its own set of recommendations and standards. There are now literally dozens of articles on the GRADE process, and this briefing only discusses the overarching framework.
The first consideration in GRADE is a determination of whether the scientific evidence is of high quality (i.e., the evidence indicates that the chances for desirable effects outweigh the chances for adverse effects). The second consideration is to use this scientific evidence to produce a simple, transparent rating of the strength of the evidence supporting each recommendation (e.g., “strong” or “weak”).
In short, the GRADE approach separates the rating of evidence quality from the rating of recommendation strength, so that both judgments are made explicitly and transparently.
British Columbia Handbook on Developing Guidelines and Protocols
The Canadian Guidelines and Protocols Advisory Committee (GPAC) is charged with developing clinical policy guidelines and recommendations for British Columbia’s Ministry of Health. Its process for judging the state of evidence involves identifying the evidence from meta-analyses or other quantitative systematic reviews. If systematic reviews on the topic are not yet available, the Committee conducts its own literature searches for individual studies, preferably randomized controlled trials (RCTs). If this evidence is also unavailable, recommendations are based on the "best available" evidence.
One feature of GPAC is the way it assigns expert work groups. A separate workgroup is formed for a specific clinical guideline and is composed of general practitioners, relevant medical specialists and often a pharmacist from the ministry of health. The workgroup is overseen by a research officer (who may or may not be the chair). The workgroup could be disbanded after an initial guideline has been developed and a new workgroup formed for subsequent revision. This process addresses the problem of expert or “scientific bias” noted by many observers, the tendency for guidelines to be based overwhelmingly on the opinions of scientific experts to the exclusion of input by practitioners.
International Standards for Clinical Policy Guidelines
The American College of Physicians (ACP) outlined some problems in developing clinical guidelines, including variations in quality, limitations of systematic reviews, and lack of transparency and adequate documentation of methods. To address these shortcomings, the ACP created a set of recommendations for guideline creation and advocated the use of a panel to form recommendations from research (i.e., the panel should include diverse and relevant stakeholders, such as health professionals, methodologists, topic experts, and patients). The ACP also presented a set of guiding principles for the creation of guidelines, including:
1. A guideline should describe the process used to reach consensus among the panel members and, if applicable, approval by the sponsoring organization. This process should be established before the start of guideline development.
2. A guideline should clearly describe the methods used for the guideline development in detail.
3. Guideline developers should use systematic evidence review methods to identify and evaluate evidence related to the guideline topic.
4. A guideline recommendation should be clearly stated and based on scientific evidence of benefits; harms; and, if possible, costs.
5. A guideline should use a rating system to communicate the quality and reliability of both the evidence and the strength of its recommendations.
6. A guideline should include an expiration date and/or describe the process that the guideline groups will use to update recommendations.
7. A guideline should disclose financial support for the development of both the evidence review and the guideline recommendations.
The National Institute for Health and Care Excellence (NICE) Guidelines
The National Institute for Health and Care Excellence (NICE) in the UK makes evidence-based recommendations on a wide range of topics, from preventing and managing specific conditions, improving health, and managing medicines in different settings, to providing social care and support to adults and children and planning broader services and interventions to improve the health of English communities. NICE promotes both individualized care and integrated care (for example, by covering transitions between children's and adult services and between health and social services).
NICE guidance is based on the best available evidence of what works and what it costs, and it is developed by committees of experts. NICE uses both scientific and other types of evidence from “multiple sources, extracted for different purposes and through different methods… within an ethical and theoretical framework.” Evidence is classified into:
Scientific evidence: which is defined as “explicit (codified and propositional), systemic (uses transparent and explicit methods for codifying), and replicable (using the same methods with the same samples will lead to the same results). It can be context-free (applicable generally) or context-sensitive (driven by geography, time and situation)”.
Colloquial evidence: derived primarily from expert testimony and stakeholder opinion; it is necessarily value-driven and subjective.
The evidence is then debated by a committee and the guidance developed and agreed upon. One feature of NICE is that clinical evidence is augmented by economic evidence in forming judgments for guidelines. There are many documents on NICE and an extensive manual. Appendix C provides a diagram of the NICE guideline creation process, and a summary of its core features.
Table 1 provides a summary of some characteristics of the six approaches described here. These characteristics include the field/discipline in which the guidelines were developed, whether a deliberating body was used to develop the guidelines, and whether the evidence and the strength of the recommendation were rated.
Table 1. Summary of the Six Approaches

Approach | Field/Discipline | Type of Deliberating Body | Rating of Evidence? | Strength of Recommendation?
National Academies Study Reports | Sciences, More Broadly | Committees of Experts | No | No
Institute for Education Sciences Practice Guides | Education | 5-person Panel | Yes | Yes
GRADE | Health Care | Organizations | Yes | Yes
British Columbia Handbook | Health Care | Work Group | Yes | Yes
American College of Physicians | Health Care | Panel | Yes | Yes
UK National Institute for Health and Care Excellence (NICE) | Health Care | Committees of Experts | Yes | Yes
These six approaches suggest some overarching characteristics to consider when developing guidelines, which we discuss below.
This briefing summarized some examples from different disciplines and fields of processes for translating trustworthy research findings into policy and practice guidelines. From electronic exchanges with colleagues and the documents we obtained from them, we outlined six approaches to using research to create recommendations for public policy and practice guidelines. All of these approaches rest on a transparent process at every stage, from the formation of deliberating bodies that are diverse in expertise to the discussion of the nature of the evidence and the judgments placed on its internal validity. No studies relevant to the topic are excluded, even if they are not randomized controlled trials. Care is also taken to ensure that panel members are unbiased, including by rotating team members. This care, together with the flexibility to include a variety of evidence, helps ensure that the policies and guidelines developed are both trusted and practical.
Scientific bias may enter into guideline development when important scientific perspectives are not adequately represented. Guideline developers should select work group members in such a way that all relevant disciplines and perspectives are included and that members of both the science and practice perspectives are represented. Having a multidisciplinary work group can help ensure the evidence is reviewed and interpreted by individuals with varying values, preferences, and perspectives and that the resulting recommendations are balanced.
Scientific bias may also arise when the opinions of work group experts are not adequately represented. Work group members may differ in professional status or scientific knowledge, and some members may dominate discussions more than others. Because of these differences and other social processes that emerge in group decision making, ensuring that information is shared and opinions are adequately represented can be challenging. Consensus development methods can help ensure that all expert perspectives are shared and that bias is counterbalanced. Consensus methods that might be considered include the Delphi method, the Nominal Group process, and the Glaser approach. These methods structure group interaction in ways that build consensus on recommendation statements; for example, by using an iterative process to solicit views through questionnaires, note cards, or written documents, reflect views back to work group members systematically, and formulate final written recommendations. Regardless of the method used, systematic ways of gathering expert opinion, views, and preferences for recommendations can help to reduce bias.
In terms of participation in committees, NICE also differs from other panels in that it includes lay members and the public at large. Lay members are defined as those with personal experience of using health or care services, or from a community affected by an established guideline or one soon to be considered. In developing the guidelines, the Committee is the independent advisory group that considers the evidence and develops the recommendations, taking into account the views of stakeholders. It may be a standing Committee working on many guideline topics, or a topic-specific Committee put together to work on a specific guideline. NICE also advocates flexibility in calling for participation in the Committee. If needed for a topic, the Committee can co-opt members with specific expertise to contribute to developing some of the recommendations. For example, members with experience of integrating delivery of services across service areas may be recruited, particularly where the development of a guideline requires more flexibility than “conventional organisational boundaries” permit. If the guideline contains recommendations about services, NICE could call upon individuals with a commissioning or provider background in addition to members from practitioner networks or local authorities.
The NICE approach toward evaluating clinical evidence differs from other approaches. In addition to clinical evidence, the committee is expected to take into account other factors, such as the need to prevent discrimination and to promote equity. Similarly, NICE recognizes that not all clinical research could or should result in implementation; therefore, NICE adds an indication as to whether a procedure should only be tested in further research or be put forward for implementation. A factor that might prevent research from being implemented in practice is evidence that the committee considers insufficient at the current time. A 'research only' recommendation is made if the evidence shows important uncertainties that may be resolved with additional evidence (presumably from clinical trials or real-world settings). Evidence may also indicate that an intervention is unsafe and/or not efficacious, in which case the committee will recommend against using the procedure.
An important feature in the NICE framework is its use of economic evidence in guidelines development. There are two primary considerations in drawing conclusions from economic studies for a given intervention. The first is that the methodology is sufficiently strong to avoid the possibility of double-counting costs or benefits. NICE recommends that the way consequences are implicitly weighted should be recorded openly, transparently and as accurately as possible. Cost–consequences analysis then requires the decision-maker to decide which interventions represent the best value using a systematic and transparent process. A related process is that an incremental cost-effectiveness ratio (ICER) threshold be used whenever possible and that interventions with an estimated negative net present value (NPV) should not be recommended unless social values outweigh costs.
The second consideration NICE puts forward on using economic evidence concerns cost-minimization procedures. The committee takes care to avoid blindly choosing the intervention with the lowest costs by declaring that cost minimization can be used only when the difference in benefits between an intervention and its comparator is known to be small and the cost difference is large. Given these criteria, NICE believes that cost-minimization analysis is applicable in only a relatively small number of cases.
In sum, economic evidence estimating the value of an intervention should be considered alongside clinical evidence, but judgments based on social values (policy) should also be taken into account to avoid choosing an intervention merely because it is offered at the lowest cost.
The final step in translating research evidence into practice and policy guidelines is drafting recommendations. Because many people read only the recommendations, the wording must be concise, unambiguous, and easy to translate into practice by the intended audience. As a general rule, the committee recommends that each recommendation or bullet point within a recommendation should contain only one primary action and be as accessible as possible to a wide audience.
An important guideline explicitly stated by NICE is to indicate levels of uncertainty in the evidence. It is the only institution to have created a "Research recommendations process and methods guide," which details the approach used to identify key uncertainties and associated research recommendations. In considering which interventions or evidence to put forward for recommendation, the committee established three levels of certainty:
1. Recommendations for activities or interventions that should (or should not) be used
2. Recommendations for activities or interventions that could be used
3. Recommendations for activities or interventions that must (or must not) be used.
Clifton Chow, Ph.D., Consultant - Anthony Petrosino, Ph.D., WestEd
The move towards evidence-based policy has focused on generating trustworthy evidence upon which to base decisions. For example, a critical component of the evidence-based policy movement has been the support for more rigorous primary studies with strong "internal validity" such as randomized experiments and well-controlled quasi-experiments. Another critical component has been the increased importance of transparent and explicit methods for synthesizing research such as systematic reviews and meta-analyses.
But once the scientific community has generated trustworthy research findings, how can this research be used to generate policy and practice guidance? Processes for doing this in a systematic, defensible and accessible way has presented a challenge across different disciplines and fields. This briefing summarizes some of examples from different fields on the development of such processes.
The importance of guidelines development was articulated by the U.S. Centers for Disease Control:
Guidelines affect practice, policy, and the allocation of resources on a wide scale. Thus, it is critical that recommendations included in guidelines documents are based on an objective assessment of the best available evidence. Systematic literature reviews and rigorous methods used to rate the quality of evidence can assist in reducing scientific bias by increasing the probability that high‐quality, relevant evidence is considered. However, guideline development involves more than assessing scientific evidence. Developers also use expert opinion to interpret research and offer insights from practice. If not gathered carefully, expert opinion has the potential to bias the evidence synthesis and decision‐making process (see Appendix A for further elaboration by the CDC on selecting work group experts and the consensus approach).
To rapidly identify some different approaches, we solicited information from over 50 colleagues across fields of education, justice, public health, psychology, and sociology. We received 15 documents and several electronic exchanges from these colleagues to identify how research evidence has been used to create public policy and practice guidelines. These are common approaches, but no systematic search of the literature was undertaken to identify all relevant approaches. Of the 15 documents,i seven provided sufficient information to understand the framework proposed to move from research to policy recommendation. We begin with a discussion of six prominent guidelines creation documents.
In this section, we summarize six approaches to creating guidelines: (1) The National Academies Study Process; (2) The Institute for Education Sciences Practice Guide; (3) Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations (GRADE); (4) British Columbia Handbook on Developing Guidelines and Protocols; (5) International Standards for Clinical Policy Guidelines; and (6) The National Institute for Health and Care Excellence (NICE) Guidelines process.
National Academies Study Process
National Academies reports are often viewed as being valuable and credible because of checks and balances that are applied at every step in the study process to protect the integrity of the reports and to maintain public confidence in them. Experts are selected to serve on a pro bono basis (travel expenses are reimbursed) on committees to produce study reports. The Academies provide independent advice; external sponsors have no control over the conduct of a study once the statement of task and budget are finalized. Study committees gather information from many sources in public meetings but they carry out their deliberations in private in order to avoid political, special interest, and sponsor influence.
The process for producing a study report involves the following:
Defining the study. Academies' staff and members of their boards work with sponsors to determine the specific set of questions to be addressed by the study in a formal "statement of task," as well as the duration and cost of the study. The statement of task defines and bounds the scope of the study, and it serves as the basis for determining the expertise and the balance of perspectives needed on the committee.
Committee Selection and Approval. All committee members serve as individual experts, not as representatives of organizations or interest groups. Each member is expected to contribute to the project on the basis of his or her own expertise and good judgment. A committee is not finally approved until a thorough balance and conflict-of-interest discussion is held at the first meeting, and any issues raised in that discussion or by the public are investigated and addressed. Full details on how committee members are selected can be found in Appendix B.
Committee Meetings, Information Gathering, Deliberations, and Drafting the Report. Study committees typically gather information through: 1) meetings that are open to the public and that are announced in advance through the Academies' website; 2) the submission of information by outside parties; 3) reviews of the scientific literature; and 4) the investigations of the committee members and staff. In all cases, efforts are made to solicit input from individuals who have been directly involved in, or who have special knowledge of, the problem under consideration.
Report Review. As a final check on the quality and objectivity of the study, all Academies reports whether products of studies, summaries of workshop proceedings, or other documents must undergo a rigorous, independent external review by experts whose comments are provided anonymously to the committee members. The Academies recruit independent experts with a range of views and perspectives to review and comment on the draft report prepared by the committee. The review process is structured to ensure that each report addresses its approved study charge and does not go beyond it, that the findings are supported by the scientific evidence and arguments presented, that the exposition and organization are effective, and that the report is impartial and objective.ii
Institute for Education Sciences Practice Guides
A panel comprised of research experts, policymakers and practitioners is convened (generally around 5 persons). The panel works with staff from the IES What Works Clearinghouse (WWC) to cull through the research evidence. The WWC standards divide studies into those that meet minimum standards (either randomized controlled trials or quasi-experiments with strong evidence of group equivalence at baseline). All other studies are considered to not meet minimum standards. The studies are ranked according to these WWC evidence standards. The panel meets several times (sometimes over several days) to weigh the evidence and structure recommendations. Each recommendation comes with a weak-to-strong qualifier that signals the extent to which conclusions were based on WWC-acceptable standards, or those that did not. When panels are not able to come to consensus, WWC management gets involved to determine resolutions and to push the process forward.
Grading of Recommendations Assessment, Development and Evaluation (GRADE)
In 2000, a working group began as an informal collaboration of people with an interest in addressing the shortcomings of grading evidence in health care. The working group developed an approach to grading quality of evidence and strength of recommendations. Many international organizations have provided input into the development of the approach and have started using it (e.g., the World Health Organization, the Cochrane Collaboration and more than 25 other organizations). GRADE was not developed for generating specific guidelines but is a process that was developed so that any organization can use it to create its own set of recommendations and standards. There are now literally dozens of articles on the GRADE process, and this briefing only discusses the overarching framework.
The first consideration in GRADE is a determination of whether the scientific evidence is of high quality (i.e., the evidence indicates that the chances for desirable effects outweighs the chances for adverse effects). The second consideration is to use this scientific evidence to produce a simple, transparent rating of the strength of the evidence supporting each recommendation (e.g., “strong” or “weak”).
The GRADE approach can be summarized as follows:
The Canadian Guidelines and Protocols Advisory Committee (GPAC) is charged with developing clinical policy guidelines and recommendations for the British Columbia Province’s Ministry of Health. Their process in judging the state of evidence involves identifying the evidence from meta-analyses or other quantitative systematic reviews. If the systematic reviews on the topic are not yet available, the Committee conducts its own literature searches for individual studies, preferably randomized controlled trials (RCTs). If this evidence is also unavailable, recommendations are based on the "best available" evidence.
One feature of GPAC is the way it assigns expert work groups. A separate workgroup is formed for a specific clinical guideline and is composed of general practitioners, relevant medical specialists and often a pharmacist from the ministry of health. The workgroup is overseen by a research officer (who may or may not be the chair). The workgroup could be disbanded after an initial guideline has been developed and a new workgroup formed for subsequent revision. This process addresses the problem of expert or “scientific bias” noted by many observers, the tendency for guidelines to be based overwhelmingly on the opinions of scientific experts to the exclusion of input by practitioners.
The American College of Physicians (ACP) outlined several problems in developing clinical guidelines, including variations in quality, limitations of systematic reviews, and lack of transparency and adequate documentation of methods. To address these shortcomings, the ACP created a set of recommendations for guideline creation and advocated the use of a panel to form recommendations from research (i.e., the panel should include diverse and relevant stakeholders, such as health professionals, methodologists, topic experts, and patients). The ACP also presented a set of guiding principles for the creation of guidelines, including:
1. A guideline should describe the process used to reach consensus among the panel members and, if applicable, approval by the sponsoring organization. This process should be established before the start of guideline development.
2. A guideline should describe in detail the methods used for its development.
3. Guideline developers should use systematic evidence review methods to identify and evaluate evidence related to the guideline topic.
4. A guideline recommendation should be clearly stated and based on scientific evidence of benefits, harms, and, if possible, costs.
5. A guideline should use a rating system to communicate the quality and reliability of both the evidence and the strength of its recommendations.
6. A guideline should include an expiration date and/or describe the process that the guideline groups will use to update recommendations.
7. A guideline should disclose financial support for the development of both the evidence review and the guideline recommendations.
The National Institute for Health and Care Excellence (NICE) in the UK makes evidence-based recommendations on a wide range of topics: preventing and managing specific conditions, improving health, managing medicines in different settings, providing social care and support to adults and children, and planning broader services and interventions to improve the health of English communities. NICE promotes both individualized care and integrated care (for example, by covering transitions between children's and adult services and between health and social services).
NICE guidance is based on the best available evidence of what works and what it costs, and it is developed by committees of experts. NICE uses both scientific and other types of evidence from “multiple sources, extracted for different purposes and through different methods… within an ethical and theoretical framework.” Evidence is classified into:
Scientific evidence: which is defined as “explicit (codified and propositional), systemic (uses transparent and explicit methods for codifying), and replicable (using the same methods with the same samples will lead to the same results). It can be context-free (applicable generally) or context-sensitive (driven by geography, time and situation)”.
Colloquial evidence: evidence derived from expert testimony and stakeholder opinion; it is necessarily value-driven and subjective.
The evidence is then debated by a committee and the guidance developed and agreed upon. One feature of NICE is that clinical evidence is augmented by economic evidence in forming judgments for guidelines. There are many documents on NICE and an extensive manual. Appendix C provides a diagram of the NICE guideline creation process, and a summary of its core features.
Table 1 provides a summary of some characteristics of the six approaches described here. These characteristics include the field/discipline in which the guidelines were developed, whether a deliberating body was used to develop the guidelines, and whether the evidence and the strength of the recommendation were rated.
| Approach | Field/Discipline | Type of Deliberating Body | Rating of Evidence? | Strength of Recommendation? |
| --- | --- | --- | --- | --- |
| National Academies Study Reports | Sciences, More Broadly | Committees of Experts | No | No |
| Institute for Education Sciences Practice Guides | Education | 5-person Panel | Yes | Yes |
| GRADE | Health Care | Organizations | Yes | Yes |
| British Columbia Handbook | Health Care | Work Group | Yes | Yes |
| American College of Physicians | Health Care | Panel | Yes | Yes |
| UK-National Institute of Clinical Excellence | Health Care | Committees of Experts | Yes | Yes |
These six approaches suggest some overarching characteristics to be considered when developing guidelines.
This briefing summarized examples from different disciplines and fields of processes for translating trustworthy research findings into policy and practice guidelines. From electronic exchanges with colleagues and the documents we obtained from them, we outlined six approaches to how research has been used to create recommendations for public policy and practice. All of these approaches rest on a transparent process at every stage, from the formation of deliberating bodies that are diverse in expertise to open discussion of the nature of the evidence and judgments about its internal validity. No studies relevant to the topic are excluded, even those that are not randomized controlled trials. Care is also taken to ensure that panel members are unbiased, including by rotating team members. This care, together with the flexibility to include a variety of evidence, helps ensure that the policies and guidelines developed are both trustworthy and practical.
Scientific bias may enter into guideline development when important scientific perspectives are not adequately represented. Guideline developers should select work group members in such a way that all relevant disciplines and perspectives are included and that members of both the science and practice perspectives are represented. Having a multidisciplinary work group can help ensure the evidence is reviewed and interpreted by individuals with varying values, preferences, and perspectives and that the resulting recommendations are balanced.
Scientific bias may also arise when the opinions of work group experts are not adequately represented. Work group members may differ in professional status or scientific knowledge, and some may dominate discussions more than others. Because of these differences and other social processes that emerge in group decision making, ensuring that information is shared and opinions are adequately represented can be challenging. Consensus development methods can help ensure that all expert perspectives are shared and that bias is counterbalanced. Consensus methods that might be considered include the Delphi method, the Nominal Group process, and the Glaser approach. These methods structure group interaction in ways that build consensus on recommendation statements; for example, by using an iterative process to solicit views through questionnaires, note cards, or written documents; reflect views back to work group members systematically; and formulate final written recommendations. Regardless of the method used, systematic ways of gathering expert opinion, views, and preferences for recommendations can help reduce bias.
In terms of participation in committees, NICE also differs from other panels in that it includes lay members and the public at large. Lay members are defined as those with personal experience of using health or care services, or from a community affected by an established or soon-to-be-considered guideline. In developing the guidelines, the Committee is the independent advisory group that considers the evidence and develops the recommendations, taking into account the views of stakeholders. It may be a standing Committee working on many guideline topics, or a topic-specific Committee assembled to work on a single guideline. NICE also advocates flexibility in calling for participation in the Committee. If needed for a topic, the Committee can co-opt members with specific expertise to contribute to developing some of the recommendations. For example, members with experience of integrating delivery of services across service areas may be recruited, particularly where the development of a guideline requires more flexibility than “conventional organisational boundaries” permit. If the guideline contains recommendations about services, NICE can call upon individuals with a commissioning or provider background in addition to members from practitioner networks or local authorities.
The NICE approach to evaluating clinical evidence differs from other approaches. In addition to clinical evidence, the committee is directed to take into account other factors, such as the need to prevent discrimination and to promote equity. Similarly, NICE recognizes that not all clinical research could or should result in implementation; NICE therefore indicates whether a procedure should only be tested in further research or be put forward for implementation. A factor that might prevent research from being implemented in practice would be evidence that the committee considers insufficient at the current time. A 'research only' recommendation is made if the evidence shows important uncertainties that may be resolved with additional evidence (presumably from clinical trials or real-world settings). Evidence may also indicate that the intervention is unsafe and/or not efficacious, and under those conditions the committee will recommend against using the procedure.
An important feature of the NICE framework is its use of economic evidence in guideline development. There are two primary considerations in drawing conclusions from economic studies of a given intervention. The first is that the methodology be sufficiently strong to avoid double-counting costs or benefits. NICE recommends that the way consequences are implicitly weighted be recorded openly, transparently, and as accurately as possible. Cost-consequences analysis then requires the decision-maker to decide which interventions represent the best value, using a systematic and transparent process. A related requirement is that an incremental cost-effectiveness ratio (ICER) threshold be used whenever possible, and that interventions with an estimated negative net present value (NPV) not be recommended unless social values outweigh costs.
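The ICER decision rule can be sketched numerically. The following is a minimal illustration, not NICE's actual procedure: the threshold and the cost and effect figures are invented for the example.

```python
# Illustrative sketch of the incremental cost-effectiveness ratio (ICER)
# rule: incremental cost divided by incremental effect (e.g., cost per
# quality-adjusted life year gained), compared against a threshold.

def icer(cost_new, cost_old, effect_new, effect_old):
    """ICER = (cost_new - cost_old) / (effect_new - effect_old)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison: a new intervention vs. current practice.
ratio = icer(cost_new=12_000, cost_old=4_000,
             effect_new=2.5, effect_old=1.7)

# Assumed willingness-to-pay threshold per unit of effect (illustrative).
THRESHOLD = 20_000

# An intervention below the threshold would be a candidate for
# recommendation, subject to the social-value judgments discussed above.
recommend = ratio <= THRESHOLD
```

Here the extra 8,000 in cost buys 0.8 additional units of effect, giving an ICER of 10,000 per unit, below the assumed threshold, so the intervention would pass this screen.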
The second consideration NICE puts forward on using economic evidence in translating research to clinical practice and policy concerns cost-minimization procedures. The committee takes care to avoid blindly choosing the intervention with the lowest cost by declaring that cost minimization can be used only when the difference in benefits between an intervention and its comparator is known to be small and the cost difference is large. Given these criteria, NICE believes that cost-minimization analysis is applicable in only a relatively small number of cases.
In sum, economic evidence estimating the value of an intervention should be considered alongside clinical evidence, but judgments based on social values (policy) should also be taken into account to avoid choosing an intervention merely because it is offered at the lowest cost.
The final step in translating research evidence into practice and policy guidelines is drafting recommendations. Because many people read only the recommendations, the wording must be concise, unambiguous, and easy for the intended audience to translate into practice. As a general rule, the committee recommends that each recommendation, or each bullet point within a recommendation, contain only one primary action and be as accessible as possible to a wide audience.
An important guideline explicitly stated by NICE is to indicate levels of uncertainty in the evidence. It is the only institution to have created a "Research recommendations process and methods guide," which details the approach used to identify key uncertainties and associated research recommendations. In considering which interventions or evidence to put forward for recommendation, the committee established guidelines that include three levels of certainty:
1. Recommendations for activities or interventions that should (or should not) be used
2. Recommendations for activities or interventions that could be used
3. Recommendations for activities or interventions that must (or must not) be used.
Copyright 2023 - EBP Society - All Rights Reserved - Terms & Conditions - Privacy Statement - Cancellation Policy - Society for Evidence-Based Professionals