Politics and Evidence-Based Nursing Practice and Policy
Sean P. Clarke
“The union of the political and scientific estates is not like a partnership, but a marriage. It will not be improved if the two become like each other, but only if they respect each other’s quite different needs and purposes. No great harm is done if in the meantime they quarrel a bit.”
Don K. Price
Health care has been a conservative field characterized by deep investments in tradition. Evolution of treatment approaches and of facility and service management has often been very gradual, punctuated by occasional breakthroughs. For many years, it was said that nearly two decades could pass between the appearance of research findings and their uptake into practice. Although this statement bears revisiting in the era of evidence-based practice and in the Internet age, disconnects between evidence and care practices are still common, as are inconsistencies in practice and variations in patient outcomes across providers and institutions. It is clear that bringing research findings to real-world settings remains a slow and uneven process.
Clinicians, researchers, and policymakers are aware of poor uptake of research evidence and lost opportunities to improve services, which has spurred interest in clinical practice and, more recently, health care policy driven by high-quality scientific evidence. An often-cited definition of evidence-based practice is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996). Evidence-based policy is an extension or extrapolation of the tenets of evidence-based practice to decisions about resource allocation and regulation by various governmental and regulatory bodies. Its rise has been influenced by recognition of the scale of investments in health and social service programs and research around the world, the enormous stakes of providers and clients in the outcomes of policy decisions, and increasing demands for transparency and accountability.
Evidence-based policy has been defined as an approach that “helps people make well informed decisions about policies, programs, and projects by putting the best available evidence from research at the heart of policy development and implementation” (Davies, 1999).
This approach stands in contrast to opinion-based policy, which relies heavily on either the selective use of evidence (e.g., on single studies irrespective of quality) or on the untested views of individuals or groups, often inspired by ideological standpoints, prejudices, or speculative conjecture. (Davies, 2004, p. 3)
Controversies in clinical care and policy development are sometimes very intense. Political forces can influence the types of research evidence generated, how it is interpreted in the context of other data and values, and, most significantly, how it is used (if at all) in influencing practice. This chapter will review the politics of translating research into evidence-based practice and policy, from the generation of knowledge to its synthesis and translation.
The Players and Their Stakes
Translating research into practice involves many stakeholder groups. Health care professionals are often affected by practice changes based on evidence. Many are invested in particular clinical methods, work practices, and structures of practice, or in the status quo of the treatment approaches they use and the way care is organized. They often have preferences, pet projects, and passions, and they may have visions for health care and their profession’s role that might be advanced or blocked by change. Health professionals may also seek to protect their working conditions or defend turf from other professions, notably around lucrative services or programs.
There are often direct financial consequences for industries connected with health care when research drives adoption, continued use, or rejection of specific products, such as pharmaceuticals and both consumable (e.g., dressings) and durable (e.g., hospital beds) medical supplies but also less visible (but equally expensive and important) products, such as consulting services.
Managers, administrators, and policymakers have stakes in delivering services in their facilities or organizations or jurisdictions in certain ways or within specific cost parameters. In general, administrators prefer to have as few constraints as possible in managing health care services and may be less enthusiastic about regulations as a method of controlling practice; however, changes that increase available resources may be better accepted.
For researchers, wide uptake of findings into practice is one of the most prestigious forms of external recognition, particularly if mandated by some sort of high-impact policy or legislation. This is especially the case for researchers working in policy-relevant fields where funding and public profile are mutually reinforcing. Researchers and academics involved in the larger evidence-based practice movement also have stakes in the enterprise. There are researchers, university faculty, and other experts who have become specialists in synthesizing and reporting outcomes and have interests in ensuring that distilled research in particular forms retains high status. Furthermore, funding agency advisers and bureaucrats may also be very much invested in the legitimacy conferred by the use of evidence-based practice processes.
The general public, especially subgroups that have stakes in specific types of health care, wants safe, effective, and responsive health care. They want to think their personal risks, costs, and uncertainties are minimized, and they may or may not have insights or concerns about broader societal and economic consequences of treatments or models of care delivery. Expert opinions and research findings tend to carry authority, but for the public, these are filtered through the media, including Internet outlets.
Elected politicians and bureaucrats want to maintain appearances of being well informed and responsive to the needs of the public and interest groups, while conveying that their decisions balance risks, benefits, and the interests of various stakeholder groups. Elected politicians are usually concerned about voter satisfaction and their prospects for reelection. They, like the public, receive research evidence filtered through others, sometimes by the media but often by various types of civil servants. Nonelected bureaucrats inform politicians, manage specialized programs, and implement policies on a day-to-day basis. They may be highly trained and come to be very well informed about research evidence in particular fields. As top bureaucrats serve at the pleasure of elected officials, they are sensitive to public perceptions, opinions, and preferences.
The Role of Politics in Generating Evidence
Health care research is often a time- and cost-intensive activity involving competition for scarce resources and rewards. Much is on the line for many stakeholders. Which projects are attempted, what results are generated, and what is reported from completed studies are all very much affected by political factors at multiple levels.
Much research likely to influence practice or policy requires financial support from outside institutions. Researchers write applications to funders for grants to pay for the resources to carry out their work. Before agreeing to underwrite projects, external funders must believe that a topic being researched is important and relevant to the funding mission; the research approach is viable; and the proposed research team is able to carry out the project. Funders are often governmental or quasi-governmental agencies, but producers or marketers of specific products or services can subsidize research. When research is supported by suppliers of particular medications, products, or services, funders may have overtly stated or implicit interests in the results of the studies, and researchers may face pressures around the framing of questions, research approaches, and how, where, and when findings are disseminated. Only recently has the full extent of potential conflicts of interest related to industry-researcher partnerships come to light. However, not-for-profit and government agencies have stakes and preferences in what types of projects are funded, and their decisions are also influenced by public relations and political considerations.
Researchers must please their employers with evidence of their productivity (e.g., successful research grants and high-profile publications). Not surprisingly, researchers choose to pursue certain types of projects over others and gravitate toward topics they believe will help them secure funding. They may defend or try to increase the profile of their approaches or topics through their influence as reviewers or members of editorial boards of journals or grant review committees and appointments to positions of real or symbolic power. There can be a great disincentive to move away from research approaches that have garnered support and recognition in the past. Nonetheless, research topics and approaches go in and out of style over time; subjects become relevant or capture the public’s or professionals’ imaginations and then often fade. As a result, academic departments, funding bodies, institutions, and dissemination venues become locales where specific tastes and priorities emerge or disappear. This also applies to methodologies within research fields.
Some subject matter areas or theoretical stances for framing subjects are so inherently controversial that securing funding and carrying out data collection are extremely challenging. Anything touching on reproductive health or sexual behavior tends to be potentially volatile, especially in a conservative political climate, and questioning a health service much beloved by providers, the public, or both as ineffective, not worth its cost, or potentially wasteful can encounter resistance.
Comparative Effectiveness Studies
Research that compares the effectiveness of different clinical approaches or different approaches to managing services is the most relevant for shaping practice and making policy. Comparative effectiveness research (CER) was defined by a federal coordination body established to guide $1.1 billion in earmarked funds under the American Recovery and Reinvestment Act (and later abolished under the Affordable Care Act [ACA]) as:
The conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat, and monitor health conditions in “real world” settings. The purpose of this research is to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances. (Federal Coordinating Council, 2009, p. 5)
Although important, comparative effectiveness research is difficult to carry out. Obtaining access to health care settings and ethically conducting studies that expose patients or communities to different approaches require a freely acknowledged state of uncertainty regarding the superiority of one approach over another. To conduct meaningful research, the interventions or approaches in question need to be sufficiently standardized, and researchers must be able to rigorously measure harms and benefits across sufficient numbers of patients over enough clinical settings (Ashton & Wray, 2013). Comparative intervention research is complicated, demanding, and expensive work to carry out. It is also likely to plunge researchers into politically sensitive debates. It may not be surprising that, because of the practical challenges and political pitfalls involved in evaluating or testing interventions, many researchers in health care are engaged in research intended to inform understandings of health-related phenomena that will enable the design of potentially useful interventions. Unfortunately, when careful evaluations are carried out, history has shown that many widely accepted treatments turn out to be ineffective and to needlessly increase both health care costs and risks to the public, suggesting that more rather than less of this difficult research is needed. Funding for comparative effectiveness research, which many hope will stimulate this essential type of inquiry, is included in the ACA of 2010.
The Politics of Research Application in Clinical Practice
Individual Studies
To stand any chance of influencing practice or policy, findings must be disseminated and read by those in a position to make or influence clinical or policy decisions. Individual research papers may or may not receive attention depending on timeliness of the topic, whether or not findings are novel, the profile of the researchers, and the prestige of a journal or conference where results are presented.
A key principle of evidence-based practice and policy is that one study alone never establishes anything as incontrovertible fact. In theory, single studies are given limited credence until their findings are replicated. Despite evidence that dramatic findings in landmark studies, especially using nonrandomized or observational research designs, are rarely replicated under more rigorous scrutiny (Ioannidis, 2005), there is often an appetite for novel findings and a drive to act on them. As a result, single studies, particularly ones with findings that resonate strongly with one or more interest groups, can receive a great deal of attention and even influence health policy, even though their findings are preliminary.
Journalists must find the most newsworthy of the findings in research reports and make them understandable and entertaining to their audiences. In contrast, for scientists, legitimacy hinges on integrity in reporting findings. Use of simplistic language or terminology or the reworking of complex scientific ideas into layman’s terms in the popular press may result in broad statements unjustified by the data. Being seen as a media darling, especially one whose work is popularized without careful qualifiers, can be damaging to a researcher’s scientific credibility. Furthermore, given that reactions and responses (and backlashes) can be very strong, researchers seeking media coverage of their research must be cautious. It is generally best to avoid popular press coverage of one’s results before review by peers and publication in a venue aimed at research audiences. Avoiding overstatement of results and ensuring that key limitations of study findings are clearly described are essential, particularly if a treatment or approach has been studied in a narrow population or context or without controlling for important background variables.
Summarizing Literature and the Politics of Guidelines and Syntheses
Despite the appeal of single studies with intriguing results, the principles of evidence-based practice and policy dictate that before action is taken, synthesis of research results be carried out. Studies with larger representative samples and tighter designs are granted more weight in such syntheses.
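As a rough illustration of why larger, more precise studies dominate such syntheses (this formulation is not from the chapter and is only one common approach), a fixed-effect meta-analysis typically weights each study by the inverse of the variance of its estimate:

```latex
% A minimal sketch of inverse-variance (fixed-effect) pooling.
% Each study i contributes an estimate \hat{\theta}_i with standard error SE_i.
w_i = \frac{1}{SE_i^{2}}, \qquad
\hat{\theta}_{\mathrm{pooled}} = \frac{\sum_i w_i\,\hat{\theta}_i}{\sum_i w_i}, \qquad
SE_{\mathrm{pooled}} = \frac{1}{\sqrt{\sum_i w_i}}
```

Because larger and more tightly controlled studies tend to have smaller standard errors, they receive proportionally larger weights in the pooled estimate; reviewers also grade design quality and risk of bias qualitatively, so formal weighting is only part of how evidence is ranked.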
Conducting and writing systematic reviews and practice guidelines are labor-intensive exercises requiring skill in literature searching, abstracting key elements of relevant research, and comparison of findings. The process is expensive and time-consuming, often requiring investments from stakeholder groups to ensure completion. Synthesis and guideline development are often conducted by teams to render the work involved manageable and increase the quality of the products and user perceptions of balance and fairness in the conclusions. Procedures used to identify relevant literature are now almost always described in detail to permit others to verify (and later update) the search strategy. It is worth noting that except in contexts such as the Cochrane Collaboration (where all procedures are extremely clearly laid out and designed to be as bias-free as possible), the grading of evidence and the drafting of syntheses can be somewhat subjective and reflect rating compromises.
Political forces influence which topics, client populations, or areas of science or practice are targeted for synthesis; often these are high-volume or high-cost services or services where clients are at high risk. Who compiles synthesis documents, and under what circumstances, reflects research and professional politics as well as influences from funders and policymakers. The credibility of syntheses hinges on the scientific reputation of those responsible for writing and reviewing them. There is debate regarding whether subject matter expertise is required of those conducting a synthesis and whether having conducted research in an area creates a vested interest that can jeopardize the integrity of a review. Interestingly, different individuals tend to be involved in conducting research as opposed to carrying out reviews. Key investigators in the area may not want to take time away from their research to work on reviews, but they may feel a need to defend their studies or protect what they believe to be their interests. Often, recognized experts are brought in at the beginning or end of a search and synthesis exercise to ensure that relevant studies have not been omitted and that study results have been correctly interpreted.
Systematic reviews, disseminated by authoritative sources, can be especially influential for both clinical practice and health policy. When the usefulness of a treatment for recipients is brought into question or it is suggested that some diagnostic or treatment approaches are superior to others, it is very likely that the creators, manufacturers, or researchers involved with the losers will bring their resources together to fight. In 1995, the Agency for Health Care Policy and Research (AHCPR), the federal entity that was the precursor of the Agency for Healthcare Research and Quality (AHRQ), released a practice guideline dealing with the treatment of lower back pain that stated spinal fusion surgery produced poor results (Gray, Gusmano, & Collins, 2003). Lobbyists for spinal surgeons were able to garner sympathy from politicians averse to continued funding for the agency, and, combined with other political enemies and threats the AHCPR faced, the result was a near disbanding of the agency. The AHCPR was reborn in 1999 as the AHRQ, with a similar mandate to focus on “quality improvement and patient safety, outcomes and effectiveness of care, clinical practice and technology assessment, and health care organization and delivery systems,” but without practice guideline development in its portfolio.
Skepticism is warranted when reading literature syntheses involving the standing of a particular product or service that have either been directly funded by industry or interest groups or have had close involvement by industry-sponsored researchers (Detsky, 2006). Guidelines and best practices to reduce bias in literature synthesis and guideline creation are being circulated (Institute of Medicine [IOM], 2009; Palda, Davis, & Goldman, 2007), in much the same way as parameters, checklists, and reporting requirements for randomized trials and observational research (e.g., the CONSORT guidelines) were first created and disseminated years ago.
The Politics of Research Applied to Policy Formulation
Distilling research findings and crafting messages to allow research evidence to influence policy can be even more complex and daunting than translating research related to particular health care technologies or treatments. Direct evidence about the consequences of different policy actions is often sparse, and much extrapolation is necessary to link available evidence with the questions at hand. Attempts have been made in the United States and elsewhere, often through nonprofit foundations such as the Robert Wood Johnson Foundation and the Canadian Foundation for Healthcare Improvement (formerly the Canadian Health Services Research Foundation), to educate the public and policymakers about health services research findings. The political challenges in implementing health policy change are considerable. The amounts of money are often larger, and the symbolic significance of the decisions even greater, which makes conflict across the same types of stakeholder interests discussed throughout this chapter even more dramatic. Box 59-1 shows pearls and pitfalls of using research in a policy context.
Box 59-1
Pearls and Pitfalls of Using Research in Policy Contexts
Pearls
- Before trying to link research with a policy issue, understand the underlying policy issue as well as possible to determine how the results in question add to the debate.
- Consider the way opponents of a particular policy stance will interpret study findings, and consider adjusting messages accordingly.
- Be aware of major limitations in the study findings (e.g., weaknesses or Achilles’ heels such as lack of randomization in an evaluation study or a failure to consider an important confounder), and be prepared to respond to them and explain why results are relevant anyway.
- Refer to bodies of similar or related research rather than individual studies, where possible, and acknowledge controversies.
Pitfalls
- Assuming policymakers and journalists are familiar with or interested in research method details.
- Writing research results with needlessly biased or strong language and/or citing such research in policy without reservations.
- Exaggerating the magnitude of effects and ignoring all weaknesses or inconsistencies, particularly those that are easily identified by educated nonspecialists.
- Citing research and/or researchers without checking credibility or verifying scientific quality of the results.
- Failing to recognize that research findings are only one component of wider policy debates.
Glenn (2002) explores the role of scientific evidence in policymaking with regard to ultimate and derivative values and their relationships to each other. He frames ultimate values as those held without real justification (or need for justification) by facts. Notions that patient suffering is bad and is to be avoided at all costs, that health care is a right (and that society has a duty to help those in need), or that patients deserve care free of errors could all be considered ultimate values. Ultimate values are by nature ill-suited to scientific investigation; in addition to value judgments, they may include fundamental political views about the role of government or religious beliefs. Derivative values result from (or are derived from) the combination of an ultimate value with a stance about the realities of the world. Some may argue, for example, that because low nurse staffing leads to higher error rates (an interpretation of research offering a testable insight about the clinical world) and because patients should be exposed to as few errors as possible (an ultimate value), low staffing should be avoided or legislated against through the use of minimum nurse staffing ratios (a derivative value).
In Glenn’s words “…science can assess the validity of the beliefs about reality that link derivative to ultimate values” (Glenn, 2002, p. 69). Verifying statements about reality, not defending either ultimate or derivative values, is its role. Researchers are expected to remain objective and fair: to use the rules of evidence for scientific inquiry properly, clearly reporting facts that contradict their impressions or hypotheses, as well as ones consistent with their and others’ ultimate and derivative values. However, several forces, namely, a tendency to resist admitting having drawn incorrect or overly simplified conclusions in the past, as well as social and political pressures from one’s fan base (what Glenn calls the researcher’s significant others) can create problems with keeping these boundaries clear. Researchers may be accused of bias or, worse, promulgating junk science. Journalists have commented on inflated estimates of prevalence or impacts of various diseases or conditions using research data (using loose definitions, questionable assumptions, or data with limited potential to be verified) (Barlett & Steele, 2004) to lobby for increased funding for research, treatment initiatives, or policy actions.
When research findings collide with the interests of stakeholder groups in a policy debate, the responses can be extreme. The ethical integrity, scientific competence, or motivations of the researchers involved can be called into question by stakeholders whose interests are in conflict with particular results. Late in 2009, controversy emerged when e-mail messages exchanged between prominent United Kingdom climate researchers were made public. These scientists’ work is often cited to document claims of global warming and to justify tighter vehicular, industrial emission, and environmental controls. The content of the e-mail messages was considered by some to show clear evidence of departures from objectivity, data massaging, and politicking to reduce the impact of conflicting findings from competing scientists (Booker, 2009; Sarewitz & Thernstrom, 2009). Equally high-profile and bitter arguments surround the potential health hazards associated with genetically modified crops and pit scientists, industry, and government stakeholders against each other. Within health care, as of this writing (winter 2014), controversy continues to simmer about public opinion on the ACA, the consequences of the ACA for health insurance premiums, and the impact on unemployment rates and job creation of the ACA’s provisions penalizing employers who do not offer health insurance (Bowman & Rugg, 2013; FactChecking “Pernicious” Obamacare Claims, 2013).
The culture of critique and a media appetite for sensationalism, fueled by rapid dissemination of news stories through the Internet, have undermined claims of complete objectivity in research and highlighted the political aspects of research. Whether or not the scientific claims or conclusions of any researchers are correct or even whether objectivity can ever exist in research is probably immaterial to the discussion here. Today, researchers, like politicians, are assumed to have vested interests unless proven otherwise. Good scientific practice is the best defense against claims of bias or worse, but it does not confer immunity from accusations. Nurse researchers aspiring to policy relevance and politically active nurses seeking to use research findings in their endeavors should be aware of the pitfalls and consequences. It is useful for researchers and activists to identify potential winners and losers under proposed policy changes and anticipate their likely interpretations of research findings. In making policy from the research literature tying outcomes to nurse staffing levels, opposing stakeholders at their extremes either cast managers and executives as untrustworthy when it comes to decisions where the bottom line and patient safety might collide or frame nurses, their associations, and collective bargaining units as self-interested and prepared to see hospitals become insolvent by insisting on unnecessarily high staffing levels and/or expensive staffing models.
In the end, it is probably wise to avoid exaggerating the ultimate influence of research findings on shaping policy. Policy victories attributed to research evidence may owe more to skill and luck in turning opinion, and to how the evidence is spun in various forums, than to the evidence itself. Furthermore, policy changes stimulated by or defended with research can be short lived. The balance between various political forces and interest groups can, and often does, influence the outcomes of many policy debates as much as, or more than, the thoughtful application of research evidence. Resistance from organized medicine to expanded scope of practice for advanced practice nurses is one example where a critical mass of evidence supports a change but political forces have conspired against it (Hughes et al., 2010).
The translation of evidence into clinical practice and policy is, by nature, a political process. Researchers are most likely to influence policy by designing studies that will yield the clearest possible answers to questions with policy relevance.
Discussion Questions
- Think about a specific area of clinical care you are familiar with where one or more interest groups are attempting to bring about a change in the nature of clinical care or systems of service delivery. Assume a new, potentially game changing research finding appears in the literature and receives wide attention. Using the list of types of stakeholders in translating research into practice in this chapter, identify the groups that might have an interest in these findings and hazard a guess about their likely reactions to new research.
- Thinking about Glenn’s explanation of the role of scientific evidence in policymaking and returning to the area of care or practice that you considered in connection with the preceding discussion question, what deeply held beliefs (ultimate values) and derivative values (conclusions and values from interpretation of empirical data about the world in the light of ultimate values) do stakeholders claim in this area of clinical/policy controversy? Do you agree that the purpose of research is to add empirical data to policy debates rather than to support or refute ultimate or derivative value statements?
References
Ashton CM, Wray NP. Comparative effectiveness research: Evidence, medicine and policy. Oxford University Press: New York; 2013.
Barlett DL, Steele JB. Critical condition: How health care in America became big business—And bad medicine. Doubleday: New York; 2004.
Booker C. Climate change: This is the worst scientific scandal of our generation. The Telegraph. 2009.
Bowman K, Rugg A. Top 10 takeaways: Public opinion on the Affordable Care Act. 2013.
Davies PT. What is evidence-based education? British Journal of Educational Studies. 1999;47(2):108–121.
Davies P. Is evidence-based government possible? Jerry Lee lecture, presented at the 4th Annual Campbell Collaboration Colloquium. National School of Government (UK): Washington, DC; 2004.
Detsky AS. Sources of bias for authors of clinical practice guidelines. Canadian Medical Association Journal. 2006;175(9):1033.
FactChecking “Pernicious” Obamacare claims. Articles/featured posts; 2013.
Federal Coordinating Council. Federal Coordinating Council for Comparative Effectiveness Research: Report to the President and the Congress. Department of Health and Human Services: Washington, DC; 2009.
Glenn N. Social science findings and the “family wars.” Imber JB. Searching for science policy. Transaction: New Brunswick, NJ; 2002.
Gray BH, Gusmano MK, Collins SR. AHCPR and the changing politics of health services research. Health Affairs. 2003 [Suppl Web Exclusives, W3-283-307].
Hughes F, Clarke SP, Sampson DA, Fairman J, Sullivan-Marx EM. Research in support of nurse practitioners. Mezey MD, McGivern DO, Sullivan-Marx EM. Nurse practitioners: The evolution and future of advanced practice. 5th ed. Springer: New York; 2010.
Institute of Medicine [IOM]. Conflicts of interest and development of clinical practice guidelines. Lo B, Field MJ. Conflict of interest in medical research, education, and practice. National Academies Press: Washington, DC; 2009.
Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. Journal of the American Medical Association. 2005;294(2):218–228.
Palda VA, Davis D, Goldman J. A guide to the Canadian Medical Association handbook on clinical practice guidelines. Canadian Medical Association Journal. 2007;177(10):1221–1226.
Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn’t. British Medical Journal. 1996;312(7023):71–72.
Sarewitz D, Thernstrom S. Climate change e-mail scandal underscores myth of pure science. The Los Angeles Times. 2009.
Online Resources
Academy Health (professional association for health policy and health services research).
Canadian Foundation for Healthcare Improvement.
Commonwealth Fund.