States struggle to measure success of health coverage
As state agencies take more responsibility for overseeing health care coverage for those under age 65, they are being asked by federal program officials, state legislators, and federal and state policy-makers for massive amounts of statistical information and analysis to document and evaluate the results of their actions. States, however, face major obstacles in collecting the needed information and performing evaluations that are both pertinent and timely.
Now, there is a new resource available. Marsha Gold, a senior fellow at Mathematica Policy Research in Washington, DC, has written Evaluation of State Healthcare Coverage Expansion, a manual published by the Alpha Center, also in Washington, DC, to help state officials respond to requests and overcome some of the barriers to evaluation.
"Most policy-makers and program officials are not researchers," Ms. Gold says. "Yet, they increasingly are asked to respond to a variety of questions that only can be answered with appropriately targeted research. Often these questions are difficult to answer, even without the considerable constraints of limited resources, time, and data that apply in real-world settings." Ms. Gold says her manual is intended to help policy-makers and their staffs become sophisticated users of existing research findings as well as effective participants in shaping future evaluation strategies so that the results are relevant and can help answer questions that arise in the context of state policy.
"The demand for evaluation varies from state to state," she tells State Health Watch. "One of our goals was to address this issue — less in a crisis mode and more in a capacity-building mode for states. Having the right analytical tools has always been a problem for states, but the current environment seems to be a good time to promote building needed capacity to do this work."
The manual is intended to give people tools to think about what’s most important in what they’re evaluating and also to demonstrate how evaluators think about their research.
State, local levels are target areas
She targets those working at the state and local levels because that’s where problems are arising now.
"We’ve built an elaborate set of monitoring systems at the national level, but decisions are made and programs conducted at the state and community levels. The question now is how to support the need for analysis at that level. States vary a lot in their capacity for good research and evaluation, and it’s often hard to get this work done uniformly and in a way that’s sensitive to local communities and still have something that can be compared with another program or locale," adds Ms. Gold. "We want to be able to help states make their cases for funding for research and evaluation."
Ms. Gold provides two classic, consistent definitions of evaluation:
• determining the value or amount of success in achieving a predetermined objective;
• measuring the effects of a program against the goals it sets out to accomplish as a means for contributing to subsequent decision making about the program and improving future programming.
However, she says agreeing on a definition is ultimately less important than encouraging a common understanding among stakeholders about the information needed to make decisions and when it is likely to be required. "Thus, it is important to be concerned less with terminology than with whether (1) you and your audience are thinking of evaluation the same way and (2) what you propose to answer matches the most relevant questions in a timely and appropriate way."
Before making recommendations for improving evaluations, the manual looks at problems that can interfere with or prevent effective evaluation. Obstacles to collecting needed information and performing useful evaluations include:
• data challenges when reliable estimates of current insurance coverage at the state and local level don’t exist, especially for particular target populations;
• concurrent changes that coincide with coverage expansion, making it difficult to determine which factors are actually influencing coverage levels;
• programs that evolve over time, often in response to comments from beneficiaries, advocates, providers, state staff, legislators, the news media, and others, often making it hard to determine exactly what is being evaluated;
• limited state staff resources available to perform evaluations;
• the lag time between data collection and analysis that makes it difficult for policy-makers to provide real-time responses to legislative inquiries.
While the techniques she recommends "will not remove these constraints," Ms. Gold says, "with a better grasp of evaluation methods, policy-makers can think more carefully about ways to address questions, think through their feasibility, and decide how best to position their agencies to respond."
The manual identifies six ways in which state policy-makers can leverage their resources to respond more effectively to evaluation questions, even when those resources are limited:
1. Become a sophisticated user of existing studies. Ms. Gold recommends reviewing earlier studies for their applicability and synthesizing insights from them as a basis for responding to policy questions and as a vehicle for identifying key issues to evaluate in any state. "Sometimes, there’s stuff available and it’s unused, but it wouldn’t take much work to be able to use it."
2. Leverage operational processes and systems. If policy-makers think strategically about the logic of their programs and the kinds of information that may be relevant to future evaluations, they often can tweak operating systems to provide operational measures of program performance. For instance, a program might have low enrollment rates for a number of reasons. Problems with education and outreach might suggest better targeting or more effective message development. Problems with enrollment might suggest an easier enrollment process or targeted campaigns to address specific concerns. Problems with retention/transition might lead to a review of policies on duration of eligibility, premiums, or incentives for continuity of enrollment.
3. Collaborate with other agencies. Evaluations can be more effective if staff in various agencies share their information and expertise. Ms. Gold cites examples of collaboration in Wisconsin, Mississippi, and Rhode Island that led to improved evaluations in a number of efforts.
4. Develop a public-private partnership with local universities and other sources. "States, at times, may be able to develop partnerships with local universities or others that can be used as a vehicle to expand the capacity for evaluation," the manual says. "Such partnerships should be actively managed to encourage relevant and rewarding work for both partners despite their different needs and incentives. States that develop partnerships with outsiders should invest in qualified internal staff who can oversee the work and encourage its relevance and timeliness." Ms. Gold says such arrangements cannot replace the need to develop sufficient internal resources, but can lend credibility to state efforts because outsiders may be viewed as more independent and unbiased.
5. Capitalize on the potential for external funding to support evaluation. Ms. Gold says program money sometimes can be directed to evaluation analyses as can grants from organizations such as the Robert Wood Johnson Foundation.
6. Cooperate with and use external evaluations as targeted opportunities. Often there are outside agencies that evaluate a program regardless of the views of the executive agency or program offices, such as federal- and foundation-funded CHIP evaluations, 1915(b) and 1115 waiver evaluations, and legislative audits and evaluations. "State policy-makers can identify whether such evaluations present an opportunity to obtain useful data and analysis," the manual says. "These evaluations are likely to have more public impact than internally generated evaluations. Agencies will want to take time to help evaluators understand their programs by providing documentation, data, information, perspectives on accomplishments and challenges, and feedback on the accuracy of drafts. Reacting defensively or withholding information from evaluators is not good practice; they are likely to learn about these issues from others, and policy-makers will want to provide their perspective on the issue."
Ms. Gold believes that even though policy-makers are not researchers or evaluators by training, if they spend some time understanding the concepts that underlie effective evaluations, they will be better able to effectively use and guide evaluations. To set a context, she defines policy-making as "the process for resolving policy issues and the competing claims for resources among the various public and private parties that have a stake in the outcome." Evaluation results often are used in different ways, according to Ms. Gold, reflecting differences in needs at different stages of the policy process. Uses of evaluation include help for making decisions, providing support to substantiate decisions already made, and acting as a tool to establish and alter attitudes in ways that may influence future decisions.
Evaluation influences agendas
Uses of information also vary as the policy-making process progresses. Some observers note that evaluation generally has a greater influence in agenda-setting when issues are being raised and compete for attention, rather than when specific proposals are being considered and stakeholders are negotiating specific concerns.
Since a key characteristic of evaluation is that it reflects analytical thinking, Ms. Gold calls for explicitly considering and providing analysis in three areas — technical effectiveness (will the option achieve its intended results?), political acceptability (what are the main sources of support and opposition?), and operational feasibility and cost (what will it take to implement, and how much will it cost?).
"Both the design and interpretation of evaluations are best determined in context," Ms. Gold asserts. "One key element of context involves a good understanding of what the initial goals were and what compromises were made in enactment or implementation that could influence the state’s ability to achieve these goals. Goal-based evaluation is inherently difficult, since policy often evolves from multiple unstated or hard-to-acknowledge compromises."
Policy-makers should choose a type of evaluation to support a specific need. Evaluations could cover relevance and progress of a program to assess whether it is needed and/or whether it is the right strategy, and to ensure data will be available to track progress in implementation. In assessing implementation, Ms. Gold says, policy-makers will benefit from data that are appropriate measures of progress against relevant goals and expectations.
Another type of evaluation focuses on effectiveness and efficiency, emphasizing intermediate measures of performance. Such an evaluation measures the direct outputs of a policy or product, what it costs to generate such outputs, and other intermediate performance measures. It often is useful, the manual says, to calculate key measures of output and intermediate performance based on age, insurance status/eligibility, category, geography, or other relevant characteristics for which data exist.
Another criterion, internal validity, gets to the issue of showing a causal relationship in a program and ensuring that the relationship will stand up to scrutiny.
Ms. Gold suggests that policy-makers can become more sensitive to alternative explanations for a given change, and to the limitations of a study, if they ask themselves how they would want to explain a less favorable outcome. And the criterion for study feasibility focuses on the practicality of a given study and whether it can be completed with the available data and resources, and within a relevant time frame.
Contact Ms. Gold at (202) 484-9220.