
04/01/08

Cost-Effectiveness Analysis


Text, Chapter 5

Cost-effectiveness analysis (CEA) is a methodology for comparing resources with consequences. As always, costs are measured in dollars, whereas outcomes are measured in "natural units." Thus, it can only be used to compare two or more similar alternatives. This means they must be measured with the same denominator – symptom-free days, cures, lives saved, years of life saved, etc.

In comparison to cost-benefit analysis (solely dollars), where CBA measures a "net benefit," CEA measures cost per outcome. Thus, CBA reports something like "net benefit of $250,000," whereas CEA reports "$10,000 per life saved." Once again, you can only compare alternatives with the same denominator, and it must be measurable in set units (lives, mmHg, etc.).

The choice of therapy generally falls into two categories: one is domination, the other is trade-offs. It depends on the situation, but here is an example to illustrate. When Drug A is more expensive than Drug B, and Drug A is more effective than Drug B, there is a trade-off. However, when Drug A is less expensive than Drug B, and Drug A is more effective than Drug B, then you have a "dominating choice." The same is true in reverse for Drug B.

Five steps in Cost-Effectiveness Analysis


1. Define the problem or objective being tested
a. Select objectives (the more specific, the better)
b. Define perspective.
2. Identify Alternative Interventions
a. Consider relevant treatment options
b. Choose appropriate comparators
3. Describe Inputs and Outputs
a. How resources affect the outcomes
b. Decision analysis models
4. Identify costs and outcomes
a. Can be final (cure / survival / death)
b. Or intermediate (mmHg, FEV1, days)
i. Have proof they’re related to one another
5. Interpret and present results
a. C/E ratios; graphical representation of data

Calculations are simple – based on effectiveness (cure rate, represented as a decimal) and cost. Thus, the higher the cure rate, the better. However, tweaking the inputs can change which option you want. Example – if you only had $70,000 to spend between two options with two different costs, which one? Divide the total budget by the cost per treatment to determine the number of people you can treat, then multiply by the cure rate to determine how many individuals are cured.
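
A minimal sketch of that arithmetic in Python (the costs and cure rates below are made-up numbers for illustration):

    # Hypothetical example: $70,000 budget, two drugs with different
    # costs per patient treated and different cure rates.
    BUDGET = 70_000

    def patients_cured(budget: float, cost_per_patient: float, cure_rate: float) -> float:
        """Patients treatable under the budget, times the cure rate."""
        return (budget / cost_per_patient) * cure_rate

    drug_a = patients_cured(BUDGET, cost_per_patient=700, cure_rate=0.90)  # 90.0
    drug_b = patients_cured(BUDGET, cost_per_patient=500, cure_rate=0.70)  # 98.0
    print(drug_a, drug_b)  # under a fixed budget, the "worse" drug cures more people
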
Cost-effectiveness analysis maximizes "the bang for the buck." While it recognizes limited resources, it is counter-intuitive to traditional medicine (where MDs bear no financial risk). Operating on a budget almost always results in a change in the preferred option – this is the single most controversial issue in pharmacoeconomics.

Summarized best by Capt. Kirk and Mr. Spock:

Spock: "The needs of the many outweigh…"
Kirk: "…the needs of the few."
Spock: "Or the one."

The incremental C/E ratio is similar to a marginal cost. It is calculated by subtracting the total cost of Option B from the total cost of Option A, and dividing by the total outcomes for A minus the total outcomes for B:

(Total Costs A − Total Costs B) / (Total Outcomes A − Total Outcomes B)

This equation can also be used with the rates (cost per patient and decimal cure rate) as a shortcut. Always pay attention to order and to sign – they determine how to read the data. Thus, A → B with a negative result indicates that going with option A is a loss of money, whereas B → A with a positive result means that B is a gain.
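
A minimal sketch of the ratio, using hypothetical totals (note it reproduces the $10,000-per-life-saved figure from above):

    def icer(cost_a: float, cost_b: float, outcomes_a: float, outcomes_b: float) -> float:
        """Incremental cost-effectiveness ratio of option A versus option B."""
        return (cost_a - cost_b) / (outcomes_a - outcomes_b)

    # A costs $250,000 more and saves 25 more lives than B:
    print(icer(500_000, 250_000, 50, 25))  # 10000.0 dollars per extra life saved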

Politically, C/E ratios are great for decision makers (politicians, lobbyists), as they provide a fundamental method for determining what to endorse. However, when using a C/E ratio, you have to determine what is "good enough," the usefulness depends solely upon the quality of the data, and some assumptions are always required. Results are commonly depicted in league tables and on the C/E "plane" (a four-quadrant x-y graph, where the x axis is the difference in effect and the y axis is the difference in cost).

04/03/08
Cost-Utility Analysis (CUA)

Cost-utility analysis (CUA) differs from CEA in that all outcomes are final outcomes. The denominator is always the QALY, as opposed to a specific denominator for CEA (lives saved, etc.). The numerator is still cost, in dollars as always.

QALY stands for "Quality-Adjusted Life Year." It is a measurement that integrates costs and outcomes within a "utility analysis" framework. It is effectively quality times quantity. Utilities refer to patient preferences, and results are always depicted as a ratio = cost / QALY. The QALY can be substituted with a variety of denominators, all similar, except measuring different aspects. A few are:

1. QALY – Quality Adjusted Life Year
2. HYE – Health Year Equivalent
3. TWiST – Time Without SympToms
4. "Well-years gained"
5. "Functional Years"
6. QALE – Quality Adjusted Life Expectancy

CUA is appropriate when quality of life is important to integrate, usually as the primary outcome. Also, the treatments must affect both mortality and morbidity. Similar in nature to CEA, the diverse treatments must achieve a common basis (a particular outcome – years of life, etc.). Also, it can be used when studying a project that has already implemented CUA. CUA is not appropriate when only intermediate outcomes are available, when alternatives are equally effective, when quality of life isn't measured, and as always – when it's too expensive.

How do you calculate a QALY? It is a set unit of time (number of years) multiplied by the proportion Q that we are looking for. In QALYs specifically, the units are years, and Q is the degree of health – with 1 equaling perfect health, and 0 equaling death. Thus, a QALY merely equals the years multiplied by the degree of health (20 years healthy = 20 QALYs; 20 years of chronic illness at Q ≈ 0.5 = 10 QALYs).
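
The same calculation in Python, with the numbers from above:

    def qalys(years: float, q: float) -> float:
        """Quality-adjusted life years: time multiplied by the utility weight Q."""
        return years * q

    print(qalys(20, 1.0))  # 20 years in perfect health          -> 20.0 QALYs
    print(qalys(20, 0.5))  # 20 years of chronic illness, Q = 0.5 -> 10.0 QALYs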

Q values are inherently subjective, and thus can be obtained in a variety of ways – either from the literature, from health professionals' judgments, or by direct measurement from patients. A few of these methods follow.
Category (Case) Rating Scale
Having patients place a dot on a measured line (e.g., a 10 cm line) lets them establish a numerical reference specific to a question – "With the left end being death, and the right being perfect health, please mark where you feel you are."
Standard Gamble
Patients are given a binary choice – either live less than healthy (with the condition) or "gamble" on a new procedure with a set risk of death. The utility of state i (living with the condition) equals Q, which equals the probability p of perfect health at which the patient is indifferent between the two choices.
Time Trade Off
A ratio measurement – for example, if you had a heart condition that was going to kill you in 10 years, how many years would you give up to have perfect health, even if the procedure would shorten your life? An incremental measurement, where the utility of state i (health state) = Q = x (time alive in perfect health) / t (time alive in imperfect health); a quick sketch follows the last method below.
Magnitude Estimation
Patients are given a reference case, and asked to compare against that reference case. Ex. Situation 1 is twice as desirable as the base case. Etc.
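
A minimal sketch of the time trade-off formula (the 10-year heart condition example above, with a hypothetical answer of 8 healthy years):

    def tto_utility(x_healthy_years: float, t_condition_years: float) -> float:
        """Time trade-off: utility of the health state, Q = x / t."""
        return x_healthy_years / t_condition_years

    # Patient accepts 8 years of perfect health in place of 10 with the condition
    print(tto_utility(8, 10))  # Q = 0.8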

Quality of life can also be measured on multi-attribute rating scales, which separate QOL into various measurable attributes, for a more diverse and stratified view of outcomes. Included are mobility, physical activity, social activity, symptoms, etc. Once again, results are still presented in league tables, and are controversial in their use. They are also, like CEA, designed for use in allocating money to where it is most needed. CUA is limited in that utility values can vary significantly – not only between measurements, but different methods can yield different QALYs. In addition, estimating utilities may not always be accurate, as they may be nonlinear (is 0.2 → 0.3 the same as 0.7 → 0.8?). There are other measurements than QALYs, but they are less popular and more complex.

When reviewing a CUA study:


1. Describe the utilities in layman’s terms
2. Report the source of the weights used
3. Explain the results in layman’s terms
4. “Transparency” in all aspects
5. Sensitivity analysis

04/08/08
Decision Analysis Part I
Text: Chapter 8

Decision analysis is a quantitative approach to decision making. Using a diagram of choices and outcomes, it quantifies uncertain events and imposes logical thinking, in order to determine outcomes. It is the method used "inside" CEA, CUA, etc. The emphasis here is on "expected values."

A little history – DA started in WWII, as a military application for allocating scarce resources. In the '70s, it was introduced to the medical / health literature. By '87, there were more than 200 published medical studies using it. In '95, there were 81 articles for Rx products using it. Today, there are 21,840 PubMed hits.

"Expected" values are derived from probability theory. In probability theory, the expected value of a random variable is the sum, over each possible outcome of the experiment, of its probability multiplied by its payoff. It represents the average amount one expects as the outcome if the experiment were repeated many times. The expected value does not have to be an achievable result (such as 3.5 when rolling a six-sided die). It is used commonly in gambling, in attempts at medical therapies, and in other problem-solving situations.

Calculation example – create a chart with the total number of options (die score, sum of two dice, etc.), multiply the probability (as a decimal) by the face value, and then you have the expected value. A great example of this is "Deal or No Deal," which uses this exact logic to determine how much the cases left in play are 'worth.'
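
A minimal sketch for the die example:

    def expected_value(outcomes):
        """Sum of payoff x probability over every possible outcome."""
        return sum(payoff * p for payoff, p in outcomes)

    # Fair six-sided die: faces 1-6, each with probability 1/6
    die = [(face, 1 / 6) for face in range(1, 7)]
    print(expected_value(die))  # 3.5 -- not itself a possible roll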

In drawing, a box is a choice node, and what follows is the result of a decision. A circle is a chance node, and what follows is uncertain – it requires a probability of occurring. Time is always shown increasing (continuing) from left to right. The two things to ensure are: identify a decision which needs to be made; and then diagram the decision, and ALL PLAUSIBLE results. You include the probabilities for each result, calculate the expected value for each decision, and then you can identify the preferred alternative.

Analyzing or building a DA chart can be time consuming and error prone. To make it easier, build it in a set form. Build the chart first – a central node, followed by options, each branching out based on probability / decision nodes. Once all end points have been made, create a table at the far end, with a row for each endpoint. Fill in the probability of reaching that endpoint (multiplying probabilities if necessary – a 5% chance of this branch, followed by a 50% chance of this one, gives a 2.5% chance of reaching there, etc.). Multiply this by the total cost of that branch (path), and you have the "expected" value for that branch. Sum all endpoints in that decisional pathway, and you have the overall average expected cost of that 'treatment' (primary branch).
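
A minimal sketch of that endpoint table for one hypothetical treatment branch (all probabilities and costs are invented; path probabilities multiply down the tree):

    # Each endpoint: (probability of reaching it, total cost along that path)
    treatment_a = [
        (0.70,        1_000),  # cured on first-line therapy
        (0.30 * 0.50, 4_000),  # fails first line, responds to rescue therapy
        (0.30 * 0.50, 9_000),  # fails first line, hospitalized
    ]

    # Expected cost of the whole branch = sum over endpoints of p x cost
    expected_cost = sum(p * cost for p, cost in treatment_a)
    print(expected_cost)  # 0.7*1000 + 0.15*4000 + 0.15*9000 = 2650.0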

Practice makes perfect – they are relatively easy, but also easy to mess up.

04/10/08
Decision Analysis Part II

The same methods used in simple decision analysis can be adapted to a variety of needs. Instead of cost, length of stay (LoS) is another common variable, and an "expected" LoS can be calculated. Similarly, the question asked can be modified – such as "per healthy patient" or "length of stay per patient" – all of which require a slight modification to the plug-and-chug of data. Simply think of it this way – isolate what you are looking for; you only need a small amount of the data per question. I.e., for "per healthy patient" you only require the probability of a healthy individual, and the other relevant descriptor (ex. cost) – if they are not healthy, they are not included. Thus, for that question, it is merely Probability (healthy) x Cost (based on the decision branch).

Once you have all the data interpreted, you can calculate an Incremental Cost-Effectiveness Ratio (ICER). Exceptionally similar to CEA, it uses the same equation:

(Total Cost A − Total Cost B) / (Total Outcomes A − Total Outcomes B)

Thus, you can determine the overall incremental cost per denominator (i.e. "cost per healthy patient").

FYI (maybe a question) – TreeAge is a small company nearby in MA that develops and sells decision analysis software for creating, managing, and evaluating complex DA trees. This company seemingly has the market cornered, as DA is becoming increasingly important in the healthcare industry. Clerkship students have used it, along with searching through medical records, to help determine costs of therapy and cut down spending in various hospitals. It is a very time consuming, but effective, method of interpreting data and deciding the best course of action.
04/15/08
Sensitivity Analysis
HRQOL: Text Chapter 7
Sensitivity Analysis: pgs 76-7, 105-7, 162, 371-2, 384

Sensitivity analysis is a process of varying assumptions and variables over plausible ranges to test the "robustness" of results and conclusions. In short, it means re-running the analysis with varied data to ensure that your decision or therapy guideline remains cost effective, etc. You do this for a variety of reasons – it tests the effect of assumptions on conclusions, it helps identify your critical assumptions, and it helps predict future effects of the decision. It has quickly become a key indicator of a quality study.

Sensitivity analysis is aimed to do just that – test any assumption (variable) that can affect
the outcome. This can be just about anything – from Rx effectiveness, Adverse Event
Profiles, Resources Used (and cost), discount rates, etc. Sources of uncertainty in any
decision can come from a lot of places – variability in the data, how generalizable it is to
real world results, extrapolation of data, and how various analytical methods sort /
interpret the data.

Because of these 'ranges' in 'variables', there are mathematically immense numbers of combinations to study in order to accurately interpret the data. An example – say a drug has an adverse event that occurs somewhere between 1 and 5% of the time, and costs between $100,000 and $150,000 → you have on the order of 5 x 50,000 combinations to test.

Because of these immense numbers of combinations, there are a variety of analyses you can use. The four main types: "Simple," Threshold, Analysis of Extremes, and Probabilistic. Simple is just that – it modifies key assumptions across a reasonable range; if the conclusion changes, then the model is considered sensitive. Threshold determines the "break-even" point, where the conclusion changes. Analysis of extremes is, once again, just that – the best case and the worst case, and whether the conclusion changes between them. Probabilistic (aka Monte Carlo) tests by randomly 'playing' data through the model and determining when/where the conclusion changes, to determine if it is sensitive.
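
A minimal probabilistic (Monte Carlo) sketch, reusing the hypothetical decision-analysis numbers from above: draw each uncertain input from its plausible range many times and count how often the conclusion changes.

    import random

    TRIALS = 10_000
    WTP = 10_000  # hypothetical willingness-to-pay threshold per healthy patient

    flips = 0
    for _ in range(TRIALS):
        # Draw the uncertain inputs uniformly from their plausible ranges
        cost_a = random.uniform(2_400, 2_900)
        p_healthy_a = random.uniform(0.80, 0.90)
        icer = (cost_a - 2_000) / (p_healthy_a - 0.75)
        if icer > WTP:   # conclusion flips: A no longer looks worth it
            flips += 1

    print(flips / TRIALS)  # fraction of simulations where the conclusion changes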

Sensitivity does not indicate bad or improper data. It does not detract from the study. It merely indicates that the model presented (or used) varies based on the situation it is applied to – i.e., "What is the best antibiotic to use for XXX vs. YYY?" – where the model is possibly sensitive to those strains.

SA is underused, if it is used at all. It is a key factor in determining the acceptability and usability of various therapeutic models, payment models, etc. Data (realistic estimates) can be obtained in a number of ways – from RCTs, to health care claims data, or even nationally available public data sets – the difficulty and the data available all vary by data set. Select carefully, wisely, and based on budget, for the most appropriate data for your model (don't test your Type 2 DM medication regimen against people who have Type 1 DM).

04/15/08
Health Related Quality of Life- Psychometrics- The Science of Surveys
(SKIP if you already know Psychometrics – lots of reliability / validity / precision, etc.)

Health status and quality of life are not the same – nor can they be measured the same way. Health status includes such things as functional status, morbidity (or disability), and wellbeing – be it mental, social, or otherwise. This is objective data. Quality of life is an opinion – the patient's subjective assessment of their current situation. It is based both on the patient's perceptions and on their intrinsic level of satisfaction. It is easier to differentiate them as clinical outcomes vs. patient outcomes. Clinical outcome: bone mineral density. Patient outcome: level of pain.

General health surveys are used to create a population of "normal" values for things to be compared against. They measure several health categories, such as functional status, social role and functioning, as well as an overall assessment of health. Because of this general nature, they can easily be used to compare a subpopulation to the general population, or to compare two different subpopulations. These include (but are not limited to) the Medical Outcomes Study Short-Forms (SF-36, SF-12), the Sickness Impact Profile (SIP), the Nottingham Health Profile (NHP), the EQ-5D (EuroQol), etc.

Specifically, the SF-36 is the most prevalent, most commonly used generalized survey. It measures a patient's health in 8 dimensions, and uses categorical scoring such as: "Excellent, Very Good, Good, Fair, Poor." From this layout, there can be an overall "health" score for the patient, as well as group-specific scoring for a more tailored, specific depiction of the patient's health status (i.e., difficulty moving, etc.). The SF-36, along with many other general surveys, is too long and too time consuming. Because of the time and energy needed to fill them out, they have VERY low response rates, and can have complex scoring (this is where the money is – you can have the test, but you have to send it to us for grading ;) ). In addition, because it is a generalized survey of health, it may not ask questions specific to a population, based on a disease.

Disease-specific surveys, such as the Karnofsky Performance Status Scale and the American Rheumatism Association Functional Classification, started off not as surveys, but as measurements of functional limitation. Originally designed by clinicians, they were later converted to survey form in order to give a more patient-centric view of status. These surveys are much more specific (and sensitive) for their application (but only THAT application – don't use a depression scale for arthritis), and they have MUCH higher response rates, because they are significantly shorter. However, because they are disease specific, they cannot easily be compared to larger groups (the overall population), and they may miss unexpected effects that are outside the scope of their questions. While this is also possible in generalized surveys, the narrower (Dx-specific) scope makes disease-specific surveys more likely to miss such effects than a generalized one.

PROQoLID is an online database of "instruments" (aka surveys) that can be used. They are specific to disease states, age, etc. – that way you can locate the most appropriate and informative survey to use for that patient. As always, check the literature – see how the survey was developed and tested, as well as the research that has applied it (in particular, what health / medical resources). Chances are, if the big boys use it, it's much better than if no one's heard of it – but always make sure it is appropriate for your patients!

Psychometrics -----

Reliability (preferably over 0.8 or 0.9)
This measures how often you 'score the same.' The more reliable a system is, the more often you will get the same score. I.e., if you were to take a test with 100% reliability and put the same answers down, you would always get the same result. This says nothing about whether it grades accurately – only whether the score can be consistently repeated.
Validity
Validity builds off of reliability. Once a 'test' is reliable, the question is whether or not the measurements are accurate. Think of it as a bullseye – a valid test will always get you what you want – a bullseye – and it must be reliable, otherwise the validity could be coincidental. There are many different types of validity – four to focus on are Content (does it have the right questions?), Criterion (do scores of the survey compare to another gold standard?), Construct (the theory is sound, and relates to other measurements), as well as Factorial (items within a dimension relate – pain scores are similar to ache scores, etc.).

REMEMBER: A VALID instrument WILL BE RELIABLE


REMEMBER: A RELIABLE instrument MAY OR MAY NOT BE VALID

Precision (Responsiveness)
If the target changes, will you be able to detect the changes? This is directly
linked to sensitivity and specificity. Increasing both, increases precision.
Decreasing both, decreases precision.
Sensitivity vs. Specificity
Sensitivity: The ability to detect a change when it exists
= True Positive / (True Positive + False Negative)
Specificity: The ability to detect “no change” when there really is no change
= True Negative / (False Positive + True Negative)
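
The same two formulas in code form, as a quick sketch:

    def sensitivity(true_pos: int, false_neg: int) -> float:
        """Ability to detect a change when it exists."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg: int, false_pos: int) -> float:
        """Ability to detect 'no change' when there really is no change."""
        return true_neg / (false_pos + true_neg)

    print(sensitivity(90, 10))  # 0.9
    print(specificity(80, 20))  # 0.8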

Believe it or not, Psychometric theory actually works… and is a pain in the ass.
04/17/08
Pharmacoeconomic Guidelines
Text: Chapter 16

Guidelines are necessary for several reasons. Pharmacoeconomics (PE) is new – the use of guidelines can build confidence in it. In addition, without rules, poor methods flourish, allowing many problems to form. Lastly, there is no set regulatory body – hopefully the FDA (most likely) will take this on, but nothing yet – thus guidelines are a form of self-policing. Without them, there is no basis to evaluate studies (especially since they can have opposing conclusions); guidelines prevent "plugging in your own numbers" (bad design = bad results). And most importantly, there are too many choices overall – too many different methods, alternatives, assumptions – a unified groundwork must be present to evaluate them equally and fairly.

Also, the demand for guidelines has increased – this is primarily due to the dramatic rise in the overall number of studies, the importance of money in decisions (and monetary decisions themselves), as well as the poor quality of studies to date, with misused terms, biased titles, and missing important criteria.

Since evaluation of studies is important for decision makers (government, for use of
limited resources, the industry – to prove the value of a product, or MCOs in order to
manage their overall costs), there are some established aspects that are evaluated. These
include measuring costs and outcomes, the types of PE analyses used, perspective, the
use of a fair comparator, inclusion of sensitivity analysis (as well as validity), types and
amount of costs used, outcome measurements, simulation modeling, discounting,
disclosure of finances and investigators, as well as generalizability. These factors are all
common sense – they all ensure that there is a fair and equal comparison, and that the
material used is of sufficient quality and quantity.

BMJ was the first to create guidelines for publishing, using a checklist system. It was a 35-item inventory, described and scored, evaluating the manuscript on three general issues – study design, data collection, and analysis & interpretation.

In 1993, there was a panel on cost-effectiveness that resulted in the 1996 JAMA manuscripts. They established the following guidelines for C/E studies: the limits of C/E and the reference case, the components of the C/E ratio, measuring costs, valuing consequences in the denominator, estimating the effectiveness of the intervention, time preference and discounting, handling uncertainty, as well as reporting guidelines.

An update was published in 1995 in the Annals of Internal Medicine – these were by the "Task Force on Principles for Economic Analysis of Health Care Technology." Conducted by the Leonard Davis Institute, with members from government, industry, and academia, it formed two sets of guidelines: one on researcher independence, and a second on reporting economic outcomes. It defined 7 key areas: research design, protocol objectives, report contents, costs / resources, effectiveness / benefits, data sources, and extrapolation of results.
There are 8 common criteria themes that you must know:
1. Specific, well-defined problem statement
2. Comprehensive list of alternatives considered
3. Study perspective clearly stated
4. Comprehensive list of costs and benefits stated
5. Discounting if and when appropriate
6. Appropriate sensitivity analysis
7. Marginal / incremental ratios
8. Comprehensive presentation of results

As a general rule, we are doing horribly – the weakest areas are stating assumptions, perspective, discounting and ethical considerations, as well as performing sensitivity analysis. Unfortunately for the industry, Rx journals are much weaker (and worse at this) than traditional medical journals. Some numbers on how well we are doing are provided, if you are interested (slide 18).

04/22/08
Managed Care Settings and Formularies
Text Chapter 14; Kaiser Report

Formularies are listings of drugs that are "approved" for use within a setting – sometimes these are nationwide, some are MCO specific, and others can be hospital (or institution) specific. They act successfully to contain drug costs and improve the overall quality of health care.

In 1992, Australia was the first to require C/E data for consideration of a drug for the national formulary. In 1994, Canada and the Netherlands did the same thing with CCOHTA. In 1999, the National Institute for Clinical Excellence (NICE) was founded in the UK; it initiates and conducts evaluations, and it also often makes non-binding recommendations to the National Health Service (NHS). It introduced the NICE review process, which standardized and streamlined the overall method for incorporating drugs into, or modifying, formularies. In 2000, the Academy of Managed Care Pharmacy (AMCP) created the first standard "format" for formulary submissions. Because of their usefulness, formularies are changing from passive to active systems, and have been requesting information on C/E as well as clinical effectiveness.

There are many different formularies – limited, restricted, partially closed, etc. – all of which are variations along the "open – closed" spectrum. The opposite of these is the negative formulary – simply a list of drugs which cannot be prescribed. Open formularies are considered such because it is relatively 'open' or 'easy' to add new drugs. Closed is the exact opposite – very difficult. Drugs are added for their effectiveness as well as their C/E, primarily by the P&T committee. This restricted and structured use of medications reduces costs in many areas, maximizes patient outcomes, and implements treatment guidelines.

With this restriction of drugs, there has been a lot of flak from many individuals. While the P&T committee is responsible for formulating, maintaining, and editing the formulary, it also includes the input of physicians. Despite this, the flak comes from misunderstandings of the nature of a formulary – be it believing cost containment is the only reason, seeing it as a limitation of medical rights, or citing (few, and poor) studies indicating that limiting drug diversity decreases quality or raises costs – all of which are not true. While it is accurate that a formulary's primary goal is cost containment, this same mechanism serves to ensure overall health, as well as to maximize care for all individuals. In addition, drugs are included based on their overall worth – expensive drugs are included if there is evidence for them compared to others. Lastly, of course – there is always an exception to the rule, and formularies are no different.

The decision for anything to be added to the formulary (and where, and at what copay, etc.) is based on a web of many different factors. These include everything from acquisition cost, to C/E, to side effects, to productivity, QOL, DTC advertising – the works. Because of the nature and efficacy of formularies, 90% of all Rx's are filled with formulary drugs. For the exceptions, usually a Prior Authorization (PA) is required – this is a deterrent, used to determine need and rationale. Also, automatic exceptions are always made when necessary – such as for drug allergies, or use of formulary drugs without success.

Modern formularies vary, but a common theme (particularly among the 1200+ Medicare plans) is a 3-tier (now 4-tier) plan. The first tier is the generic medications, and because they cost the least, they carry the lowest level of copay for the patient – be it a percentage or a flat fee. The next step up, Tier 2, is the preferred brand products. These have the next step up in flat fee or percentage, as there is no substitute available – such medications would be new brands that are superior to old medications. Lastly, non-preferred brands are stuck with the highest flat fee or percentage, as they are either not superbly indicated, or too high in cost for the efficacy data to validate. Medications such as Crestor (yes, more effective – however, other generic statins exist) traditionally fall in this category.

The introduction of a 4th tier, the specialty drugs, has been much debated. It was instituted via legislation mandating coverage for non-formulary drugs. This is generally where the very expensive medications, such as chemotherapy, among other things, lie. While it is not always available, patients typically have to pay a percentage of the cost. This creation of a Tier 4 is the exact opposite of the concept of health insurance – it distributes the costs of a few sick patients onto those patients themselves, to keep the premiums down for everyone else – the opposite of what insurers are supposed to be doing.

Formularies do not work by themselves – they require constant updates, be it for therapeutic modifications, new and more cost-effective drugs, etc. – and they can be used alongside guidelines to find the best therapy for the patient – who cares if a drug has a 100% cure rate if NO ONE can use it? Thus, health plans should (and have begun to) request a standardized dossier on each medication for consideration for their formulary.
Dossiers are just that – succinct documents containing a variety of pertinent material, allowing for a more informed choice. There are 5 parts to a recommended dossier, as follows:
1. Product Information
a. Pharmacology and PK – as well as interactions and how to minimize
them.
2. Supporting Clinical and Economic Data
a. Summaries of key clinical studies that are either published or
unpublished, as well as economic evaluations of their results.
3. Impact Model Report
a. A comprehensive disease-based analytical model, built on clinical
trial and economic data, that is tailored to the population of the
plan – its practice patterns, as well as its patient population
4. Clinical Value and Overall Cost
a. All of the preceding data should support this section, which is a
summary stating the expected per-unit product cost, as well as the
plan's expenditures for the product. It should also contain an
argument justifying these expenses, the anticipated effects on
clinical outcomes and HRQOL, and the economic consequences for the
plan and its patients.
5. Supporting Information
a. A bibliography, checklist, and appendices, along with copies of all
clinical and pharmacoeconomic studies, and spreadsheet models of the
presented data.

Dossiers are CONFIDENTIAL! As such, companies should prepare them prior to launch, so that the drug is included on formularies as soon as it is released by the FDA.

An example formulary submission process:


1. 6 months before submission, the manufacturer sends a notice of intention to
submit to a health plan’s formulary manager.
2. The manufacturer schedules presubmission meetings with health plan officials.
3. 2 months before submission, the health plan receives the submission,
including an executive summary, a checklist of requested components, and
justification for incomplete or missing data.
4. The health plan reviews the submission, and may ask for more information.
5. 2 weeks before the P&T meeting, the health plan informs the manufacturer if
the dossier is considered complete. If it is, it will be used. If not, it may be
returned to the manufacturer (with reasons detailed), and they may resubmit.
6. The plan submits a summary of the dossier to the P&T committee, which
presents the arguments for and against.
7. Lastly, the manufacturer receives written notification of the P&T’s
recommendation, any recommendations for restricted access, and if so (or
denied), guidance for reconsideration or appeal.
A number of organizations are utilizing this plan, including several BCBS units, Cardinal
Health, Mayo Health Plan, as well as some PBMs including Advance PCS, Argus, and
Wellpoint. The DoD and several state Medicaid programs do as well. (NOT NY YET)

It is possible that the US could move towards a centralized formulary, based on evidence-based decision making in health care… but the FDA sucks, and has not released guidelines mandating the use, or even the production, of dossiers for those who request them.

04/24/08
Pharmacoeconomics and Clinical Trials; Pharmacogenomics
Text chapters 10, 11
