It is difficult to contest the excellence of warfarin for risk reduction in atrial fibrillation (average stroke reduction approximately 66%; mortality reduction 26%). Nonetheless, maintenance therapy with warfarin is complicated by drug interactions, dietary interactions, and the foibles of inconsistent medication administration in even the best of hands. The consequences of significant bleeding — especially intracranial bleeding — have spawned a variety of plans for initial dosing and subsequent dose adjustment, each method seeking the most prompt and safe pathway for achieving a therapeutic level of anticoagulation while avoiding over-anticoagulation.

A good deal of the variation in response to warfarin is genetic, driven particularly by polymorphisms in CYP2C9 (which metabolizes warfarin) and VKORC1 (which encodes its target enzyme). Identifying an individual's genotype at these loci could, at least in theory, help predict the warfarin dose necessary to achieve therapeutic anticoagulation.

First, the good news. In the clinical trial by Pirmohamed et al (n = 455), time in the therapeutic range was significantly greater with genotype-guided dosing (67.3%) than with traditional warfarin dosing (60.3%). Additionally, the incidence of excessive anticoagulation was lower in the group whose dosing was based on genotyping.

The bad news, however, is that in the same issue of the New England Journal of Medicine, another trial (n = 1015) comparing pharmacogenetic-based warfarin dosing with standard dosing found no difference between the two approaches (N Engl J Med 2013;369:2283-2293). Counterintuitively, among black patients in the latter trial, time in the therapeutic INR range was actually lower in the pharmacogenetic-based group.

For now, you and I do not have to sort this out, since current guidelines do not advocate routine genetic testing. The status of pharmacogenetic warfarin management remains controversial.