After hearing and assimilating this program, the listener will be better able to: (1) Increase his/her basic knowledge of important advances in medicine; (2) Identify a broad range of clinical research reported in the medical literature; (3) Synthesize research findings through one-on-one interviews with authors and editorialists; (4) Integrate new treatments reviewed in the summaries into current practice; (5) Challenge oneself with thoughtful, clinically relevant questions.

Disclosure

In adherence to the ACCME Standards for Commercial Support, Audio-Digest requires all faculty and members of the planning committee to disclose relevant financial relationships within the past 12 months that might create any personal conflicts of interest. Any identified conflicts were resolved to ensure that this educational activity promotes quality in health care and not a proprietary business or commercial interest. For this program, Dr. Anthony Komaroff reported that he receives grant and/or research support from the Government of Portugal Science & Technology Foundation and sits on the editorial boards of Harvard Medical School and Harvard Health Publications (where he serves as Editor-in-Chief). Dr. Allan Brett and the planning committee members reported that they had nothing to disclose.

EBOLA VACCINE PHASE 1

Several candidate vaccines aimed at preventing Ebola virus disease are being explored. On the heels of nonhuman primate infection-model studies, researchers have conducted a phase 1 trial of a replication-defective recombinant chimpanzee adenovirus type 3-vectored Ebola vaccine. Their findings appear on the website of the New England Journal of Medicine (http://dx.doi.org/10.1056/NEJMoa1410863). Twenty healthy adults from the Washington, DC, area got single intramuscular doses of a vaccine that included either 2×10^10 particle units (that was group 1) or 2×10^11 particle units (that was group 2).
Safety monitoring and an evaluation of immunogenic responses were conducted during the first month after vaccination. There were no serious side effects in either group. A minority of the patients had prolonged activated partial-thromboplastin time and either neutropenia or leukopenia (all were asymptomatic). The induction of an antiphospholipid antibody was seen in the patients whose activated partial-thromboplastin time was prolonged. All of the patients in both groups developed antibody responses; group 2 had a significantly higher response than group 1. T-cell responses were also more prevalent in group 2 than in group 1 for both CD4 and CD8 cells. These encouraging findings support the expectation that a vaccine will ultimately be effective in lowering the incidence of Ebola virus disease. To stem the ongoing epidemic in West Africa, the start of this trial was accelerated in coordination with the Food and Drug Administration.

CONFIRMING THE BENEFITS OF EARLY TREATMENT OF HIV

Although most HIV-positive patients who get antiretroviral therapy achieve undetectable viral loads, many don't attain normal CD4-cell counts. To see how the timing of the start of antiretroviral therapy affects CD4-cell recovery, researchers evaluated immune reconstitution in 1200 HIV-positive patients who had achieved undetectable viral loads (the median CD4-cell count before antiretroviral therapy was 358 cells/mm³). CD4-cell normalization was defined as achieving a count of higher than 900 cells/mm³. Details appear on the website of JAMA Internal Medicine (http://dx.doi.org/10.1001/jamainternmed.2014.4010). After virologic suppression for a median of 5 years, only a third of the patients achieved CD4-cell normalization. Normalization was significantly more common in the patients who had started antiretroviral therapy within a year of their estimated date of seroconversion than in those who had started antiretroviral therapy later.
Compared with the later initiators, the early initiators had lower rates of developing AIDS and higher rates of response to vaccination against hepatitis B. These findings add to accumulating evidence that an earlier start of antiretroviral therapy in HIV-positive patients is associated with higher CD4-cell counts and improved immune function. The finding that earlier antiretroviral therapy and CD4-cell normalization are beneficial is consistent with recent findings of a residual risk for AIDS-related complications in patients with CD4-cell counts below 750 cells/mm³ (www.jwatch.org/na31991). Another previously documented advantage of earlier therapy is a lower rate of HIV transmission. An editorialist highlights (http://dx.doi.org/10.1001/jamainternmed.2014.4004) the need for novel adjunctive measures aimed at restoring full immune function for patients who are diagnosed or treated too late to achieve normal CD4-cell counts.

NUTRITION IN ACUTE PANCREATITIS: HAVE WE COME FULL CIRCLE?

Current guidelines recommend the early use of nasoenteric tubes with polymeric or semi-elemental formulas as the standard of care in patients with acute pancreatitis that's predicted to be severe (http://dx.doi.org/10.1038/ajg.2013.218). The early use of enteral nutrition has been considered important for stabilizing the gut mucosal barrier and preventing bacterial translocation, thereby lowering the risk for serious infections and death. In a study in the November 20, 2014, New England Journal of Medicine (http://dx.doi.org/10.1056/NEJMoa1404393), researchers in the Netherlands randomized 200 patients with predicted severe acute pancreatitis either to the early placement of a nasoenteric tube and the start of tube feedings within 24 hours or to the introduction of a regular oral diet at 72 hours.
The rates of death, serious infection (namely, infected pancreatic necrosis, bacteremia, or pneumonia), admission to the intensive care unit, the need for mechanical ventilation, and new-onset organ failure were similar in the two groups. Two thirds of the oral-diet patients tolerated the introduction of oral nutrition, and the time to tolerating a full oral diet was shorter in the oral-diet group. Contrary to guidelines and generally accepted paradigms, these findings tell us that offering regular oral intake at 72 to 96 hours after admission is an acceptable strategy in patients with predicted severe acute pancreatitis, with tube feeding reserved for those who don't tolerate oral intake. We don't know whether longer delays are reasonable. The approach to providing nutrition in acute pancreatitis has now come full circle: from oral to parenteral to enteral and back to oral again.

GENES INFLUENCE COMPOSITION OF THE GUT MICROBIOME

An explosion of evidence in the past 5 years links the microbial composition of the gut to obesity in people. Obesity is also influenced by many human genes, and the gut microbiome is clearly influenced by the environment. In a multicenter study in the November 6, 2014, issue of Cell (http://dx.doi.org/10.1016/j.cell.2014.09.053), researchers asked whether human genes themselves might influence the composition of the gut microbiome. The researchers collected more than 1000 fecal samples from monozygotic twin pairs, dizygotic twin pairs, and unrelated individuals. They determined the presence and the amount of nearly 10,000 bacterial groupings. The microbiota of monozygotic twins were much more similar than were the microbiota of dizygotic twins or unrelated people. For some bacterial groups, like Bacteroidetes, the environmental influences were much greater than the influences of human genes. For other groups, like Christensenellaceae, the opposite was true; this group also appeared to protect against obesity.
When germ-free mice were inoculated with stool from lean people (whose stool was rich in Christensenellaceae), the mice stayed lean, whereas when germ-free mice were inoculated with stool from obese people (whose stool was poor in Christensenellaceae), the mice became obese. When Christensenellaceae was added to human stool that was initially devoid of this group, the altered stool also protected against obesity. This huge study of human gut microbiota reinforces many recent studies that have shown a potent effect of the gut microbiome on obesity in people. Plus, it shows that the genetic inheritance of obesity might be attributable partially to the influence of human genes on the gut microbiome.

GROWING HUMAN SMALL INTESTINE USING PLURIPOTENT STEM CELLS

Pluripotent human stem cells, whether embryonic stem cells or induced pluripotent stem cells, can replace cells that have been killed by disease. An example is the replacement of myocardial cells that have been killed by infarction. But using stem cells to create whole organs is a much bigger challenge: A whole organ has many different types of cells and requires a vascular and nerve supply to function. In a multicenter study in the November 2014 issue of Nature Medicine (http://dx.doi.org/10.1038/nm.3737), researchers seeded a cylindrical structure with either human embryonic stem cells or human induced pluripotent stem cells and exposed the cells to growth factors that encourage the development of intestine. During a month, small cylinders of intestinal epithelium and mesenchyme formed. These tiny so-called human intestinal organoids were implanted into the kidney capsules of immunocompromised mice. During the next 6 weeks, the organoids grew 50- to 100-fold larger, with crypt-villus architecture and underlying laminated submucosal layers, all of human cell origin. The organoids made brush-border digestive enzymes and were capable of absorbing gut contents (surgically placed into the ectopic organoids) into the bloodstream.
Blood supplies for the organoids developed in the mice, and the organoids were responsive to systemic signals. Already, simple designer organs (like bladders and tracheas) are being created from stem cells and are being used in people. This study shows that growing more complex organs is also feasible. And, if induced pluripotent stem cells are used, a person might get a replacement organ created from their own genetically identical cells.

PROBIOTICS, PREBIOTICS, AND SYNBIOTICS FOR IBS

Some patients with irritable bowel syndrome use probiotic therapy (ingesting live or attenuated microorganisms), either on their own or as recommended by clinicians. Alternatives to probiotics include prebiotics (which are nondigestible food ingredients that stimulate the growth or the activity of intestinal bacteria) and synbiotics (the combination of prebiotics and probiotics). In a meta-analysis in the October 2014 American Journal of Gastroenterology (http://dx.doi.org/10.1038/ajg.2014.202), researchers examined the efficacy of these substances in patients with irritable bowel syndrome or chronic idiopathic constipation. For irritable bowel syndrome, the proportion of patients with overall improvement in their symptoms was significantly higher with probiotics than with placebo; seven patients had to be treated to benefit one patient. Synbiotics seemed to be of no clear benefit, and there were no trials of prebiotics that were eligible for inclusion in the analysis. For chronic idiopathic constipation, stool frequency increased from baseline with probiotics and with synbiotics, but not with prebiotics. Probiotic therapy seems to be worthwhile for a minority of patients with irritable bowel syndrome. But we don't know whether the responses persist long term, because most of the studies in this meta-analysis lasted for only a few weeks or months. Plus, the bacterial strains used in these studies were quite heterogeneous.
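A number needed to treat like the one cited above is simply the reciprocal of the absolute difference in response rates between the treatment and placebo groups. A minimal sketch (the response rates below are illustrative, not figures from the meta-analysis):

```python
def nnt(event_rate_treated: float, event_rate_control: float) -> float:
    """Number needed to treat = 1 / absolute risk difference."""
    arr = event_rate_treated - event_rate_control
    if arr <= 0:
        raise ValueError("treatment shows no benefit over control")
    return 1.0 / arr

# Illustrative rates: 45% improve on probiotics vs. 30% on placebo.
print(round(nnt(0.45, 0.30)))  # -> 7
```

A 15-percentage-point absolute difference yields an NNT of about 7, matching the figure quoted for probiotics in irritable bowel syndrome.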
For patients with chronic idiopathic constipation, probiotics and synbiotics might be beneficial, but they need further study. Although probiotics are generally considered to be safe, the evidence is sparse on long-term use and use by immunocompromised patients.

PROTON-PUMP INHIBITORS MIGHT BE ASSOCIATED WITH PNEUMONIA IN STROKE PATIENTS

In some hospitals, patients with stroke are routinely given acid-suppressive therapy with proton-pump inhibitors or histamine-2 receptor antagonists for prophylaxis, despite no evidence to support this practice. In fact, acid-suppressive therapy could be harmful, because there may be an association with an increased risk for pneumonia. In a retrospective study in the November 2014 Annals of Neurology (http://dx.doi.org/10.1002/ana.24262), researchers asked whether acid-suppressive therapy in patients with stroke was associated with an excess risk for hospital-acquired pneumonia. Among 1700 patients admitted with stroke (and hospitalized for at least 2 days) during a 10-year period, 80% got acid-suppressive therapy (usually proton-pump inhibitors). The incidence of hospital-acquired pneumonia was 17%. In a multivariate analysis adjusting for 30 potential confounders (including mechanical ventilation), the use of proton-pump inhibitors was associated with a significantly higher risk for pneumonia; a smaller elevated risk with the use of histamine-2 receptor antagonists wasn't significant. Although residual confounding could be responsible for the finding that using acid-suppressive medications in patients with stroke is associated with hospital-acquired pneumonia, these findings carry weight, because no proven benefit outweighs the potential harm. Another potential harm of routine acid suppression is a predisposition to Clostridium difficile infection. Acid-suppressive drugs shouldn't be prescribed to hospitalized stroke patients (or any hospitalized patient, for that matter) in the absence of an evidence-based indication.
DOES CAROTID ARTERY STENOSIS PREDICT PERIOPERATIVE STROKE AFTER NONCARDIAC SURGERY?

Does carotid artery stenosis predispose patients who undergo noncardiac, noncarotid surgery to perioperative stroke? To find out, researchers at Cleveland Clinic performed a retrospective study, the results of which appear in the November 2014 issue of Anesthesiology (http://dx.doi.org/10.1097/ALN.0000000000000438). During a 5-year period, 2100 patients who underwent noncardiac surgery had carotid duplex ultrasound performed within 6 months before or a month after their surgeries. Forty percent of the patients had at least one carotid stenosis estimated at greater than 50% (as measured by internal carotid artery peak systolic velocity), and 13% of the patients had greater than 70% stenosis. Neither of these cutoffs for carotid artery stenosis was associated with an elevated risk for in-hospital postoperative stroke or 30-day all-cause mortality. Plus, carotid artery stenosis wasn't associated with postoperative myocardial injury. This retrospective study on the link between carotid artery stenosis and stroke after noncardiac surgery has several limitations: Subtle perioperative neurological or cardiac events might have been missed; we don't know exactly why carotid ultrasound was ordered for these patients; and a small group of patients who underwent carotid revascularization after ultrasound (and before noncardiac surgery) was excluded. But these results suggest, albeit indirectly, that screening for carotid artery stenosis before noncardiac surgery has no value.

NSAID USE IS ASSOCIATED WITH ELEVATED RISK FOR BLEEDING IN PATIENTS WITH AF RECEIVING ANTITHROMBOTIC THERAPY

Nonsteroidal anti-inflammatory drug use is associated with an excess risk for bleeding in patients who take anticoagulant medications, but researchers haven't specifically examined this issue in patients with atrial fibrillation. Until now.
In an observational cohort study in the November 18, 2014, Annals of Internal Medicine (http://dx.doi.org/10.7326/M13-1581), researchers in Denmark used national registries to evaluate the association between NSAID use and serious bleeding (like intracranial or gastrointestinal bleeding) and thromboembolism (that is to say, stroke or systemic arterial embolism) in more than 150,000 patients with a median age of 75 and nonvalvular atrial fibrillation who were getting antithrombotic therapy. During a median follow-up of 6 years, 40% of the patients were prescribed NSAIDs; there were about 17,000 serious bleeds and 19,000 thromboembolic events. In analyses adjusted for potentially confounding variables, the patients taking NSAIDs, compared with those who didn't take the drugs, had increased risks for serious bleeding and for thromboembolism. These risks were independent of the type of NSAID (like selective cyclooxygenase-2 inhibitors or nonselective NSAIDs) and of CHA2DS2-VASc scores (www.jwatch.org/jw201103030000002). This retrospective cohort study of patients with atrial fibrillation who were getting antithrombotic therapy showed an association between concomitant NSAID use and an increased risk for serious bleeding. Although this finding isn't surprising, the study also showed an association between NSAIDs and an increased risk for arterial thromboembolism, a finding that parallels the previously shown association between NSAIDs (selective and nonselective) and the risk for recurrent myocardial infarction. Given these findings, alternatives to oral NSAIDs might include acetaminophen (but that drug can potentiate warfarin), topical NSAIDs, opioids, or disease-specific alternatives (like steroids or colchicine for gout).

ARE NEW ORAL ANTICOAGULANTS SAFER THAN VITAMIN K ANTAGONISTS?
The new direct oral anticoagulants (namely, dabigatran [trade name: Pradaxa], rivaroxaban [trade name: Xarelto], apixaban [trade name: Eliquis], and edoxaban [which isn't yet approved by the Food and Drug Administration]) have been compared with vitamin K antagonists (namely, warfarin and similar agents) for acute venous thromboembolism in six recent phase III clinical trials encompassing nearly 30,000 patients. A meta-analysis of those trials, which appears in the September 18, 2014, issue of Blood (http://dx.doi.org/10.1182/blood-2014-04-571232), yielded the following results: Recurrent venous thromboembolism was seen at similar rates (around 2%) in the patients treated with direct oral anticoagulants and vitamin K antagonists. Compared with vitamin K antagonists, direct oral anticoagulants were associated with a 40% reduction in major bleeding, including significantly less intracranial bleeding, fatal bleeding, and clinically relevant non-major bleeding; smaller declines were seen in gastrointestinal bleeding. In subgroup analyses, fewer recurrent venous thromboemboli were seen with direct oral anticoagulants than with vitamin K antagonists in patients 75 or older and in those with cancer; there were no significant differences in recurrent venous thromboemboli with direct oral anticoagulants vs. vitamin K antagonists in patients with deep vein thrombosis, pulmonary embolism, a body weight of 100 kg or higher, or moderately impaired renal function (defined as a creatinine clearance between 30 and 49 mL/min/1.73 m²). There were no differences in bleeding with direct oral anticoagulants vs. vitamin K antagonists in patients with cancer, and significantly less bleeding was seen with direct oral anticoagulants in older patients and in those with impaired renal function. The new direct oral anticoagulants are as effective as vitamin K antagonists in patients with venous thromboembolism and are associated with less major bleeding.
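The renal cutoffs in these trials rest on estimated creatinine clearance, which is commonly approximated at the bedside with the Cockcroft-Gault formula (which yields mL/min; it is not body-surface-area indexed). A minimal sketch, with a hypothetical example patient:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 15% reduction for women

# Hypothetical patient: an 80-year-old woman, 60 kg, serum creatinine 1.4 mg/dL.
print(round(cockcroft_gault(80, 60, 1.4, female=True)))  # -> 30
```

A patient like this sits right at the 30 mL/min threshold below which the new agents are contraindicated, which is why a small change in weight or creatinine can change drug eligibility.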
Their drawbacks include the lack of a specific antidote, contraindication in those with a creatinine clearance of less than 30 mL/min/1.73 m², and substantial cost. We need clinical trials to see whether there are significant differences among these new anticoagulants.

HOME-BASED INTERVENTION IMPROVES ASTHMA CONTROL IN ADULTS

Patients in lower socioeconomic groups are disproportionately affected by asthma, and many adult patients don't know how to self-manage their disease. In a study on the website of JAMA Internal Medicine (http://dx.doi.org/10.1001/jamainternmed.2014.6353), researchers randomized nearly 400 lower-income adults with uncontrolled asthma to either home-based asthma management or usual care. The home-based program included five in-home visits during 7 months from trained community health workers who provided social services support and asthma education (namely, asthma action plans, spacers, inhalers, and allergen avoidance measures). During the yearlong study, the home-based patients (compared with the controls) averaged 2 more symptom-free days during each 2-week period (the number needed to treat to gain 2 symptom-free days per 2 weeks was 7); the intervention group also scored significantly higher on a quality-of-life measure. In both of the groups, patients averaged 1 fewer urgent care visit during the study than during the past year; the difference between the two groups wasn't significant. With every outpatient visit for asthma, clinicians should give their patients asthma action plans and instruct them on inhaler use and trigger avoidance. Unfortunately, many patients don't use their controller medications regularly or correctly and can't verbalize their management plans. Rather than stepping up to increasingly expensive asthma medications, home- and school-based interventions might be better and less expensive ways to improve asthma control.
As the researchers point out, the per-patient cost of the home-based intervention was US$1300, which is less than the cost of a 1-year supply of a typical inhaled steroid.

DRUG TREATMENTS FOR DIABETIC NEUROPATHY: A META-ANALYSIS

In a systematic review and network meta-analysis in the November 4, 2014, Annals of Internal Medicine (http://dx.doi.org/10.7326/M14-0511), researchers compared the effectiveness of drug therapies for painful diabetic neuropathy. They discovered 70 relevant randomized trials involving 30 medications, with nearly 13,000 patients. The key findings were as follows: Serotonin-norepinephrine reuptake inhibitors, topical capsaicin, tricyclic antidepressants, and anticonvulsants all reduced pain, compared with placebo. As a group, serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants reduced pain more than did anticonvulsants and capsaicin. Very few studies extended beyond a duration of 3 months. For individual drugs, carbamazepine, venlafaxine, duloxetine, amitriptyline, and pregabalin were statistically better than placebo. There are few head-to-head comparisons of individual drugs, and most showed insignificant differences between the drugs, but pregabalin was inferior to venlafaxine and duloxetine. Side-effect profiles differ among these drug classes; adverse events were problematic with virtually all of them. Although clinicians have many choices of drug therapies for their patients with diabetic neuropathy, this analysis reveals weaknesses in the evidence. Generally, the studies were short-term, and most of these drugs have substantial central nervous system side effects. Plus, comparative results are somewhat skewed by the size of the randomized trials: For example, although the average reduction in pain was slightly more with gabapentin than with pregabalin, only pregabalin's results were statistically significant, because its industry-sponsored trials were much larger than those of gabapentin.
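That last point, that a similar-sized effect can be significant in a large trial yet nonsignificant in a small one, is easy to see numerically: the standard error of a mean difference shrinks with the square root of the sample size. A minimal sketch (the effect size, standard deviation, and trial sizes below are illustrative, not figures from the meta-analysis):

```python
import math

def z_statistic(mean_diff: float, sd: float, n_per_arm: int) -> float:
    """Two-sample z statistic for a mean pain-score difference,
    assuming equal arms and a common standard deviation."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    return mean_diff / se

# The same 0.5-point pain reduction (SD 2.5) at two trial sizes:
print(round(z_statistic(0.5, 2.5, 50), 2))   # small trial: z = 1.0, not significant
print(round(z_statistic(0.5, 2.5, 400), 2))  # large trial: z = 2.83, significant at 0.05
```

The identical effect crosses the conventional significance threshold (z > 1.96) only in the larger trial, which is how pregabalin's larger industry-sponsored trials could reach significance while gabapentin's smaller ones did not.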
Currently, only duloxetine and pregabalin are approved by the Food and Drug Administration for treating patients with diabetic neuropathy.

LONG-TERM OUTCOMES OF SINGLE CORTICOSTEROID INJECTIONS FOR TRIGGER FINGER

The estimated lifetime risk for trigger finger is about 3% in the general population and as high as 10% in patients with diabetes. Treatment options include observation, splinting, nonsteroidal anti-inflammatory drugs, and corticosteroid injections. In a retrospective review in the November 19, 2014, Journal of Bone & Joint Surgery (http://dx.doi.org/10.2106/JBJS.N.00004), researchers report the long-term outcomes of first-time injections for trigger finger in 370 patients (a quarter with diabetes) who were followed for up to 10 years. Treatment was considered to be successful if the patients didn't need second injections or a surgical release of their trigger fingers. Overall, nearly half of the patients benefited after a single injection. Sex and the number of trigger fingers at presentation correlated with treatment success: At 10 years, the success rates among the patients with single trigger fingers were higher in women than in men, but among both women and men with multiple trigger fingers, success rates were similar. Most treatment failures happened within 2 years of the initial injections. Age and diabetes status didn't predict outcome. In this retrospective study, patients with trigger finger were followed for as long as 10 years after an initial corticosteroid injection; until now, data on long-term treatment outcomes have been sparse. These findings give clinicians and their patients an estimate of the likelihood (around 45%) that a single injection will be effective in the long term. Plus, a patient with a good result after 2 years has an excellent chance of sustaining that outcome.
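That last observation, that most failures occur early, can be framed as a conditional probability: the chance of remaining treatment-free at 10 years given success at 2 years is S(10)/S(2). A minimal sketch (only the roughly 45% 10-year figure comes from the study; the 2-year figure below is a hypothetical assumption chosen because most failures occurred within 2 years):

```python
def conditional_success(s_late: float, s_early: float) -> float:
    """P(success at the late time point | success at the early time point)
    = S(late) / S(early), for any monotone survival-style curve."""
    return s_late / s_early

# Study's 10-year success was ~45%; assume (hypothetically) that 50% were
# still treatment-free at 2 years, since most failures happened by then.
print(round(conditional_success(0.45, 0.50), 2))  # -> 0.9
```

Under that assumption, a patient doing well at 2 years would have roughly a 90% chance of still doing well at 10 years, which is the intuition behind the closing sentence above.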
MORE SUPPORT FOR USING STEROIDS IN PATIENTS WITH REFRACTORY SEPTIC SHOCK

The most recent Surviving Sepsis Guidelines (http://dx.doi.org/10.1097/CCM.0b013e31827e83af) suggest that 200 mg/day of hydrocortisone should be used only when volume resuscitation and vasopressors can't restore hemodynamic stability. In a study in the November 2014 issue of Critical Care Medicine (http://dx.doi.org/10.1097/CCM.0000000000000518), researchers looked at data on patients who were treated for septic shock at 30 hospitals in Canada, the United States, and Saudi Arabia; 1800 patients who got low-dose steroids (less than 80 mg/day of a prednisone equivalent) were propensity-matched with a comparable group of patients who didn't get low-dose steroids. Patients were excluded if they died within 48 hours of admission to an intensive care unit or got steroids later than 48 hours after the documentation of shock. Mortality was similar in the two groups. In subgroup analyses, 30-day mortality in the sickest patients (those with the highest Acute Physiology and Chronic Health Evaluation II scores) was significantly lower in the steroid group. Unlike in past studies, the administration of steroids wasn't associated with a shorter time to the resolution of shock. Among the patients who were the least sick, those who got steroids vs. those who didn't had a nonsignificant trend toward higher mortality. The pendulum continues to swing back and forth on whether patients with septic shock should be treated with corticosteroids. Although we still don't have a definitive answer, this large retrospective study supports the common practice of giving steroids to the sickest patients with refractory septic shock. This is a reasonable practice in patient populations with very high mortality.

IS HOSPITAL MEDICINE A LOW-RISK SPECIALTY?

The hospitalist model introduces many care handovers and transient doctor-patient relationships, factors that would seemingly increase the risk for malpractice claims.
In a study in the December 2014 Journal of Hospital Medicine (http://dx.doi.org/10.1002/jhm.2244), researchers evaluated more than 50,000 medical malpractice claims from 20 different insurance programs covering more than 3000 healthcare organizations in the United States, representing approximately 30% of closed claims from 1997 to 2011. Only 270 medical malpractice claims were filed against internal medicine hospitalists (who were defined as internists who spend more than half of their time on inpatient care), corresponding to a rate of 0.5 claims per 100 physician coverage-years. This rate was significantly lower than the rate of claims against nonhospitalist internal medicine physicians (1.9 claims per 100 physician coverage-years), emergency medicine physicians (3.5 claims per 100 physician coverage-years), and general surgeons (4.7 claims per 100 physician coverage-years). Among the claims made against hospitalists, a third resulted in payments to plaintiffs (the median amount was US$240,000). The researchers found no significant difference by specialty in the percentage of cases that resulted in payments. Although almost half of the malpractice claims against hospitalists were related to clinical judgment in diagnosis or consultation, a substantial number of claims were related to clinician communication and documentation. The lower rate of malpractice claims against internal medicine hospitalists could reflect hospitalists' availability to patients, their knowledge of hospital systems, and their greater inpatient care experience. Inpatient providers might be able to mitigate risk further by focusing on communication with patients and consultants and on clinical documentation to accurately reflect the care provided.

General Medicine 2014: Year in Review from NEJM Journal Watch Audio

A NEW ERA UNFOLDS IN HCV TREATMENT

The newest drugs to combat hepatitis C virus infections are very effective and very expensive.
Just as 1996 was a watershed year in treating HIV infections, so 2014 has seen a sea change in treating hepatitis C virus (HCV) infections, with unwieldy, often ineffective treatments suddenly giving way to convenient, well-tolerated, remarkably effective alternatives. Two components of these new regimens were approved late in 2013: Simeprevir (trade name: Olysio), a second-generation protease inhibitor; and sofosbuvir (trade name: Sovaldi), a polymerase inhibitor. Although both were released only for use in combination with interferon alfa and ribavirin, several industry-sponsored studies published this year suggest that both of these toxic old drugs are now happily obsolete. In a trial in which researchers enrolled only patients with genotype 1 infections (the most common genotype in the U.S. and the most difficult to treat), 167 patients were randomized to receive the combination of simeprevir and sofosbuvir with or without ribavirin. More than 90% of patients achieved sustained virologic responses, regardless of prior treatment experience, subtype of virus, presence of cirrhosis, or receipt of ribavirin. Serious adverse events were seen in only 2% of patients (www.jwatch.org/na35407). In three other studies, about 2000 patients with genotype 1 infections received sofosbuvir in combination with ledipasvir, an oral drug that inhibits a different viral protein. Again, response rates exceeded 90%, even for treatment-naive patients randomized to only 8 weeks of therapy. Neither history of treatment failure nor presence of cirrhosis significantly affected treatment results, and side effects of the drugs were uniformly tolerable (www.jwatch.org/na34443). The combination of ledipasvir and sofosbuvir was approved by the FDA at the end of 2014 as a single daily pill (trade name: Harvoni). Additional oral, well-tolerated, effective drugs are still in the pipeline. Despite these triumphs, serious questions about treating patients with HCV infections remain.
Most stem from treatment cost: Complete courses of the new combinations can cost US$100,000 or more (www.jwatch.org/na34443). These charges fuel an ongoing debate about who should prescribe the drugs, who should receive them, and the optimal duration of treatment. At present, many insurers are releasing the new drugs only to physicians with documented experience in treating HCV and, even then, only for patients with clear HCV-related liver damage. Whether market competition will ease these constraints in coming years remains to be seen.

FECAL DNA TESTING: FINALLY READY FOR PRIME TIME?

Patients who seek noninvasive options for colorectal cancer screening should be more comfortable with this new test. Many large studies have confirmed the value and cost-effectiveness of screening for colorectal cancer. The U.S. Preventive Services Task Force recommends fecal occult blood testing and two invasive options (flexible sigmoidoscopy and colonoscopy) to accomplish this. Yet at least one third of eligible U.S. residents don't get screened: Flexible sigmoidoscopy is rarely offered, and many patients decline colonoscopy because they perceive it as too invasive or unpleasant. Fecal occult blood testing, however, is not very sensitive and misses both precancerous lesions and cancers. What we need is a more sensitive noninvasive test. During the past 30 years, researchers have elucidated the genetic changes that lead to colon cancer. Because cancerous and precancerous cells are shed in feces, much research has focused on whether detecting genetic changes in stool specimens could serve as a screening test. In 2014, a major study showed that a particular DNA test that evaluated multiple different genetic changes linked to colon cancer was significantly more sensitive than fecal occult blood testing. Stool was collected from nearly 10,000 patients who were scheduled to undergo colonoscopy.
The stool was evaluated both by the new DNA test (which also included a hemoglobin immunoassay) and by a currently marketed fecal immunochemical test (FIT). Using the gold standard of colonoscopy, colorectal cancer was detected in 0.7% of participants, and advanced precancerous lesions were found in 7.6%. The sensitivity of the fecal DNA assay was 92% for cancer overall and 93% for early cancers (stages I–III), compared with 74% and 73%, respectively, for FIT. The sensitivity of the DNA assay was 69% for high-grade dysplasia and 42% for large sessile serrated polyps, compared with 46% and 5%, respectively, for FIT. The specificity of FIT was somewhat better than that of the fecal DNA assay (96% vs. 90%; www.jwatch.org/na33982 ). Colonoscopy is more sensitive and specific than a fecal DNA assay, and precancerous lesions can be removed during colonoscopy. For those reasons, colonoscopy remains the best option. However, if the availability of a sensitive noninvasive screening test increases the number of patients who consent to screening, by attracting those who prefer not to undergo invasive testing, it will be an important advance. The manufacturer of this FDA-approved test lists the maximum out-of-pocket cost at US$599; however, the cost-effectiveness of the test remains to be determined.
HAS RISK FOR CONTRAST-INDUCED NEPHROPATHY BEEN EXAGGERATED?
In a well-controlled observational study, contrast-enhanced computed tomography wasn't associated with kidney injury.
Archival interview with: Jennifer McDonald, PhD & Robert McDonald, MD, PhD ( www.audio-digest.org/editorial/JW/2014/JW2510_McDonald.mp3 )
Conventional wisdom, supported largely by inadequately controlled observational studies, holds that patients with even modestly abnormal renal function are vulnerable to contrast-induced nephropathy. Presumed causes include direct renal toxicity and intrarenal hemodynamic effects of contrast media. 
In 2014, Mayo Clinic researchers published two studies that challenged the importance of contrast-induced nephropathy after computed tomography (CT) scanning. The researchers drew from a database of patients who underwent chest, abdominal, or pelvic CT scanning. Using propensity scoring, they created two cohorts, each with 6245 patients: One group had undergone intravenous contrast-enhanced CT, and the other had undergone unenhanced CT, but the two groups otherwise were remarkably well matched in important demographic and clinical variables, including prescan estimated glomerular filtration rate (GFR) and comorbidities. The overall incidence of a ≥0.5 mg/dL rise in serum creatinine following CT scanning was 5% in both groups. Even among patients with prescan GFR <30 mL/min/1.73 m², incidences of postscan declines in renal function were similar in the contrast-enhanced and unenhanced groups ( www.jwatch.org/na34257 ). Because a small rise in serum creatinine could be considered a surrogate endpoint, the researchers also examined two important clinical outcomes (death and need for dialysis) in a separate analysis. They found that the 30-day incidence of these outcomes was not higher in the contrast-enhanced group than in the unenhanced group ( www.jwatch.org/na36327 ). Can we make sense of these findings, given how frequently clinicians see bumps in serum creatinine after patients receive intravenous contrast? The most likely explanation is that the random fluctuations in serum creatinine levels commonly seen in hospitalized patients are misinterpreted as contrast-related when, in fact, they reflect unrelated effects of fluctuating hydration, fluctuating blood pressure, other nephrotoxic drugs, or comorbid diseases. Because a randomized trial to settle this issue is unlikely, well-controlled observational studies like these will have to suffice. 
The researchers acknowledge that unmeasured confounding variables could have biased their results, but they believe that intravenous contrast should not be withheld when it is deemed essential for accurate CT diagnosis. Two final caveats: Patients with severe renal impairment (i.e., GFR <15 mL/min/1.73 m²) probably were underrepresented in these studies, and the results are not necessarily relevant to intra-arterial contrast.
SOME CHANGES IN HYPERTENSION TREATMENT RECOMMENDED IN JNC 8
The Joint National Committee recommends that patients older than 60 be treated for hypertension only when systolic blood pressure exceeds 150 mmHg.
Archival interview with: Donald DiPette, MD ( www.audio-digest.org/editorial/JW/2014/JW2502_DiPette.mp3 )
The Joint National Committee (JNC) 8 guideline addresses blood pressure (BP) thresholds at which drug therapy should be initiated, BP targets during hypertension treatment, and choice of antihypertensive agents ( www.jwatch.org/NA33228 ). For patients younger than 60, JNC 8 specifies that drug therapy should be considered when diastolic BP is >90 mmHg or systolic BP is >140 mmHg. For older patients (age ≥60), the diastolic BP threshold remains >90 mmHg, but the systolic BP threshold is >150 mmHg. Among people with diabetes or chronic renal disease, the threshold to initiate drug therapy is 140/90 mmHg, and the treatment goal is <140/90 mmHg. In black patients, initial drug choices include thiazide-type diuretics or calcium-channel blockers (CCBs); in nonblack patients, initial drug choices were expanded (relative to JNC 7 recommendations) to include not just thiazide-type diuretics, but also CCBs, angiotensin-converting-enzyme (ACE) inhibitors, and angiotensin-receptor blockers (ARBs), but not β-blockers. JNC 8 recommends that patients with chronic renal disease generally should be prescribed ACE inhibitors or ARBs. 
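Because the JNC 8 thresholds form an explicit decision rule, they are easy to express as code. The sketch below is illustrative only (the function and parameter names are our own, not part of the guideline, and it follows the age ≥60 cutoff described above); it is the kind of logic a decision-support tool might encode, not a substitute for clinical judgment:

```python
def jnc8_initiate_drug_therapy(age, systolic, diastolic,
                               diabetes=False, chronic_renal_disease=False):
    """Illustrative encoding of the JNC 8 thresholds summarized above.

    Returns True if the measured BP (mmHg) exceeds the guideline
    threshold for initiating antihypertensive drug therapy.
    """
    # Diabetes or chronic renal disease: threshold 140/90 mmHg at any age
    if diabetes or chronic_renal_disease:
        return systolic > 140 or diastolic > 90
    # Age >= 60: systolic threshold relaxed to >150 mmHg
    if age >= 60:
        return systolic > 150 or diastolic > 90
    # Younger than 60: threshold >140/90 mmHg
    return systolic > 140 or diastolic > 90
```

For example, a 72-year-old with a BP of 148/84 mmHg falls below the JNC 8 treatment threshold, although the same reading would have crossed the JNC 7 systolic threshold of 140 mmHg.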
Compared with the JNC 7 writers, the JNC 8 writing committee drew its conclusions more strictly from randomized-trial evidence and limited the scope of the guideline to drug therapy for hypertension. The most controversial and contested recommendation is the higher threshold (systolic BP >150 mmHg) for people older than 60. Some experts, and the American Society of Hypertension, recommend a target this high only for patients older than 80.
PROACTIVE PALLIATIVE CARE AND HOSPICE CONSULTATION BENEFIT PATIENTS AT THE END OF LIFE
Inpatient or outpatient hospice care was preferred by patients and lowered healthcare costs.
Archival interview with: Susan Block, MD ( www.audio-digest.org/editorial/JW/2014/JW2506_Block.mp3 )
Several studies published during 2014 highlight the benefits of palliative care and hospice consultation in seriously ill patients. In a randomized trial, 24 medical oncology clinics at a comprehensive cancer care center in Canada were assigned to provide their adult outpatients with advanced cancer either standard care or immediate consultation and regular follow-up by a palliative care team (the intervention). Of the 461 enrolled patients, nearly all were receiving chemotherapy, radiation treatments, or both. After 4 months, quality of life and patient satisfaction with care were significantly higher in the intervention group ( www.jwatch.org/NA33803 ). In another study, 3100 Canadian outpatients who received care from palliative care teams were matched by propensity score with an equal number of patients who received usual care; about 80% of patients in each group had cancer. Compared with the usual-care group, significantly fewer patients who received care from palliative care teams were hospitalized (39% vs. 31%), visited an emergency department (35% vs. 29%), or died in a hospital (29% vs. 16%; www.jwatch.org/NA34912 ). 
In a case-control Medicare study, researchers identified 18,000 patients with poor-prognosis cancer who enrolled in hospice and matched them (by age, sex, place of residence, and time from cancer diagnosis to death) with 18,000 similar patients who did not enroll in hospice. Patients in both groups survived about 7 months after cancer diagnosis. The average duration of hospice care was 11 days. During hospice care (or the equivalent time for nonhospice patients), hospice patients were significantly less likely than nonhospice patients to be hospitalized (42% vs. 65%), be admitted to intensive care (15% vs. 36%), undergo invasive procedures (27% vs. 51%), or die in a hospital (3% vs. 50%) or a skilled nursing facility (11% vs. 24%). Mean cost of care during the last year of life was also significantly lower in hospice patients ( www.jwatch.org/NA36244 ). Finally, in a study at two academic medical centers, researchers compared outcomes for two groups of patients: 167 seriously ill patients who died after being transferred from intensive care units (ICUs) to dedicated hospice inpatient units (DHIPUs), and 99 similar patients who received comfort care (including withdrawal of life-sustaining treatments) and died in ICUs. Compared with patients who stayed in ICUs, patients transferred to DHIPUs were less likely to have received ventilator and vasopressor support and had a significantly shorter mean ICU length of stay (7 vs. 10 days), resulting in substantial cost avoidance ( www.jwatch.org/NA34793 ). What do these results mean? Previous research has shown that patients approaching death desire symptom and pain control, and many want to die at home. These studies indicate that proactive palliative care and hospice consultation, whether in an outpatient setting or in an ICU, are associated with these patient-desired outcomes and lower costs.
WHEN IS CONSERVATIVE TREATMENT BETTER? 
NEARLY ALWAYS, FOR SOME PATIENTS WITH CHRONIC ORTHOPEDIC PAIN
Surgical treatment for nontraumatic meniscal knee tears and the use of injections for spinal stenosis and cervical disk disease are called into question.
Archival interviews with: Gunnar Andersson, MD, PhD ( www.audio-digest.org/editorial/JW/2014/JW2515_Andersson.mp3 ) and Moin Khan, MD ( www.audio-digest.org/editorial/JW/2014/JW2519_Side_B_Interview.mp3 )
In 2014, investigations called into question the need for arthroscopy for degenerative meniscal tears and suggested that epidural injections for cervical radiculopathy and lumbar spinal stenosis offer little benefit. In a Finnish trial, 150 patients with nontraumatic meniscal tears and knee pain were randomized to arthroscopy or to sham arthroscopy, followed by a graduated exercise program for all. At 12 months, both groups showed similar improvement in knee pain ( www.jwatch.org/na33060 ). This outcome reinforced the results of a 2013 randomized trial in which patients with meniscal tears and osteoarthritis did not benefit more from arthroscopic repair than from physical therapy alone ( www.jwatch.org/jw201303280000003 ). In addition, researchers performed a meta-analysis of seven randomized, controlled trials that involved patients with meniscal tears, knee pain, and mild or no osteoarthritis who were treated operatively or nonoperatively. Operative patients experienced a minimal improvement in short-term functional outcomes at 6 months but no improvement in pain at 6 months and 2 years ( www.jwatch.org/na35613 ). These studies strongly support nonoperative management of patients with chronic degenerative meniscal injury; in contrast, surgery might be necessary for selected patients with acute severe meniscal tears. Two other studies called epidural steroid injections into question. 
Investigators compared epidural injections of steroids plus lidocaine with injections of lidocaine alone in patients with buttock or leg pain attributed to central lumbar spinal stenosis. The proportion of patients who had ≥30% improvement in pain at 6 weeks was similar in both groups (although the steroid-treated group reported a small improvement at 3 weeks). Because no true sham treatment was evaluated, the possibility exists that lidocaine alone, or a placebo effect, is as good as injected steroids ( www.jwatch.org/na35108 ). In another study, which was unblinded and open-label, 169 patients with cervical radicular arm pain were treated with epidural steroid injections, drug therapy plus physical therapy, or a combination of the two during 6 months. At 1 month, no difference in the primary endpoint (improvement in arm pain) was found among the groups, although the combination-treatment group showed a trend toward more improvement at 3 months ( www.jwatch.org/na36340 ). These findings for lumbar spinal stenosis and cervical radiculopathy mirror the mostly negative results of previously published studies on epidural steroid injections for sciatica ( www.jwatch.org/jw201212130000001 ). Taken together, these studies indicate that the widespread use of epidural steroid injections in the U.S. might not be warranted.
WHAT IS AN OUNCE OF CV PREVENTION WORTH?
Low-dose aspirin and imaging-guided therapy both were largely ineffective for preventing cardiovascular disease in elders with risk factors.
Cardiovascular disease (CVD) remains the leading cause of death in U.S. adults. However, two studies in 2014 cast doubt on the efficacy of two CVD prevention strategies, one old and one new. In the Japanese Primary Prevention Project, almost 15,000 older adults (age range, 60–85) with cardiovascular risk factors but without known CVD were randomized to daily low-dose aspirin (100 mg) or no aspirin. 
After a median follow-up of approximately 5 years, no difference was found in the prespecified endpoint of myocardial infarction (MI), stroke, or incident CVD. Nonfatal MI and transient ischemic attack were less common in the aspirin group, but that benefit was offset by a higher risk for bleeding ( www.jwatch.org/na36285 ). Editorialists point out that risk for hemorrhagic stroke is higher in this largely Asian population than in Western populations and that the overall low event rate and unblinded study design are limitations ( http://dx.doi.org/10.1001/jama.2014.16047 ). Three large ongoing trials of aspirin for primary prevention in higher-risk populations will provide more answers in coming years. In another study, 900 patients with type 1 or type 2 diabetes were randomized to screening with coronary computed tomography angiography (CCTA) or to no screening. The no-screening group received standard medical therapy (targets: glycosylated hemoglobin level <7%, LDL cholesterol level <100 mg/dL, and systolic blood pressure <130 mmHg). Participants in the CCTA group underwent imaging and then received standard or more-aggressive medical therapy plus revascularization as indicated by CCTA results (which incorporated both coronary calcium scores and percent coronary stenoses). After a mean follow-up of 4 years, the composite outcome of all-cause mortality, nonfatal MI, or unstable angina requiring hospitalization was not significantly different between groups ( www.jwatch.org/na36286 ). The Japanese study adds to the controversy about aspirin for primary prevention. Although the study is limited by a low overall CVD event rate, reserving aspirin for higher-risk patients, in whom vascular benefits clearly outweigh bleeding risks, seems prudent. 
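The trade-off described above (a small absolute benefit offset by a small absolute harm) is often easiest to see through numbers needed to treat and harm. A brief sketch with purely hypothetical event rates, chosen for illustration and not taken from the trial:

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat (or harm): 1 / absolute risk difference."""
    return 1.0 / (control_rate - treated_rate)

# Hypothetical 5-year event rates, for illustration only
mi_no_aspirin, mi_aspirin = 0.020, 0.015        # nonfatal MI
bleed_no_aspirin, bleed_aspirin = 0.005, 0.010  # serious bleeding

print(round(nnt(mi_no_aspirin, mi_aspirin)))        # NNT to prevent one MI
print(round(nnt(bleed_aspirin, bleed_no_aspirin)))  # NNH to cause one bleed
```

With these illustrative rates, the NNT and NNH are both 200, so benefit and harm roughly cancel; the argument for reserving aspirin for higher-risk patients is that their higher baseline vascular risk pushes the NNT well below the NNH.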
The lack of benefit for CCTA screening in diabetic patients likely is related to the success of standard medical therapy in lowering adverse event rates in these higher-risk patients, and editorialists advocate for better adherence to existing treatment recommendations rather than additional screening tests.
DIRECT ORAL ANTICOAGULANTS ARE SAFE AND EFFECTIVE COMPARED WITH WARFARIN
Several large meta-analyses provide reassurance that benefits outweigh risks.
Archival interviews with: Christian Ruff, MD ( www.audio-digest.org/editorial/JW/2014/JW2503_Ruff.mp3 ) and Partha Sardar, MD ( www.audio-digest.org/editorial/JW/2014/JW2513_Sardar.mp3 )
Since 2010, three direct-acting oral anticoagulants (DOACs) have been approved by the FDA as alternatives to drugs such as warfarin and heparin for preventing stroke and systemic embolism in patients with atrial fibrillation (AF) and for treating and preventing venous thromboembolism (VTE). In 2014, the thrombin inhibitor dabigatran (trade name: Pradaxa) and the factor Xa inhibitor apixaban (trade name: Eliquis), which previously were approved for preventing stroke in patients with AF, gained approval for VTE treatment and prophylaxis. The factor Xa inhibitor rivaroxaban (trade name: Xarelto) already was approved for both indications. And another promising factor Xa inhibitor, edoxaban (trade name: Savaysa), is in the pipeline. Postmarketing reports on dabigatran have raised concerns about excess risks for serious and fatal bleeding, but a 2013 FDA review of claims and administrative data reinforced clinical trial data showing no excess bleeding risk with dabigatran compared with warfarin ( www.jwatch.org/em201304120000003 ). In 2014, four new meta-analyses further clarified the risks and benefits of DOACs. Two of these analyses were based on the four major clinical trials in which each of the DOACs was compared with warfarin (duration, ≤3 years; ≈72,000 patients with AF). 
One analysis showed that DOACs were associated with 19% fewer total strokes and systemic embolic events and 52% fewer intracranial hemorrhages, but a 25% relative increase in the rate of gastrointestinal bleeds ( www.jwatch.org/na33185 ). In a different meta-analysis of these trials, DOACs were associated with lower rates of all-cause mortality (relative risk, 0.89), vascular-related mortality (RR, 0.88), and bleeding-related mortality (RR, 0.54; www.jwatch.org/na35996 ). In a meta-analysis of six trials in which >27,000 patients with acute symptomatic VTE were treated with one of the four DOACs or a vitamin K antagonist for 3 to 12 months, VTE recurrence rates were similar in both groups (≈2%), but DOACs were associated with a 39% relative reduction in major bleeds ( www.jwatch.org/na35816 ). Finally, in a meta-analysis of data from 10 trials in >25,000 older patients (age ≥75) with AF, VTE, or serious medical illnesses, rates of major or clinically relevant bleeding were similar in patients who took DOACs and those who received other therapies; DOACs were more effective than other therapies in preventing both strokes and VTEs ( www.jwatch.org/NA34787 ). Taken together, these analyses provide reassurance that DOACs are at least as safe and effective as warfarin in most clinical settings. Compared with warfarin, the DOACs certainly are more convenient, although some experts speculate that the absence of frequent monitoring could lead to decreased adherence. Antidotes for the DOACs, now being developed, would provide an extra measure of safety.
NEW DEVELOPMENTS IN CV ASSESSMENT BEFORE NONCARDIAC SURGERY
A guideline has been updated; a trial showed no benefit for perioperative aspirin.
For patients who take daily aspirin for cardiovascular (CV) prophylaxis and are scheduled for noncardiac surgery, a dilemma is whether to continue aspirin perioperatively; the rationale for continuing would be to prevent perioperative vascular events without increasing bleeding risk. 
And for high-risk patients who are not on daily aspirin, would preoperative initiation of aspirin confer benefit? These questions were addressed in POISE-2, a multicenter randomized trial that involved 10,000 patients who were undergoing various noncardiac surgical procedures. Patients were eligible if they had known vascular disease (32%), were undergoing major vascular surgery (5%), or had multiple specified risk factors (83%). Nearly half the patients had been taking daily aspirin prior to enrollment (the continuation stratum), and the others had not (the initiation stratum). Patients with recent coronary stenting were excluded. Patients received aspirin or placebo starting just before surgery and continuing for 30 days in the initiation stratum or for 7 days in the continuation stratum (after which the latter patients resumed daily aspirin). The outcome: No difference between the aspirin and placebo groups in the 30-day endpoint of death or nonfatal myocardial infarction, but slightly more major bleeding in aspirin recipients than in placebo recipients (5% vs. 4%; P=0.04). Aspirin did not benefit the continuation stratum, the initiation stratum, or the subgroup with previously known vascular disease. In sum, this trial suggests that perioperative aspirin does not protect high-risk patients from ischemic perioperative CV events, and it confers a small excess risk for major bleeding ( www.jwatch.org/na34173 ). Another important 2014 report was a new guideline from the American College of Cardiology and the American Heart Association on perioperative cardiovascular management of patients undergoing noncardiac surgery. This guideline first appeared in 1996 and has been updated several times, most recently in 2007. As data have accumulated over the years (most weighing against elaborate preoperative testing or drug initiation), the guideline has evolved. 
The 2014 guideline update recommends that clinically stable patients who are scheduled for elective noncardiac surgery and whose risk for major adverse cardiac events is <1% proceed to surgery without further testing. Patients whose risk is >1% but whose functional capacity is reasonable (e.g., they can easily climb a flight of stairs) also should proceed to surgery without further testing. For the remaining patients (i.e., those at high risk with poor or unknown functional capacity), the guideline states that it may be reasonable to perform stress testing if a positive result would change management, but this is a class II recommendation (i.e., effectiveness is not well established). The guideline devotes considerable attention to the controversial literature on perioperative β-blocker use. The only class I (i.e., strong) recommendation is that β-blockers should be continued perioperatively in patients who have been taking them chronically. Class II recommendations suggest that it may be reasonable to begin β-blockers preoperatively in β-blocker-naive patients at high risk for ischemic heart disease; but, in such cases, the drug should be initiated at least several days before surgery (never on the day of surgery) so that its tolerability can be assessed ( www.jwatch.org/na36145 ).
STUDIES SUGGEST WE SHOULD MODIFY OUR APPROACH TO SEPSIS TREATMENT
Early goal-directed therapy and a liberal transfusion threshold did not benefit critically ill septic patients.
In 2014, three long-awaited randomized trials of treatment for septic shock were released; the third involved the transfusion threshold in this population. Two studies focused on the early resuscitation of patients with septic shock. 
In the ProCESS trial, 1341 patients with septic shock received one of three treatment strategies: early goal-directed therapy (EGDT), with specific targets for central venous pressure, central venous saturation, and hematocrit; protocol-based resuscitation with peripheral access only and slightly different hemodynamic targets; or usual care. No difference in 60-day, 90-day, or 1-year mortality was noted among the groups ( www.jwatch.org/na34016 ). In the ARISE trial, 1600 septic shock patients received either EGDT or usual care. Patients in the EGDT group received more vasopressors, packed red-cell transfusions, and inotropic support, but 90-day mortality was the same in both groups ( www.jwatch.org/na35864 ). In the third trial, researchers examined the care of patients with septic shock after they were admitted to intensive care units (ICUs). Almost 1000 patients were randomized to a conservative hemoglobin transfusion threshold (7 g/dL) or a liberal threshold (9 g/dL). Red blood cell transfusions were given to 99% of the liberal group but to only 64% of the conservative group. Ninety-day mortality, need for life support, and ischemic events were similar in both groups ( www.jwatch.org/na35745 ). The greatest benefit of EGDT, and of its incorporation into the Surviving Sepsis guidelines ( www.sccm.org/Documents/SSC-Guidelines.pdf ), has been earlier recognition of sepsis. The most important thing to remember about these studies is that all patients received early antibiotics and volume resuscitation. These are the two principles that should guide clinical management of septic patients: rapid antibiotic administration and initial fluid boluses to restore intravascular volume. Following specific protocols with regard to target central venous pressures, central venous saturations, and hematocrit goals in subsequent care makes less sense. This marks a big change in approach, because EGDT has been the mantra in ICUs for many years. 
CAN WE DO A BETTER JOB OF KEEPING PATIENTS OUT OF THE HOSPITAL?
Complex patients require special home care management, but we still don't know the best way to provide it.
Archival interviews with: Abir Kanaan, PharmD ( www.audio-digest.org/editorial/JW/2014/JW2503_Kanaan.mp3 ) and Eric De Jonge, MD ( www.audio-digest.org/editorial/JW/2014/JW2524_DeJonge.mp3 )
Keeping patients out of the hospital and preventing readmissions was the focus of several studies in 2014. This topic has taken on great importance because of financial pressures to shorten hospital length of stay and new financial penalties for early readmissions. In a randomized trial, researchers examined whether a virtual ward could prevent readmissions. The intervention involved a team of outpatient healthcare professionals who sought to provide daily integrated care (similar to hospital rounds and including home visits) for high-risk internal medicine patients, starting the day after these patients were discharged from Canadian hospitals and continuing for 1 month. Researchers noted no difference in 1-year mortality or hospital readmissions between patients assigned to the virtual ward and those who received usual care ( www.jwatch.org/na35858 ). In two studies, integrated outpatient care to keep patients out of the hospital in the first place proved to be a more promising approach. In one study, 722 complex patients (nearly half of whom had been hospitalized during the preceding 4 months) received home-based primary care consisting of teams of physicians, nurses, social workers, and mental health professionals who provided home visits and 24/7 telephone coverage. During 2 years of follow-up, these patients had 9% fewer hospitalizations, 10% fewer emergency department visits, 27% fewer skilled nursing facility days, and 17% lower costs than matched patients who received usual care. In another study from the U.S. 
Veterans Affairs system, a home-based primary care program also achieved impressive reductions in hospital admissions ( www.jwatch.org/na36208 ). The patient-centered medical home (PCMH), an office-based rather than home-based model, is an approach that is currently in vogue. However, in a study comparing 32 PCMH practices with a similar number of control practices, the PCMH model was not associated with fewer hospitalizations or emergency department visits ( www.jwatch.org/na33786 ). Adverse drug events (ADEs) after hospital discharge were the focus of two studies. In one study, at least one ADE occurred in 19% of 1000 consecutive older patients during the 6 weeks after hospital discharge ( www.jwatch.org/na33302 ). In another study, researchers compared the medication lists received by 471 cardiology patients at hospital discharge with the medications they were actually taking. Two to 3 days after discharge, 27% were not taking at least one prescribed medication, and 36% were taking at least one medication that was not on their discharge list ( www.jwatch.org/na35397 ). The main conclusion is that transitions out of the hospital are increasingly complicated, with many patients receiving large numbers of medications and complex instructions. Maintaining these patients at home might be more successful if clinicians provide intense, integrated, team-based care, but we do not yet understand exactly what features of such care are most effective, which patients benefit most, and what outcomes (improved care quality or lower resource use) can be expected.
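The medication-list comparison in the discharge study above is, at bottom, a set-difference problem: which prescribed drugs are missing, and which taken drugs are unlisted. A minimal sketch of such a reconciliation check (function name and drug names are hypothetical, not from the study):

```python
def reconcile(discharge_list, actually_taking):
    """Flag the two discrepancy types measured in the study:
    prescribed-but-not-taken, and taken-but-not-prescribed."""
    prescribed = set(discharge_list)
    taking = set(actually_taking)
    return {
        "not_taking": sorted(prescribed - taking),  # on discharge list, not taken
        "unlisted": sorted(taking - prescribed),    # taken, not on discharge list
    }

# Hypothetical example
result = reconcile(
    discharge_list=["aspirin", "lisinopril", "metoprolol"],
    actually_taking=["aspirin", "metoprolol", "ibuprofen"],
)
print(result)  # {'not_taking': ['lisinopril'], 'unlisted': ['ibuprofen']}
```

In practice such a check would also need to normalize drug names and doses, which is where real reconciliation tools do most of their work.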