
The Fight to Save Planetary Science, and Why the New Mars Rover Doesn't Mean Victory

Planetary scientists have come together to prioritize the most compelling, cutting-edge questions across our entire field. Some of these questions are best addressed by ambitious, sophisticated, large-scale missions. Others are best addressed by smaller, more focused missions. Some require continued operations of existing planetary orbiters or rovers. All require a commitment to maintaining the existing planetary science community. While the future of large-scale missions has been receiving the most headlines, the other priorities have uncertain, worrying futures, and American planetary exploration may suffer greatly as a result.

The relationship between planetary science and NASA is deeply intertwined and fraught with complications. Almost every US planetary scientist depends on the space agency in some way: either directly as civil servants employed by NASA, recipients of peer-reviewed science grants funded by NASA, participants in the operations or science planning of ongoing or anticipated planetary missions, or simply as users of the vast quantity of data returned by those missions since the first one 50 years ago. Every ten years a Decadal Survey (DS) is conducted across planetary science to identify the highest-priority science questions and chart a course for answering them. The DS is conducted by the National Research Council, sponsored and initiated by NASA and the NSF. While it does not carry the force of law, the recommendations of the DS are seen by Congress, the public, and planetary scientists themselves as representing the consensus of the planetary science community. Usually, NASA then uses these recommendations to ensure that it plans and implements the most scientifically productive suite of projects possible. The most recent DS was completed in 2011.

The highest-profile DS recommendations relate to spacecraft missions. Three cost classes exist for NASA planetary science missions, the first two of which are the direct result of past DS recommendations: Discovery (roughly $500 million), New Frontiers (a bit less than $1 billion), and Flagship (everything more than $1 billion). The selection process differs from class to class: Discovery and New Frontiers missions are chosen after a two-step competitive proposal process that first winnows the dozens of submitted proposals down to a handful, then picks a final winner from the remaining few. The destinations and goals of Flagship missions are set by NASA, and scientists then compete to provide and work with individual instruments, while the mission as a whole is run from a specific NASA Center. Discovery missions can be targeted anywhere in the solar system where compelling science can be addressed within the allotted budget. A set of acceptable New Frontiers destinations is generated through the DS process, along with the desired science to be done at each destination.

The top-priority Flagship mission, according to the DS, is Mars Sample Return, sort of. A mission to collect and return a sample of the martian surface to Earth is a project that will cost at least $7-8 billion, an eye-opening cost even in the best of budgetary times, which this is not. So the sample return mission was divided into three easier-to-swallow pieces, the first of which is this decade's priority Flagship and simply identifies and collects the desired samples and caches them for later retrieval.
The unspoken assumption is that this retrieval and subsequent return to Earth will occur in the following decade, possibly as the next two highest-priority Flagships in the queue. The Office of Management and Budget (OMB) is rumored to be wary of committing the government to such a large outlay and biasing future decadal surveys toward completing the sample return, especially since the caching mission has uncertain immediate science value on its own. In all, the priorities in the Decadal Survey are pretty clear. If Mars Sample Return cannot be achieved within budget constraints, a Europa mission that fitsshould be attempted. In either case, as resources become scarcer, the largest projects (i.e. Flagships) should be descoped or postponed to protect the smaller mission projects, with continued operations of existing missions and funding of research and analysis grants(R&A) the highest priority for continued funding, even when budgetary conditions are bad. This, to me, is most obviously interpreted as putting highest priority on these grants and continued operations, the PI-led Discovery and New Frontiers programs second, and the largest-scale Flagship missions last. However, for reasons that are obscure to me but which some of my colleagues find compelling, an alternative set of priorities has been asserted focusing on the balanced program preference in the Decadal Survey. In this view, some progress should always be occurring on missions at all scales, though this balance is only applicable to mission classes and potentially targets: R&A, technology development, and ongoing missions are clearly intended to be exempt from this balance, with a separate protected status. There has been considerable effort placed on studying and restudying large missions to find some way to fit them into the available budgets. This has led, in some way, to the announcement that the Curiosity rover currently on Mars will be rebuilt for a 2020 launch as the next Flagship-class mission. It is thought that the 2020 rebuild, if a close copy to the original, can be done more cheaply than Curiositys final cost, perhaps for $1.5 billion. However, such a rebuild is not obviously

motivated by science reasons. While connections to the DS and Mars Sample Return can be imagined for this new rover, none have been explicitly made. It has been argued in some circles that the 2020 rover represents a good-faith effort to fly the highest priority mission in the DS. It is definitely the case that Mars Sample Return is the consensus choice for the next large mission. However, it is neither the case that the community thinks Mars Sample Return is the top priority for planetary science, nor that Mars is the highest priority mission target. It may be the case that a copy of Curiosity could be built for $1.5 billion. However, it is not the case that such a copy is responsive to the recommendations of the planetary science community. It may be the case that a Curiosity copy could be made responsive to those recommendations, but the necessary alterations would make it less likely it can be built for $1.5 billion. While the startup funds for a 2020 rover may be present in the current Mars Program budget, any overruns or unforeseen additions will have consequences for other projects, as we have unfortunately seen in the past in similar situations. Furthermore, even if everything runs smoothly, the majority of funding will be required several years from now, beyond the time for which budgets are currently planned. The temptation to put off the Discovery and New Frontiers program for yet another year (or two or three) will be present, especially if doing so supports a more capable Mars rover. Beyond all of this, lost in the maneuvering and rescoping and ruckus with respect to the Flagships are the other priorities in the DS. In my opinion there is simply no honest, rational reading by which the Flagship missions are higher priorities than Discovery or New Frontiers, as outlined above. Yet, there has been too little attention paid to the constantly slipping timelines and lengthening years between opportunities for these programs. As originally conceived and still recommended in the DS, Discovery opportunities would be available every 24-36 months, if not more often, with multiple selections per competition. Yet only one selection was made in the entire 2002-2011 decade, and while InSight (a Mars mission) was selected in 2012, the next Discovery opportunity is not scheduled before 2015. This is an astonishing slowdown compared to the 1992-2001 period when 10 missions were selected. While NASA leadership has focused on large missions in general and Mars specifically, there has been no obvious effort to maintain the desired pace of smaller missions, which have provided amazing andvaried science results and trained tomorrows leading planetary scientists. Furthermore, there have been threats to the operation of ongoing missions. TheMESSENGER spacecraft orbiting Mercury has enough fuel to remain in orbit for two more years, and plans to fill those years with follow-up science investigations that have been in place. However, scientists close to the mission report that it is unclear whether sufficient funds will be available to continue its operation beyond 2013, and supporters are now fighting for its life. The Cassini spacecraft orbiting Saturn has also been the subject of shutdown rumors. Again, this is despite the exceedingly high priority placed on continuing these missions by the planetary science community. Finally, the highest priority to the science community, the maintenance and expansion of R&A, has also been largely neglected. 
A recent analysis by Mark Sykes of the Planetary Science Institute shows that recent increases in R&A funding have been concentrated in near-Earth object studies (good for my personal interests, not great for colleagues who study Venus or the rings of Saturn), and that proposal success rates have been dropping precipitously over the past decade from upward of 40% to a current rate of 25% or lower. Astonishingly, even this 25% success rate may be cut further, with recent evidence that rates as low as 10% are planned for coming years. This will cripple the scientific community that NASA relies on to make its exploration missions successful and generate the ideas and research that leads to future missions. The bottom line for many concerned planetary scientists is this: we have been asked by NASA to rally behind the DS since its publication, but we are watching NASA by and large abandon the recommendations of the DS. The people working at the agency are in a difficult situation and are working hard for planetary science, but a lack of transparency and a steady stream of surprises makes too many scientists feel more like pawns than partners. The technical workers and engineers who enabled amazing Mars landings are being rightfully acknowledged and their skills protected, but it appears that the rest of the planetary science community is not considered important enough by NASA to maintain. It would be naive to assume that the goals and agendas of NASA as an agency are always aligned with the best interests of the field of planetary sciences (or earth sciences, or astrophysics, and so on). It would be folly to demand that science always take precedence over political or financial constraints. What I think we can ask for, as scientists and as citizens, is that decisions made due to non-science considerations are honestly presented as such. If we are to get coal in our stockings, dont tell us its because thats what we put on our wish list to Santa.

One solution to these problems is for NASA to follow the Decadal Survey's recommendations (and Congressional instructions: see the House report language for planetary science) to protect and expand R&A, maintain a steady rate of cost-effective Discovery and New Frontiers missions, and delay major investments in Flagship missions until funds are available. A better solution is not necessarily a politically popular one: increasing NASA's budget for planetary science so that all these goals can be achieved. NASA is such a high-profile agency that the public thinks it spends much more money than it actually does. If every American old enough to vote decided to give up one cup of coffee or a six-pack of soda and could donate that money to support planetary science, we would have been able to start work on all three recent finalist mission candidates for the Discovery program: not only the Mars seismic station that was selected, but also a boat to sail the seas of Titan and a spacecraft to explore the surface of a comet. In these difficult economic times, however, scientists know we must share sacrifice with our fellow citizens, and we are doing so and have been doing so. Eventually, however, we all hope and expect easier times to return to our country.

As long as humans have sent machines to explore the solar system, Americans have played a leading role, and our achievements, along with those of other spacefaring nations, have thrilled the world. With the long lead times necessary to plan planetary missions, it is not too soon to start thinking about missions that may not launch for a decade, as evidenced by the beginning of work on the 2020 Mars rover. Yet NASA's apparent disregard for the clear recommendations of the Decadal Survey concerning the relative priorities of Flagship vs. Discovery missions causes much concern. Given the complicated relationship mentioned earlier, many of us are uncomfortable appearing to criticize NASA or wondering if we are breaking a law against lobbying by doing so. However, professional societies and citizen supporters are starting to be more active in promoting the benefits of planetary science, and all of the high-priority items in the DS, to our government and fellow Americans. The Division for Planetary Sciences and the American Geophysical Union, major professional societies, have made several statements related to the NASA budget, as have groups of science enthusiasts like the Planetary Society. We hope to increase our engagement and improve the prospects of achieving the highest-priority goals in planetary science, including sustaining the health of the scientific community. With continued support from NASA and the American people, we will continue to learn more about all objects in the solar system, from the sun-baked plains of Mercury to the expanses where the Voyager spacecraft still gather data, and beyond. Yes, including Mars.

The 2012 Apocalypse, or Why the World Won't End This Week

If you believe The Daily Mail, we're all convinced that the world is going to end on 21st December 2012. Apparently people are stockpiling food and weapons, flocking to remote villages and heading for mystical peaks from whence an extra-terrestrial mothership housed for centuries in an alien temple inside the mountain will pluck believers to safety. "With ten days to go before the Mayan apocalypse supposedly casts Earth into oblivion, time is running out for believers to find alien salvation," the Mail proclaims. So why all the recent hysteria?
According to Maya myth, the world was created on 11 August 3114 BC in the Gregorian calendar, or 13.0.0.0.0 by the Maya count. This creation was the fourth incarnation of the world, the previous age having ended after the thirteenth baktun (a c.400-year cycle). On 21st December, it will once again be 13.0.0.0.0 and the Great Cycle will be completed, bringing the thirteenth baktun of the current age to an end. Some translations of the glyphs from a partially illegible Maya stela suggest that the end of the present baktun will see the descent of the god Bolon Yookte' K'uh (sometimes translated as the Nine-Footed God). This convergence of dates and prophecies has been seen as marking the transition to the next world, and hence the end of this one. For many years, a scarcity of Maya calendrical references to dates post-2012 was also seen as a possible indication of a cataclysmic end to the world this December. But, quite apart from the question of practicalities (I mean, how many of you have a calendar on your desk which reaches to 2406, a baktun from now?), even this tenuous evidence has recently been refuted by the discovery of an early Maya mural in Xultún which includes calendrical and cosmological calculations stretching some 7,000 years into the future. Certainly, the Precolumbian Maya might have considered 21st December 2012 a symbolic date, a moment of potential transformation. But does that mean they thought the world would end?

It's easy to mock The Daily Mail (far, far too easy), but for some people the prospect of apocalypse is a very real fear. David Morrison, of NASA's Astrobiology Institute, says that they have received thousands of questions about the 2012 doomsday predictions, some of them from people who have considered suicide because they are so terrified by the horrific idea of living through the end of the world.
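For readers who want to check the calendar arithmetic for themselves, here is a minimal sketch in Python. It assumes the standard Long Count unit definitions and the widely used GMT correlation constant (Julian Day Number 584,283 for the creation date 0.0.0.0.0); neither number comes from this article, but both are conventional.

```python
# Rough check of the Long Count arithmetic behind the 21 December 2012 date.
# Assumes the standard unit definitions and the common GMT correlation
# constant (JDN 584,283 for the creation date 0.0.0.0.0).

KIN = 1               # 1 day
WINAL = 20 * KIN      # 20 days
TUN = 18 * WINAL      # 360 days
KATUN = 20 * TUN      # 7,200 days
BAKTUN = 20 * KATUN   # 144,000 days

great_cycle = 13 * BAKTUN                  # 1,872,000 days in thirteen baktuns
print(BAKTUN / 365.2425)                   # ~394.3 solar years per baktun
print(great_cycle / 365.2425)              # ~5,125.4 years from creation to 13.0.0.0.0

creation_jdn = 584_283                     # assumed GMT correlation constant
print(creation_jdn + great_cycle)          # 2,456,283 = Julian Day Number of 21 Dec 2012
```

The arithmetic only tells you when the odometer rolls over; whether that rollover is an ending or the start of the fourteenth baktun is exactly the point at issue.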

Some people are clearly deeply troubled by the recent obsession with apocalypse, but that the origins of their fear lie in a highly disputed and extremely tenuous Maya prophecy is a fascinating and baffling situation. According to the Maya legends eloquently recorded in the sixteenth-century Popol Vuh, humans were created in this, the fourth world, when the gods moulded our ancestors from maize dough (after unsuccessful attempts at fashioning men from monkeys, wood and clay). I doubt that many of the so-called "preppers" who are preparing themselves for the end of the world or an ascent into the stars with their alien overlords believe in the Precolumbian myths of creation, so (even if we had conclusive evidence of a Maya belief in a 2012 apocalypse) why would they believe in the myths of destruction? And if you don't believe me, why not listen to the Maya themselves? Modern-day Maya see the apocalypse as a European invention. For them, the end of the baktun is a time of renewal and celebration, a new beginning, not an end. So, despite the spectre of impending doom, I'm going to carry on Christmas shopping, and looking forward to my imminent research leave, secure in the knowledge that the ancient Maya didn't believe the world will end on Friday. And even if they did, I don't.

Psychology Reveals the Comforts of the Apocalypse

December 21, according to much-hyped misreadings of the Mayan calendar, will mark the end of the world. It's not the first "end is nigh" proclamation, and it's unlikely to be the last. That's because, deep down, for various reasons, there's something appealing, at least to some of us, about the end of the world.

Enjoy the Self-Fulfilling Prophecy

University of Minnesota neuroscientist Shmuel Lissek, who studies the fear system, believes that, at its heart, the concept of doomsday evokes an innate and ancient bias in most mammals. "The initial response to any hint of alarm is fear. This is the architecture with which we're built," Lissek says. Over evolutionary history, organisms with a better-safe-than-sorry approach survive. This mechanism has had consequences for both the body and brain, where the fast-acting amygdala can activate a fearful stress response before higher cortical areas have a chance to assess the situation and respond more rationally.

But why would anyone enjoy kindling this fearful response? Lissek suspects that some apocalyptic believers find the idea that the end is nigh to be validating. Individuals with a history of traumatic experiences, for example, may be fatalistic. For these people, finding a group of like-minded fatalists is reassuring. There may also be comfort in being able to attribute doom to some larger cosmic order, such as an ancient Mayan prophecy. This kind of mythology removes any sense of individual responsibility.

There's an even broader allure to knowing the precise end date. Apocalyptic beliefs make existential threats (the fear of our mortality) predictable, Lissek says. Lissek, in collaboration with National Institute of Mental Health neuroscientist Christian Grillon and colleagues, has found that when an unpleasant or painful experience, such as an electric shock, is predictable, we relax. The anxiety produced by uncertainty is gone. Knowing when the end will come doesn't appeal equally to everyone, of course, but for many of us it's paradoxically a reason to stop worrying. This also means people can focus on preparing. Doomsday preppers who assemble their bunker and canned food, Lissek believes, are engaged in goal-oriented behaviors, which are a proven therapy in times of trouble.

The Power of Knowledge

Beyond the universal aspects of fear and our survival response to it, certain personality traits may make individuals more susceptible to believing it's the end of the world. Social psychologist Karen Douglas at the University of Kent studies conspiracy theorists and suspects that her study subjects, in some cases, share attributes with those who believe in an impending apocalypse. She points out that, although these are essentially two different phenomena, certain apocalyptic beliefs are also at the heart of conspiracy theories: for example, the belief that government agencies know about an impending disaster and are intentionally hiding this fact to prevent panic. "One trait I see linking the two is the feeling of powerlessness, often connected to a mistrust in authority," Douglas says. Among conspiracy theorists, these convictions of mistrust and impotence make their conspiracies more precious, and more real. People feel like they have knowledge that others do not.

Relatively few studies exist on the individuals who start and propagate these theories. Douglas points out that research into the psychology of persuasion has found that those who believe most are also most motivated to broadcast their beliefs. In the Internet age, that's an easier feat than ever before.

Lessons from Dystopia

Steven Schlozman, drawing both from his experiences as a Harvard Medical School child psychiatrist and as a novelist (his first book recounts a zombie apocalypse), believes it's the post-apocalyptic landscape that fascinates people most. "I talk to kids in my practice and they see it as a good thing. They say, life would be so simple, I'd shoot some zombies and wouldn't have to go to school," Schlozman says. In both literature and in speaking with patients, Schlozman has noticed that people frequently romanticize the end times. They imagine surviving, thriving and going back to nature.

Schlozman recently had an experience that eerily echoed Orson Welles's 1938 The War of the Worlds broadcast. He was discussing his book on a radio program and they had to cut the show short when listeners mistook his fiction for fact. He believes the propensity to panic is not constant in history but instead reflects the times. In today's complicated world, with terrorism, war, fiscal cliffs and climate change, people are primed for panic. "All of this uncertainty and all of this fear comes together and people think maybe life would be better after a disaster," Schlozman says. Of course, in truth, most of their post-apocalyptic dreams are just fantasies that ignore the real hardships of pioneer life and crumbling infrastructure. He points out that, if anything, tales of apocalypse, particularly involving zombies, should ideally teach us something about the world we should avoid, and how to make necessary changes now.

Physicists Find a Backdoor Way to Do Experiments on Exotic Gravitational Physics

The whole point of an explanation is to reduce something you don't know to something you do. By that standard, you don't gain much by explaining anything in terms of black holes. Appealing to the most mysterious objects known to science as an explanation sounds like using one mystery to explain another. Yet this is precisely what physicists have been doing to make sense of high-temperature superconductors and plasmas of nuclear particles. Both of these states of matter are about as un-black-hole-like as you can imagine. They don't suck you to your death (indeed, the force of gravity plays no role in them at all) and they don't split open the very foundations of physics. They are hard to understand in much the same way Earth's climate is: the laws governing their constituents are perfectly well known, but there are just so damned many constituents.

In the course of studying black holes, however, string theorists have discovered unexpected parallels, or dualities, between gravitational systems and non-gravitational ones. These correspondences may be purely mathematical or may reflect deeper physical linkages, but either way, you can leverage your knowledge of one domain to solve problems in another. In the January issue of Sci Am, Harvard physicist Subir Sachdev describes how to take analyses of gravitational phenomena and apply them to otherwise intractable problems regarding superconductors. Sabine Hossenfelder at Backreaction blogged on this topic recently, too, although she presumed a comfort level with vector fields and critical points. But what about running the dualities in the other direction, using laboratory measurements of extreme materials to probe exotic gravitational physics? At an afternoon coffee-and-cookie break this spring at the Kavli Institute for Theoretical Physics, string theorist Ramy Brustein of Ben-Gurion University in Israel told me a way to do just that.
He and Joey Medved of Rhodes University in South Africa have since written up their proposal. An expert on nuclear plasmas, Raju Venugopalan at the Brookhaven National Lab, likes the idea of returning the favor that string theorists have paid his subject area. "Can these experiments be used to learn about aspects of gravity?" Venugopalan wonders. That would just be a phenomenon. The experiments in question entail smashing gold or lead nuclei together to create plasmas of quarks and gluons. When Brookhaven's RHIC accelerator, following up earlier discoveries at CERN, first created these plasmas in 2005, physicists were flummoxed. They'd predicted the plasmas would behave like a gas, since quarks and gluons interact only weakly under the conditions that RHIC achieved. But the particulate debris betrayed pressure gradients that a gas cannot sustain. The plasmas must actually be liquid. Evidently the sheer number of particles compensated for the inherent weakness of their interactions.

Theorists were at a loss to calculate basic parameters of the fluid, such as viscosity, loosely speaking, the friction of fluid flow. The best they could manage was a rough argument based on Heisenberg's uncertainty principle. Viscosity depends on the energy of the fluid's constituent particles and the average time between successive particle collisions, and the uncertainty principle relates these two quantities, thereby implying a minimum possible value for the viscosity (as explained here). Even a so-called superfluid can't evade Dr. Heisenberg's strictures. A gas actually has a fairly large viscosity, since its particles are spaced farther apart and collide less frequently than those in a liquid. (A technical note: by viscosity, I really mean the ratio of viscosity to entropy density.) But what exactly the minimum value should be, theorists couldn't tell, until Dam Son of the University of Washington and his colleagues applied duality. They equated the viscosity of a fluid to gravitational waves caroming off a black hole in higher-dimensional space, which, even for a physicist, is not an analogy that springs to mind. "That was a big surprise," Brustein says. "The fact you can calculate hydrodynamical parameters from gravity was not understood." The answer: 1/4π, in the appropriate units. The viscosity measured by RHIC comes close. Water, some 400 times more viscous, is molasses in comparison. Surprisingly, the minimum value is the same for all fluids, whatever they are made of. Through the logic of duality, this universality has a simple explanation: viscosity is equivalent to a gravitational phenomenon, and according to Einstein's general theory of relativity, gravitation is blind to compositional details.

This is the line of reasoning Brustein hopes to flip around. The way he tells the story, it all started on an extended visit to CERN during the snowy winter in Europe two years ago. Brustein was out shoveling his driveway in the French village of Thoiry and got talking to his neighbor. Turned out the neighbor was the technical director of the ALICE experiment, which is CERN's answer to RHIC. Not long after, Brustein bumped into the ALICE team leader at a formal dinner. Clearly it was meant to be. Some months later, Brustein sat with ALICE scientists in the CERN cafeteria and sketched out his ideas on a napkin. Even if they don't work out, Brustein has at least checked off two items on physicists' list of 1,000 things to do before you die: (1) napkin sketch, and (2) CERN cafeteria, a storied hangout where scientists have come up with such ideas as the World Wide Web.

Brustein's insight was that viscosity is not the only fluid property you can measure. Shape is another. If the duality is valid, viscosity and shape will be related in a way that pins down the corresponding theory of gravity. "He's looking for new observables that are a bit more discriminating than viscosity," Venugopalan says. For instance, if Einstein's general relativity governs the gravitational dual, the minimum viscosity will equal 1/4π and the plasma should be spherically symmetrical. Nuclear physicists would not expect an ephemeral roiling fireball to have such symmetry, so this counts as a strong and significant prediction. "It's an actual way of proving the quark-gluon plasma has a gravitational dual," Brustein says.
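As a sanity check on that "400 times" comparison, here is a rough back-of-envelope calculation in Python. The water properties (dynamic viscosity of about 1.0 mPa·s and a standard molar entropy of about 70 J/(mol·K) at room temperature) are textbook values I am supplying for illustration, not numbers from the article:

```python
import math

# Conjectured lower bound on viscosity / entropy density: hbar / (4*pi*kB)
hbar = 1.0546e-34        # J*s
kB = 1.3807e-23          # J/K
bound = hbar / (4 * math.pi * kB)            # ~6.1e-13 K*s

# Room-temperature liquid water (assumed textbook values)
eta = 1.0e-3             # Pa*s, dynamic viscosity near 20 C
molar_entropy = 70.0     # J/(mol*K)
molar_volume = 18.07e-6  # m^3/mol
s = molar_entropy / molar_volume             # entropy density, ~3.9e6 J/(K*m^3)

ratio = (eta / s) / bound
print(f"bound = {bound:.2e} K*s, water eta/s = {eta / s:.2e} K*s")
print(f"water sits ~{ratio:.0f}x above the bound")   # roughly 400
```

The quark-gluon plasma, by contrast, comes within a small factor of the bound, which is what made the duality calculation so striking.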
Things get even more interesting if Einstein's theory is only an approximation to a deeper theory, as string theory holds. Then the viscosity value will differ from 1/4π and may no longer be universal among substances; the plasma shape will gain some angular structure (a quadrupolar correlation function, to be technical). So the experiment is able to probe post-Einsteinian physics. To be sure, these measurements would not probe the law of gravity that governs our universe, but only the law of gravity that is implicit in the plasma dynamics. That is to say, the plasma's fluid behavior can be thought of as related to some hypothetical universe where gravity acts a certain way. That universe may or may not be a model for ours. What the measurements would do, however, is test the general concept of duality, which currently has the status of a conjecture, and validate it as a tool in the search for a unified theory.

Brustein's biggest challenge is not the physics per se; it is to persuade RHIC and ALICE experimentalists to take the data he needs. Typically experimentalists measure just the numbers of particles coming out in different directions, rather than the details of the particles' energy and momentum. Venugopalan cautions: "Though I appreciate where Brustein is coming from and it would be indeed great if one can make an empirical determination of these questions, there are a large number of nontrivial issues to resolve before one gets there." Particle experimentalists are busy people these days and have no shortage of ideas for what to look for. So Brustein might have to eat a lot more cookies and shovel more driveways to convince them.

When You Fall Into a Black Hole, How Long Have You Got?

In chatting with colleagues after a talk this week, Joe Polchinski said he'd love to fall into a black hole. Most theoretical physicists would. It's not because they have some peculiar death wish or because science funding prospects are so dark these days. They are just insanely curious about what would happen. Black holes are where the known laws of physics come into their most direct conflict. The worst trouble is the black hole information paradox that Stephen Hawking loosed upon the world in 1976. Polchinski and his colleagues have shown that the predicament is even worse than physicists used to think. I first heard about their brainstorm while visiting the Kavli Institute for Theoretical Physics in Santa Barbara this spring, and the team (Polchinski and fellow Santa Barbarans Don Marolf, Ahmed Almheiri, and James Sully) wrote it up over the summer. Polchinski blogged about it a few months ago, and another theorist who helped to usher in the idea, John Preskill, did so last week. Polchinski's talk to the New York University physics department drew a standing-room-only crowd, not a single person snuck out early, and he was still fielding questions an hour after it ended. Almost as much has been written about Hawking's original paradox (including by me) as about the fiscal cliff, so I'll jump straight to the new version.

Step #1 of the argument is what Polchinski and his co-authors call the no-drama principle. According to current theories of physics, a black hole is mostly just empty space. Its perimeter, or event horizon, is not a material surface, but just a hypothetical location that marks the point of no return. Once inside, you are gripped too tightly by gravity ever to get back out. By then, falling at nearly the speed of light, you have a few seconds to look around before you reach the very center and get crushed into oblivion. But nothing noticeable should happen at the moment of crossing. One of Einstein's great insights was that observers who are freely falling, whether into a black hole or toward the ground, don't feel the force of gravity, since everything around them is falling, too. As they say, it's not the fall that kills you; it's the sudden stop at the end. An outside observer knows you're doomed, but likewise doesn't think anything untoward happens upon passing through the event horizon. Indeed, this observer never sees anything actually cross over. Because of a kind of gravitational mirage, things seem to slow down and freeze in time. All the stuff piling up at the horizon forms a ghostly membrane, which obeys the usual laws of physics and has conventional properties such as viscosity and electrical conductivity.

Step #2 is to relate these two viewpoints. To the infalling observer, space looks like a vacuum, and in quantum theory, a vacuum is a very special state of affairs. It is a region of space that is empty of particles. It is not a region that is empty of everything. There's no getting rid of the electromagnetic field and other fields. (If you could, the region would not merely be empty, but nonexistent.) A particle is nothing more or less than a vibration of one of these fields, and what makes a vacuum a vacuum is that all the possible vibrations cancel one another precisely, leaving the fields becalmed. To maintain this finely balanced condition, the vibrations must be thoroughly quantum-entangled with one another. To the outside observer, the horizon (or membrane) cleaves space in two, and the vibrations no longer appear to cancel out.
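For readers who want to see the structure behind that statement, the textbook Rindler/Unruh-type decomposition of the vacuum makes the entanglement explicit. This is a standard illustration I am adding, not a formula from Polchinski's paper; schematically, with κ the horizon's surface gravity and ħ = c = k_B = 1:

```latex
% Schematic: the vacuum seen by the infalling observer, rewritten in terms of
% field modes localized inside and outside the horizon (standard Unruh-type form).
\begin{equation}
  |0\rangle \;\propto\; \prod_{\omega} \sum_{n=0}^{\infty}
      e^{-\pi n \omega / \kappa}\,
      |n_{\omega}\rangle_{\mathrm{inside}} \otimes |n_{\omega}\rangle_{\mathrm{outside}}
\end{equation}
% Tracing out the "inside" modes leaves the outside observer with a thermal
% state at the Hawking temperature T_H = \kappa / (2\pi): what looks like empty
% space to one observer looks like a bath of correlated particle pairs to the other.
```

Each Hawking-radiation particle in the argument below is one half of such a pair, with its partner behind the horizon.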
It looks like there are particles flying off in every direction. This is perfectly compatible with the infalling observer's viewpoint, since the fields are what is fundamental and the presence of particles is a matter of perspective. To put it differently, emptiness is a holistic property in quantum physics: true for a region of space in its entirety, but not for individual subregions. For consistency between the two viewpoints, the outside observer infers that each particle he or she sees has a doppelgänger inside the horizon. The two are quantum-entangled, like those particles in laboratory experiments you read about. (Watch this lighthearted video that my colleagues made earlier this year to explain entanglement.) Individually, both particles behave completely randomly, but together they form a matched pair: the infalling observer sees the vacuum state a, while the outside observer sees entangled particles b and b′. Particle b is part of what physicists call the Hawking radiation.

Step #3 is to consider the long-term fate of the hole. Like everything else in this world, black holes must decay; quantum mechanics mandates it. In the process, a hole must gradually release everything that fell in. If Joe Polchinski jumps into a black hole, he will get scrambled with all the other theorists who have done the same, and the morbid gruel will emerge particle by particle in the Hawking radiation. Though mangled beyond recognition, each martyr to the cause of knowledge can still be separated out and pieced back together. To enable this reconstruction, the particles of the Hawking radiation must be thoroughly entangled with one another. So, by step #2, each particle flying away from the hole must be thoroughly entangled with its doppelgänger inside the hole. By step #3, the particle must also be thoroughly entangled with other particles that are flying away from the hole. These two conclusions clash, because quantum

mechanics says that particles are monogamous. They cant be thoroughly entangled with more than one other partner at a time. They can be partially entangled, but that is not enough to ensure consistency between the observers view or to reconstruct the infalling physicists. This formulation of the black-hole paradox vindicates Hawkings original argument. For years physicists hoped that the devil lay in the details that more precise calculations would reveal an escape routeonly to be serially disappointed. Now they have officially given up hope. One of the basic premises must be wrongwhich is to say, something deep about modern physics must be wrong. You need huge changes, not just quantumgravitational corrections, to invalidate Hawkings argument, Polchinski told the assembled multitudes at NYU. More surprisingly, Polchinski and his co-authors have shown that a popular approach known as black-hole complementarity, championed by Leonard Susskind of Stanford University, isnt up to the task, either. Susskind reasoned that, although infalling and outside observers might see different and mutually incompatible events, no single observer can be both infalling and outside, so no single observer is ever faced with a direct contradiction. In that case, the paradox is only ever conceptualsuggesting it is somehow illusory, the product of thinking about the situation in the wrong way. But Polchinski and colleagues showed that a single observer can catch a particle in the act of polygamy by first lingering outside the hole and then jumping in. The least radical conclusion is that the no-drama principle is false. Someone falling into a black hole doesnt pass uneventfully through the horizon, but hits a wall of fire and is instantly incinerated. I think its crazy, Polchinski admitted. But in order for a black hole to decay and its contents to spill out, as quantum mechanics demands, the infalling observer cant see just a vacuum. The firewall idea strikes me as similar to past speculation that black holes are somehow material objectsso-called black starsor dark matter starsrather than merely blank space. I spent 20 years confused by this, Polchinski said, and now Im as confused as ever. It would be nice to answer the question, if only so that no one ever has to undertake the journey to answer the question. Winter Storm Bears Down on Midwest After Dumping Snow on Rockies The first major winter storm of the season, which started Tuesday in the Rocky mountains, could dump more than a foot of snow in some areas of the central Plains late Wednesday, the National Weather Service said. CHICAGO (Reuters) - The first major winter storm of the season, which started Tuesday in the Rocky mountains, could dump more than a foot of snow in some areas of the central Plains late Wednesday, the National Weather Service said. "It has evolved into a full-fledged blizzard around the Colorado, Nebraska and Kansas border area..." said Alex Sosnowski, meteorologist for Accuweather.com. "It's a pretty nasty storm." He said the wind attached to the storm is also blowing dust in the West Texas area, causing traffic accidents. In Colorado, Interstate 70 was closed east of Denver to the Kansas state line due to high winds blowing snow into drifts and reducing visibility, said Mindy Crane, spokeswoman for the Colorado Department of Transportation. Several other roads in eastern Colorado were closed because of the blizzard conditions, she said. 
Crane also said a stretch of Interstate 70 in the mountains near the ski resort of Vail was closed temporarily on Wednesday so crews could do some work to prevent avalanches. The storm marks a major change from the mild December so far in most of the nation, Sosnowski said. This means many parts of the country could see a White Christmas. More storms are expected in the middle of next week. Blizzard warnings have been issued Wednesday in parts of Colorado, Nebraska, Kansas, Iowa, Missouri, Minnesota, Wisconsin and Michigan, meteorologists said. The heaviest snow is falling at a rate of up to an inch per hour in parts of Nebraska, Kansas and Colorado. The worst of the blizzard is expected to hit communities from Omaha, Nebraska, to Green Bay, Wisconsin, Wednesday night into late Thursday, according to Accuweather.com. In Chicago, the storm is expected to begin as rain and later change to snow Thursday, Sosnowski said. Heavy snow and high winds were expected anywhere from the central plains into the Midwest/Great Lakes regions through much of the day Thursday, the National Weather Service said. Hazardous travel conditions were expected through Thursday and into early Friday.

Moisture off the Gulf of Mexico is expected to cause rain in the lower Mississippi River Valley Thursday, pushing east into the southeastern states Friday. In the West, a system along the Pacific coast will bring scattered snow and rain showers into the northwestern states, according to the weather service. Over a foot of snow is expected in the higher elevations of the Washington Cascades and upper Rockies. Guest Post: Are Microgrids the Key to Energy Security? Energy independence is a concept that has become part of the political lexicon and touted as a panacea for a downturn economy. Recently, the concept has morphed into energy security which encompasses not only a domestic abundance of energy resources, but freedom from energy market manipulation. Still, there are numerous and conflicting definitions for energy security. Does energy security mean using only renewable or carbon-neutral energy resources to prevent further anthropogenic global warming? How do fossil fuels, particularly natural gas, fit into a secure energy future? One thing is certain we know an energy security failure we when we see itor worse, experience it. The aftermath of Superstorm Sandy was the most recent example of how vulnerable society is to disruptions in energy supply. According to the Department of Energy, more than 8.6 million customers were without power following Sandy, more than any other storm in history. However, amidst the extensive Northeast blackouts were islands of power that may point the way to true energy security. Microgrids kept the lights on when the electric transmission system failed. A microgrid is the interconnection of local generating resources and electric users (loads) optimized for reactive and sustainable operation. As opposed to the large, centralized generating plants that provide the backbone of the transmission grid, microgrids utilize distributed generation (DG), which can be derived from conventional generators, fuel cells, efficient combined heat and power systems, and renewable energy sources. Although microgrid systems may normally be connected to regional transmission networks, they also have the ability to be self-sustaining or islanded when the electric grid goes down. The Climate Change Case for Microgrids Over the last two years (2011 and 2012) fourteen (14) extreme weather events, each causing more than a billion dollars in damage have occurred in the U.S., according to the Center for American Progress. Many of these events have caused widespread power outages. Ongoing drought is also a huge concern, as many energy technologies rely heavily on water, including steam electric power and natural gas fracking. According to the International Energy Agency (IEA), 50 percent of global water usage is for energy production. While nationally there has not been a coordinated policy effort to address energy security impacts from climate change, the situation is causing some states to investigate microgrids as a solution. After a series of storms walloped the state with large-scale outages, Connecticut is exploring policy to encourage microgrid development. Connecticut will be an important case study in how policy must be crafted to facilitate adoption, and the issues involved with community-based microgrids including parity, utility involvement, and economics. The sustainability of energy supply amidst emergencies that take down regional power systems has been a primary driver for microgrids. 
Consider the difference between the microgrid approach and centralized power generation. Centralized power generation relies heavily on large baseload nuclear, coal, and natural gas plants. Disruption of power between the centralized generating plants and the delivery of that electric to end users can occur anywhere along the network of transformers, transmission lines and substations that is hundreds of miles long. Vulnerability is not limited to winds and flooding. Voltage instabilities, equipment malfunctions, unplanned generator outages, terrorist attacks, ice, and lightning can cause widespread blackouts. In contrast, a microgrid system has multiple (and often diverse) generating sources as well as energy storage capability that are local to end users. Control is also local, allowing responsiveness to instabilities in the transmission grid, compensating load reduction, and efficient deployment of available generation. The U.S. military has been exploring the use of microgrids for obvious energy security needs during field operations. A recent Department of Defense (DoD) study cataloged 44 existing, planned, or demonstrated DoD microgrid installations. Other applications for microgrids include remote areas that do not have access to larger transmission networks, hospitals, data centers, and other mission critical systems that cant afford to lose power. Two noteworthy institutional microgrids are the Santa Rita jail and the living laboratory microgrid at UC San Diego. In the Netherlands, PowerMatching City is a 22-home community where advanced microgrid technologies are being demonstrated. These microgrid

systems provide valuable technology vetting and learning opportunities. Globally, Pike Research has identified a total of 3.2 gigawatts (GW) of existing microgrid capacity. The Sustainability Case for Microgrids Besides improving reliability, microgrids offer other benefits including energy efficiency and integration of renewable energy sources. Microgrids employ sophisticated technology architecture and controls to allow demand response, optimizing loads in response to changes in generation, and switching to islanded operation if disruptive events in the regional transmission grid occur. Microgrids are designed and customized to the mix of electric (and sometimes heat) needed for a particular mixed-use community or installation, and allow automated adjustable and sheddable loads to improve efficiency and reliability. And because microgrids often use renewable energy sources and fuel cells, theysupport carbon reduction and green living goals. Renewable energy sources are well suited for microgrids for a couple of reasons. Regional electricity balancing authorities consider most renewable sources as stochastic (unpredictable) and for forecasting purposes treat them as negative loads rather than a generation source. Adequate operating reserve must be available from generating sources to meet the highest projected demand for a given time of day. That means a great deal of redundancy (some would call it waste) in generating resources. Further, if a renewable source such as a wind farm produces more generation than expected, the transmission network must compensate for potential voltage and frequency fluctuations along transmission lines caused by that increased power. By matching renewable generation to demand on the load side (locally) and utilizing energy storage, microgrids help smooth out the variability in renewable generation delivered to the grid. A hybrid power supply also reduces reliance on traditional generator fuels such as diesel, propane, and gasoline. Making the Economic Case for Microgrids Even with a renewed attention on the energy security benefits of distributed generation and microgrids, the technology requires large upfront investment which can be a barrier to entry. Siemens, a key developer of microgrid power generation resources and management software, has estimated that a microgrid to support a 40 megawatt (MW) load can require an investment upwards of $150 million. Although large-scale energy storage has been cost prohibitive, the smaller scale of microgrid storage, efficiency improvements, and the ability of local distribution networks to manage intermittency are expected to improve the economics. In addition to enhanced energy security, the economic justification for microgrids includes energy savings, efficiency improvement, and reduced emissions. Because there are numerous technology options for generating resources, energy storage, smart meters, transformers, control system architecture, and communication networks, microgrid planning is a complicated exercise in investment optimization. The impact of each of these choices on the system cost and return on investment (ROI) is not obvious. DNV KEMA, an energy and sustainability company with extensive experience in smart grids, microgrids, and energy markets has developed an intuitive visualization tool to assist with master planning of microgrids. 
DNV KEMA's proprietary Microgrid Cost/Benefit Analysis model evaluates the financial decisions for a range of technologies including generation, energy storage, building efficiency, load automation, thermal load management, distributed system infrastructure, telemetry and controls. The location-specific optimization tool allows the user to evaluate the cost, ROI, emissions performance, reliability, and occupancy rate (i.e., for mixed-use developments) while evaluating uncertainty and risks associated with climate, technology costs, energy prices, and changing demand. Such optimization tools help identify long-term investment approaches, track energy balances, and quantify the duration of support for critical loads. In addition to making the business case, microgrid optimization models can also inform policymaking by comparing the impacts of different rate structures, incentives, and new technologies.

Despite the cost barriers, a recent survey of smart grid executives commissioned by IEEE reported that hospitals and healthcare institutions were the largest expected market for microgrids over the next five years. The report concludes that private- and public-sector funding for microgrid, DG, and grid-level storage projects would advance cost-effective application of these technologies. By 2020, the global microgrid market is projected to reach $13.40 billion, a nearly three-fold increase from 2012 investments.
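To make the kind of trade-off such tools weigh a little more concrete, here is a deliberately simplified payback sketch in Python. Every number and the bare-bones formula are hypothetical placeholders of my own, not parameters from DNV KEMA's model or the Siemens estimate:

```python
# Toy microgrid cost/benefit sketch. All figures below are made-up placeholders
# for illustration; a real model would also price tariffs, emissions, and
# uncertainty in fuel costs, technology costs, and demand.

capex = 150e6                        # upfront investment, $ (hypothetical)
annual_energy_savings = 10e6         # $/yr from efficiency, peak shaving, CHP heat (hypothetical)
annual_outage_cost_avoided = 8e6     # $/yr of avoided downtime for critical loads (hypothetical)
annual_om_cost = 3e6                 # $/yr operations and maintenance (hypothetical)
discount_rate = 0.07
lifetime_years = 25

net_annual_benefit = annual_energy_savings + annual_outage_cost_avoided - annual_om_cost

# Net present value over the project lifetime and a simple payback period
npv = -capex + sum(net_annual_benefit / (1 + discount_rate) ** t
                   for t in range(1, lifetime_years + 1))
simple_payback = capex / net_annual_benefit

print(f"NPV over {lifetime_years} yr: ${npv / 1e6:,.1f} million")
print(f"Simple payback: {simple_payback:.1f} years")
```

Even in this toy version, the answer swings from attractive to unattractive depending on how much avoided downtime is assumed to be worth, which is exactly why microgrid planning is treated as optimization under uncertainty rather than a simple payback calculation.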

Regulatory Hurdles

Policy and regulatory hurdles complicate microgrid development and can make the economics less than favorable. Issues such as regulations governing generating asset ownership, classification of a microgrid as a distribution or steam utility under state laws, utility legal responsibility as the provider of last resort, grid interconnection, transmission charges, rights of way, state policies on net metering that don't apply to microgrids, and feed-in tariff structures for renewable generation present legal and regulatory hurdles. New York State conducted an extensive assessment of regulatory definitions and legal requirements to which microgrids would be subjected, and developed a roadmap for facilitating microgrids in the state. The comprehensive report serves as a valuable tool for developing state-level policies. Utility response to microgrid opportunities has been tepid, in part due to a lack of established microgrid standards. In 2011 IEEE published the Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems, and the Federal Energy Regulatory Commission proposed implementation standards for demand response, which provided much-needed engineering protocols. The utilities at the forefront of microgrid development are rural electric cooperatives, with service areas that do not have the option of connecting to a larger transmission grid. Because of the regulatory and economic challenges, microgrids will likely remain a niche application over the next several years. But as the costs for energy storage, renewable generation, and smart grid automation become more competitive, microgrids will play an expanding role in the quest for energy security.

There's Something in the Air: Trans-planetary Microbes

Cover your mouth when you cough! We've all learned the hard way that microbial organisms, from bacteria to viruses, can be transported by air. But the extent to which organisms exist in the Earth's atmosphere is only now becoming clear. There is good evidence that bacteria (or bacterial spores) can help nucleate water condensation, seeding clouds and encouraging precipitation. It has been speculated that this could even form part of a bacterial life-cycle, lofting organisms into the air, transporting them, and bringing them back to earth for fresh pastures. There's also growing evidence for just how widespread airborne microbial ecosystems might be. Scientists in Austria have found bacteria in cloud droplets at 10,000 feet, as well as clear signs that these microbes are not just passengers: they're actually growing and reproducing in situ in the super-cooled water environment. This suggests that clouds are quite literally another habitat for life on Earth, and with an average covering of 60% of the planetary surface they represent a pretty major ecosystem. Now a new study finds that dust plumes in the troposphere are carrying over 2,000 distinct species from Asia to North America, right across the Pacific Ocean. Some of these organisms are fungal, but at least 50% are bacterial, and they make the trans-planetary journey in only 7-10 days when storms loft them as high as ten miles into the atmosphere. This might not seem so surprising; we know that single-celled organisms occupy almost every niche on the planet. However, it does seem that the Asian microbes represent a distinct population that's usually only a trace on the continental USA, but when the wind blows, their numbers in the western hemisphere definitely increase significantly.
This means that there is real mixing of species going on, a microbial pollution that may have consequences for all manner of things, including local ecosystem function and even disease. Its fascinating stuff. This kind of transportation must have been going on across all three to four billion years of life on Earth, leading us to wonder exactly what role it may have played in maintaining the global biosphere. Its also food for thought in considering the potential ecosystems of Mars, a place where planet-wide dust stormsregularly loft particles high into the atmosphere. Which World Will We Face in 2030? Last week, I and some 200 other attendees of theGlobal Trends 2030: U.S. Leadership in a Post-Western World conference got a thoughtprovoking look at the current megatrends leading to four possible futures for the world some 10 to 15 years from now. Cutting across all of them is the disruptive influence of emerging technologieswhich was the theme of the panel I moderated at the event, held at Newseum in Washington, D.C., on December 10 and 11. The main subjects of the conference were the U.S. National Intelligence Councils Global Trends 2030: Alternative Worlds report, which was released with the Atlantic Councils Strategic Foresight Initiatives companion opus, Envisioning 2030: US Strategy for a Post-Western World.

The conference brought together policy leaders, technology experts, business leaders and futurists for an expansive discussion of how the U.S. should respond to global trends. If were wise and steady, we can navigate current transition to a better world, said Chuck Hagel, chair of the Atlantic Council and a former U.S. Senator, during his opening remarks. The 2030 report dubs the four futures: Stalled Engines (the U.S. draws inward and globilization falters); Fusion (China and the U.S. collaborate broadly, leading to greater global cooperation); Gini-Out-of-the-Bottle (inequalities increase disruptive social tensions and the U.S. is no longer global policeman); and Nonstate World (with emerging technologies, nonstate actors take the lead in confronting global challenges). How the U.S. turns out will affect all the other game changers, said Frederick Kempe, president and CEO of the Atlantic Council, referring to the broad trends identified in the report, such as a crisis-prone global economy and the impacts of emerging technologies. But, he added, At the same time, tech center of gravityinnovation and so onis moving away from the U.S. as we speak. Ultimately, he warned, The U.S. will either dynamically shape trends thru 2030 or be unfavorably shaped by them. Other factors in that shaping will include collaborating with other nations and the economy. In the panel I moderated, Emerging Technologies that Could Change Our Future, we explored several themes that brought home the yin and yang of any technology: how it can be both a tool for our benefit or detriment. As youll see in the video below, panelist Mikael Hagstrom, executive vice president, Europe, Middle East, Africa and Asia Pacific for SAS, kicks us off with some thoughts on the need for governance for digital assets. Paul Saffo, managing director of Foresight, Discern Analytics; Senior Fellow, Strategic Foresight Initiative, The Brent Scowcroft Center on International Security, Atlantic Council, identifies the problem of new technologies that increase wealth without creating many jobs. And Gen. James E. Cartwright, Harold Brown Chair in Defense Policy Studies, Center for Strategic and International Studies, talks about how moving knowledge around is key to success in many venuesfrom military engagements to the kinds of man-machine interfaces in advanced prosthetics. Along the way, we touched on the force of social media; the potential of additive, or 3-D printing for manufacturing; robotics and automation; remote operation of devices; and even transferring knowledge stored on a chip from one brain to another. All three men were incredibly articulate about these complex issues, and I hope you enjoy the discussion. Intensive Weight Loss Programs Might Help Reverse Diabetes Type 2 diabetes has long been thought of as a chronic, irreversible disease. Some 25 million Americans are afflicted with the illness, which is associated with obesity and a sedentary lifestyle, as well as high blood pressure.Recent research demonstrated that gastric bypass surgerya form of bariatric surgery that reduces the size of the stomachcan lead to at least temporary remission of type 2 diabetes in up to 62 percent of extremely obese adults. But can less drastic measures also help some people fight back the progressive disease? A new randomized controlled trial found that intensive weight loss programs can also increase the odds that overweight adults with type 2 diabetes will see at least partial remission. 
The findings were published online December 18 in JAMA, The Journal of the American Medical Association. "The increasing worldwide prevalence of type 2 diabetes, along with its wide-ranging complications, has led to hopes that the disease can be reversed or prevented," wrote the authors of the new paper, led by Edward Gregg of the Centers for Disease Control and Prevention. The study tracked 4,503 overweight adults with type 2 diabetes for four years. About half of the subjects received basic diabetes support and education (including three sessions per year that covered diet, physical activity and support). The other half received more intensive lifestyle-intervention assistance. This second group received weekly individual and group counseling for six months, followed by three sessions each month for the next six months, and refresher group sessions and individual contact for the subsequent three years. The interventions aimed to help individuals limit daily calories to 1,200 to 1,800, in particular by reducing saturated fat intake, and to help them get the recommended 175 minutes per week of physical activity. After two years, about one in 11 adults in the intervention group experienced at least partial remission of their diabetes, meaning that a patient's blood sugar levels reverted to below diabetes diagnosis levels without medication. Only about one in 60 in the control group, which received only basic support and education, saw any remission after two years. The findings suggest that partial remission, defined by a transition to prediabetic or normal glucose levels without drug treatment for a specific period, is an obtainable goal for some patients with type 2 diabetes, the researchers noted.
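To put those remission rates side by side, here is a minimal back-of-the-envelope sketch in Python. It uses only the approximate figures quoted above ("one in 11" versus "one in 60"), not the exact counts from the JAMA paper, so the numbers are illustrative rather than the study's own statistics.

# Rough comparison of the two-year partial-remission rates quoted above.
# Approximate figures from the article ("one in 11" vs. "one in 60"),
# not the exact counts reported in the JAMA paper.
intervention_rate = 1 / 11   # intensive lifestyle-intervention group
control_rate = 1 / 60        # basic support-and-education (control) group

absolute_difference = intervention_rate - control_rate
relative_risk = intervention_rate / control_rate

# Number needed to treat: roughly how many patients would need the intensive
# program for one additional partial remission at two years.
nnt = 1 / absolute_difference

print(f"Intervention: {intervention_rate:.1%}, control: {control_rate:.1%}")
print(f"Absolute difference: {absolute_difference:.1%}")
print(f"Relative risk: about {relative_risk:.1f}x")
print(f"Number needed to treat: about {nnt:.0f}")

Run as written, this prints roughly 9.1 percent versus 1.7 percent, a relative risk of about 5.5 and a number needed to treat of about 13, which is one way to see why the authors call partial remission obtainable but far from guaranteed.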

The improvement, however, was not indefinite for everyone. After four years, only about one in 30 people in the intervention group were still seeing an improvement in their condition. Researchers think that regaining weight and falling behind on diet and physical activity goals increase the risk that people will return to a diabetic state. About one in 75 in the intervention group saw complete remission of their diabetes, in which glucose levels returned to normal without medication. The study did not find, however, that individuals in the lifestyle intervention group had lower risks for heart trouble, stroke or death than did those in the control group. "This recently led the National Institutes of Health to halt the [trial]," noted David Arterburn, of Group Health Research Institute in Seattle, and Patrick O'Connor, of HealthPartners Institute for Education and Research in Minneapolis, in an essay in the same issue of JAMA. Similar results have come out of studies looking at more intensive medical treatment of diabetes. "A more potent intervention, bariatric surgery, already appears to achieve what intensive medical and lifestyle interventions cannot: reducing cardiovascular events and mortality rates among severely obese patients with type 2 diabetes," they noted. As with any disease, however, prevention is the best strategy. "The disappointing results of recent trials of intensive lifestyle and medical management in patients with existing type 2 diabetes also underscore the need to more aggressively pursue primary prevention of diabetes," Arterburn and O'Connor noted. One recent study found that compared with no treatment at all, lifestyle interventions reduced the onset of type 2 diabetes by 58 percent in people with pre-diabetes (and the medication metformin reduced the onset rate by 31 percent). Bariatric surgery seemed to reduce the onset of diabetes in obese patients by 83 percent, Arterburn and O'Connor pointed out in their essay. For people who already have diabetes, however, those who are still in the early stages and those with the biggest weight loss and/or fitness improvements have the best odds of beating the disease. And even if lifestyle interventions aren't capable of dialing back the disease entirely, any reduction, whether through lifestyle or other changes, in the need for medication and in medical complications due to diabetes can be considered an improvement in managing the disease, which already costs the U.S. health system $116 billion each year and is estimated to affect 50 million Americans by 2050. A Clinical Trial and Suicide Leave Many Questions: Part 3: Conflict of Interest We've touched on some of the many disturbing things that happened during the clinical trial in which Dan Markingson committed suicide. In my first post, I asked how a psychotic, homicidal patient who was involuntarily hospitalized in a psychiatric hospital could give informed consent for participation in a clinical trial. There appeared to have been abuse of a vulnerable patient and extraordinary coercion: participate in this trial or be committed to a psychiatric hospital seems to have been the bottom line. In my second post, we looked at investigator responsibilities, delegation of authority, and Good Clinical Practice tenets, all of which were violated with no consequences. Now we turn to the need to disclose conflicts of interest (COI), again a basic clinical research ethics principle that was violated. There are so many obvious conflicts of interest that it is hard to know quite where to start.
Physicians The most obvious and egregious COI was that shown by Dr. Stephen Olson, who acted as both Dan Markingson's treating physician and as Principal Investigator on the CAF study. As Dr. Harrison Pope, a Harvard expert, concluded in his testimony, Olson "failed to meet the standards for good clinical practice both as a principal investigator and as the study physician for Mr. Markingson." He failed his ethical responsibilities to Dan by enrolling him in a clinical trial he was incapable of consenting to, particularly over the objections of Dan's mother, Mary Weiss; by not dropping Dan from the protocol when Dan was clearly not showing improvement; by not examining Dan closely and regularly; and by improperly acting as both Markingson's study physician and his treating physician. Dr. Olson's signature appears only twice throughout all of the study documents from 12/8/2003 until Dan's death on 5/8/2004 (p. 30, Pope testimony). According to Ms. Weiss, Dan had told her that Dr. Olson would, at most, stick his head in occasionally at study visits. The medical records confirm that impression. Olson likely would have had to examine Dan more frequently if he were not on a study, as he would not have had the additional staffing funded by the study to have a surrogate see his patients.

The conflict between Dr. Olson's roles as Dan's personal physician and as researcher is further detailed by Dr. Pope. For example, it likely kept Dan from getting a second opinion and from getting additional medications. The protocol also prohibited Olson from measuring blood levels of the anti-psychotic, which would have shown whether he was being compliant in taking his meds. Olson reportedly misled the IRB, leading them to believe that Markingson's enrollment had been agreed to by his case manager, when he hadn't yet been seen by Mr. Pettit. He also failed to inform them of Dan's inability to consent and of the fact that the judge had granted a stay of commitment for Dan if he followed recommendations. The NIH website summarizes the problem succinctly: "If an investigator is also the personal health provider of the potential research participant, there may be an additional conflict of interest. A physician's duty is to honor the best interests of the patient. An investigator must do what is best for the study. These two objectives are not always consonant. Further, potential participants may be reluctant to question the advice of a health provider on whom they depend for care." Olson's involvement, and that of the department chairman and co-investigator, Dr. Charles Schulz, appear to have been significantly driven by financial interests, though they had other incentives, too, such as increased prestige and publications. Financial Conflicts of Interest There were major financial incentives for both Drs. Olson and Schulz, as well as for the university. Dan Markingson, aka Subject 13, was worth over $15,000 to the university, had he completed the study. The CAF study not only yielded $327,000 but also helped generate more attention (and probably more trials) for the university's schizophrenia program. This particular study was structured so that patients had to complete the trial in order for the full payment to be received by the institution. Thus, dropping the patient because the study drug was ineffective, or adding additional medications, was prohibited and would have resulted in significant financial penalties, and this was not explained in the informed consent. Because the study period was one year, Olson had to keep Markingson on the trial to get the full payment; I suspect this contributed to his recommendation that Dan's stay of commitment be extended to keep Dan a captive participant. Dan was far more valuable to the University and the psychiatrists as a study patient than as a regular psychiatric inpatient. As Dr. Pope noted above, ancillary personnel could do many of the assessments. And Dan was destitute, so any care they provided him outside of the study would have had minimal reimbursement; we often didn't feel that the payments from Medical Assistance even covered the cost of billing, let alone the medical care provided. (Note: not all of the grant money goes to the investigator. Much of it has to be disbursed to the university for direct costs and for its administrative overhead charge, often 50% of the total grant. Physicians are also expected to generate monies to cover much of their salaries.) According to the Pioneer Press in 2008, Olson received $220,000 from six companies since 2002, including $149,000 from AstraZeneca, according to the state records. Schulz received $562,000, including $112,000 as a researcher and consultant to AstraZeneca. Yet these significant COIs were not disclosed to study participants. When Dr. Schulz replied to the anguished Ms.
Weiss's third letter (her first two went unanswered), he did not disclose that he was a principal investigator, nor did he forward her serious concerns to the IRB. I was shocked to read Dr. Schulz's testimony that he hadn't even read the consent form for a study in which he was a co-investigator (and was listed on the FDA Form 1572 as being a responsible party) (p. 156, Schulz deposition). Nor was Schulz, a leading schizophrenia researcher, aware that informed consent requires disclosure of financial COIs: At another point, plaintiff attorney Barden read an excerpt from a bioethics book arguing for the importance of informing patients about a doctor's financial ties to drug companies. "Do you agree or disagree with that statement?" asked Barden. "I don't agree with that statement," replied Schulz, arguing that disclosing this information could confuse the situation. "Have you had any training in biomedical ethics?" pressed Barden. "I've taken the courses at the University of Minnesota that are required for us to participate in clinical research." "And isn't this part of that training?"

"I'm not aware," said Schulz. "I don't recall that." CROs Contract research organizations (CROs) are intermediaries hired by a sponsor (generally a pharmaceutical company) to administer a clinical trial. This used to be done by the pharma company itself; as the business has become more competitive, sponsors now generally farm that work out to avoid having to maintain their own staff. Quintiles, responsible for the CAF study, is the largest CRO, capturing 14 percent of the lucrative $11.4 billion global market. CROs put a lot of pressure on sites to enroll. Delays in completing the approval of a new drug cost from $684,931 to $1 million per day, and slow recruitment is the major factor. So they try to motivate their sites to enroll patients, sometimes with encouragement (holding them up as examples to other sites or similar recognition) or promises of further studies, sometimes by threats of having the trial taken away. They employed both tactics to manipulate the study coordinator, Jeanne Kenney, as illustrated by their e-mails here and here. While CROs can often provide helpful tips from their experience doing trials, I was flabbergasted to read, "In the CAF study, for instance, a Quintiles study monitor suggested that each of the CAF study site coordinators try recruiting subjects at homeless shelters." This is so coercive and unethical, it almost defies belief... but the longer I am in this field (and in medicine), the more disillusioned I have become. I should no longer be surprised by anything, I'm afraid. University It's not just individual researchers who benefit from industry-sponsored research; the entire university feeds happily at the trough. At the University of Minnesota alone, in fiscal 2010, business and industry sponsored $35.4 million in research spending at the U, or 5.4 percent of total research expenditures of $653.6 million. And between 2002 and 2008, drug companies gave $88 million in gifts, grants and fees to Minnesota doctors and caregivers, including $782,000 just to the University's two psychiatrists. The CAF study initially suffered from poor enrollment, so much so that Quintiles, the CRO administering the study, had placed the site on probation and threatened to terminate it. So the university established a 16-bed specialty psychosis unit at Fairview hospital, known as Station 12, in part to enhance the hospital's startup schizophrenia program and meet the U's mandate to bring in more research dollars. According to a CAF Study Coordinator Teleconference, on this unit "All patients are reviewed for possible research candidacy. Research staff are in contact with nurses, case managers, and attending psychiatrists daily. Research staff attend morning report before inpatient rounds take place. The focus is to identify any possible subjects that may be eligible for studies. This is also focused on building a repertoire [sic] with psychiatry residents, who are often much easier to approach than attendings." So we have a University department chairman demonstrating leadership by being unaware of basic research tenets and by saying that disclosure of financial COIs would just cause confusion. What an exemplary role model. Of course, there are other COIs and pressures as well: successful enrollment begets publications, more fame, and more research grants from other sponsors. It also seems to bring considerable protection from those in power at the university. IRB The IRB holds a fair measure of blame in this tragedy as well, though it has admitted no wrongdoing.
The IRB is paid by the study to review the protocol and consent and, in theory, to provide oversight. IRBs have had their own share of bad press. For example, the Johns Hopkins IRB received scathing criticism for failures of oversight in an asthma study that led to the death of a healthy volunteer, Ellen Roche; the Office for Human Research Protections (OHRP) temporarily suspended all federally funded research involving human subjects at Hopkins. Even back in 1996, the GAO raised alarms that IRB workloads were too heavy and precluded thorough review. The load on IRBs is well described in Trials and Tribulations.

Funding for IRBs comes from the study sponsors; in this case, AstraZeneca likely paid the U. Minnesota IRB fee. And while the fees are thousands of dollars, the workload of most IRBs results in just minutes spent on each protocol. (Note: individual members of a committee are not paid for their reviews; the funding goes to the institution for administrative overhead.) There are also nonfinancial ethical conflicts of interest involving IRBs. These may be due to excessive personal involvement or prejudgment by the experts on the board or, increasingly, to competition between the PI and IRB members. Academic institutions have an incentive to approve studies, so that they can tout that they are providing state-of-the-art medical care and that they have been selected as a research site over their competitors by a leading pharmaceutical company. But IRBs have certain responsibilities. As noted even on the U Minnesota website, "The IRB reviews research projects which involve human subjects to ensure that two broad standards are upheld: first, that subjects are not placed at undue risk; second, that they give uncoerced, informed consent to their participation. The IRB works with investigators to modify projects to ensure adequate protection for its subjects' welfare and right of self-determination." The University's process for protecting human research subjects reflects federal regulations developed in response to such cases as the Public Health Service syphilis study and the U.S. government radiation experiments. Yet Moira Keane, the director of the IRB (and now a Board member for PRIM&R) responsible for overseeing the CAF study, was asked in her deposition, "So it's not the Institutional Review Board's purpose to protect clinical trial subjects, is that what you're saying?" And she replied, "Yes." Conclusion In the Markingson case, there are apparent conflicts of interest on multiple levels. The investigators had significant financial incentives to enroll patients, and not to alter therapy even if a patient was not responding to it. Shockingly, the investigators claimed ignorance of basic ethical principles, including the requirement to inform potential subjects of financial conflicts of interest, despite holding positions of responsibility and authority. And, at each level, they absolved each other of any responsibility for this young man's death. With failures at multiple levels of supposed safeguards, and knowing that some of the same staff provided the assessments and care for patients on the AstraZeneca study as on a major NIH trial, the CATIE study, it seems essential that OHRP and an independent review examine not only the CAF study but also take a careful look at whether similar breaches tainted the NIH trials. Who will step up? [I will continue this series in early January, looking a bit at the NIH and CATIE concerns and the University's troublesome response to this case. As Matt Lamkin just noted, "It can be difficult to keep up with the research scandals at Minnesota's Psychiatry Department over the years," so we still have lots of questions to bring up.]

Link love: December 2012 Some interesting, insightful, or amusing things I've been reading this week. The DSM-V is out. I'm not a psychologist, but the DSM, or Diagnostic and Statistical Manual of Mental Disorders, is still important to my research and, as someone who teaches evolutionary medicine, most especially to my teaching. I have been teaching the shift from the DSM-IV to DSM-V (excuse me, I guess it's DSM-5 now) for the past several years, with students doing a close reading of the proposed changes, or projects on some of the new diagnoses. It will be interesting this year to have a finalized document to talk about, as well as the reactions to it. Two of the main ones I'll be assigning: "The DSM-5 has been finalized" by Vaughan Bell. Bell summarizes the major changes; mostly I can't believe they took out the bereavement clause for depression. "The New Temper Tantrum Disorder" by David Dobbs. A smart perspective on the pathologizing of normal behavior.

Grumble grumble

Why Do Women Leave Biology? This is the page for the press release of an article in BioScience, but it links to the pdf of the manuscript. Shelley Adamo takes a smart look at the factors that drive attrition of female scientists. She points out that the factors usually blamed for the shortage of female scientists also exist in medicine, yet the same gender differences in attrition don't appear there. Adamo argues that policy issues drive the differences instead (for instance, mandated parental leave seems to reduce attrition in Canada but doesn't exist in the US). I agree. A week of a student's electrodermal activity. Teachers, check out the activity during classes and when sleeping. Decide you'd probably be better off napping during your own lecture after all. How to Email a Professor over at WikiHow. Overall not bad advice. I find it interesting that in the how-to-address-professors section, they tell you how bad it is to call a professor "Mr.," but only say it's bad to call a female professor (note that professors are default-male) "Mrs." Personally, I take issue with anything that isn't "Dr." or "Prof." if I don't know the student. Once I know the student, particularly if I advise them, "Kate" is fine. Stop Saying That. A great blog post that points out the error of complaining that women should put as much time and effort into researching their birth as they do into researching their next smartphone. Sexist humor leads to more sexism. In the same vein as Stop Saying That: stop permitting sexist humor in your workspace, your home, and among your friends. Don't be a silent witness; be the guy who interrupts sexism. You don't have to be obnoxious about it, but if you let it go, you're telling your friends that being sexist is OK. I Am the Woman in Your Department Who Does All the Committee Work, at McSweeney's. I didn't know whether to laugh or cry about this one.

Re-emerge from a tough week Mouse research saves a little girl with leukemia. Because my husband is a two-time cancer survivor, with many of his treatments first being tested in animals, I am grateful to animal researchers every single day. Your Holiday Mom. A blog that posts letters from parents who love and support LGBTQ kiddos. Have a tissue handy. Also, make it abundantly clear to anyone around you who needs to know it that you are a holiday mom, too, but with actions over words. How do you pack your bag for a 7-year, 22,000-mile international reporting assignment? Journalist Salopek will walk the out-of-Africa route to South America. I highly recommend a few pairs of Ex Officio underwear; they last for years, and you can wash and hang-dry them overnight. The flipped academic: turning higher education on its head. This article describes academics who are doing outreach or making their results available to the public before putting them in academic jargon-speak and up for peer review. Certainly an article that supports those of us who blog, but I didn't see a clear way the flipped academic was going to push her university to consider her for tenure under that model. Also, why are we so into flipping in academia right now (I've also read a few articles on flipping the classroom)? Why not call it inverted, or transparent, or outreach-focused? And now my favorite post: Michael Eisen puts Darwin's Tangled Bank in verse. Eisen wrote this poem because his daughter needed to recite a poem for school, and he wanted to give her something scientific and beautiful. He totally wins at parenting. Someday my daughter will learn this, too. Common Antibiotic Not Helpful for Cough and Respiratory Infection When I was growing up in the 1980s and '90s with two younger brothers, the antibiotic amoxicillin was a frequent guest in our house. Strep throat, sinus infections, sore throats, coughs: we all remember that thick, pink, bubble-gum-flavored liquid perhaps a little too well. But this popular drug, like many antibiotics, is overprescribed, often given for illnesses that it will not help, such as viral infections. A new study shows that it is indeed no more helpful than a placebo in treating patients with a non-pneumonia lower respiratory tract infection, such as a nagging cough. The research complements a paper published in February in JAMA, The Journal of the American Medical Association, that found that amoxicillin (known by names such as Amoxil, Alphamox, Dispermox, Trimox and others) is not effective in treating sinus infections when tested against a placebo. For the new study, published online December 18 in The Lancet Infectious Diseases, researchers recruited 2,061 patients 18 years and older (across a dozen European countries) who went to their doctor for a lower-respiratory infection that was not suspected to be pneumonia and had a cough lasting fewer than four weeks. Half of the hackers were randomly assigned to receive amoxicillin and the other half received a placebo. Both

groups were instructed to take their medication three times a day for seven days, and neither the patients nor the clinicians knew which treatment was which. Participating patients received follow-up phone interviews and completed daily diary entries for symptoms (detailing, for example, cough, phlegm, runny nose, headaches, feeling unwell, etc.) as well as for side effects (including diarrhea, rashes, vomiting, etc.) for up to four weeks. The severity and length of moderate or intense symptoms were about the same for both the antibiotic and placebo groups. And there was only a slightly higher rate of new or worsening symptoms for those patients taking a placebo (19.3 percent) than for those taking the antibiotic (15.9 percent). The findings held even in patients 60 and older, who have been thought to benefit more from antibiotic treatment for such infections. "Patients given amoxicillin don't recover much quicker or have significantly fewer symptoms," said Paul Little, of the Primary Care and Population Sciences Division at the University of Southampton in the U.K. and a co-author on the new study, in a prepared statement. "Our results show that most people get better on their own." What's more, the study showed that more people taking the amoxicillin (which is in the penicillin family) experienced side effects such as diarrhea, rashes and/or vomiting than those taking the placebo. The findings should encourage physicians in primary care to refrain from antibiotic treatment in low-risk patients, said Philipp Schuetz of the Medical University Department, Kantonsspital Aarau, in Switzerland in a prepared statement. Schuetz wrote an essay published in the same issue of The Lancet Infectious Diseases. For the subset of patients for whom the drug appeared to have had a slight benefit, researchers could now begin looking to see what might set them, or their infections, apart. "Guidance from measurements of specific blood biomarkers of bacterial infection might help to identify the few individuals who will benefit from antibiotics despite the apparent absence of pneumonia and avoid the toxic effects and costs of those drugs," Schuetz said. Amoxicillin is the eighth most commonly prescribed drug in the U.S., with some 52.3 million prescriptions written each year, according to the IMS research group's Institute for Healthcare Informatics 2011 report. Its patent has expired, so generic versions of this drug have made it exceptionally affordable, often less than $25 per course even without insurance. Uncomplicated lower respiratory tract infections, such as the ones being tracked in the study, are often caused by viruses, which are not susceptible to antibiotics. But doctors are often not able to identify a virus immediately, especially in rushed and resource-strained clinical settings, leading physicians to prescribe antibiotics as a cautionary measure. This drug habit, however, might be doing more harm than good. "Using amoxicillin to treat respiratory infections in patients not suspected of having pneumonia is not likely to help and could be harmful," Little said. "Overuse of antibiotics, which is dominated by primary care prescribing, particularly when they are ineffective, can lead to side effects (e.g., diarrhea, rash, vomiting) and the development of resistance." The biggest hurdle might now be explaining to patients that these familiar drugs are not actually helping any better than a sugar pill would. "It is difficult to convince patients and their physicians against antibiotic use," Schuetz wrote.
But the new findings should help convince everyone to think twice before starting an antibiotic prescription. This new study also underscores the natural lengthiness of lower respiratory tract infections. A person might be infected for more than a week before showing symptoms. The severe symptoms can, themselves, last for a week, and gradually improving symptoms can linger for much longer. So in the case of these ailments, perhaps time itself is the best treatment.
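As a rough illustration of how small the antibiotic's measured benefit was, here is a short Python sketch that turns the two rates quoted above (19.3 percent of placebo patients versus 15.9 percent of amoxicillin patients with new or worsening symptoms) into a number needed to treat. This is a back-of-the-envelope reading of the published percentages, not an analysis performed by the trial's authors.

# New-or-worsening-symptom rates quoted above; illustrative arithmetic only,
# not the trial's own statistical analysis.
placebo_rate = 0.193       # placebo group
amoxicillin_rate = 0.159   # amoxicillin group

absolute_benefit = placebo_rate - amoxicillin_rate   # about 3.4 percentage points

# Number needed to treat: roughly how many patients must take amoxicillin
# to prevent one case of new or worsening symptoms (side effects ignored).
nnt = 1 / absolute_benefit

print(f"Absolute benefit: {absolute_benefit:.1%}")
print(f"Number needed to treat: about {nnt:.0f}")

By this reckoning, roughly 29 people would need a course of amoxicillin to spare one of them new or worsening symptoms, while side effects were more common in the antibiotic group, which is the trade-off the authors are warning about.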
