Sunday, March 30, 2014

Some "green energy" reminds us of leprechauns

Wind power, New England's prime renewable resource, remains a favorite of "green energy" enthusiasts--as long as it isn't harvested in Brookline, MA. If it were, Brookline would likely sprout a crop of environmentalists with outlooks quite different from the ones we often hear now. That has already happened in several Massachusetts towns hosting wind turbines, from Falmouth and Kingston in the southeast to Princeton and Florida in the northwest.

Today's giant machines are much more intrusive than the farm windmills that dotted the countryside before rural electrification, or the small wind turbines that began to appear more than 30 years ago in California and several European countries. As tall as 50-story office towers, today's machines carry moving parts weighing many tons.

German standards call for about a mile between residences and giant wind turbines to protect safety and health. New England standards require far less; often proposed by the wind-power industry, they tend to favor its interests. Their failure to protect nearby residents from the health problems and life disturbances caused by turbine noise and flicker has resulted in trenchant protests.

When Falmouth and Kingston, MA, tried skimping on distances for just a few giant turbines, they drew lawsuits from angry residents; some Falmouth operations have been curtailed. Another bitter dispute over turbine noise has been reported in Vinalhaven, ME. Opposition movements in western Massachusetts aim to block any more wind turbines in the Berkshire region. That is where the only two sizable wind-power plants in Massachusetts are located, with a total of 29 giant turbines, and where the state's strongest land-based winds are found.

Maine, New Hampshire and Vermont now have a total of 334 giant wind turbines, producing around 90 percent of New England's wind-powered electricity. The former enthusiasm for wind power in those states has dwindled. Instead, dozens of local protest groups and regional organizations have sprung up, opposing wind power and trying to curtail current wind-power plants. Town after town has enacted laws restricting wind turbines.

The few practical locations for giant turbines contribute to high prices for wind-powered electricity: two to five times the recent average wholesale price for bulk electricity as supplied to the New England grid of high-voltage power lines. Relative costs of generating wind power have not fallen much below the levels experienced with the world's first large wind turbine--the former Smith-Putnam plant in Castleton, VT, opened in 1941. Of course, wind turbines have improved since then, but so have conventional generating plants.

Some people seem unfamiliar with electricity pricing. It has three main parts: generation, long-distance transmission and local distribution. Wind-power promoters often mention only wholesale prices for generation: bulk electricity sent into the New England grid. Those prices don't include long-distance transmission and local distribution, so they will be lower than the total prices figured from electricity bills.

Wind-power promoters count on people confusing wholesale prices with retail prices--electricity as delivered to homes and businesses--which are often three to four times as much. Last year promoters managed to entangle a Boston Globe business reporter, who wrote a badly confused article about prices of wind power.
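The gap that promoters exploit can be sketched with a few lines of arithmetic. All figures below are hypothetical, chosen only to illustrate the three pricing layers and the roughly three-to-four-fold gap between wholesale and retail prices; they are not from the article.

```python
# Illustrative sketch of the three pricing layers described above.
# All dollar figures are hypothetical, for illustration only.

wholesale_generation = 0.05        # $/kWh, bulk power sold into the grid
long_distance_transmission = 0.03  # $/kWh, high-voltage delivery
local_distribution = 0.09          # $/kWh, utility wires, metering, billing

retail = wholesale_generation + long_distance_transmission + local_distribution
print(f"Retail price: ${retail:.2f}/kWh")                       # $0.17/kWh
print(f"Retail / wholesale: {retail / wholesale_generation:.1f}x")  # 3.4x
```

Quoting only the first line of that sum, as promoters do, makes wind power look several times cheaper than what shows up on an electricity bill.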

Eventually, it might be possible for unsubsidized, land-based wind power to compete financially. If wholesale prices for natural gas were to increase enough from the historic New England lows of 2012, the unsubsidized, full costs for electricity from large, land-based New England wind-power plants could become comparable to the increased costs from modern, efficient generating plants using combined-cycle natural gas.

Generation costs from ocean-based plants, including Cape Wind and Deepwater Wind, would remain far higher. Their recent projected wholesale prices, about 2-1/2 times the recent wholesale prices from land-based plants, show little improvement over similarly unfavorable situations found two decades ago in Europe, where ocean-based wind power began.

Operators of the Vermont Yankee nuclear plant in Vernon, VT, and the huge Brayton Point coal-fired plant in Somerset, MA, have given up waiting for increases in natural-gas prices. They say they can't compete with natural gas-fired generators, and they are closing those power plants in 2014 and 2017, respectively.

So far, after more than 30 years of modern wind-power development in the U.S., wholesale prices for wind power still fail to reflect actual costs of generation. Major amounts of the wind-power costs remain offset by back-door government subsidies, and wind-power operators continue sponging for transmission and backup.

However, the 2009 federal "stimulus" subsidies have been used up. New federal tax-credit subsidies stopped at the end of 2013. Effects are already being felt. In Maine, New Hampshire and Vermont, no new wind plant opened during 2013--the first blank year since 2006. In southern New England, only seven giant wind turbines were commissioned during 2013, scattered among four small plants.

Even with subsidies, wind developers face growing problems. Hardly any spare transmission capacity remains in New England's mountain areas, which became, financially, the region's best locations for wind turbines. Now wind developers there must either overcome strong local opposition to new power lines and bear high costs, or expect to be curtailed often by the region's electricity grid supervisor. That already happens occasionally. Curtailments happen frequently to some speculative wind-power plants in Texas and New York.

The electricity grids keep wind-backup generators running--enough to replace a substantial fraction of the power being generated by wind turbines. Ramping up and down, backup generators other than hydro and nuclear produce higher emissions than they would under sustained generation, and they release emissions even while not generating.

In New England, we pay indirectly for wind backup, so that most costs are not counted in wind-power prices. The extra emissions from fuel burned to maintain wind backup are rarely tallied. When they have been, wind-powered electricity looks little cleaner than electricity generated using combined-cycle natural gas.

Overall, wind power has failed to make convincing progress toward becoming a socially responsible energy source. In their efforts to reduce costs, manufacturers opted to develop giant, noisy turbines costing millions of dollars each--rather than building less expensive versions of small, quiet ones. That approach made wind power into a game of money and politics, mostly run by and for big companies. The intrusive, costly machines have turned thoughtful environmentalists who might have been supporters into opponents.

Tuesday, December 10, 2013

The wages of death

As misbegotten as some of the U.S. government's software projects have proven, they are dwarfed in complexity, costs and hazards by projects to clean up nuclear weapons plants built during and shortly after World War II. Those and similar plants built in Britain, France, China and the former Soviet Union are probably the worst man-made environmental hazards ever. What would have been difficult challenges under the best of circumstances became scandals, as one government report put it, through management that "has not established an institutional culture that honors protection of environment, safety and health." [1 p. 13]

Environmental contamination at U.S. nuclear weapons plants began to be reported by national news media in the 1970s. [2] [3] By the late 1980s, plants and their surroundings had become horribly contaminated by radioactive substances and by processing chemicals that the plants used. Careless workers and managers spread dangerous substances through workplaces, let them blow around in the wind and dumped what they called "waste" into ponds, wells, streams, earth trenches and containers lightly built for what they held. In a spirit that survived long after its origins in the era of World War II, production commonly took priority over everything else. [4] Following news reports in the 1970s and 1980s exposing hazards at the plants, and then the collapse of the former Soviet Union, the federal government wound down most work at plants that were still active.

In 1988 Congress authorized the Defense Nuclear Facilities Safety Board to oversee safety at the plants. In 1989, retired Adm. James Watkins, then Secretary of Energy, established in his department an Office of Environmental Restoration and Waste Management, now known as the Office of Environmental Management. The same year, on behalf of the Herbert Bush administration, he announced a program to clean up U.S. nuclear weapons plants. [5] It was then expected to take "30 or more" years. Conditions of plant environments were not then reliably known, and the cost and complexity of such a program were underestimated. Only a year later, the Administration's estimate of $19.5 billion for the first five years had grown by half to $28.6 billion, in 1990 dollars. [6]

Of more than a dozen large plants involved with nuclear weapons, the worst contaminated were:
* Oak Ridge, 1942, in Tennessee, site of uranium enrichment using gaseous diffusion chambers
* Hanford, 1943, in Washington, site of plutonium breeding in reactors and chemical extraction
* INL, 1949, in Idaho, also INEL and INEEL, site of experimental reactor and nuclear materials testing
* Fernald, 1951, in Ohio, site of manufacturing for uranium feedstocks and reactor fuel elements
* Rocky Flats, 1951, in Colorado, site of plutonium alloying and forming, making shapes for weapons
* Savannah River, 1952, in South Carolina, also plutonium breeding in reactors and chemical extraction
These plants have been used for work outside their primary purposes. Oak Ridge was a development site for many World War II nuclear-weapons materials, and a large variety of facilities were built there. Savannah River has also performed radiochemical separations and produced most of the tritium used in weapons.

U.S. nuclear weapons plants represent a blend of technology, business and government. The federal government has usually hired industrial companies to develop and operate the technology-intensive plants:
* Oak Ridge: DuPont 1942-1945, Union Carbide 1948-1984, Lockheed Martin 1984-2000, Battelle 2000-present [7]
* Hanford: DuPont 1943-1946, General Electric 1946-1967, Westinghouse 1987-1996, Fluor 1996-present [8]
* INL: Phillips Petroleum 1949-1966, Idaho Nuclear 1966-1971, Aerojet 1972-1976, Westinghouse 1976-1994, Lockheed Martin 1994-1999, Bechtel 1999-2005, Battelle 2005-present [9]
* Fernald: National Lead 1951-1985, Westinghouse 1985-1992, Fluor 1992-2006 [10]
* Rocky Flats: Dow Chemical 1951-1975, Rockwell 1975-1990, EG&G 1990-1995, Kaiser-Hill 1995-2005 [11] [12]
* Savannah River: DuPont 1952-1989, Westinghouse 1989-2008, SRNS and URS 2008-present [13]
DuPont was a prime contractor for the Manhattan Project, which developed the nuclear weapons used in World War II. Many of the later companies are special-purpose combinations. URS, for example, coordinates work by Bechtel, CH2M Hill, Babcock & Wilcox and Areva.

By the third edition of the Department of Energy plan in 1991, projected spending on the cleanup had grown to near $7 billion a year, while the plan continued to set a 30-year target for completion. [14] That would have amounted to around $200 billion for the program in 1990 dollars or $350 billion in 2013 dollars. However, at that time Rocky Flats, Savannah River, INL and Oak Ridge continued as active plants, and cleanup was not estimated for them. Over succeeding years, plants were surveyed for inventories of hazardous substances, more plants and facilities were added to the cleanup program, standards of remediation were changed and some new processes for handling hazardous substances were developed.

By 1995, an incomplete inventory of nuclear weapons plant hazards had found:
* over 330 radioactive waste tanks of up to a million gallons
* over 3,700 contaminated sites spread over 3,365 square miles
* over 5,700 plumes of contamination in soil and groundwater
* over 1,000,000 55-gallon drums and boxes of chemical waste
* 77,000,000 gallons of liquid high-level radioactive waste
* 385,000 cubic meters of solid high-level radioactive waste
* 250,000 cubic meters of solid long-lived radioactive waste
* 2,500,000 cubic meters of solid low-level radioactive waste

The 1995 report displaying this inventory complained that the Department of Energy "has received about $23 billion for environmental management since 1989, yet little cleanup has resulted." [15] The department spent most of its efforts through 1995 assembling an inventory of hazards, evaluating standards of remediation, developing processes to treat radioactive waste and planning cleanups of individual sites and facilities. Environmental Management became the largest program in the department's budget.

While there are many major hazards associated with nuclear weapons production, liquid high-level radioactive wastes from plutonium production have been the most troublesome byproducts. They contain cesium and strontium fission products that are very dangerous and difficult to shield. About 92 million gallons accumulated between 1943 and shutdown of the last plutonium separation facility in 1990. As of 2012, according to the Department of Energy, Hanford had near 60 percent of the high-level wastes, Savannah River had near 40 percent and about a percent was at INL. [16] Other accounts offer somewhat different distributions.

The plutonium processing wastes began as acidic solutions. They were neutralized with hydroxides to reduce attacks on steel storage tanks, leading some minerals in the wastes to settle as so-called "sludge" with others remaining in solution. Over time, water was evaporated, causing dissolved minerals to crystallize into a crust of so-called "salt cake." Contents of waste tanks at Savannah River have been measured at around eight percent sludge by volume, with the balance split about evenly between liquid and salt cake. Retrieving waste from tanks became much more difficult because of sludge and salt cake.

Rocky Flats, in Colorado
Of the worst contaminated plants, as of 2013 Rocky Flats was one of only two accepted by the Department of Energy as remediated. However, the extent of remediation achieved by 2005 is disputed by nearby residents, and so far the department has refused to release results of contamination surveys. While plutonium is poisonous and radiotoxic in all forms, the pure metal processed at Rocky Flats was especially hazardous: it can ignite spontaneously if exposed to air, and when a critical mass of fissile plutonium is compressed into a compact shape, it produces a nuclear explosion.

Several releases of plutonium into the environment were reported while Rocky Flats operated, 1952-1992. The plant suffered plutonium fires in 1957 and 1969, releasing oxidized plutonium dust into the air and causing hundreds of millions of dollars in damages, measured in 2013 dollars. The Department of Energy admitted total plutonium releases of 0.6 Curies over the plant's life. However, a state-sponsored study estimated around 20 Curies released by the 1957 fire alone, in a plume contaminating Denver and its northern suburbs. [17 pp. 23-24]

Significant concentrations of plutonium have been found in soils and dust accumulations in the vicinity of the Rocky Flats site after 2005. [18] [19] Heavy rains in the spring of 1995 brought to the surface plutonium that had been trapped underground. [20] A lethal plutonium dose is around 20 milligrams or 0.3 Curies. However, much smaller amounts of plutonium will induce cancers. Ingestion of 10 micrograms or 0.00015 Curies is estimated to double one's lifetime cancer risk. [21] Nevertheless, a study performed by a state agency in the mid-1990s did not find excess cancer incidence in the general population of the Denver area. [22]

During the second Walker Bush administration, the Department of Energy allowed "accelerated" procedures at Rocky Flats, failing to remove buried contamination, and it accepted a rushed and problematic survey of surface contamination, without enough reliability to verify the contamination limits. [19 pp. 99-103] As a result, contractor Kaiser-Hill became eligible for a large bonus, up to $560 million in incentive fees. [19 p. 4]

Fernald, in Ohio
The Fernald plant has been scrubbed and demolished, and in 2006 it was also accepted as remediated by the Department of Energy. However, the facility locations and nearby lands remain heavily contaminated with uranium-processing residues, and unless further remediated they are considered uninhabitable. Environmental pollution began early on, under National Lead management, and continued until the plant was closed in 1989.

Discoveries in a lawsuit showed that the former Atomic Energy Commission was warned about pollution risks in 1951 and that National Lead knew about contaminated groundwater by 1960. At least a million pounds of uranium, radium and thorium migrated from processing facilities and storage areas to soil and groundwater. During the 1980s, management was altering radiation exposure records of workers and concealing knowledge of radioactive contamination that was far above government limits. [23] However, no one has been sent to prison.

The Department of Energy had estimated cleanup at 32 years and $12 billion in 1995 dollars. Over ten thousand rail cars loaded with the most contaminated soil were sent to a disposal site in west Texas. [24] In 2006, the second Walker Bush administration accepted Fernald as remediated after only 13 years and $4.4 billion in 2005 dollars. As with the Rocky Flats cleanup, early project completion made the contractor, Fluor, eligible for a large bonus, up to $288 million in incentive fees. [25]

The rushed remediation project left much of Fernald's contamination at the site. Around 6 million tons of building rubble and soil had been identified as radioactively contaminated, but only about a quarter of that was moved offsite. The rest was heaped into a large landfill adjacent to the former Fernald facilities. [26]

Savannah River, in South Carolina
Parts of Savannah River remain in active use: extracting tritium from irradiated reactor targets to maintain the U.S. nuclear weapons stockpile, performing radiochemical separations, building and operating facilities that package high-level radioactive waste, and constructing a facility to produce reactor fuel from mixtures of uranium and plutonium oxides. Savannah River staff developed the current U.S. process for treating and packaging liquid high-level radioactive waste, a 13-year effort starting in 1983 under DuPont management.

Savannah River had five reactors of one basic design, all moderated by heavy water. That allowed using natural uranium with more stability than the graphite-moderated reactors at Hanford. The Dana plant near Newport, IN, operated from 1952 to 1957 to supply heavy water. Savannah River reactors began operating in 1954 at initial power levels of about 0.4 GW, thermal. They ran with slightly enriched uranium, supplied from Oak Ridge, to increase production. By 1964 they were running at about 2 GW. Like the Hanford reactors, those at Savannah River used fuel slugs of metallic uranium, supplied by Fernald. The fuel was co-extruded into aluminum cans on-site, an improvement over the Hanford process. The R-reactor, oldest of the five, was shut down in 1964 after the Lyndon Johnson administration ordered cutbacks in nuclear materials. From 1985 to 1988 reactors C, L and P were shut down, leaving only the K-reactor operable. In 1992 that too was permanently closed. [27]

Savannah River has two "canyon" buildings used for plutonium extraction from irradiated nuclear fuel. The F-canyon has been sealed off, in anticipation of decontamination and demolition. The newer H-canyon is still in use for radiochemical separations. During the second Walker Bush administration, the Department of Energy made plans in 2005 and let contracts in 2007 to build a facility producing so-called "MOX" reactor fuel from mixtures of uranium and plutonium oxides. The MOX project aimed to use up plutonium from retired nuclear weapons. H-canyon was to be employed for this project, estimated in 2008 to cost at least $4.3 billion. [28]

The Savannah River waste packaging separates high-level radioactive waste stored in tanks into two main streams. A high-level stream contains most of the fission products with strong radioactivity, to be vitrified in molten glass. A medium-level stream of "salt waste" contains most of the chemical salts from plutonium extraction, mixed with residues of reactor fuel, long-lived transuranic activation products, fission products and decay products, to be immobilized as synthetic "saltstone." [29] [30] As of 2013, there were six saltstone vessels--about 150 ft in diameter and 22 ft high--with other, much larger ones in development. The worst-case radioactivity near those vessels has been estimated at about 1 rem per hour at 100 feet--delivering the maximum annual industrial exposure allowed by the U.S. in a few hours. [31 p. 4]

At the Defense Waste Processing Facility of Savannah River, temperature-resistant glass mixed with dried waste chemicals is melted into type 304L stainless-steel canisters that are closed with welded but unannealed plugs. [31] Once loaded, canisters are dangerous radiation sources. Radiation from a canister can exceed 5,000 rems per hour at the surface. [32 p. 3] A few minutes next to one could deliver a lethal radiation dose. Wet and salty environments corrode type 304L stainless steel. [33] Salty water can also leach waste from glass. While the Savannah River canisters are expected to be reliable in protected settings, they would not be likely to withstand long-term exposure to unprotected, underground settings without leaking. Savannah River has no current plans to ship canisters offsite or to store them in unprotected settings. [29 p. 18]
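The "few minutes" figure follows from simple arithmetic. The sketch below assumes an acute lethal whole-body dose of roughly 450 rem, a commonly cited LD50 figure that is an assumption here, not a number from the text; only the 5,000 rem-per-hour surface rate comes from the source.

```python
# Back-of-envelope check of the exposure time implied above.
# The lethal-dose figure is an assumed LD50, not from the source.

surface_dose_rate = 5000.0  # rem per hour at the canister surface (from text)
acute_lethal_dose = 450.0   # rem, assumed LD50 for whole-body exposure

minutes_to_lethal = acute_lethal_dose / surface_dose_rate * 60
print(f"{minutes_to_lethal:.1f} minutes")  # prints 5.4 minutes
```

Under that assumption, standing unshielded beside a loaded canister for about five minutes would deliver a lethal dose, consistent with the text's "few minutes."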

As of 2012, completion of processing for the high-level waste at Savannah River was projected for 2028. [34 p. 229] However, scrubbing out "heels" from storage tanks and processing that waste is expected to take a few more years. [29 pp. 3, 17, 52] Those efforts may be delayed by lack of funding to build more glass-waste storage. Current buildings have room for only about 60 percent of the high-level waste stream. [29 p. 18] Completing a salt-waste processor has already been delayed four years, from 2014 to 2018, by lack of funding. Stability of saltstone over thousands to millions of years remains a critical, unresolved issue at Savannah River.

The capacities of the 51 Savannah River high-level waste storage tanks range from 750,000 to 1,300,000 gallons. Only the eight type IV tanks built in the 1950s are similar to most Hanford tanks--a single-wall steel liner butted against concrete sides. Other single-wall tanks built in the 1950s have catch pans outside the steel liners. The 27 double-wall tanks built in the 1960s and 1970s have leak-detection systems and satisfy current EPA requirements for underground storage of hazardous substances.

At least 12 of the 51 giant underground tanks are known to have leaked, all lacking double-wall construction, risking exposure of soil and groundwater to high-level waste. [35] The Savannah River managers have proven proficient at concealing the total amount of high-level radioactivity leaked. However, the U.S. Environmental Protection Agency has declared the plant a Superfund site and documented several underground plumes of waste. EPA does not yet appear to have investigated potential plumes from high-level waste tanks. [36] Because the site remains largely exempted from regulation under the Atomic Energy Acts of 1946 and 1954, Public Laws 79-585 and 83-703 as amended, EPA currently has mostly powers of persuasion.

Tanks at Savannah River began to be emptied in the 1990s, as the Defense Waste Processing Facility started to process high-level radioactive waste that had been stored in them. By 1997, two of the type IV tanks were eligible for disposal. Rather than scrub and dismantle the tanks, site manager Westinghouse left up to a few thousand gallons of residuals in each tank and dumped in about 600 truckloads of low-strength concrete--a so-called "reducing grout" typically composed of sand, fly ash, Portland cement and sodium sulfide or another chemical agent, plus additives. Its compressive strength as cured can be as low as 500 psi, around 10 to 15 percent the strength of construction-grade concrete. [37]
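The strength comparison above can be checked quickly. The 500 psi grout figure is from the text; the 3,500-5,000 psi range for construction-grade concrete is a typical assumption, not a figure from the source.

```python
# Rough check of the grout-vs-concrete strength comparison above.
# The concrete strength range is an assumed typical value.

grout_strength = 500.0  # psi, low end for cured reducing grout (from text)

ratios = {concrete: grout_strength / concrete
          for concrete in (3500.0, 5000.0)}
for concrete, ratio in ratios.items():
    print(f"vs {concrete:.0f} psi concrete: {ratio:.0%}")
# vs 3500 psi: 14%; vs 5000 psi: 10% -- consistent with "10 to 15 percent"
```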

That cavalier action provoked strong controversy. Legal issues were raised over improper disposal of high-level radioactive waste. In the contamination zone, where concrete meets residual waste, composition is uncontrolled and physical properties are uncertain. Westinghouse had no reliable knowledge of long-term durability for the contamination zone, which contains most of the radioactivity left in the tanks. [38 pp. 56, 174ff] It had only informal, short-term studies performed by a Savannah River staffer and an Oak Ridge staffer. [35 p. 1] Since 2004, Savannah River has been allowed to reclassify residual high-level waste as low-level, through an amendment to the Defense Reauthorization Act of that year inserted by Sen. Lindsey Graham (R, SC)--a favored ally of the state's nuclear industry. [39] [40]

Savannah River management wrote up detailed justifications for their approach well after the fact. [38] [41] These say the facility will be actively maintained for 100 years after the tanks are filled with concrete. [41 p. 198] Since some of the radioactive components persist at significant activities for up to a million years, and the area has already experienced human habitation for over ten thousand years, that is an irresponsible approach. Over tens to hundreds of years, steel, concrete and other buried materials corrode and degrade to rubble and powders. Pouring concrete over residual waste leaves the contamination zone where they meet at risk of concrete rot, in which a zone of poorly consolidated, low-strength concrete propagates.

There is no question whether radioactive substances will escape into the environment, only how soon, how much and how hazardous. Savannah River documents describe models to estimate long-term consequences, but they do not support the efforts with realistic measurements or with time-proven data. For example, only a trivial number of samples have been collected and measured from residuals left in so-called "empty" tanks. [41 pp. 218-219]

For the MOX project, the second Walker Bush administration planned to adapt a process that had been developed by the French company Areva to recover and reuse plutonium from spent fuel at commercial nuclear power-plants. However, plutonium in nuclear weapons is usually alloyed with gallium. An evaluation in 2004 by an Areva subsidiary had indicated that gallium would embrittle the cladding of MOX reactor fuel rods and shorten their useful lives, so new process development was undertaken to remove gallium. [42] The extra efforts and costs turned out to have been wasted. By 2010, tests reported by Oak Ridge and Areva showed that MOX fuel rods made from weapons plutonium alloyed with gallium performed as well as those made with plutonium recovered from commercial spent fuel. [43]

By then, Savannah River was committed to the costly modified process. As of 2013, $3.7 billion had been spent on the MOX project, and projections of total costs have reached $7.7 billion, with further increases likely. The covert origins of the MOX project lay in partisan politics rather than in economics or policy, again involving Sen. Graham. The Obama administration has indicated it may cashier the project. [44] [45] Without the MOX project, there would be little justification to operate the H-canyon, alarming Savannah River contractors who are charging around $250 million a year to maintain the facility.

If the MOX project were abandoned and the H-canyon sealed, the only high-priority, continuing activities at Savannah River would be tritium extraction and waste packaging. It might then be feasible to shrink the plant boundaries around the acres used by the tritium facility plus the Defense Waste Processing Facility, releasing much of the current 310 square miles as a remediation site--no longer subject to the Atomic Energy Acts. That would allow EPA and South Carolina agencies to oversee many of the remaining steps needed to clean up the badly contaminated site. Otherwise, the Rocky Flats and the Fernald experiences, during the second Walker Bush administration, warn of what can happen: a rush to the exit by an industrial contractor in order to collect a bonus from a business-friendly federal Administration, leaving behind problems.

Hanford, in Washington
Hanford has long been a problem child of the nuclear-weapons industry: the most prolific producer and also the least responsive to environmental concerns. Its production facilities have been closed and under cleanup since 1990. [46] Nearly a quarter century of cleanup so far leaves a long way to go. With about 60 percent of the U.S. inventory of liquid high-level radioactive waste, and with treatment efforts starting at least 23 years behind Savannah River, Hanford's cleanup could last nearly a century--and possibly far longer.

Plutonium-239 proved more effective for weapons than uranium-235 and other fissile isotopes. The Cold War that began in the 1940s brought great pressure on Hanford, at first the sole producer of plutonium and afterward the most productive until the mid-1960s. The B-reactor, Hanford's first, began at 0.25 GW thermal in 1944, but by 1961 it was operating at 2.1 GW. The last three reactors--KE, KW and N--operated at about 4 GW, thermal, comparable to a current nuclear power-plant, and the N-reactor produced commercial electric power from 1966 to 1986. Tritium production for thermonuclear weapons began in 1949. The first full-scale thermonuclear weapon test, in the South Pacific, occurred in 1952. [8] Hanford was the sole producer of tritium until Savannah River started production in 1955.

Hanford's facilities and operations are more complicated than those of the other nuclear weapons plants, because Hanford was a site of much process development as well as high-volume production. Hanford developed nine reactors for plutonium production, of varied design--versus five at Savannah River, of similar design. Hanford built five plutonium-processing canyon facilities, using three different processes--versus two at Savannah River, both using PUREX. Hanford also housed many initiatives that faltered: construction of early plutonium devices; three plutonium extraction processes that operated a few months to a few years; production of large strontium-90 and cesium-137 capsules; irradiation of thorium to produce uranium-233 and construction of devices; production of neptunium and americium from processing waste; early waste encapsulation projects, starting in 1965; the sodium-cooled fast-flux test reactor, operated from 1980 through 1993; and decades of poorly protected storage of unreprocessed, irradiated fuel from the N-reactor, from a commercial power-plant and from the fast-flux test reactor. [8] [47]

Plutonium-producing reactors at Hanford were all graphite-moderated--scaled up from "Chicago Pile 1" built by Enrico Fermi's group at Stagg Field in 1942 and the subsequent X-10 reactor at Oak Ridge. To those designs, they added water cooling pumped through aluminum tubes around metallic uranium slugs. Graphite made it possible to operate with natural uranium. Aluminum cans were used to package uranium slugs for the first eight reactors. Zirconium cladding, still used in power reactors, was developed for the N-reactor. [48]

Aside from reactors, the major production facilities at Hanford were chemical reprocessing "canyon" plants, which extracted plutonium from irradiated reactor fuel as liquids and pastes, and the Plutonium Finishing Plant, which converted the chemicals to metal "buttons." During World War II, the T-plant and B-plant operated the original bismuth phosphate process to extract plutonium. Hanford dried the chemical paste in the Z-plant and shipped that to Los Alamos, where early weapons were produced. The finishing plant opened in 1949 to produce plutonium metal, replacing the Z-plant drying facility. [48]

Several process development and research laboratories were built at Hanford from the late 1940s through the early 1950s. The C-plant opened in 1952, using the newer REDOX process. The U-plant, never used during the war, opened in 1952 for uranium recovery. The PUREX plant, opened in 1956, greatly increased productivity, and older chemical reprocessing plants were closed. In 1954 Hanford started shipping plutonium to Rocky Flats, and in 1957 the design of N-reactor began. It was the last Hanford production reactor. Using some elements similar to those of modern power reactors, including a steam generator, it was rated at 0.8 GW, electrical, and became the world's largest power reactor for several years. The period from 1943 through 1964 saw high investment in Hanford, much development of new facilities and large outputs of materials for nuclear weapons. [48]

Gross environmental insults at Hanford were an open secret inside a few government agencies but were mostly hidden from the public until the 1980s. A trickle of information emerged in Bulletin of the Atomic Scientists and from a few activist organizations, but it rarely appeared in general-interest news media. [2] Until the late 1980s, the major changes occurred during the Lyndon Johnson and the Nixon administrations. Although its announcement was couched in jingo terms common for the day, the Johnson administration aimed its 1964 cutbacks in nuclear materials production mostly at the problem-ridden Hanford plant, knowing the better-managed Savannah River plant could backstop any shortage. [49] In 1971, the Nixon administration ordered all the Hanford reactors permanently closed as an economy measure, declaring them "unreliable." Pressure led to reactivation of the N-reactor later that year as a power plant. [8] [50]

The period from 1965 through 1990 saw little new development and rapidly declining outputs at Hanford. Reactors D, DR, F and H were closed in the wake of the 1964 cutbacks. By 1970, the eight oldest Hanford reactors were either shut down or on standby--never to run again. Reactors B, C, KE and KW were closed or kept closed by the 1971 cutbacks. The N-reactor continued operation until 1986, when the Chernobyl disaster led to doubts about the stability of its graphite-moderated design, but much of the uranium it irradiated after 1971 was stored at the K-reactors and never processed to extract plutonium. [8] [47] [48]

The 1964 message was not lost on General Electric, which had taken over management of Hanford from DuPont soon after the end of World War II. "Generous Electric" then advertised a slogan interpreted by cynics as [Profit] "is our Most Important Product." With profit at Hanford soon to be in short supply, GE rapidly backed away. By 1965 it had turned over industrial medicine to Hanford Occupational Health, reactor operations to Douglas United and chemical processing to Isochem. After 1967, Hanford effectively had no full-charge industrial manager, only managers of individual operations--a situation that persisted until most operations had been shut down and Westinghouse took over from nominal site manager Rockwell in 1987. [51]

For the United States at large, the early 1960s through the early 1970s saw a great awakening of environmental concerns and regulation. However, the 20-year de-facto experiment with matrix management at Hanford proved disastrous for environmental concerns there. As illustrated in Paul Loeb's 1982 book, job preservation took over as the ruling principle for both management and workforce. [52] Workers reporting problems usually found themselves shunned as community enemies. For many years, the few workers who covertly confided in legislators and reporters were almost the only source of public knowledge about problems at Hanford. [53] Any would-be whistleblowers in the nuclear weapons plants were at risk, because plant employees and contractors were not covered by the Civil Service Reform Act of 1978, Public Law 95-454, although some of them seem to have thought they were. [54]

While local and regional news occasionally reported mishaps at nuclear weapons plants, concerns did not spread widely until an extended series of New York Times articles beginning in October, 1988. [55] The articles helped create a climate of opinion that led former Pres. Herbert Bush to appoint retired Adm. James Watkins Secretary of Energy. Mr. Watkins soon announced plans to clean up the weapons plants. [56] However, there was little government support for whistleblowers until Hazel O'Leary's term as Secretary of Energy during the first Clinton administration, 1993-1997. In 1994 she organized a conference featuring 14 of them. [57 p. 83] She had appointed one, Casey Ruud, federal manager of high-level nuclear waste at Hanford. [58]

Except for the N-reactor, Hanford's reactors used once-through cooling. Columbia River water was filtered, treated with additives, passed through the reactors and discharged into soil or returned to the river. That approach loaded the environment with hexavalent chromium, added as a corrosion inhibitor, and with radioactive contamination from activation products, notably phosphorus-32, and fuel slugs compromised by damage or by pinholes in the cans. [46] By 1990, when they were all shut down, Hanford's reactors had discharged over 400 billion gallons of contaminated water into the environment. There were additional discharges of contaminated washwater from processing plants. [8] [47]

Government and other reviews that began in the late 1980s outlined major environmental insults at Hanford and persecution of workers who had tried to report them. [59] [60] [61] [62] [63] Unlike most comparable high-level radioactive waste-storage tanks at Savannah River, none of the 149 single-wall tanks built at Hanford from the 1940s through the 1960s had protective pans around them. The Hanford tanks were more lightly built than most of the Savannah River tanks and more varied in size, built in four capacities ranging from 55,000 to 1,000,000 gallons. The 28 double-wall tanks built in the 1970s and 1980s each have a capacity of 1,160,000 gallons. [64] The first tank leak was reported in 1956 and confirmed in 1958. [8] Estimated leakage has been enormous. By 1989, around the time all production facilities were shut down, at least a million gallons of liquid high-level radioactive waste had already been discharged into soil and groundwater. [65]

Hanford is currently engaged in its fourth attempt at packaging high-level radioactive waste. Starting around 1958 it experimented with spray calcining but did not develop a pilot plant. From 1965 to 1971, Pacific Northwest National Laboratory operated a pilot-scale project evaluating calcining and encapsulation in glass. In 1975, another pilot project tried variations of the techniques. [66] That and the vitrification development underway at Savannah River were background for a so-called "Tri-Party Agreement." In 1989, the U.S. Department of Energy, the U.S. Environmental Protection Agency and the Washington (state) Department of Ecology agreed to cooperate toward a goal of cleaning up the Hanford site in 30 years. At that time there were no reliable assessments of the major problems at Hanford and no proven technologies to cope with the worst of them. The U.S. government committed to spend $2.8 billion on Hanford remediation over the next five years. [67]

The 1989 revolution did change outlooks of some Hanford workers, who began to see environmental concerns as opportunities rather than threats. In 1990, the local newspaper editorialized, "Huge community resources and tremendous amounts of dwindling political currency have been expended to preserve a defense mission for Hanford. It isn't working and likely won't." [68] A proposal to reopen the PUREX plant was shelved by the Department of Energy in 1990, pending review of the accumulation of irradiated but unreprocessed reactor fuel. Permanent shutdown of the N-reactor was announced the next year. Congress extended whistleblower protection to employees of nuclear weapons plants in the Energy Policy Act of 1992, Public Law 102-486, Section 2902, but only for claims after October 23, 1992, leaving several earlier whistleblowers in the lurch.

Then began a pattern of delays in meeting remediation goals for Hanford that continues to the present. Some goals are not achieved because they were too optimistic, but many are missed for other reasons. In March, 1993, the (then) General Accounting Office objected that the Department of Energy had tested far fewer samples of Hanford's radioactive waste than were needed to find out whether its vitrification process would work. [69] Because of its complex history, Hanford's wastes contain chemicals not found at Savannah River, some of which could interfere with the Savannah River technology.

The Washington (state) Department of Ecology also had issues and tossed a particularly pesky fly into the ointment. As at Savannah River, Department of Energy plans for Hanford were to separate the contents of the waste tanks into high-level and medium-level streams, vitrify the high-level stream and mix the medium-level stream into concrete to make "saltstone." Unlike its counterpart in South Carolina, the Washington state agency showed some independence--objecting that hazardous substances to be immobilized in saltstone, including iodine-129 and technetium-99, were both highly mobile and long-lived. Saltstone would not retain them reliably and would degrade and release them into the environment while their activities remained dangerous. [69 p. 31]

The Department of Energy claimed it would go ahead with Hanford's vitrification plant anyway and reserve judgement about saltstone. However, in October, 1993, timelines of the 1989 agreement were officially slid forward with a farrago of excuses, in the first of many such maneuvers. More significantly, the department as run by Ms. O'Leary abandoned plans for saltstone at Hanford and agreed to employ a vitrification process for the medium-level stream of its tank waste. [70 p. 13] The department had no such technology ready to deploy. As of 2013, it has maintained the use of saltstone for waste disposal at Savannah River, even though the hazards there are similar and the climate there is likely to degrade saltstone faster than at Hanford.

It might seem obvious that huge, uncontrolled discharges of chemical and radioactive contamination were likely to reach aquifers and migrate toward the Columbia River, which flows through the 586-square-mile site. However, for more than 40 years after the first report of a tank leak, Hanford managers maintained a rigid state of denial. [71] In 1997, consultants for the Department of Energy confirmed massive leaks reaching aquifers and said that contamination could reach the Columbia River in as little as 20 years. [72] A Department of Energy remediation plan ten years later conceded that plumes of contamination had already reached the river--releasing strontium-90, tritium, uranium, hexavalent chromium and other hazardous substances. [73] Data and analyses contributing to that plan have been pilloried as woefully inaccurate and optimistic. [74]

In 2011, the contractor currently responsible for groundwater cleanup at Hanford stated that groundwater pollution violating drinking water standards had spread under 80 square miles of the site. [75] The volume of the vadose zone alone, between the top of groundwater and the land surface, is around 3,000 billion gallons. Removing contaminants by elution--pumping and purifying--could require treatment of a similar volume of contaminated water. As of 2013, Department of Energy contractors reported treating groundwater at about 1-1/2 billion gallons per year. [76] While that might sound impressive, taken out of context, removing most of the contamination under the Hanford site at that rate could take 2,000 years or more--unless accurate surveys and effective isolation techniques can greatly reduce the soil volume to be treated.
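The "2,000 years or more" figure follows directly from the numbers in the paragraph above. A minimal sketch of that back-of-the-envelope arithmetic, assuming (as the text does) that the water volume needing treatment is comparable to the vadose-zone volume itself:

```python
# Rough estimate of elution-cleanup duration at Hanford,
# using the approximate figures quoted in the text above.

vadose_zone_volume_gal = 3_000e9       # ~3,000 billion gallons of vadose-zone volume
treatment_rate_gal_per_year = 1.5e9    # ~1-1/2 billion gallons treated per year (2013 rate)

# If cleanup required treating a water volume comparable to the
# vadose zone itself, the duration at the current rate would be:
years = vadose_zone_volume_gal / treatment_rate_gal_per_year
print(round(years))  # 2000 -- the "2,000 years or more" estimate
```

The assumption that treated volume must match soil volume is the crude part; as the text notes, accurate surveys and effective isolation could shrink the volume, and the estimate scales down proportionally.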

Airborne radioactivity dispersed into the surrounding region has received episodic attention. The B-plant and T-plant reprocessing canyons, built in the mid-1940s, lacked exhaust filters of any kind. Over half a million curies of iodine-131 were released to the atmosphere as vapor during Hanford operations from 1944 through 1948--plus aerosols bearing strontium-90, cesium-137 and other, shorter-lived radionuclides. They were emitted most abundantly while irradiated uranium slugs were being dissolved in strong acids. Air monitoring showed the city of Pasco, southeast of the reprocessing plants, to be heavily affected. [77, pp. 78-82] Enhanced release of iodine-131 during the now strongly criticized "Green Run" of 1949 amounted to only around one percent of total discharges to the atmosphere during Hanford's first five years of operations. [77, pp. 90-92]

The Hanford Environmental Dose Reconstruction Project--begun by the U.S. Department of Energy and administered by the U.S. Department of Health and Human Services since 1992--was to estimate, measure and document human radiation exposures and environmental contamination in eastern Washington, western Idaho and northeastern Oregon. However, after a surge of lawsuits stimulated by its "initial" report in 1990, the project went into hibernation, and it never conducted radiological surveys outside the Hanford site. [78] [79] [80]

A waste-vitrification facility has been under construction at Hanford since 2005, using processes developed at Savannah River and modifications developed at Hanford. It is currently projected to operate by 2022, but it almost certainly won't. [81] Under the original 1989 agreement, the plant, treating the high-level stream only, was to operate by 1999. There is no current credible estimate for the full duration or cost of Hanford cleanup. So far, the Environmental Management program at the Department of Energy has spent about $150 billion cleaning up U.S. nuclear weapons plants, in 2013 dollars. [82] No public, long-range accounting for the program appears to be maintained anywhere within the U.S. government.

[1] John Andelin, Robert W. Niblock, et al., Hazards ahead: Managing cleanup worker health and safety at the nuclear weapons complex, U.S. Office of Technology Assessment, Report OTA-BP-O-85, 1993, available at

[2] Unattributed, Company criticized by AEC on leak of radioactive waste, New York Times, August 4, 1973, p. 36

[3] James P. Sterba, Radiation traced to atom plant in Colorado, New York Times, September 27, 1973, p. 77

[4] Office of Inspector General, Alleged cover-ups of leaks of radioactive materials at Hanford, Report IGV-79-22-2-231, U.S. Department of Energy, 1980

[5] Lee Bandy, Massive cleanup proposed for nuclear-arms plants, Philadelphia Inquirer, August 2, 1989, at

[6] Keith Schneider, Cost of cleanup at nuclear sites is raised by 50 percent, New York Times, July 4, 1990, at

[7] ORNL contractors, Oak Ridge National Laboratory, 2012, at

[8] J.D. Briggs, Historical time line and information about the Hanford site, Pacific Northwest National Laboratory, 2001, at

[9] Idaho site contractor timeline, U.S. Department of Energy, 2012, at

[10] Randy McNutt, Fernald marks 50th anniversary, Cincinnati Enquirer, May 8, 2001, at

[11] Patricia Buffer (Kaiser-Hill), Rocky Flats history, U.S. Department of Energy, 2003, at

[12] Frazer R. Lockhart, et al., Rocky Flats closure legacy, U.S. Department of Energy, 2006, at

[13] T. Zack Smith, Savannah River Site, U.S. Department of Energy, 2013, at

[14] Environmental Restoration and Waste Management Five-year Plan, U.S. Department of Energy, 1991, at

[15] Robert Galvin, et al., Report of the Task Force on Alternative Futures for the DOE National Laboratories, U.S. Department of Energy, 1995, at See IV. The Environmental Cleanup Role, Section C.2.

[16] Ken Picha, Tank waste strategy update, U.S. Department of Energy, 2012, at

[17] John E. Till, et al. (Radiological Assessments Corp.), Technical Summary Report for the Historical Public Exposures Studies for Rocky Flats Phase II, Colorado Department of Public Health, 1999, at

[18] Laura Snider, Study finds Rocky Flats area still as contaminated with plutonium as 40 years ago, Boulder (CO) Daily Camera, February 18, 2012, at

[19] Gene Aloise, et al., Nuclear cleanup of Rocky Flats, U.S. Government Accountability Office, 2006, at

[20] LeRoy Moore, Science compromised in the cleanup of Rocky Flats, Rocky Mountain Peace and Justice, 2013, at

[21] Carey Sublette, Nuclear materials, Nuclear Weapons Archive, 1999, at

[22] John W. Berg, et al., Cancer incidence in ten areas around Rocky Flats, Colorado Department of Public Health, 1998, at

[23] Tim Bonfield, History repeats itself at Fernald, Cincinnati Enquirer, February 11, 1996, at

[24] Christopher Maag, Nuclear site nears end of its conversion to a park, New York Times, September 20, 2006, at

[25] Elizabethe Holland, Radioactive waste will roll through area, St. Louis Post-Dispatch, June 4, 2005, available at

[26] Mary Buckner Powers, DOE's revamped site plan shaves 20 years at Fernald, Engineering News-Record, October 2, 2006, at

[27] Mary Beth Reed, et al., Savannah River Site at Fifty, U.S. Department of Energy, 2002, at See C. 13, Reactors, fuels and power ascension.

[28] Gene Aloise, et al., DOE needs to take action to reduce risks before processing additional nuclear material at the Savannah River site's H-canyon, U.S. Government Accountability Office, 2008, at

[29] D.P. Chew, B.A. Hamm, et al., Liquid Waste System Plan, Rev. 18, Savannah River Remediation, U.S. Department of Energy, 2013, at

[30] Laura Bagwell, et al., Performance Assessment for the Saltstone Disposal Facility at the Savannah River Site, Report SRR-CWDA-2009-00017, Savannah River Remediation, U.S. Department of Energy, 2009, at

[31] Jeffrey M. Allison, Submittal of the 2004 annual update of saltstone safety basis documents, Westinghouse Savannah River Company, December 3, 2004, at

[32] Richard G. Baxter, Wasteform and canister description, U.S. Defense Waste Processing Facility, 1988, at

[33] A. Machiels, et al., Effects of marine environments on stress corrosion cracking of austenitic stainless steels, Electric Power Research Institute, 2005, at

[34] Environmental Management, FY2013 Congressional Budget Request, U.S. Department of Energy, 2012, at

[35] Savannah River site disposal facility for waste incidental to reprocessing, U.S. Nuclear Regulatory Commission, 2013, at

[36] Savannah River site, EPA ID SC1890008989, U.S. Environmental Protection Agency, 2013, at

[37] Tri T. Hoang, et al., Stabilization and solidification processes for mixed waste, U.S. Environmental Protection Agency, 1996, at

[38] Savannah River Remediation, Performance Assessment for the F-tank farm, Rev. 1, U.S. Department of Energy, 2010, at (NRC Accession ML102850339)

[39] Tom Ichniowski, Senate upholds reclassifying some nuclear waste, Engineering News-Record, June 7, 2004, at

[40] Arjun Makhijani, Savannah River at grievous risk, Institute for Energy and Environmental Research, 2004, at

[41] Savannah River Remediation, Performance Assessment for the H-tank farm, Rev. 1, U.S. Department of Energy, 2012, at

[42] James F. Mallay (Framatome), MOX-gallium considerations and limits, Nuclear Daily, 2004, available at

[43] Kevin McCoy, et al., Hot cell examination of weapons-grade MOX fuel, Areva, 2010, available at[3].pdf

[44] Douglas Birch and R. Jeffrey Smith, South Carolina’s delegation in Congress critical to saving costly Savannah River plant, McClatchy News, June 27, 2013, at

[45] Matthew L. Wald, U.S. moves to abandon costly reactor fuel plant, New York Times, June 26, 2013, at

[46] Roy E. Gephart, Short history of Hanford waste generation, storage and release, Pacific Northwest National Laboratory, 2003, at

[47] Michele Stenehjem Gerber, History of Hanford site defense production, Fluor Hanford, 1999, available at

[48] David Harvey, History of the Hanford site, 1943-1990, Pacific Northwest National Laboratory, 2002, at

[49] John R. Cauley, Announcement of plans for nuclear materials cutback catches Soviet flat-footed, Kansas City Times, January 9, 1964, p. 16

[50] Philip Shabecoff, Reactor on coast called a hazard, New York Times, February 7, 1971, p. 61

[51] Hanford Organizations, derived from J.D. Briggs [8], Hanford Site, Wikipedia, 2013, at

[52] Paul Loeb, Nuclear Culture: Living and Working in the World's Largest Atomic Complex, Coward, McCann & Geoghegan, 1982

[53] Les Blumenthal, Associated Press, Hanford whistleblowers tell of safety problems at DOE facilities, AP News Archive, October 22, 1987, at

[54] John K. Wiley, Associated Press, Ruling comes too late to help two Hanford whistleblowers, Spokane (WA) Spokesman-Review, June 5, 1990, p. A6

[55] William Lanouette, Tritium and the Times, Joan Shorenstein Barone Center, Harvard, 1990, at

[56] Lee Bandy, Massive cleanup proposed for nuclear-arms plants, Philadelphia Inquirer, August 2, 1989, at

[57] Office of Environmental Management, Closing the Circle on the Splitting of the Atom, U.S. Department of Energy, 1995, available at

[58] Eric Nalder, DOE hires Casey Ruud to police troublesome Hanford tank farm, Seattle Times, January 7, 1994, at

[59] Benjamin B. Pfeiffer, et al., DOE's management of single-shell tanks at Hanford, Report RCED-89-157, U.S. General Accounting Office, 1989, at

[60] Judy A. England-Joseph, et al., Hanford single-shell tank leaks greater than estimated, Report RCED-91-177, U.S. General Accounting Office, 1991, at

[61] Tim Connor, Groundwater contamination at the Hanford nuclear reservation, Hanford Education Action League, 1989, available at

[62] Michele Stenehjem, Indecent exposure, Natural History 9:6-8, 1990, available at

[63] Eric Nalder, The plot to get Ed Bricker: Hanford whistleblower was tracked and harassed, files show, Seattle Times, July 30, 1990, at

[64] Narasi Sridhar, et al., Hanford tank waste familiarization system, Center for Nuclear Regulatory Analysis, 1997, at

[65] Karen Dorn Steele (Spokane, WA, Spokesman-Review), Radioactive waste from Hanford is seeping toward the Columbia River, in (Paonia, CO) High Country News, September 1, 1997, at

[66] Gustavo A. Cragnolino, et al., Hanford tank waste remediation system, Center for Nuclear Waste Regulatory Analyses, 1997, available at

[67] Keith Schneider, Agreement set for a cleanup at nuclear site, New York Times, February 27, 1989, at

[68] Kelso Gillenwater, Editorial, (Kennewick, WA) Tri-City Herald, quoted by Elouise Schumacher and David Schaefer, Hanford: Time to clean up 45-year act, Seattle Times, February 4, 1990, at

[69] Thomas C. Perry, et al., Hanford tank waste program needs cost, schedule and management changes, Report RCED-93-99, General Accounting Office, 1993, at

[70] R.J. Murkowski, Milestone M-01-00, Tri-Party Agreement milestone summary, U.S. Department of Energy, October, 1993, at "Low-Level Waste Disposal Program has planned to begin the transition from to a glass vitrification...."

[71] Karen Dorn Steele, Hanford admits groundwater threat, Spokane (WA) Spokesman-Review, June 15, 1996, at

[72] Matthew L. Wald, Radiation leaks at Hanford threaten river, experts say, October 11, 1997, New York Times, at

[73] Fluor Hanford, Hanford Integrated Groundwater and Vadose Zone Management Plan, U.S. Department of Energy, 2007, at

[74] John R. Brodeur, Recent leaks from Hanford’s high-level waste tanks, Energy Sciences and Engineering, 2006, available at

[75] Tania Reyes-Mills (CH2M Hill), Developing a lasting groundwater solution at Hanford, Nuclear Decommissioning Report, 2011, at

[76] Geoff Tyree, Hanford site treating record amount of contaminated groundwater, U.S. Department of Energy, July 15, 2013, at

[77] Michele Stenehjem Gerber, On the Home Front: The Cold War Legacy of the Hanford Nuclear Site, University of Nebraska Press, 1992

[78] Richard Morrill, The Hanford Environmental Dose Reconstruction Project, Professional Geographer 41(2):198-203, 1989, at

[79] Technical Steering Panel, Initial Hanford Radiation Dose Estimates, Hanford Environmental Dose Reconstruction Project, U.S. Department of Energy, 1990, available at

[80] Shannon E. West, Sweeping the mess under Hanford's rug, William & Mary Environmental Law & Policy Review 29(3):807-847, 2005, at

[81] Annette Cary, All remaining Hanford vitrification plant deadlines at risk, (Kennewick, WA) Tri-City Herald, October 8, 2013, at

[82] Outlays by year, Environmental Management, U.S. Department of Energy, from Financial Management Service, U.S. Department of the Treasury, at

Thursday, November 21, 2013

The Gravy Plane, costly failures for air-traffic control

On October 1, 2013, the federal government's Web site, HealthCare.gov, crashed and burned at launch--probably attracting more attention than any other government software failure. It provides public access to the federal health-care insurance exchange, a key element of Pres. Obama's health-care reform program. The government said the site attracted an average of two million visitors a day during its first four days, while only six people managed to enroll in insurance plans the first day. [1] [2]

Nevertheless, that was just a Punch-and-Judy show compared with a horrible reign of errors lasting 32 years to date: repeated failures by the U.S. government to produce well integrated and reliable automation in support of air-traffic control. From a perspective of public service, those efforts have been a bipartisan disaster: the offspring of Reagan, Herbert Bush, Clinton, Walker Bush and Obama administrations. For two major government contractors, however, an air-traffic automation disaster became "The Gravy Plane"--a lush source of high-flying government spending--now around $6 billion and climbing. [3] [4] [5]

Tragedy of mismanagement
So far, this has been a tragedy in two acts. The first attempt by the Federal Aviation Administration (FAA) at improving automation for air-traffic control--Advanced Automation System (AAS)--collapsed after 13 years' work and $3.7 billion spent, with little of use delivered. It began as a pie-in-the-sky concept ordered up during the first Reagan administration, in the wake of a 1981 strike by federal air-traffic controllers. At 7 am EDT on August 3 of that year, nearly 13,000 of 17,500 members of the former Professional Air Traffic Controllers Organization walked off the job. Over 11,000 failed to return and were fired two days later. [6] [7]

Former Pres. Reagan, a technology buff who bought into "Star Wars," expected that computers could supplant air-traffic controllers. After a long siege of planning and prototyping--at government expense--IBM's former Federal Systems Division won a large continuing contract over its chief competitor, the former Hughes Aircraft. However, some promises made for AAS proved beyond capabilities of the era's main technologies, and project management was flagrantly bungled by both FAA and Federal Systems. Eventually, during the first Clinton administration, the Federal Systems contract was ended and the 13-year AAS project cashiered. [8] [9]

The second Clinton administration released so-called "NextGen" air-traffic control plans, a long-term program for automation to be carried out in a sequence of stages. NextGen was to start with an effort less ambitious than AAS. In 1998, FAA began planning the En Route Automation Modernization (ERAM) project. During its first year, the first Walker Bush administration channeled the ERAM project to Lockheed Martin--a company with key contacts in that administration--under a sole-source, no-bid contract. [10]

During more than 12 years of implementation through 2013, ERAM has never worked well enough to be placed in full and regular service. [5] Although books have been published on the problems of AAS, so far no comparable account has been published on the problems of ERAM. Unlike AAS, but like HealthCare.gov, ERAM may emerge from its morass of problems and may in time be marshalled into a full-service system. However, FAA had initially advertised full and regular ERAM service by 2007. In 2012 its inspector general found achieving that goal to be unlikely before 2016--making the project 18 years in planning plus implementation. [4] [11] [12]

Top dogs and watchdogs
The U.S. government was able to land men on the Moon in eight years, using technologies of the 1960s that were much less capable than those available 20 and 50 years later. Why would it take more than three decades to improve earth-based air-traffic control automation? Part of the answer looks to be manipulation of contracts, in a program far less visible than a Moon landing, by large companies that had little to lose when failing. In neglecting to plan and manage programs intelligently, monitor work rigorously and prepare alternatives, a rigid and foolish agency allowed those companies to function as monopolies--leaving the government with no other ready sources of services.

The government has several watchdogs expected to uncover mismanagement. They include the Government Accountability Office, the Office of Management and Budget, the Congressional Budget Office and, since 1973, the inspectors general assigned to cabinet departments and later to agencies. The armed services have had inspectors general since the Revolutionary War, but the Department of Agriculture was the first civilian agency to utilize one. However, while all the watchdogs exhibit financial skills, few have management skills and none have technical design skills or competence with automation technologies. They live up to their reputations as "bean counters" and often fail to uncover mismanagement, if only because they just don't understand what's going on.

Software and turkeys
Computer software has sometimes been considered a field of engineering. However, unlike development of designs in other engineering fields, with software the final design is the product. There is no physical product to inspect and measure. Moreover, ultimate software designs--"code"--are often thousands of times more complex than other final engineering designs. It took about 30 years for the critical lessons about how to cope with these challenges to emerge within the software professions. So far those lessons have rarely penetrated the heads of government watchdogs. They visit software shops as foreigners, literally not speaking the languages. Thus, back in the shop, bean counters have long been called "turkeys"--not the brightest birds.

The inspector general responsible for FAA in 2005 displayed an exemplary turkey profile. Here the bean counter was foolishly preoccupied with "lines of code." The report revealed that Lockheed Martin was reusing code from legacy systems. The company must therefore have been interfacing new Ada code with ancient code in Jovial, and it apparently never performed a first-principles analysis of communication design. Both are major barriers to success, yet the bean counter was busy counting "lines of code." The bean counter did worry over the cost of finding software expertise with Ada although not with Jovial. Both languages emerged from military environments, and neither attracted much other use. Strong skill with Jovial today would be comparable to fluency in ancient Phoenician, while Ada might be compared with Old High German. [11] [12]

The bean counter's main sally was to recommend a "value-engineering analysis" to see if fewer new computers could be bought than one per long-range ("en route") control center. Even a bean counter might be able to compare at least $2 billion going for software with perhaps $20 million for computers and see that potential savings on computers were not going to be much of a benefit. That was where the turkey profile sharpened, but the bean counter had already given the game away with the title of the report: "FAA's En Route Modernization program is on schedule." Later events showed the project was effectively years behind schedule by that point. The bean counter had either been enlisted or been hoodwinked.

Air-traffic control automation
U.S. air-traffic control is a distributed system, currently with 22 long-range control centers, 164 regional-range centers and a local-range center for each large and medium-size airport. These are known, respectively, as "air route traffic control" or "en route" centers, "terminal radar approach control" or "Tracon" centers, and "airport traffic control towers" or just "towers." FAA oversees all the facilities, and it operates the en route centers, the Tracon centers and some of the towers, known as the "federal control towers." [13] [14] [15]

Starting in the 1960s, air-traffic control automation was developed by FAA personnel to assist the air-traffic controllers. It has undergone upgrades and computer replacements, but there has been no change to its basic structure since the 1960s. Like the military backgrounds from which many FAA personnel came, and like the computers typical of the era in which it originated, the structure is hierarchical. A central coordinating computer communicates with "en route" center computers, which communicate with Tracon center computers, which communicate with radars serving their regions. Air-traffic controllers coordinate handovers of flights between Tracons and towers by voice. Officially called the National Airspace System, the software is informally known as Host--an emblem of the era in which it originated.

The Host and its keepers
In the beginning was the Host. The first model of distributed computing, emerging from the regimented 1950s, was God-fearing and hierarchical. At the center was a "host computer" that coordinated activities. It took many years for software developers to recognize and overcome the liabilities of the model: its susceptibility to overloading communications channels and its vulnerability to failure of host computers. If the Internet depended on a hierarchical organization with host computers, it would frequently fail. Instead, it was designed in the late 1980s using a robust model with distributed, independent coordination. FAA, however, continues to operate U.S. air-traffic control on the back of first-generation automation, scrambling to keep a brittle, creaky system working.
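The contrast between the two models can be sketched in a few lines. This is a minimal illustration, not FAA software; the center names and link topology are invented. In the hierarchical model every message passes through the host, so a host failure silences everything; in the distributed model peers coordinate independently and route around losses.

```python
# A minimal sketch, not FAA software: the hierarchical "host" model
# versus distributed, independent coordination. Center names ("ZNY",
# "ZDC", "ZTL") and the link topology are invented for illustration.

def route_via_host(sender, receiver, host_up):
    # Hierarchical model: every message passes through the central
    # host, so a host failure takes down all communication.
    if not host_up:
        raise RuntimeError("host down: the whole network fails")
    return f"{sender} -> host -> {receiver}"

def route_mesh(sender, receiver, links):
    # Distributed model: peers use any direct link or a one-hop
    # relay; no single computer is a point of total failure.
    if (sender, receiver) in links:
        return f"{sender} -> {receiver}"
    for a, b in links:
        if a == sender and (b, receiver) in links:
            return f"{sender} -> {b} -> {receiver}"
    raise RuntimeError("no path between centers")

links = {("ZNY", "ZDC"), ("ZDC", "ZTL")}
assert route_via_host("ZNY", "ZTL", host_up=True) == "ZNY -> host -> ZTL"
assert route_mesh("ZNY", "ZTL", links) == "ZNY -> ZDC -> ZTL"
```

Flip `host_up` to `False` and the first routine fails for every pair of centers at once, while the mesh keeps routing around whatever individual links remain.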

Between 1969 and 1977, Host was mounted on IBM model 9020 computers--a specialized version of the System 360, model 65, introduced in 1966. It was programmed by FAA personnel in Jovial--an early high-level language derived from IAL, later called Algol-58--and in IBM 360 machine code. In 1982, amid fears that equipment would become unmaintainable, FAA began plans to renovate the system. A key problem was software, written partly in a little-known, obsolete high-level language and partly in machine code for an obsolete computer. Because of a sense of urgency, that effort became independent of AAS and was treated as a short-term measure to sustain operations. In 1985, IBM was awarded a contract. IBM replaced the original computers with model 3083, rewrote machine code for the new computers and adapted Jovial language support to run on them. [16]

After termination of the AAS project, IBM sold its Federal Systems division to Loral, which later sold it to Lockheed Martin. As an arm of Lockheed Martin, the former Federal Systems division continued to do business with the federal government, maintaining air-traffic control under Host for FAA. Meanwhile, both Clinton administrations tried to sweep cobwebs out of FAA and to develop a sustainable architecture for air-traffic control automation. Yet a 1997 report from the (then) General Accounting Office still complained, "FAA lacks a system architecture." Yes, indeed. That was more honking from bean counters and turkeys. Few of the people who wrote the legacy Host software, starting in the 1960s, would have recognized the term "architecture" as meaningful in their work. Like the builders of Roman roads, they became skillful at using the resources available to them to do the jobs they were assigned. By 1997, however, FAA had produced an architecture to guide future air-traffic control--NextGen--and was about to plan its first stage: ERAM. [17]

Unfortunately, time ran out. No contract for ERAM was awarded before the second Clinton administration expired. When the opportunity to let a contract for ERAM fell into the lap of the new Walker Bush administration, Lockheed Martin turned up in first post position. Norman Mineta, the new secretary of Transportation, had been senior vice president and managing director at Lockheed Martin. His deputy, Michael Jackson, had been vice president and general manager at Lockheed Martin IMS, Transportation Systems and Services. Lynne Cheney, wife of Vice President Dick Cheney, had been a Lockheed Martin director until shortly before Mr. Cheney took office. The fix was in; the new air-traffic control architecture was mostly out. [10]

Chiseling a contract
Lockheed Martin lacked practical interests in NextGen architecture except as an advertisement. To the contrary, its interests lay in preserving cranky and poorly documented Host software, for which the government had paid the company to develop unique expertise. Other organizations would be unable to compete with Lockheed Martin at projects based on Host, because they would lack the gradually acquired, hands-on expertise. The Walker Bush administration negotiated a sole-source, no-bid ERAM contract with Lockheed Martin in 2001. While it appeared even-handed, with fixed-price deliverables and performance incentives, it had an escape clause for Lockheed Martin: once the government "accepted" ERAM, subsequent work would be billed at cost-plus. [4] [18] [19]

During the Walker Bush administrations, FAA failed to require, and of course Lockheed Martin did not perform, a first-principles analysis of communication design--to find an optimum approach taking best advantage of decades of progress since Host was produced. Instead, Lockheed Martin grafted ERAM onto the legacy Host software, committing the government to indefinite support of a marginally reliable antique, housed inside a new shell. The company pushed the envelope to deliver ERAM early, promising to deploy it to the FAA "en route" control centers starting in 2005. [20]

With Lockheed Martin at work, ERAM began reproducing vulnerabilities of legacy air-traffic control. Although the company accumulated a substantial number of problem reports, nevertheless in October, 2007, FAA signed off on government acceptance. The acceptance was based on bench testing only, at the FAA Technical Center, without using ERAM in a field setting, much less using it for air-traffic control. Qualifying for "early delivery" got the company out of jail financially and justified a large "performance incentive" fee. Through 2011, the ERAM project paid Lockheed Martin over $150 million in incentive fees. [21] [22]

Truth or consequences
With the start of the first Obama administration, Lockheed Martin lost its key allies at FAA, and FAA got a new administrator in J. Randolph "Randy" Babbitt, a former commercial airline pilot and former head of the Air Line Pilots Association. Mr. Babbitt's background prepared him to take on chronic morale problems at the agency but left him unequipped to deal with decades of mismanaged automation technology. He made the mistake of assuming that, just because the FAA Technical Center said ERAM was OK, it must be. In the spring of 2009, he authorized the "en route" center in Salt Lake City to test ERAM with live air traffic--without ever having exercised the software by running it only as a backup. [23]

Once again, a government factotum ignored a well known rule taken to heart by professional software developers for decades: "What you haven't tested doesn't work." It didn't. Luck was with Mr. Babbitt, the crews and the passengers: the first three system crashes, in the wee hours of weekend mornings, attracted mainly protests from air-traffic controllers, who could see first hand the dangers being caused. They, in turn, got the ear of Utah senators, who forced a delay in further testing. It didn't help much; the bugs in ERAM were too deeply embedded to be readily detected or corrected. The next live test got more attention. At 5 am ET on November 19, FAA systems crashed so badly that air-traffic controllers had to send information between centers by fax and hold up flights that were not already airborne. Delays rippled nationwide for most of the day. [24] [25]
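The rule has teeth because software fails on exactly the inputs its tests never covered. A toy sketch makes the point; the record format and function name here are invented, not taken from ERAM.

```python
# Illustrative sketch of "what you haven't tested doesn't work":
# a routine that passes its only bench test yet fails at once on
# input the bench suite never exercised. The record format and
# function name are invented, not drawn from ERAM.

def parse_flight_id(record):
    # Assumes every record is a three-letter airline code followed
    # by a flight number, e.g. "UAL123" -- the only shape the bench
    # data ever had.
    airline, number = record[:3], record[3:]
    return airline, int(number)

# The bench test passes, so the lab signs off.
assert parse_flight_id("UAL123") == ("UAL", 123)

# A field input the bench never saw -- a general-aviation tail
# number -- fails immediately.
try:
    parse_flight_id("N761QW")
    raise AssertionError("expected a failure")
except ValueError:
    pass  # untested input, immediate crash
```

Bench testing at the FAA Technical Center could only confirm behavior on the cases someone thought to run; live traffic supplied the cases nobody did.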

The highly public consequences of the November, 2009, ERAM crash led to a stop-and-go pattern of further tests and repairs persisting through 2013, four years later, and probably for at least a few more years. The most congested regional centers--New York, Washington, Atlanta and Miami--still can't use ERAM, and seven other centers are able to use it only in low-traffic conditions, mostly at night and on weekends. Most deployment for the NextGen program remains on hold until ERAM becomes a full-service system. When and if it does, it will still lack the architectural integrity that was intended for NextGen in the mid-1990s and therefore lack the long-term reliability and extensibility that should have accompanied a sound program. [5] [26]

[1] Tim Mullaney, Obama adviser says demand overwhelmed, USA Today, October 6, 2013

[2] Susan Cornwell and David Morgan, Documents show enrollment in Obamacare very small in first days, Reuters, October 31, 2013

[3] Robert L. Glass, ed., Software Runaways: Monumental Software Disasters, Prentice-Hall, 1998, p. 71

[4] Jeffrey B. Guzzetti, Weaknesses in program and contract management contribute to ERAM delays and put other NextGen initiatives at risk, USDOT Report No. AV-2012-179, September 13, 2012

[5] Jeffrey B. Guzzetti, FAA has made progress fielding ERAM, but critical work on complex sites and key capabilities remains, USDOT Report No. AV-2013-119, August 15, 2013

[6] Rebecca Pels, The pressures of PATCO: Strikes and stress in the 1980s, Essays in History 37 (University of Virginia), 1995

[7] Willis J. Nordlund, Silent Skies: The Air Traffic Controllers Strike, Praeger, 1998

[8] Mark Lewyn, Flying in place: The FAA's air control fiasco, Business Week, April 25, 1993

[9] Robert Britcher, The Limits of Software: People, Projects and Perspective, Addison-Wesley, 1999

[10] Matthew L. Wald, FAA to skip bids on air traffic system, New York Times, March 7, 2001

[11] David A. Dobbs, FAA's En Route Modernization program is on schedule, USDOT Report No. AV-2005-066, June 29, 2005

[12] Mike Paglione, Metrics-based approach for evaluating air-traffic control automation of the future, Federal Aviation Administration, 2006

[13] Federal Aviation Administration, Air Route Traffic Control Centers, 2013

[14] Federal Aviation Administration, Terminal Radar Approach Control Facilities, 2013

[15] Federal Aviation Administration, Airport Traffic Control Towers, 2013

[16] John Andelin, Review of FAA's 1982 National Airspace System plan, Office of Technology Assessment, 1982

[17] Randolph C. Hite, Complete and enforced architecture needed for FAA, General Accounting Office, February 3, 1997

[18] Calvin L. Scovel, III, Challenges in meeting FAA's long-term goals for the Next Generation air transportation system, testimony to the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, April 21, 2010

[19] Anthony N. Palladino, Protest of Raytheon Company, FAA docket no. 01-ODRA-00180, June 15, 2001

[20] Unattributed, Lockheed reports good early progress on en route projects, World Aviation (Beijing, China), August 16, 2004

[21] James K. Reagan, Independent assessment of the ERAM program, MITRE Lincoln Laboratory, October, 2010

[22] John Sheridan, FAA remains quiet on ERAM budget overruns and delays, Aviation International News, December, 2011

[23] Sholnn Freeman, FAA asked to do more to fix morale, Washington Post, December 1, 2009

[24] Joan Lowy, Associated Press, Utah lawmakers say air traffic computer not ready, Salt Lake Tribune, June 13, 2009

[25] Matthew L. Wald, Backlog of flight delays after computer problems, New York Times, November 20, 2009

[26] John Sheridan, ERAM development is reminiscent of failed AAS program, Aviation International News, October, 2013

Friday, November 15, 2013

Fixers and spinners, repairing the project

In early autumn, long before holidays: four fixers fixing, three spinners spinning, two layers laying--one rotten egg! That's how HealthCare.gov turned out when it went live October 1, 2013: a Web-site disaster built for the Obama administration's health-care reform program. [1] A story spun by Pres. Obama's staff was that nobody knew disaster was brewing--not at all likely. [2] Someone frequently communicating with the White House did know and likely did say. Crowned heads chose to ignore warnings, and they got what they deserved. [3] [4]

A key figure is Henry Chao, the deputy director and deputy chief information officer in the Office of Information Services, an agency of the Centers for Medicare and Medicaid Services (CMS), a bureau of the Department of Health and Human Services. He appears to have enlarged the usual scope of information services staff. He also stepped beyond the typical role of software architect, his designation in the project. Mr. Chao doesn't have an engineering degree, a business administration degree, hands-on experience developing commercial software or business experience managing software development. However, working at CMS for nearly 20 years, he is reported to have led the final design and implementation stages for several software systems--the sort of on-the-job training that once was common among software developers. One of his projects had a rocky ride: the initially faulty online information for Medicare Part D. [5] It's not clear what part Mr. Chao played in that project.

Several news reports and Mr. Chao's testimony and appearance at a televised hearing of the House Oversight and Government Reform Committee on November 13, 2013, showed a person concerned about the progress of contractors building HealthCare.gov and its back-end data import, access, analysis and export software--monitoring tasks as a diligent project manager would. At the hearing, he sounded like a seasoned software development manager, trying to avoid being a spokesperson for others. He was careful to distinguish his technical responsibilities from the activities of policymakers and from the operations and finance scopes of program directors. However, an issue that repeatedly stymied him in trying to answer questions was that no one in the Obama administration looks to have been clearly delegated the role of program director. [6]

The government customer for the software is another CMS agency, the Center for Consumer Information and Insurance Oversight. Its director since August, 2012, has been Gary Cohen, a lawyer who also serves as a deputy administrator of CMS. The Congressional committee invited to its hearing neither Mr. Cohen nor Mr. Chao's boss Tony Trenkle, the department's chief information officer. The Department of Health and Human Services was represented by Frank Baitman, the deputy assistant secretary for information services, and by Mr. Chao. [6] It's unusual for Congress to hear from workers that deep in the government ranks. Mr. Baitman had little to say and got few questions. Mr. Chao looked to have put on extra pounds--maybe a sign of his occupation's usual drugs of abuse: sugar and caffeine.

Writing for Commonwealth Fund of New York City, Jane Norman, HealthBeat associate editor for Congressional Quarterly, had reported the previous March about an insurance industry conference held a few days earlier in Washington, DC. Mr. Cohen and Mr. Chao were the speakers for the opening session of the conference. [7] Ms. Norman wrote, "Chao was frank about the stress and tension of the compressed time frame involved in setting up the exchanges." She quoted him as saying, "We are under 200 days from open enrollment, and I'm pretty nervous." [8] Another quote from Mr. Chao, widely circulated in business media, was ignored by general-interest news writers at the time: "Let's just make sure it's not a third-world experience."

According to Ms. Norman, at the industry meeting Mr. Cohen said, "I think it's only prudent not to assume everything is going to work perfectly on day one and to make sure that we've got plans in place to address things that may happen...Everyone recognizes that day one will not be perfect." About six weeks later, Mr. Cohen appeared before a subcommittee of the House Committee on Energy and Commerce. To that audience, he said, "We are on schedule, and I am confident that Americans in all states will enjoy the benefits of the Affordable Care Act...Beginning on October 1, 2013, when consumers visit the Web site of their marketplace, they will be able to submit an application...." [9] Different strokes for different folks.

The House Oversight hearing on November 13 produced theatre. Committee chair Darrell Issa (R, CA) played Carlos the Jackal--snide, bullying and juvenile--snickering at his own jokes. Todd Park, U.S. Chief Technology Officer since March, 2012, played Godfather--perhaps an old role for him, since founding health IT companies AthenaHealth and Castlight Health starting at age 24. After the White House refused to send him voluntarily as a witness, Rep. Issa had issued a subpoena for Mr. Park to appear. He mentioned his Korean ancestry and deep regard for U.S. government. [6] As can happen with Congressional committees, the hearing produced little new information. Questions from reactionary committee members often turned rhetorical. Some committee members sounded foggy.

Rep. John Tierney (D, MA) developed the only sustained and intelligent dialog with a witness. At about two and three-quarters hours into the hearing, he asked Mr. Chao how a decision to disable the Web site's "anonymous shopper" feature had occurred, shortly before October 1. Chairman Issa tried to cut him off, asking, "Will the gentleman yield?" Rep. Tierney simply said, "No," a rare event in the ossified world of Congress. [6] Mr. Chao explained that getting the "plan compare" feature working, essential to submitting an insurance application, took priority over fixing bugs in "anonymous shopper." Throughout, the witnesses maintained a serious demeanor that mocked diatribes from reactionary committee members. Several times the Godfather mentioned "incredibly hard work" being put in by the "project team." No committee member thought to ask him who the team leader was.

With the wash hung out to dry after October 1, no hearing witness would offer a blanket guarantee that the Administration's goal of making the Web site function by November 30 would be reached, nor was any willing to estimate the cost of the repair work. However, committee members seemed more frustrated that no witness would say what had been conveyed to the White House before October 1 about software problems and failures to achieve schedule goals. In response to a direct question, Mr. Chao said such issues should probably go to Ms. Tavenner. Marilyn Tavenner, the CMS administrator, and Mr. Chao had been witnesses at another hearing held by the same committee June 17, but that hearing never reached such questions. Ms. Tavenner had said then, "I want to assure you that October 1, 2013, the health-insurance marketplace will be open for business." [10]

Missing in action at the November 13 Congressional committee hearing were representatives of the contractors working on HealthCare.gov and Jeffrey Zients, who was appointed the new fixer-in-chief by Pres. Obama October 22. Mr. Zients was formerly deputy director for management at the Office of Management and Budget. He organized a 3-day review, then announced that the Web site would be in full service by November 30 and that one of the project's contractors, Quality Software Services, would "oversee repairs" as a "general contractor"--not a term ordinarily used in software development. [11] It is also not clear how a government agency can legally delegate its contract management responsibilities to a "general contractor" when a project is already underway.

Quality Software Services built the data access hub for HealthCare.gov, which also serves the online exchanges run by states. This complex software is intended to provide a single, secure point of access to government data needed to verify and process health-care insurance applications, and it forwards data among the federal and state exchanges and the participating insurance companies. At the November, 2013, Congressional committee hearing, Mr. Chao claimed the hub was tested and working, although it had become the object of many complaints about poor performance and garbled information. [12] It is not clear whether Quality Software Services assumed the role of system integrator being performed by CMS. Had the Congressional committee really wanted to find out what was going wrong with the project and how defects would be repaired, it missed an opportunity to assemble some of the most knowledgeable people.
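The role the hub plays can be sketched in miniature. This is a hypothetical, greatly simplified illustration: the source names and checks are invented, since the real hub's interfaces are not public. The essential idea is one entry point that queries several back-end government data sources and returns a combined verdict on an application.

```python
# A hypothetical, greatly simplified sketch of a "data access hub":
# one entry point that checks an application against several back-end
# sources and returns a combined verdict. The source names and checks
# are invented for illustration.

def verify_application(applicant, sources):
    # Query every data source through the single hub interface; the
    # hub approves only when all sources answer favorably.
    results = {name: check(applicant) for name, check in sources.items()}
    return all(results.values()), results

sources = {
    "identity_check":  lambda a: a.get("ssn") is not None,
    "income_check":    lambda a: a.get("income", -1) >= 0,
    "residency_check": lambda a: a.get("resident", False),
}

ok, detail = verify_application(
    {"ssn": "000-00-0000", "income": 30000, "resident": True}, sources)
assert ok and all(detail.values())
```

The same single point of access that simplifies verification also concentrates risk: when the hub misbehaves, every federal and state exchange that depends on it suffers at once, which is why the complaints about its performance mattered.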

CMS and its contractors failed to conduct a reasonable program of testing before opening HealthCare.gov to public access. [13] After integration with its data access hub, the complex Web site received only about two weeks of testing--rather than the several months needed for even minimal assurance of reliability. Nevertheless, on September 27, 2013, CMS Administrator Marilyn Tavenner approved open release of the largely untested software, with predictably disastrous consequences. [14] She ignored a well known rule taken to heart by professional software developers for decades: "What you haven't tested doesn't work." Government memos made available to the public in redacted form show a working environment near panic in July, 2013--the name of Marilyn Tavenner looking to have been blacked out. [15] In a message dated July 16, Henry Chao asked, "Did you see my other email about first just talking to [Marilyn] to convey just how low the confidence level and then pile on top of that the request for more money when we constantly struggle to get a release done, vacillating on delivery by due dates, and worse of all poor [quality assurance]...." [sic]

Efforts to build the federal health-care insurance exchange and its Web site seem to have been smaller in scale than legions of workers sometimes imagined. [16] CGI Federal has been the main contractor for the Web site, as distinguished from the data access hub. As of July, 2013, one news writer found only ten software developers from CGI Federal working on the "plan compare" feature. [17] However, CGI Federal was raking in big bucks. Contrary to popular impression, it never "won" a contract to build the Web site, because there has been no such contract. Instead, it has a continuing contract awarded in 2007, during the Walker Bush administration, to provide Health and Human Services with a broad range of software development and maintenance. During 2013, the total contract payments to CGI Federal were growing from under $100 million to nearing $300 million. The Administration exploited the legacy contract with CGI Federal, avoiding competitive bidding on a new contract for HealthCare.gov and following a hazardous trail blazed by the former Federal Systems Division of IBM.

During the first Reagan administration, also under a continuing contract, Federal Systems began to develop a pie-in-the-sky concept called Advanced Automation System (AAS), which aimed to use computers to replace air-traffic controllers who had gone on strike. Over 13 years, about $3.7 billion was spent on AAS, but little of use was ever produced. During the first Clinton administration, the Federal Systems contract was terminated, and the AAS project was cashiered. The second Clinton administration then developed the NextGen air-traffic automation program, which was designed to start with a project much less ambitious than AAS, called En Route Automation Modernization (ERAM). During its first year, the Walker Bush administration awarded ERAM to Lockheed Martin--a company with key contacts in that administration--under a sole-source, no-bid contract. [18] Through more than 12 years of development, through 2013, ERAM has never worked well enough to be placed in full and regular service.

On a more encouraging note, while the November 13 Congressional committee hearing was underway, a session with HealthCare.gov found the Web site working and responsive. It proved much brisker than Web sites for the House Oversight Committee, the New York Times or the Washington Post. By that point, the "anonymous shopping" feature had become available, although no committee member seemed to know about it. When asked for information about available health-care insurance plans for New York and Massachusetts, HealthCare.gov exited to Web sites of the exchanges run by those states. For New Hampshire, a federal-partnership state, it displayed 11 plans offered by Anthem, the only participating insurer for that state, along with monthly prices.

[1] Tony Jewell, Descent into madness: an account of one man's visit to HealthCare.gov, October 2, 2013

[2] Greg Botelho and Holly Yan, Sebelius says Obamacare Web site problems blindsided the President, CNN, October 23, 2013

[3] Gabriel Debenedetti and Susan Cornwell, Official who made big health-care Web site decision a frequent White House visitor, Reuters, October 25, 2013

[4] Juliet Eilperin and Sandhya Somashekhar, Private consultants warned of risks before launch, October 1, Washington Post, November 18, 2013

[5] Brett Norman, Health official involved in Obamacare site also had role in Medicare rollout, Politico, October 25, 2013

[6] Obamacare implementation, House Committee on Oversight and Government Reform, November 13, 2013

[7] 2013 Exchange Conference, America's Health Insurance Plans, Ritz Carlton, Washington, DC, March 14, 2013, Schedule at a Glance

[8] Jane Norman, HHS working on contingency plans in case exchanges not ready in time, Commonwealth Fund (New York, NY) Newsletter, March 18, 2013

[9] Statement of Gary Cohen, JD, concerning the Center for Consumer Information and Insurance Oversight and the implementation of the Patient Protection and Affordable Care Act, Subcommittee on Oversight and Investigations, U.S. House Committee on Energy and Commerce, April 24, 2013

[10] Privacy, security and fraud, House Committee on Oversight and Government Reform, June 17, 2013

[11] Caroline Humer and Sharon Begley, White House says 'Obamacare' Web site will be fixed by end of November, Reuters, October 25, 2013

[12] Robert Pear, Sharon LaFraniere and Ian Austen, From the start, signs of trouble in federal project, New York Times, October 13, 2013

[13] Robert Pear, Tests of Web site only two weeks before opening, New York Times, October 25, 2013

[14] James Kerr and Henry Chao to Marilyn Tavenner, Re federally facilitated marketplace, undated, available as a nonsearchable scanned image. The document includes approval of the authority to operate, signed by CMS Administrator Marilyn Tavenner, dated September 27, 2013, with reviews acknowledged by Teresa Fryer, Tony Trenkle and HHS COO Michelle Snyder. Searchable text also posted pseudonymously.

[15] Nonsearchable images for a selection of redacted e-mail messages sent during July, 2013, concerning the project, released by the House Committee on Oversight and Government Reform, November, 2013, without source citations or explanations. Abbreviations in some messages documented in [13].

[16] Elise Hu, Internal e-mails reveal warnings HealthCare.gov wasn't ready, National Public Radio, November 15, 2013

[17] Sharon Begley, As Obamacare tech woes mounted, contractor payments soared, Reuters, October 17, 2013

[18] Matthew L. Wald, FAA to skip bids on air traffic system, New York Times, March 7, 2001