Energy, the Master Resource, has always played a central role in socioeconomic development, and, according to Nobel laureate Richard Smalley, acquiring enough of it is the single most important challenge facing humanity. For economic growth and prosperity, nothing is more important than surplus energy. The stability of each country, and of the world at large, depends on the continued availability of reasonably priced energy. The per capita energy consumption in the various regions of the world correlates with each region's wealth, health, and general education level. World energy consumption has increased dramatically over time and is projected to continue increasing to meet the needs of the developing world. This growth in energy demand will be exacerbated by the near doubling of the world's population expected to occur within the next 50 years. The proportion of electric power to total energy used is also expected to grow during this period.
 Continued dependence on fossil fuels, the primary source of energy both in the United States and in the world at large, is a problem. There are several ways this can go wrong. We might run out, or at least run low in regions with high demand; that would lead to economic disruptions and, most likely, to armed conflicts. Or we might not run out, and instead trash the planet through pollution, like the Gulf of Mexico oil spill, and global warming. To put this in context: as it is currently configured, food production and distribution uses fully two-thirds of U.S. domestic oil production; it has been calculated that the amount of food an average North American consumes in a year requires the equivalent of 400 gallons of petroleum to produce and ship. This is one reason why a cessation of oil exports to the United States would be highly disruptive; most of our domestic production would have to go toward feeding ourselves.
 There are many potential sources of alternative energy. However, only six seem to have a realistic chance of making significant contributions to our energy supply: hydroelectric, biomass, geothermal, wind, solar, and nuclear. There are others, such as tidal power and ocean thermal energy conversion, but they rank low on the practicality scale, so we will focus on the aforementioned six.
 Of the non-fossil fuel power sources, water power seems to have the least potential for further growth. Hydroelectric dams have already been built in the most likely places, and new ones are opposed because of their disruption of the environment.
 Biofuels need land to grow on, but this means either displacing agricultural land, which is bad for the hungry poor, or converting forests into fields, which is bad for the environment. Growing plants store carbon in their roots, shoots, and leaves. Roughly half of a tree's dry weight is carbon, and that carbon ends up in the atmosphere when the tree is cut down, so there is a huge “carbon debt” embedded in biofuels.
 The global geothermal heating capacity, supplied by over a million heat pumps, is estimated at 28 GW and is growing by about 10% annually. Geothermal electric power is far less efficient; it accounted for only 0.3% of global electricity production in 2007 and is growing by only 3% annually. The average worldwide geothermal gradient is 25-30 °C per km of depth, so outside of a tectonic plate boundary, wells would have to be drilled several kilometers deep. Geothermal fluids drawn from the deep earth may carry a mixture of gases with them, notably carbon dioxide and hydrogen sulfide. When released to the environment, these pollutants contribute to global warming, acid rain, and noxious smells in the vicinity of the plant. In addition, the hot water from geothermal sources may contain trace amounts of dangerous elements such as mercury, arsenic, and antimony which, if disposed of into rivers, can render the water unsafe to drink. Finally, the hydraulic fracturing process has been known to trigger earthquakes; more than 10,000 seismic events measuring up to 3.4 on the Richter scale occurred over the first 6 days of water injection at a plant in Basel, Switzerland.
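 The quoted gradient makes it easy to see why such deep wells are needed. A minimal sketch, assuming a hypothetical 15 °C surface temperature and a ~150 °C target fluid temperature for power production (both illustrative assumptions, not figures from this text):

```python
# Hedged sketch: drilling depth needed to reach a target temperature,
# using the 25-30 deg C per km gradient quoted above. The 15 deg C
# surface temperature and 150 deg C target are illustrative assumptions.

def depth_for_temperature(target_c, surface_c=15.0, gradient_c_per_km=25.0):
    """Return the well depth in km needed to reach target_c."""
    return (target_c - surface_c) / gradient_c_per_km

for gradient in (25.0, 30.0):
    depth = depth_for_temperature(150.0, gradient_c_per_km=gradient)
    print(f"{gradient} C/km -> {depth:.1f} km")  # 5.4 km and 4.5 km
```

The result, roughly 4.5 to 5.4 km, matches the "several kilometers deep" figure above.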
 Wind and solar suffer from low energy density and intermittent supply. It is difficult to save the energy from windy or sunny days to be used on calm or cloudy days, so energy systems employing wind and solar need significant conventional backup capacity. Solar power offers clean energy and useful heating, but it lacks the concentration to feed the needs of cities and large factories.
 With the exception of hydroelectric and nuclear, none of the alternatives offer concentrated dependable energy that is clean and free of greenhouse gases.
 Conventional power sources kill thousands of people each year with their pollution. A 1000-megawatt coal-burning power plant burns more than two million tons of coal in a year. Ironically, because coal contains radioactive trace elements, even when a coal plant uses scrubbers or precipitators to filter out 95 percent of its particulate emissions, it still introduces more radioactive material into the atmosphere than a nuclear plant does. By substituting for fossil fuel plants, U.S. nuclear plants in 1991 saved 145 million tons of coal, 265,000 barrels of oil, and 1.7 trillion cubic feet of natural gas, and in the process kept about 430 million tons of carbon dioxide out of the atmosphere.
 Emissions from burning fossil fuels (especially carbon dioxide) trap heat in the earth's atmosphere and are leading to a rise in the average world temperature. Global warming is having a devastating effect on coastal communities, agriculture, and other areas.
 That leaves nuclear, but with no nuclear fission plants ordered in the United States since 1974, it is clear that growth in the American nuclear fission industry has stalled, if not reversed. One reason is obviously the decline in public confidence following the accidents at Three Mile Island, Chernobyl, and, most recently, Fukushima, where the damage was triggered by a tsunami.
 Fission is just too dangerous; if nuclear power is to rescue us, it will have to be through fusion. The basic concept in fusion is to bring two atoms close enough together that they merge into one bigger atom. This is accomplished by getting them close enough together that the short-range attractive strong nuclear force overcomes the long-range repulsive electrostatic force. When two light nuclei fuse, their combined mass is generally slightly smaller than the sum of their original masses. The difference in mass is released as energy according to Albert Einstein's mass-energy equivalence formula E=mc².
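 The mass-energy conversion can be worked through for the deuterium-tritium (D-T) reaction, D + T → ⁴He + n. A minimal sketch using standard published isotope masses (the mass values and the amu-to-MeV conversion are assumed reference figures, not from this text):

```python
# Hedged sketch: the mass defect of the D-T reaction worked through
# E = mc^2 in atomic mass units. Isotope masses are standard reference
# values; the ~17.6 MeV result is the well-known D-T energy yield.

U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

masses_u = {
    "deuterium": 2.014102,
    "tritium":   3.016049,
    "helium-4":  4.002602,
    "neutron":   1.008665,
}

delta_m = (masses_u["deuterium"] + masses_u["tritium"]
           - masses_u["helium-4"] - masses_u["neutron"])
energy_mev = delta_m * U_TO_MEV
print(f"mass defect = {delta_m:.6f} u -> {energy_mev:.1f} MeV")  # ~17.6 MeV
```

About 0.4% of the fuel's rest mass is converted directly to energy, which is why such tiny quantities of fuel suffice.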
 The risk of a runaway reaction in a fusion reactor is zero, since there is only a tiny amount of fuel inside the reactor at any one time, and fusion reactions cannot proceed without fuel. Significant deviations from the normal (optimal) operating conditions will only make the reaction rate slower and more inefficient. This is an inherent level of safety; no elaborate failsafe mechanisms are required. In comparison, a fission reactor is inherently dangerous; it typically contains enough fuel to last several years, and any deviation from the normal (precarious) operating parameters could potentially lead to a runaway meltdown situation.
 The risk of radioactive contamination from fusion reactors is low since they do not require huge stockpiles of radioactive materials. In fact, in a typical fusion reaction, tritium would be the only radioactive substance. If a reactor breeds tritium at the same rate it burns tritium, a stockpile is not even necessary. Even if there is a leak, the beta radiation (electrons) emitted by tritium is not as dangerous as fission neutrons. Electrons cannot penetrate as deeply as neutrons and, as a consequence, are harmlessly stopped before crossing the first layer of dead skin cells. Our lungs do not absorb hydrogen gas, so even breathing radioactive tritium gas is relatively safe. In addition, tritium's radioactive half-life is so short (about 12 years) that after 100 years less than 1% of the radioactivity remains.
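 The 100-year claim follows directly from exponential decay. A minimal sketch, assuming the published tritium half-life of roughly 12.3 years (an assumed reference value):

```python
# Hedged sketch: exponential decay of tritium activity, assuming a
# half-life of ~12.3 years (standard reference value, not from the text).

TRITIUM_HALF_LIFE_Y = 12.3

def fraction_remaining(years, half_life=TRITIUM_HALF_LIFE_Y):
    """Fraction of the original radioactivity left after `years`."""
    return 0.5 ** (years / half_life)

print(f"after 100 years: {fraction_remaining(100):.2%}")  # well under 1%
```

One hundred years is slightly more than eight half-lives, leaving roughly 0.4% of the original activity.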
 Earth's oceans contain a truly enormous deuterium energy reserve, one that could power mankind for more than a hundred billion years. The deuterium in one gallon of seawater has as much energy as 300 gallons of gasoline, and fifty cups of seawater contain more energy than two tons of coal. However, it is not the deuterium, but the supply of neutron-absorbing lithium, that places an upper limit on this reserve's potential. Nevertheless, there is enough lithium, using only the amount dissolved in seawater, to supply the world's energy demand for 60 million years.
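 The gasoline comparison can be sanity-checked with a back-of-the-envelope calculation. A rough sketch, in which the deuterium abundance, the specific energy of fully burned deuterium, and the energy content of gasoline are all assumed standard reference values, not figures from this text:

```python
# Hedged back-of-the-envelope check of the "1 gallon of seawater ~ 300
# gallons of gasoline" claim. All constants are assumed reference
# values, not taken from the source text.

GALLON_SEAWATER_KG = 3.9        # ~3.785 L at ~1.03 kg/L
H_MASS_FRACTION = 2.0 / 18.0    # hydrogen's share of water by mass
D_PER_H_ATOMS = 156e-6          # ~156 deuterium atoms per million H atoms
D_SPECIFIC_ENERGY = 3.45e14     # J/kg for a complete deuterium burn (approx.)
GASOLINE_J_PER_GAL = 1.3e8      # ~34 MJ/L * 3.785 L/gal

# Deuterium mass: D atoms are ~2x the mass of ordinary H atoms.
d_mass_kg = GALLON_SEAWATER_KG * H_MASS_FRACTION * D_PER_H_ATOMS * 2
fusion_energy_j = d_mass_kg * D_SPECIFIC_ENERGY
gasoline_gallons = fusion_energy_j / GASOLINE_J_PER_GAL
print(f"~{gasoline_gallons:.0f} gallons of gasoline equivalent")
```

The estimate lands in the few-hundred-gallon range, consistent with the 300-gallon figure quoted above.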
 Even though controlled nuclear fusion has been a goal of scientists for several decades, with billions of dollars spent to develop this energy resource, it has yet to become commercially viable.
 Two technical approaches to fusion power are currently under large scale research and development, magnetic confinement fusion (MCF) and inertial confinement fusion (ICF). These form the basis of a large number of fusion research programs. Magnetic confinement techniques, studied since the 1950s, are based on the principle that charged particles such as electrons and ions, i.e., deuterons and tritons, tend to be bound to magnetic lines of force. Thus the essence of the magnetic confinement approach is to trap a hot plasma in a suitably chosen magnetic field configuration for a long enough time to achieve a net energy release, which typically requires an energy confinement time of about one second. In the alternative ICF approach, fusion conditions are achieved by heating and compressing small capsules of fuel, to the ignition condition by means of tightly focused energetic beams of charged particles or photons. In this case the confinement time can be much shorter, typically less than a millionth of a second.
 A third approach to inertial fusion, called Z-pinch, is currently in the concept development phase; it was initiated in 1999. Z-pinches produce X-rays from an imploding cylindrical array of current-carrying wires that, after being vaporized with several million amperes of current, stagnate on a low-density-foam cylinder, igniting the fuel inside. This approach is expected to generate high per-pulse yields, of around 3 GJ, at repetition rates of around 0.1 Hz. A recent advance for Z-pinch technology is the concept of a recyclable transmission line (RTL), which was developed to address the principal issue of physically connecting a repetitive pulsed-power driver to a fusion target. In the RTL concept, the transmission line is designed as a low-mass (50 kg) structure that is destroyed along with the target on each shot. By making RTLs from the same material as the chamber coolant, or from materials that can be easily separated from the coolant, the RTL materials can be continually recycled to manufacture new RTLs.
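 The per-pulse yield and repetition rate quoted above imply a substantial average power, as this one-line sketch shows:

```python
# Hedged sketch: average fusion power implied by the Z-pinch figures
# above (3 GJ per pulse at about 0.1 Hz, i.e. one shot every ten seconds).

YIELD_PER_PULSE_J = 3e9   # 3 GJ
REP_RATE_HZ = 0.1

avg_power_w = YIELD_PER_PULSE_J * REP_RATE_HZ
print(f"average fusion power: {avg_power_w / 1e6:.0f} MW")  # 300 MW
```

So even at one shot every ten seconds, a Z-pinch plant would average about 300 MW of fusion power.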
 The current invention, Bubble-confined Sonoluminescent-laser Fusion (BSF), produces densities and temperatures comparable to ICF but with longer confinement times.
 MCF plasmas at reactor conditions are very diffuse, because the maximum plasma density that can be confined is determined by the field strength of available magnets. Typical plasma densities are on the order of one hundred-thousandth that of air at Standard Temperature and Pressure (STP). The Lawson criterion is met by confining the plasma energy for periods of about one second.
 In the ICF approach, small capsules or pellets containing fusion fuel are compressed to extremely high densities by intense, focused beams of photons or energetic charged particles. Because of the substantially higher densities involved, the confinement times for ICF can be much shorter. In fact, no external means are required to effect the confinement; the inertia of the fuel mass is sufficient for net energy release to occur before the fuel flies apart. Typical burn times and fuel densities are 10⁻⁹ s and 5×10³² ions/m³, respectively. These densities correspond to a few hundred to a few thousand times that of ordinary condensed solids. ICF fusion produces the equivalent of small thermonuclear explosions in the target chamber. An ICF power plant design, therefore, must deal with very different physics and technology issues than an MCF power plant, although some requirements, such as tritium breeding, are common to both. Some of the challenges facing ICF power plants include the highly pulsed nature of the burn, the high rate at which the targets must be made and transported to the beam focus, and the interface between the driver beams and the reactor chamber.
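 The two regimes can be compared through the density-confinement product of the Lawson criterion. A minimal sketch using the densities and times quoted above, with an assumed D-T threshold of roughly 10²⁰ s/m³ and an assumed STP air number density (both reference figures, not from this text):

```python
# Hedged comparison of the Lawson product n*tau for the MCF and ICF
# regimes described above. The D-T threshold (~1e20 s/m^3) and air
# density at STP are assumed textbook values, not from the source.

AIR_DENSITY_STP = 2.5e25  # molecules/m^3, assumed standard value
LAWSON_DT = 1e20          # s/m^3, rough ignition-scale threshold for D-T

regimes = {
    # name: (density in ions/m^3, confinement time in s)
    "MCF": (AIR_DENSITY_STP / 1e5, 1.0),  # ~1/100,000 of air, ~1 s
    "ICF": (5e32, 1e-9),                  # figures quoted in the text
}

for name, (n, tau) in regimes.items():
    product = n * tau
    verdict = "meets" if product >= LAWSON_DT else "misses"
    print(f"{name}: n*tau = {product:.1e} s/m^3 ({verdict} threshold)")
```

Both approaches reach a comparable n·τ product by trading density against time: MCF holds a thin plasma for about a second, while ICF holds an ultra-dense one for a nanosecond.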
 In inertial fusion the fuel is compressed and heated using driver beams. Achieving ignition requires a large amount of energy to be precisely controlled and delivered to the fuel target in a very short time, and the target must be capable of absorbing this energy efficiently. To produce net energy, the ICF system must have gain, i.e., more energy output than was used to make, compress, and heat the fuel. Driver efficiency and capsule design and fabrication are therefore important issues for an ICF reactor.
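 The gain requirement becomes concrete once driver efficiency is included in the energy balance. A hedged sketch, in which the efficiency, gain, and conversion figures are illustrative assumptions for the example rather than values from this text:

```python
# Hedged sketch: net plant energy balance for an ICF system. For net
# electricity, driver efficiency times target gain (times the
# thermal-to-electric conversion efficiency) must comfortably exceed 1,
# so that only a modest fraction of output is recirculated to the
# driver. All numbers here are illustrative assumptions.

def net_gain_ok(driver_efficiency, target_gain, thermal_to_electric=0.4,
                max_recirculating_fraction=0.25):
    """True if the driver's electricity draw stays within the allowed
    fraction of gross electric output."""
    gross_electric_per_driver_joule = (driver_efficiency * target_gain
                                       * thermal_to_electric)
    return 1.0 / gross_electric_per_driver_joule <= max_recirculating_fraction

# A 1%-efficient driver with target gain 100 recirculates far too much...
print(net_gain_ok(0.01, 100))  # False
# ...while a 10%-efficient (e.g. diode-pumped) driver at the same gain works.
print(net_gain_ok(0.10, 100))  # True
```

This is why driver efficiency and target gain are coupled: a low-efficiency driver forces the target designer toward much higher gains, and vice versa.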
 The necessary energy can be delivered to the fuel by a variety of possible drivers. The four types of drivers receiving the most research attention are solid-state lasers, KrF lasers, light-ion accelerators, and heavy-ion accelerators. The leading driver for target physics experiments worldwide is the solid-state laser, and in particular the Nd:glass laser. The Nd:glass laser was the first driver to deliver the power density and irradiance that ICF required, around 10²⁰ W/m², and it has remained in the forefront because of its high performance, reliable technology, and ease of maintenance. In addition, new Nd:glass technology, replacing flash lamp pumping with higher-efficiency laser diode pumping, has recently become available.
 Two types of ICF targets have been investigated, known as direct-drive and indirect-drive targets. Direct-drive targets absorb the energy of the driver directly into the fuel capsule, whereas indirect-drive targets use a cavity, called a hohlraum, to convert the driver energy to x-rays, which are then absorbed by the fuel capsule. In either case the capsules typically consist of a small plastic or glass sphere filled with tritium and deuterium; more sophisticated targets use multiple layers of different materials with the objective of making the process of ablation and compression more efficient. The indirect-drive method can tolerate greater inhomogeneities in driver illumination, albeit at the expense of efficient delivery of energy to the capsule. In general, indirect-drive targets have lower gains than direct-drive targets, and therefore require higher-efficiency drivers; they are also more complex, but they impose less stringent requirements on the focusing and uniformity of driver energy delivered to the target. Direct-drive targets are conceptually simpler, and at low to medium laser intensity they have higher overall energy-coupling (laser to fuel capsule) efficiency, but at high intensity severe energy losses occur due to laser backscatter, or reflection (see figure 24).
 The concept of indirect drive originated at Livermore around 1975, but most of the details remained secret for many years. The hohlraum used to support the capsule was typically a small metal cylinder a few centimeters across and made of a heavy metal such as gold. Laser beams were focused through holes onto the interior surfaces of the cavity rather than directly onto the capsule. The intensity of the laser energy would evaporate the inner surface of the cavity, producing a dense metal plasma. The laser energy would then be converted into x-rays, which would bounce about inside the cavity, being absorbed and reemitted many times, rather like light in a room where the walls are completely covered by mirrors. These bouncing x-rays would strike the capsule many times and from all directions, effectively smoothing out the irregularities that were present in the original laser beams. Although some energy is lost in the conversion, x-rays can penetrate deeper into the plasma surrounding the heated capsule and couple their energy more effectively than longer-wavelength light, so the implosion proceeds more uniformly.
 The largest current MCF experiment is the Joint European Torus (JET). In 1997, JET produced a peak of 16.1 MW of fusion power (65% of input power), with fusion power of over 10 MW sustained for over 0.5 sec. In 2008, construction began on the experimental reactor ITER, designed to produce several times more fusion power than the power put into the plasma over many minutes. The production of net electrical power from fusion is planned for DEMO, the next generation experiment after ITER.
 In magnetic confinement schemes, like those mentioned above, neutron collisions in the first wall reduce the energy available for tritium production via ⁷Li. For this reason, magnetic fusion reactors typically must have (1) a neutron multiplier and (2) isotopically-enriched lithium. For example, the STARFIRE tokamak has a 5-cm-thick lead zircate neutron multiplier between two 1-cm-thick steel walls and, in addition, the lithium in the breeding blanket must be isotopically enriched to 60% ⁶Li. Since ⁶Li is the minor isotope of lithium, and the tokamak blanket volumes are large, such enrichment could be a major expense.
 The major problems experienced with magnetic confinement are maintaining effective plasma containment at ignition temperatures, finding suitable "low activation" materials for reactor construction, demonstrating secondary systems (including practical tritium extraction), and devising reactor designs that allow the reactor core to be removed when its materials become embrittled by the neutron flux. Practical commercial generators based on the tokamak concept remain far in the future.
 The most critical shielding requirement is the protection of the superconducting coils (SCC) from excess nuclear heating, radiation damage, dose, and neutron fluence. In tokamak reactors, the SCCs operate at cryogenic temperatures (4 K). Each watt of thermal power deposited in the magnets by neutrons and secondary gamma rays requires ~500 watts of refrigeration power to remove the added heat. For reactors designed to produce 1-10 GW of fusion power, an attenuation factor of 10⁵ to 10⁶ is required in the blanket-shield to assure heating rate limits are not exceeded in the coils. In general, an inboard shield thickness of more than a meter is required to achieve this reduction.
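 These shielding numbers can be tied together in a rough power-balance sketch. The 500 W-per-watt refrigeration figure and the 10⁵-10⁶ attenuation range come from the text above; the 3 GW fusion power and ~80% neutron energy fraction are illustrative assumptions:

```python
# Hedged sketch: cryogenic refrigeration load implied by nuclear heating
# of superconducting coils. The 500 W-per-watt figure and the 1e5-1e6
# attenuation range come from the text; the 3 GW fusion power and ~80%
# neutron energy fraction (for a D-T plasma) are illustrative assumptions.

REFRIG_W_PER_W = 500.0  # wall-plug watts per watt removed at 4 K

def refrigeration_load_mw(fusion_power_w, attenuation, neutron_fraction=0.8):
    """Refrigeration power (MW) needed for the neutron/gamma heat that
    leaks through the blanket-shield into the coils."""
    coil_heating_w = fusion_power_w * neutron_fraction / attenuation
    return coil_heating_w * REFRIG_W_PER_W / 1e6

for attenuation in (1e5, 1e6):
    load = refrigeration_load_mw(3e9, attenuation)
    print(f"attenuation {attenuation:.0e}: {load:.1f} MW of refrigeration")
```

Even with a 10⁵ attenuation, coil heating of a few tens of kilowatts turns into roughly 12 MW of refrigeration demand, which is why the meter-thick inboard shield is unavoidable.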
 In inertial ablation, fusion temperatures and densities are attained within a small, BB-sized fuel target that is blasted with a focused laser beam. If more than a few milligrams of fuel were used (and efficiently fused), the explosion could destroy the machine, so controlled thermonuclear fusion using inertial confinement would, in theory, be done using tiny pellets of fuel which explode several times a second. During these explosions the fuel has no confinement at all; it simply flies apart, but it takes a certain length of time to do so, and until then it can fuse. The High Power laser Energy Research facility (HiPER) is undergoing preliminary design for possible construction in the European Union starting around 2010.
 Problems with ICF's present stage of development are associated with the complicated and cumbersome mechanics required for aiming and firing the lasers, which must be aligned to within 50 microns (less than the thickness of a piece of paper) on super-cooled targets flying on optically tracked injection trajectories several meters away; the enormous energy spikes needed to power the lasers; energy recovery; control of neutron damage; and reduction of the firing cycle time. For laser fusion there are two critical elements on the path to achieving practical IFE. First, a reliable, durable laser that can meet the IFE requirements for efficiency needs to be produced. Second, higher-gain targets that can be readily fabricated in large numbers need to be designed. One of the goals for NIF is to reduce the firing cycle time to 5 hours. Previous devices generally had much longer cool-down periods to allow the flashlamps and laser glass to regain their shape after firing-induced thermal expansion, limiting use to one or fewer firings a day. Another major challenge for NIF is to control laser-plasma interaction effects with only a modest (10-20%) energy penalty.
 Most mainline systems (except for liquid-metal-wall ICF reactors, such as HYLIFE) have steel first walls, which are necessary to maintain a good quality vacuum and to endure the intense x-ray and neutron radiation. The first walls of all such reactors will be highly radioactive (2 to 5 billion curies). In addition, these first walls will require replacement every few years because of neutron-induced damage, either from helium embrittlement or from atomic displacements. Because both neutron energy and neutron population are reduced in the steel first walls of these reactors, neutron multipliers (such as lead or beryllium) or isotopic enrichment of Li-6 are usually required to achieve acceptable tritium breeding ratios. The same applies to magnetic fusion reactor chamber walls. For example, the STARFIRE tokamak walls will have a radioactivity of more than 5 billion curies and must be replaced every four or five years.
 The current invention (BSF) has a “compact blanket” design, where the blanket (protective layer of neutron absorbing material) comes before the first wall, instead of after it. This alone is enough to reduce neutron-induced radioactivity in the chamber wall by several orders of magnitude.
 The yields obtainable from ICF targets increase with the amount of driver energy. The rate of this increase is much greater than linear, so that doubling the input (driver energy) produces well over twice as much output (fusion energy). Ideally, to maximize efficiency, the reactor should operate at the highest yield it can withstand, even if this means operating at a lower repetition rate. As a bonus, operating at a lower repetition rate makes it easier to pump out vaporized material between pulses. It should be noted, however, that in general the only way ICF systems can handle high yields is by using compact blankets.
 The conventional scheme for inertial confinement uses the same laser for both compression and heating of the capsule. More recent work has demonstrated that significant savings in the overall energy requirements for laser drivers are possible using a technique known as "fast ignition." Fast ignition has separate stages for compression and for heating: one laser compresses the plasma, and a second, very intense fast-pulsed laser then heats the core of the capsule after it is compressed. At the same time, advances in solid-state lasers have improved the "driver" systems' efficiency by about tenfold, almost making even the large "traditional" (volume ignition) machines practical. The laser-based concept has other advantages as well. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and it should mean that a laser-based device would be much easier to perform maintenance on, such as core replacement. Additionally, the lack of strong magnetic fields allows for a wider variety of low-activation materials, including carbon fiber, which would reduce both the frequency of neutron activation and the rate of core irradiation. In other ways the program has many of the same problems as the tokamak: practical methods of energy removal and tritium recycling need to be demonstrated, and there is always the possibility of new, previously unseen problems arising.
 Despite optimism dating back to the 1950s about the wide-scale harnessing of fusion power, there are still significant barriers standing between current scientific understanding and technological capabilities and the practical realization of fusion as an energy source. Research, while making steady progress, has also continually thrown up new difficulties. Therefore the question of whether or not an economically viable fusion plant is even possible cannot be answered with certainty.
 But one thing is certain: if BSF performs as expected, it would not merely be better than some of our current energy-producing technologies, it would be far superior to all of them. BSF is not incremental on current technology; it is a giant leap into unexplored territory. It represents a transformational technology that is capable of disrupting the status quo and changing the energy landscape. Transformational energy technologies have the potential to create new paradigms in how energy is produced, transmitted, used, and stored. The world needs transformational energy-related technologies to overcome the threats posed by climate change and energy insecurity. These threats arise from our reliance on the traditional use of fossil fuels and the dominant use of oil in transportation.