By Sally Warner
For the first time, surfing is on the Olympic stage.
The surfing event will last three days, held sometime within a window from July 25 to August 1. The reason for this window? Not all waves are created equal, and organizers and surfers will wait for the day with the best waves to hold the competition.
As a recreational surfer and physical oceanographer, I spend a lot of time thinking about waves. But for many people, this year's Olympics will be their first time watching the sport. They might be wondering:
What generates the waves that surfers will ride at the Olympics? Where do the waves come from? And why will the new Olympians be surfing at Tsurigasaki Beach?
Wind Creates Waves
Think for a few seconds about what happens when you throw a stone into a serene pond. It creates a ring of waves – depressions and elevations of the water's surface – that spread out from the center.
Waves in the ocean act similarly by propagating outward from where they are generated. The key difference is that the vast majority of ocean waves are formed by wind. As the wind blows over the surface of the water, some of the energy of the wind is transferred into the water, creating waves. The biggest and most powerful wind-generated waves are produced by strong storms that blow for a sustained period of time over a large area of the ocean.
The waves within a storm are usually messy and chaotic, but as they move away from the storm they grow more organized as faster waves outrun slower waves. This organization of the waves creates “swell," or regularly spaced lines of waves.
Seafloors Break Waves
As waves travel across the ocean, they don't actually bring water with them – a wave from a storm 1,000 miles away isn't made of water from 1,000 miles away. Waves are actually just energy moving from water molecule to water molecule. This energy doesn't just move through the top layer of the ocean, either. Ocean waves extend far below the surface, sometimes as deep as 500 feet. When waves move into shallower water close to shore, they start to “feel" the seafloor as it pulls and drags on them, slowing them down. As the seafloor gets shallower, it pushes upward against the bottoms of the waves, but the energy has to go somewhere, so the waves grow taller.
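The slowing described above can be made quantitative. In the shallow-water limit, where the depth is small compared with the wavelength, linear wave theory (a standard oceanographic result, not stated in the original article) gives the wave speed in terms of depth alone:

```latex
c = \sqrt{g h}, \qquad g \approx 9.8\ \mathrm{m/s^2}, \quad h = \text{water depth}
```

By this estimate, a wave in 10 meters of water moves at about 10 m/s but slows to roughly 4.4 m/s in 2 meters of water. Because energy keeps arriving from behind while the wave itself slows, the wave steepens and grows taller.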
As the waves move toward shore, the water gets ever more shallow and the waves keep growing until, eventually, they become unstable and the wave “breaks" as the crest spills over toward shore.
It is only here, after a wave has traveled perhaps thousands of miles, that the surfing starts. To catch a wave, a surfer paddles toward shore until their speed matches that of the wave. As soon as the wave starts to break, the surfer stands up quickly and maneuvers the surf board with their feet and weight to ride the wave just ahead of the crashing lip.
Waves at the Olympics
The waves that surfers ride at Tsurigasaki Beach for the Olympics will be generated from one of two different types of wind: trade winds and typhoons.
Trade winds consistently blow around 11 to 15 mph (18 to 24 kph) in a band that stretches across the Pacific Ocean from approximately Mexico to the Philippines. These winds generate small “trade swells" that propagate northward toward the east coast of Japan and are usually a few feet tall when they arrive.
But if the surfers and spectators are lucky, a typhoon with wind speeds greater than 74 mph (119 kph) will be supplying powerful waves for the event. Typhoons are what hurricanes are called in much of Asia and are common near Japan and China during summer and fall. Winds in a typhoon are much stronger than the trade winds. Therefore, they generate much bigger waves. Olympic surfers obviously do not want a typhoon to hit Japan. What they want is for a typhoon to form about 500 to 1,500 miles (800 to 2,400 km) to the southeast of Japan and generate big waves that will hit the coast of Japan after traveling across the ocean for one to three days.
Based on the current weather and surf forecasts, it looks like just such a situation will happen. As of July 22, 2021, weather models are predicting that a tropical cyclone or typhoon will almost certainly develop to the southeast of Japan over the next few days, and the winds from this storm will send a powerful swell to the Olympics. Currently, models are predicting that the waves could be 7 feet (2.1 m) at Tsurigasaki Beach, just in time for the surfing event to start.
Once the swell from the trade winds or a far-off typhoon reaches Tsurigasaki Beach, it is the seafloor that will determine where the waves break. Tsurigasaki Beach is a “beach break," which means that the seafloor is sand, rather than rocks or coral reef. There are a series of human-made rock walls, called groins, sticking out perpendicularly from the beach. These have been engineered to prevent sand from moving along the beach and are meant to slow erosion. These groins create shallow sandbars a few hundred yards from shore that incoming waves will break on. This is where the athletes will surf.
When you tune in to watch the surfing competition at the Olympics, marvel at the amazing skills of elite surfers, but remember too the far-off storms and the underwater sandbars that come together to create the beautiful waves.
Portions of this article originally appeared in an article published on Dec. 3, 2020.
Sally Warner is an assistant professor of climate science at Brandeis University.
Disclosure statement: Sally Warner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Reposted with permission from The Conversation.
By Jeremy Dertien, Courtney Larson and Sarah Reed
Seeing animals and birds is one of the main draws of spending time in nature. But as researchers who study conservation, wildlife and human impacts on wild places, we believe it's important to know that you can have major effects on wildlife just by being nearby.
In a recent review of hundreds of studies covering many species, we found that the presence of humans can alter wild animal and bird behavior patterns at much greater distances than most people may think. Small mammals and birds may change their behavior when hikers or birders come within 300 feet (100 meters) – the length of a football field. Large birds like eagles and hawks can be affected when humans are over 1,300 feet (400 meters) away – roughly a quarter of a mile. And large mammals like elk and moose can be affected by humans up to 3,300 feet (1,000 meters) away – more than half a mile.
A hiker about 75 feet from a bull elk in Yellowstone National Park. Jacob W. Frank, NPS/Flickr
Many recent studies and reports have shown that the world is facing a biodiversity crisis. Over the past 50 years, Earth has lost so many species that many scientists believe the planet is experiencing its sixth mass extinction – due mainly to human activities.
Protected areas, from local open spaces to national parks, are vital for conserving plants and animals. They also are places where people like to spend time in nature. We believe that everyone who uses the outdoors should understand and respect this balance between outdoor recreation, sustainable use and conservation.
How Human Presence Affects Wildlife
Pandemic lockdowns in 2020 confined many people indoors – and wildlife responded. In Istanbul, dolphins ventured much closer to shore than usual. Penguins explored quiet South African streets. Nubian ibex grazed on Israeli playgrounds. The fact that animals moved so freely without people present shows how wild species change their behavior in response to human activities.
Decades of research have shown that outdoor recreation, whether it's hiking, cross-country skiing or riding all-terrain vehicles, has negative effects on wildlife. The most obvious signs are behavioral changes: Animals may flee from nearby people, decrease the time they feed and abandon nests or dens.
Other effects are harder to see, but can have serious consequences for animals' health and survival. Wild animals that detect humans can experience physiological changes, such as increased heart rates and elevated levels of stress hormones.
And humans' outdoor activities can degrade habitat that wild species depend on for food, shelter and reproduction. Human voices, off-leash dogs and campsite overuse all have harmful effects that make habitat unusable for many wild species.
Disturbing shorebirds can cause them to stop eating, stop feeding their young or flee their nests, leaving chicks vulnerable.
Effects of Human Presence Vary for Different Species
For our study we examined 330 peer-reviewed articles spanning 38 years to locate thresholds at which recreation activities negatively affected wild animals and birds. The main thresholds we found were related to distances between wildlife and people or trails. But we also found other important factors, including the number of daily park visitors and the decibel levels of people's conversations.
The studies that we reviewed covered over a dozen different types of motorized and nonmotorized recreation. While it might seem that motorized activities would have a bigger impact, some studies have found that dispersed “quiet" activities, such as day hiking, biking and wildlife viewing, can also affect which wild species will use a protected area.
Put another way, many species may be disturbed by humans nearby, even if those people are not using motorboats or all-terrain vehicles. It's harder for animals to detect quiet humans, so there's a better chance that they'll be surprised by a cross-country skier than a snowmobile, for instance. In addition, some species that have been historically hunted are more likely to recognize – and flee from – a person walking than a person in a motorized vehicle.
Generally, larger animals need more distance, though the relationship is clearer for birds than mammals. We found that for birds, as bird size increased, so did the threshold distance. The smallest birds could tolerate humans within 65 feet (20 meters), while the largest birds had thresholds of roughly 2,000 feet (600 meters). Previous research has found a similar relationship. We did not find that this relationship existed as clearly for mammals.
We found little research on impact thresholds for amphibians and reptiles, such as lizards, frogs, turtles and snakes. A growing body of evidence shows that amphibians and reptiles are disturbed and negatively affected by recreation. So far, however, it's unclear whether those effects reflect mainly the distance to people, the number of visitors or other factors.
Human recreation starts to affect wild creatures' behavior and physical state at different distances. Small mammals and birds tolerate closer recreation than do larger birds of prey and large mammals. Sarah Markes, CC BY-ND
How to Reduce Your Impact on Wildlife
While there's much still to learn, we know enough to identify some simple actions people can take to minimize their impacts on wildlife. First, keep your distance. Although some species or individual animals will become used to human presence at close range, many others won't. And it can be hard to tell when you are stressing an animal and potentially endangering both it and yourself.
Second, respect closed areas and stay on trails. For example, in Jackson Hole, Wyoming, wildlife managers seasonally close some backcountry ski areas to protect critical habitat for bighorn sheep and reduce stress on other species like moose, elk and mule deer. And rangers in Maine's Acadia National Park close several trails annually near peregrine falcon nests. This reduces stress to nesting birds and has helped this formerly endangered species recover.
Getting involved with educational or volunteer programs is a great way to learn about wildlife and help maintain undisturbed areas. As our research shows, balancing recreation with conservation means opening some areas to human use and keeping others entirely or mostly undisturbed.
As development fragments wild habitat and climate change forces many species to shift their ranges, movement corridors between protected areas become even more important. Our research suggests that creating recreation-free wildlife corridors of at least 3,300 feet (1,000 meters) wide can enable most species to move between protected areas without disturbance. Seeing wildlife can be part of a fun outdoor experience – but for the animals' sake, you may need binoculars or a zoom lens for your camera.
Jeremy Dertien is a Ph.D. candidate in forestry and environmental conservation at Clemson University.
Courtney Larson is an adjunct assistant professor at the University of Wyoming.
Sarah Reed is affiliate faculty in fish, wildlife and conservation Biology at Colorado State University.
Disclosure statements: Jeremy Dertien receives funding from Sonoma Land Trust. Courtney Larson received funding from the California Department of Fish and Wildlife. Sarah Reed receives funding from Sonoma Land Trust.
Reposted with permission from The Conversation.
Making the switch to solar energy can help you lower or even eliminate your monthly electric bills while reducing your carbon footprint. However, before installing a clean energy system in your home, you must first answer an important question: "How many solar panels do I need?"
To accurately calculate the ideal number of solar panels for your home, you'll need a professional assessment. However, you can estimate the size and cost of the system based on your electricity bills, energy needs and available roof space. This article will tell you how.
Factors That Influence How Many Solar Panels You Need
To determine how many solar panels are needed to power a house, several factors must be considered. For example, if there are two identical homes powered by solar energy in California and New York, with exactly the same energy usage, the California home will need fewer solar panels because the state gets more sunshine.
The following are some of the most important factors to consider when figuring out how many solar panels you need:
Size of Your Home and Available Roof Space
Larger homes tend to consume more electricity, and they generally need more solar panels. However, they also have the extra roof space necessary for larger solar panel installations. There may be exceptions to this rule — for example, a 2,000-square-foot home with new Energy Star appliances may consume less power than a 1,200-square-foot home with older, less-efficient devices.
When it comes to installation, solar panels can be placed on many types of surfaces. However, your roof conditions may limit the number of solar panels your home can handle.
For example, if you have a chimney, rooftop air conditioning unit or skylight, you'll have to place panels around these fixtures. Similarly, roof areas that are covered by shadows are not suitable for panels. Also, most top solar companies will not work on asbestos roofs due to the potential health risks for installers.
Amount of Direct Sunlight in Your Area
Where there is more sunlight available, there is more energy that can be converted into electricity. The yearly output of each solar panel is higher in states like Arizona or New Mexico, which get a larger amount of sunlight than less sunny regions like New England.
The World Bank has created solar radiation maps for over 200 countries and regions, including the U.S. The map below can give you an idea of the sunshine available in your location. Keep in mind that homes in sunnier regions will generally need fewer solar panels.
© 2020 The World Bank, Source: Global Solar Atlas 2.0, Solar resource data: Solargis.
Number of Residents and Amount of Energy You Use
Households with more members normally use more electricity, which means they need more solar panels to meet their energy needs.
Electricity usage is a very important factor, as it determines how much power must be generated by your solar panel system. If your home uses 12,000 kilowatt-hours (kWh) per year and you want to go 100% solar, your system must be capable of generating that amount of power.
Type of Solar Panel and Efficiency Rating
High-efficiency panels can deliver more watts per square foot, which means you need to purchase fewer of them to reach your electricity generation target. There are three main types of solar panels: monocrystalline, polycrystalline and thin-film. In general, monocrystalline panels are the most efficient solar panels, followed closely by polycrystalline panels. Thin-film panels are the least efficient.
How to Estimate the Number of Solar Panels You Need
So, based on these factors, how many solar panels does it take to power a home? To roughly determine how many solar panels you need without a professional assessment, you'll need to figure out two basic things: how much energy you use and how much energy your panels will produce.
According to the latest data from the U.S. Energy Information Administration (EIA), the average American home uses 10,649 kWh of energy per year. However, this varies depending on the state. For example:
- Louisiana homes have the highest average consumption, at 14,787 kWh per year.
- Hawaii homes have the lowest average consumption, at 6,298 kWh per year.
To more closely estimate how much energy you use annually, add up the kWh reported on your last 12 power bills. These numbers will fluctuate based on factors like the size of your home, the number of residents, your electricity consumption habits and the energy efficiency rating of your home devices.
Solar Panel Specific Yield
After you determine how many kWh of electricity your home uses annually, you'll want to figure out how many kWh are produced by each of your solar panels during a year. This will depend on the specific type of solar panel, roof conditions and local peak sunlight hours.
In the solar power industry, a common metric used to estimate system capacity is "specific yield" or "specific production." This can be defined as the annual kWh of energy produced for each kilowatt of solar capacity installed. Specific yield has much to do with the amount of sunlight available in your location.
You can get a better idea of the specific yield that can be achieved in your location by checking reliable sources like the World Bank solar maps or the solar radiation database from the National Renewable Energy Laboratory.
To estimate how many kilowatts are needed to run a house, divide your annual kWh consumption by the specific yield per kilowatt of solar capacity. For example, if your home needs 15,000 kWh of energy per year, and solar panels in your location have a specific yield of 1,500 kWh per kilowatt, you will need a system size of around 10 kilowatts.
Paradise Energy Solutions has also come up with a general formula to roughly ballpark the solar panel system size you need. You can simply divide your annual kWh by 1,200 and you will get the kilowatts of solar capacity needed. So, if the energy consumption reported on your last 12 power bills adds up to 24,000 kWh, you'll need a 20 kW system (24,000 / 1,200 = 20).
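Both estimates above boil down to the same division: annual consumption over specific yield, with 1,200 kWh per kW as the generic ballpark divisor. A minimal Python sketch (the function name `system_size_kw` is illustrative, not from any library):

```python
def system_size_kw(annual_kwh: float, specific_yield: float = 1200.0) -> float:
    """Estimate required solar system size in kilowatts.

    specific_yield is the annual kWh produced per kW of installed
    capacity. 1,200 kWh/kW is a rough U.S.-wide ballpark; sunnier
    locations can exceed 1,500 kWh/kW.
    """
    return annual_kwh / specific_yield

# Examples from the article:
print(system_size_kw(24000))         # 20.0 kW using the 1,200 rule of thumb
print(system_size_kw(15000, 1500))   # 10.0 kW with a location-specific yield
```

A location-specific yield from a source like the NREL solar radiation database will always beat the generic 1,200 divisor, so treat the default as a first pass only.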
So, How Many Solar Panels Do I Need?
Once you know the system size you need, you can check a panel's wattage to figure out how many panels to purchase for your solar array. Multiply your system size by 1,000 to obtain watts, then divide this by the wattage of each individual solar panel.
Most of the best solar panels on the market have an output of around 330W to 360W each. The output of less efficient panels can be as low as 250W.
So, if you need a 10-kW solar installation and you're buying solar panels with an output of 340W, you'll need 30 panels (10,000 W / 340 W = 29.4, rounded up to 30).
If you use lower-efficiency 250-watt solar panels instead, you'll need 40 of them (10,000 W / 250 W = 40).
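Since you can't install a fraction of a panel, the division above always rounds up. A short sketch of that calculation (the helper name `panels_needed` is hypothetical):

```python
import math

def panels_needed(system_kw: float, panel_watts: float) -> int:
    """Convert a system size in kW into a whole number of panels."""
    watts_needed = system_kw * 1000  # kW -> W
    return math.ceil(watts_needed / panel_watts)

# Examples from the article:
print(panels_needed(10, 340))  # 30 (10,000 W / 340 W = 29.4, rounded up)
print(panels_needed(10, 250))  # 40
```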
Keep in mind that, although the cost of solar panels is lower if you choose a lower-efficiency model over a pricier high-efficiency one, the total amount you pay for your solar energy system may come out to be the same or higher because you'll have to buy more panels.
How Much Roof Space Do You Need for a Home Solar System?
After you estimate how many solar panels your home needs, the next step is calculating the roof area required to install them. The exact dimensions may change slightly depending on the manufacturer, but a typical residential solar panel measures 65 inches by 39 inches, or 17.6 square feet. At that size, you would need 528 square feet of roof space to install 30 panels, and 704 square feet to install 40.
In addition to having the required space for solar panels, you'll also need a roof structure that supports their weight. A home solar panel weighs around 20 kilograms (44 pounds), which means that 30 of them will add around 600 kilograms (1,323 pounds) to your roof.
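Both the roof-space and weight figures above scale linearly with panel count. A quick sketch using the article's typical per-panel numbers (actual dimensions and mass vary by manufacturer, so treat these constants as assumptions):

```python
PANEL_AREA_SQFT = 17.6  # typical 65 in x 39 in residential panel
PANEL_MASS_KG = 20.0    # approximate mass per panel

def roof_requirements(num_panels: int) -> tuple:
    """Return (roof area in sq ft, added roof load in kg) for an array."""
    area = round(num_panels * PANEL_AREA_SQFT, 1)
    mass = round(num_panels * PANEL_MASS_KG, 1)
    return area, mass

print(roof_requirements(30))  # (528.0, 600.0)
print(roof_requirements(40))  # (704.0, 800.0)
```

This only accounts for raw panel footprint; real layouts also lose usable area to chimneys, skylights, shaded sections and setback requirements, as noted earlier.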
You will notice that some solar panels are described as residential, while others are described as commercial. Residential panels have 60 individual solar cells, while commercial panels have 72 cells, but both types will work in any building. Here are a few key differences:
- Commercial solar panels produce around 20% more energy, thanks to their extra cells.
- Commercial panels are also more expensive, as well as 20% larger and heavier.
- Residential 60-cell solar panels are easier to handle in home installations, which saves on labor, and their smaller size helps when roof dimensions are limited.
Some of the latest solar panel designs have half-cells with a higher efficiency, which means they have 120 cells instead of 60 (or 144 instead of 72). However, this doesn't change the dimensions of the panels.
Conclusion: Are Solar Panels Worth it for Your Home?
Solar panels produce no carbon emissions while operating. However, the EIA estimates fossil fuels still produce around 60% of the electricity delivered by U.S. power grids.
Although the initial investment in solar panels is steep, renewable energy systems make sense financially for many homeowners. According to the Department of Energy, they have a typical payback period of about 10 years, while their rated service life is up to 30 years. After recovering your initial investment, you will have a source of clean and free electricity for about two decades.
Plus, even if you have a large home or find you need more solar panels than you initially thought you would, keep in mind that there are both federal and local tax credits, rebates and other incentives to help you save on your solar power system.
California Is Planning Floating Wind Farms Offshore to Boost Its Power Supply – Here's How They Work
By Matthew Lackner
Northern California has some of the strongest offshore winds in the U.S., with immense potential to produce clean energy. But it has a problem. Its continental shelf drops off quickly, making building traditional wind turbines directly on the seafloor costly if not impossible.
Once water gets more than about 200 feet deep – roughly the height of an 18-story building – these “monopile" structures are pretty much out of the question.
A solution has emerged that's being tested in several locations around the world: making wind turbines that float. In fact, in California, where drought is putting pressure on the hydropower supply and fires have threatened electricity imports from the Pacific Northwest, the state is moving forward on plans to develop the nation's first floating offshore wind farms as we speak.
So how do they work?
Three Main Ways to Float a Turbine
A floating wind turbine works just like other wind turbines – wind pushes on the blades, causing the rotor to turn, which drives a generator that creates electricity. But instead of having its tower embedded directly into the ground or the sea floor, a floating wind turbine sits on a platform with mooring lines, such as chains or ropes, that connect to anchors in the seabed below.
These mooring lines hold the turbine in place against the wind and keep it connected to the cable that sends its electricity back to shore.
Most of the stability is provided by the floating platform itself. The trick is to design the platform so the turbine doesn't tip too far in strong winds or storms.
Three of the common types of floating wind turbine platform. Josh Bauer / NREL
There are three main types of platforms:
- A spar buoy platform is a long hollow cylinder that extends downwards from the turbine tower. It floats vertically in deep water, weighted with ballast in the bottom of the cylinder to lower its center of gravity. It's then anchored in place, but with slack lines that allow it to move with the water to avoid damage. Spar buoys have been used by the oil and gas industry for years for offshore operations.
- Semi-submersible platforms have large floating hulls that spread out from the tower, also anchored to prevent drifting. Designers have been experimenting with multiple turbines on some of these hulls.
- Tension leg platforms have smaller platforms with taut lines running straight to the floor below. These are lighter but more vulnerable to earthquakes or tsunamis because they rely more on the mooring lines and anchors for stability.
Each platform must support the weight of the turbine and remain stable while the turbine operates. It can do this in part because the hollow platform, often made of large steel or concrete structures, provides buoyancy to support the turbine. Since some can be fully assembled in port and towed out for installation, they might be far cheaper than fixed-bottom structures, which require specialty boats for installation on site.
The University of Maine has been experimenting with a small floating wind turbine, about one-eighth scale, on a semi-submersible platform. It plans to launch a full-scale version with corporate partners in 2023. AP Photo / Robert F. Bukaty
Floating platforms can support wind turbines that can produce 10 megawatts or more of power – that's similar in size to other offshore wind turbines and several times larger than the capacity of a typical onshore wind turbine you might see in a field.
Why Do We Need Floating Turbines?
Some of the strongest wind resources are away from shore in locations with hundreds of feet of water below, such as off the U.S. West Coast, the Great Lakes, the Mediterranean Sea, and the coast of Japan.
In May 2021, Interior Secretary Deb Haaland and California Gov. Gavin Newsom announced plans to open up parts of the West Coast, off central California's Morro Bay and near the Oregon state line, for offshore wind power. The water there gets deep quickly, so any wind farm that is even a few miles from shore will require floating turbines. Newsom said the area could initially provide 4.6 gigawatts of clean energy, enough to power 1.6 million homes. That's more than 100 times the total U.S. offshore wind power today.
Globally, several full-scale demonstration projects are already operating in Europe and Asia. The Hywind Scotland project became the first commercial-scale offshore floating wind farm in 2017, with five 6-megawatt turbines supported by spar buoys designed by the Norwegian energy company Equinor.
While floating offshore wind farms are becoming a commercial technology, there are still technical challenges to solve. The platform motion may cause higher forces on the blades and tower, and more complicated and unsteady aerodynamics. Also, as water depths get very deep, the cost of the mooring lines, anchors, and electrical cabling may become very high, so cheaper but still reliable technologies will be needed.
Expect to see more offshore turbines supported by floating structures in the near future.
Matthew Lackner is a professor of mechanical engineering at the University of Massachusetts Amherst.
Disclosure: Matthew Lackner receives funding from the U.S. Department of Energy.
Reposted with permission from The Conversation.
By Rodney Holcomb and Danielle Bellmer
How would you like to dig into a "recycled" snack? Or take a swig of juice with "reprocessed" ingredients made from other food byproducts? Without the right marketing, these don't sound like the most appetizing options.
Enter "upcycling." That's the relatively recent term for the age-old concept of using low-valued foods or food processing byproducts to generate new food products. Time-honored examples of this concept include sausages made from meat scraps and jams or jellies made from overripe fruit. In many cases, this waste would have otherwise been used as animal feed or sent to the compost pile.
The Upcycled Food Association defines upcycled foods as those that "use ingredients that otherwise would not have gone to human consumption, are procured and produced using verifiable supply chains, and have a positive impact on the environment." An official definition may allow manufacturers to market to a target audience and encourage consumers and food processors to consider upcycled products. The Association launched a new Upcycled Certification Standard in 2021. Soon enough you may notice an upcycled label on items at the grocery store.
Food waste is a monumental problem, and this nascent trend, with a buzzy new name designed to appeal to consumers, could help. As an economist and a food engineer, we've worked with food companies to minimize waste and find markets for underutilized or otherwise trashed food items. Here's how upcycling works.
Massive Amounts of Food Get Wasted
Globally, more than one-third of all current food production will be lost or wasted somewhere between the farm or ranch and the consumer's garbage can. Food "losses" may be due to improper handling or storage conditions on the farm or in the food distribution process, whereas food "waste" often results from limited retail shelf life or consumers simply not making use of perishable products before they spoil in the fridge.
Worldwide annual loss estimates for highly perishable crops, such as fruits and vegetables, exceed 20%, with certain leafy greens and tropical fruits exceeding 40%. In the U.S. alone, estimates of food loss and waste in recent years have ranged from US$200 billion to $300 billion. Both the World Trade Organization and the U.N. Food and Agriculture Organization have increased emphasis on preventing food insecurity by minimizing food loss and food waste.
In addition to the financial impact, food waste also contributes to environmental problems. The FAO estimates that about 8% of the world's total greenhouse gas emissions can be traced to the carbon footprint of food loss and waste. Landfills generate greenhouse gas emissions, and recent U.S. Environmental Protection Agency estimates indicate food waste is the single largest contributor to landfill volume, making up more than a fifth of what ends up at the dump.
In addition, when food is wasted, all of the natural resources used to produce the food, including water, energy and land resources, are wasted.
Peels, Shells and Past-Their-Prime Ingredients
From an economics standpoint, finding market outlets for otherwise wasted products makes sense, and the food industry recognizes that fact. Much of what's left over as waste once a food is processed contains valuable nutritional components, even though it's currently only used for animal feed or just thrown away. Fortunately, current laws require animal feed to be treated the same as human food, so many waste streams are already handled using sanitary practices and are safe for human consumption.
A number of economically viable upcycled products are currently on the market. Fruit pomace – all the fibrous bits left after fruit juice production – bolsters the flavor and nutritional content of snack foods. Wheat middlings – everything left after milling that's not flour – are added to breakfast cereals to increase the content of vitamins, minerals and fiber. Whey protein from cheese production increases the protein content of health bars and protein shakes.
There's flour made from the pulp byproducts of soybean and almond milk production, which is sold as baking mixes or upcycled flours. There's craft beer that uses surplus unsold bread as the fermentation substrate. One group collects and distributes second-tier produce before it goes bad. Other examples include pecan shell flour, dried vegetable peels as soup ingredients, and powders made from waste fruits and vegetables that can be added to beverages and snack bars.
With our colleagues here at the Robert M. Kerr Food and Agricultural Products Center at Oklahoma State, we've had the opportunity to work on a number of products that would be considered upcycled foods.
Ideas for new upcycled products come from researchers within our facility who identify a waste stream with untapped potential, or they originate with an entrepreneur who has a product idea. Either way, interdisciplinary teams here brainstorm ideas, create experimental prototypes and eventually conduct sensory evaluations – addressing the look, taste, aroma or texture of a potential new product.
One recent example is the creation of a new snack chip from brewer's spent grain, the solid waste generated in the beer-brewing industry. Another current project is the creation of Kpomo. Also known as Ponmo or Kanda in Nigeria, where it's traditionally popular, this food is made from beef hide that's been cleaned and precooked.
With any food product, consumer acceptance depends largely on taste, convenience and price. Moving forward, new products made from waste resources will still need to make economic sense for food processors. But research has shown that the term "upcycled" – a proxy for environmental sustainability on a food label – resonates with both millennials and baby boomers and can make them more likely to buy these products. Foods labeled "upcycled" await your shopping dollars now.
Rodney Holcomb is a professor of agricultural economics at Oklahoma State University.
Danielle Bellmer is a professor of biosystems and agricultural engineering at Oklahoma State University.
Disclosure statement: Rodney Holcomb receives funding from USDA to examine local food production and marketing. Danielle Bellmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Reposted with permission from The Conversation.
By Hom Dhakal
Plastics are very useful materials. They've contributed significant benefits to modern society. But the unprecedented amount of plastics produced over the past few decades has caused serious environmental pollution.
Packaging alone accounted for 46% of the 340 million tonnes of plastic waste generated globally in 2018. Although plastic recycling has increased significantly in recent years, most plastics used today are single-use, non-recyclable and non-biodegradable.
Demand for food is expected to double by 2050. This will probably increase the amount of waste from food and its plastic packaging, putting poorer countries under tremendous pressure to manage waste disposal.
To tackle the issues of environmental damage, we need more sustainable materials that we can recycle or that biodegrade. There's been a surge in plant-based plastics, but many of these can only be composted using industrial processes, not by people at home.
Now researchers at the University of Cambridge have found a way to make plastic from abundant and sustainable plant proteins. Inspired by spider silk, the film works in a way similar to other plastics, but it can be composted at home.
Types of Plastic
Synthetic and non-biodegradable plastics commonly used for food packaging include polyethylene terephthalate (PET), polystyrene (PS) and crystallized polyethylene terephthalate (CPET).
There are some processes in place for disposing of PET – namely mechanical and chemical recycling techniques – but most plastic around the world is still sent to landfills. PET can take hundreds of years to decompose and it's non-biodegradable. This means it can continue to pollute the ecosystem for many years.
Making plastic requires lots of energy. Then, when plastics are thrown away, they cause environmental damage, including greenhouse gas emissions that contribute to global warming, as well as harm to marine life.
On the other hand, there are some biodegradable plant-based plastics, such as polylactic acid (PLA), polybutylene succinate (PBS), polycaprolactone (PCL) and polyhydroxyalkanoates (PHAs), which are friendlier to the environment than non-renewable polymers.
PLA polymers are produced from renewable resources and have the advantage of being recyclable and compostable. This makes PLA a much more environmentally friendly material than PET, PS and CPET. However, its long-term durability and stability are lower than those of its synthetic counterparts.
The New Material
The new research has investigated the potential use of a biodegradable and renewable polymer, such as soy protein, to make a new material that could be an alternative to other plant-based plastics.
The researchers created a plant-based plastic and added nanoparticles – particles smaller than one millionth of a metre. This meant they could control the structure of the material to create flexible films, with a material that looks like spider silk on a molecular level. They've called it a “vegan spider silk."
The team used various techniques, including scanning electron microscopy and transmission electron microscopy to study the structure of the film.
They analyzed important properties, such as barrier properties and moisture absorption, and found that the nanoparticles significantly improved the film's strength, long-term durability and stability.
One of the most exciting parts of this study is that the new plastic is made from sustainable materials using a more environmentally friendly manufacturing process, which can save a significant amount of energy.
This new material could help solve some of the problems plastic pollution has caused the environment – by introducing a material made from renewable sources with enhanced properties suitable for many engineering applications, including packaging.
The study could help to scale up the production of sustainable packaging materials, using natural resources and less energy consumption, while reducing the amount of plastic going into landfill.
Disclosure statement: Hom Dhakal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Reposted with permission from The Conversation.
By Adam Uliana
Most people on Earth get fresh water from lakes and rivers. But these account for only 0.007% of the world's water. As the human population has grown, so has demand for fresh water. Now, two out of every three people in the world face severe water scarcity at least one month a year.
Other water sources – like seawater and wastewater – could be used to meet growing water needs. But these water sources are full of salt and usually contain such contaminants as toxic metals. Scientists and engineers have developed methods to remove salts and toxins from water – processes called desalination. But existing options are expensive and energy-intensive, especially because they require a lot of steps. Current desalination techniques also create a lot of waste – around half of the water fed into some desalination plants is lost as wastewater containing all of the removed salts and toxins.
I am a doctoral student in chemical and biomolecular engineering and part of a team that recently created a new water-purification method that we hope can make desalination more efficient, the waste easier to manage and the size of water treatment plants smaller. This technology features a new type of filter that can target and capture toxic metals while removing salt from water at the same time.
Membranes filled with small particles that can capture specific toxic metals can clean water in one step. Adam Uliana / CC BY-ND
Designing an All-in-One Filter
To build a single filter that could both capture metals and remove salt, my colleagues and I first needed a material that could remove many different contaminants – mostly heavy metals – from water. To do this, we turned to tiny, absorbent particles called porous aromatic frameworks. These particles are designed to selectively capture individual contaminants. For example, one type of absorbent particle can catch only mercury. Other types specifically remove only copper, iron, or boron. I then embedded these four different types of particles into thin plastic membranes, essentially creating custom filters that would capture contaminants according to the type of particle I put in the membrane.
A colleague and I then placed these membrane filters into an electrodialysis water purifier. Electrodialysis is a method that uses electricity to pull salts and toxins out of water, across a membrane and into a separate waste stream. This waste – often called brine – can become toxic and expensive to dispose in existing desalination processes.
This new approach to desalination – called ion-capture electrodialysis – uses thin membranes and electricity to capture toxic metals as they are pulled from water along with salts. Adam Uliana / CC BY-ND
In my team's modified process, called ion-capture electrodialysis, our hope was that the membranes packed with the tiny metal-absorbing particles would capture toxic metals instead of allowing them to move into the brine. This would achieve three benefits at the same time in an energy-efficient manner: Salts and metals would be removed from the water; the toxic metals would be captured in a small, easily disposable membrane – or even potentially be reused; and the salty waste stream would be nontoxic.
How Effective is Ion-Capture Electrodialysis?
Once our team had successfully made these membranes, we needed to test them. The first test I ran used membrane filters embedded with mercury-capturing absorbents to purify water from three sources that contained both mercury and salts: groundwater, brackish water and industrial wastewater. To our team's excitement, the membranes captured all the mercury in every test. Additionally, the membranes were also great at getting rid of salt – over 97% was removed from the dirty water. After just one pass through our new electrodialysis machine, the water was perfectly drinkable. Importantly, further experiments showed that no mercury could pass through the filter until nearly all the absorbent particles in the filter were used up.
My colleagues and I then needed to see whether our ion-capture electrodialysis process would work on other common harmful metals. I tested three membrane filters that contained absorbents for copper, iron or boron. Every filter was a success. Each filter captured all of the target contaminants without any detectable amount passing into the brine, while simultaneously removing over 96% of salts from the water, purifying the water to usable conditions.
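To put those salt-removal percentages in perspective, here's a rough back-of-the-envelope calculation. The feed concentration and the potability threshold below are illustrative assumptions, not values from the study:

```python
# Rough illustration: what 97% salt removal means for a brackish feed.
# Assumed values (not from the study): brackish water at ~5 g/L total
# dissolved salts; common potability guidance caps dissolved solids
# near 0.5 g/L.
feed_salts_g_per_l = 5.0
removal_fraction = 0.97

remaining = feed_salts_g_per_l * (1 - removal_fraction)
print(f"Remaining salts after one pass: {remaining:.2f} g/L")
print("Under the assumed 0.5 g/L potability guideline:", remaining <= 0.5)
```

Under these assumed numbers, a single pass brings the water well below the drinking-water threshold, which is consistent with the article's "one pass" claim.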
Our results show that our new water purification method can selectively capture many common contaminants while also removing salt from water. But there are still other technological challenges to figure out.
First, the highly selective absorbent particles – the porous aromatic frameworks – that my colleagues and I mixed into the membrane are too expensive to put into mass-produced filters. It is probably possible to place cheaper – but lower-quality – absorbents into the filters instead, but this might worsen the water purification performance.
Second, engineers like me still also need to test ion-capture electrodialysis on scales larger than those used in the laboratory. Issues can often come up in new technologies during this transition from the laboratory into industry.
Finally, water treatment plant engineers would need to come up with a way to pause the process right before the membrane absorbents are maxed out. Otherwise, the toxic contaminants would start to leak through the filter into the brine wastewater. The engineers could then restart the process after replacing the filter or after removing the metals from the filter and collecting them as separate waste.
We hope our work will lead to new methods that can efficiently and effectively purify water sources that are more abundant – yet more contaminated – than fresh water. The work really is worth it. After all, the effects of water scarcity are enormous, both for individual communities and for the world.
Disclosure statement: Adam Uliana receives funding from the National Science Foundation through a Graduate Research Fellowship and is an Affiliate of the Lawrence Berkeley National Laboratory. The University of California, Berkeley has applied for two patents on some of the technology discussed here; Adam Uliana is listed as co-inventor on both.
Reposted with permission from The Conversation.
By Michael Childers
If you're headed out into the wild this summer, you may need to jump online and book a reservation before you go. For the second consecutive year, reservations are required to visit Yosemite, Rocky Mountain and Glacier national parks. Other popular sites, including Maine's Acadia National Park, encourage visitors to buy entrance passes in advance.
Limiting visitors has two purposes: reducing COVID-19 risks and allowing some parks to recover from recent wildfires. Rocky Mountain will allow 75% to 85% of its normal visitor capacity. Yosemite will again restrict the number of vehicles allowed in; last year, it hosted half of its average 4 million annual visitors.
Nationwide, some U.S. parks were emptier than normal during the pandemic, while Yellowstone and others were near capacity. But the pandemic likely was a temporary pause in a rising tide of visitors.
America's national parks face a popularity crisis. From 2010 to 2019, the number of national park visitors spiked from 281 million to 327 million, largely driven by social media, advertising and increasing foreign tourism.
This rapid growth is generating pollution and putting wildlife at risk to a degree that threatens the future of the park system. And with Americans eager to get back out into the world, the summer of 2021 promises to be one of the busiest domestic travel seasons in recent history. Reservations and other policies to manage visitor numbers could become features at many of the most popular parks.
Protecting Treasured Lands
In my work, I've explored the history of national parks and the factors that drive people to seek experiences outdoors. I've also studied the impacts of national park visitation and ways to keep the public from loving national parks to death.
Much of that research has focused on California's Yosemite National Park, which contains nearly 1,200 square miles of wilderness, including iconic granite rock formations, deep valleys, waterfalls and ancient giant sequoias.
Its creation dates to the Civil War. In 1864, with this landscape threatened by an influx of settlers and visitors, Abraham Lincoln signed the Yosemite Act, which ceded the region to California for “public use, resort, and recreation." This step set a precedent that parks were for everyone's benefit and enjoyment. Congress made Yosemite a national park in 1890.
Influenced by naturalist John Muir, President Theodore Roosevelt established five new parks in the early 1900s, along with 16 national monuments that included the Grand Canyon. Roosevelt wanted to protect these natural treasures from hunting, mining, logging and other exploitation.
President Theodore Roosevelt arriving at Yellowstone National Park in 1903. Library of Congress / CC BY-ND
To coordinate management, Congress established the National Park Service and the National Park System in 1916. The National Park Service Organic Act directs the agency to protect the parks' wildlife and natural and cultural heritage “in such manner and by such means as will leave them unimpaired for the enjoyment of future generations" – a mission that is becoming increasingly difficult today.
Loving the Parks to Death
Americans fell in love with their parks – and several waves of overpopularity nearly destroyed the very experiences that drew people there.
The advent of automobile tourism in the 1920s opened national parks to hundreds of thousands of new visitors, who overwhelmed limited, aging roads, trails, restrooms, water treatment systems and visitor facilities. Ironically, relief came during the Great Depression. The New Deal funded massive construction projects in the parks, including campground comfort stations, museums and other structures. Hundreds of miles of roads and trails opened wild backcountry.
Between 1929 and 1941, the number of annual park visitors grew from 3 million to 20 million. This increasing torrent slowed only when the U.S. entered World War II.
Dedication of Going-to-the-Sun Road in Glacier National Park, Montana, on July 15, 1933. George A. Grant / NPS / Flickr
In the postwar boom, people returned en masse. The National Park Service launched “Mission 66," another flurry of construction that again expanded capacity.
Conservationists and others condemned the development, alarmed by its environmental impacts and the threat of overcrowding. By the mid-1960s, total yearly park visitation exceeded 100 million.
Riding the Tourism Wave
Today the national park system has grown to comprise 63 national parks, with ever more visitors, plus 360 sites with other designations, such as national seashores, monuments and battlefields. Some of these other sites, such as Cape Cod National Seashore in Massachusetts and Gettysburg National Military Park in Pennsylvania, also attract millions of visitors yearly.
In 2019, a record-setting 327 million people visited the national parks, with the heaviest impacts on parks located near cities, like Rocky Mountain National Park outside Denver. This crowding spotlighted problems that park officials had been raising concerns about for years: The parks are underfunded, overrun, overbuilt and threatened by air and water pollution in violation of the laws and executive orders that protected them.
Park horror stories have grown common in recent years. They include miles-long traffic jams in Yellowstone, three-hour waits to enter Yosemite, trails littered with trash and confrontations between tourists and wildlife.
In 2020, Congress passed the Great American Outdoors Act, which will provide up to US$1.9 billion a year for five years to address the park system's nearly $12 billion maintenance backlog. This long list of postponed projects reflects Congress' reluctance to adequately fund the national park system over many years.
But as the New Deal and Mission 66 demonstrated, increased infrastructure spending often boosts visitation. The Great American Outdoors Act doesn't cover conservation efforts or significant personnel needs, which will require increased federal funding. Many repairs are needed throughout the parks, but the system's future sustainability relies more on staffing than infrastructure.
A cruise ship approaches Margerie Glacier in Alaska's Glacier Bay National Park in 2018. NPS / Flickr
And neither more money nor additional park rangers will solve the overcrowding crisis. I believe the most popular national parks need a reservation system to save these protected lands from further damage.
This won't be a popular solution, since it contradicts the founding premise that national parks were built for public benefit and enjoyment. Critics have already created a petition opposing Rocky Mountain National Park's timed entry permits as unnecessary, unfair, undemocratic and discriminatory.
But the parks' unrelenting popularity is making it impossible to preserve them “unimpaired." In my view, crowd control has become essential in the most popular parks.
While there is only one Yosemite Valley, the national park system offers many less crowded destinations. Sites such as Hovenweep National Monument in Colorado and Utah and the Brown v. Board of Education National Historic Site in Kansas deserve attention for their natural beauty and the depth they add to Americans' shared heritage.
Michael Childers is an assistant professor of History at Colorado State University.
Disclosure statement: Michael Childers does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Reposted with permission from The Conversation.
By Athena Masson
Every coastline in the North Atlantic is vulnerable to tropical storms, but some areas are more susceptible to hurricane destruction than others.
To understand why, as the region heads into what's forecast to be another busy hurricane season, let's look more closely at how tropical storms form and what turns them into destructive monsters.
Ingredients of a Hurricane
Three key ingredients are needed for a hurricane to form: warm sea surface water that's at least about 80 degrees Fahrenheit (26.5 C), a thick layer of moisture extending from the sea surface to roughly 20,000 feet and minimal vertical wind shear so thunderstorms can grow vertically without interruption.
These prime conditions are often found in the tropical waters off the west coast of Africa.
Hurricanes can also form in the Gulf of Mexico and the Caribbean, but the ones that start close to Africa have thousands of miles of warm water ahead that they can draw energy from as they travel. That energy can help them grow into powerful hurricanes.
Wind currents set most tropical storms on a course westward from Africa toward the Caribbean, Florida and the Gulf of Mexico. Some drift northward into the midlatitudes, where the prevailing winds shift from west to east and cause them to curve back out into the Atlantic.
Others encounter cooler ocean temperatures that rob them of fuel, or high wind shear that breaks them apart. That's why tropical cyclones rarely hit northern states or Europe, though it does happen.
Time of Season Also Influences Hurricane Paths
Early in the season, in June and July, sea surface temperatures are still warming and atmospheric wind shear slowly decreases across the open Atlantic. Most early-season hurricanes develop in a small area of the Caribbean and Gulf of Mexico where prime conditions begin early.
They typically form close to land, so coastal residents don't have much time to prepare, but these storms also don't have ideal conditions to gain strength. Texas, Louisiana and Mississippi, as well as Central America, are more likely to see hurricane strikes early in the season, as the trade winds favor an east-to-west motion.
As surface waters gain heat over the summer, hurricane frequency and severity begin to increase, especially into the peak hurricane months of August through October.
Toward the end of the season, trade winds begin to shift from west to east, ocean temperatures start to fall, and cold fronts can help divert storms away from the western Gulf and push them toward the Florida Panhandle.
Shape of the Seafloor Matters for Destructiveness
The shape of the seafloor can also play a role in how destructive hurricanes become.
Hurricane strength is currently measured solely on a storm's maximum sustained wind speeds. But hurricanes also displace ocean water, creating a surge of high water that their winds push toward shore ahead of the storm.
This storm surge is often the greatest threat to life and property from a hurricane, accounting for about 49% of all direct fatalities between 1963 and 2012. Hurricane Katrina (2005) is a prime example: An estimated 1,500 people lost their lives when Katrina hit New Orleans, many of them in the storm surge flooding.
If the continental shelf where the hurricane hits is shallow and slopes gently, it generally produces a greater storm surge than a steeper shelf.
As a result, a major hurricane hitting the Texas and Louisiana Gulf Coast – which has a very wide and shallow continental shelf – may produce a 20-foot storm surge. However, the same hurricane might produce only a 10-foot storm surge along the Atlantic coastline, where the continental shelf drops off very quickly.
Where Are the Hurricane Hot Spots?
A few years ago, the National Oceanic and Atmospheric Administration analyzed the probability of U.S. coastlines' being hit by a tropical storm based on storm hits between 1944 and 1999.
It found that New Orleans had about a 40% chance each year of a tropical storm strike. The chances were higher for Miami and Cape Hatteras, North Carolina, both at 48%. San Juan, Puerto Rico, which has seen some devastating storms in recent years, was at 42%.
Hurricanes, which have sustained wind speeds of at least 74 miles per hour, were also more frequent in the three U.S. locations. Miami and Cape Hatteras were found to have a 16% chance of a direct hit by a hurricane in any given year, and New Orleans' chance was estimated at 12%.
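Those annual probabilities compound quickly over time. Here's a sketch of the arithmetic, under the simplifying assumption that each hurricane season is independent of the last (real seasons only approximate this):

```python
# Probability of at least one direct hurricane hit over n years,
# assuming each year is an independent trial with annual probability p.
def chance_of_hit(p_annual: float, years: int) -> float:
    return 1 - (1 - p_annual) ** years

# Annual direct-hit probabilities from the NOAA analysis cited above.
for city, p in [("Miami", 0.16), ("Cape Hatteras", 0.16), ("New Orleans", 0.12)]:
    print(f"{city}: {chance_of_hit(p, 10):.0%} chance of at least one hit in a decade")
```

Even a "modest" 16% annual chance implies better than an 80% chance of at least one direct hit over ten years, which is why long-time residents of these cities expect to ride out a hurricane eventually.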
Each of these locations is vulnerable to a hurricane because of its location and its shape: North Carolina and Florida “stick out like a sore thumb" and are often grazed by hurricanes that curve up the east coast of the U.S.
Climate Change Changes the Risk
As sea surface temperatures rise with the warming of the planet, more areas outside of these usual hurricane regions may see more tropical storms.
I analyzed tropical cyclones in the North Atlantic that made landfall from 1972 to 2019 to look for changes over the past half-century.
During the first six years of that period, 1972-77, the Atlantic averaged four direct hits per year. Of those, 75% were in the usual hurricane-prone areas, such as the Southern United States, the Caribbean and Central America. Six storms made landfall elsewhere, including New England, Canada and the Azores.
By 2014-19, the Atlantic averaged 7.6 direct hits per year. While the U.S. took the majority of those hits, Europe has been showing a steady increase in cyclones making landfall. Major hurricanes – those with sustained wind speeds of 111 miles per hour and above – are also more common than they were in the 1970s and '80s.
While southern coastal locations of the United States may be the most vulnerable to tropical cyclone impacts, it is important to understand that a devastating cyclone can hit anywhere along the Atlantic and Gulf coasts.
The National Hurricane Center is forecasting another busy season in 2021, though it is not expected to be as extreme as 2020's record 30 named storms. Even if an area hasn't experienced a hurricane in several years, residents are advised to prepare for the season as if their area will take a hit – just in case.
Disclosure statement: Athena Masson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Reposted with permission from The Conversation.
By Shannon Schmoll
The first lunar eclipse of 2021 will happen during the early hours of May 26. And it will be an especially super lunar event, combining a supermoon, a lunar eclipse and a red blood moon all at once. So what does this all mean?
What's a Supermoon?
A supermoon occurs when a full or new moon coincides with the moon's closest approach to Earth.
The moon's orbit around Earth is not perfectly circular. This means the moon's distance from Earth varies as it goes around the planet. The closest point in the orbit, called the perigee, is roughly 28,000 miles closer to Earth than the farthest point of the orbit. A full moon that happens near the perigee is called a supermoon.
So why is it super? The relatively close proximity of the moon makes it seem a little bit bigger and brighter than usual, though the difference between a supermoon and a normal moon is usually hard to notice unless you're looking at two pictures side by side.
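That "a little bit bigger and brighter" claim can be quantified. Apparent size scales inversely with distance, and brightness with distance squared. The perigee and apogee figures below are commonly cited average lunar distances, not values from this article:

```python
# How much bigger and brighter is a perigee full moon than an apogee one?
# Typical lunar distances in miles (commonly cited averages; the gap of
# roughly 26,000-28,000 miles matches the figure in the text).
PERIGEE_MI = 225_700
APOGEE_MI = 252_100

size_gain = APOGEE_MI / PERIGEE_MI - 1            # fractional increase in apparent diameter
brightness_gain = (APOGEE_MI / PERIGEE_MI) ** 2 - 1  # brightness scales as 1/distance^2

print(f"A perigee full moon looks ~{size_gain:.0%} larger "
      f"and ~{brightness_gain:.0%} brighter than an apogee full moon.")
```

A roughly 12% difference in apparent diameter sounds significant, but spread across the whole lunar disc it's genuinely hard to notice by eye, which is why side-by-side photos are the best way to see it.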
How Does a Lunar Eclipse Work?
A lunar eclipse happens when Earth's shadow covers all or part of the moon. This can only happen during a full moon, so first, it helps to understand what makes a full moon.
Like Earth, the moon is always half illuminated by the sun. A full moon happens when the moon and the sun are on opposite sides of Earth, which allows you to see the entire lit-up side as a round disc in the night sky.
If the moon had a totally flat orbit, every full moon would be a lunar eclipse. But the moon's orbit is tilted by about 5 degrees relative to Earth's orbit. So, most of the time a full moon ends up a little above or below the shadow cast by Earth.
But twice in each lunar orbit, the moon crosses the plane of Earth's orbit. If one of these crossings coincides with a full moon, the sun, Earth and the moon form a straight line and the moon passes through Earth's shadow. This results in a total lunar eclipse.
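A rough calculation shows why that 5-degree tilt is enough to make most full moons miss Earth's shadow entirely. The distances below are typical textbook averages, not figures from this article:

```python
import math

# At the moon's distance, a 5-degree tilt can carry the moon far above or
# below Earth's shadow, whose dark core (the umbra) is comparatively small.
# Typical values (textbook averages, not from this article):
MOON_DISTANCE_MI = 238_855   # mean Earth-moon distance
UMBRA_RADIUS_MI = 2_900      # approx. radius of Earth's umbra at that distance
TILT_DEG = 5.0

max_offset = MOON_DISTANCE_MI * math.sin(math.radians(TILT_DEG))
print(f"Maximum offset from Earth's orbital plane: ~{max_offset:,.0f} miles")
print(f"Umbra radius at the moon's distance:       ~{UMBRA_RADIUS_MI:,} miles")
```

Since the moon can sit roughly 20,000 miles above or below the plane while the shadow is only a few thousand miles across, an eclipse is only possible when a full moon lands very close to one of the two crossing points.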
To see a lunar eclipse, you need to be on the night side of Earth while the moon passes through the shadow. The best places to see the eclipse on May 26, 2021, will be the middle of the Pacific Ocean, Australia, the East Coast of Asia and the West Coast of the Americas. In the eastern half of the U.S., only the very earliest stages will be visible before the moon sets.
Why Does the Moon Look Red?
When the moon is completely covered by Earth's shadow, it darkens but doesn't go completely black. Instead, it takes on a red color, which is why total lunar eclipses are sometimes called red or blood moons.
Sunlight contains all colors of visible light. The particles of gas that make up Earth's atmosphere are more likely to scatter blue wavelengths of light while redder wavelengths pass through. This is called Rayleigh scattering, and it's why the sky is blue and sunrises and sunsets are often red.
In the case of a lunar eclipse, red light can pass through Earth's atmosphere and is refracted – or bent – toward the moon, while blue light is filtered out. This leaves the moon with a pale reddish hue during an eclipse.
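The strength of Rayleigh scattering scales as the inverse fourth power of wavelength, so a short calculation shows just how strongly the atmosphere favors scattering blue light over red. The two wavelengths below are representative choices for blue and red visible light, not values from this article:

```python
# Rayleigh scattering intensity scales as 1/wavelength^4.
# Representative visible-light wavelengths in nanometers (illustrative).
BLUE_NM = 450
RED_NM = 650

ratio = (RED_NM / BLUE_NM) ** 4
print(f"Blue light (~{BLUE_NM} nm) is scattered ~{ratio:.1f}x more strongly "
      f"than red light (~{RED_NM} nm)")
```

That factor of roughly four is why sunlight filtering through Earth's atmosphere arrives at the eclipsed moon stripped of most of its blue component, leaving the reddish hue.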
Hopefully you will be able to go see this super lunar eclipse. If you do, you'll know exactly what makes it such a special sight.
Portions of this story originally appeared in a previous article published on Jan. 24, 2018.
Shannon Schmoll is director of the Abrams Planetarium, Department of Physics and Astronomy, Michigan State University.
Disclosure statement: Shannon Schmoll receives funding from the Institute of Museum and Library Services and the National Science Foundation.
Republished with permission from The Conversation.
By Avalon C.S. Owens and Sara Lewis
Before humans invented fire, the only things that lit up the night were the moon, the stars and bioluminescent creatures – including fireflies. These ambassadors of natural wonder are soft-bodied beetles that emit “cold light," using a biochemical reaction housed in their abdominal lanterns.
Fireflies exchange bioluminescent courtship signals as a precursor to mating. In doing so, they construct spectacular light shows that inspire joy and delight in people all around the world. Unfortunately, human activities threaten to extinguish these silent sparks.
In recent decades, fireflies have vanished from many places where they were once found. Like other insects, fireflies are threatened by habitat loss and pesticide use. They are also uniquely vulnerable to the harmful effects of light pollution.
A Life in the Dark
Fireflies evolved some 100 million years ago and have blossomed into more than 2,200 species that are found on every continent except Antarctica. Here in North America, nearly 150 different species of flashing firefly light up our summer nights.
Most North American species have a two- to four-week mating season. Each evening, males and females engage in a dash of light flirtation. The males fly around, producing a species-specific pattern of flashes. Females, perched in the undergrowth, discreetly respond when they are interested with flashes of their own.
For the vast majority of evolutionary history, nighttime light sources were predictable and short-lived: The sun set, and the moon waned. But as advances in technology made it cheaper and easier for humans to light up their environment, light pollution has become a constant presence in urban, suburban and rural habitats.
Human-caused light sources – house lights, path lights, streetlights – often shine all night, year-round. Humans can use curtains to block out a neighbor's annoying LED floodlight, but nocturnal animals aren't so fortunate. The more we light up the night, the less space we leave for the firefly flash dance.
Synchronous fireflies, native to the U.S. Southeast, coordinate their flashes into bursts that ripple through groups of insects.
Blinded by the Light
We and other firefly researchers have become increasingly worried about the future of these remarkable insects. More than a decade of scientific research offers ample evidence that light pollution is a threat to firefly reproduction.
The fundamental problem is visibility: Fireflies use their bioluminescence to flirt in the dark. It doesn't work so well with the lights on.
Scientists have known for some time that direct illumination from a nearby streetlight makes male fireflies flash less, but that is only half the story. As with most animals that engage in complex courtship rituals, female fireflies are the choosy ones – and they are watching the show with the rest of us. When a female sees a male she likes, she flashes back. He zips over, and that's when the magic happens.
Our recent lab study shows that females of a common New England firefly species are even more sensitive to direct illumination than their male counterparts. Under artificial light, males flash about half as often, while females rarely, if ever, flash back.
It may be that female fireflies are quite literally blinded by the light shining down into their eyes. Or even if they do manage to pick out a male flash pattern here and there, they might not think it worth a reply. Previous research shows that female fireflies prefer bright flashes over dim ones, and background light can turn an otherwise bright flash into one that is dull and unimpressive.
The brightness of the artificial light source makes a big difference, but its dominant color is also a factor. Fireflies don't see blue or red light very well because they have evolved to focus in on the particular yellow-green hue that they use to communicate. Amber light, which has a yellow-orange hue, is most disruptive to firefly courtship – even more so than white light – because it approaches the color of firefly bioluminescence.
Help Fireflies Reclaim the Night
First, remove unnecessary light. Lights left on in the middle of the night – especially in natural habitats like backyards, parks and reserves – often serve no one. Install motion detectors, timers and shielding to ensure that light goes only where people need it, when they need it. These devices can pay for themselves over the long term.
Finally, remember this: The redder the better! When buying new outdoor lights, opt for monochrome red LEDs. Some lighting manufacturers have begun to tout amber LEDs as “insect-friendly," but they are not thinking about fireflies. And while it's true that amber light doesn't attract as many flying insects as white light, red light attracts even fewer.
As with any harmful environmental pollutant, limiting how much artificial light we create will always be more effective than trying to lessen its impact. Fortunately, light pollution is instantly and completely reversible, which means that we can change things for the better for fireflies with the flip of a switch.
Fireflies give us so much, and don't demand a lot in return – just a bit of dark night to call their own.
Avalon C.S. Owens is a Ph.D. candidate in biology at Tufts University, and Sara Lewis is a professor of biology at Tufts University.
Reposted with permission from The Conversation.
Using Captured CO2 in Everyday Products Could Help Fight Climate Change, But Will Consumers Want Them?
By Lucca Henrion, Joe Árvai, Lauren Lutzke and Volker Sick
How would you feel if captured carbon dioxide were in your child's toys, or in the concrete under your house?
The technology to capture climate-warming carbon dioxide emissions from smokestacks, and even from the air around us, already exists; so too does the technology to use this carbon dioxide to make products like plastics, concrete, carbonated drinks and even fuel for aircraft and automobiles.
That combination – known as carbon capture and utilization – could take up billions of tons of carbon dioxide emissions if the technologies were adopted across a range of sectors worldwide.
But for that to happen, the public will have to accept these new products. Will they? That's a question we have been exploring as engineers who work on carbon capture technologies and as social psychologists.
One Key to Success: CCU Adds Economic Value
Studies show that to stabilize the climate by 2050, the world will have to do more than just stop greenhouse gas emissions. It also will have to remove huge amounts of carbon dioxide from the atmosphere. Trees, soils and oceans naturally store some carbon dioxide, but human activities produce about five times more than nature can handle.
That's why technologies that can reuse carbon dioxide to avoid fossil fuel use – or even better, lock it away in long-lived products like cement – are essential.
The key to carbon capture and utilization's potential is that these products have economic value. That value can give companies the incentive to deploy the technology at the global scale necessary to slow climate change.
Carbon capture technology itself isn't new. Initially, captured carbon dioxide was used to force oil and gas out of old wells. Once emissions are captured, typically from an industrial smokestack via a complex chemical filter, they can be pumped deep underground and stored in depleted oil reservoirs or porous rock formations. That keeps the carbon dioxide from reaching the atmosphere, where it can contribute to climate change.
But storing carbon dioxide in the ground doesn't create a new product. The absence of an economic return – coupled with concerns about storing carbon dioxide underground – has slowed the adoption of the technology in most countries.
How Do People Feel About Carbon Dioxide-Based Products?
For many products made with captured carbon dioxide, success will depend on whether the public accepts them.
Two of us recently conducted one of the first large-scale studies to examine public perception of carbon dioxide-based products in the U.S. We asked over 2,000 survey participants whether they would be willing to consume or use various carbon dioxide-based products, including carbonated beverages, plastic food storage containers, furniture made with foam or plastic, and shatterproof glass.
We found that most people knew little about carbon capture and use. However, 69% were open to the idea after learning how it worked and how it helped reduce the emissions contributing to climate change.
There was one exception when we asked about different types of products people might be willing to use: Fewer people – only 56% – were open to the idea of using captured carbon dioxide in carbonated beverages.
Safety was a concern for many people in the survey. One-third didn't know if these products might pose a health risk, and others thought they would. It's important to understand that products made with captured carbon dioxide are subject to the same safety regulations as traditional materials used in food and consumer products. This includes filtering out unwanted pollutants in the flue gas before using the carbon dioxide in carbonated beverages or plastics.
When carbon dioxide is used as a raw material, it becomes chemically stable in the finished product – carbon dioxide used to create plastic, for example, will not turn back into a gas on its own.
What people may not realize is that the majority of carbon dioxide currently used nationwide is already a fossil fuel byproduct from the steam-methane reforming process. This carbon dioxide is used widely for purposes that include making dry ice, performing certain medical procedures and carbonating your favorite soda.
Overall, we found that people were open to using these products, and that trend crossed all ages, levels of education and political ideologies.
Carbon capture and use already has bipartisan support in Washington, and the Department of Energy is funding research in carbon management. Bipartisan consumer support could quickly expand its use, creating another way to keep carbon emissions out of the air.
Over 77 million tons of carbon dioxide was captured worldwide in 2020, but use of that carbon dioxide lags behind. One use that is quickly expanding is using carbon dioxide to cure, or harden, concrete. A company called CarbonCure, for example, has permanently stored over 90,000 tons of captured carbon dioxide in concrete to date.
Recently, Unilever and partners piloted replacing fossil-based ethanol with carbon dioxide-based ethanol for manufacturing laundry detergent, significantly reducing the associated ethanol emissions. Both are cost-competitive methods to capture and use carbon dioxide, and they demonstrate why carbon capture and use could be the most market-friendly way to remove carbon dioxide on a large scale.
How Innovators Can Improve Public Perception
Some emerging technologies could help address the perceived risks of ingesting carbon captured from industrial emissions.
For example, a Coca-Cola subsidiary is piloting a project in which carbon dioxide is captured directly from ambient air using direct air carbon capture technology and then used in drinks. Although it's currently expensive, the costs of direct air carbon capture are expected to fall as it is used more widely, and its use could reduce people's concerns about health risks.
The most important steps may be educating the public about the process and the value of carbon dioxide-based products. Companies can alleviate concerns by being open about how they use carbon dioxide, why their products are safe and the benefits they hold for the climate.
Lucca Henrion is a research fellow at the Global CO2 Initiative, University of Michigan.
Joe Árvai is the Dana and David Dornsife Professor of Psychology and director of the Wrigley Institute for Environmental Studies, USC Dornsife College of Letters, Arts and Sciences.
Lauren Lutzke is a Ph.D. student at the USC Dornsife College of Letters, Arts and Sciences.
Volker Sick is the Arthur F. Thurnau Professor; DTE Energy Professor of Advanced Energy Research; and director of the Global CO2 Initiative, University of Michigan.
Disclosure statement: Lucca Henrion works as a research fellow in the Global CO2 Initiative at the University of Michigan. He is a volunteer with the Open Air Collective. Joe Árvai receives funding from The National Science Foundation. Lauren Lutzke previously received funding from the Erb Institute for Global Sustainable Enterprise, and the Global CO2 Initiative, both at the University of Michigan. Volker Sick receives funding from the US Department of Energy, NRC Canada, and the University of Michigan.
Reposted with permission from The Conversation.
By Mojtaba Sadegh, Amir AghaKouchak and John Abatzoglou
Just about every indicator of drought is flashing red across the western U.S. after a dry winter and warm early spring. The snowpack is at less than half of normal in much of the region. Reservoirs are being drawn down, river levels are dropping and soils are drying out.
It's only May, and states are already considering water use restrictions to make the supply last longer. California's governor declared a drought emergency in 41 of 58 counties. In Utah, irrigation water providers are increasing fines for overuse. Some Idaho ranchers are talking about selling off livestock because rivers and reservoirs they rely on are dangerously low and irrigation demand for farms is only just beginning.
Scientists are also closely watching the impact that the rapid warming and drying is having on trees, worried that water stress could lead to widespread tree deaths. Dead and drying vegetation means more fuel for what is already expected to be another dangerous fire season.
U.S. Interior Secretary Deb Haaland and Agriculture Secretary Tom Vilsack told reporters on May 13, 2021, that federal fire officials had warned them to prepare for an extremely active fire year. “We used to call it fire season, but wildland fires now extend throughout the entire year, burning hotter and growing more catastrophic in drier conditions due to climate change," Vilsack said.
The U.S. Drought Monitor for mid-May shows nearly half of the West in severe or extreme drought. National Drought Mitigation Center/USDA/NOAA
The Many Faces of Drought
Several types of drought are converging in the West this year, and all are at or near record levels.
When too little rain and snow falls, it's known as meteorological drought. In April, precipitation across large parts of the West was less than 10% of normal, and the lack of rain continued into May.
Rivers, lakes, streams and groundwater can fall into what's known as hydrological drought when their water levels drop. Many states are now warning about low streamflow after a winter of below-normal snowfall, with warm spring temperatures in early 2021 speeding up melting. The U.S. Bureau of Reclamation said Lake Mead, a giant Colorado River reservoir that provides water for millions of people, is on pace to fall in June to levels that could trigger the first federal water shortage declaration, with water use restrictions across the region.
Dwindling soil moisture leads to another problem, known as agricultural drought. The average soil moisture levels in the western U.S. in April were at or near their lowest levels in over 120 years of observations.
Four signs of drought. Climate Toolbox
These factors can all drive ecosystems beyond their thresholds – into a condition called ecological drought – and the results can be dangerous and costly. Fish hatcheries in Northern California have started trucking their salmon to the Pacific Ocean, rather than releasing them into rivers, because the river water is expected to be at historic low levels and too warm for young salmon to tolerate.
One of the West's biggest water problems this year is the low snowpack.
The western U.S. is critically dependent on winter snow slowly melting in the mountains and providing a steady supply of water during the dry summer months. But the amount of water in snowpack is on the decline here and across much of the world as global temperatures rise.
Several states are already seeing how that can play out. Federal scientists in Utah warned in early May that more water from the snowpack is sinking into the dry ground where it fell this year, rather than running off to supply streams and rivers. With the state's snowpack at 52% of normal, streamflows are expected to be well below normal through the summer, with some places at less than 20%.
Snowpack is typically measured by the amount of water it holds, known as snow water equivalent. Natural Resources Conservation Service
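Snow water equivalent is simply snow depth scaled by the snow's density relative to water, so converting a snowpack reading into the water it holds is basic arithmetic. The depth and density below are illustrative values, not measurements from this article:

```python
# Snow water equivalent (SWE): the depth of water you would get if the
# snowpack melted in place.  SWE = snow depth * (snow density / water density)
snow_depth_cm = 100          # hypothetical snowpack depth
snow_density = 200           # kg/m^3, a typical settled-snow value (illustrative)
water_density = 1000         # kg/m^3

swe_cm = snow_depth_cm * snow_density / water_density
print(f"{snow_depth_cm} cm of snow holds about {swe_cm:.0f} cm of water")
# 100 cm of snow at this density melts down to only about 20 cm of water,
# which is why depth alone is a poor measure of the water supply.
```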
It's important to understand that drought today isn't only about nature.
More people are moving into the U.S. West, increasing demand for water and irrigated farmland. And global warming – driven by human activities like the burning of fossil fuels – is now fueling more widespread and intense droughts in the region. These two factors act as additional straws pulling water from an already scarce resource.
As demand for water has increased, the West is pumping out more groundwater for irrigation and other needs. Centuries-old groundwater reserves in aquifers can provide resilience against droughts if they are used sustainably. But groundwater reserves recharge slowly, and the West is seeing a decline in those resources, mostly because water use for agriculture outpaces their recharge. Water levels in some wells have dropped at a rate of 6.5 feet (2 meters) per year.
The result is that these regions are less able to manage droughts when nature does bring hot, dry conditions.
California fish hatcheries have started trucking their salmon to the Pacific Ocean because the rivers they are usually released into are too low and warm. AP Photo / Rich Podroncelli
Rising global temperatures also play several roles in drought. They influence whether precipitation falls as snow or rain, how quickly snow melts and, importantly, how quickly the land, trees and vegetation dry out.
Extreme heat and droughts can intensify one another. Solar radiation causes water to evaporate, drying the soil and air. With less moisture, the soil and air then heat up, which dries the soil even more. The result is extremely dry trees and grasses that can quickly burn when fires break out, and also thirstier soils that demand more irrigation.
Alarmingly, the trigger for this drying and warming cycle has been changing. In the 1930s, lack of precipitation triggered the cycle; in recent decades, excess heat has initiated the process. As global warming raises temperatures, soil moisture evaporates earlier and faster, drying out soils and setting the warming and drying cycle in motion.
Fire Warnings Ahead
Hot, dry conditions in the West last year fueled a record-breaking wildfire season that burned over 15,900 square miles (41,270 square kilometers), including the largest fires on record in Colorado and California.
As drought persists, the chance of large, disastrous fires increases. Federal agencies' seasonal outlooks of warmer, drier-than-normal summer conditions and their fire season forecasts suggest another long, tough fire year is ahead.
Mojtaba Sadegh is an assistant professor of civil engineering at Boise State University.
Amir AghaKouchak is an associate professor of civil & environmental engineering at the University of California, Irvine.
John Abatzoglou is an associate professor of engineering at the University of California, Merced.
Disclosure statement: Mojtaba Sadegh receives funding from the National Science Foundation. Amir AghaKouchak receives funding from National Science Foundation, National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration. John Abatzoglou receives funding from the National Oceanic and Atmospheric Administration and the National Science Foundation.
Reposted with permission from The Conversation.