I know it may be hard to convince you, but let me try: Don’t kill the next spider you see in your home.
Why? Because spiders are an important part of nature and our indoor ecosystem – as well as being fellow organisms in their own right.
People like to think of their dwellings as safely insulated from the outside world, but many types of spiders can be found inside. Some are accidentally trapped, while others are short-term visitors. Some species even enjoy the great indoors, where they happily live out their lives and make more spiders. These arachnids are usually secretive, and almost all you meet are neither aggressive nor dangerous. And they may be providing services like eating pests – some even eat other spiders.
My colleagues and I conducted a visual survey of 50 North Carolina homes to inventory just which arthropods live under our roofs. Every single house we visited was home to spiders. The most common species we encountered were cobweb spiders and cellar spiders.
Both build webs where they lie in wait for prey to get caught. Cellar spiders sometimes leave their webs to hunt other spiders on their turf, mimicking prey to catch their cousins for dinner.
Although they are generalist predators, apt to eat anything they can catch, spiders regularly capture nuisance pests and even disease-carrying insects – for example, mosquitoes. There’s even a species of jumping spider that prefers to eat blood-filled mosquitoes in African homes. So killing a spider doesn’t just cost the arachnid its life, it may take an important predator out of your home.
It’s natural to fear spiders. They have lots of legs and almost all are venomous – though the majority of species have venom too weak to cause issues in humans, if their fangs can pierce our skin at all. Even entomologists can fall prey to arachnophobia. I know a few spider researchers who overcame their fear by observing and working with these fascinating creatures. If they can do it, so can you!
Spiders are not out to get you and actually prefer to avoid humans; we are much more dangerous to them than vice versa. Bites from spiders are extremely rare. Although there are a few medically important species like widow spiders and recluses, even their bites are uncommon and rarely cause serious issues.
If you truly can’t stand that spider in your house, apartment, garage, or wherever, instead of smashing it, try to capture it and release it outside. It’ll find somewhere else to go, and both parties will be happier with the outcome.
But if you can stomach it, it’s OK to have spiders in your home. In fact, it’s normal. And frankly, even if you don’t see them, they’ll still be there. So consider a live-and-let-live approach to the next spider you encounter.
Antarctica is a vast icy wasteland covered by the world’s largest ice sheet. This ice sheet contains about 90% of the fresh water on the planet. It acts as a massive heat sink, and its meltwater drives the world’s oceanic circulation. Its existence is therefore a fundamental part of Earth’s climate.
Less well known is that Antarctica is also host to several active volcanoes, part of a huge “volcanic province” which extends for thousands of kilometres along the western edge of the continent. Although the volcanic province has been known and studied for decades, about 100 “new” volcanoes were recently discovered beneath the ice by scientists who used satellite data and ice-penetrating radar to search for hidden peaks.
These sub-ice volcanoes may be dormant. But what would happen if Antarctica’s volcanoes awoke?
We can get some idea by looking to the past. One of Antarctica’s volcanoes, Mount Takahe, is found close to the remote centre of the West Antarctic Ice Sheet. In a new study, scientists implicate Takahe in a series of eruptions rich in ozone-consuming halogens that occurred about 18,000 years ago.
These eruptions, they claim, triggered an ancient ozone hole, warmed the southern hemisphere – causing glaciers to melt – and helped bring the last ice age to a close.
This sort of environmental impact is unusual. For it to happen again would require a series of eruptions, similarly enriched in halogens, from one or more volcanoes that are currently exposed above the ice. Such a scenario is unlikely although, as the Takahe study shows, not impossible. More likely is that one or more of the many subglacial volcanoes, some of which are known to be active, will erupt at some unknown time in the future.
Eruptions below the ice
Because of the enormous thickness of overlying ice, it is unlikely that volcanic gases would make it into the atmosphere. So an eruption wouldn’t have an impact like that postulated for Takahe. However, the volcanoes would melt huge caverns in the base of the ice and create enormous quantities of meltwater. Because the West Antarctic Ice Sheet is wet rather than frozen to its bed – imagine an ice cube on a kitchen work top – the meltwater would act as a lubricant and could cause the overlying ice to slip and move more rapidly. These volcanoes can also stabilise the ice, however, as they give it something to grip onto – imagine that same ice cube snagging onto a lump-shaped object.
In any case, the volume of water that would be generated by even a large volcano is a pinprick compared with the volume of overlying ice. So a single eruption won’t have much effect on the ice flow. What would make a big difference is if several volcanoes erupted close to or beneath any of West Antarctica’s prominent “ice streams”.
Ice streams are rivers of ice that flow much faster than their surroundings. They are the zones along which most of the ice in Antarctica is delivered to the ocean, and therefore fluctuations in their speed can affect the sea level. If the additional “lubricant” provided by multiple volcanic eruptions was channelled beneath ice streams, the subsequent rapid flow may dump unusual amounts of West Antarctica’s thick interior ice into the ocean, causing sea levels to rise.
Under-ice volcanoes are probably what triggered rapid flow of ancient ice streams into the vast Ross Ice Shelf, Antarctica’s largest ice shelf. Something similar might have occurred about 2,000 years ago with a small volcano in the Hudson Mountains, which lie underneath the West Antarctic Ice Sheet – if it erupted again today, it could cause the nearby Pine Island Glacier to speed up.
The volcano–ice melt feedback loop
Most dramatically of all, a large series of eruptions could destabilise many more subglacial volcanoes. As volcanoes cool and crystallise, their magma chambers become pressurised and all that prevents the volcanic gases from escaping violently in an eruption is the weight of overlying rock or, in this case, several kilometres of ice. As that ice becomes much thinner, the pressure reduction may trigger eruptions. More eruptions and ice melting would mean even more meltwater being channelled under the ice streams.
Potentially a runaway effect may take place, with the thinning ice triggering more and more eruptions. Something similar occurred in Iceland, which saw an increase in volcanic eruptions when glaciers began to recede at the end of the last ice age.
So it seems the greatest threat from Antarctica’s many volcanoes will be if several erupt within a few decades of each other. If those volcanoes have already grown above the ice and their gases were rich in halogens then enhanced warming and rapid deglaciation may result. But eruptions probably need to take place repeatedly over many tens to hundreds of years to have a climatic impact.
More likely is the generation of large quantities of meltwater during subglacial eruptions that might lubricate West Antarctica’s ice streams. The eruption of even a single volcano situated close to any of Antarctica’s ice streams could cause significant amounts of ice to be swept into the sea. However, the resulting thinning of the inland ice is also likely to trigger further subglacial eruptions, generating meltwater over a wider area and potentially causing a runaway effect on ice flow.
John Smellie, Professor of Volcanology, University of Leicester
You’ve been drinking the fascist, white supremacist, white neo-Nazi milk … To be a successful antifa soldier, you have to become a soy boy.
In one of his satirical YouTube videos, alt-right commentator James Allsup suggests that what epitomises the anti-fascist, feminist, politically correct people he lambasts is that they drink soy instead of dairy milk.
Allsup is one of many members of the so-called alt-right, many of whom use social media and ironic humour to promote racist, sexist, antisemitic and other offensive views. Allsup himself attended last summer’s infamous Unite the Right gathering of neo-Nazis and white supremacists. His Twitter account was suspended last December on the grounds that his political views violated the company’s terms of use.
Allsup’s video is part of a viral flurry of tweets, memes, and videos depicting the battle between dairy and soy milk and all they represent.
The #MilkTwitter hashtag went viral after an incident that’s since been dubbed the “milk party”, in which a large gathering of white men descended on an anti-Trump art installation a few weeks after Trump’s inauguration. The men carried cartons of milk and voiced everything from off-colour taunts to explicitly racist, sexist, anti-Semitic and homophobic rants. After taking a swig of milk from his carton, one bare-chested man approached the camera and sneered. “An ice cold glass of pure racism,” he growled into the lens.
After that night, milk quickly went viral, joining the ranks of Pepe the Frog and the “okay” emoji as symbols of 21st-century, post-Obama white supremacy. Trump supporters began carrying cartons of milk to rallies, and Richard Spencer and other prominent figures of the “alt-right” movement added milk-bottle emojis to their Twitter profiles. The #SoyBoy hashtag followed a few months later, going viral in the spring of 2017; it remains popular today.
For members of the alt-right, dairy milk symbolises strength of body and society; drinking it reinforces notions of white superiority and idealised visions of masculinity. Soy milk represents weakness, emasculation, and all things politically correct. The hashtags #MilkTwitter and #SoyBoy celebrate traditional gender norms and the “good old days” of white-dominated patriarchy, while ridiculing diversity and feminism.
Alt-right rhetoric
But #MilkTwitter and #SoyBoy don’t exist in a vacuum: milk has long been used as a symbol for and tool of oppression and exploitation. Even the verb “to milk” means “to exploit”.
There’s a long history of association between dairy milk and white supremacy, as legal scholar Andrea Freeman explores. Freeman traces the link back a century, with official US government documents from the 1920s suggesting a link between white people, milk-drinking and a superior intellect.
Similarly, sociologist Melanie DuPuis has described how milk was central to the construction of the modern Western nation state. The nutritionally “perfect” white drink was symbolically linked to the white-skinned bodies that were better able to digest it due to a genetic mutation known as lactase persistence. Early 20th century milk advertisements perpetuated this trope, often juxtaposing images of healthy-looking, light-skinned people with sickly-looking, darker-skinned ones. “By declaring milk perfect,” says DuPuis, “white northern Europeans announced their own perfection”.
Where dairy has symbolised white superiority, soy has long represented notions of weakness, nonwhiteness, and emasculation. Feminist and animal rights advocate Carol Adams has discussed how 19th century scholars justified British colonialism in part by dividing the world into “intellectually superior meat eaters and inferior plant eaters”. Asian cultures heavy in soy and rice consumption occupied the latter category.
In contemporary discussions, scientific studies linking phytoestrogens in soybeans to lower sperm count have been used – by alt-right trolls and mainstream media alike – to create a narrative that soy emasculates men who consume it.
“This is gonna fill you full of estrogen, this is gonna block all your testosterone,” Allsup pronounces on YouTube, holding up a carton of soy milk. “We’re gonna be drinking only soy milk, and it’s gonna flush all that testosterone – which is a word that means white supremacy – out of your body.” While Allsup obviously intends his video to be funny, screenshots from a Men’s Health article about soy’s potential to “undermine everything it means to be male” suggest he nevertheless takes the threat of soy seriously.
Taking alt-right irony seriously
Many dismiss the racist, sexist, anti-Semitic and homophobic rants on #MilkTwitter and #SoyBoy as being nothing more than ironic antics targeting politically correct “normies” who can’t take a joke. But irony and ambiguity are worth taking seriously: they are established strategies of alt-right trolls who seek to exploit Poe’s Law, the notion that it’s virtually impossible to distinguish between satire and sincerity online. Irony allows for extremist views to hide in plain sight – in the words of prominent neo-Nazi Andrew Anglin, “non-ironic Nazism masquerading as ironic Nazism”.
The danger of Poe’s Law, explained by Jason Wilson in an article for The Guardian about alt-right tactics, isn’t that satire may be mistaken for sincerity, but that “every ‘ironic’ repetition of far-right ideals contributes to a climate in which racism, misogyny, or Islamophobia is normalised”. Because of that, says Angela Nagle, who studies the alt-right’s online culture wars, “the best response is to stubbornly take the ‘alt-right’ at their word”.
Vegan vs. dairy masculinities
The strategy of those using #MilkTwitter and #SoyBoy is to mix a carefully constructed view of history and cherry-picked science to reinforce sexist and racist beliefs, while fostering a fear of contemporary shifts in power away from white males and towards women, people of colour, and those occupying space outside the cultural mainstream.
Vegans are often the target of #MilkTwitter and #SoyBoy taunts, with “the vegan agenda” being code for all things weak, effeminate and politically correct. Vegans, after all, drink soy (or other plant milk) instead of dairy, typically for ethical reasons related to caring about animals’ welfare – another female-coded trait.
The irony of #MilkTwitter and #SoyBoy casting vegan men as less masculine is that it is hard to imagine a more feminine-coded substance than estrogen-filled animal milk, coming from the breasts of female mammals. But in the identity politics of the alt-right, linking dairy milk to white supremacy, such complexities are taken lightly. After all, they are only joking, right?
Motherhood is getting considerable attention, even if much of the news is concerning. Fertility rates are falling in America as millennials decide not to have children. This should hardly come as a surprise. The cost of raising a child to adulthood has been increasing and real median household income has only just regained its 1999 level.
At this time, when it could be argued that maternity is in decline, Mary Shelley’s classic work of literature, “Frankenstein,” celebrating its 200th anniversary this year, invites us to reflect on the deeper importance of mothers in our lives.
Shelley, who published the work at the age of only 19, had many reasons to make motherhood a major theme. Her mother, the feminist Mary Wollstonecraft, had died from complications arising from her birth. Shelley’s own attempts at motherhood would result in multiple miscarriages and the deaths of three children. Not surprisingly, mothers in “Frankenstein” are conspicuous by their absence – with disastrous consequences.
The creation of Frankenstein
“Frankenstein” tells the tale of young scientist Victor Frankenstein, who is so horrified at the prospect of death that he seeks a means of restoring life to the deceased. He creates an 8-foot-tall humanoid creature, whose appearance renders it loathsome to all and to which he never gives a proper name. Spurned by its creator, the creature develops a desire for revenge and soon takes the lives of everyone dear to Victor.
At numerous points, the novel highlights the devastating effects of maternal absence.
To begin with, mothers in “Frankenstein” are quite short-lived. Victor Frankenstein’s mother, an orphan herself, dies of scarlet fever while nursing Victor’s “cousin” and eventual wife, Elizabeth. Elizabeth, in turn, is killed by the monster on her honeymoon. Justine, the Frankensteins’ housekeeper, who is falsely convicted of the murder of Victor’s younger brother, also grows up motherless.
Frankenstein’s most dramatic instance of motherlessness is the monster itself, a human being created by a man alone. Reflecting on this feat, Victor remarks that “no father could claim the gratitude of his child so completely” as he would deserve of the new race of creatures he sought to create.
Simply put, he devises a new way of bestowing life that completely sidesteps the need for conception, pregnancy and childbirth.
Yet Victor had not done away entirely with the need for maternity. For though he had “selected the creature’s features as beautiful,” the moment he beholds it stirring, he recoils.
“The beauty of the dream vanished, and breathless horror and disgust filled my heart.”
Unbound by any maternal affection or calling, Victor is “unable to endure” the being he had created and rushes out of the room. His creation was never part of him, and he feels at liberty to abandon it.
When the creator matters more
The roots of the problem lie largely in the fact that Victor has moved procreation from the domain of the natural – the purview of Mother Nature – to that of the technological.
His quest is a purely scientific one – a study of chemistry, anatomy and the decay of the human body. It is so devoid of any regard for the sanctity of life that Victor comes to regard a churchyard as nothing more than a “receptacle of bodies deprived of life,” implying that a living child might be little more than a body not yet deprived of life.
To him, there is nothing mysterious about life and death. The animation of lifeless matter looms before him as nothing more than a daunting but purely technical challenge. He dreams of the power “to renew life” and becomes so engrossed in this one pursuit that his eyes “become insensible to the charms of nature,” including the unfolding of the seasons around him. A “single great object” swallows up “every habit of his nature.” In short, his scientific quest has left him with no appreciation for life’s beauty and mystery.
What long reigned as one of the most mysterious and awe-inspiring experiences in human life – the birth of a human being – has in Victor’s mind become little more than proof of his own greatness:
“I was surprised that among so many men of genius who had directed their enquiries toward the same science, I alone should be reserved to discover so astonishing a secret.”
To Victor, the act of creation says much less about the creature than the creator.
Devoid of the feminine, bringing forth new life becomes a completely masculine act, an exercise of mastery and control over a reluctant but ultimately compliant nature. Victor’s cold detachment from his creation contrasts sharply with the experience of childbirth as described by those who have been through it – a description not of conquest but endurance and the unfolding of something that resists control.
The experience of labor and birth
Consider this account of labor by the 20th-century activist Dorothy Day in her essay, “Having a Baby: A Christmas Story”:
“Where before there had been waves, there were now tidal waves. Earthquake and fire swept my body. My spirit was a battleground on which thousands were butchered in a most horrible manner.”
It is not difficult to imagine Day having just read Frankenstein’s account of bringing forth new life when she penned these lines about men giving birth:
“‘What do they know about it, the idiots?’ I thought. And it gave me pleasure to imagine one of them in the throes of childbirth. How they would groan and holler and rebel. And wouldn’t they make everybody else miserable around them?”
In Day’s account, gestation and childbirth are not like pushing buttons on a control panel but a journey along which the mother is swept – something she does not so much choose as endure. And when it is over, she is presented with a baby fashioned less by her than in her and through her. The form of the baby, from its sex to its distinctive features, is a joyful surprise even to the woman who has served as the home of its development for over three-quarters of a year.
For Victor, the process is quite different. He too is surprised, but his surprise reflects the fact that, although he has in fact painstakingly selected each of the creature’s features, the end result turns out radically different from what he envisioned. He thought every aspect of the creature was subject to his control, but instead of a superman he has produced a monster. His horror is magnified by the fact that his creature is his product, while Dorothy Day receives her daughter as something more akin to a gift.
What does a mother add?
Thanks to “Frankenstein,” we can pose a question the answer to which would have seemed obvious throughout most of the course of human history: What does a mother add? The answer, in simplest terms, is that mothers add to life something that Victor Frankenstein – who treats the whole process of creation as nothing more than a challenge to his own ingenuity – is unable even to recognize, let alone wield: the power of a love that puts creature before creator.
Victor has made something new, but it was never a part of him, and from the moment he lays eyes on it he seeks to disassociate himself from it. Because the creature’s appearance disappoints him, he feels within his rights to turn his back on it – to abandon it to a world utterly unprepared to receive it. The circumstances of the creature’s birth may be monstrous, but it is not yet a monstrosity. Only by depriving it of any semblance of love does Victor create a true monster.
By showing us a world from which mothers are largely absent, Mary Shelley reminds us that the genius of motherhood lies less in biological reproduction than in the capacity to love. Human beings need love to develop and thrive. We honor this capacity of mothers when we say that someone has a face that “only a mother could love.”
Perhaps Victor’s creature would never have developed into a monster in the first place, if only it had enjoyed the love of a mother.
Right now, your computer might be using its memory and processor power – and your electricity – to generate money for someone else, without you ever knowing. It’s called “cryptojacking,” and it is an offshoot of the rising popularity of cryptocurrencies like bitcoin.
Instead of minting coins or printing paper money, creating new units of cryptocurrencies, which is called “mining,” involves performing complex mathematical calculations. These intentionally difficult calculations securely record transactions among people using the cryptocurrency and provide an objective record of the “order” in which transactions are conducted.
The user who successfully completes each calculation gets a reward in the form of a tiny amount of that cryptocurrency. That helps offset the main costs of mining, which involve buying advanced computer processors and paying for electricity to run them. It is not surprising that enterprising cryptocurrency enthusiasts have found a way to increase their profits, mining currency for themselves by using other people’s processing and electrical power.
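The “complex mathematical calculation” at the heart of most mining is a brute-force search known as proof of work: hash the transaction data together with a counter (a “nonce”) until the result meets a difficulty target. Here is a minimal Python sketch of the idea – a toy illustration only, not the actual algorithm of bitcoin or any other currency, which uses far harder targets and a more elaborate block format:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce)
    begins with `difficulty` hexadecimal zeros. This deliberately
    expensive search is what consumes processor time and electricity."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# With difficulty 4, roughly 16^4 (about 65,000) hashes are needed on average.
nonce = mine("alice pays bob 1 coin", difficulty=4)
```

Whoever finds a valid nonce first earns the reward – which is exactly why attackers want other people’s processors (and electricity bills) running this loop on their behalf.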
The mining script can be very small – just a few lines of text that download a small program from a web server, activate it on the user’s own browser and tell the program where to credit any mined cryptocurrency. The user’s computer and electricity do all the work, and the person who wrote the code gets all the proceeds. The computer’s owner may never even realize what’s going on.
Is all cryptocurrency mining bad?
There are legitimate purposes for this sort of embedded cryptocurrency mining – if it is disclosed to users rather than happening secretly. Salon, for example, asks its visitors to help provide financial support for the site in one of two ways: either allow the site to display advertising, for which Salon gets paid, or let the site conduct cryptocurrency mining while they read its articles. In that case the site makes very clear to users what it is doing, including the effect on their computers’ performance, so there is no problem. More recently, a UNICEF charity campaign began letting people donate their computers’ processing power to mine cryptocurrency.
However, many sites do not let users know what is happening, so they are engaging in cryptojacking. Our initial analysis indicates that many sites with cryptojacking software are engaged in other dubious practices: Some of them are classified by internet security firm FortiGuard as “malicious websites,” known to be homes for destructive and malicious software. Other cryptojacking sites were classified as “pornography” sites, many of which appeared to be hosting or indexing potentially illegal pornographic content.
The longer a person stays on a cryptojacked website, the more cryptocurrency their computer will mine. The most successful cryptojacking efforts are on streaming media sites, because they have lots of visitors who stay a long time. While legitimate streaming websites such as YouTube and Netflix are safe for users, some sites that host pirated videos are targeting visitors for cryptojacking.
Other sites extend a user’s apparent visit time by opening a tiny additional browser window and placing it in a hard-to-spot part of the screen, say, behind the taskbar. So even after a user closes the original window, the site stays connected and continues to mine cryptocurrency.
What harm does cryptojacking do?
The amount of electricity a computer uses depends on what it’s doing. Mining is very processor-intensive, and that activity requires more power. So a laptop’s battery will drain faster when it’s mining, just as it does when displaying a 4K video or handling a 3D rendering.
Similarly, a desktop computer will draw more power from the wall, both to power the processor and to run fans to prevent the machine from overheating. And even with proper cooling, the increased heat can take its own toll over the long term, damaging hardware and slowing down the computer.
This harms not only individuals whose computers are hijacked for cryptocurrency mining, but also universities, companies and other large organizations. A large number of cryptojacked machines across an institution can consume substantial amounts of electricity and damage large numbers of computers.
Protecting against cryptojacking
Users may be able to recognize cryptojacking on their own. Because it involves increasing processor activity, the computer’s temperature can climb – and the computer’s fan may activate or run more quickly in an attempt to cool things down.
People who are concerned their computers may have been subjected to cryptojacking should run an up-to-date antivirus program. While cryptojacking scripts are not necessarily actual computer viruses, most antivirus software packages also check for other types of malicious software. That usually includes identifying and blocking mining malware and even browser-based mining scripts.
Cryptocurrency mining can be a legitimate source of revenue – but not when it is done secretly, hijacking others’ computers to do the work and making them pay the resulting financial costs.
Pranshu Bajpai, Security Researcher, PhD Candidate, Michigan State University and Richard Enbody, Associate Professor, Computer Science & Engineering, Michigan State University
This article was originally published on The Conversation.
Around the globe, about 815 million people – 11 percent of the world’s population – went hungry in 2016, according to the latest data from the United Nations. This was the first increase in more than 15 years.
Between 1990 and 2015, due largely to a set of sweeping initiatives by the global community, the proportion of undernourished people in the world was cut in half. In 2015, U.N. member countries adopted the Sustainable Development Goals, which doubled down on this success by setting out to end hunger entirely by 2030. But a recent U.N. report shows that, after years of decline, hunger is on the rise again.
As evidenced by nonstop news coverage of floods, fires, refugees and violence, our planet has become a more unstable and less predictable place over the past few years. As these disasters compete for our attention, they make it harder for people in poor, marginalized and war-torn regions to access adequate food.
I study decisions that smallholder farmers and pastoralists, or livestock herders, make about their crops, animals and land. These choices are limited by lack of access to services, markets or credit; by poor governance or inappropriate policies; and by ethnic, gender and educational barriers. As a result, there is often little they can do to maintain secure or sustainable food production in the face of crises.
The new U.N. report shows that to reduce and ultimately eliminate hunger, simply making agriculture more productive will not be enough. It also is essential to increase the options available to rural populations in an uncertain world.
Conflict and climate change threaten rural livelihoods
Around the world, social and political instability are on the rise. Since 2010, state-based conflict has increased by 60 percent and armed conflict within countries has increased by 125 percent. More than half of the food-insecure people identified in the U.N. report (489 million out of 815 million) live in countries with ongoing violence. More than three-quarters of the world’s chronically malnourished children (122 million of 155 million) live in conflict-affected regions.
War hits farmers especially hard. Conflict can evict them from their land, destroy crops and livestock, prevent them from acquiring seed and fertilizer or selling their produce, restrict their access to water and forage, and disrupt planting or harvest cycles. Many conflicts play out in rural areas characterized by smallholder agriculture or pastoralism. These small-scale farmers are some of the most vulnerable people on the planet. Supporting them is one of the U.N.‘s key strategies for reaching its food security targets.
Disrupted and displaced
Without other options to feed themselves, farmers and pastoralists in crisis may be forced to leave their land and communities. Migration is one of the most visible coping mechanisms for rural populations who face conflict or climate-related disasters.
Globally, the number of refugees and internally displaced persons doubled between 2007 and 2016. Of the estimated 64 million people who are currently displaced, more than 15 million are linked to one of the world’s most severe conflict-related food crises in Syria, Yemen, Iraq, South Sudan, Nigeria and Somalia.
While migrating is uncertain and difficult, those with the fewest resources may not even have that option. New research by my colleagues at the University of Minnesota shows that the most vulnerable populations may be “trapped” in place, without the resources to migrate.
Displacement due to climate disasters also feeds conflict. Drought-induced migration in Syria, for example, has been linked to the conflict there, and many militants in Nigeria have been identified as farmers displaced by drought.
Supporting rural communities
To reduce world hunger in the long term, rural populations need sustainable ways to support themselves in the face of crisis. This means investing in strategies to support rural livelihoods that are resilient, diverse and interconnected.
Many large-scale food security initiatives supply farmers with improved crop and livestock varieties, plus fertilizer and other necessary inputs. This approach is crucial, but can lead farmers to focus most or all of their resources on growing more productive maize, wheat or rice. Specializing in this way increases risk. If farmers cannot plant seed on time or obtain fertilizers, or if rains fail, they have little to fall back on.
Increasingly, agricultural research and development agencies, NGOs and aid programs are working to help farmers maintain traditionally diverse farms by providing financial, agronomic and policy support for production and marketing of native crop and livestock species. Growing many different locally adapted crops provides for a range of nutritional needs and reduces farmers’ risk from variability in weather, inputs or timing.
While investing in agriculture is viewed as the way forward in many developing regions, equally important is the ability of farmers to diversify their livelihood strategies beyond the farm. Income from off-farm employment can buffer farmers against crop failure or livestock loss, and is a key component of food security for many agricultural households.
Training, education, and literacy programs allow rural people to access a greater range of income and information sources. This is especially true for women, who are often more vulnerable to food insecurity than men.
Conflict also tears apart rural communities, breaking down traditional social structures. These networks and relationships facilitate exchanges of information, goods and services, help protect natural resources, and provide insurance and buffering mechanisms.
In many places, one of the best ways to bolster food security is by helping farmers connect to both traditional and innovative social networks, through which they can pool resources; store food, seed and inputs; and make investments. Mobile phones enable farmers to get information on weather and market prices, work cooperatively with other producers and buyers, and obtain aid, agricultural extension or veterinary services. Leveraging multiple forms of connectivity is a central strategy for supporting resilient livelihoods.
In the past two decades the world has come together to fight hunger. This effort has produced innovations in agriculture, technology and knowledge transfer. Now, however, the compounding crises of violent conflict and a changing climate show that this approach is not enough. In the planet’s most vulnerable places, food security depends not just on making agriculture more productive, but also on making rural livelihoods diverse, interconnected and adaptable.
Mercury pollution is a problem usually associated with fish consumption. Pregnant women and children in many parts of the world are advised to eat fish low in mercury to protect against the adverse health impacts, including neurological damage, posed by a particularly toxic form of mercury, methylmercury.
But some people in China, the world’s largest mercury emitter, are exposed to more methylmercury from rice than they are from fish. In a recent study, we explored the extent of this problem and which direction it could go in the future.
We found that China’s future emissions trajectory can have a measurable influence on the country’s rice methylmercury. This has important implications not only in China but across Asia, where coal use is increasing and rice is a staple food. It is also relevant as countries across the world implement the Minamata Convention, a global treaty to protect human health and the environment from mercury.
Why is mercury a problem in rice?
Measurements of methylmercury in rice in China from the early 2000s were in areas where mercury mining and other industrial activities led to high mercury levels in soil that was then taken up by rice plants. More recent research, however, has shown that methylmercury in rice is also elevated in other areas of China. This suggests that airborne mercury – emitted by sources such as coal-fired power plants and subsequently settling onto the land – might also be a factor.
To better understand the process of methylmercury accumulation in rice through deposition – that is, mercury originating from the air that rains out or settles to the land – we constructed a computer model to analyze the relative importance of soil and atmospheric sources of rice methylmercury. Then we projected how future methylmercury concentrations could change under different emissions scenarios.
Concentrations of methylmercury in rice are lower than those in fish, but, in central China, people eat much more rice than fish. Studies have calculated that residents in areas with mercury-contaminated soil consume more methylmercury than the U.S. EPA’s reference dose of 0.1 microgram methylmercury per kilogram of body weight per day, a level set to protect against adverse health outcomes such as decreased IQ. Recent data suggest that other neurodevelopmental impacts from methylmercury might occur at levels below the reference dose. Few health studies, however, have examined impacts of methylmercury exposure in rice consumers specifically.
To identify the potential scope of the problem, we compared the areas in China where mercury deposition is expected to be high based on mercury models, with maps of rice production. We found that provinces with high mercury deposition also produce substantial amounts of rice. Seven provinces in central China (Henan, Anhui, Jiangxi, Hunan, Guizhou, Chongqing and Hubei) account for 48 percent of Chinese rice production and receive nearly twice as much atmospheric mercury deposition as the rest of China.
We calculated that mercury deposition could increase nearly 90 percent or decrease by 60 percent by 2050, depending on future policies and technologies.
Our modeling approach
To understand how mercury from the atmosphere might be incorporated into rice as methylmercury, we built a model to simulate mercury in rice paddies. Methylmercury is produced in the environment by biological activity – specifically, by bacteria. Often, this occurs in flooded environments such as wetlands and sediments. Similarly, rice paddies are kept flooded during the growing season, and the nutrient-rich environment created by rice roots supports both bacterial growth and methylmercury production.
Our rice paddy model simulates how mercury changes form, accumulates and converts to methylmercury in different parts of the ecosystem, including in the water, the soil and the rice plants.
In our model, mercury enters the standing flooded water via deposition and irrigation processes, and then moves among water, soil and plants. After initializing and calibrating the model, we ran it for the typical five-month duration from planting seedlings to rice harvest and compared our results to measurements of mercury in rice from China. We also conducted different simulations with varying atmospheric deposition and soil mercury concentrations.
Despite its simplicity, our model was able to reproduce how rice methylmercury concentrations vary across different Chinese provinces. It also accurately reflected how higher soil mercury concentrations led to higher concentrations in rice.
But the soil wasn’t the whole story. Mercury from water – which can come from the flooded water in rice paddies or the water held in the soil – can also influence concentrations in rice. How much depends on the relative rates of different processes within soil and water. Under some conditions, a portion of the mercury in rice can come from the mercury in the atmosphere, once that mercury is deposited to the rice paddy. This suggested that changing emissions of mercury could potentially affect concentrations in rice.
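The compartment structure described above – mercury entering flooded paddy water via deposition and irrigation, then moving among water, soil and plants – can be sketched in a few lines of code. This is an illustrative toy only: the three pools follow the article’s description, but the function name, rate constants and starting values are hypothetical placeholders, not figures or code from the study.

```python
# Toy three-pool sketch of mercury cycling in a rice paddy.
# Water, soil and plant pools exchange mercury at fixed daily rates;
# all rate constants and initial values are hypothetical placeholders.

def simulate_paddy(days=150, deposition=0.02, irrigation=0.01):
    """Track mercury mass (arbitrary units) in water, soil and plant pools."""
    water, soil, plant = 0.0, 10.0, 0.0   # start with mercury-laden soil
    k_settle = 0.05    # water -> soil (settling/sorption)
    k_release = 0.001  # soil -> water (re-release)
    k_uptake = 0.02    # water -> plant (root uptake from flooded water)
    for _ in range(days):
        inflow = deposition + irrigation      # atmospheric + irrigation input
        settle = k_settle * water
        release = k_release * soil
        uptake = k_uptake * water
        water += inflow + release - settle - uptake
        soil += settle - release
        plant += uptake
    return water, soil, plant
```

Even this crude sketch reproduces the qualitative finding: raising the atmospheric deposition input increases the mercury accumulated in the plant pool by harvest, while the soil pool continues to dominate when its starting burden is high.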
Future emissions can influence rice
How will mercury concentrations in rice change in the future?
We examined a high emission scenario, which assumes no new policies to control mercury emissions by 2050, and a low emission scenario, where China uses less coal and coal-fired power plants have advanced mercury emission controls. Median Chinese rice methylmercury concentrations increased by 13 percent in the high scenario and decreased by 18 percent under the low scenario. Regions where rice methylmercury declined the most under strict policy controls were in central China, where rice production is high and rice is an important source of methylmercury exposure.
Managing mercury concentrations in rice thus requires an integrated approach, addressing both deposition and soil and water contamination. Understanding local conditions is also important: Other environmental factors not captured by our model, such as soil acidity, can also influence methylmercury production and accumulation in rice.
Different rice production strategies can also help – for example, alternating wetting and drying cycles in rice cultivation can reduce water consumption and methane emissions as well as rice methylmercury concentrations.
Our scenarios likely underestimate the potential health benefits of Minamata Convention controls in China, which is a party to the Convention. We include in our scenarios only changes in air emissions from power generation, while the Convention controls emissions from other sectors, bans mercury mining and addresses contaminated sites and land and water releases.
Reducing mercury could also be beneficial for other rice-producing countries, but at present, there are few data available outside China. However, our research suggests that the problem of mercury is not just a fish story – and that policy efforts can indeed make a difference.
When Mark Zuckerberg told Congress Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn’t particularly specific about what that meant. Given my own work using image and video analytics, I suggest the company should be careful. Despite some basic potential flaws, AI can be a useful tool for spotting online propaganda – but it can also be startlingly good at creating misleading material.
Researchers already know that online fake news spreads much more quickly and more widely than real news. My research has similarly found that online posts with fake medical information get more views, comments and likes than those with accurate medical content. In an online world where viewers have limited attention and are saturated with content choices, it often appears as though fake information is more appealing or engaging to viewers.
The problem is getting worse: By 2022, people in developed economies could be encountering more fake news than real information. This could bring about a phenomenon researchers have dubbed “reality vertigo” – in which computers can generate such convincing content that regular people may have a hard time figuring out what’s true anymore.
However, automated detection methods assume the people who spread fake news don’t change their approaches. They often shift tactics, manipulating the content of fake posts in efforts to make them look more authentic.
Context is also key. Words’ meanings can change over time. And the same word can mean different things on liberal sites and conservative ones. For example, a post with the terms “WikiLeaks” and “DNC” on a more liberal site could be more likely to be news, while on a conservative site it could refer to a particular set of conspiracy theories.
Using AI to make fake news
The biggest challenge, however, of using AI to detect fake news is that it puts technology in an arms race with itself. Machine learning systems are already proving spookily capable at creating what are being called “deepfakes” – photos and videos that realistically replace one person’s face with another, to make it appear that, for example, a celebrity was photographed in a revealing pose or a public figure is saying things he’d never actually say. Even smartphone apps are capable of this sort of substitution – which makes this technology available to just about anyone, even without Hollywood-level video editing skills.
Researchers are already preparing to use AI to identify these AI-created fakes. For example, techniques for video magnification can detect changes in human pulse that would establish whether a person in a video is real or computer-generated. But both fakers and fake-detectors will get better. Some fakes could become so sophisticated that they become very hard to rebut or dismiss – unlike earlier generations of fakes, which used simple language and made easily refuted claims.
Human intelligence is the real key
The best way to combat the spread of fake news may be to depend on people. The societal consequences of fake news – greater political polarization, increased partisanship, and eroded trust in mainstream media and government – are significant. If more people knew the stakes were that high, they might be more wary of information, particularly emotionally charged information, because appealing to emotion is an effective way to get people’s attention.
When someone sees an enraging post, that person would do better to investigate the information, rather than sharing it immediately. The act of sharing also lends credibility to a post: When other people see it, they register that it was shared by someone they know and presumably trust at least a bit, and are less likely to notice whether the original source is questionable.
Facebook could use its partnerships with news organizations and volunteers to train AI, continually tweaking the system to respond to propagandists’ changes in topics and tactics. This won’t catch every piece of news posted online, but it would make it easier for large numbers of people to tell fact from fake. That could reduce the chances that fictional and misleading stories would become popular online.
The King James Bible, often referred to as the “authorised version”, is one of the most widely read and influential books in history. Published for the first time in 1611 at the behest of King James I of England, the translation was the work of more than 40 scholars, who started from the original Hebrew and Greek texts of the Bible.
Because of the Bible’s fame, people might be surprised to hear that it is still possible to find previously unknown and unidentified sources that shed light on how it came together. In fact, the process of translation remains mysterious, and there is plenty of work left to be done on how it was carried out. This reflects the wider possibilities of research into pre-modern literature and history: there is still a huge amount to find out through archival research.
Over the past few years, I have been researching new evidence about how the Bible was translated and have identified three new items written by the King James translators in the course of their work. Before that, scholars had found only four: a copy of an earlier English translation, parts of which were apparently revised by some of the translators; an anonymous draft of part of the New Testament; a set of notes on part of the Apocrypha by the translator Samuel Ward; and notes on the New Testament by another translator, John Bois. Nothing had been added to these sources since the 1970s.
Fresh information
My work brings the total number of sources from four up to seven. But what are the three new items? The first thing that links them is that they were not accurately catalogued. The first was a printed copy of the Old Testament in the Bodleian Libraries in Oxford, heavily annotated by Bois, a linguist who was said to be able to “write Hebrew with an elegant hand” by the age of six.
But although the sub-collection this book belongs to has been in Oxford for centuries, it still has not been catalogued to modern standards.
The information available for each book is basic – in this case, the catalogue entry did not reveal that the book contained annotations, much less that they were by a well-known biblical translator. It is one of hundreds of thousands of early printed books all over the world that scholars still have to inspect in person in order to find out what they contain.
The same is true of the second and third items I found. The second was a set of handwritten letters exchanged between Bois and the celebrated French scholar, Isaac Casaubon – who had arrived in England in 1610 at the behest of James I and who also participated in the translation. These letters have been in the British Library for about two centuries, but the catalogue says nothing about them other than the names of the correspondents.
The third item was a series of notes in the Bodleian Libraries which Casaubon made after discussing various problems of translation with another translator, Andrew Downes, a professor of Greek at Cambridge University.
Similarly, the notebook containing this record of the translation has a catalogue entry, but it is patchy, imprecise and does not capture the required level of detail. Again, there are thousands of partially catalogued manuscripts all over the world that stand ready to yield secrets like these to researchers who are willing to take a punt and consult them directly.
Common language
The next factor that links all three discoveries might surprise readers who think of the King James Bible as a distinctively “English” cultural product: they were all written in Latin, and they all involved some sort of foreign as well as English input. The printed edition of the Old Testament which Bois annotated had been published in Rome, and Bois and Casaubon corresponded with each other in Latin. Casaubon’s conversations with Downes, similarly, were held and recorded in Latin, because Casaubon could not speak or write English.
Latin was the closest thing Europe had to a common language at the time, especially for its intellectual elites. Because comparatively few modern scholars of this period can read Latin, even a little knowledge of the language can open many different doors to unknown dimensions of early modern culture.
Golden age of English writing?
One thing is even more important than access to under-catalogued collections or material in unfamiliar languages, however. I needed a reason to do this research in the first place. In my case, there were two overarching motivations for my work on the King James Bible.
First, I was interested in these sorts of sources because I was interested in what the history of scholarly practices could tell us about the history of religion. Religion is often studied as though it were a matter of unquestioning faith, spiritual piety or clashes between fixed, mutually exclusive doctrines.
I wanted to show that Christian readers of the Bible in the early modern period were at the cutting edge of intellectual culture and were capable of seeing their sacred texts as historically and culturally specific documents. The sources I found illustrate how and why they could do this, as another commentator has already observed.
Second, I wanted to unpick the commonplace notion that the King James Bible, like other translations of the Bible into English from this period, was a product of a newly independent, assertive national literary culture: the culture of writers like Shakespeare, to take the most famous contemporary example. It may have come to look this way in subsequent centuries, but at the time it bore witness to constant cooperation and exchange between English and continental scholars.
The vast majority of researchers in my field are like me: they don’t enjoy the task of wading through library catalogues or reading Latin manuscripts for its own sake. The reason they do those things in the first place is to test and critique the grander narratives which we tell each other about the past.
Nicholas Hardy, Fellow in English Literature, University of Birmingham
This article was originally published on The Conversation.