Sunday, April 28, 2024

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

 

Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.

Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.

For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”

Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then leave the URLs of the ad-laden content farms they wanted users to visit in the comments on those image posts.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August 2024 – we found that they were “copypasta” captions, meaning they were repeated across posts.

Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.

Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.

It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.

In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.

But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University

This article is republished from The Conversation under a Creative Commons license.

What is ‘techno-optimism’? 2 technology scholars explain the ideology that says technology is the answer to every problem

 

When venture capitalist and techno-optimist Marc Andreessen speaks, many people listen. Steve Jennings/Getty Images for TechCrunch

Silicon Valley venture capitalist Marc Andreessen penned a 5,000-word manifesto in 2023 that gave a full-throated call for unrestricted technological progress to boost markets, broaden energy production, improve education and strengthen liberal democracy.

The billionaire, who made his fortune by co-founding Netscape – a 1990s-era company that made a pioneering web browser – espouses a concept known as “techno-optimism.” In summing it up, Andreessen writes, “We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology.”

The term techno-optimism isn’t new; it began to appear after World War II. Nor is it in a state of decline, as Andreessen and other techno-optimists such as Elon Musk would have you believe. And yet Andreessen’s essay made a big splash.

As scholars who study technology and society, we have observed that techno-optimism easily attaches itself to the public’s desire for a better future. The questions of how that future will be built, what that future will look like and who will benefit from those changes are harder to answer.

Why techno-optimism matters

Techno-optimism is a blunt tool. It suggests that technological progress can solve every problem known to humans – a belief also known as techno-solutionism.

Its adherents object to commonsense guardrails or precautions, such as cities limiting the number of new Uber drivers to ease traffic congestion or protect cab drivers’ livelihoods. They dismiss such regulations or restrictions as the concerns of Luddites – people who resist disruptive innovations.

In our view, some champions of techno-optimism, such as Bill Gates, rely on the cover of philanthropy to promote their techno-optimist causes. Others have argued that their philanthropic initiatives are essentially a public relations effort to burnish their reputations as they continue to control how technology is being used to address the world’s problems.

The stakes of embracing techno-optimism are high – and not just in terms of the role that technology plays in society. There are also political, environmental and economic ramifications for holding these views. As an ideological position, it puts the interests of certain people – often those already wielding immense power and resources – over those of everyone else. Its cheerleaders can be willfully blind to the fact that most of society’s problems, like technology, are made by humans.

Many scholars are keenly aware of the techno-optimism of social media that pervaded the 2010s. Back then, these technologies were breathlessly covered in the media – and promoted by investors and inventors – as an opportunity to connect the disconnected and bring information to anyone who might need it.

Yet, while offering superficial solutions to loneliness and other social problems, social media has failed to address their root structural causes. Those may include the erosion of public spaces, the decline of journalism and enduring digital divides.

When you play with a Meta Quest 2 all-in-one VR headset, the future may look bright. But that doesn’t mean the world’s problems are being solved. Nano Calvo/VW Pics/Universal Images Group via Getty Images

Tech alone can’t fix everything

Both of us have extensively researched economic development initiatives that seek to promote high-tech entrepreneurship in low-income communities in Ghana and the United States. State-run programs and public-private partnerships have sought to narrow digital divides and increase access to economic opportunity.

Many of these programs embrace a techno-optimistic mindset by investing in shiny, tech-heavy fixes without addressing the inequality that led to digital divides in the first place. Techno-optimism, in other words, pervades governments and nongovernmental organizations, just as it has influenced the thinking of billionaires like Andreessen.

Solving intractable problems such as persistent poverty requires a combination of solutions that sometimes, yes, includes technology. But they’re complex. To us, insisting that there’s a technological fix for every problem in the world seems not just optimistic, but also rather convenient if you happen to be among the richest people on Earth and in a position to profit from the technology industry.

The Bill &amp; Melinda Gates Foundation has provided funding for The Conversation U.S. and provides funding for The Conversation internationally.

Seyram Avle, Associate Professor of Global Digital Media, UMass Amherst and Jean Hardy, Assistant Professor of Media & Information, Michigan State University

This article is republished from The Conversation under a Creative Commons license.

Cybersecurity researchers spotlight a new ransomware threat – be careful where you upload files

 

Avoiding iffy downloads is no longer enough to ensure this doesn’t happen. Olemedia/iStock via Getty Images

You probably know better than to click on links that download unknown files onto your computer. It turns out that uploading files can get you into trouble, too.

Today’s web browsers are much more powerful than earlier generations of browsers. They’re able to manipulate data within both the browser and the computer’s local file system. Users can send and receive email, listen to music or watch a movie within a browser with the click of a button.

Unfortunately, these capabilities also mean that hackers can find clever ways to abuse the browser, tricking you into letting ransomware lock up your files while you think you’re simply doing your usual tasks online.

I’m a computer scientist who studies cybersecurity. My colleagues and I have shown how hackers can gain access to your computer’s files via the File System Access Application Programming Interface (API), which enables web applications in modern browsers to interact with the users’ local file systems.

The threat applies to Google’s Chrome and Microsoft’s Edge browsers but not Apple’s Safari or Mozilla’s Firefox. Chrome accounts for 65% of browsers used, and Edge accounts for 5%. To the best of my knowledge, there have been no reports of hackers using this method so far.

My colleagues, who include a Google security researcher, and I have communicated with the developers responsible for the File System Access API, and they have expressed support for our work and interest in our approaches to defending against this kind of attack. We also filed a security report to Microsoft but have not heard from them.

Double-edged sword

Today’s browsers are almost operating systems unto themselves. They can run software programs and encrypt files. These capabilities, combined with the browser’s access to the host computer’s files – including ones in the cloud, shared folders and external drives – via the File System Access API, create a new opportunity for ransomware.

Imagine you want to edit photos with a benign-looking, free online photo editing tool. When you upload the photos for editing, the hackers who control the malicious editing tool can access the files on your computer via your browser. They would gain access to the folder you uploaded from and all of its subfolders. The hackers could then encrypt the files in your file system and demand a ransom payment to decrypt them.
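The breadth of that access can be illustrated with a small simulation. This is a hypothetical sketch, not code from the study: the real File System Access API hands a page a directory handle (for example, via `showDirectoryPicker()`) rather than a plain object, but the recursive reach is the same.

```javascript
// Hypothetical sketch: why granting a web page access to one folder
// exposes everything beneath it. A plain object stands in for the
// directory handle the real File System Access API would return.
const photosDir = {
  name: "Photos",
  files: ["beach.jpg", "dog.jpg"],
  subdirs: [
    { name: "2023", files: ["tax-return.pdf"], subdirs: [] },
    { name: "2024", files: ["passport-scan.png"], subdirs: [] },
  ],
};

// Walk the granted directory and every subdirectory, collecting paths --
// exactly what a malicious "photo editor" could do before encrypting.
function reachableFiles(dir, prefix = "") {
  const base = prefix + dir.name + "/";
  let files = dir.files.map((f) => base + f);
  for (const sub of dir.subdirs) {
    files = files.concat(reachableFiles(sub, base));
  }
  return files;
}

console.log(reachableFiles(photosDir));
// A single permission grant reaches all four files, including documents
// the user never meant to share with a photo editor.
```

The point of the sketch is that the permission is granted once, at the folder level, while the files it exposes can be arbitrarily many.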

Today’s web browsers are more powerful – and in some ways more vulnerable – than their predecessors.

Ransomware is a growing problem. Attacks have hit individuals as well as organizations, including Fortune 500 companies, banks, cloud service providers, cruise operators, threat-monitoring services, chip manufacturers, governments, medical centers and hospitals, insurance companies, schools, universities and even police departments. In 2023, organizations paid more than US$1.1 billion in ransomware payments to attackers, and 19 ransomware attacks targeted organizations every second.

It is no wonder ransomware is the No. 1 arms race today between hackers and security specialists. Traditional ransomware runs on your computer after hackers have tricked you into downloading it.

New defenses for a new threat

A team of researchers I lead at the Cyber-Physical Systems Security Lab at Florida International University, including postdoctoral researcher Abbas Acar and Ph.D. candidate Harun Oz, in collaboration with Google Senior Research Scientist Güliz Seray Tuncay, have been investigating this new type of potential ransomware for the past two years. Specifically, we have been exploring how powerful modern web browsers have become and how they can be weaponized by hackers to create novel forms of ransomware.

In our paper, RøB: Ransomware over Modern Web Browsers, which was presented at the USENIX Security Symposium in August 2023, we showed how this emerging ransomware strain is easy to design and how damaging it can be. In particular, we designed and implemented the first browser-based ransomware called RøB and analyzed its use with browsers running on three different major operating systems – Windows, Linux and MacOS – five cloud providers and five antivirus products.

Our evaluations showed that RøB is capable of encrypting numerous types of files. Because RøB runs within the browser, there is no malicious payload for a traditional antivirus program to catch. This means existing ransomware detection systems struggle against this powerful browser-based ransomware.

We proposed three different defense approaches to mitigate this new ransomware type. These approaches operate at different levels – browser, file system and user – and complement one another.

The first approach temporarily halts a web application – a program that runs in the browser – in order to detect encrypted user files. The second approach monitors the activity of the web application on the user’s computer to identify ransomware-like patterns. The third approach introduces a new permission dialog box to inform users about the risks and implications associated with allowing web applications to access their computer’s file system.

When it comes to protecting your computer, be careful about where you upload as well as download files. Your uploads could be giving hackers an “in” to your computer.

Selcuk Uluagac, Professor of Computing and Information Science, Florida International University

This article is republished from The Conversation under a Creative Commons license.

Tuesday, April 23, 2024

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

 

AI chatbots restrict their output according to vague and broad policies. taviox/iStock via Getty Images

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, serving as executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid refusing to generate content altogether unless there are solid public interest grounds for doing so, such as preventing child sexual abuse material, which laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content, or into echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license.

From sumptuous engravings to stick-figure sketches, Passover Haggadahs − and their art − have been evolving for centuries

 

Haggadah shel Pesah, translated by Sonia Gronemann and illustrated by Otto Geismar. Made in Berlin, 1927. Isser and Rae Price Library of Judaica, CC BY-ND

The Jewish festival of Passover recalls the biblical story of the Israelites enslaved by Egypt and their miraculous escape. During a ritual feast known as a Seder, families celebrate this ancient story of deliverance, with each new generation reminded to never take freedom for granted.

Every year, a written guide known as a “Haggadah” is read at the Seder table. The core text comprises a description of ritual foods, the story of the Exodus, blessings, commentaries, hymns and songs. The word Haggadah – “telling,” in Hebrew – was derived from Exodus 13:8, a verse which instructed the Israelites to commemorate their liberation and tell the story to their children.

Even though the ancient festival that became Passover has been celebrated since the biblical period, the complete text of the Haggadah emerged only in the eighth to ninth centuries. And it was not until the 14th century that fully developed, sumptuously illuminated versions appeared, used by the Jewish communities of Germany, Italy and Spain. Medieval editors integrated decorative borders, such as fantastical, beastlike creatures borrowed from the wider culture.

This artistic license, together with slight modifications to the text over time, meant that Haggadahs became both mirrors of and commentaries on the societies in which they were produced. Here at the University of Florida’s Price Library of Judaica, where I am curator and a medieval Hebrew scholar, we have hundreds of Haggadahs – each one a window into how Jews in a particular time and place adapted the telling of the Passover story.

An illustrated classic

One of the greatest examples our library has of this blending of cultures was printed in Amsterdam in 1695.

The Amsterdam Haggadah’s illustrations set a precedent for centuries. Isser and Rae Price Library of Judaica, CC BY-ND

The Amsterdam Haggadah was illustrated by Abraham Bar Yaakov, a German pastor who converted to Judaism. Abandoning the standard use of woodcut images, Bar Yaakov created a series of copper engravings based on Bible illustrations by the Swiss engraver Matthäus Merian the Elder. In addition, he incorporated a pull-out map of the route of the Exodus and an imaginative rendering of the Temple in Jerusalem.

Bar Yaakov also added an image of the “four sons” standing together – one of the many elements of Haggadahs designed to engage and instruct children sitting through the long Seder meal. Each son represents a different type of child, described by their attitude toward Passover: wise, wicked, silent and one who does not even know how to ask questions about the holiday.

In medieval Haggadahs, the wicked son was usually portrayed as a combatant – the personification of evil for European Jews who had suffered recurrent mob raids and violent expulsions. In Bar Yaakov’s rendering, the wicked son is a Roman soldier precariously balanced on one foot and looking back toward the wise son, who is depicted as Hannibal, the Carthaginian general who battled Rome in the third century B.C.E.

The second edition of this Haggadah was printed with additional engravings in 1712 by Solomon Proops, founder of an acclaimed Dutch Jewish printing house. The text, traditionally written in Hebrew and Aramaic, included instructions in Yiddish and Ladino, the everyday languages for Jews in Europe. The Ladino translations were specifically geared toward Sephardi Jews who arrived in the Netherlands after being expelled from Spain and Portugal, as well as Portuguese “Conversos” returning to Judaism after their ancestors had been forced to convert to Catholicism.

The Amsterdam Haggadah proved to be incredibly influential on later versions, with its illustrations copied into the modern era.

A Haggadah for everyone

By the 20th century, Haggadahs had been adapted and translated to meet the needs of diverse Jewish communities around the world, including various religious denominations – Reform, Conservative, Orthodox – or political, social and labor groups, such as Zionists or socialists. The Haggadah’s key theme of freedom from oppression was tailored to address contemporary situations and viewpoints.

Modern Haggadah illustrations also reflected developments in the art world. In 1920s Berlin, a Jewish art teacher, Otto Geismar, reinterpreted the story of the Exodus using plain, black-and-white, modernist “stick figures” – another Haggadah in our collection.

The stick-figure designs especially appealed to children, whose interest might wane over the course of the meal. Isser and Rae Price Library of Judaica, CC BY-ND

Despite their minimal lines, the figures are all expressive. Geismar even injected elements of humor: A child is shown asleep at the table, and in another scene a family of stick figures is engaged in animated conversation and debate. In his depictions of ancient Israelite slaves, stick figures appear especially burdened with heavy loads on their backs. He also divided the Hebrew text into more easily readable sections using eye-catching, black-and-white decorative borders.

The striking simplicity of the design, aimed primarily at children, gained great popularity, and his work was reprinted in multiple German and Dutch editions.

Wine – and coffee

There was growing demand for different printed versions, as Jews around the world adapted the traditional Haggadah. Meanwhile, some suppliers sensed an opportunity to adapt it for their own needs. Thus rose a phenomenon known as the commercial Haggadah: the product of astute companies realizing the power of advertising their wares in a book dedicated to the art of “telling.”

The most famous of these is the Maxwell House Haggadah from 1932, which was given away free with every can of coffee purchased.

In 1938, the Schapiro House of Kosher Wines followed suit. The company, whose flagship store was on New York’s Lower East Side, produced a Haggadah with an English translation and illustrations borrowed from the Amsterdam Haggadah. Owner Sam Schapiro savvily linked his products to the Seder, during which participants drink four small cups of sacramental wine. Wine, seen at this point as a luxury item, also symbolized freedom.

A yellowed advertisement for Schapiro's Kosher Wines, with a black and white photo of men pouring grapes into a barrel.
Where better to advertise wine than in a text that calls for drinking several glasses of it? Isser and Rae Price Library of Judaica, CC BY-ND

Just in case there were any doubts about advertising alcohol a mere five years after Prohibition, when sacramental wine was difficult to access, Schapiro’s Haggadah made the case for wine’s “health values.” Across two pages at the back of the book, the editor describes in English and Yiddish the supposed efficacy of wine against a host of maladies, including typhoid fever, depression and even obesity.

Schapiro’s Haggadah fulfilled the commandment to relate the story of the Exodus for a new generation – but the opening pages also provide a tribute in Yiddish to Sam Schapiro’s 40-year-old company. Here Schapiro’s is praised for being the place where religious men and intellectuals alike could get together over a good glass of wine.

Commercial Haggadahs were not expected to become venerable family heirlooms. Rather, they provided a handy, affordable way for Jewish families of lesser means to participate in the annual ritual of coming forth out of bondage – another expression of freedom.

Rebecca J.W. Jefferson, Head of the Isser and Rae Price Library of Judaica, University of Florida

This article is republished from The Conversation under a Creative Commons license.

‘The former guy’ versus ‘Sleepy Joe’ – why Biden and Trump are loath to utter each other’s name

 

President Joe Biden referred to Donald Trump as ‘my predecessor’ 13 times during the 2024 State of the Union. Matt McClain/The Washington Post via Getty Images

During his 2024 State of the Union Address, President Joe Biden mentioned his presumptive challenger, Donald Trump, 15 times – but never once by name.

Instead, Biden referred to him as “my predecessor” 13 times. He also called him a “former Republican president” and a “former American president.”

These weren’t mistakes or memory lapses – the circumlocutions appeared in the president’s prepared remarks provided by the White House.

Instead, Biden was employing a rhetorical tactic in which politicians do everything except use their opponent’s actual name. In doing so, they subtly deprive their opposition of equal standing or legitimacy.

‘He who must not be named’

Biden’s predilection for avoiding Trump’s name is an example of what political activist Maajid Nawaz dubbed the “Voldemort effect.”

Nawaz recycled the term from J.K. Rowling’s Harry Potter universe, in which wizards employ phrases like “you know who” and “he who must not be named” to refer to Lord Voldemort.

The Voldemort effect is just another name for a cardinal principle of advertising: never mention your competitor by name. Doing so grants one’s rivals a certain degree of exposure and legitimacy.

One study of this phenomenon found that televised advertisements include comparisons between products half the time. However, only about 5% actually mention the advertiser’s competitor by name.

So when Biden calls Trump “my predecessor” or “the former guy” – as he did during a 2021 town hall – he’s avoiding recognizing his rival as a peer and an equal.

The illusory truth effect

Trump, on the other hand, makes use of a different strategy to diminish his political opponents: his infamous nicknames.

Politicians on both sides of the aisle have received ignominious monikers.

Trump branded Jeb Bush as “Low Energy Jeb,” Ted Cruz as “Lyin’ Ted” and Mitch McConnell as “Broken Old Crow.” Adam Schiff became “Pencil Neck,” Biden was christened “Sleepy Joe,” and Mike Bloomberg was derided as “Mini Mike.”

By employing nicknames – and repeating them ad nauseam – Trump makes use of a phenomenon called the illusory truth effect, in which repeated information comes to be accepted as fact, no matter its truthfulness.

In daily life, we often need to quickly distinguish between truths and falsehoods. And if we’ve repeatedly seen or heard something, we can typically recall it more easily. Since accurate information is typically encountered more frequently than the occasional fabrication, this rule of thumb is a useful one.

But politicians can exploit illusory truth by repeatedly branding someone a liar, a danger or, as Trump is wont to do, “crooked.” And Biden has taken a page from Trump’s playbook by branding the Republicans as the “MAGA Republican Party.”

Othering in action

Trump also employs a different strategy to demean his political opponents: othering.

During his 2016 presidential campaign, Trump made a point of emphasizing Barack Obama’s middle name, Hussein, to link him to the former Iraqi dictator Saddam Hussein.

He often mispronounces the first name of Vice President Kamala Harris, and during the 2024 Republican primaries, Trump took to referring to Nikki Haley as “Nimbra,” a corruption of her Punjabi first name, Nimarata.

By drawing attention to the seemingly exotic names of Obama, Harris and Haley, Trump casts them as foreigners, tapping into the xenophobia that animates some of his supporters.

Dale Carnegie, author of “How to Win Friends and Influence People,” wrote that “a person’s name is to that person the sweetest and most important sound in any language.”

Political campaigns, however, are anything but sweet, and voters will likely endure more circumlocutions and derogatory nicknames in the coming months as the battle between “the former guy” and “Sleepy Joe” heats up.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license.

What you eat could alter your unborn children and grandchildren’s genes and health outcomes

 

The relatively new discipline of epigenetics explores how diet and nutrition can affect not only our own health but that of future generations. Drazen Zigic/iStock via Getty Images Plus

Within the last century, researchers’ understanding of genetics has undergone a profound transformation.

Genes, regions of DNA that are largely responsible for our physical characteristics, were considered unchanging under the original model of genetics pioneered by biologist Gregor Mendel in 1865. That is, genes were thought to be largely unaffected by a person’s environment.

The emergence of the field of epigenetics in 1942 shattered this notion.

Epigenetics refers to shifts in gene expression that occur without changes to the DNA sequence. Some epigenetic changes are an aspect of cell function, such as those associated with aging.

However, environmental factors also affect the functions of genes, meaning people’s behaviors affect their genetics. For instance, identical twins develop from a single fertilized egg and therefore share the same genetic makeup. As the twins age, however, their appearances may differ due to distinct environmental exposures. One twin may eat a healthy, balanced diet, whereas the other may eat an unhealthy one, resulting in differences in the expression of genes that play a role in obesity and helping the first twin maintain a lower body fat percentage.

People don’t have much control over some of these factors, such as air quality. Other factors, though, are more in a person’s control: physical activity, smoking, stress, drug use and exposure to pollution, such as that coming from plastics, pesticides and burning fossil fuels, including car exhaust.

Another factor is nutrition, which has given rise to the subfield of nutritional epigenetics. This discipline is concerned with the notions that “you are what you eat” – and “you are what your grandmother ate.” In short, nutritional epigenetics is the study of how your diet, and the diet of your parents and grandparents, affects your genes. Because the dietary choices a person makes today affect the genes of their future children, epigenetics may provide motivation for making better dietary choices.

Two of us work in the epigenetics field. The other studies how diet and lifestyle choices can help keep people healthy. Our research team is made up of fathers, so our work in this field only deepens our already intimate familiarity with the transformative power of parenthood.

Does “obesity beget obesity”?

A story of famine

The roots of nutritional epigenetics research can be traced back to a poignant chapter in history – the Dutch Hunger Winter in the final stages of World War II.

During the Nazi occupation of the Netherlands, the population was forced to live on rations of 400 to 800 kilocalories per day, a far cry from the typical 2,000-kilocalorie diet used as a standard by the Food and Drug Administration. As a result, some 20,000 people died and 4.5 million were malnourished.

Studies found that the famine caused epigenetic changes to a gene called IGF2 that is related to growth and development. Those changes suppressed muscle growth in both the children and grandchildren of pregnant women who endured the famine. For these subsequent generations, that suppression led to an increased risk of obesity, heart disease, diabetes and low birth weight.

These findings marked a pivotal moment in epigenetics research – and clearly demonstrated that environmental factors, such as famine, can lead to epigenetic changes in offspring that may have serious implications for their health.

The role of the mother’s diet

Until this groundbreaking work, most researchers believed epigenetic changes couldn’t be passed down from one generation to the next. Rather, researchers thought epigenetic changes could occur with early-life exposures, such as during gestation – a highly vulnerable period of development. So initial nutritional epigenetic research focused on dietary intake during pregnancy.

The findings from the Dutch Hunger Winter were later supported by animal studies, in which researchers can control how animals are bred, helping to rule out background variables. Another advantage is that the rats and sheep used in these studies reproduce more quickly than people, allowing for faster results. In addition, researchers can fully control the animals’ diets throughout their entire lifespan, so specific aspects of diet can be manipulated and examined. Together, these factors allow researchers to investigate epigenetic changes more thoroughly in animals than in people.

In one study, researchers exposed pregnant female rats to a commonly used fungicide called vinclozolin. In response to this exposure, the first generation born showed decreased ability to produce sperm, leading to increased male infertility. Critically, these effects, like those of the famine, were passed to subsequent generations.

As monumental as these works are for shaping nutritional epigenetics, they neglected other periods of development and completely ignored the role of fathers in the epigenetic legacy of their offspring. However, a more recent study in sheep showed that a paternal diet supplemented with the amino acid methionine from birth to weaning affected the growth and reproductive traits of the next three generations. Methionine is an essential amino acid involved in DNA methylation, an example of an epigenetic change.

The human body holds approximately 20,000 genes.

Healthy choices for generations to come

These studies underscore the enduring impact parents’ diets have on their children and grandchildren. They also serve as a powerful motivator for would-be and current parents to make healthier dietary choices, since the choices parents make shape their children’s diets as well.

Meeting with a nutrition professional, such as a registered dietitian, can provide evidence-based recommendations for making practical dietary changes for individuals and families.

There are still many unknowns about how diet affects and influences our genes. What research is starting to show about nutritional epigenetics is a powerful and compelling reason to consider making lifestyle changes.

There are many things researchers already know about the Western diet, which is what many Americans eat. A Western diet is high in saturated fats, sodium and added sugar but low in fiber; not surprisingly, it is associated with negative health outcomes such as obesity, type 2 diabetes, cardiovascular disease and some cancers.

A good place to start is to eat more whole, unprocessed foods, particularly fruits, vegetables and whole grains, and fewer processed or convenience foods – that includes fast food, chips, cookies and candy, ready-to-cook meals, frozen pizzas, canned soups and sweetened beverages.

These dietary changes are well known for their health benefits and are described in the 2020-2025 Dietary Guidelines for Americans and by the American Heart Association.

Many people find it difficult to embrace a lifestyle change, particularly when it involves food. Motivation is a key factor for making these changes. Luckily, this is where family and friends can help – they exert a profound influence on lifestyle decisions.

However, on a broader, societal level, food security – meaning people’s ability to access and afford healthy food – should be a critical priority for governments, food producers and distributors, and nonprofit groups. Lack of food security is associated with epigenetic changes that have been linked to negative health outcomes such as diabetes, obesity and depression.

Through relatively simple lifestyle modifications, people can significantly and measurably influence the genes of their children and grandchildren. So when you pass up a bag of chips – and choose fruit or a veggie instead – keep in mind: It’s not just for you, but for the generations to come.

Nathaniel Johnson, Assistant Professor of Nutrition and Dietetics, University of North Dakota; Hasan Khatib, Associate Chair and Professor of Genetics and Epigenetics, University of Wisconsin-Madison, and Thomas D. Crenshaw, Professor of Animal and Dairy Sciences, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license.

Monday, April 22, 2024

Merriam-Webster’s word of the year – authentic – reflects growing concerns over AI’s ability to deceive and dehumanize

 

According to the publisher’s editor-at-large, 2023 represented ‘a kind of crisis of authenticity.’ lambada/E+ via Getty Images

When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.

Even then, the dictionary publisher was late to the game.

In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on Oct. 31. Cambridge University Press followed suit on Nov. 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.

At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.

AI and the authenticity crisis

For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the U.S., has chosen a word of the year – a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”

“Authentic” is, at first glance, a little less obvious.

According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.

Print ad with a drawing of a thick book accompanied by the text, 'The One Great Standard Authority.'
A 1906 print ad for Webster’s International Dictionary advertised itself as an authoritative clearinghouse for all things English – an authentic, reliable source. Jay Paull/Getty Images

The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.

And yet the concept – particularly as it applies to human creations and human behavior – is slippery.

Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic – or just plain rude – when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?

The more you think about it, the more it seems like an ever-elusive ideal – one further complicated by advances in artificial intelligence.

How much human touch?

Intelligence of the artificial variety – as in nonhuman, inauthentic, computer-generated intelligence – was the technology story of the past year.

At the end of 2022, OpenAI publicly released ChatGPT, a chatbot built on its GPT-3.5 large language model. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.

The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.

Issues of authenticity have arisen in other areas as well. In November 2023, a track described as the “last Beatles song” was released. “Now and Then” is a compilation of music originally written and performed by John Lennon in the 1970s, with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently employed to separate Lennon’s vocals from his piano accompaniment, and this allowed a final version to be released.

But is it an authentic “Beatles” song? Not everyone is convinced.

Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not – a troubling prospect as the U.S. heads into what is sure to be a contentious 2024 election season.

Writing for The Conversation in May 2023, education scholar Victor R. Lee explored the AI-fueled authenticity crisis.

Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.

“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.

“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”

Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.

So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license.

3 things to learn about patience − and impatience − from al-Ghazali, a medieval Islamic scholar

 

Al-Ghazali’s book ‘Alchemy of Happiness,’ held in the Bibliothèque nationale de France. Al-Ghazali - Bibliothèque nationale de France via Wikimedia Commons

From childhood, we are told that patience is a virtue and that good things will come to those who wait. And, so, many of us work on cultivating patience.

This often starts by learning to wait for a turn with a coveted toy. As adults, it becomes trying to remain patient with long lines at the Department of Motor Vehicles, misbehaving kids or the slow pace of political change. This hard work can have mental health benefits. It is even correlated with per capita income and productivity.

But it is also about trying to become a good person.

It’s clear to me, as a scholar of religious ethics, that patience is a term many of us use, but we all could benefit from understanding its meaning a little better.

In religious traditions, patience is more than waiting, or even more than enduring a hardship. But what is that “more,” and how does being patient make us better people?

The writings of medieval Islamic thinker Abu Hamid al-Ghazali can give us insights or help us understand why we need to practice patience – and also when not to be patient.

Who was al-Ghazali?

Born in Iran in 1058, al-Ghazali was widely respected as a jurist, philosopher and theologian. He traveled as far as Baghdad and Jerusalem to defend Islam and argued there was no contradiction between reason and revelation. More specifically, he was well known for reconciling Aristotle’s philosophy, which he likely read in Arabic translation, with Islamic theology.

Al-Ghazali was a prolific writer, and one of his most important works – “Revival of the Religious Sciences,” or the “Iḥyāʾ ʿulūm al-dīn” – provides a practical guide for living an ethical Muslim life.

This work is composed of 40 volumes in total, divided into four parts of 10 books each. Part 1 deals with Islamic rituals; Part 2, local customs; Part 3, vices to be avoided; and Part 4, virtues one should strive for. Al-Ghazali’s discussion of patience comes in Volume 32 of Part 4, “On Patience and Thankfulness,” or the “Kitāb al-sabr waʾl-shukr.”

He describes patience as a fundamental human characteristic that is crucial to achieving value-driven goals, and he provides a caveat for when impatience is called for.

1. What is patience?

Humans, according to al-Ghazali, have competing impulses: the impulse of religion, or “bāʿith al-dīn,” and the impulse of desire, or “bāʿith al-hawā.”

Life is a struggle between these two impulses, which he describes with the metaphor of a battle: “Support for the religious impulse comes from the angels reinforcing the troops of God, while support for the impulse of desire comes from the devils reinforcing the enemies of God.”

A black and white sketch of a man wearing a headdress and a loose garment.
Muslim scholar Abū Ḥāmid Muḥammad ibn Muḥammad al-Ghazālī. From the cover illustration of 'The Confessions of Al-Ghazali,' via Wikimedia Commons

The amount of patience we have is what decides who wins the battle. As al-Ghazali puts it, “If a man remains steadfast until the religious impulse conquers … then the troops of God are victorious and he joins the troops of the patient. But if he slackens and weakens until appetite overcomes him … he joins the followers of the devils.” In other words, for al-Ghazali, patience is the deciding factor of whether we are living up to our full human potential to live ethically.

2. Patience, values and goals

Patience is also necessary for being a good Muslim, in al-Ghazali’s view. But his understanding of how patience works rests on a theory of ethics and can be applied outside of his explicitly Islamic worldview.

It all starts with commitments to core values. For a Muslim like al-Ghazali, those values are informed by the Islamic tradition and community, or “umma,” and include things like justice and mercy. These specific values might be universally applicable. Or you might also have another set of values that are important to you. Perhaps a commitment to social justice, or being a good friend, or not lying.

Living in a way that is consistent with these core values is what the moral life is all about. And patience, according to al-Ghazali, is how we consistently make sure our actions serve this purpose.

That means patience is not just enduring the pain of a toddler’s temper tantrum. It is enduring that pain with a goal in mind. The successful application of patience is measured not by how much pain we endure but by our progress toward a specific goal, such as raising a healthy and happy child who can eventually regulate their emotions.

In al-Ghazali’s understanding of patience, we all need it in order to remain committed to our core principles and ideas when things aren’t going our way.

3. When impatience is called for

One critique of the idea of patience is that it can lead to inaction or be used to silence justified complaints. For instance, scholar of Africana studies Julius Fleming argues in his book “Black Patience” for the importance of a “radical refusal to wait” under conditions of systemic racism. Certainly, there are forms of injustice and suffering in the world that we should not calmly endure.

Despite his commitment to the importance of patience to a moral life, al-Ghazali makes room for impatience as well. He writes, “One is forbidden to be patient with harm (that is) forbidden; for example, to have one’s hand cut off or to witness the cutting off of the hand of a son and to remain silent.”

These are examples of harms to oneself or to loved ones. But could the necessity for impatience be extended to social harms, such as systemic racism or poverty? As Quranic studies scholars Ahmad Ismail and Ahmad Solahuddin have argued, true patience sometimes necessitates action.

As al-Ghazali writes, “Just because patience is half of faith, do not imagine that it is all commendable; what is intended are specific kinds of patience.”

To sum up, not all patience is good; only patience in service of righteous goals is key to the ethical life. The question of which goals are righteous is one we must all answer for ourselves.

Liz Bucar, Professor of Philosophy and Religion, Northeastern University

This article is republished from The Conversation under a Creative Commons license.