Sunday, April 28, 2024

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.

Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.

For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”

Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then leave comments on those image posts containing the URLs of the ad-laden content farms they wanted users to click.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and slated to shut down in August 2024 – we found that they were “copypasta” captions, meaning they were repeated across posts.

Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
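
For readers who want to see what “copypasta” detection amounts to in practice, here is a minimal sketch – our own illustration, not the study’s tooling; the Post shape, the minPages threshold and all names are assumptions for the example:

```typescript
// Minimal sketch: group posts by normalized caption and keep captions
// that recur across several distinct pages. All names here are
// illustrative assumptions, not the study's actual code.
interface Post {
  pageId: string;
  caption: string;
}

function findCopypasta(posts: Post[], minPages = 3): Map<string, Set<string>> {
  const pagesByCaption = new Map<string, Set<string>>();
  for (const post of posts) {
    // Normalize so trivial whitespace and case differences still match.
    const key = post.caption.trim().toLowerCase().replace(/\s+/g, " ");
    if (!pagesByCaption.has(key)) pagesByCaption.set(key, new Set());
    pagesByCaption.get(key)!.add(post.pageId);
  }
  // A caption reused by many unrelated pages is a copypasta candidate.
  return new Map([...pagesByCaption].filter(([, pages]) => pages.size >= minPages));
}
```

Run over a scrape of page posts, a filter like this would surface bait such as the “rate my painting” caption whenever the same text shows up on many unrelated pages.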

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.

Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.

The reports showed that the proportion of content users saw from pages and people they don’t follow increased steadily between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users posting infographics to warn others about scam or spam AI content.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.

In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.

But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University

This article is republished from The Conversation under a Creative Commons license.

What is ‘techno-optimism’? 2 technology scholars explain the ideology that says technology is the answer to every problem

When venture capitalist and techno-optimist Marc Andreessen speaks, many people listen. Steve Jennings/Getty Images for TechCrunch

Silicon Valley venture capitalist Marc Andreessen penned a 5,000-word manifesto in 2023 that gave a full-throated call for unrestricted technological progress to boost markets, broaden energy production, improve education and strengthen liberal democracy.

The billionaire, who made his fortune by co-founding Netscape – a 1990s-era company that made a pioneering web browser – espouses a concept known as “techno-optimism.” In summing it up, Andreessen writes, “We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology.”

The term techno-optimism isn’t new; it began to appear after World War II. Nor is it in a state of decline, as Andreessen and other techno-optimists such as Elon Musk would have you believe. And yet Andreessen’s essay made a big splash.

As scholars who study technology and society, we have observed that techno-optimism easily attaches itself to the public’s desire for a better future. The questions of how that future will be built, what that future will look like and who will benefit from those changes are harder to answer.

Why techno-optimism matters

Techno-optimism is a blunt tool. It suggests that technological progress can solve every problem known to humans – a belief also known as techno-solutionism.

Its adherents object to commonsense guardrails or precautions, such as cities limiting the number of new Uber drivers to ease traffic congestion or protect cab drivers’ livelihoods. They dismiss such regulations or restrictions as the concerns of Luddites – people who resist disruptive innovations.

In our view, some champions of techno-optimism, such as Bill Gates, rely on the cover of philanthropy to promote their techno-optimist causes. Others have argued that their philanthropic initiatives are essentially a public relations effort to burnish their reputations as they continue to control how technology is being used to address the world’s problems.

The stakes of embracing techno-optimism are high – and not just in terms of the role that technology plays in society. There are also political, environmental and economic ramifications for holding these views. As an ideological position, it puts the interests of certain people – often those already wielding immense power and resources – over those of everyone else. Its cheerleaders can be willfully blind to the fact that most of society’s problems, like technology, are made by humans.

Many scholars are keenly aware of the techno-optimism of social media that pervaded the 2010s. Back then, these technologies were breathlessly covered in the media – and promoted by investors and inventors – as an opportunity to connect the disconnected and bring information to anyone who might need it.

Yet, while offering superficial solutions to loneliness and other social problems, social media has failed to address their root structural causes. Those may include the erosion of public spaces, the decline of journalism and enduring digital divides.

When you play with a Meta Quest 2 all-in-one VR headset, the future may look bright. But that doesn’t mean the world’s problems are being solved. Nano Calvo/VW Pics/Universal Images Group via Getty Images

Tech alone can’t fix everything

Both of us have extensively researched economic development initiatives that seek to promote high-tech entrepreneurship in low-income communities in Ghana and the United States. State-run programs and public-private partnerships have sought to narrow digital divides and increase access to economic opportunity.

Many of these programs embrace a techno-optimistic mindset by investing in shiny, tech-heavy fixes without addressing the inequality that led to digital divides in the first place. Techno-optimism, in other words, pervades governments and nongovernmental organizations, just as it has influenced the thinking of billionaires like Andreessen.

Solving intractable problems such as persistent poverty requires a combination of solutions that sometimes, yes, includes technology. But they’re complex. To us, insisting that there’s a technological fix for every problem in the world seems not just optimistic, but also rather convenient if you happen to be among the richest people on Earth and in a position to profit from the technology industry.

The Bill & Melinda Gates Foundation has provided funding for The Conversation U.S. and provides funding for The Conversation internationally.

Seyram Avle, Associate Professor of Global Digital Media, UMass Amherst and Jean Hardy, Assistant Professor of Media & Information, Michigan State University

This article is republished from The Conversation under a Creative Commons license.

Cybersecurity researchers spotlight a new ransomware threat – be careful where you upload files

Avoiding iffy downloads is no longer enough to ensure this doesn’t happen. Olemedia/iStock via Getty Images

You probably know better than to click on links that download unknown files onto your computer. It turns out that uploading files can get you into trouble, too.

Today’s web browsers are much more powerful than earlier generations of browsers. They’re able to manipulate data within both the browser and the computer’s local file system. Users can send and receive email, listen to music or watch a movie within a browser with the click of a button.

Unfortunately, these capabilities also mean that hackers can find clever ways to abuse browsers to trick you into letting ransomware lock up your files while you think you’re simply doing your usual tasks online.

I’m a computer scientist who studies cybersecurity. My colleagues and I have shown how hackers can gain access to your computer’s files via the File System Access Application Programming Interface (API), which enables web applications in modern browsers to interact with the users’ local file systems.

The threat applies to Google’s Chrome and Microsoft’s Edge browsers but not Apple’s Safari or Mozilla’s Firefox. Chrome accounts for 65% of browsers used, and Edge accounts for 5%. To the best of my knowledge, there have been no reports of hackers using this method so far.

My colleagues, who include a Google security researcher, and I have communicated with the developers responsible for the File System Access API, and they have expressed support for our work and interest in our approaches to defending against this kind of attack. We also filed a security report with Microsoft but have not heard back.

Double-edged sword

Today’s browsers are almost operating systems unto themselves. They can run software programs and encrypt files. These capabilities, combined with the browser’s access to the host computer’s files – including ones in the cloud, shared folders and external drives – via the File System Access API, create a new opportunity for ransomware.

Imagine you want to edit photos with a benign-looking free online photo editing tool. When you upload the photos for editing, hackers who control a malicious editing tool can access the files on your computer via your browser – not just the photos, but the entire folder you uploaded from and all of its subfolders. They could then encrypt the files in your file system and demand a ransom payment to decrypt them.
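
To see why a single “upload” can expose so much, here is a condensed TypeScript sketch of the capability itself – our illustration, not code from the attack or the paper. It uses the real showDirectoryPicker API (Chrome and Edge; the TypeScript types come from the @types/wicg-file-system-access package) and only logs what it could touch; a malicious page would overwrite those files instead:

```typescript
// Condensed sketch of the capability at issue, not code from the paper.
// One "readwrite" directory grant lets a page read and rewrite every file
// in the chosen folder and its subfolders.
async function demonstrateAccess(): Promise<void> {
  // The user believes they are picking a folder of photos to "upload."
  const root = await window.showDirectoryPicker({ mode: "readwrite" });

  // Recursively walk the entire granted directory tree.
  async function* walk(dir: FileSystemDirectoryHandle): AsyncGenerator<FileSystemFileHandle> {
    for await (const entry of dir.values()) {
      if (entry.kind === "file") yield entry;
      else yield* walk(entry);
    }
  }

  for await (const handle of walk(root)) {
    const file = await handle.getFile();
    console.log(`page can read and rewrite: ${file.name} (${file.size} bytes)`);
    // A malicious page would instead open handle.createWritable() here and
    // overwrite the contents with ciphertext; that is the ransomware step.
  }
}
```

A page can only invoke the picker from a user gesture, such as a button click, which is exactly why this kind of attack dresses itself up as an ordinary upload flow.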

Today’s web browsers are more powerful – and in some ways more vulnerable – than their predecessors.

Ransomware is a growing problem. Attacks have hit individuals as well as organizations, including Fortune 500 companies, banks, cloud service providers, cruise operators, threat-monitoring services, chip manufacturers, governments, medical centers and hospitals, insurance companies, schools, universities and even police departments. In 2023, organizations paid more than US$1.1 billion in ransomware payments to attackers, and 19 ransomware attacks targeted organizations every second.

It is no wonder ransomware is the No. 1 arms race today between hackers and security specialists. Traditional ransomware runs on your computer after hackers have tricked you into downloading it.

New defenses for a new threat

A team of researchers I lead at the Cyber-Physical Systems Security Lab at Florida International University, including postdoctoral researcher Abbas Acar and Ph.D. candidate Harun Oz, in collaboration with Google Senior Research Scientist Güliz Seray Tuncay, have been investigating this new type of potential ransomware for the past two years. Specifically, we have been exploring how powerful modern web browsers have become and how they can be weaponized by hackers to create novel forms of ransomware.

In our paper, RøB: Ransomware over Modern Web Browsers, which was presented at the USENIX Security Symposium in August 2023, we showed how this emerging ransomware strain is easy to design and how damaging it can be. In particular, we designed and implemented the first browser-based ransomware called RøB and analyzed its use with browsers running on three different major operating systems – Windows, Linux and MacOS – five cloud providers and five antivirus products.

Our evaluations showed that RøB is capable of encrypting numerous types of files. Because RøB runs within the browser, there is no malicious payload for a traditional antivirus program to catch, which leaves existing ransomware detection systems with little to flag against this powerful browser-based ransomware.

We proposed three different defense approaches to mitigate this new ransomware type. These approaches operate at different levels – browser, file system and user – and complement one another.

The first approach temporarily halts a web application – a program that runs in the browser – in order to detect encrypted user files. The second approach monitors the activity of the web application on the user’s computer to identify ransomware-like patterns. The third approach introduces a new permission dialog box to inform users about the risks and implications associated with allowing web applications to access their computer’s file system.
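
As one illustration of the second, monitoring-style defense – our sketch, not the paper’s implementation, and the 7.5 threshold is an assumed value – encrypted output is statistically close to random bytes, so a monitor can flag writes whose byte entropy approaches the 8-bit maximum:

```typescript
// Sketch of an entropy signal a file-system monitor could use (our
// illustration, not the paper's detector). Encrypted bytes look random,
// so their Shannon entropy sits near the 8-bits-per-byte maximum.
function shannonEntropy(bytes: Uint8Array): number {
  const counts = new Array<number>(256).fill(0);
  for (const b of bytes) counts[b]++;
  let entropy = 0;
  for (const count of counts) {
    if (count === 0) continue;
    const p = count / bytes.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // bits per byte, between 0 and 8
}

// The threshold here is an assumption for illustration, not a tuned value.
function looksEncrypted(bytes: Uint8Array, threshold = 7.5): boolean {
  return shannonEntropy(bytes) > threshold;
}
```

On its own this heuristic misfires on already-compressed formats such as JPEG or ZIP, which also score high on entropy, so a practical monitor would combine it with other ransomware-like patterns, such as bursts of writes touching many files at once.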

When it comes to protecting your computer, be careful about where you upload as well as download files. Your uploads could be giving hackers an “in” to your computer.

Selcuk Uluagac, Professor of Computing and Information Science, Florida International University

This article is republished from The Conversation under a Creative Commons license.

Tuesday, April 23, 2024

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

AI chatbots restrict their output according to vague and broad policies. taviox/iStock via Getty Images

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.
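
As a rough picture of that measurement – purely our illustration, since the report does not publish a harness; generate stands in for any chatbot API, and the keyword heuristic is a placeholder for the hand labeling a real study would use – counting refusals over a prompt set looks like this:

```typescript
// Toy harness for measuring how often a chatbot declines to answer.
// `generate` is a hypothetical stand-in for any chatbot API call.
type Generate = (prompt: string) => Promise<string>;

async function refusalRate(prompts: string[], generate: Generate): Promise<number> {
  let refusals = 0;
  for (const prompt of prompts) {
    const reply = await generate(prompt);
    // Placeholder heuristic; real evaluations typically hand-label refusals.
    if (/\b(cannot|can't|won't|unable to|against (my|our) (policy|policies|guidelines))\b/i.test(reply)) {
      refusals++;
    }
  }
  return refusals / prompts.length; // e.g., 56 refusals of 140 prompts = 0.4
}
```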

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media, since those platforms distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid flatly refusing to generate content altogether unless there are solid public interest grounds, such as preventing child sexual abuse material, which laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license.