Sunday, December 8, 2024

Can you choose to believe something, just like that?

 

I decide, therefore I believe? duoogle/iStock via Getty Images Plus

Some years ago, I was in a lively conversation with a software developer about arguments for and against God’s existence. After discussing their merits and shortcomings, he paused – perhaps a little impatiently – and said, “You know, these arguments really don’t matter that much. I choose to believe in God. Believing is so valuable for my life.”

But is that how belief works – can you simply choose to believe?

People can, of course, choose to read certain sources, spend time with certain groups, or reflect on a certain matter – all of which influence their beliefs. But all of these choices involve evidence of some kind. We often choose which evidence to expose ourselves to, but the evidence itself seems to be in the driver’s seat in causing beliefs.

For much of the past 2,000 years, philosophers would have been perfectly comfortable with the software developer’s claim that belief is a matter of choice. A long line of distinguished thinkers – from the Stoic philosopher Epictetus and Saint Augustine of Hippo to French rationalist René Descartes and early feminist Mary Astell – have held that people can exercise at least some control over their beliefs.

Over the past half-century, however, “doxastic voluntarism” – the idea that belief is under the control of the will – has been widely rejected. Most current philosophers don’t think people can immediately believe something “just like that,” simply because they want to. What beliefs someone ends up having are determined by the people and environments they are exposed to – from beliefs about a deity to beliefs about the solar system.

As a philosophy professor myself, I’ve dedicated years of reflection to this issue. I’ve come to think both camps get something right.

Reflecting reality

Some philosophers think that the nature of belief itself ensures that people cannot just choose what to believe.

They argue that beliefs have a “truth-aim” built into them: that is, beliefs characteristically represent reality. And sadly, reality often does not obey our wishes and desires; we cannot just decide to think reality is a certain way.

No matter how much I may want to be 6 feet, 8 inches tall, reality will faithfully imprint upon my consciousness that I am 5 feet, 11 inches every time I glance in the mirror or make an appearance on the basketball court. Were I to resolve to believe that I am 6 feet, 8 inches, I would quickly find that such resolutions are wholly ineffective.

Sometimes belief, no matter how strong, just can’t keep denying reality. skynesher/E+ via Getty Images

Or consider another example. If belief were truly voluntary, I would gladly relinquish my belief that climate change is afoot – imagine how less worried I’d be. But I cannot. The evidence, along with the widespread agreement among scientific authorities, has indelibly impressed upon my mind that climate change is part of reality.

Regardless of whether I want to believe or not believe, bare desire isn’t enough to make it happen. Beliefs seem largely outside of our direct control.

Who’s responsible?

But if that’s true, some rather alarming consequences seem to follow. It seems we had better stop blaming people for their beliefs, no matter how far-fetched.

Suppose I believe a dangerous falsehood: that Bill Gates used the COVID-19 vaccine to implant microchips in people, or that climate change is a hoax, or that the Holocaust is an elaborate fabrication. If belief is involuntary, it looks as though I am innocent of any wrongdoing. These beliefs just happened to me, so to speak. If beliefs are not voluntary, then they seem the spontaneous result of my being exposed to certain influences and ideas – including, in this case, conspiracy theory chat forums.

Now, people can choose what influences they allow into their lives – to some extent. I can decide where to gather information about climate trends: a chat forum, the mainstream media, or the United Nations’ Intergovernmental Panel on Climate Change. I can decide how much to reflect on what such sources tell me, along with their motivations. Almost all contemporary philosophers think that people can exert this type of voluntary control over their beliefs.

We make choices about what evidence to look at – but even those choices are shaped by external influences. Maskot/Getty Images

But does that mean I am responsible for the beliefs I arrive at? Not necessarily.

After all, which sources we decide to consult, and how we evaluate them, can also be shaped by our preexisting beliefs. I am not going to trust the U.N. climate panel’s latest report if, say, I believe it is a part of a global conspiracy to curtail free markets – especially not if I had many similar beliefs drummed into me since childhood.

It gets difficult to see how individuals could have any meaningful freedom over their beliefs, or any meaningful responsibility.

The murky middle

Research has led me to think that things are a bit less grim – and a bit less black and white.

Philosopher Elizabeth Jackson and I recently carried out a study, not yet published, involving more than 300 participants. We gave them brief summaries of several scenarios where it was unclear whether an individual had committed a crime.

The evidence was ambiguous, but we asked participants whether they could choose to believe the individual was innocent “just like that,” without having to gather evidence or think critically. Many people in the study said that they could do exactly this.

It’s possible they were mistaken. Still, several recent studies at the intersection of philosophy and psychology suggest people can control some of their beliefs, especially in situations where the evidence is ambiguous.

And that describes many of the most important propositions people are forced to consider, from politics and careers to romance: Who is the best candidate? Which path should I pursue? Is she the one?

So, it looks like we have some reason to think people are able to directly control their beliefs, after all. And if the evidence for God is similarly ambiguous, perhaps my software developer was right that he could decide to believe.

Mark Boespflug, Assistant Professor of Philosophy, Fort Lewis College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, October 13, 2024

What the ancient Indian text Bhagavad Gita can teach about not putting too much of our identity and emotions into work

 

This famous scene from the Bhagavad Gita features the god Krishna with his cousin, Prince Arjuna, on a chariot heading into war. Pictures From History/Universal Images Group via Getty Images

A 2023 Gallup poll found that U.S. employees are generally unhappy at work. The number of those who feel angry and disconnected from their organization’s mission is climbing.

An analysis of data from 60,000 employees by BambooHR, an HR software platform, also found that workplace morale was getting worse: “Employees aren’t experiencing highs or lows — instead, they are expressing a sense of resignation or even apathy.”

As a scholar of South Asian religions, I argue that a mindfulness technique called “nishkama karma” – acting without desire – described in an ancient but popular Indian text called the “Bhagavad Gita,” may prove useful for navigating the contemporary world of work.

The Gita presents a variety of “yogas,” or disciplined religious paths. One such path suggests adopting an attitude of righteous resignation – a kind of Stoic equanimity or even-mindedness. In the workplace, this might mean performing one’s professional duties to the best of one’s ability – but without being overly concerned about the results for one’s personal advancement.

The Gita and action

The “Bhagavad Gita,” or “Song of the Lord,” is an 18-chapter dialogue between Krishna, the Lord of the Universe, and the warrior-hero Arjuna. Found in the sixth book of the world’s longest epic poem, the “Mahabharata,” the Gita was likely composed between the third century B.C.E. and the third century C.E.

The Gita opens on a battlefield where Arjuna, the beleaguered champion of the Pandavas, is set to fight his cousins, the Kauravas, along with his uncles and former teachers, for the rightful control of the ancestral kingdom.

Arjuna is faced with the moral ambiguity of internecine warfare. He is stuck in a dilemma between obligations to his kin and former teachers and obligations to his “dharma” – religious and social duty – as a warrior to fight against them. Arjuna is therefore understandably reluctant to act.

Krishna, who has assumed the humble guise of Arjuna’s charioteer in the story, advises Arjuna that it is impossible for anyone to refrain entirely from all action: “There is no one who can remain without action even for a moment. Indeed, all beings are compelled to act by their qualities born of material nature” (3.5).

Even choosing not to act is itself a kind of action. Krishna instructs Arjuna to perform his duties as a warrior regardless of how he feels about the prospect of fighting against family and friends: “Fight for the sake of duty, treating alike happiness and distress, loss and gain, victory and defeat. Fulfilling your responsibility in this way, you will never incur sin” (2.38).

Given the inevitability of action, Krishna advises Arjuna to cultivate an attitude of nonattached equanimity or even-mindedness toward the results of his actions. This detachment applies to the results of one’s work, not to the work itself, and the Gita presents it as a method for gaining a clear and stable mind.

‘Nishkama karma,’ or nonattached action

The term that the Gita uses, variously rendered as “work” or “action,” is “karma.” Derived from the Sanskrit root “kri” – to do, to act or to make – karma has a range of meanings in Hindu literature. In early Vedic thought, karma referred to the performance of a sacrifice and the results that followed.

By the time the Gita was composed, more than 1,000 years later, the concept of karma had expanded considerably. From the sixth century B.C.E. onward, Hindu texts typically describe karma as any thought, word or deed, and its consequences in this or a future lifetime.

Carved statues of Lord Krishna and Arjuna seated on their chariot at the Viswashanti Ashram, Bengaluru, India. Wirestock/iStock via Getty Images Plus

Krishna explains to Arjuna that his actions or karma should follow dharma, the religious and social obligations inherent in his role as a warrior of the Pandavas. And the proper dharmic attitude toward the results of action is nonattachment.

The word that describes this nonattachment is “nishkama,” or without desire – the proper spirit in which karma is to be undertaken. From the perspective of the Gita – a perspective shared widely in traditional Indian thought – desire is inherently problematic due to its insistent preoccupation with the self. By reducing desire, however, one can perform one’s work or action without the constant distraction of seeking praise or avoiding blame.

Furthermore, since knowing the outcome of one’s actions is impossible, the Gita advises performing one’s duties without a sense of ego in a spirit of service to the world. “Therefore, without attachment, always do whatever action has to be done; for it is through acting without attachment that one attains the highest state,” as Krishna says to Arjuna (3.19).

The flow state

In his modern classic “Flow: The Psychology of Optimal Experience,” psychologist Mihaly Csikszentmihalyi writes about the optimal mental state that may be experienced while performing an engaging task. Csikszentmihalyi describes “flow” as a mental state where one is fully immersed in the task at hand. In such a state, attention is focused on the work being done without any self-conscious concerns about performance or outcome.

By way of example, Csikszentmihalyi asked readers to consider downhill skiing. He noted that while one is fully engaged in the process itself, there is no place for distraction. For a skier, he said, “There is no room in your awareness for conflicts and contradictions; you know that a distracting thought or emotion might get you buried face down in the snow.”

Csikszentmihalyi’s research suggests that problems like distraction, feeling detached from one’s work, and job dissatisfaction can arise when people lose sight of the action of work itself. As Csikszentmihalyi writes, “The problem arises when people are so fixated on what they want to achieve that they cease to derive pleasure from the present. When that happens, they forfeit their chance of contentment.”

Acting without attachment

A fragmented mind that approaches work or action with an agenda of gaining power, wealth or fame cannot perform at its best. The Gita suggests that the secret to success at work is cultivating a balanced state of mind that isn’t fixated on ego inflation and self-promotion.

It is impossible to be fully present during the performance of a task if one is speculating about unknowable future contingencies or ruminating about past outcomes. Likewise, for Csikszentmihalyi, cultivating the “flow state” means actively remaining present and engaged while performing a task.

Csikszentmihalyi’s writings about the “flow state” resonate with the advice of Krishna in the Gita: “As ignorant people perform their duties with attachment to the results, O scion of Bharat (an epithet for Arjuna), so should the wise act without attachment, for the sake of leading people on the right path” (3.25).

Nishkama karma and the “flow state” are not identical ideas. However, they share at least one fundamental assumption: Focusing on the task at hand, with no thought of gain or loss, is necessary for achieving our best, most satisfying work.

Robert J. Stephens, Principal Lecturer in Religion, Clemson University

This article is republished from The Conversation under a Creative Commons license.

Monday, September 9, 2024

Robots are coming to the kitchen – what that could mean for society and culture

 

Robotic kitchens aren’t on homemakers’ must-have lists yet, but they are starting to gain traction in restaurants. Robert Michael/picture alliance via Getty Images

Automating food is unlike automating anything else. Food is fundamental to life – nourishing body and soul – so how it’s accessed, prepared and consumed can change societies fundamentally.

Automated kitchens aren’t sci-fi visions from “The Jetsons” or “Star Trek.” The technology is real and global. Right now, robots are used to flip burgers, fry chicken, create pizzas, make sushi, prepare salads, serve ramen, bake bread, mix cocktails and much more. AI can invent recipes based on the molecular compatibility of ingredients or whatever a kitchen has in stock. More advanced concepts are in the works to automate the entire kitchen for fine dining.

Since technology tends to be expensive at first, the early adopters of AI kitchen technologies are restaurants and other businesses. Over time, prices are likely to fall enough for the home market, possibly changing both home and societal dynamics.

Can food technology really change society? Yes, just consider the seismic impact of the microwave oven. With that technology, it was suddenly possible to make a quick meal for just one person, which can be a benefit but also a social disruptor.

Familiar concerns about the technology include worse nutrition and health from prepackaged meals and microwave-heated plastic containers. Less obviously, that convenience can also transform eating from a communal, cultural and creative event into a utilitarian act of survival – altering relationships, traditions, how people work, the art of cooking and other facets of life for millions of people.

For instance, think about how different life might be without the microwave. Instead of working at your desk over a reheated lunch, you might have to venture out and talk to people, as well as enjoy a break from work. There’s something to be said for living more slowly in a society that’s increasingly frenetic and socially isolated.

Convenience can come at a great cost, so it’s vital to look ahead at the possible ethical and social disruptions that emerging technologies might bring, especially for a deeply human and cultural domain – food – that’s interwoven throughout daily life.

With funding from the U.S. National Science Foundation, my team at California Polytechnic State University is halfway into what we believe is the first study of the effects AI kitchens and robot cooks could have on diverse societies and cultures worldwide. We’ve mapped out three broad areas of benefits and risks to examine.

You aren’t likely to have a robotic home kitchen anytime soon, but several companies are making them and marketing them to early adopters.

Creators and consumers

The benefits of AI kitchens include enabling chefs to be more creative, as well as eliminating repetitive, tedious tasks such as peeling potatoes or standing at a workstation for hours. The technology can free up time. Not having to cook means being able to spend more time with family or focus on more urgent tasks. For personalized eating, AI can cater to countless special diets, allergies and tastes on demand.

However, there are also risks to human well-being. Cooking can be therapeutic and provides opportunities for many things: gratitude, learning, creativity, communication, adventure, self-expression, growth, independence, confidence and more, all of which may be lost if no one needs to cook. Family relationships could be affected if parents and children are no longer working alongside each other in the kitchen – a safe space to chat, in contrast to what can feel like an interrogation at the dining table.

The kitchen is also the science lab of the home, so science education could suffer. The alchemy of cooking involves teaching children and other learners about microbiology, physics, chemistry, materials science, math, cooking techniques and tools, food ingredients and their sourcing, human health and problem-solving. Not having to cook can erode these skills and knowledge.

Community and cultures

AI can help with experimentation and creativity, such as creating elaborate food presentations and novel recipes within the spirit of a culture. Just as AI and robotics help generate new scientific knowledge, they can increase understanding of, say, the properties of food ingredients, their interactions and cooking techniques, including new methods.

But there are risks to culture. For example, AI could bastardize traditional recipes and methods, since AI is prone to stereotyping – flattening or oversimplifying cultural details and distinctions. This selection bias could lead to reduced diversity in the kinds of cuisine produced by AI and robot cooks. Technology developers could become gatekeepers for food innovation if the limits of their machines lead to homogeneity in cuisines and creativity, much as AI-generated art has a weirdly similar feel across different apps.

Also, think about your favorite restaurants and favorite dinners. How might the character of those neighborhoods change with automated kitchens? Would it degrade your own gustatory experience if you knew those cooking for you weren’t your friends and family but instead were robots?

Robotic kitchens are beginning to show up in restaurants, particularly fast-food places. CFOTO/Future Publishing via Getty Images

The hope with technology is that more jobs will be created than jobs lost. Even if there’s a net gain in jobs, the numbers hide the impact on real human lives. Many in the food service industry – one of the most popular occupations in any economy – could find themselves unable to learn new skills for a different job. Not everyone can be an AI developer or robot technician, and it’s far from clear that supervising a robot is a better job than cooking.

Philosophically, it’s still an open question whether AI is capable of genuine creativity, particularly if that implies inspiration and intuition. Assuming so may be the same mistake as thinking that a chatbot understands what it’s saying, instead of merely generating words that statistically follow the previous words. This has implications for aesthetics and authenticity in AI food, similar to ongoing debates about AI art and music.

Safety and responsibility

Because humans are a key disease vector, robot cooks can improve food safety. Precision trimming and other automation can reduce food waste, along with AI recipes that can make the fullest use of ingredients. Customized meals can be a benefit for nutrition and health, for example, in helping people avoid allergens and excess salt and sugar.

The technology is still emerging, so it’s unclear whether those benefits will be realized. Foodborne illnesses are an unknown. Will AI and robots be able to smell, taste or otherwise sense the freshness of an ingredient or the lack thereof and perform other safety checks?

Physical safety is another issue. It’s important to ensure that a robot chef doesn’t accidentally cut, burn or crush someone because of a computer vision failure or other error. AI chatbots have been advising people to eat rocks, glue, gasoline and poisonous mushrooms, so it’s not a stretch to think that AI recipes could be flawed, too. And just as legal regimes are still struggling to sort out liability for autonomous vehicles, it may be tricky to figure out liability for robot cooks, including when they are hacked.

Given the primacy of food, food technologies help shape society. The kitchen has a special place in homes, neighborhoods and cultures, so disrupting that venerable institution requires careful thinking to optimize benefits and reduce risks.

Patrick Lin, Professor of Philosophy, California Polytechnic State University

This article is republished from The Conversation under a Creative Commons license.

What is the Shroud of Turin and why is there so much controversy around it?

 

An image of the Shroud of Turin, which purports to show the face of Jesus. Pierre Perrin/Sygma via Getty Images

The Cathedral of St. John the Baptist in Turin, Italy, houses a fascinating artifact: a massive cloth shroud that bears the shadowy image of a man who appears to have been crucified. Millions of Christians around the world believe that this shroud – commonly called the Shroud of Turin – is the cloth that was used to bury Jesus after his crucifixion and that the image on the shroud was produced miraculously when he was resurrected.

The evidence, however, tells a different story.

Scientists have questioned the validity of the claims about the shroud being a first-century object. Evidence from carbon-14 dating points to the shroud being a creation from the Middle Ages. Skeptics, however, dismiss these tests as flawed. The shroud remains an object of faith, intrigue and controversy that reappears periodically in the public sphere, as it has in recent weeks.

As a scholar of early Christianity, I have long been interested in why people are motivated to create objects like the shroud and also why people are drawn to revere them as authentic.

The shroud and its history

The shroud first appeared in 1354, when it was displayed publicly in Lirey, a small commune in northern France. Christian pilgrims traveled from all over to gaze upon the image of the crucified Jesus.

Pilgrimages like this were common during the Middle Ages, when relics of holy people began to appear throughout Europe. The relic trade was big business at the time; relics were bought and sold, and pilgrims often paid a fee to visit them.

Many believed that these relics were genuine. In addition to the shroud, pilgrims visited Jesus’ crib, splinters from the cross and Jesus’ foreskin, just to name a few.

But even in the 14th century, when the relic trade in Europe was flourishing, some were suspicious.

In 1390, only a few decades after the shroud was displayed in Lirey, a French bishop named Pierre d’Arcis claimed in a letter to Pope Clement VII not only that the shroud was a fake but that the artist responsible for its creation had already confessed to creating it. Clement VII agreed with the assessment of the shroud, although he permitted its continued display as a piece of religious art.

The shroud and science

The shroud has been the subject of much scientific investigation in the past several decades. Data from scientific tests matches what scholars know about the shroud from historical records.

In 1988, a team of scientists used carbon-14 dating to determine when the fabric of the shroud was manufactured. The tests were performed at three labs, all working independently. Based on data from these labs, scientists said there was “conclusive evidence” that the shroud originated between the years 1260 and 1390.

Results from another scientific study more than 30 years later appeared to debunk these findings. Using an advanced X-ray technique to study the structure of materials, the scientists concluded that the fabric of the shroud was much older and could likely be from the first century. They also noted, however, that their results could be considered conclusive only if the shroud had been stored at a relatively constant temperature of 68 to 72.5 degrees Fahrenheit (20 to 22.5 degrees Celsius) and a relative humidity of 55% to 75% for the entirety of two millennia.

This would be highly unlikely for any artifact from that period. And when it comes to the shroud, the conditions under which it has survived have been less than ideal.

In 1532, while the shroud was being kept in Chambéry in southeastern France, the building it was housed in caught fire. The silver case that held the shroud melted; despite intricate repair attempts, the burn marks in the fabric remain visible to this day. The shroud was saved from another fire, this time in Turin, as recently as 1997.

Despite the ongoing debate, the carbon-14 dating results have continued to provide the most compelling scientific evidence that the shroud is a product of the Middle Ages and not an ancient relic.

The shroud as religious art

The shroud is undeniably a masterful work of art, crafted with remarkable skill and using methods that were complicated and ahead of their time. For centuries, many experts struggled to understand how the image was imprinted onto the fabric, and it wasn’t until 2009 that scientists were able to reproduce the technique using medieval methods and materials.

Pope Francis once referred to the shroud as an “icon,” a type of religious art that can be used for a variety of purposes, including teaching, theological expression and even worship. Without addressing the authenticity of the shroud, the pope suggested that by prompting reflection on the face and body of the crucified Jesus, the shroud encouraged people to also consider those around them who may be suffering.

It is at least possible that the shroud was created as a tool that would encourage viewers to meditate on the death of Jesus in a tangible way.

Ultimately, the Shroud of Turin will continue to intrigue and draw both believers and skeptics into a debate that has spanned centuries. But I believe that the shroud encourages viewers to think about how history, art and belief come together and influence how we see the past.

Eric Vanden Eykel, Associate Professor of Religious Studies, Ferrum College

This article is republished from The Conversation under a Creative Commons license. Read the original article.