Sunday, June 20, 2021

South Africa needs new thinking for its democracy to work for all

 

Many black South Africans live in appalling conditions with no running water or electricity 27 years into democracy. Anders Pettersson/Getty Images

Claims of racial bias against black pupils at affluent private schools are becoming a routine South African event. They are also a symbol of the country’s reality.

The private schools were created by and for white people – this is reflected in their rules and customs. But they are seen as centres of educational quality, and so black people who can afford them send their children there. Yet the schools either can’t or won’t change into institutions which include everyone. All of which is a fair description of South Africa since 1994.

In a just-published book, Prisoners of the Past: South African Democracy and the Legacy of Minority Rule, I argue that the new order created when racial laws were scrapped in 1994 is “path dependent” – patterns which held sway in the old order are carried into the new. This does not mean that, as some claim, nothing has changed – anyone who claims there is no difference between racial minority rule and democracy was either not alive before 1994 or not paying attention. But core realities have not changed.

Before 1994, South Africa was divided into white insiders and black outsiders. Some who were outsiders are now insiders, the minority who have a regular income from the formal economy. But most remain outside. Within the insider group there are divisions: race is the most important.

Although racism is now outlawed, racial pecking orders survive – some black people have been absorbed into a still white-run economy, a reality confirmed by the make-up of boards and senior management. Middle class black people are among the angriest South Africans – they enjoy opportunities and hold qualifications which were unavailable to their parents, but experience many of the same racial attitudes. This fuels conflicts which sound like campaigns for radical economic change but are driven by middle-class anger at the survival of racial barriers.

In the suburbs, people vote overwhelmingly for the opposition and hold the governing African National Congress (ANC) in contempt – in the townships and shack settlements where poor people live, the ANC still dominates although it has lost some ground. But the quality of public services in suburbs is still way above that in townships and authorities are much more likely to listen to suburbanites who pressure them. Under apartheid, too, the suburbs were well served and well heard, the townships were neither.

Wrong solutions

Why has democracy not ended these patterns? First, because the negotiations which ended minority rule tackled only the most obvious problem – that most South Africans were denied citizenship rights. No progress was possible without ending this, but it was only a part of what needed to change. There were no negotiated agreements on the economy or the professions or education.

In theory, changing the political system was meant to ensure that everything else changed too. But habits and hierarchies do not disappear simply because political rules change – neither does the balance of power in the economy and society. The political system is now controlled by the formerly excluded black majority – other areas of the country’s life are not.

Second, the political elite who took over in 1994 have not tried to change these realities because they – with, ironically, the old white economic and cultural elite – believe that the goal of democratic South Africa is to extend to everyone what whites enjoyed under apartheid. They have not built a new economic, cultural and social order – they have tried to slot as many black people as possible into what exists. The parents who send their children to suburban private schools and hope they will be treated with respect are following the same path.

Apartheid was good to whites. It gave them the vote and freedom of speech as long as they were not too sympathetic to blacks. It created large formal businesses and, in its heyday, whites were guaranteed a formal job. The suburbs of major cities resembled California in the US. It is this which the new and old elite want to extend to everyone.

Despite much talk of black economic empowerment, far more effort is devoted to the role of black people in the corporations which have dominated the economy for decades than to promoting black-owned businesses. In the professions, new black entrants have been expected to conform to the habits and rules created when whites controlled the society. Culturally, apartheid and its values may be discredited but the West remains the centre of attention.

Everyone can’t have what whites had under apartheid because there isn’t nearly enough of it to go around. While political rights can be enjoyed by everyone, apartheid’s economic and social benefits were what a fraction of the country enjoyed by using force to deny them to the rest. Once apartheid went, the living standards of the minority needed to adjust to what a middle-income country could afford. Because they haven’t, only so many black people can benefit.

It is common for the South African debate to blame the government, or particular people in it, for the country’s difficulties. But it is the realities described here which explain poor growth, continued inequality and the many problems of which the debate complains.

New thinking

It is easy to see why the white elite prefers the old arrangements – but why does the new black leadership want them? Wherever one group dominates others, the standards and habits of the group in charge come to be seen as the measure of the good society: for the leadership of those at the wrong end of this, domination only ends when everyone shares in them. The response is very human – but it keeps alive the old order with its inequalities and unfairness.

Despite change in important areas this is the reality of post-1994 South Africa. It ensures that the country does not reach anything like its potential – that it is not only less humane than it might be but less well-off too because many are still barred from using their talents and energies to help it to grow.

South Africa is not, the book argues, doomed to follow this path forever. Change needs, firstly, new thinking, an approach which seeks a society which works for all its people. This is unlikely to come from elites, who are wedded to the present, but could be the product of campaigning by citizens. It could create the ground for negotiation which would tackle what the 1994 deal left untouched – how to create a new, shared, society and not only a new political order.

It is this path, not the constant search for the perfect political leader who will solve all problems, which could enable South Africa to bury its past and create a better future.

Steven Friedman, Professor of Political Studies, University of Johannesburg

This article is republished from The Conversation under a Creative Commons license.

Faith still shapes morals and values even after people are ‘done’ with religion

 

For many, leaving religion does not mean leaving behind religious morals and values. Jesus Gonzalez/Moment via Getty

Religion forms a moral foundation for billions of people throughout the world.

In a 2019 survey, 44% of Americans – along with 45% of people across 34 nations – said that belief in God is necessary “to be moral and have good values.” So what happens to a person’s morality and values when they lose faith?

Religion influences morals and values through multiple pathways. It shapes the way people think about and respond to the world, fosters habits such as church attendance and prayer, and provides a web of social connections.

As researchers who study the psychology and sociology of religion, we expected that these psychological effects would linger even after observant people leave religion, a group we refer to as “religious dones.” So together with our co-authors Daryl R. Van Tongeren and C. Nathan DeWall, we sought to test this “religious residue effect” among Americans. Our research addressed the question: Do religious dones maintain some of the morals and values of religious Americans?

In other words, just because some people leave religion, does religion fully leave them?

Measuring the religious residue effect

Recent research demonstrates that religious dones around the world fall between the never religious and the currently religious in terms of thoughts, feelings and behaviors. Many maintain some of the attributes of religious people, such as volunteering and charitable giving, even after they leave regular faith practices behind. So in our first project, we examined the association between leaving religion and the five moral foundations commonly examined by psychologists: care/harm, fairness/cheating, ingroup loyalty/betrayal, authority/subversion and purity/degradation.

We found that religious respondents were the most likely to support each of the five moral foundations. These involve intuitive judgments, such as feeling the pain of others and drawing on virtues like kindness and compassion. For instance, religious Americans are relatively likely to oppose acts they deem “disgusting,” which is a component of the purity/degradation scale. This aligns with previous research on religion and moral foundations.

Most importantly, and in line with the religious residue hypothesis, we found what we call a “stairstep pattern” of beliefs. The consistently religious are more likely than the dones to endorse each moral foundation, and the religious dones are more likely to endorse them than the consistently nonreligious. The one exception was the moral foundation of fairness/cheating, which the dones and the consistently religious supported at similar rates.

Put another way, after leaving religion, religious dones maintain some emphasis on each of the five moral foundations, though less so than the consistently religious, which is why we refer to this as a stairstep pattern.

Our second project built on research showing that religion is inextricably linked with values, particularly Schwartz’s Circle of Values, the predominant model of universal values used by Western psychologists. Values are the core organizing principles in people’s lives, and religion is positively associated with the values of security, conformity, tradition and benevolence. These are “social focus values”: beliefs that address a generally understood need for coordinated social action.

For this project, we asked a single group of study participants the same questions as they grew older over a period of 10 to 11 years. The participants were adolescents in the first wave of the survey, and in their mid-to-late 20s in the final wave.

Our findings revealed another stairstep pattern: The consistently religious among these young adults were significantly more likely than religious dones to support the social focus values of security, conformity and tradition; and religious dones were significantly more likely to support them than the consistently nonreligious. While a similar pattern emerged with the benevolence value, the difference between the religious dones and the consistently nonreligious was not statistically significant.

Together, these projects show that the religious residue effect is real. The morals and values of religious dones are more similar to those of religious Americans than they are to the morals and values of other nonreligious Americans.

Our follow-up analyses add some nuance to that key finding. For instance, the enduring impact of religious observance on values appears to be strongest among former evangelical Protestants. Among dones who left mainline Protestantism, Catholicism and other religious traditions, the religious residue effect is smaller and less consistent.

Our research also suggests that the religious residue effect can decay. The more time that passes after people leave religion, the more their morals and values come to resemble those of people who have never been religious. This is an important finding, because a large and growing number of Americans are leaving organized religion, and there is still much to be learned about the psychological and social consequences of this decline in religion.

The growing numbers of nonreligious

As recently as 1990, only 7% of Americans reported having no religion. Thirty years later, in 2020, the percentage claiming to be nonreligious had quadrupled, with almost 3 in 10 Americans having no religion. There are now more nonreligious Americans than affiliates of any single religious tradition, including the two largest: Catholicism and evangelical Protestantism.

This shift in religious practice may fundamentally change Americans’ perceptions of themselves, as well as their views of others. One thing that seems clear, though, is that those who leave religion are not the same as those who have never been religious. Given the rapid and continued growth in the number of nonreligious Americans, we expect that this distinction will become increasingly important to understanding the morals and values of the American people.


Philip Schwadel, Professor of Sociology, University of Nebraska-Lincoln and Sam Hardy, Professor of Psychology, Brigham Young University

This article is republished from The Conversation under a Creative Commons license.

Wednesday, June 2, 2021

The next pandemic is already happening – targeted disease surveillance can help prevent it

 

Sustained surveillance for disease outbreaks at global hot spots may be the key to preventing the next pandemic. MR.Cole_Photographer/Getty Images

As more and more people around the world are getting vaccinated, one can almost hear the collective sigh of relief. But the next pandemic threat is likely already making its way through the population right now.

My research as an infectious disease epidemiologist has found that there is a simple strategy to mitigate emerging outbreaks: proactive, real-time surveillance in settings where animal-to-human disease spillover is most likely to occur.

In other words, don’t wait for sick people to show up at a hospital. Instead, monitor populations where disease spillover actually happens.

The current pandemic prevention strategy

Global health professionals have long known that pandemics fueled by zoonotic disease spillover, or animal-to-human disease transmission, were a problem. In 1947, the World Health Organization established a global network of hospitals to detect pandemic threats through a process called syndromic surveillance. The process relies on standardized symptom checklists to look for signals of emerging or reemerging diseases of pandemic potential among patient populations with symptoms that can’t be easily diagnosed.

This clinical strategy relies both on infected individuals coming to sentinel hospitals and medical authorities who are influential and persistent enough to raise the alarm.

Sentinel surveillance recruits select health institutions and groups to monitor potential disease outbreaks.

There’s only one hitch: By the time someone sick shows up at a hospital, an outbreak has already occurred. In the case of SARS-CoV-2, the virus that causes COVID-19, it was likely widespread long before it was detected. This time, the clinical strategy alone failed us.

Zoonotic disease spillover is not one and done

A more proactive approach is currently gaining prominence in the world of pandemic prevention: viral evolutionary theory. This theory suggests that animal viruses become dangerous human viruses incrementally over time through frequent zoonotic spillover.

It’s not a one-time deal: An “intermediary” animal such as a civet cat, pangolin or pig may be required for the virus to mutate enough to make initial jumps to people. But the final host that allows a variant to become fully adapted to humans may be humans themselves.

Viral evolutionary theory is playing out in real time with the rapid development of COVID-19 variants. In fact, an international team of scientists have proposed that undetected human-to-human transmission after an animal-to-human jump is the likely origin of SARS-CoV-2.

Viruses jump species through a process of random mutations that allow them to successfully infect their hosts.

When novel zoonotic viral disease outbreaks like Ebola first came to the world’s attention in the 1970s, research on the extent of disease transmission relied on antibody assays, blood tests to identify people who have already been infected. Antibody surveillance studies, also called serosurveys, test blood samples from target populations to identify how many people have been infected. Serosurveys help reveal whether diseases like Ebola are circulating undetected.
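The arithmetic behind a serosurvey estimate can be sketched in a few lines. The counts below are hypothetical, not taken from any study described here; the Wilson score interval is one standard way to put uncertainty bounds on an estimated proportion:

```python
import math

def wilson_interval(positives, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical serosurvey: 15 antibody-positive results among 300 samples
positives, n = 15, 300
prevalence = positives / n  # 5% point estimate
lo, hi = wilson_interval(positives, n)
print(f"Estimated seroprevalence: {prevalence:.1%} (95% CI {lo:.1%}–{hi:.1%})")
```

With small sample sizes the interval is wide, which is one reason serosurveys are usually repeated and pooled before conclusions are drawn.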

Turns out they were: Ebola antibodies were found in more than 5% of people tested in Liberia in 1982, decades before the West African epidemic in 2014. These results support viral evolutionary theory: It takes time – sometimes a lot of time – to make an animal virus dangerous and transmissible between humans.

What this also means is that scientists have a chance to intervene.

Measuring zoonotic disease spillover

One way to take advantage of the lead time for animal viruses to fully adapt to humans is long-term, repeated surveillance. Setting up a pandemic threats warning system with this strategy in mind could help detect pre-pandemic viruses before they become harmful to humans. And the best place to start is directly at the source.

My team worked with virologist Shi Zhengli of the Wuhan Institute of Virology to develop a human antibody assay to test for a very distant cousin of SARS-CoV-2 found in bats. We established proof of zoonotic spillover in a small 2015 serosurvey in Yunnan, China: 3% of study participants living near bats carrying this SARS-like coronavirus tested antibody positive. But there was one unexpected result: None of the previously infected study participants reported any harmful health effects. Earlier spillovers of SARS coronaviruses – like the first SARS epidemic in 2003 and Middle Eastern Respiratory Syndrome (MERS) in 2012 – had caused high levels of illness and death. This one did no such thing.

Researchers conducted a larger study in Southern China between 2015 and 2017. It’s a region home to bats known to carry SARS-like coronaviruses, including the one that caused the original 2003 SARS pandemic and the one most closely related to SARS-CoV-2.

Fewer than 1% of participants in this study tested antibody positive, meaning they had been previously infected with the SARS-like coronavirus. Again, none of them reported negative health effects. But syndromic surveillance – the same strategy used by sentinel hospitals – revealed something even more unexpected: An additional 5% of community participants reported symptoms consistent with SARS in the past year.

This study did more than just provide the biological evidence needed to establish proof of concept to measure zoonotic spillover. The pandemic threats warning system also picked up a signal for a SARS-like infection that couldn’t yet be detected through blood tests. It may even have detected early variants of SARS-CoV-2.

Had surveillance protocols been in place, these results would have triggered a search for community members who may have been part of an undetected outbreak. But without an established plan, the signal was missed.

From prediction to surveillance to genetic sequencing

The lion’s share of pandemic prevention funding and effort over the past two decades has focused on discovering wildlife pathogens and predicting pandemics before animal viruses can infect humans. But this approach has not predicted any major zoonotic disease outbreaks – including H1N1 influenza in 2009, MERS in 2012, the West African Ebola epidemic in 2014 or the current COVID-19 pandemic.

Gregory Gray and his team at Duke University recently discovered a novel canine coronavirus at a global “hot spot” through surveillance and genetic sequencing.

Predictive modeling has, however, provided robust heat maps of the global “hot spots” where zoonotic spillover is most likely to occur.

Long-term, regular surveillance at these “hot spots” could detect spillover signals, as well as any changes that occur over time. These could include an uptick in antibody-positive individuals, increased levels of illness and demographic changes among infected people. As with any proactive disease surveillance, if a signal is detected, an outbreak investigation would follow. People identified with symptoms that can’t be easily diagnosed can then be screened using genetic sequencing to characterize and identify new viruses.
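As a rough sketch of what flagging an “uptick in antibody-positive individuals” might look like statistically, here is a two-proportion z-test comparing hypothetical counts from two survey rounds. The numbers are invented for illustration, and a real warning system would use more sophisticated methods:

```python
import math

def two_proportion_z(pos1, n1, pos2, n2):
    """z-statistic for the change in positivity between two survey rounds."""
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)  # pooled positivity under "no change"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical rounds: 8/400 positive in year 1, 24/400 positive in year 2
z = two_proportion_z(8, 400, 24, 400)
print(f"z = {z:.2f}")  # |z| > 1.96 would flag a signal worth investigating
```

A statistic beyond the threshold would not prove spillover on its own – it would trigger the kind of outbreak investigation the paragraph above describes.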

This is exactly what Greg Gray and his team from Duke University did in their search for undiscovered coronaviruses in rural Sarawak, Malaysia, a known “hot spot” for zoonotic spillover. Eight of 301 specimens collected from pneumonia patients hospitalized in 2017-2018 were found to have a canine coronavirus never before seen in humans. Complete viral genome sequencing not only suggested that the virus had recently jumped from an animal host – it also revealed that it harbored the same mutation that made both SARS and SARS-CoV-2 so deadly.

Let’s not miss the next pandemic warning signal

The good news is that surveillance infrastructure in global “hot spots” already exists. The Connecting Organisations for Regional Disease Surveillance program links six regional disease surveillance networks in 28 countries. They pioneered “participant surveillance,” partnering with communities at high risk for both initial zoonotic spillover and the gravest health outcomes to contribute to prevention efforts.

For example, Cambodia, a country at risk of pandemic avian influenza spillover, established a free national hotline for community members to report animal illnesses directly to the Ministry of Health in real time. Boots-on-the-ground approaches like these are key to a timely and coordinated public health response to stop outbreaks before they become pandemics.

It is easy to miss warning signals when global and local priorities are tentative. The same mistake need not happen again.

Maureen Miller, Adjunct Associate Professor of Epidemiology, Columbia University

This article is republished from The Conversation under a Creative Commons license.

Your phone and your brain - what we know so far

 

Media multitasking: constantly juggling media and non-media activities, often using multiple digital devices. GettyImages

A defining characteristic of the way many people live today is persistent online connectedness. Since the introduction of smartphones about 15 years ago, the rapid and broad adoption of these devices has had an impact on people’s behaviour at all hours of the day. Forecasts suggest that the number of smartphone connections in sub-Saharan Africa will reach 678 million by the end of 2025, representing an adoption rate of 65%.

Many people check their phones when they wake up, use them while travelling to work and constantly keep an eye on them while at work. A phone screen is the last thing many see before falling asleep. These behaviours have become so ingrained in routines that people rarely step back and consider the impacts on their bodies and minds.

Interaction with devices frequently involves mental switches from one context to another. Phones keep people permanently available for communication and information from different areas of life. Work-related emails, personal instant messages, social media posts, news, entertainment – it all becomes mixed and interwoven in a constant stream of beeps, pings and flashing notification icons.

Over the past decade, people have adapted to cope with this deluge of information in various ways. The most notable is media multitasking: constantly juggling media and non-media activities, often using multiple digital devices connected to various online platforms.

In a landmark study in 2009, researchers at Stanford University posed an important question: are people who juggle media like this better at switching tasks than people who don’t? Do heavy media multitaskers process information differently from light media multitaskers? It seemed likely that frequent multitasking would train a person’s brain to switch more effectively between cognitive tasks.

But the study made the surprising discovery that high media multitaskers performed worse in task switching. They were also more easily distracted by information that did not relate to the task at hand.

In the decade since the study, various research groups around the world have investigated the issue. Some studies confirmed the original findings, some did not, and some made contrary findings.

In the process various hypotheses have been proposed. The main one is that high media multitaskers tend to distribute their attention more broadly than low media multitaskers. Rather than focusing narrowly on particular tasks or streams of sensory input, their brains continuously monitor the environment for cues. It’s hard for them to filter out these cues. So they frequently switch away from one task to a different one.

Media multitasking and cognitive control

Given the prevalence of media multitasking, and the importance of paying attention in a variety of academic, professional, and social contexts, we need a clearer understanding of the associations between media multitasking and cognitive control. Cognitive control involves using working memory and inhibition to achieve a goal – focusing attention on particular mental tasks and ignoring distractions.

To produce a picture of the research in this domain, my colleague and I analysed the studies that have been done since 2009. We looked at 118 assessments reported across 46 different studies.

The picture that emerges from all this evidence is less clear than the original findings suggest. To make sense of the interaction between media multitasking and cognitive control, it is important to consider the strength of their association.

Our analysis of the 46 studies suggests that media multitasking behaviour has only a small effect on cognitive control.

What’s more, we found that the effect was only observable in studies where people reported their own experiences. In these studies, respondents typically answered a set of questions about failures of attention they experienced in everyday life.

Another way of measuring attention involves doing tasks in laboratory settings. These tasks are designed to measure participants’ performance in areas like working memory.

Our analysis indicates that, across all measures considered, high media multitaskers don’t perform worse than low media multitaskers in such tasks. This contradicts the original findings and suggests that there are no differences between the cognitive control abilities of high and low media multitaskers.

The difference in results from performance-based and self-report studies is important. One kind of study looks directly at a person’s performance in an artificial setting. The other reports what people think about their actions in the real world. And outside the lab, behaviour is influenced by multiple, often conflicting goals which compete for people’s attention.

It’s also still unclear whether media multitasking causes cognitive control problems. Do people, through constant switching among tasks, train their brains to constantly scan the environment for new cues? And does this harm or limit the ability to focus closely on a task for a length of time?

Regaining focus

All in all, the evidence indicates that there’s still a lot of uncertainty about the long-term impacts of digital device use on cognition. And there’s been limited time to carefully consider the benefits and costs of this way of living.

But aside from potential long-term effects, it’s worth thinking about the effects on daily routines. In their rushed and noisy, device-driven lives, people may not be paying enough attention to meaningful goals.

Organisational psychologist and bestselling author Adam Grant, in a recent article in the New York Times, emphasises the importance of being immersed in tasks that generate a sense of progress. These tasks contribute to daily joy and motivation. But for many people such immersion is unattainable when their minds start to reflect the clutter of their social media feeds.

Daniel B. le Roux, Senior Lecturer, Stellenbosch University

This article is republished from The Conversation under a Creative Commons license.