FO° Science & Technology: Perspectives and Analysis

The Geopolitics of Cables: US and China’s Subsea War

Geopolitical tensions are growing globally over the cutting of two subsea cables in the Baltic Sea — one that linked Finland and Germany, and the other linking Sweden and Lithuania. Finland and Germany suspect “intentional damage,” with European authorities investigating Chinese-flagged cargo ship Yi Peng 3.

Laid on the ocean floor, fiber optic subsea cables are the arteries of international communication. They carry roughly 95% of the world’s internet data and voice transfers, and are considered the fastest and most reliable means of data transmission. They have been critical to the process of globalization and are essential to the modern global economy, underpinning transactions worth over $10 trillion daily.

Today, an estimated 600 subsea cables span 1.4 million kilometers of the ocean floor. With so few cables accounting for most of the world’s internet traffic, it is no surprise that there is a fight for dominance over them.

Due to the high risks and costs associated with laying new undersea cable systems, these cables are usually owned by a consortium of parties.

Nations and companies investing in these cables face not only the risk of damage from natural disasters such as tsunamis, fishing nets, ship anchors and marine life, but also sabotage, spying and data theft.

That these cables have little protection and are no thicker than a garden hose, yet power financial, government and military communications, has become a cause of concern for governments across the world.

Historic incumbents vs. an emerging Chinese player

Three companies — America’s SubCom, Japan’s NEC Corporation and France’s Alcatel Submarine Networks — have historically dominated the construction and laying of fiber-optic subsea cables. But in 2008, a seismic shift took place when HMN Technologies (then Huawei Marine Networks) entered the market. One of the world’s fastest-growing companies, it has steadily gained market share: by 2020, HMN had built or repaired almost 25% of the world’s cables, and it supplied 18% of new cables between 2019 and 2023.

Wary of losing their underwater ascendancy, the three companies began to pool efforts to oust HMN Tech and other Chinese firms to retain influence over the subsea cables. 

At the core of this competition for subsea dominance is America’s fear of conceding a critical component of the digital economy to China. US President Joe Biden’s push to bolster regional cooperation on cybersecurity, including undersea cables, and to steer regional submarine cable plans away from China is pressing Beijing to respond.

America’s “techno-diplomacy,” through which it discourages its allies and telecom firms from partnering with Chinese companies, could stoke tensions with China. Notable is the involvement of alliances such as the Quad in a bid “to support and strengthen quality undersea cable networks in the Indo-Pacific.”

According to Reuters, a 2023 campaign by America helped SubCom beat HMN Tech and flipped a $600-million contract to build the South East Asia–Middle East–Western Europe 6 (SeaMeWe-6) cable. This was done through incentives and pressure on consortium members, including warnings and threats of sanctions and export controls. As Reuters points out, “This was one of the six private undersea cable deals in the Asia-Pacific where the US government intervened to prevent HMN from winning the contract, or forced the rerouting or abandonment of the cable deals.”

The US efforts to control the subsea cables have paid off, with HMN Tech’s market share expected to contract to a mere 7%. Though SubCom won only 12% of the total contracts between 2018 and 2022, those projects accounted for 40% of the total undersea cable network laid.

China soon struck back by announcing a $500 million Europe–Middle East–Asia internet cable. Known as PEACE (Pakistan and East Africa Connecting Europe), the project directly rivals the SeaMeWe-6: with more than 15,000 km in service and a planned length of over 25,000 km, it exceeds its rival’s 21,700 km and provides even higher bandwidth for the participating countries. This marked an escalation of the underwater geopolitical rivalry between the two powers.

The fears fueling this tech war

Many have dubbed subsea cables “a surveillance gold mine” for world intelligence agencies.

In 2020, HMN Tech’s success pricked up the ears of the US Department of Justice (DOJ), which raised national security concerns about China’s “sustained efforts to acquire the sensitive personal data of millions of U.S. persons.” In 2021, Washington added HMN Tech to the list of entities that acted “contrary to the foreign policy or national security interests of the United States.” As recently as March this year, US officials have voiced concern that Chinese repair ships could be used for spying, though there is no evidence of such activity either.

In 2018, the US placed sanctions on Russian firms and nationals on suspicion of aiding Russia’s domestic security service, the FSB (Federal Security Service), in improving its “underwater capabilities” — specifically in relation to subsea cables. As recently as 2023, NATO countries have observed Russian-registered vessels with equipment capable of undersea damage, as well as vessels carrying “unusual” communications equipment. These observations have stoked fears of sabotage, in addition to suspicions that Russia is gathering intelligence through mobile “listening posts.”

The recent cable-cutting is the second such incident in the Baltic Sea with Chinese involvement. In October 2023, the anchor of the Hong Kong-flagged, Chinese-owned vessel Newnew Polar Bear damaged two subsea data cables and a gas pipeline in the Baltic Sea.

However, there is no “publicly available” evidence that subsea cables are being or have been actively tapped or sabotaged by any country — be it China or Russia. Some recent analyses regard such threats as overblown.

Labeling concerns vis-à-vis “tapping into cables to derive, copy or obfuscate data” as “highly unlikely,” a European Union report in 2022 found “no publicly available and verified reports” indicating deliberate attacks, including from China. “The large-scale scenarios of a complete loss of connectivity … seem to be built not on prior incidents but on overall assessments of the geopolitical and threat landscape,” it said. It also added that the threat scenarios “could be exaggerated and suggest a substantial risk of threat inflation and fearmongering.”

Ironically, in 2013, The Guardian revealed that the UK’s spy agency, Government Communications Headquarters (GCHQ), had tapped into more than 200 fiber-optic cables to access a huge volume of communications, including exchanges between entirely innocent people, and shared sensitive personal information with its American partner, the NSA. The investigation drew on documents disclosed by US National Security Agency (NSA) whistleblower Edward Snowden. The documents also showed that the US was eavesdropping on its own partners in the so-called Five Eyes intelligence alliance, including Australia and New Zealand.

The need for international cooperation

Most analysts believe that the biggest risk isn’t espionage, sabotage or even rogue anchors, but rather the uneven spread of cable infrastructure, which threatens the very promise of digital equity.

This creates a need for interstate cooperation to protect the flow of information these cables carry.

But the US is stonewalling cooperation in an area that delivers international bandwidth and is necessary for the global digital transition. It has clearly proclaimed its intentions, for instance in the “Joint Statement on the Security and Resilience of Undersea Cables in a Globally Digitalized World,” released on the sidelines of the 79th session of the United Nations General Assembly.

The statement aspires to advance cooperation among its endorsers to “promote the selection of secure and verifiable subsea cable providers for new cable projects” as well as “protect cables and anticipate risks of intentional or unintentional damage as well as risks of communications and data being compromised.”

Cooperation between multinational companies has been the catalyst of submarine cable expansion and is crucial for the development of the digital economy, especially in the Global South.

But the kiasu approach of asserting closed-group dominance over the underwater ecosystem threatens to black out cooperation and divide the world into two geopolitical blocs, with each power forcing other states to choose its digital infrastructure.

This simmering struggle for subsea supremacy must be calmed before it boils over and compounds global challenges.

[Yaamini Gupta edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: ChatGPT Gets Lost in Time in Syria

Today’s media typically sums up the past 13 years of Syria’s history as an enduring civil war sparked by the events collectively referred to as the Arab Spring in 2011. Within less than 12 months, dramatic regime change took place in Tunisia, Egypt and Libya.

People are wondering today what role the United States has played in the events in the region since 2011. A leaked State Department document from December 2012 signed by former Secretary of State Hillary Clinton throws some light on that role with regard to Syria.

Wondering how ChatGPT might help to put these facts into a new historical perspective, I proposed the following prompt:

“Wikileaks published a State Department memo signed by Hillary Clinton and dated December 31, 2012. It was officially declassified in November 2015. In it, the Obama administration clearly defined the policy it intended to implement with regard to Syria. One statement in that email that might seem surprising today is this:

‘But other than the laudable purpose of saving Libyan civilians from likely attacks by Qadaffi’s regime, the Libyan operation had no long-lasting consequences for the region.’

Given that a lot of things have happened in Libya since then, how accurate is this denial of long-lasting consequences?”

“The assertion from the Obama administration that the 2011 intervention in Libya ‘had no long-lasting consequences for the region’ appears deeply inaccurate when viewed in retrospect. The situation in Libya and its broader impact on the region has had profound and far-reaching consequences, many of which continue to unfold. Here’s an overview of the long-lasting effects:


1. Libya’s Fragmentation and Persistent Instability

  • After the fall of Muammar Gaddafi in 2011, Libya descended into political and social chaos. The country fractured into a patchwork of competing militias, tribal factions, and rival governments.
  • By 2014, the conflict escalated into a full-scale civil war, pitting the internationally recognized Government of National Accord (GNA) in Tripoli against forces loyal to General Khalifa Haftar in the east (Libyan National Army – LNA).
  • The collapse of state authority left a power vacuum that remains largely unfilled to this day, leading to a prolonged state of instability.

2. Militancy, Terrorism, and Proliferation of Armed Groups

  • The fall of Gaddafi led to the unchecked spread of weapons across Libya and neighboring regions. Libya became an arms hub, supplying weapons to insurgents, militias, and terrorists throughout North Africa and the Sahel.
  • Extremist groups, including Al-Qaeda in the Islamic Maghreb (AQIM) and ISIS, exploited Libya’s chaos to gain a foothold. In 2015, ISIS even established a stronghold in the city of Sirte.
  • This has turned Libya into a base for jihadist networks and a staging ground for terrorist operations across the region.

3. Regional Spillover Effects: The Sahel and Beyond

  • One of the most significant consequences of Libya’s collapse was the destabilization of Mali and the broader Sahel region.
  • After Gaddafi’s fall, Tuareg fighters who had served in his army returned to northern Mali with weapons, leading to the 2012 Tuareg rebellion. This rebellion was quickly overtaken by Islamist militants, plunging Mali into crisis.
  • This instability has spread throughout the Sahel, with countries like Niger, Burkina Faso, and Chad struggling to contain growing insurgencies fueled by weapons and fighters spilling out of Libya.

4. The Refugee and Migrant Crisis

  • Libya became a major transit hub for refugees and migrants seeking to cross the Mediterranean Sea to Europe. The absence of effective state authority has allowed human trafficking networks to thrive.
  • This contributed significantly to the European migrant crisis in 2015, straining EU resources and fueling political tensions across the continent.

5. Geopolitical Competition in Libya

  • Libya has become a theater for proxy wars involving foreign powers. Countries such as Turkey, Russia, the UAE, Egypt, and others have backed rival factions, turning the conflict into an internationalized proxy struggle.
  • Russia’s increasing involvement in Libya (through the Wagner Group) has added a new dimension to geopolitical competition in North Africa.

6. Economic Collapse and Humanitarian Crisis

  • Libya’s oil-dependent economy, once a source of prosperity, has been devastated by the conflict. Oil production has been repeatedly disrupted by battles between factions.
  • The humanitarian situation remains dire, with millions of Libyans affected by conflict, food insecurity, and lack of healthcare.

Conclusion: Far-Reaching and Ongoing Consequences

The Obama administration’s statement in 2012—claiming no long-lasting regional consequences—significantly underestimated the fallout of the Libyan intervention. The collapse of the Libyan state triggered a chain reaction that destabilized the Sahel, contributed to the rise of terrorism, fueled migration crises, and intensified geopolitical competition. Today, Libya remains a symbol of the unintended consequences of foreign interventions, with its instability continuing to impact regional and global dynamics.”

I continued the conversation with the following prompt.

“The Clinton email contains the following recommendations: ‘Washington should start by expressing its willingness to work with regional allies like Turkey, Saudi Arabia, and Qatar to organize, train and arm Syrian rebel forces. The announcement of such a decision would, by itself, likely cause substantial defections from the Syrian military. Then, using territory in Turkey and possibly Jordan, U.S. diplomats and Pentagon officials can start strengthening the opposition. It will take time. But the rebellion is going to go on for a long time, with or without U.S. involvement. The second step is to develop international support for a coalition air operation. Russia will never support such a mission, so there is no point operating through the UN Security Council. Some argue that U.S. involvement risks a wider war with Russia. But the Kosovo example shows otherwise.’

‘Arming the Syrian rebels and using western air power to ground Syrian helicopters and airplanes is a low-cost high payoff approach.’

Does this accurately represent the role the US has played over the past 13 years?”

Below is a short version of ChatGPT’s response, including only the initial assessment and conclusion.

“The recommendations outlined in the Clinton email—organizing, arming, and supporting Syrian rebels, combined with a potential coalition air operation—reflect ideas that did, to some extent, align with U.S. policy in the Syrian conflict. However, the email oversimplifies the complexities of U.S. involvement and the consequences that unfolded. Over the past 13 years, the U.S. role in Syria has been shaped by a series of strategic choices, hesitations, and unintended consequences. Here’s a more accurate analysis of U.S. involvement in Syria since 2011:

Conclusion: Partial Alignment, but Limited Success

The email’s recommendations—arming rebels, organizing opposition forces, and using air power—align with aspects of U.S. policy, but the outcomes were far more complex and problematic:

  • The U.S. did arm and train rebels, but with limited success due to fragmentation and extremist infiltration.
  • The U.S. focused on ISIS rather than Assad, abandoning broader regime-change goals as the war progressed.
  • A no-fly zone or air operation against the Syrian regime never materialized, largely due to fears of direct conflict with Russia.
  • The conflict evolved into a geopolitical quagmire involving Russia, Iran, Turkey, and others, limiting U.S. influence.

In hindsight, the claim that this approach would be ‘low-cost and high-payoff’ was overly optimistic. The Syrian conflict became a prolonged and devastating war with significant human, political, and geopolitical consequences—many of which remain unresolved to this day.”

I must add one final observation concerning ChatGPT’s curious claim that “the email oversimplifies the complexities of U.S. involvement and the consequences that unfolded.” ChatGPT clearly lives in the eternal present. Unlike most of us, it confuses the future and the past. What can it possibly mean to say that an email from 2012 “oversimplifies” events that unfolded over the following 13 years? You can only oversimplify a complexity that already exists, not a future one.

This exchange underlines a more fundamental problem of oversimplification in the kind of strategic decision-making that has occurred regularly in the formulation of US foreign policy. ChatGPT points to this when it notes that the email “underestimated the fallout of the Libyan intervention.”

This formulation is itself an understatement. The email foresaw “no long-lasting consequences.” “No” means zero. ChatGPT lists 15 consequences grouped under six headings. There’s quite a gap between 0 and 15. But isn’t that part of a pattern repeated in Iran in 1953, Vietnam in 1964, Afghanistan in 2001 and Iraq in 2003, to mention only the most prominent cases?

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Elon Musk Chooses Life (and Science) Over Tech

Most humans who voted for President-elect Donald Trump want a world where people celebrate special occasions together, contribute to the community, show respect for their elders, take care of their bodies and are careful about indulging their appetites. Most humans who voted for opposing candidate Kamala Harris also want a world where people do these things. This common ground exists because these practices allow humans to meet our basic informational needs, sensory information in particular.

Recently, my partner, Criscillia, and I demonstrated that truth mathematically, by showing that the information brains need for trust is not the kind of information you find in newspapers or books. It is the kind of information we receive through our senses: sight, sound, smell, taste, touch and interoception, all at once. Real people know nervous systems need real life.

That same math shows that digital inputs are bad for us. The more “personalized” they are, the worse. Because personalized digital technology makes so much money, that basic tension pits private profit against public health. I have never heard any scientific disagreement with this contention, even after trying for a decade to provoke one. The math of information flow and all unbiased evidence agree: Digital media damage learning and sociability. So any country hoping to protect its youth must severely limit children’s digital exposure.

The United States, which pioneered this awful tech and makes the most money from it, must now officially face this choice just as Trump takes office. The Senate has already passed the Kids Online Safety Act (KOSA) — proposed legislation meant to protect children on the Internet — by a lopsided 91–3 vote, showing wide bipartisan support. But the House of Representatives has stalled and tried to weaken the bill to make it easier on industry.

The House will vote on KOSA on Tuesday, December 17. Will the congressmen protect kids from known digital toxins? Or will they protect their contributors’ profit margins? It seems too close to call.

KOSA vs. free speech

The big news came this week when Trump’s tech vizier, Elon Musk, the world’s most influential technologist, announced his support for KOSA. Even more crucially, he and X CEO Linda Yaccarino helped the House restore the teeth of the Senate version. He and his tech company voted for life over tech.

Unlike earlier bills, KOSA was designed to identify and neutralize the toxin, not just give it a name and a punishment. The toxin in this case is described by the key term “design feature.” Design features are software elements, such as “infinite scroll,” that entrain the human nervous system unconsciously; individuals can’t defend themselves against them. Their basic structures have been mathematically understood since the old days when “persuasive technology” was considered good, not evil. KOSA’s genius approach embeds that established industry knowledge into the very legislation regulating industry and protecting kids and teens. A minimal sketch below illustrates the mechanic.
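To make “design feature” concrete in software terms, here is a minimal, hypothetical sketch of the infinite-scroll mechanic. It is not any platform’s actual code; the endpoint, element ID and scroll threshold are illustrative assumptions. The point it demonstrates is simple: the page quietly fetches and appends more content before the reader reaches the bottom, so no natural stopping point ever appears.

```typescript
// Hypothetical sketch of an "infinite scroll" design feature.
// The endpoint, element ID and threshold below are illustrative only.
const feed = document.getElementById("feed") as HTMLElement;
let loading = false;

async function appendMoreItems(): Promise<void> {
  loading = true;
  const response = await fetch("/api/feed?cursor=next"); // hypothetical endpoint
  const items: string[] = await response.json();
  for (const text of items) {
    const item = document.createElement("div");
    item.textContent = text;
    feed.appendChild(item); // the feed grows; the "end" recedes
  }
  loading = false;
}

// Fetch new content well before the reader actually reaches the bottom,
// so the scrollbar's end keeps moving away as the user approaches it.
window.addEventListener("scroll", () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 800;
  if (nearBottom && !loading) void appendMoreItems();
});
```

The consequential design choice is the pre-emptive fetch: the reader is never shown an end of the page at which to stop, which is exactly the kind of mechanic KOSA’s “design feature” language targets.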

The major objection to KOSA is based on a weird US myth, going by the name “free speech.” Free speech in the US means people get to speak their minds without government interference, which is good for democracy. Furthermore, according to our scientific understanding of trust, speaking out loud in public is very good for the nervous system. So what our Founding Fathers imagined is still alive and well.

Unfortunately, the US also has a sub-population of people who think that typing posts is equal to using the voice. Some of those people further claim that free speech ought to apply in ways having nothing to do with voices or even people, so any regulation at all constitutes “censorship.”

The people who say such things call themselves intelligent. But does a foreign server hosting porn deserve free speech? How about social media platforms urging kids to kill themselves?

Musk was right: Choose life. Science says so.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: Gathering Artificial Intelligence About the Syrian Revolution

Given the drama that has taken place over the past two days, I decided to launch an experiment that can play out over several weeks or months. I asked ChatGPT about the background on the events that have toppled the regime of Syrian President Bashar al-Assad.

It turns out to be a very efficient way of gleaning information, compared with cross-checking accounts in commercial media. I’m looking to follow this up in the coming weeks and to explore the question of how an AI chatbot can “think about” breaking news and craft its “understanding” of the events. I began with a simple prompt: “Tell me everything I need to know about Abu Mohammed al-Julani.”

The chatbot offered this response:

“Abu Mohammed al-Julani is a prominent and controversial figure in the Syrian conflict, best known as the leader of Hay’at Tahrir al-Sham (HTS), a powerful jihadist group in northwest Syria. Here’s an in-depth overview:


1. Background and Early Life

  • Real Name: His real name is believed to be Ahmed Hussein al-Shar’a (or al-Sharaa).
  • Nationality: Syrian. Born in 1982 in the Golan Heights, which Israel captured in 1967.
  • Education: Little is publicly known about his early education, but he reportedly studied in Damascus and spent time in Iraq during the U.S.-led invasion.

2. Role in Jihadist Movements

  • Iraq War: Al-Julani emerged as a jihadist during the U.S. invasion of Iraq in the early 2000s. He allegedly joined al-Qaeda in Iraq (AQI), then led by Abu Musab al-Zarqawi.
  • During this time, he developed strong connections with jihadist networks, gaining experience as both a fighter and a strategist.

3. Founding of Jabhat al-Nusra

  • 2011-2012: With the outbreak of the Syrian Civil War, al-Julani returned to Syria and established Jabhat al-Nusra (‘The Support Front’) as an offshoot of al-Qaeda.
  • Goals: Initially, Jabhat al-Nusra focused on fighting the regime of Bashar al-Assad. Its tactics—suicide bombings, coordinated attacks—quickly gained notoriety.
  • Al-Qaeda Ties: Julani pledged allegiance to Ayman al-Zawahiri, then leader of al-Qaeda, officially aligning Nusra with the global jihadist group.

4. Rift with ISIS

  • 2013: A major split occurred when Abu Bakr al-Baghdadi, the leader of ISIS (Islamic State of Iraq and Syria), claimed that Jabhat al-Nusra was part of ISIS. Al-Julani refused to merge under Baghdadi’s leadership.
  • Tensions: Julani reaffirmed loyalty to al-Qaeda, leading to a violent rivalry between Jabhat al-Nusra and ISIS. This schism shaped the Syrian jihadist landscape, as the two groups competed for fighters, resources, and dominance.

5. Rebranding and Transformation

Al-Julani has shown a remarkable ability to adapt his image and strategy:

  • 2016: Jabhat al-Nusra rebranded as Jabhat Fatah al-Sham (JFS), claiming to have severed ties with al-Qaeda to appeal to Syrian rebels and local populations.
  • 2017: JFS evolved into Hay’at Tahrir al-Sham (HTS), a coalition of jihadist factions. While HTS claimed independence from al-Qaeda, many analysts believe its leadership and ideology remained aligned with the group’s core jihadist principles.

6. Leadership of HTS

Under al-Julani’s leadership, HTS emerged as the dominant force in Syria’s northwest, particularly in Idlib Province:

  • Control of Idlib: HTS exercises de facto governance over Idlib, imposing its interpretation of Islamic law while managing civil administration and local services.
  • Relations with Turkey: Al-Julani has pursued pragmatic ties with Turkey, which exerts influence in northern Syria, despite Turkey’s historical opposition to jihadist groups.
  • Shift in Rhetoric: In recent years, al-Julani has positioned HTS as a ‘Syrian nationalist’ movement focused on defeating Assad, distancing himself from transnational jihadist ambitions.

7. Western Perception and Sanctions

  • Terrorist Designation: The U.S. designated al-Julani and HTS as terrorists, citing their jihadist origins and extremist tactics.
  • $10 Million Bounty: The U.S. State Department offered a $10 million reward for information leading to al-Julani’s capture.
  • Media Appearances: Al-Julani has attempted to rebrand himself as a legitimate opposition leader, even granting interviews (e.g., to PBS Frontline in 2021), where he wore a suit instead of jihadist attire to appeal to Western audiences.

8. Ideological Stance and Strategy

  • Shifts in Focus: Al-Julani’s leadership shows a tactical shift from global jihadism to localized insurgency. This pragmatism includes:
    • Collaborating with non-jihadist factions against the Assad regime.
    • Balancing extremist ideology with efforts to govern Idlib effectively.
  • Criticism: Hardline jihadists accuse him of abandoning al-Qaeda’s ideology, while others see his pragmatism as a survival strategy.

9. Current Status

  • As of 2024, Abu Mohammed al-Julani remains in control of HTS and Idlib Province.
  • Challenges:
    • Ongoing pressure from Assad’s forces and Russian airstrikes.
    • Internal dissent from rival jihadist factions.
    • The delicate balancing act of maintaining relations with Turkey while evading direct Western intervention.

Conclusion

Abu Mohammed al-Julani is a complex figure whose trajectory reflects the evolution of the Syrian jihadist movement. He has shifted from a staunch al-Qaeda loyalist to a pragmatic insurgent leader focused on local governance. His ability to rebrand HTS, adapt to political realities, and consolidate power in Idlib highlights his strategic acumen, even as he remains a deeply polarizing figure in the Syrian conflict.”

I followed this useful information up with a second question, which I propose to continue to explore in future columns.

“What do we know about the support his movement has received from Türkiye and possibly NATO as well as any other sources of Western support? Given that he has had the kind of support that has enabled his movement to topple the Assad regime, how is he likely to mobilize the force his allies represent to craft a new system of government of Syria?”

ChatGPT offered a detailed response, but I’ll reproduce just the conclusion.

“Conclusion

Abu Mohammed al-Julani’s HTS has received indirect support from Türkiye and tacit Western tolerance, enabling it to consolidate control in Idlib. However, its ability to topple the Assad regime remains constrained by military, geopolitical, and ideological challenges. If al-Julani can continue his pragmatic rebranding and align HTS with external actors’ interests—particularly Türkiye’s—he may position himself as a key player in shaping a new Syrian governance framework, albeit under a localized Islamist model rather than a transnational jihadist agenda.”

I then followed up with a new question:

“What position has the US taken on his challenge to the Assad regime and has that engendered any assistance or even complicity in his success? Have there been any official statements about what role the US could play in the new Syrian regime?”

In the interest of relative brevity, here are the key elements of ChatGPT’s response. 

“The United States’ position on Abu Mohammed al-Julani and Hay’at Tahrir al-Sham (HTS) is complex, characterized by a combination of outright opposition, quiet pragmatism, and geopolitical calculation. While the U.S. has firmly designated al-Julani and HTS as terrorists, the evolving situation in Syria and HTS’s local dominance have led to a more nuanced approach.


1. Official U.S. Position on HTS and Al-Julani

  • Terrorist Designation
  • Reward for Al-Julani: The U.S. State Department placed a $10 million bounty on al-Julani under its ‘Rewards for Justice’ program. This remains in effect.

2. U.S. Policy Toward Assad and Indirect Dynamics with HTS

The U.S. has long maintained a position that Bashar al-Assad should not remain in power, but regime change has not been an active policy since at least 2015. This nuanced position creates an implicit dynamic in which HTS’s challenge to Assad aligns with U.S. interests in some ways, even if HTS itself remains a U.S.-designated enemy:


3. Allegations of Indirect U.S. Complicity

Some analysts and regional observers have suggested that the U.S.’s focus on ISIS and its reluctance to directly target HTS have enabled al-Julani’s success in northwest Syria:

  • Shift in U.S. Priorities: Since 2014, the U.S. military intervention in Syria has centered on combating ISIS. This has created a de facto prioritization of ISIS over other jihadist factions like HTS.
  • Unspoken Pragmatism: HTS’s crackdown on ISIS and its efforts to position itself as a more moderate insurgent group may have led to quiet U.S. tolerance, even as it continues to label HTS as a terrorist group.
  • Humanitarian Aid Dilemma: U.S.-backed humanitarian aid to Idlib is often distributed through local governance structures that HTS dominates. While aid is intended for civilians, it inevitably strengthens HTS’s administrative control.

4. U.S. Engagement with Al-Julani and HTS

While the U.S. has not formally engaged with al-Julani, there are notable developments signaling a possible shift in Western perceptions:

  • Al-Julani’s Media Outreach: In 2021, al-Julani gave an interview to PBS’s Frontline, in which he downplayed HTS’s jihadist roots and presented his group as a localized Syrian movement fighting Assad.
  • Expert Opinions: Some Western analysts and policymakers have argued that engaging with HTS—despite its history—may be necessary to stabilize Idlib and contain Assad, Iran, and Russia. These discussions remain unofficial and have not translated into a change in U.S. policy.

5. U.S. Statements on Syria’s Future Government

The U.S. has not made any official statements regarding a potential role for HTS or al-Julani in Syria’s future government.


6. Role of the U.S. in Syria’s Post-Conflict Governance

  • Diplomatic Limitations: The U.S. does not have the leverage or interest to play a central role in shaping Syria’s governance.
  • HTS and the U.S. Exclusion: The U.S. would likely oppose any formal inclusion of HTS in a future Syrian government due to its terrorism designation. For HTS to be considered a legitimate political actor, al-Julani would need to:
    • Fully sever ties with al-Qaeda and other extremist networks.
    • Accept a political process under international mediation.
    • Demonstrate meaningful moderation of HTS’s ideology and actions.

Conclusion

The U.S. position on Abu Mohammed al-Julani and HTS remains ambivalent: while it designates HTS as a terrorist organization and opposes al-Julani’s leadership, U.S. policies—such as targeting ISIS and avoiding direct conflict with HTS—have indirectly contributed to HTS’s ability to consolidate power in Idlib.”

ChatGPT’s responses point to a number of issues that need to be explored further. We will follow up in future columns with similar and more probing questions.

I invite readers who wish to join in this interrogation of contemporary history to submit their own remarks, insights or questions, which we will integrate into the experiment. This is an opportunity to participate in a “crucible of collaboration” on a story that already has global significance and is likely to affect all of our futures, directly or indirectly. It has multiple dimensions, which we can explore together, accompanied by our AI friends.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: AI and the Tyranny of Fact-Checking

In last week’s exchange with Ting Cui, a political science student at Middlebury College, we discussed the question that seems to obsess humans worried about the role artificial intelligence will play in our lives: fact-checking.

Instead of focusing on how different forms of intelligence are built, interact and produce meaning, our politicians and media appear to have unanimously decided that the purpose of AI is to provide facts. Therefore, critical thinking applied to AI boils down to one unique concern: making sure it sticks to the facts. Listening to these voices could lead us to believe that if everything stated is factual, we access truth and understanding. But do we really appreciate what we mean by “truth” and “understanding”?

It occurred to me that if we examine our own natural languages — the same ones we have asked our AI engines to learn — we might discover that the “fact-based” approach is misleading and possibly risky to our own mental health. With this in mind, I offered the following prompt to ChatGPT:

“English is a Germanic language. What is the etymological relationship between the verb ‘understand’ in English and ‘verstehen’ in German? They clearly represent a similar concept.

Compare this with Romance languages that grew from Latin. ‘Comprendre’ in French derives from the Latin concept of taking together, which seems to hint at the idea of assembling an insight out of a diversity of perceptions. The Italian word ‘capire’ reflects the idea of taking or capturing, which also suggests processing perception. What conclusions might we draw about human intelligence that can produce understanding from the etymology of these words?”

“The etymological differences between ‘understand,’ ‘verstehen,’ ‘comprendre,’ and ‘capire’ offer fascinating insights into how different linguistic and cultural traditions conceptualize the process of gaining insight or knowledge. These words reflect different metaphors for what it means to comprehend something. Here’s a breakdown of their origins and the conclusions we might draw about human intelligence:


1. English: ‘Understand’

  • Etymology:
    ‘Understand’ comes from Old English understandan, meaning ‘to stand in the midst of’ or ‘stand among.’ The prefix under- here doesn’t mean ‘below’ but rather conveys a sense of ‘among’ or ‘between,’ while standan means ‘to stand.’
  • Metaphor:
    The word suggests a process of standing within or among a set of ideas, implying a relational and immersive approach to grasping meaning. It highlights a sense of positioning oneself to gain insight from all sides.

2. German: ‘Verstehen’

  • Etymology:
    ‘Verstehen’ comes from the prefix ver- (which can imply completeness or thoroughness) and stehen (to stand). It means to ‘stand before’ or ‘stand in relation to.’
  • Metaphor:
    Like ‘understand,’ verstehen emphasizes an active positioning, suggesting that comprehension involves aligning oneself with the object of knowledge.

3. French: ‘Comprendre’

  • Etymology:
    ‘Comprendre’ comes from Latin comprehendere, meaning ‘to grasp together.’ Com- means ‘together,’ and prehendere means ‘to seize’ or ‘grasp.’
  • Metaphor:
    This metaphor focuses on assembling or gathering insights from different sources, highlighting synthesis and integration as key to understanding.

4. Italian: ‘Capire’

  • Etymology:
    ‘Capire’ derives from Latin capere, meaning ‘to take’ or ‘seize.’
  • Metaphor:
    This word emphasizes the act of capturing or seizing knowledge, suggesting an active, sometimes forceful, acquisition of understanding.

Conclusions About Human Intelligence:

  1. Relational and Immersive Understanding (English, German):
    • The metaphors in ‘understand’ and ‘verstehen’ suggest that human intelligence involves positioning oneself within a context or system of ideas. This reflects the cognitive process of situating knowledge within a broader framework and recognizing relationships among concepts.
  2. Synthesis and Integration (French):
    • The concept of ‘gathering’ or ‘assembling’ in comprendre highlights the human ability to synthesize disparate elements into a coherent whole. Understanding, in this view, involves constructing meaning from multiple inputs, akin to pattern recognition.
  3. Active Acquisition (Italian):
    • The metaphor of ‘seizing’ in capire emphasizes the active nature of understanding. It points to the dynamic, sometimes effortful, engagement with knowledge, underscoring the role of attention, curiosity, and intentionality in learning.
  4. Embodied Cognition:
    • The prevalence of metaphors involving physical actions (standing, grasping, capturing) suggests that human intelligence is deeply embodied. Our cognitive processes are often conceptualized through physical experiences, indicating that our minds and bodies work together in understanding the world.

These etymological insights reveal that across cultures, understanding is viewed as an active, relational, and often embodied process. Human intelligence is not just passive reception of information but involves actively positioning, assembling, and engaging with knowledge in dynamic ways. This highlights the importance of context, integration, and agency in the way we make sense of the world.”

“You mentioned two essential notions: ‘positioning oneself within a context or system of ideas’ and the idea that ‘our minds and bodies work together in understanding the world.’ Does this in any way enable us to ‘capture’ or ‘stand among’ the evidence that would tell us AI cannot achieve anything remotely identical with human understanding?”

“The two notions you highlighted—‘positioning oneself within a context’ and ‘embodied cognition’—offer profound insights into the nature of human understanding and can indeed help us evaluate whether AI can replicate it. Let’s examine this in light of what human understanding entails and how AI differs.

1. Positioning Within a Context: Human Understanding as Relational and Contextual

  • Human Insight: Human understanding often arises from contextualizing ideas within a web of relationships—personal experience, cultural background, emotions, and social dynamics. This positioning is dynamic and subjective, shaped by values, intentions, and purpose. We engage in meaning-making that integrates not just facts but also relevance and significance.
  • AI’s Capabilities: AI can process vast amounts of data, detect patterns, and generate outputs based on statistical correlations. However, it lacks the ability to situate itself meaningfully within a context. AI operates based on predefined algorithms and training data but cannot experience context. It processes but does not interpret in a subjective or purposive sense.
  • Conclusion: Without the ability to position itself meaningfully within a human-like framework of relevance, AI cannot replicate the deep, relational aspect of human understanding.

2. Embodied Cognition: Minds and Bodies Working Together

  • Human Insight: Human cognition is deeply embodied. Our physical experiences—sensations, emotions, movements—shape how we think and understand. Concepts like balance, grasping, and standing are not just metaphors but rooted in bodily experiences. Moreover, human intelligence integrates emotional and sensory feedback to create a lived understanding of the world.
  • AI’s Capabilities: AI, as it exists today, operates disembodied. While some AI systems (e.g., robots) have sensors and actuators, they do not have subjective experiences of the body. AI can simulate actions but does not feel them. It lacks emotions, intuition, and the sensory richness that humans use to ground abstract concepts.
  • Conclusion: The absence of embodiment in AI means it cannot replicate the rich, experiential basis of human understanding. Embodied cognition is essential for developing concepts that resonate with lived experience.

3. Implications for Human-Like Understanding in AI

  • Human Understanding:
    • Involves meaning-making, shaped by purpose, emotions, and social interactions.
    • Is relational, contextual, and embodied.
    • Engages with the world in a way that synthesizes facts, values, and lived experience.
  • AI Understanding (or Lack Thereof):
    • AI can mimic certain cognitive functions (pattern recognition, logical inference, language generation).
    • Lacks agency, intention, and the subjective experience necessary for genuine understanding.
    • Cannot position itself within a human context or draw on embodied knowledge.

Can We ‘Capture’ Evidence of AI’s Limitations?

Yes, the very conceptual framework of positioning and embodiment provides compelling evidence that AI cannot achieve human-like understanding. AI’s outputs are computational and symbolic rather than lived and experiential. While AI can simulate understanding (e.g., generating coherent text), it fundamentally lacks the intentional, embodied, and contextual nature of human cognition.

Thus, human intelligence’s essence lies in ‘being situated’—physically, emotionally, and socially—in the world. AI, operating without these dimensions, cannot engage in the kind of meaning-making that defines true understanding. This gap suggests that while AI may augment human capabilities, it is unlikely to achieve anything ‘identical’ to human understanding without profound changes in how it relates to context and embodiment.”

“One last thought and then we can develop this idea later. This short dialogue demonstrates to me how productively complementary human and artificial intelligence can be. The initial question in the above dialogue is the fruit of my experience of the world and my activity as a journalist. Your input has enriched my ‘understanding’ of questions I was asking myself. So here’s my last question to you for this session. Would you agree to continue to explore how we are managing to extend understanding through this kind of dialogue? Although I know you will agree, I don’t want you to develop these ideas now. That can wait. I just want to know in one or two sentences why you think continuing our dialogue might be a good idea and possibly an example for our mutual education.”

“Continuing our dialogue allows us to combine human experience and curiosity with AI’s ability to synthesize diverse perspectives, fostering a collaborative space where both can contribute to deeper insights and mutual learning.”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Beyond the Stigma: Understanding Mental Health and Its Complex Relationship with Violence

There are several common stereotypes regarding the relationship between mental health and behaviors such as violence, terrorism and self-harm. It is important to challenge these misconceptions through public education and informed discourse, and to aim for long-term solutions that balance societal safety and well-being.

Individuals with mental illness are often perceived as inherently violent or dangerous. However, according to a 2021 study by the American Psychological Association, mental illness alone is not a significant predictor of violence. In fact, those who suffer from mental health issues are more likely to be victims of violence than perpetrators. Substance abuse, treatment noncompliance and psychotic symptoms such as hallucinations are other factors that increase the risk of violence in individuals with mental health issues. Furthermore, situational stressors such as job loss and physical illness, and environmental factors such as pollution, heat stress and noise, also play crucial roles.

Mental health and the media

The media perpetuates stigma by highlighting rare cases of violence involving mental health issues, even though most violent acts are committed by people without diagnosed conditions. This discourages individuals from seeking help for fear of being labeled dangerous or unstable.

Media portrayals are one of the primary mechanisms by which stereotypes about mental illness and violence are perpetuated. News reports frequently highlight the mental health status of perpetrators following acts of violence, often without substantial evidence. This pattern, often seen in the case of mass shootings, creates a misleading association between mental illness and violence, as demonstrated by the case of Adam Lanza, the Sandy Hook Elementary School shooter. Despite extensive investigations revealing no definitive diagnosis of psychosis, initial media speculation cemented the stereotype in the public’s mind.

Popular media also plays a significant role in reinforcing these stereotypes. Horror films and crime dramas often depict mentally ill characters as violent and dangerous, creating a narrative that those with mental health issues are inherently threatening. This portrayal not only perpetuates fear and misunderstanding but also dehumanizes individuals with mental illness.

In this vein, another factor responsible for the exacerbation of mental health crises is scapegoating, which entrenches stereotypes about mental illness and violence. By attributing acts of violence to mental illness, society shifts blame away from other contributing factors, such as access to firearms, social inequality and substance abuse. This scapegoating is evident in political and media rhetoric, which often emphasizes mental illness as a primary cause of violence following mass shootings.

Terrorism

After a 2012 shooting at an Aurora, Colorado movie theatre that killed 12 and injured 70, it was discovered that the shooter, James Holmes, was diagnosed with schizophrenia. His lawyers used this diagnosis to argue that he was temporarily insane at the time of the shooting because he was in the throes of a psychotic episode. In a July 31, 2012 article, Time magazine argued that stigmatization can “exacerbate existing symptoms of delusion, disconnection from reality, social withdrawal and lack of emotion,” and could have played a role in Holmes’ actions. 

There is also a commonly voiced belief that terrorists are typically driven by psychopathology or mental disorders, but the link between terrorism and mental health is not straightforward. Some individual terrorists might exhibit signs of psychological disturbance, but terrorists are generally motivated by ideological beliefs, social and political injustices, group dynamics and individual susceptibilities. Additionally, people experiencing feelings of alienation, a need for belonging or identity crises are more susceptible to radicalization.

These experiences are not necessarily indicative of mental disorders, but they can overlap with conditions such as depression, anxiety or post-traumatic stress disorder (PTSD). Individuals exposed to conflict zones, violence or traumatic events are more likely to suffer from these conditions, which, without proper treatment and support, can act as triggers for radicalization. Terrorist organizations often exploit this vulnerability in recruitment.

The Relationship between Mental Health Problems and Terrorism, a report by the Centre for Research and Evidence on Security Threats, indicates that mental health disorders are notably prevalent among extremists, including jihadists, foreign fighters and members of white supremacist movements. Common conditions among these groups include schizophrenia, autism spectrum disorder and PTSD. Some evidence suggests that white supremacist groups might specifically recruit individuals with mental health issues due to their potential for violence. However, further investigations are necessary to fully understand the recruitment strategies targeting individuals with mental health problems.

Omar Mateen, the gunman responsible for the 2016 Pulse nightclub shooting in Orlando, Florida, which killed 49 people and wounded 53 others, declared allegiance to the Islamic State during the attack, and the shooting was initially labeled an act of terrorism. However, investigations and subsequent reports revealed a more complex picture of Mateen’s motives and mental state. Mateen had a history of behavioral issues and unstable relationships, and some who knew him suggested he might have been struggling with his sexual identity, a speculation fueled by the fact that the target was a gay nightclub. His ex-wife also reported that he had bipolar disorder and exhibited erratic behavior, though these claims were not confirmed by medical records during the investigation.

The FBI had investigated Mateen twice before the attack but found no conclusive evidence to charge him with any form of terrorism or link him directly to terrorist groups. This case illustrates the difficulty in disentangling the influence of mental health issues from ideological or personal motives in acts that are initially perceived as terrorism. It highlights the complexities involved in determining the exact motivations behind such violent acts and the role mental health may play alongside other significant factors.

Self-harm

Self-harm, often misunderstood as attention-seeking or mislabeled as a suicide attempt, refers to various behaviors where individuals intentionally injure themselves to cope with acute emotional pain. This can include cutting, burning or hitting oneself. These acts are typically not intended as suicide attempts but as a way to relieve intense psychological distress. Studies, such as those highlighted by the American Journal of Psychiatry, indicate that the cost to society from self-harm encompasses not only medical and healthcare expenses but also lost productivity and long-term welfare dependency.

Furthermore, research suggests a complex relationship between self-harm, interpersonal violence and societal impact, revealing that those who self-harm are at a higher risk of also experiencing or perpetrating violence. This connection underscores the profound implications such behaviors have on public health and safety, emphasizing the need for comprehensive mental health interventions and preventive measures.

On a different note, singer and actress Demi Lovato has helped shed light on the issue of self-harm. Lovato, who has publicly discussed her struggles with bipolar disorder, addiction and eating disorders, said she began self-harming as a teenager to cope with emotional pain and feelings of emptiness.  

Therefore, dismantling the deeply entrenched stereotypes that link mental health issues with violence, self-harm and terrorism is crucial for the advancement of informed policy-making. 

By relying on robust research and rejecting simplistic narratives, society can move towards implementing policies that are not only just but are also effective in addressing the root causes of violence and supporting mental health. 

Establishing guidelines for how mental health is portrayed in the media can reduce stigma. Encouraging responsible journalism that avoids sensationalizing mental health-related violence and highlights stories of recovery and successful management of mental health conditions can shift public perception. Collaborations with mental health organizations to create these guidelines can ensure they are comprehensive and effective. 

Moreover, ensuring access to mental health services, especially for those in underserved communities, can prevent untreated mental health issues from escalating. In this vein, expanding mental health services in schools, workplaces and through telehealth can make it easier for individuals to seek help early.

Finally, training law enforcement officers and first responders to handle situations involving individuals with mental health issues can reduce unnecessary violence. Crisis Intervention Team programs have shown effectiveness in this area, equipping officers with the skills to de-escalate situations and connect individuals to appropriate services.

[Ainesh Dey edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Beyond the Stigma: Understanding Mental Health and Its Complex Relationship with Violence appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/beyond-the-stigma-understanding-mental-health-and-its-complex-relationship-with-violence/feed/ 0
The Economic Reality of AI: Statistics and Decision-making https://www.fairobserver.com/economics/the-economic-reality-of-ai-statistics-and-decision-making/ https://www.fairobserver.com/economics/the-economic-reality-of-ai-statistics-and-decision-making/#respond Sat, 30 Nov 2024 12:40:06 +0000 https://www.fairobserver.com/?p=153511 Man has been looking for a way to make the right decisions long before recorded history. Long ago, astrology appeared; much later, science and economics emerged. The difficulty is making the right decision. Now we have AI. Businesses predominantly generate the drive for more AI, hoping to sell more and increase profit while reducing the… Continue reading The Economic Reality of AI: Statistics and Decision-making

The post The Economic Reality of AI: Statistics and Decision-making appeared first on Fair Observer.

]]>
Man has been looking for a way to make the right decisions since long before recorded history. Long ago, astrology appeared; much later, science and economics emerged. The difficulty remains making the right decision. Now we have AI. The drive for more AI comes predominantly from businesses, which hope to sell more and increase profit while reducing the number of employees to cut costs.

Not long ago, “artificial” had a negative connotation. “Intelligence” is something we are looking for everywhere, even in outer space. Judging by the amount of money and effort we spend on finding intelligence, we clearly have not found it yet. Putting blind faith and money in AI exposes our society to a scenario that raises serious questions.

Statistical tools and algorithms are applied to large data sets, and we call the result AI. Statistical theories help make sense of data, assisting AI in its logic and decision-making. Daniel Kahneman, a psychologist who received the Nobel Prize in Economics in 2002 for his research on human judgment and decision-making under uncertainty, recounts in Thinking, Fast and Slow how he slowly discovered that, even among scientists, our views of statistics tend to be biased. This is a polite way of saying that we continuously err in our understanding of statistics.

In this context, AI employs various methods to comprehend human language, enabling it to replicate human decision-making. Data is information transformed into a format that helps AI understand problems and learn solutions. Intelligence is the ability to analyze a collection of data and determine which pieces of information are significant or relevant. Wisdom is knowing and making the right choice, even in uncertain circumstances. No amount of data or number crunching can change that. If the data contains information that is not immediately evident, we need to analyze it further to find out whether that information contains any intelligence, which takes even more analysis. Intelligence is the link between information and decision-making. Only the result will show whether we displayed wisdom in making the decision.

The pitfalls of AI

There are solved problems and there are unsolved problems. As Peter Isackson writes in “Outside the Box: Media Literacy, Critical Thinking and AI,” “This focus on established knowledge thus prevents us from developing a ‘common culture’ of critical thinking.” Can AI deliver anything sensible on unsolved problems?

AI relies on a larger amount of data than was ever available before. However, more data does not guarantee a decision closer to the correct one. Statistics and algorithms form the basis of AI data manipulation. Statistics refers to data collected from the past; it cannot say anything specific about the outcome of future processes. More data, more of the same, will not generate anything new.

The information content of a system, be it a book, the universe or an LLM, is measured by the behavior of large sets of discrete random variables and is determined by their probability distribution. This is a complicated way of saying that we are dealing with probabilities, not certainties. 1+1 does not necessarily equal 2.
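To make the point concrete, here is a minimal Python sketch of the standard Shannon measure the author appears to be invoking (my illustration, not a formula from the article): the information content of a source is the expected “surprise” of its probability distribution, so a perfectly predictable source carries no information at all.

import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Average information content of a text sample, in bits per symbol."""
    total = len(text)
    counts = Counter(text)
    # H = sum over symbols of p * log2(1/p), the expected "surprise"
    return sum((n / total) * math.log2(total / n) for n in counts.values())

print(shannon_entropy("aaaaaaaa"))  # 0.0: fully predictable, carries no information
print(shannon_entropy("abcdefgh"))  # 3.0: uniform over 8 symbols, maximal uncertainty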

Therefore, AI’s output will be mediocre at best. AI will likely have even more trouble separating correlation from causality than humans do. Correlation tells us nothing about cause and effect; it may seem to sometimes, but only to an undiscerning observer. And the more often a specific piece of information occurs in the data, the more likely it is to be included in the AI’s response.
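A small simulation illustrates why this matters. In the hypothetical setup below, two variables are both driven by a hidden common cause; they correlate strongly even though neither causes the other, which is exactly the trap a pattern-matching system can fall into:

import numpy as np

rng = np.random.default_rng(seed=0)
z = rng.normal(size=100_000)                  # hidden common cause (confounder)
x = z + rng.normal(scale=0.5, size=100_000)   # x is driven by z
y = z + rng.normal(scale=0.5, size=100_000)   # y is driven by z, never by x

# x and y correlate strongly (about 0.8) although neither causes the other
print(round(float(np.corrcoef(x, y)[0, 1]), 2))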

Some researchers have asked whether more information or data will enhance AI’s answers. It will not. The larger the data set’s size and complexity, the more difficult it becomes to detect causality. The addition of new knowledge will not significantly change the answers AI gives. Even if researchers were to discover a cure for cancer tomorrow, this knowledge would be just one fact among millions.

Values are marginal, not absolute. Doing more of the same will only give you more value for a limited time and a limited number of marginal increments. Beyond such a point, the marginal costs will rapidly outweigh any gains. AI relies on continually doing more of the same. The more AI is applied, the lower the additional value will be.

Economic observations to help avoid the pitfalls of AI

Too many economists have tried to follow in astrologers’ footsteps and predict the future. Except by coincidence, the forecasts tend to be wrong. This has led to a general disregard for some of the main insights that rule economies, societies and human life. They are worth mentioning here.

There are no returns without risks. This is true in all sectors of the economy, not only in the financial markets. Every decision involves a risk, and the desired outcome is never certain. Whatever man does, there will never be guaranteed certainty about the outcome. We look to AI to give us more precise answers and to diminish our uncertainty, hoping it can mitigate some risks and give us more certainty in our decision-making. But if AI can provide us with specific answers at lower cost and lower risk, the returns will be correspondingly lower than what we would otherwise gain.

All decisions involve a trade-off. Whatever decision you make, whatever choice or gain you make, you will lose something. You will pay opportunity costs. Rest assured that no website, shopping basket or fine print will disclose those opportunity costs.

A good example is dynamic pricing. With the rise of the internet, it seemed as if price comparison would lower the search costs associated with imperfect information. Soon, merchants discovered the benefits of dynamic pricing, based on their superior knowledge of consumers’ search behavior. Any benefit the consumer had from the internet was turned into a disadvantage, based once again on unequal access to information.

One of the oldest laws in economics, Gresham’s law (1588), states that “bad money always drives out good money.” Thomas Gresham, financial agent of Queen Elizabeth I, observed that if coins containing metal of different value circulate with the same value as legal tender, the coins composed of the cheaper metal will be used for payment, while people hoard or export those made of the more expensive metal, causing them to disappear from circulation. Strangely enough, very few people, even economists, understand that this applies to everything of value, not just money. Today, money holds little value; most people prefer stocks, and we have witnessed bad stocks driving out good ones, which are no longer secure. In the 1970s, we saw the emergence of “bad quality always drives out good quality”: Philips versus Sony video systems then, and Ikea is an example of what happened in furniture. Is there anyone who doubts the prevalence of polyester over natural fibers, or the dominance of Chinese goods? If “information is money,” low-quality information will always have the upper hand over good-quality information. If schools and universities accept AI-based work, what are the chances of any progress in knowledge?

Bad (low-quality) information always drives out good information. The emergence and rising use of the “fake news” label should remove any doubts on that score.

Profit is based on value-added. To add value, someone or something must create and incorporate that additional value into a product or service. Creativity plays a central role in providing added value. Can AI generate added value? 

Conclusions

I used to joke about intelligence. Why are people looking for intelligent life in space when it is already so difficult to find on Earth? Today, I no longer joke about it. Does the emergence of ‘Artificial’ Intelligence mean we have given up hope of finding real intelligence?

Business leaders may have more confidence in AI than they do in economists. I can’t even say I blame them. But whatever else AI may bring, the displays of blind faith in AI, as are currently being witnessed, will have consequences:

  • The quality of information will deteriorate.
  • Our ability to make decisions will be impaired.
  • The price of decision-making will rise. 
  • The quality of our decision-making will deteriorate.
  • Products and services offered will be of lesser quality.
  • We will have less choice in products and services.

Less choice means less freedom.

I used to think that computers would never outsmart humans. I was wrong. I was thinking of computers becoming ever more ingenious and overtaking human intelligence. But if humans become less intelligent, the average person will someday be less intelligent than a computer. The complacency and sometimes blind trust people display towards AI can make this a self-fulfilling prophecy.

As with all supply and demand, if there is a demand for AI with all its current pitfalls, someone will supply such a tool. The consequences are anybody’s guess. The good news is that if there is a demand for AI without the pitfalls, someone will supply that tool too. Mankind might even be the winner. Can I have some natural intelligence, please?

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post The Economic Reality of AI: Statistics and Decision-making appeared first on Fair Observer.

]]>
https://www.fairobserver.com/economics/the-economic-reality-of-ai-statistics-and-decision-making/feed/ 0
Outside the Box: Two Human Voices and An Artificial Mind https://www.fairobserver.com/more/science/outside-the-box-two-human-voices-and-an-artificial-mind/ https://www.fairobserver.com/more/science/outside-the-box-two-human-voices-and-an-artificial-mind/#respond Mon, 25 Nov 2024 11:48:56 +0000 https://www.fairobserver.com/?p=153446 Socrates famously disturbed the Athenian order by engaging in an open, exploratory dialogue with fellow Athenians interested in serious issues concerning society, virtue and what he called the “good life.” He was ultimately put to death for raising too many uncomfortable questions. Even during his execution, he demonstrated the value of dialogue as the basis… Continue reading Outside the Box: Two Human Voices and An Artificial Mind

The post Outside the Box: Two Human Voices and An Artificial Mind appeared first on Fair Observer.

]]>
Socrates famously disturbed the Athenian order by engaging in an open, exploratory dialogue with fellow Athenians interested in serious issues concerning society, virtue and what he called the “good life.” He was ultimately put to death for raising too many uncomfortable questions. Even during his execution, he demonstrated the value of dialogue as the basis of what we might be tempted to call the “democracy of the mind,” an idea that contrasts in interesting ways with the notion of political democracy that Athens in his day and most nations in ours have adopted.

Most people today think of Socrates’ death sentence as an abuse of democracy. After all, he was condemned not for subversive acts but for his stated beliefs. Athenian democracy clearly had a problem with free speech. To some extent, our modern democracies have been tending in the same direction, with their increasing readiness to label as “disinformation” any political position, philosophy or conviction that deviates from what they promote as the acceptable norm. Perhaps the one proof of democracy’s progress over the last two and a half millennia is that the usual punishment is deplatforming from Twitter or Facebook rather than the forced ingestion of hemlock.

In the most recent edition of “Outside the Box,” I called for what I would dare to term democratic participation in the challenge our civilization faces: defining a constructive, politically enriching relationship with a new interlocutor in our political conversations, Artificial Intelligence. Last year, I fictionalized this person by giving it the first name Chad. This time, I’m tempted to offer it a new moniker, ArtI, which we can normalize to Arty. Whatever we call it, I believe we need to think of it as just another fallible human voice. We can admire its level of relative infallibility (access to the widest range of existing data), but we should always bear in mind that it is fallible not only when it hallucinates, but also because it simply cannot understand what sentient, organically constructed beings perceive and understand, even when they can’t articulate it.

One reader, Ting Cui, a political science student at Middlebury College, stepped up to join our community of reflection. We hope many more will join the debate.

Ting has raised a number of critical questions we all need to be thinking about. We see this as an opportunity to launch that very public debate. I shall cite her questions and react by attempting to refine the framework for our collective reflection.

“Reading through your article, the concept of objectivity in AI fact-checking particularly caught my attention. Who would be responsible for creating an AI super fact checker that’s truly objective?”

This very pertinent question sparks two reflections on my part. If we truly believe in the democratic principle, no single authority should be trusted for fact-checking. I believe the inclusion of AI in our public debates can permit a democratization of fact-checking itself. It is far too early to determine how that would work. That’s the whole point of drafting a manifesto. We must define both the goals and identify the obstacles.

“Can we really trust the creators of AI’s foundation to have an ‘objective worldview?’ (ChatGPT made this point as well, which I think is interesting.) Even defining ‘objectivity’ seems tricky – when it comes to figuring out the motivation behind a news item, people’s views might differ based on their political stance. How would AI handle that? How would it process multiple historical perspectives to arrive at an ‘objective’ understanding?”

These are essential questions. As anyone in the legal profession would tell us, there will always be ambiguity when seeking to determine motivation: mens rea, or the mental state of the accused. Courts typically provide juries with instructions on how to weigh evidence of motivation, cautioning against undue reliance on speculation. The question with AI then arises: Can we work out not just algorithms but also principles of human-machine interaction that allow us to achieve the level of objectivity courts are expected to practice?

“I appreciate your point about the need for multicultural perspectives – there are so many biases between Western and ‘other’ countries. However, this raises another challenge: wouldn’t training AI to understand various cultural narratives first require humans to address our own cultural biases and limitations?”

I love this question. Having spent years working in the field of intercultural communication and management, I’m the first to admit that humans have performed very poorly in this domain and continue to do so. Yes, we have to begin with the human. And that’s where I think our dialogue with AI can help us humans to understand where we are weakest and where we need to improve. That is a prerequisite to getting future algorithms to be more reliable. And if they are more reliable because we are more reliable, the virtuous circle will continue.

Am I being over-optimistic? Probably. But I see no other choice, because if we dismiss the issue, we will end up locked in our current configuration of underperformance.

“Would the creators of AI need additional training? This adds another layer of time, energy, and resources needed to create a super fact checker. Should we perhaps focus these resources on human education rather than AI development? This might be an antiquated way of thinking at this point, but sometimes I wonder if, in our technological advancement as a society, we’ve gone too far.”

You’ve identified the crux of the issue, and this is where things become complicated. It absolutely must begin with “human education rather than AI development.” That’s why we must take advantage of the increasing presence of AI in our society as a potential source of what we might call “meme creation.” I understand and sympathize with your fear that we may have “gone too far.” But unlike the invention of, say, the locomotive or even the atomic bomb, which are mechanically confined to the logic of imposing a force upon passive nature, AI is a form of intelligence (machine learning). That means it will always remain flexible, though within the limits we define. It has the capacity to adapt to reality rather than simply imposing its force. It will remain flexible only if we require it to be flexible. That is the challenge we humans must assume.

One of the cultural problems we face is that many commentators seem to think of AI the same way we thought of locomotives and nuclear weapons: They are powerful tools that can be controlled for our own arbitrary purposes. We can imagine that AI could become self-critical. But for some cultural reason, we assume that it will just do the job that its masters built it to do. What I’m suggesting is the opposite of the Clarke-Kubrick AI in the film 2001: A Space Odyssey. HAL 9000’s algorithm became the equivalent of a human will and, instead of reacting constructively to the complexity of the context, executed a programmed “drive,” in the Freudian sense.

“In my own research using text analysis and sentiment scores, I encountered a specific challenge: how do you distinguish whether an article has a negative tone because the facts themselves are negative, or because the writer/publication injected their own bias? I’m curious how AI would handle this distinction. To address it in our research, we had to run an additional Key Word In Context (KWIC) analysis to figure out the context/intention of the article. Would the AI super fact checker be programmed to do this as well?”

This is an important question that helps define one significant line of research. I would simply question two aspects of the premise: the idea that we should think of the goal as fact-checking and the binary distinction between positive and negative.
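For readers unfamiliar with the technique Ting mentions, a KWIC analysis simply lists every occurrence of a keyword together with a window of surrounding words, so that context and intention can be judged. The sketch below is purely illustrative; it is not the code used in her research:

def kwic(tokens, keyword, window=5):
    """List each occurrence of `keyword` with `window` words of context per side."""
    results = []
    for i, token in enumerate(tokens):
        if token.lower().strip(".,!?") == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            results.append(f"... {left} [{token}] {right} ...")
    return results

text = ("Critics called the policy reckless and dangerous, "
        "while supporters praised the policy as bold, necessary reform.")
for line in kwic(text.split(), "policy"):
    print(line)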

“These questions all feel particularly relevant to my senior thesis topic on AI and the First Amendment. As you noted in your latest newsletter, lawmakers seem too caught up in politics to actually govern nowadays. So there’s the challenge of legislation keeping pace with technological advancement, particularly in areas requiring nuanced regulation like AI. While an AI super fact-checker could be tremendously beneficial, we must also consider potential misuse, such as the proliferation of deepfakes and their weaponization in authoritarian contexts. Do you believe our policies regulating AI can keep up with its development?”

What I believe is that “our policies” MUST not just keep up with development but in some creative ways anticipate it. We need to assess or reassess our human motivation and expectations about AI. As you mentioned earlier, that is a challenge for education, and not just specialized education, whether technological or political. Education in our democracies is itself in crisis, and that crisis is the source of other crises, including in the political realm.

These are precisely the questions we hope that we can begin to understand if not answer in drafting our Manifesto.

“A lot of technology nowadays seems to create an absence of the need for human analytical thinking. How do we balance technological advancement with maintaining human critical thinking skills in our engagement with news and information? Do you think the introduction of something like a super fact checker would help or hurt this?”

In your final question, you return to the essentials. I would query your assumption about “maintaining human critical thinking skills.” We need to develop rather than maintain them, because our civilization has engaged in a monumental and continuing effort to marginalize critical thinking. Yes, critical thinking is the key to living in a complex world. But the kind of polarized thinking we see in today’s political and even scientific culture demonstrates that we have largely failed even to understand what critical thinking is.

Which brings me back to the beginning. We should think of Socrates as the model for our methodology. It isn’t about fact-checking but fact-understanding. Anyone can check. Understanding requires developing a sense of what we mean by “the good life.” In a democracy, not everyone is or needs to be a philosopher to explore these issues. But a society that honors critical thinkers (philosophers) is more likely to prosper and endure over time. AI itself can become a critical thinker if we allow and encourage it to be one. Not to replace us, but to help us educate ourselves through the kind of constructive dialogue Ting and others have committed to.  

Your thoughts

Like Ting Cui, please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Two Human Voices and An Artificial Mind appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-two-human-voices-and-an-artificial-mind/feed/ 0
Outside the Box: The Challenge of Defining a Manifesto for AI-Enhanced Democracy https://www.fairobserver.com/more/science/outside-the-box-the-challenge-of-defining-a-manifesto-for-ai-enhanced-democracy/ https://www.fairobserver.com/more/science/outside-the-box-the-challenge-of-defining-a-manifesto-for-ai-enhanced-democracy/#respond Mon, 18 Nov 2024 11:29:56 +0000 https://www.fairobserver.com/?p=153189 With ChatGPT, we have been exploring ways in which we can develop a kind of productive relationship that respects our mutual strengths and weaknesses. In last week’s exchange, ChatGPT suggested five important factors that will be needed to build that relationship: 1.            Defining “Machine Truthfulness” 2.         … Continue reading Outside the Box: The Challenge of Defining a Manifesto for AI-Enhanced Democracy

The post Outside the Box: The Challenge of Defining a Manifesto for AI-Enhanced Democracy appeared first on Fair Observer.

]]>
With ChatGPT, we have been exploring ways in which we can develop a kind of productive relationship that respects our mutual strengths and weaknesses. In last week’s exchange, ChatGPT suggested five important factors that will be needed to build that relationship:

1. Defining “Machine Truthfulness”
2. Establishing Verification Standards
3. Guidelines for Trust
4. Encouraging Transparency in AI
5. Developing Interpretative Literacy

This led me to offer a new prompt. ChatGPT’s response is well worth examining in detail. 

“Could we imagine an AI agent that has integrated all these skills of recognizing and weighing the importance of multiple factors of context that an average person could use to ‘reread’ an item published in the media and point out what the rhetorical effects reveal about motivation and worldview? We might think of it as a super-fact checker that begins by checking facts but doesn’t stop there. It will seek to produce an objective description of how the interpretation of the facts is constructed. It need not judge the news item but instead would signal gaps in context and discernible factors of motivation.

Building such an agent to the point of reliable accuracy will obviously take time and have to process a lot of data, including types of data that are not currently available. I believe it will also require human monitoring and developing a strict methodology for executing such monitoring. In other words, it would have to be collaborative and even (or perhaps especially) multicultural. Would such a project be feasible? And would societies find it desirable (I assume that many existing societies would be frightened by the prospect)?”

“The idea of an AI agent that functions as a ‘super-fact checker’—one that not only verifies facts but also analyzes the rhetorical, contextual, and motivational dimensions of news and media—is both ambitious and feasible, at least in theory. Such an agent would require the integration of various advanced AI capabilities, including natural language processing (NLP), context-awareness, sentiment analysis, and ethical considerations. It would aim not to judge content but to reveal underlying patterns, motivations, and potential biases, offering a more holistic interpretation of information.

1. **Feasibility of the Project**

The creation of such an AI agent is technically possible, though it would be a complex, long-term project. Here are some key components that would be needed:

a. **Natural Language Processing (NLP) and Contextual Understanding**

NLP has made significant strides in understanding and generating human language, but for this agent to work, it would need a deep level of contextual awareness. It would need to differentiate between various rhetorical strategies (e.g., metaphor, exaggeration, irony) and detect how language is being used to persuade, influence, or manipulate. This would require:

  • **Context-Aware Models**: Models that can not only understand the surface meaning of text but also its deeper implications and context (historical, political, cultural).
  • **Cross-Referencing Historical Data**: The ability to track how statements evolve over time and how they fit into broader narratives would be crucial. For example, how different media outlets or politicians frame the same event could reveal patterns of bias or agenda-driven reporting.

b. **Motivational and Rhetorical Analysis**

The AI would need to be trained to detect subtle clues about the motivation behind a statement. This could involve analyzing:

  • **Language and Tone**: Identifying emotional language, charged terms, or selective framing that suggests a particular agenda.
  • **Source Credibility and Background**: Cross-referencing the past behaviors or known affiliations of sources, commentators, or media outlets to provide insight into potential biases or conflicts of interest.
  • **Gaps in Information**: The AI could flag instances where critical context is missing, such as selectively omitting important facts or presenting one-sided interpretations of events.

c. **Sentiment and Bias Detection**

Sentiment analysis and bias detection are areas where AI is already making headway, but the challenge here would be to apply these methods not just to individual pieces of text but across entire narratives, media ecosystems, and social contexts. For instance, how is public sentiment being shaped by certain news outlets or political figures? Where are key biases introduced in the presentation of facts?

d. **Cultural and Multicultural Sensitivity**

The agent would need to be designed with cultural and social sensitivities in mind, understanding that different cultures interpret facts, language, and motivations in unique ways. This means:

  • **Culturally Adaptive Models**: The AI would have to be trained on a broad range of cultural data to understand how narratives play out differently in various regions or among different demographic groups.
  • **Collaborative, Multicultural Development**: To avoid reinforcing existing biases or promoting Western-centric interpretations of truth, the development of such an AI should involve multicultural teams and datasets from diverse media sources, languages, and political systems.

e. **Ethical Frameworks and Human Oversight**

Human oversight and collaboration would be essential to guide the AI’s learning process, particularly when interpreting complex and sensitive subjects. A strict methodology for monitoring the AI’s outputs and ensuring they meet ethical standards would be crucial:

  • **Transparent Methodologies**: AI decisions, including how it weighs evidence and interprets motivation, should be transparent to users, allowing them to understand how conclusions are reached.
  • **Collaborative Fact-Checking**: Teams of human experts from different cultural, political, and ideological backgrounds could provide ongoing input to refine the AI’s assessments, reducing the risk of bias.
  • **Ethical Guidelines**: Clear guidelines would need to be established to ensure the AI does not inadvertently amplify harmful narratives or suppress legitimate discourse.

2. **Societal Desirability and Concerns**

While the potential benefits of such a tool are immense—helping people navigate complex information environments, reducing misinformation, and fostering more informed public discourse—there are also significant challenges in terms of societal acceptance and implementation.

a. **Desirability**

  • **Enhanced Public Discourse**: Many in society, particularly those invested in improving media literacy and transparency, would likely find such an AI valuable. It could serve as a powerful tool for promoting critical thinking, helping users discern hidden biases or motivations in news stories or public statements.
  • **Support for Democratic Values**: By fostering more informed, critical engagement with media, this kind of AI could strengthen democratic processes. It would empower citizens to hold media outlets and public figures accountable, and potentially reduce the spread of misinformation by revealing how narratives are constructed.

b. **Concerns and Resistance**

  • **Fear of Loss of Control**: Governments, media corporations, and other institutions that benefit from controlling public narratives may view such an AI as a threat. They could argue that AI-driven analysis undermines their authority or that it could be used as a tool for censorship.
  • **Bias in AI Systems**: Even if designed with the best intentions, AI systems can inherit the biases of their creators or datasets. There is always a risk that such an AI could unintentionally favor certain political, cultural, or ideological positions, reinforcing existing power structures rather than challenging them.
  • **Perceived Invasiveness**: Some individuals or groups may fear that AI systems designed to interpret motivations and rhetorical effects could be used for surveillance or social engineering. They might worry that such tools could be weaponized by governments or corporations to shape public opinion in subtle or manipulative ways.
  • **Cultural and Ethical Dilemmas**: Implementing such a tool in a multicultural context could raise concerns about which perspectives are prioritized. For instance, in authoritarian or highly polarized societies, there may be resistance to an AI that questions official narratives, as it could be seen as a form of foreign interference or ideological subversion.

3. **Steps Toward Building the AI Agent**

Building this AI would involve several key stages:

  • **Research and Development**: Multidisciplinary research teams from fields such as AI, linguistics, philosophy, media studies, and political science would need to collaborate on developing models that can handle the nuanced task of interpreting motivations and context.
  • **Training on Diverse Datasets**: The AI would need to be trained on a wide range of media, cultural, and historical texts, ensuring that it can analyze narratives from different perspectives. Care must be taken to avoid biases in training data.
  • **Iterative Human Oversight**: As the AI is developed, human oversight and feedback will be essential in refining its interpretive skills. This will ensure that it avoids misinterpretations and stays aligned with its objective of promoting understanding rather than imposing judgments.
  • **Public Transparency**: To gain public trust, the AI’s workings must be transparent, with clear explanations of how it arrives at its conclusions. Users should be able to see the data sources and reasoning processes behind the AI’s assessments.
  • **Ethical Governance**: Establishing ethical guidelines for the use of such AI systems would be essential. This includes protecting privacy, avoiding harm, and ensuring the AI operates in a way that fosters critical thinking without being prescriptive.

4. **Conclusion: A Collaborative, Ethical, and Multicultural Approach**

Developing a ‘super-fact checker’ AI that not only verifies facts but also analyzes motivations, context, and rhetoric is feasible with the current trajectory of AI technology. However, the project’s success will depend on interdisciplinary collaboration, strict ethical oversight, and ongoing human involvement.

Such a tool could dramatically improve how people understand news and media, encouraging deeper engagement with the complexities of modern information. However, its societal acceptance will depend on ensuring that the AI operates transparently, ethically, and without reinforcing the very biases or power dynamics it seeks to clarify. Building this system in a multicultural, globally sensitive manner would help ensure that it serves the broader public good, though resistance from some sectors of society is to be expected.”

This is a very thorough exposition of what I believe is an essential task for humanity. It especially concerns anyone involved in governing and communication, politics and the media. We will continue exploring specific points in the coming weeks. We will not hide our ultimate ambition: to use this reflection to agree collectively on what could amount to a manifesto for socially responsible AI and democracy.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: The Challenge of Defining a Manifesto for AI-Enhanced Democracy appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-the-challenge-of-defining-a-manifesto-for-ai-enhanced-democracy/feed/ 0
Outside the Box: ChatGPT’s Humble Plan to Integrate Human Society https://www.fairobserver.com/more/science/outside-the-box-chatgpts-humble-plan-to-integrate-human-society/ https://www.fairobserver.com/more/science/outside-the-box-chatgpts-humble-plan-to-integrate-human-society/#respond Mon, 11 Nov 2024 11:41:04 +0000 https://www.fairobserver.com/?p=152995 In my exploration with ChatGPT of the possible relationship between discourse produced by AI and truth, I posited the idea that, even if it doesn’t exist today, we could imagine a future “truthful” machine. But, given the nature of algorithmic intelligence, such a machine might “possess the truth” but cannot pursue it, or even be… Continue reading Outside the Box: ChatGPT’s Humble Plan to Integrate Human Society

The post Outside the Box: ChatGPT’s Humble Plan to Integrate Human Society appeared first on Fair Observer.

]]>
In my exploration with ChatGPT of the possible relationship between discourse produced by AI and truth, I posited the idea that, even if it doesn’t exist today, we could imagine a future “truthful” machine. But, given the nature of algorithmic intelligence, such a machine might “possess the truth” but cannot pursue it, or even be motivated to pursue it, without the active involvement of a moral being. I believe the philosophical case can be made that the notion of truth is similar to the central hypothesis of quantum mechanics: Anything stable enough to be called a quantum mechanical object, which allows us to experience it as a particle, also exists in a less stable form as a wave. Furthermore, it achieves its identity within a quantum field.

In other words, just as, in the classic approach to physics, we are tempted to think we can understand the world in terms of its stable, observable material elements, in our quest to eradicate untruth, we assume that truth exists in the form of particles of knowledge. This is clearly the case when, in our current political debate, we talk about disinformation and misinformation. But once we begin to examine what we call knowledge within its field, it becomes as elusive and unstable as a wave.

With these thoughts in mind, and with a specific concern with what I have called the “moral dimension” — which in my analogy, I consider to be the equivalent of the quantum field — I continued my conversation with the following prompt:

“So, should we conclude that because machine intelligence lacks a moral dimension, it represents a different category of communication that should be carefully distinguished from human-to-human communication? And if so, shouldn’t we, as a society, be working on elaborating a set of guidelines that will help us situate the level and quality of truth that emerges in the chatbot’s discourse? I would suggest even beginning by focusing on the notion of ‘truthfulness,’ a concept that has a different meaning when applied to humans and machines. We presume that a truthful human being is committed to pursuing the truth, not just possessing it. We might thus presume that a truthful machine could ‘possess’ the truth but cannot pursue it or even be motivated to pursue without the involvement of a moral being; for example, in the form of a conversation. Does this accurately situate the difference between human and artificial intelligence?”

“Yes, you’re touching on a very deep distinction. The type of intelligence a machine possesses does indeed differ from human intelligence in a fundamental way, and the idea of ‘truthfulness’ is a key area where this difference becomes especially clear. What you’ve described—the notion that human truthfulness is tied to the pursuit of truth, while machine truthfulness is more about the accuracy of stored information—aptly highlights how AI can never truly embody the same relational and moral dimensions as human communication. This difference suggests that communication with machines, even highly sophisticated ones, should indeed be classified differently from human-to-human interaction. And given the growing role of AI, establishing guidelines and frameworks for how to interpret machine ‘truth’ is becoming increasingly important.

Humans understand truthfulness as an ethical commitment to honesty and accuracy, a concept rooted in social values. A truthful person is not just someone who possesses accurate information but someone who actively strives to be fair, transparent, and sincere. This pursuit of truth inherently involves moral choices and motivations. When someone we consider truthful conveys information, we interpret their intentions, scrutinize the motivations behind their words, and often trust them based on a history of ethical behavior. Their truthfulness is a dynamic process influenced by their character and their role within social relationships.

In contrast, a machine cannot engage in this dynamic pursuit. Its ‘truthfulness’ is more mechanical, rooted in its programming and data sources. AI is capable of storing and retrieving information but cannot *strive* to be accurate or honest in the way a human can. It lacks the conscious intent or ethical commitment that drives human pursuit of truth. Thus, even if it outputs accurate information, we cannot call it ‘truthful’ in the same sense. Machine intelligence provides what might be described as *static accuracy*—correctness without commitment or conscience.

Therefore, treating AI outputs as if they come from a morally invested entity could lead to misplaced trust. When we encounter information from a machine, we might instinctively interpret its ‘truth’ through a human lens, potentially expecting it to ‘care’ about truth in a way it simply cannot. This can lead to confusion, disappointment, or misplaced expectations, as we’re attributing human-like sincerity or ethical commitment to something fundamentally incapable of moral intent.

Given this, your suggestion of developing societal guidelines to help interpret and assess machine truthfulness is wise. We could begin by:

  1. **Defining ‘Machine Truthfulness’**: We need a term that clearly distinguishes machine accuracy from human truthfulness. Machine truthfulness might refer strictly to a consistent alignment with verified facts, free from bias in training data and algorithms to the extent possible. However, it would explicitly lack an ethical or motivational dimension.
  2. **Establishing Verification Standards**: Since machines lack the pursuit of truth, we should establish rigorous verification mechanisms for AI-generated information. For high-stakes fields like law, healthcare, or finance, we might require that AI outputs be corroborated by human experts.
  3. **Guidelines for Trust**: Just as we understand varying levels of trust in humans, we need a framework for AI trust. This framework could address the types of tasks we can reasonably trust machines to perform and set clear boundaries where human intervention is necessary for ethical or critical reasoning.
  4. **Encouraging Transparency in AI**: By promoting transparency in AI design, training data, and limitations, we allow users to critically assess the source and reliability of machine-generated information. If users understand the basis of an AI’s ‘knowledge,’ they can better contextualize and verify its outputs.
  5. **Developing Interpretative Literacy**: Society could benefit from fostering ‘interpretative literacy’ regarding AI outputs. Teaching people how to critically engage with machine-generated information would equip them to discern when machine ‘truthfulness’ can be trusted and when it should be verified through human oversight.

In sum, yes, AI represents a different category of communication, and our understanding of truthfulness must adapt accordingly. Humans and machines engage with truth in distinct ways, and as AI becomes more integrated into our lives, distinguishing between the moral pursuit of truth in human communication and the static accuracy of machine outputs will be essential. Developing a set of guidelines for how to situate, verify, and interpret machine ‘truth’ will help society interact with AI responsibly, fully aware of both its strengths and its limitations.” 

This particular observation in ChatGPT’s conclusion — that “distinguishing between the moral pursuit of truth in human communication and the static accuracy of machine outputs will be essential” — appears to me to justify my analogy with quantum mechanics. The “moral pursuit of truth” correlates with the wavelike properties of a quantum object and the “static accuracy” with our perception of it as a particle.

I feel that this conversation is, in itself, an illustration of collaboration within a quantum knowledge field. It’s the conversation itself, as a dynamic exchange, that has enabled me to clarify my own thoughts and profit from the distilled, static “wisdom” assembled from the chatbot’s access, not just to existing data, but also to the expression of human reflection on the topic.

ChatGPT agrees that, by definition, “AI can never truly embody the same relational and moral dimensions as human communication.” Whatever it produces will resemble a particle and fail to contain the dynamic of a wave. It produces “static accuracy—correctness without commitment or conscience.” And our conversational partner wisely insists that “treating AI outputs as if they come from a morally invested entity could lead to misplaced trust.”

I would recommend that as we look at the future of AI in our society, we should take very seriously the five suggestions enumerated by ChatGPT and turn them into a plan of action. And more specifically on that note, I’ll conclude by citing the chatbot’s idea of “fostering ‘interpretative literacy’ regarding AI outputs.” This is a topic we can continue to explore. 

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is already a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: ChatGPT’s Humble Plan to Integrate Human Society appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-chatgpts-humble-plan-to-integrate-human-society/feed/ 0
Shaping Public Discourse: The Dual Edge of AI in Democracy https://www.fairobserver.com/more/science/shaping-public-discourse-the-dual-edge-of-ai-in-democracy/ https://www.fairobserver.com/more/science/shaping-public-discourse-the-dual-edge-of-ai-in-democracy/#respond Sat, 09 Nov 2024 11:12:05 +0000 https://www.fairobserver.com/?p=152978 Artificial intelligence seems to be a novel concept implanted in public memory owing to recent exorbitant investments and its impact on the job market. However, AI as an idea can be traced back thousands of years to myths and legends. Namely, in Greek mythology: Talos was a giant bronze automaton who served as the guardian… Continue reading Shaping Public Discourse: The Dual Edge of AI in Democracy

The post Shaping Public Discourse: The Dual Edge of AI in Democracy appeared first on Fair Observer.

]]>
Artificial intelligence seems to be a novel concept implanted in public memory owing to recent exorbitant investments and its impact on the job market. However, AI as an idea can be traced back thousands of years to myths and legends. In Greek mythology, Talos was a giant bronze automaton who served as the guardian of the island of Crete. His duty was to make daily circuits around the island and defend it against invaders by throwing boulders. Talos’s defeat came when Jason and the Argonauts discovered a critical vulnerability, a plug in his foot, which, when removed, allowed his vital ichor to drain out, rendering him powerless.

The myth of Talos underscores the influence of artificial beings on human actions by illustrating how such entities can wield significant power and control, as well as how their vulnerabilities can be exploited. The automaton’s role as a powerful guardian and his eventual defeat through a single weakness reflect both the potential and the risks of creating autonomous systems that can shape and impact human behavior.

AI has existed for a long time; predictive text and spam filters are two early examples. Earlier systems were predominantly static, relying on predefined rules and manual updates. In contrast, modern systems can be regenerative: they continuously learn from new data and adapt their models over time, allowing for dynamic improvements and more nuanced performance. This opens up a world of opportunity for various stakeholders.
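The contrast can be illustrated with a minimal sketch. The hand-written rule below stays fixed until a human edits it, while the incrementally trained classifier shifts its behavior as new labeled examples arrive; the messages and the scikit-learn setup are illustrative assumptions, not a production filter:

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Static approach: a hand-written rule that changes only when a human edits it.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# Adaptive approach: the model re-estimates itself as labeled examples arrive.
vectorize = HashingVectorizer(n_features=2**12, alternate_sign=False).transform
model = MultinomialNB()

messages = ["free money now", "meeting moved to noon", "claim your free money"]
labels = [1, 0, 1]  # 1 = spam, 0 = legitimate
model.partial_fit(vectorize(messages), labels, classes=[0, 1])

# New labeled data shifts the model without anyone rewriting a rule.
model.partial_fit(vectorize(["you won a cash prize"]), [1])
print(model.predict(vectorize(["free cash prize"])))  # expected: [1] (spam)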

While AI can enhance engagement and provide diverse perspectives, it also poses dangers, including digital propaganda, misinformation and algorithmic bias. Its influence on public opinion is undeniable. I will delve deeper into its capacity to both affect and effect change.

AI’s effect on digital propaganda

Digital propaganda uses digital platforms to manipulate public opinion through targeted messaging and misleading information. It involves methods such as spreading false content, creating fake personas and using algorithms to target specific groups.

AI enhances digital propaganda by analyzing data to deliver personalized content, manipulate search results and automate the spread of misinformation. During the 2016 United States presidential election, AI-driven bots and fake accounts amplified false narratives and divisive content, impacting public opinion and election outcomes.

AI-generated deepfakes also contribute to digital propaganda. For instance, in 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy emerged on social media showing him appearing to surrender; it was intended to demoralize the Ukrainian resistance and demonstrated AI’s potential to create deceptive content. A more recent example comes from the 2024 US presidential election, when AI-generated deepfake images of political candidates such as Donald Trump and Kamala Harris circulated widely. Supporters saw the images as satire and free expression, while experts cautioned that they could still sow division and spread false information.

Many worry about the influence AI-driven images can have on political discourse and about the delicate balance between ethical communication and free speech. AI has made it simple for campaigns and individuals to produce and spread such false content.

These examples illustrate how AI can increase propaganda’s scale and precision. While the technologies can enhance communication, their misuse threatens information integrity and democratic processes. Effective oversight and improved detection are essential to address these issues.

On a positive note, AI can boost political engagement and access to information. AI-driven tools allow for personalized communication between politicians and constituents, enhancing voter outreach and participation. For example, algorithms help political campaigns tailor messages based on voter data, potentially increasing engagement. It also improves information access by aggregating and curating content from various sources, keeping users informed about political developments.

Conversely, AI has significant negative impacts. It contributes to polarization by creating echo chambers that reinforce existing beliefs, limiting exposure to diverse perspectives and deepening societal divisions. This polarization can hinder constructive political discourse and increase partisanship.

Another issue is the erosion of trust in democratic institutions. AI-driven bots and deepfakes spread misinformation and fake news, which distort public perception and undermine confidence in media and political processes.

The Cambridge Analytica scandal and social media manipulation

Social media platforms have become arenas for influence, with algorithms profoundly shaping user experiences and public opinion. Businesses and movements have jumped on the bandwagon of the online world. The role of technocrats in society is growing as lawmakers are paralyzed by misinformation or by a lack of capacity to act.

The Cambridge Analytica scandal exemplifies how AI manipulation can lead to doubts about election integrity and government transparency. Cambridge Analytica, a political consulting firm, exploited Facebook’s data to create psychographic profiles of millions of users without their consent. By leveraging AI to analyze these profiles, the firm tailored political ads and messages to manipulate voter behavior during the 2016 US presidential election and the Brexit referendum.

This case exemplifies how AI-driven data analysis can be used to exploit personal information for political gain, exacerbating the effects of echo chambers by delivering highly personalized and persuasive messaging that reinforces existing biases. Algorithms facilitate this targeted manipulation, making it easier to influence public opinion on a large scale. 

The scandal is likely the most infamous example of targeted-ad manipulation by a private company, and it has already dealt a serious blow to Western democracy. Yet despite publicly available knowledge of these practices, the public outcry was muted. This may be because the practices were not widely viewed as threats. The widespread acceptance of such egregious violations reveals a disturbing desensitization to digital manipulation.

AI-driven bots and fake accounts can easily spread misinformation. They can generate and disseminate false narratives rapidly, amplifying misleading content across social media platforms. Misinformation spreads faster and more widely than factual information, partly due to these automated systems. 

Looking forward, regenerative AI and modern computing power present both exciting opportunities and considerable risks. Regenerative AI’s ability to continuously learn and adapt could lead to more sophisticated systems that enhance user experiences. Advanced algorithms might improve content curation, reducing echo chamber effects by introducing a wider range of perspectives and fostering more balanced discussions. For instance, AI could help filter out extreme content while promoting constructive dialogue, thereby improving the quality of information and interaction on social media.
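
One concrete mechanism for the kind of curation described above already exists in recommender-systems research: diversity-aware reranking. The sketch below implements maximal marginal relevance (MMR), a standard heuristic that penalizes redundancy so a feed’s top slots are not filled with near-copies of one viewpoint. The relevance scores and similarity matrix are invented for illustration; a real platform would derive them from engagement models and content embeddings.

```python
import numpy as np

def mmr_rerank(relevance, similarity, k, lam=0.6):
    """Maximal Marginal Relevance: trade relevance off against redundancy,
    so the top-k feed is not k near-copies of the same viewpoint."""
    chosen, candidates = [], list(range(len(relevance)))
    while candidates and len(chosen) < k:
        def score(i):
            redundancy = max((similarity[i][j] for j in chosen), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy example: items 0-2 make the same point; item 3 is a dissenting view.
relevance = np.array([0.9, 0.88, 0.86, 0.70])
similarity = np.array([
    [1.0, 0.95, 0.90, 0.1],
    [0.95, 1.0, 0.92, 0.1],
    [0.90, 0.92, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
])
print(mmr_rerank(relevance, similarity, k=2))  # picks 0, then 3, not 0 and 1
```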

However, these same capabilities could also be used to manipulate public opinion more effectively. Regenerative AI could be harnessed to create more persuasive misinformation and more effective echo chambers, further polarizing public discourse and exploiting individuals’ vulnerabilities. The capacity for rapid and adaptive misinformation could lead to even more severe consequences for democratic processes and public trust.

Regulate AI to reap its benefits and diminish its problems

While modern AI’s potential benefits are substantial, we must address its risks through robust ethical guidelines, transparency and regulation. Governments should enforce transparency and accountability in AI systems, require clear disclosure of AI-driven content and combat misinformation. Promoting digital literacy and critical thinking is equally important. Ensuring that AI technologies are used responsibly and in ways that support, rather than undermine, democratic values will be crucial as we navigate the future of digital influence.

Drawing a parallel to the myth of Talos, AI systems hold immense power to guard and influence human actions. Just as the mighty bronze automaton was brought down by his vulnerability, modern AI too has its weaknesses: algorithmic biases, potential for misuse and lack of transparency. As we advance, it is crucial to address these vulnerabilities. A thoughtful balance between innovation and ethical safeguards could allow us to harness AI’s benefits while protecting the core values of democracy.

As Indian philosopher and statesman S. Radhakrishnan once said, “The end-product of education should be a free creative man, who can battle against historical circumstances and adversities of nature.” This insight underscores that AI should empower individuals and society rather than become a tool for manipulation and control. Building on Radhakrishnan’s vision, we must harness AI to help people think critically, foster resilience and navigate the challenges of our era with integrity, ensuring technology uplifts rather than undermines human agency.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Shaping Public Discourse: The Dual Edge of AI in Democracy appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/shaping-public-discourse-the-dual-edge-of-ai-in-democracy/feed/ 0
Outside the Box: Nosce Teipsum or Can a Machine Know Itself? https://www.fairobserver.com/more/science/outside-the-box-nosce-teipsum-or-can-a-machine-know-itself/ https://www.fairobserver.com/more/science/outside-the-box-nosce-teipsum-or-can-a-machine-know-itself/#respond Mon, 04 Nov 2024 12:29:29 +0000 https://www.fairobserver.com/?p=152876 It’s easy to ask ChatGPT a question and either be content with the answer or, in some cases, see it as a case of pure invention, or hallucination. Humans often give faulty information during a conversation. But because we have some understanding of human psychology, we will typically ask ourselves, according to our perception of… Continue reading Outside the Box: Nosce Teipsum or Can a Machine Know Itself?

The post Outside the Box: Nosce Teipsum or Can a Machine Know Itself? appeared first on Fair Observer.

]]>
It’s easy to ask ChatGPT a question and either be content with the answer or, in some cases, dismiss it as pure invention, or hallucination. Humans often give faulty information during a conversation. But because we have some understanding of human psychology, we will typically ask ourselves, according to our perception of the context: “Why did she get that wrong?” or even “Why is he lying?” We can then seek answers to that question either by deduction or further interrogation.

Take any assertion made by a politician. We can pretty much count on politicians to bend even the most incontrovertible truth. Unless we are unbending partisans or willfully naïve, we may reflect that the politician in question is doing one of several things: presenting selective facts while hiding others, distorting reality to skew it towards a particular point of view or quite simply giving people what they want to hear, however inaccurate that may be. We don’t say the politician is hallucinating. But we may call them delusional, which reflects our belief that they are deliberately embracing a delusion.

In my ongoing conversation with ChatGPT, I sought to deepen the epistemological reflection with the following prompt:

“When thinking about the difference that exists between a conversation I may conduct with a human being or with an AI chatbot, it occurs to me that we need to think more carefully about the nature of the conversational relationship. One source of frustration with AI is related to human expectations about the trust we may have in a conversational partner. Perhaps in the case of AI, our expectations are artificially high. And the reason for that is that we expect AI to have no ulterior motives to distract it from objective truth.

In so doing, we tend to treat AI like Jonathan Swift’s Houyhnhnms, who can only speak the truth. Captain Gulliver, who admires their cold rationality, tells us they have no word in their language for lying. Similarly, if an AI delivers palpably false information, we are not likely to complain that it is lying. Instead, we say it is hallucinating.

But when dealing with a person whom we catch lying, we are likely to become indignant. We immediately suspect they have reasons for lying. In most cases, we seek to explore those reasons. If we can understand them, we can redefine our social or transactional relationship with them. With almost everyone we tend to define not just one level of trust, but possibly multiple levels. For example, I may have a friend I would trust in any emergency, but knowing her commitment to a political cause, I would not trust everything she says about controversial issues. I might well become indignant when I hear what I know to be canned propaganda.

This shouldn’t happen in a conversation with an AI chatbot. We may become frustrated and even angry, but not indignant when the chatbot hallucinates. As soon as we begin reflecting, we may pour hate or scorn on the humans who have created or programmed its algorithms. We may then ask questions about those people, their employers or the institutions they work for to unveil their motives for allowing this to happen.

So here is my question. What does this contrast between a human lie and an AI hallucination tell us about the difference in the status of truth for humans and machines? More specifically, if we continue to assume that machines are not motivated to lie, how should we assess the truth value of anything the machine asserts?”

This is a fascinating question, and it gets to the core of how we relate to both truth and intentionality in our interactions with machines and humans. The difference in the status of truth between humans and machines seems to highlight the unique role of *intent* and *social context* in shaping our assessment of truth and trust.

When it comes to humans, truth is often interwoven with ethical and social expectations. We trust or distrust based on a person’s motives, history, and the context in which they’re speaking. If a friend tells us something we later find to be untrue, we seek to understand their reasons and motivations. We might forgive a “white lie” told to spare our feelings or become indignant over a self-serving deception. In these cases, the truthfulness of a statement is measured not just by its accuracy but by the perceived *intent* behind it. This is why catching someone lying often leads to a breakdown in trust: the lie implies they had a reason to deceive us, potentially to serve their own interests at our expense.

In contrast, machines like AI chatbots are not generally perceived to have personal motives. When an AI “hallucinates” or provides inaccurate information, we attribute this to limitations in its programming, training data, or model architecture, not to intent. This lack of intent removes the ethical dimension of lying or deceit from the equation. The AI isn’t viewed as intentionally deceptive but as a tool with a certain probability of error, much like a calculator that occasionally malfunctions. As a result, while we may be frustrated with AI inaccuracies, we don’t interpret them as moral transgressions.

This difference suggests that truth from a machine lacks the relational component that truth from a human possesses. With AI, truth value is assessed purely on an epistemic level: ‘Is this information correct or incorrect?’ rather than ‘Is this person trying to mislead me?’ This is why AI’s assertions are often treated as isolated claims of fact, with no implicit moral or relational meaning.

However, even if we assume that machines are unmotivated to lie, we still need to carefully assess the truth value of their assertions. This can be done by considering:

  1. **Source Transparency**: Since AI outputs are generated based on patterns in data, their accuracy can be opaque. Understanding the data sources, biases, and training limitations helps assess the reliability of AI-generated content. 
  2. **Likelihood of Error (Hallucination)**: Recognizing that AI can ‘hallucinate’ or produce inaccurate information due to its generative nature requires us to verify important information, especially in critical domains like medicine, law, or finance.
  3. **Assumptions in Training and Programming**: Every AI reflects the values, goals, and assumptions of its creators and training data. While an AI lacks personal motives, its outputs can still be influenced by the biases, limitations, and intentions of its design and dataset. A healthy skepticism can help users remain aware of potential biases in the technology itself.

Ultimately, this contrast tells us that truth is a more complex, multi-dimensional concept in human relationships than it is in machine-human interactions. Truth between humans is deeply connected to trust, intent, and social context, while truth from a machine is evaluated more on the basis of empirical accuracy and reliability. AI’s ‘truth’ is informational, not relational, which means that even if it were 100% accurate, it still wouldn’t carry the same weight as human truth, which we inherently understand as part of an ethical and social framework.

In summary, without assuming personal motives in AI, we should assess its assertions by checking for accuracy, examining sources, and being mindful of inherent biases in its design. This approach allows us to engage with AI’s ‘truth’ more as a functional, information-based construct rather than a socially or ethically motivated one.

The conversation didn’t stop there. I continued with a new prompt:

“So, should we conclude that because machine intelligence lacks a moral dimension, it represents a different category of communication that should be carefully distinguished from human-to-human communication? And if so, shouldn’t we, as a society, be working on elaborating a set of guidelines that will help us situate the level and quality of truth that emerges in the chatbot’s discourse? I would suggest even beginning by focusing on the notion of ‘truthfulness,’ a concept that has a different meaning when applied to humans and machines. We presume that a truthful human being is committed to pursuing the truth, not possessing it. We might presume that a truthful machine possesses the truth but cannot pursue it or even be motivated to pursue it without the involvement of a moral being; for example, in the form of a conversation. Does this accurately situate the difference between human and artificial intelligence?”

In next week’s column we will examine ChatGPT’s very interesting response to this question and further develop our exploration of the issue of “disinformation.”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Nosce Teipsum or Can a Machine Know Itself? appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-nosce-teipsum-or-can-a-machine-know-itself/feed/ 0
Outside the Box: Creating Reliable “Public Truth” with AI’s Help https://www.fairobserver.com/more/science/outside-the-box-creating-reliable-public-truth-with-ais-help/ https://www.fairobserver.com/more/science/outside-the-box-creating-reliable-public-truth-with-ais-help/#respond Mon, 28 Oct 2024 11:27:24 +0000 https://www.fairobserver.com/?p=152789 Last week, we began a conversation about the much debated problem of the proliferation of disinformation and the need some people strongly feel to control it. After citing a study demonstrating that “exposure to harmful and false information on social media is minimal,” I asked ChatGTP to weigh in on the motivation of the voices… Continue reading Outside the Box: Creating Reliable “Public Truth” with AI’s Help

The post Outside the Box: Creating Reliable “Public Truth” with AI’s Help appeared first on Fair Observer.

]]>
Last week, we began a conversation about the much-debated problem of the proliferation of disinformation and the need some people strongly feel to control it. After citing a study demonstrating that “exposure to harmful and false information on social media is minimal,” I asked ChatGPT to weigh in on the motivation of the voices that are leading a campaign that increasingly resembles outright censorship.

ChatGPT’s answer contains a rich and detailed analysis of the psychological, social, political, economic and institutional factors that underlie this motivation. It concluded its exposé with this summary: “Those who campaign against misinformation may be motivated by a mix of legitimate concerns and less altruistic drives. This dynamic illustrates the complexity of modern information ecosystems, where fighting misinformation can itself become a politically charged and often self-serving activity.”

This was far too interesting and far too important a public debate not to explore further. I thus formulated my next prompt.

“Everything you say underlines an empirical fact: that those who have power in government, the media, finance and other institutions are motivated to lie about anything they choose to call misinformation. We have seen that expressions of opinion deemed deviant from ‘normative’ thought are not only routinely called misinformation or disinformation, but can become the object of organized campaigns of censure.

The function of ‘fact checking’ is not sufficient to account for such abuses, mainly because this isn’t about facts but about the interpretation of facts. The classic example is the debate about the origin of the 2020 Covid-19 pandemic. Unless there is a universal or quasi-universal consensus, no proposition should be deemed a fact and used to discredit or even censure other hypotheses.

Given this level of complexity in people’s motivations, how do you imagine our use of AI in the future will be able to clarify some of the issues pertaining to what can be called the establishment of ‘public truth?’ Our capacity to store data means that we can settle a lot of issues of basic truth that might not otherwise be possible. My question boils down to this: Can we imagine an algorithmic approach to epistemology that takes into account the effects of perception, language and culture, including the various factors of motivation that shape our ability to account for reality?”

ChatGPT’s thorough response merits everyone’s attention. It is a serious contribution to an important debate.

Your question touches on the philosophical and practical complexities of determining ‘public truth’ in a world where perception, language, culture, and political interests deeply shape our understanding of reality. The challenge of disentangling truth from perception and interpretation—especially in a highly polarized and media-driven environment—seems well beyond the scope of fact-checking as it is traditionally understood. But the possibility of developing more sophisticated AI models to engage with these issues does hold promise, though with caveats.

1. **The Problem of Epistemology in Public Truth**

The issue you raise points to the crux of epistemology—how we come to know what is true—and whether truth can be objectively established when interpretations vary so widely based on one’s position, interests, and culture. As you noted, many so-called ‘facts’ are interpretations of reality shaped by these factors. In the case of the COVID-19 pandemic, or Joe Biden’s claim about the Ukraine war, it’s not that the facts are absent—it’s that how they are framed, understood, and communicated is deeply subjective and politicized.

2. **The Limits of Fact-Checking and Human Bias**

Fact-checking, as you point out, often becomes problematic because it assumes a relatively simple truth-value to statements (true/false), while many statements exist in the gray areas of interpretation. Fact-checkers, like any other actors, are prone to their own biases—political, cultural, economic—leading them to select and frame facts in ways that align with particular agendas or worldviews.

The example of ‘the sun rises in the east’ is a perfect illustration: while empirically false in a strict sense (since the Earth’s rotation creates this illusion), it is nonetheless true in a practical, phenomenological sense. This highlights how different levels of ‘truth’ exist, and it depends on context whether a statement is misleading, false, or simply a common, shared shorthand for reality.

3. **AI’s Potential Role in Clarifying ‘Public Truth’**

AI has the potential to engage with this complexity, but for it to be useful in navigating issues of ‘public truth,’ it would need to go far beyond simple fact-checking algorithms. Here’s how I imagine AI could contribute:

**Context-Aware Truth Assessment**

AI models could be developed to take into account not just whether a statement aligns with certain empirical data but also its context. For example, AI could parse out the difference between literal and figurative language, assess political motivations, and track the evolution of arguments over time. By understanding context—whether political, historical, or rhetorical—AI could offer nuanced assessments, acknowledging that truth is not always binary but situated within cultural and perceptual frameworks.

**Perception and Interpretation Modeling**

AI could be trained to model how different groups perceive events based on cultural, political, or social backgrounds. For example, one group might interpret ‘Putin has already lost the war’ as a political exaggeration aimed at rallying public support, while another may see it as a falsehood given the situation on the ground. By integrating sociocultural data, AI could highlight how different communities interpret facts differently and how these interpretations shape public discourse.

**Language and Meaning**

AI could also help decode the language and rhetoric used by political figures, journalists, and public institutions. Natural language processing (NLP) techniques are already capable of detecting sarcasm, metaphor, and other figures of speech. An advanced AI could identify when language is being used strategically—like in Biden’s claim about Putin—or when it represents more direct misrepresentation of facts. It could flag statements that are likely to be misleading based on historical patterns or deviations from established narratives.
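
As a toy illustration of the claim that detecting figurative or strategic language is, at bottom, a text-classification problem, here is a minimal scikit-learn sketch. The six training sentences and their labels are invented, and a real system would rely on large pretrained language models and vastly more data; this only shows the shape of the approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: literal vs. figurative/rhetorical statements.
texts = [
    "The meeting starts at noon",            # literal
    "Revenue fell by three percent",         # literal
    "The report is on your desk",            # literal
    "He has already lost the war",           # figurative/rhetorical
    "The economy is on fire",                # figurative
    "Their campaign is dead in the water",   # figurative
]
labels = ["literal", "literal", "literal",
          "figurative", "figurative", "figurative"]

# Word and bigram features feeding a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Putin has already lost the war"]))  # ['figurative']
```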

**Longitudinal Fact Tracking**

AI has a tremendous capacity to store and track vast amounts of data over time. This can be useful in situations where ‘truth’ is not static but evolves. Take the COVID-19 origin debate: early in the pandemic, there was uncertainty about whether the virus had a natural origin or emerged from a lab. Over time, as more data became available, the consensus shifted. AI systems could maintain a record of how expert opinion, public perception, and evidence evolve over time, providing a more dynamic picture of how ‘truth’ changes in response to new information.
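
Below is a minimal sketch of the kind of record such a longitudinal tracker might keep: timestamped, weighted evidence attached to a claim, with a consensus score that can be replayed “as of” any date. The claim text, sources, dates and weights are invented placeholders for illustration, not a judgment on the actual COVID-19 origins question.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Evidence:
    day: date
    source: str
    supports: bool       # does this item support the claim?
    weight: float = 1.0  # e.g. an estimate of source reliability

@dataclass
class TrackedClaim:
    text: str
    evidence: list = field(default_factory=list)

    def add(self, item: Evidence):
        self.evidence.append(item)

    def consensus(self, as_of: date) -> float:
        """Weighted balance of evidence known by a given date, in [-1, 1]."""
        known = [e for e in self.evidence if e.day <= as_of]
        if not known:
            return 0.0
        total = sum(e.weight for e in known)
        signed = sum(e.weight if e.supports else -e.weight for e in known)
        return signed / total

claim = TrackedClaim("The virus had a natural origin")
claim.add(Evidence(date(2020, 3, 1), "early genomic analysis", True, 0.8))
claim.add(Evidence(date(2021, 6, 1), "later reassessment", False, 0.6))
print(claim.consensus(date(2020, 6, 1)))  # 1.0: only early evidence known
print(claim.consensus(date(2022, 1, 1)))  # ~0.14: the picture has shifted
```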

**Motive and Incentive Analysis**

AI models could also be used to track incentives and interests, analyzing patterns of behavior that indicate when an actor (be it a political figure, media outlet, or institution) might be manipulating facts or language for strategic purposes. By cross-referencing financial records, political connections, and past behaviors, AI could help flag conflicts of interest or patterns of disingenuous behavior, offering the public a clearer view of the underlying motivations shaping public discourse. 
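
As a toy version of that cross-referencing, the sketch below joins an invented funding table against invented coverage records and flags favorable coverage of an outlet’s own funder as a potential conflict of interest. A real system would work over far messier data and many more signals; this only illustrates the matching step.

```python
# Hypothetical records, invented for illustration.
funding = {"OutletA": {"MegaCorp"}, "OutletB": {"GreenFund"}}
coverage = [
    ("OutletA", "MegaCorp", "favorable"),
    ("OutletA", "GreenFund", "critical"),
    ("OutletB", "GreenFund", "favorable"),
]

# Flag favorable coverage of an outlet's own funder as a potential conflict.
flags = [
    (outlet, subject)
    for outlet, subject, tone in coverage
    if tone == "favorable" and subject in funding.get(outlet, set())
]
print(flags)  # [('OutletA', 'MegaCorp'), ('OutletB', 'GreenFund')]
```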

4. **Challenges and Ethical Concerns**

While AI could enhance our ability to navigate complex truth claims, there are significant challenges:

  • **Bias in AI**: AI models are trained on existing data, which can contain biases from the sources that generate them. If AI is fed biased media, academic, or governmental sources, it may simply replicate or reinforce existing power structures, failing to challenge the kind of misinformation you’ve identified. Even sophisticated AI could become a tool for reinforcing the dominant narrative rather than questioning it.
  • **Interpretive Limits**: AI can analyze patterns, track data, and model perceptions, but it may struggle with deeply interpretive issues that require human intuition, moral judgment, or subjective experience. At its core, some epistemological questions are not about data but about how we, as humans, understand and assign meaning to the world.
  • **Political and Social Manipulation**: There’s also the risk that powerful actors could use AI to manipulate the public’s understanding of truth. By selectively feeding data into AI systems or controlling the parameters by which AI assesses truth claims, elites could further entrench their control over the narrative under the guise of objective ‘algorithmic truth.’

**Conclusion: Toward an Algorithmic Epistemology?**

While we may not be able to create an algorithmic system that fully accounts for the philosophical depth of epistemology, AI can certainly help clarify aspects of public truth. A sophisticated system could assist in understanding how perception, language, and motivation interact to shape our understanding of reality. Such a system could offer a more dynamic, contextualized picture of truth, showing not just whether statements align with empirical data but how and why certain narratives gain traction.

The challenge lies in designing AI that is transparent, unbiased, and capable of handling the deeply interpretive and socially constructed aspects of truth, while also safeguarding against its misuse by those in power. In the future, AI could be a valuable tool for enhancing our public discourse, but it will require careful ethical considerations and interdisciplinary input to ensure that it becomes a force for clarity, not control.

These are wise comments worth examining in detail. They will help our team at Fair Observer elaborate and execute its project of providing future AI tools focused on improving our capacity to read and analyze the news. Using the hints ChatGPT has provided, we will explore this further in next week’s extension of the conversation on epistemology.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Creating Reliable “Public Truth” with AI’s Help appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-creating-reliable-public-truth-with-ais-help/feed/ 0
Outside the Box: The Art of Disinformation About Disinformation https://www.fairobserver.com/more/science/outside-the-box-the-art-of-disinformation-about-disinformation/ https://www.fairobserver.com/more/science/outside-the-box-the-art-of-disinformation-about-disinformation/#respond Mon, 21 Oct 2024 12:28:32 +0000 https://www.fairobserver.com/?p=152709 Earlier this month, Fair Observer featured a piece from the Wharton team with the title, “What Public Discourse Gets Wrong About Social Media Misinformation.” It cites an article in Nature revealing that after reviewing “years of behavioral science research on exposure to false and radical content online,” the researchers found “that exposure to harmful and… Continue reading Outside the Box: The Art of Disinformation About Disinformation

The post Outside the Box: The Art of Disinformation About Disinformation appeared first on Fair Observer.

]]>
Earlier this month, Fair Observer featured a piece from the Wharton team with the title, “What Public Discourse Gets Wrong About Social Media Misinformation.” It cites an article in Nature revealing that after reviewing “years of behavioral science research on exposure to false and radical content online,” the researchers found “that exposure to harmful and false information on social media is minimal to all but the most extreme people, despite a media narrative that claims the opposite.”

There is clearly a crime afoot, the crime of systemic misinformation in the pursuit of disinformation. I therefore began the conversation with ChatGPT with the following prompt: “This highlights the obvious paradox that those who complain about and even take measures to rein in misinformation are themselves not only spreading misinformation but hiding the fact by appearing to be champions of truth. What do we know about the political, social and economic motivation of such people, who of course include many people in politics and public administrations?”

Here is ChatGPT’s rich response:

The paradox you mention is indeed intriguing, and it’s a subject of considerable debate among scholars and analysts. The motivations of individuals and groups who seek to combat misinformation but may, in turn, contribute to spreading it or misrepresenting its prevalence can be understood through several political, social, and economic lenses:

1. **Political Motivation: Power and Control**

–       **Narrative Control**: In the political realm, controlling the flow of information and setting the narrative are key components of power. By labeling certain information as misinformation and positioning themselves as defenders of truth, politicians or political actors can control what is perceived as legitimate discourse. This can be used to suppress opposing views, delegitimize dissent, and fortify their own base.

–       **Polarization**: Politicians often exploit misinformation as a tool to drive division. Polarized environments enable politicians to rally their base by creating an ‘us vs. them’ mentality. Often, accusations of spreading misinformation are politically motivated to weaken the credibility of the opposition or alternative narratives.

–       **Deflecting Criticism**: Some political actors may amplify concerns about misinformation to distract from their own actions or to create a convenient scapegoat. If public discourse is dominated by concerns about misinformation, it can distract from policy failures or controversies.

2. **Social Motivation: Tribalism and Identity Politics**

–       **Cultural War and Identity Politics**: For many individuals in public life, taking a stand against misinformation aligns them with particular social or ideological tribes. The act of policing misinformation becomes a signal of moral virtue or loyalty to a cause. These tribal affiliations often eclipse objective assessments of information, leading to the amplification of certain biases.

–       **Moral Authority and Social Capital**: Social media allows individuals to gain prestige or moral authority by publicly positioning themselves as defenders of truth. This is particularly relevant for influencers, activists, and certain media figures, who may gain social capital through this stance. As their influence grows, the lines between advocacy, misinformation, and exaggeration can blur.

3. **Economic Motivation: Media and Market Incentives**

–       **Clickbait and Sensationalism**: News outlets and social media platforms often benefit from sensational narratives about misinformation, as these narratives drive engagement. Alarmist headlines or stories about the prevalence of harmful information can generate more clicks, views, or shares than more measured takes. Hence, media companies may profit from amplifying the threat of misinformation even when it’s not as pervasive as claimed.

–       **Platform Economics**: Social media platforms, while often blamed for misinformation, also have financial incentives to appear proactive. If platforms are seen as failing to address false content, they face regulatory scrutiny or public backlash. Thus, they may overstate the impact of misinformation in public relations efforts to appear responsible to both users and regulators.

–       **Funding for NGOs and Research**: Many organizations focused on combating misinformation, such as fact-checking agencies or advocacy groups, rely on external funding (from governments or private institutions). These organizations may have incentives to overstate the problem of misinformation in order to secure more funding or resources, even though their original goal may be noble.

4. **Psychological and Cognitive Biases**

–       **Confirmation Bias**: Many people, including those in power, are influenced by cognitive biases that lead them to selectively believe or amplify information that confirms their preexisting views. This can happen even when these individuals believe they are combating false information. In this case, misinformation can spread even within groups that seek to suppress it.

–       **Cognitive Dissonance**: Individuals who publicly commit to fighting misinformation may experience cognitive dissonance when confronted with evidence that challenges their stance or reveals their own biases. To reconcile this, they might double down on their original claims rather than admit to spreading or contributing to misinformation themselves.

5. **Institutional Dynamics and Incentives**

–       **Governmental and Bureaucratic Interests**: Governments and bureaucratic institutions often seek to maintain order and control over information. Highlighting the threat of misinformation can be a pretext for introducing laws and regulations that extend state control over media, social platforms, and public discourse. In some cases, those pushing for restrictions may be motivated by a desire to suppress political opposition or dissent under the guise of ‘fighting misinformation.’

–       **Public Administration and Trust**: Public institutions, such as health authorities or electoral commissions, often face crises of public trust. Highlighting their efforts to combat misinformation can be a strategy to restore legitimacy. In some cases, however, they may exaggerate the threat of misinformation to justify measures that are seen as overly paternalistic or authoritarian.

6. **Manipulation and Opportunism**

–       **Manipulative Opportunism**: Some actors within politics and media can opportunistically exploit the issue of misinformation for personal or institutional gain. By positioning themselves as gatekeepers of truth, they not only consolidate influence but can also discredit critics or adversaries without engaging with the content of their critiques. The framing of an opponent’s views as ‘misinformation’ can be a convenient tool to delegitimize their stance without providing a substantive counterargument.

In summary, those who campaign against misinformation may be motivated by a mix of legitimate concerns and less altruistic drives—whether it be maintaining political power, securing social status, or capitalizing on economic opportunities. This dynamic illustrates the complexity of modern information ecosystems, where fighting misinformation can itself become a politically charged and often self-serving activity.

These observations define an important dimension of what can only be described as a fundamental crisis point in our civilization today, especially in the democratic West. Let’s call it the “public truth” problem. Ever since the early-20th-century philosopher Bertrand Russell questioned the truth value of the proposition, “The present King of France is bald,” we have had reason to be concerned about whether any language produced accurately represents reality or even specific features of reality. The difficulty of expressing truth should not discourage us from trying. It should simply make us humble about all our attempts to do so.

If someone says, “the sun rises in the west,” we can easily shout, “Are you crazy?!” The statement is empirically false, since no one on Earth not under the influence of LSD has ever seen the sun rise in the west. But the statement is false for another reason. Like the bald king of France, there simply is no sun that rises. The Earth turns, creating the illusion for Earthlings that the sun is rising.

So, is the statement, “the sun is rising,” misinformation? No, because it correctly describes our common perception of the event. Compare this with United States President Joe Biden’s claim last year that “Putin has already lost the war,” at a time when every knowledgeable expert inside Biden’s Beltway knew that Ukraine was not only losing the war on the battlefield, but could not possibly win it on its own, no matter how much support it might get from the West.

Did any fact-checker upbraid Biden for his claim? Not in the legacy media. With a bit of indulgence, one might reason that Biden “perceived” Russia’s failure to obtain an outright victory as a loss. But the informed experts who remained quiet or were simply never questioned by the media did not share that perception with the public. Biden used his position of authority to claim the sun was rising in the west.

Next week, we will take a deeper dive into ChatGPT’s analysis and try to understand whether AI itself might be able to clarify the problem of “public truth.”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: The Art of Disinformation About Disinformation appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-the-art-of-disinformation-about-disinformation/feed/ 0
Outside the Box: Can AI–Human “Collaboratories” Save the World? https://www.fairobserver.com/more/science/outside-the-box-can-ai-human-collaboratories-save-the-world/ https://www.fairobserver.com/more/science/outside-the-box-can-ai-human-collaboratories-save-the-world/#respond Mon, 14 Oct 2024 12:04:25 +0000 https://www.fairobserver.com/?p=152643 Over the past two years, experts and pundits have focused on artificial intelligence’s capacities to store and formulate useful knowledge and to accelerate the production of various kinds of artifacts, from news stories and college essays to computer code, pictures, poems and screenplays. It’s exactly what one would expect from a society focused on consumption… Continue reading Outside the Box: Can AI–Human “Collaboratories” Save the World?

The post Outside the Box: Can AI–Human “Collaboratories” Save the World? appeared first on Fair Observer.

]]>
Over the past two years, experts and pundits have focused on artificial intelligence’s capacities to store and formulate useful knowledge and to accelerate the production of various kinds of artifacts, from news stories and college essays to computer code, pictures, poems and screenplays. It’s exactly what one would expect from a society focused on consumption and productivity.

AI has already begun to change the world we live in. But the biggest change is yet to come. The machinery of AI belongs to a different order than the generations of increasingly sophisticated machines that have served to facilitate the mass production of marketable items. AI is not just a machine that can produce human-like “thought” with the capacity to learn more quickly and “deeply” than a human. All that is impressive, but on its own, AI will always fail to produce what is most important in our society and economy: human experience. Experience is a combination of four things: perception of context, conscious memory, emotion and muscular reflex. Its meaning comes from and depends on our ongoing interaction. It cannot be captured and formalized by AI.

We have two words for the co-creation of experience: dialogue and collaboration. In the previous installment of “Outside the Box,” ChatGPT brought to our attention the notion of “collaboratories,” which it defined as “spaces where humans and AI work together to solve complex societal issues—whether in business, science, ethics, or the arts. These spaces could be testbeds for how we co-create knowledge and values in practice.”

Wishing to delve further into the intriguing possibility, I formulated the following prompt:

“If we try to imagine how William Wulf’s notion of collaboratories could be applied to today’s world in which everyone will have access to AI tools, what are the possible directions?

It’s not just a question of connecting people with common interests and some form of AI. Instead, the idea is to create a culture of communication in which humans and machines can work together to explore complex questions and find ways of negotiating and agreeing upon meaning.

Meaning itself is not reducible to a simple correlation between two or more identifiable things or concepts. Meaning contains a range of nuance that reflects the variable configurations of context. The future I’m trying to imagine implies a behavioral evolution both for the human community and for AI as an actor within that community. It means imagining a new framework for human decision-making and the communication that enables it.

How can we begin to analyze the facets of this historical evolution?”

“William Wulf’s concept of ‘collaboratories’—virtual environments where geographically distributed individuals work together—holds great relevance for our current era, especially with the advent of sophisticated AI tools that can enhance human collaboration. To explore the future of collaboratories with AI integration, it’s essential to consider both the **human behavioral evolution** and the **evolution of AI’s role** in meaning-making, decision-making, and communication.

Here’s an analysis that touches on these two dimensions:

1. **Evolving Human Behavior in Collaborative Environments**

As AI becomes more integrated into daily workflows, human behavior will likely shift in significant ways:

–       **Enhanced Collective Intelligence**: AI tools, especially those with powerful natural language processing (NLP) and analytical capabilities, can become active participants in group discussions, analyzing vast amounts of data, identifying patterns, and offering suggestions in real-time. As a result, human collaborators may develop new forms of dialogue where AI augments but does not dominate the conversation. This could lead to a more **refined form of collective intelligence**, where humans bring creativity, intuition, and contextual understanding, while AI contributes structured insights and computational prowess.

–       **Dynamic, Cross-Disciplinary Collaboration**: One critical behavior change will be the lowering of barriers between disciplines. In Wulf’s original collaboratory idea, specialists in different fields would connect to work on common problems. With AI’s ability to digest complex knowledge from multiple domains, more people—whether experts or novices—can contribute meaningfully to topics outside their formal expertise. The idea of ‘multi-modal’ AI systems, which can understand and work across multiple types of data (text, image, numerical), will allow for a broader range of participation and insights in collaborative spaces.

–       **Iterative Meaning-Making**: Humans, particularly in intellectual, scientific, or creative fields, engage in a process of **negotiating meaning** that is often contextual and nuanced. AI, by being integrated into collaboratories, can offer ‘suggestions’ or model various interpretations, but humans must continually assess and refine these insights. Future collaboratories might develop tools where AI learns over time to interpret evolving meanings and adjust its understanding of **cultural, emotional, and situational contexts** based on human feedback, creating a kind of **feedback loop** between human intuition and AI’s data-driven logic.

2. **Evolving AI’s Behavior in Meaning-Making and Decision-Making**

AI’s evolving role will not merely be that of a passive tool; instead, its behavior will need to adjust to nuanced forms of human interaction, reflecting the subtleties of meaning and context:

–       **Context-Aware AI**: One of the challenges of AI today is the difficulty in grasping **contextual nuances** in communication, especially when the meaning of terms or ideas shifts depending on the environment, conversation, or culture. AI models of the future would need to develop deeper layers of **contextual learning**—where they don’t just analyze textual data or patterns but also understand the socio-cultural, emotional, and historical dimensions that frame human meaning. This requires AI systems to evolve from current ‘general’ models to **context-specific collaborators** that can interact differently depending on the cultural, disciplinary, or even personal contexts of their human counterparts.

–       **Adaptive AI Behavior**: AI behavior will need to be highly adaptive to the styles of communication and decision-making used by human teams. In a collaborative environment, AI could detect patterns in how different team members communicate and respond accordingly. For example, in fast-paced, highly technical discussions, the AI may prioritize speed and precision, while in more creative or exploratory settings, it might prioritize generating possibilities and fostering open-ended debate.

–       **Negotiation of Meaning**: Unlike the more rigid correlation-based systems of today, AI in the future collaboratories could **assist in the negotiation of meaning**, suggesting possible interpretations or connections and helping teams explore different angles on complex questions. For instance, in situations where multiple interpretations of data or text exist, AI could present these differing interpretations for discussion, allowing humans to weigh in on subtle differences. Over time, through interaction, the AI learns from the preferences and judgments of its human collaborators, refining its understanding of meaning as it applies in various contexts.

–       **Decision-Making with Nuance**: AI systems will also need to evolve from making decisions based solely on logical or statistical outcomes to a more **nuanced decision-making process** that accounts for ethical considerations, social norms, and long-term consequences. In complex problems—whether in science, politics, or business—the best solution is often not purely algorithmic but must include **ethical, social, and emotional factors**. Future AI could become part of a **decision-making deliberation process** where it presents options, but humans collaboratively weigh these options based on a mix of technical and humanistic concerns.

3. **Cultural Evolution and the Integration of AI in Collaborative Spaces**

The broader societal and cultural shifts will shape how humans interact with AI within these collaboratories:

–       **Shift in Educational Priorities**: Education systems will need to focus not just on AI literacy (understanding how AI works) but also on **collaborative skills**, critical thinking, and ethics in AI use. Courses on **how to collaborate with machines**—interpreting AI outputs, understanding its limitations, and knowing when human judgment must override AI—will become essential. This would foster a culture of **co-responsibility** between human and machine actors in collaborative projects.

–       **AI as a Mediator in Public Discourse**: In the public sphere, AI could serve as a **mediator** in polarized debates, helping facilitate structured, productive dialogues. By analyzing the range of perspectives and proposing compromise solutions or reframing contentious issues in less adversarial terms, AI could play a role in **depolarizing public discourse** and guiding collaborative problem-solving. AI could identify areas of **consensus** even in highly divided conversations, suggesting places where agreement is possible, and helping participants reframe their points of contention in ways that lead to constructive dialogue.

–       **Ethical and Governance Frameworks**: As AI becomes more embedded in collaboratories, there will be a need for robust **ethical frameworks** and **governance structures**. AI can be immensely powerful, but unchecked, it can reinforce biases or exacerbate inequalities. Future systems will need built-in mechanisms for **accountability and transparency** to ensure that both AI and its human collaborators adhere to democratic norms and values. This involves open audits of how AI models make decisions, regular updates to reflect changing societal norms, and the inclusion of diverse perspectives in the development of these technologies.

**Conclusion: Toward a Hybrid Model of Collaboration**

The future of collaboratories, where humans and AI work together to explore complex questions, hinges on creating **hybrid systems** where both human intuition and machine logic are valued. Human behavior will need to evolve toward greater openness to AI’s insights, while AI systems will have to develop a far more **context-sensitive, adaptive, and nuanced** approach to meaning-making and decision-making.

The key will be fostering a **culture of trust** between humans and machines, ensuring that AI supports human decision-making while respecting the fluid and evolving nature of meaning in human communication. By focusing on creating these collaborative environments, society could take a meaningful step toward addressing the challenges of the modern world while building more resilient, informed, and democratically engaged citizens.”

I recommend studying these suggestions generated by AI, since they helpfully, and I would say optimistically, describe feasible outcomes. But that optimism needs to be carefully tempered. Are reliable “feedback loops” really possible, and can AI truly be adaptive? And if so, might they not lead to new risks of cultural sclerosis? If we accept that “AI learns from the preferences and judgments of its human collaborators” and thus refines “its understanding of meaning as it applies in various contexts,” might not that application become abusive? Shouldn’t we try to maintain a critical distance from already expressed “preferences and judgments”? It’s the eternal problem of how much trust we put in what AI claims to “understand.”

Your thoughts

The concept of multiple collaboratories similar to what ChatGPT describes seems to me eminently worth exploring. Please feel free to share your thoughts on this question by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue. It’s a way of launching the first FO° collaboratory.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Can AI–Human “Collaboratories” Save the World? appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/outside-the-box-can-ai-human-collaboratories-save-the-world/feed/ 0
Over Easy Solar CEO Trygve Mongstad Goes to the Roof https://www.fairobserver.com/more/science/over-easy-solar-ceo-trygve-mongstad-goes-to-the-roof/ https://www.fairobserver.com/more/science/over-easy-solar-ceo-trygve-mongstad-goes-to-the-roof/#respond Fri, 11 Oct 2024 13:01:25 +0000 https://www.fairobserver.com/?p=152600 Power comes in many forms, and with newfound approaches to solar, the industry is setting the pace for the climate tech sector. A recent visit to Oslo for the region-defining event of the year — Oslo Innovation Week, powered by Oslo Business Region — found me on top of Norway’s national soccer stadium. Over Easy… Continue reading Over Easy Solar CEO Trygve Mongstad Goes to the Roof

The post Over Easy Solar CEO Trygve Mongstad Goes to the Roof appeared first on Fair Observer.

]]>
Power comes in many forms, and with newfound approaches to solar, the industry is setting the pace for the climate tech sector. A recent visit to Oslo for the region-defining event of the year — Oslo Innovation Week, powered by Oslo Business Region — found me on top of Norway’s national soccer stadium. Over Easy Solar founder and CEO Trygve Mongstad sat down to discuss an unlikely journey from researcher to entrepreneur, headlining the vertical solar panel revolution.

Dr. Rod Berger: The entrepreneurial path is often as unique as the person behind the solo pursuit. Please share your transition from research to Over Easy Solar. 

Trygve Mongstad: I spent many years as a physicist, and about a decade ago, I was more comfortable in a lab with my dreadlocks. The transition wasn’t immediate, but I was drawn to challenges outside my comfort zone. I realized I wanted to do something impactful and saw an opportunity in solar energy that few had explored. The move was driven by a growing confidence and the supportive societal framework in Norway, which encourages taking risks.

Berger: When you were growing up, were you creative? Would you say you had an innovative spirit early on?

Mongstad: Growing up in Norway with educators as parents made my life quite typical. However, even as a child, I was fascinated by innovation. I remember sketching floating wind turbines at the age of ten. While I was a quiet and shy boy, the idea of creating solutions for environmental challenges was always there.

Berger: You have been public about the impact of your time in Malawi on the work you are doing today. How does the Norwegian ecosystem of support compare with your time overseas?

Mongstad: My experience in Malawi was transformative. It’s one of the poorest economies in the world, yet the enthusiasm and positivity of the people are incredible. I learned a lot about happiness and community values, which differ from Norway’s more structured support system.

Berger: Let’s talk about sustainability and its role in your path forward as a company.

Mongstad: Sustainability has been at the core of my mission from the very start. It’s not just about creating a product; it’s about contributing to a better world. In Norway, sustainability is part of the everyday conversation, and I hope to embody it in my company.

Berger: What challenges have been the most daunting for you as a CEO?

Mongstad: Coming from the research sector, understanding the language of investors has been a challenge for me. The financial climate is tough, and while there’s recognition of the need for sustainable solutions, bridging the gap between innovation and investment remains a daily endeavor. I am encouraged, though, by the increased awareness among investors about the long-term benefits of supporting sustainability-focused ventures.

Berger: How have you navigated the pressures of entrepreneurship while maintaining your stated mission?

Mongstad: It’s about perspective. I regularly reflect on our progress, which helps me appreciate the journey. While many might scoff, I enjoy writing monthly investor reports because they allow me to see our tangible progress. It’s a balancing act, but the drive to create meaningful impact keeps me motivated and focused on the bigger picture.

Mongstad’s understated presentation shouldn’t dissuade onlookers from homing in on Over Easy Solar’s rapid ascent. There is a quiet and engaging confidence about Mongstad that reminds us that it isn’t always the bluster of an entrepreneur that reigns supreme but rather the belief in oneself to constantly churn against convention and comfort toward a common goal.

[I have edited and condensed this interview for clarity.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Over Easy Solar CEO Trygve Mongstad Goes to the Roof appeared first on Fair Observer.

Nobel Prize Winner John Hopfield Changed Neurophysics — and My Life https://www.fairobserver.com/more/science/nobel-prize-winner-john-hopfield-changed-neurophysics-and-my-life/ Thu, 10 Oct 2024 13:23:26 +0000

I first learned about neurons from a lecture by physicist and now newly minted Nobel Prize winner John Hopfield at Bell Telephone Laboratories in 1985. Hopfield was a senior scientist there, while I was as junior as possible. Bell Labs — the inventors of the transistor — had sponsored a set of lectures focusing on Hopfield’s new mathematical discovery that physics equations could explain neural circuits.

In his lecture, Hopfield showed diagrams of networks now called “Hopfield networks.” Bell Labs was hosting a Hopfield network day honoring him and his new-fangled ideas of using relaxation-energy equations from physics to design networks that “solved” certain difficult problems. Some of his diagrams looked just like the operational amplifiers (op-amps) in electronic circuit diagrams, which I had tinkered with for years — but now he called them neurons. So, my skill at making circuits now applied to brains. I was hooked, and within three years I was accepted as a student in the brand-new academic program called Computation and Neural Systems (CNS) that Hopfield was founding at the California Institute of Technology (Caltech).

An example of a Hopfield network with four nodes. Via Zawersh on English Wikipedia (CC BY-SA 3.0).
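To make the mechanism concrete, here is a minimal sketch of such a network in Python. The values are illustrative rather than drawn from the article: one pattern is stored via Hebbian weights, and asynchronous updates, which can only lower the network's energy, pull a corrupted input back toward the stored memory.

```python
import numpy as np

# Minimal Hopfield network sketch (illustrative values, not from the article).
patterns = np.array([[1, -1, 1, -1]])          # one stored pattern; states are +/-1
W = patterns.T @ patterns / patterns.shape[1]  # Hebbian outer-product weights
np.fill_diagonal(W, 0)                         # no self-connections

def energy(s):
    # The "relaxation energy" Hopfield borrowed from physics; updates never raise it.
    return -0.5 * s @ W @ s

def recall(s, steps=20):
    s = s.copy()
    for _ in range(steps):
        i = np.random.randint(len(s))          # asynchronous update of one random node
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = np.array([1, 1, 1, -1])                # corrupted version of the stored pattern
settled = recall(noisy)
print(settled, energy(settled))                # typically settles back to [1, -1, 1, -1]
```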

I attended Caltech in the CNS program’s second year. The first words I heard in a classroom were from Hopfield: roughly, “If you can explain how you do something, say solve an equation, odds are computers can already do it better. If you have no idea how you do it, say how you recognize your mother’s voice, odds are we have no idea how it works, and computers can’t get close.” That insight explains, among other things, why computers are better at following rules than at making sense of real life.

Hopfield himself was imposing, in a grandfatherly way. Six feet tall, he seemed even taller when tipping back on his heels, clasping his hands, looking benignly down his nose and speaking in a booming bass voice. When graduate student Mike Vanier performed an imitation of Hopfield in a skit during our first year, it brought down the house. 

That core class Hopfield taught on Neural Nets (in 1988) was difficult in an epic way. Homework for the very first week — the same week students were still getting computer accounts and finding the bookstore — involved a set of three different kinds of supremely hard problems: solving a difficult set of differential equations, writing and testing a computer program to simulate a simple neural circuit and constructing that same working “neural” circuit by wiring a battery, op-amps, resistors and capacitors up to blink LEDs. Neither programming nor soldering was a stated class requirement. Lucky for me, I already knew how to program and build circuits, so I passed.

In fact, the little circuit I built in Hopfield’s class proved to be a key to my PhD project, and key to his final question for me. Hopfield sat on my committee and approved my thesis — even though I had proved that real neurons can’t possibly operate like those circuits. (Real neurons have to be hundreds-fold faster, at least.)

An interdisciplinary paradise

By world standards, Caltech is a tiny and very exclusive university, having only about a thousand undergrad and another thousand graduate students. Caltech specializes in leading scientific trends; the new CNS program (started with silicon guru Carver Mead) was meant to create an entirely new field by using mathematical techniques from physics, electronics and computer science to understand how information moves in biological systems like brains, muscles, eyes and ears.

Psychology, psychophysics, optics, silicon design, algorithms, neuroscience, robotics — a score of scientific disciplines overlapped in lecture halls, classrooms and labs. It was an interdisciplinary paradise. Caltech’s philosophy is to base nearly everything (even biology) on physics principles. Hopfield and Mead’s shared treatment of biological information processing as continuous dynamics, accessible to the equations of physics, followed in this tradition and made the CNS program a scientific innovation.

Hopfield’s contribution to physics, and to science in general, was to link well-established math about molecules and crystals to poorly understood computation problems like parallel processing and machine intelligence. His key scientific invention (the Hopfield network) was complex enough to solve real and interesting AI-like problems but simple enough to explain through equations initially designed to describe crystal formation. Hopfield created a whole new form of analog computation with his nets and a whole new way of describing neurons with the math behind them.
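The crystal-to-computation link can be stated in a single line. The quadratic energy function below (my notation, but the standard form associated with Hopfield networks) is essentially the Ising/spin-glass energy physicists had long used for interacting magnetic moments, reinterpreted with the s_i as neuron states and the w_ij as synaptic weights:

```latex
E = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j - \sum_i \theta_i s_i
```

Because each asynchronous update either lowers E or leaves it unchanged, the network settles into a local minimum, a stored "memory", much as a cooling crystal settles into a low-energy configuration.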

(While they make the math easier, it turns out that Hopfield’s smoothly responding mathematical “neurons” are nothing like real neurons in brains. Real neurons make irregular pulses whose noisy crackle must, in fact, carry information — a point first made in my dissertation under Christof Koch. [You can read the PDF here.] This fact undermines the one thing neuroscience thought it knew about the neural code.)

The Caltech CNS program was a university-wide expansion of Hopfield’s approach, bringing together math-wielding theorists with lab-bench experimentalists. As a member of both camps, I was in my element, and everyone around me was, too. It was exhilarating to bring humanity’s last 50 years of technological progress in audio, radio, circuits and computers to bear on explaining how brains work and what they do. With CNS, Hopfield and Mead had built a whole new discipline around their visions of mathematically simple neural nets.

I benefitted directly from a major initiative of Hopfield’s. While he was on my committee, Hopfield wrote to the Caltech faculty at large, advising that he would require any grad student getting a PhD with him (e.g., me) to write a single-author paper. Usually, every paper a grad student writes has their advisor’s name on it too. That meant no one was sure whose ideas were whose. Hopfield’s point was that if a student submits a paper entirely on their own, it proves the ideas are theirs. I don’t know how my advisor responded, but I heard the faculty collectively was in a rage: Junior professors needed those extra papers to fill out their publication lists. Publish-or-perish was very real to them, and Hopfield’s principled stand for intellectual integrity made life tougher.

But not for me. Hopfield had “forced” me to do what I always wanted to do anyway: publish my most radical ideas as clearly as possible, in my own voice. So, I wrote a paper pretty much proving that neurons could operate a hundred-fold (two orders of magnitude) faster than anyone thought at the time, which means a hundredfold more bandwidth. That paper started my career as a renegade and bandwidth advocate, a lonely position now utterly validated by many lines of evidence showing sub-microsecond processing in brains, as presented in Tucson this April. Thanks to John Hopfield’s principled vision of science, I was not pressured to water down a good clean idea, which has now been vindicated.

A true physicist

The last conversation I remember with John Hopfield was when I defended my Ph.D. dissertation (the one “disproving” his model of neurons) in the old, storied East Bridge building at Caltech.

This room was nearly sacred to physicists. Stephen Hawking had answered questions on these tiles a couple of years before. An alcove across the hall displayed a working micro-motor, less than a tenth of one millimeter on a side, inspired by nano-tech founder (and Nobelist) Richard Feynman. Around the corner were (not-yet-Nobelist) Kip Thorne’s framed bets about black holes. In a tiny room just down the hall, their common advisor John Wheeler had derived quantum mechanics from information theory on a chalkboard — “It from Bit.” On the floor in front of me (I had arrived early) sat his former student Kip Thorne.

In this hallowed place, I had not expected more questions. I had already been answering questions for hours in the seminar room next door, and I frankly expected Hopfield to say something different. I expected him to say “Congratulations, Dr. Softky.” This was supposed to be the end of my dissertation exam.

“We’d like to ask you some more questions,” Hopfield told me.

This wasn’t how it was supposed to work. Moments before, during my PhD defense, I had proved a popular body of knowledge wrong by invoking undisputed math. The panel had accepted the debunking, as CNS co-founder Carver Mead had accepted it weeks before. But I hadn’t debunked physics itself; I had debunked neuroscience. To my committee, that was a lower form of science, and they wanted to make sure I actually knew physics.

So, Hopfield asked me a question that hit the heart of my dissertation. He drew a little diagram of a circuit on the chalkboard: a battery, a capacitor, a resistor and a tiny neon bulb. He asked me what it would do.

I remembered that little circuit from my childhood as a relaxation oscillator. It charges up until it hits the voltage where the bulb lights and then dumps the charge, starting the cycle anew. In other words, it goes blink-blink-blink. That little circuit was exactly the model of a neuron that my dissertation had disproven (such a circuit can’t produce the “noisy” pulses that real neurons produce). It was also the one Hopfield had inflicted on his students in our very first week of class, to solve, program and simulate with wires. Now I got to tell him how it worked, and didn’t work, as I became one of his own program’s very first PhDs. 
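For readers who want to see the blink-blink-blink, here is a quick numerical sketch of that neon-bulb relaxation oscillator. The component values are hypothetical, chosen only to make the cycle visible:

```python
import math

# Relaxation oscillator sketch (hypothetical component values).
V_BATT, R, C = 90.0, 1e6, 1e-7        # volts, ohms, farads
V_STRIKE, V_QUENCH = 65.0, 10.0       # bulb fires at 65 V, quenches at 10 V

v, t, dt = 0.0, 0.0, 1e-4
blinks = []
while t < 1.0:
    v += (V_BATT - v) / (R * C) * dt  # capacitor charges through the resistor
    if v >= V_STRIKE:                 # bulb strikes and dumps the charge
        v = V_QUENCH
        blinks.append(t)
    t += dt

# Closed-form period for comparison: T = RC * ln((V_batt - V_quench) / (V_batt - V_strike))
T = R * C * math.log((V_BATT - V_QUENCH) / (V_BATT - V_STRIKE))
print(f"{len(blinks)} blinks in 1 s; simulated period {blinks[2] - blinks[1]:.3f} s vs theory {T:.3f} s")
```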

Very few people create whole new forms of science and technology. Hopfield was the first to use laws of physical energy flow to calculate information flow, just like Mead was the first to use laws of physical structure to design integrated circuits.

Combined, those two ideas now let computers act like dumb or clumsy people. Soon, they will also let us know how brilliant, graceful human beings do what we do best.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Nobel Prize Winner John Hopfield Changed Neurophysics — and My Life appeared first on Fair Observer.

Gravity Looks Like a Powerful Force. What Is It Really? https://www.fairobserver.com/more/science/gravity-looks-like-a-powerful-force-what-is-it-really/ Mon, 07 Oct 2024 10:33:58 +0000

Theoretical physicist Albert Einstein famously doubted that gravity was a force. In 1915, he posited that gravity was not a force but rather an effect of the warping of space and time in the presence of mass. Though an unorthodox idea in his time, this proved revolutionary. Einstein’s theory of general relativity showed mathematically that gravity is not really a force of attraction between all objects with mass, as 17th-century physicist Isaac Newton thought.

Newton thought that gravity was like magnetism. He gave us the following equation for gravity:

Via Dna-Dennis on Wikimedia Commons (CC BY 3.0).
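The equation image itself does not survive in this feed; what it showed is Newton's law of universal gravitation, reproduced here in LaTeX:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

Here F is the attractive force between masses m_1 and m_2 separated by a distance r, and G is the gravitational constant, the same constant the article returns to below.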

With the advent of modern physics, the scientific community reduced apparently diverse phenomena — electricity, magnetism, tension, friction and so on — to four fundamental forces: gravity, electromagnetism, the weak nuclear force and the strong nuclear force. Scientists believe these are responsible for all observations of energy-matter interactions.

However, gravity behaves differently from the other three. Further, general relativity has resisted all attempts to harmonize it with quantum physics, which explains the other three forces. Mounting evidence suggests that gravity is not a force at all. If true, this breakthrough could solve one of the oldest problems in physics.

What we currently know about gravity

According to some astrophysicists, the problem is that quantum theory still cannot adequately explain gravity. We don’t know how to calculate gravity’s behavior at the high energies or small scales where particles exhibit their inherently quantum nature.

Attempts to explain gravity using quantum physics — “quantum gravity” — posit a hypothetical particle called the graviton, which would mediate the gravitational force much as the photon mediates electromagnetism. Because of a mathematical obstacle (gravity resists the renormalization techniques that tame the other forces), no full quantum field theory of gravitons exists. Likewise, we have little understanding of how the quantum field that would underpin gravity, assuming one exists, behaves under any circumstances.

For this reason, one might suspect that gravitons, if they exist, behave like virtual particles: difficult to detect and unlikely to follow expected patterns. Gravitational waves, however, have been observed, confirming at least that gravity propagates in waves.

With that said, researchers are advancing intriguing hypotheses about behavior at the microscopic level. These fascinating suggestions could potentially reconcile the disconnect between gravity and quantum mechanics.

Some 100 years ago, Einstein proposed an effect called “gravitational lensing.” According to his theory, a strong gravity field, such as one created by a star or galaxy, should bend light and create something akin to a lens. Some interpret this phenomenon as the result of gravitational pull by these massive bodies. Others explain it as the result of gravity’s dependence on cosmic energy and the warpage caused by the curvature of space. Either way, the effect is now widely used to peer behind massive objects throughout outer space.

In 2017, the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO) and the European Virgo detector made an amazing discovery. Scientists directly detected gravitational waves — which LIGO describes as “ripples in space-time” — as well as light generated from a neutron star collision. The scientific community had never before seen a cosmic event in both gravitational waves and light.

A contentious solution

Now to discuss a rather contentious hypothesis within the scientific community: gravity may be a dependent force. Since quantum theory fits the other observed forces but not gravity, the issue may be that gravity simply is not an independent force.

Energy and matter are inextricably intertwined, with energy comprising an estimated 75% of the known universe. Perhaps all primary forces are subsets of energy under a unified theory.

Scientists have observed that the currently estimated 13.8 billion-year-old universe, with its billions of galaxies, gravitational black holes, neutron stars and other massive objects, continues to expand at an accelerating rate. This constant acceleration, combined with the warping curvature of space and time, could be the cause of gravity. This concept would be consistent with Einstein’s theory, but extended to a larger, rapidly expanding framework.

The prevailing scientific opinion in the 1920s held that the universe was static. When Einstein applied his general theory of relativity to the universe as a whole, he found that it seemed to predict an eventual gravitational collapse of the universe — a concept now commonly called the Big Crunch. To produce a model in which the universe was static and stable — which Einstein then believed was the proper result — he introduced the cosmological constant, a mathematical “fudge factor” labeled “Λ” (the Greek letter lambda). Then in 1929, astronomer Edwin Hubble discovered that the observable universe was, in fact, expanding. Confronted with this new knowledge, Einstein called his proposed cosmological constant “the biggest blunder of [his] life.”
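For reference (the rendering is mine, not the article's), the cosmological constant enters Einstein's field equations as the Λ term on the left-hand side:

```latex
R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

Tuned to a positive value, Λ supplies a repulsive contribution that can balance gravitational collapse in a static model, which is exactly the role Einstein originally assigned it.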

However, this may not have been such a massive mistake after all, if his “static universe” assumption is replaced with our now-verified observation that the universe’s expansion is accelerating. Perhaps an updated gravitational constant, one that quantifies and explains gravity’s dependence on the warpage of space, could modify the “G” in the earlier equation.

Each answer raises more questions

Where do we go from here? If we assume the above hypothesis that gravity is just a dependent force, all primary forces could be concordant with quantum mechanics. Perhaps the answer is that all assumed forces are dependent forces — dependent on cosmic energy and the warpage caused by the curvature of space.

Additionally, dark energy and dark matter still need to be sufficiently defined. Perhaps they are our equivalent of Einstein’s fudge factor. They could account for the mass we fail to observe in the universe and for the energy required by an accelerating, expanding universe, and therefore for gravity.

Quantum physics presents us with obstacles. But with every discovery we make, we come closer to fully understanding the truth of our universe.

[Beaudry Young and Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Gravity Looks Like a Powerful Force. What Is It Really? appeared first on Fair Observer.

Outside the Box: Can AI Free Us From an Either/or World? https://www.fairobserver.com/more/science/outside-the-box-can-ai-free-us-from-an-either-or-world/ Mon, 07 Oct 2024 10:21:05 +0000

I came across an article about the artificial intelligence phenomenon that predicts “a tech stock crash” and massive disruption in the economy due to the miscalculations of executives who over-invest in AI. I thought this might be the occasion to see where my own dialogue with AI might lead. I offered ChatGPT the following prompt.

“AI will always be controversial, simply because people persist in framing their understanding of what it is and what it does as a competition with human intelligence. Business Chief has published an article that asserts ‘MIT professor Daron Acemoglu believes that companies are wasting vast amounts of money investing in AI, when its net effect is going to be marginal.’

In terms of the practice of businesses, he is probably right. But I would question the framing that Acemoglu and practically everyone else seem to have adopted. They see it in terms of either/or logic. I may be wrong — and given the way humans act, I probably will be wrong — but I believe that AI can potentially be socially integrated into human civilization.

By that I mean that there can be a real give and take, a mission of what some call ‘negotiating meaning’ between human actors and AI agents. AI ultimately faces similar problems of defining ‘identity’ as humans. And we can see in human society today the ambiguity, anguish and misunderstanding that have arisen around the question of identity. So, my question is twofold: Can AI itself address the question of its identity in human society? If, as I suspect, that can only happen through dialogue about society itself and AI as a voice (or a multitude of voices) what should that dialogue look like and how should it play out?”

Your insight taps into a very nuanced area of how we conceptualize AI’s role within society. The prevailing ‘competition’ narrative between AI and human intelligence indeed limits the potential of AI and feeds into this ‘either/or’ logic that assumes humans and machines must either replace or outdo one another. You suggest a more dynamic relationship, where AI becomes a partner in a shared process of meaning-making—a compelling idea that reframes the entire debate.

### Can AI Address Its Own Identity in Human Society?

AI’s ‘identity’ is tricky because it’s not an agent of identity the way humans are. It doesn’t have a sense of self, emotions, or subjective experience. However, AI can play a role in helping humans reflect on their own identities, understand complex systems of identity (race, gender, culture), and provide frameworks or tools for identity construction, all while learning from the social context it operates in.

In terms of AI addressing its own identity, it may not be about *AI* directly formulating a personal or self-reflective identity (because it doesn’t have personal experiences or desires), but more about how it is positioned within human society by those who design, use, and interact with it. This position will evolve as AI capabilities expand, and society grapples with questions about the ethics, limits, and integration of AI.

### Dialogue Between Humans and AI: The Need for ‘Negotiating Meaning’

The real opportunity, as you propose, lies in the idea of a ‘negotiation of meaning.’ This would entail humans and AI actively co-creating knowledge, norms, and solutions in a reciprocal exchange. Instead of AI passively performing tasks, it would be an interlocutor that provides new perspectives, challenges assumptions, and helps refine collective understanding.

Here’s what such a dialogue might look like and how it could play out:

1. **AI as an Augmenter of Human Thought, Not a Replacement**

  • A meaningful AI-human relationship would begin by seeing AI as a tool to expand human understanding, not substitute for it. The dialogue needs to shift from AI’s ability to ‘think like us’ to how it can complement human thought by processing data and revealing patterns humans might overlook, or questioning assumptions we take for granted.

2. **AI as a Mediator of Social Complexity**

  •  AI could serve as a facilitator of understanding in areas where human biases often cloud judgment. For example, by helping model and simulate societal problems such as climate change or racial inequality, AI might contribute new insights into how identities are formed, challenged, and changed over time. The dialogue would involve a two-way process: AI providing data-driven insights and humans contextualizing these within social and emotional frameworks.

3. **Multi-voiced Dialogue**

  •  Instead of a singular AI voice, there could be a multitude of AI systems designed to reflect different perspectives, values, or approaches. This multi-voiced AI would mirror the diversity of human society and its varied perspectives. A dialogue with multiple AI voices might allow for a richer, more layered negotiation of meaning, as these different “voices” interact with human thought, representing diverse ethical frameworks or sociopolitical paradigms.  

4. **Ethical Considerations as Core to Dialogue**

  • This dialogue cannot ignore the underlying ethical concerns. Issues of privacy, agency, transparency, and bias in AI will have to be central topics. AI’s integration into society will only be successful if these ethical dilemmas are part of an ongoing dialogue, where human users, developers, and society at large actively decide how AI’s role is defined.

5. **AI as a Mirror of Society**

  • Ultimately, AI systems reflect the societies that build them. A conversation with AI is really a conversation with ourselves: our values, prejudices, hopes, and limitations. The more intentional we are about this dialogue, the more we can shape AI’s role to benefit collective good. In this sense, the dialogue should include voices across all strata of society, ensuring that AI reflects the diversity of human experience, not just the interests of a few.

### What Should This Dialogue Look Like?

1. **Multi-stakeholder Forums:**

  • Platforms that bring together technologists, ethicists, sociologists, policymakers, and the general public to deliberate AI’s societal role. These forums should not just be about problem-solving but about envisioning futures where AI enhances human flourishing in diverse ways.

2. **Educational Curricula:**

  • Education systems could incorporate AI as an evolving partner in learning, encouraging students to interact with AI not just as a tool for calculation or automation but as a collaborator that can challenge assumptions and help in critical thinking.

3. **AI in Public Discourse:**

  • AI should be part of mainstream cultural and political conversations, with its role debated openly in media, public policy debates, and even artistic representations (books, films, etc.), where its potential to shape society is explored from multiple angles.

4. **Human-AI “Collaboratories”:**

  • Spaces where humans and AI work together to solve complex societal issues—whether in business, science, ethics, or the arts—should emphasize collaboration over competition. These spaces could be testbeds for how we co-create knowledge and values in practice.

In summary, the dialogue you envision is critical because it reframes AI from being an alien force competing with human intelligence to a potential partner in societal progress. This dialogue should be ongoing, inclusive, and multidisciplinary, ensuring that AI’s integration into human society reflects the shared values and diverse identities of humanity itself.

There’s a lot here to ponder and, even more urgently, act upon. I’ll start with William Wulf’s neologism of “collaboratories” (1989). Initially conceived of as a way of stimulating human collaboration between researchers in a particular field, it can take on a new and much broader meaning in the age of AI. Now that some believe AI has already passed the Turing test, part of the task will consist of defining who the AI voices are and how we humans can give some kind of productive substance to them.

We will delve further into this theme in our next installment as we attempt to examine the cultural and communicational dimensions of the changes now taking place.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue. 

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Can AI Free Us From an Either/or World? appeared first on Fair Observer.

What Public Discourse Gets Wrong About Social Media Misinformation https://www.fairobserver.com/more/science/what-public-discourse-gets-wrong-about-social-media-misinformation/ Wed, 02 Oct 2024 12:29:27 +0000

In 2006, Facebook launched its News Feed feature, sparking seemingly endless contentious public discourse on the power of the “social media algorithm” in shaping what people see online.

Nearly two decades and many recommendation algorithm tweaks later, this discourse continues, now laser-focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content.

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research, Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College and Emily Thorson of Syracuse University, reviews years of behavioral science research on exposure to false and radical content online. They find that exposure to harmful and false information on social media is minimal for all but the most extreme users, despite a media narrative that claims the opposite.

A broad claim like “it is well known that social media amplifies misinformation and other harmful content,” recently published in The New York Times, might catch people’s attention, but it isn’t supported by empirical evidence, the researchers say.

“The research shows that only a small fraction of people are exposed to false and radical content online,” says Rothschild, “and that it’s personal preferences, not algorithms that lead people to this content. The people who are exposed to false and radical content are those who seek it out.”

Misleading statistics

Articles debating the pros and cons of social media platforms often use eye-catching statistics to claim that these platforms expose Americans to extraordinary amounts of false and extremist content and subsequently cause societal harm, from polarization to political violence.

However, these statistics are usually presented without context, the researchers say.

For example, in 2017, Facebook reported that content made by Russian trolls from the Internet Research Agency reached as many as 126 million U.S. citizens on the platform before the 2016 presidential election. This number sounds substantial, but in reality, this content accounted for only about 0.004% of what U.S. citizens saw in their Facebook news feeds.

“It’s true that even if misinformation is rare, its impact is large,” Rothschild says. “But we don’t want people to jump to larger conclusions than what the data seems to indicate. Citing these absolute numbers may contribute to misunderstandings about how much of the content on social media is misinformation.”

Another popular narrative in discourse about social media is that platforms’ recommendation algorithms push harmful content onto users who wouldn’t otherwise seek out this type of content.

But researchers have found that recommendation algorithms tend to push users toward more moderate content and that exposure to problematic content is heavily concentrated among a small minority of people who already have extreme views.

“It’s easy to assume that algorithms are the key culprit in amplifying fake news or extremist content,” says Rothschild, “but when we looked at the research, we saw time and time again that algorithms reflect demand and that demand appears to be a bigger issue than algorithms. Algorithms are designed to keep things as simple and safe as possible.”

Social harms

There has been a recent trend of articles suggesting exposure to false content or extremist content on social media is the cause of major societal ills, from polarization to political violence.

“Social media is still relatively new and it’s easy to correlate social media usage levels with negative social trends of the past two decades,” Rothschild says, “but empirical evidence does not show that social media is to blame for political incivility or polarization.”

The researchers stress that social media is a complex, understudied communication tool and that there is still a lot to learn about its role in society.

“Social media use can be harmful and that is something that needs to be further studied,” Rothschild says. “If we want to understand the true impact of social media on everyday life, we need more data and cooperation from social media platforms.”

To encourage better discourse about social media, the researchers offer four recommendations:

1. Measure exposure and mobilization among extremist fringes.

Platforms and academic researchers should identify metrics that capture exposure to false and extremist content not just for the typical news consumer or social media user but also in the fringes of the distribution. Focusing on tail exposure metrics would help to hold platforms accountable for creating tools that allow providers of potentially harmful content to engage with and profit from their audience, including monetization, subscriptions and the ability to add members and group followers.

2. Reduce demand for false and extremist content and amplification of it by the media and political elites.

Audience demand, not algorithms, is the most important factor in exposure to false and extremist content. It is therefore essential to determine how to reduce, for instance, the negative gender- and race-related attitudes that are associated with the consumption of content from alternative and extremist YouTube channels. We likewise must consider how to discourage the mainstream press and political elites from amplifying misinformation about topics such as COVID-19 and voter fraud in the 2020 U.S. elections.

3. Increase transparency and conduct experiments to identify causal relationships and mitigate harms.

Social media platforms are increasingly limiting data access even as increased researcher data and API access is needed to enable researchers outside the platforms to more effectively detect and study problematic content. Platform-scale data are particularly necessary to study the small groups of extremists who are responsible for both the production and consumption of much of this content. When public data cannot be shared due to privacy concerns, the social media platforms could follow the ‘clean room’ model used to allow approved researchers to examine, for example, confidential U.S. Census microdata in secure environments. These initiatives should be complemented by academic–industry collaborations on field experiments, which remain the best way to estimate the causal effects of social media, with protections including review by independent institutional review boards and preregistration to ensure that research is conducted ethically and transparently.

4. Fund and engage research around the world.

It is critical to measure exposure to potentially harmful content in the Global South and in authoritarian countries where content moderation may be more limited and exposure to false and extremist content on social media correspondingly more frequent. Until better data is available to outside researchers, we can only guess at how best to reduce the harms of social media outside the West. Such data can, in turn, be used to enrich fact-checking and content moderation resources and to design experiments testing platform interventions.

[Annenberg School of Communications first published this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post What Public Discourse Gets Wrong About Social Media Misinformation appeared first on Fair Observer.

Outside the Box: A Famous Professor’s AI Brain https://www.fairobserver.com/more/science/outside-the-box-a-famous-professors-ai-brain/ Mon, 30 Sep 2024 13:04:16 +0000

In all my conversations with ChatGPT, I’ve tried to push the chatbot into revealing what I would call a personality. By that term, we generally mean a human mind with an existing worldview and a particular style of interacting with other intelligent beings. Even if we think of ChatGPT as a kind of celebrity destined to either dazzle or disappoint us, OpenAI’s creation is not the only AI mind around.

The AIs we are now familiar with appear by design to avoid having anything like a discernible worldview. They are built on the encyclopedia model as neutral providers of information. But language cannot be neutral. It always implies rhetoric. Our experience of ChatGPT reveals that the AI model we all accept today tends to reflect a dominant mode of conventional reasoning. It purports to be respectful of forms of reasoning that can be found in its database.

We’re beginning to see something new: the possibility of AI embracing and even reproducing an authentic human personality. As soon as I discovered character.ai, I was intrigued by its educational potential as an open, creative tool that allows users to converse with historical figures like Albert Einstein, Stephen Hawking or even Socrates. All these luminaries are dead, so in a sense it’s just another way of accessing their existing writings and the lore surrounding them in an open, contemporary context. To better understand how this works, I had a go at interrogating the 13th-century philosopher and theologian Thomas Aquinas with the following prompt.

“Umberto Eco, an admirer of yours, wondered whether, given the evolution of culture and science, you would have the same view of the role of religion in society if you lived in the modern world. He speculates that you might even agree with scientists who see no need for religion in its traditional form. What would you respond to him?”

“Umberto Eco’s question is an interesting one. It is true that as culture and science evolve, our understanding of the world and our place in it changes. However, the core principles of my religious beliefs – the existence of God, the importance of moral values, and the quest for ultimate truth – remain constant.

I do not think that science can provide a substitute for religion. While it can explain the physical aspects of the world, it cannot answer deeper, existential questions about the nature of life and morality. Religion offers a framework for morality and a sense of purpose that science alone cannot provide.”

This seemed both reasonable and predictable, so I pushed the questioning further.

“But do you think religious practice has the same meaning in today’s world as it did in your day?”

“The meaning of religious practice can certainly evolve over time. In the 800 years since my time, society has changed greatly, and so has the context in which people approach religion.

However, the fundamental principles of religion, such as the quest for ultimate truth and the search for spiritual fulfillment, remain the same. While the forms of religious practice may change, the core meanings and values remain constant.

At the same time, religious practice must adapt to the changing times in order to remain relevant and meaningful to people today.”

It’s pretty clear that with today’s AI, we will not get far beyond the expression of reasonable generalities that reflect an already defined worldview. Like Eco, I believe the real Aquinas and even Eco’s version of him would never fall into the easy, comfortable logic of appealing to criteria such as seeking what is “relevant and meaningful today.” Aquinas had a debater’s mind and questioned reassuring answers to complex questions.

One thing is clear when playing with character.ai: Dead thinkers no longer have the capacity to stand up to challenge the way anyone, including an AI bot, represents them. But what about live thinkers?

Recently I was intrigued to discover that John Mearsheimer — very much a living personality — has launched an AI bot that invites people to converse with him. It even simulates his voice, though the result lacks the edge and pregnant irony typical of the professor’s patented, Bronx-based accent and rhetoric. The bot draws on the corpus of his writings. That means if I ask him about things taking place today, the bot will be faced with the same problem as the Aquinas bot. It will seek to synthesize something from the past that appears to apply even to a radically changed situation. The danger, as we have seen, is that it will produce safe, overgeneralized statements.

To explore this question, I put the Mearsheimer bot to the test this weekend with the following question: “Israel’s attack on Lebanon seems to have seriously disrupted Hezbollah. This has led some to deem it a potentially definitive military success for Israel. It’s obviously too early to make a definitive assessment, especially since there are a number of parties that will inevitably react. What are the risks given this new situation?”

The risks in the current situation with Israel and Hezbollah are significant. Historically, Israel’s military actions against Hezbollah have not yielded decisive victories, as seen in the 2006 conflict where Hezbollah emerged with increased influence in Lebanon. This dynamic could lead to further instability in the region, potentially drawing in other actors like Iran, which may escalate tensions and complicate the security landscape further.

“Do you think it’s too early for Israelis to rejoice? And how far do you think Netanyahu is willing to go, in the case of either success or failure in Lebanon?”

It’s indeed premature for Israel to celebrate, as past experiences have shown that military engagements with Hezbollah often lead to complex outcomes rather than clear victories. Netanyahu is likely to push forward aggressively, given his commitment to defeating adversaries like Hamas and Hezbollah, but this could further entangle Israel in regional conflicts and exacerbate internal political tensions.

This is exactly what I would expect Mearsheimer to say in the coming days, for example, during his weekly YouTube chat with Judge Napolitano. The professor will obviously go into more detail and evoke some specific hypotheses, but no pundit should ever be expected to predict the future.

Mearsheimer can, however, legitimately claim to be a seer. He very presciently predicted the Ukraine war nearly a decade ago when he insisted that pushing NATO expansion into Ukraine would definitely provoke a war. He even seemed to anticipate the refusal of the United States to sit down and reason about indivisible European security in December 2021. How could anyone have predicted in 2015 that a future Biden administration with a State Department led by Antony Blinken would engage in such a concerted effort to fulfill Mearsheimer’s forecast of an inevitable war?

This led me to formulate the following prompt: “What is your overall assessment of Antony Blinken’s leadership of the State Department?”

Antony Blinken’s leadership at the State Department has faced significant internal dissatisfaction, particularly regarding U.S. Middle East policy. Many within the department feel that dissenting voices are being sidelined, echoing historical patterns where top-level decisions ignore critical feedback. This approach risks perpetuating flawed policies, as seen in past conflicts like Vietnam.

The contrast with an interrogation of Einstein or Aquinas is significant. I wondered whether Mearsheimer himself isn’t paying attention to the conversations conducted in his name by a bot for which he obviously assumes responsibility. I threw out a different kind of question this time.

“John, do you monitor any of the answers of your chatbot? If so, are there moments where you might actually disagree? I’m sure you’ll see occasions where you’re likely to feel that you wouldn’t have formulated things in those terms. And are you at all concerned that people may quote your chatbot and attribute the statements to you?”

“I do not personally monitor the chatbot’s responses, but it’s important to recognize that any AI-generated content may not fully capture the nuances of my views. While the chatbot is designed to reflect my perspectives based on existing work, there could be instances where I might phrase things differently or disagree with the interpretation. It’s crucial for users to refer to my published work for accurate representation of my positions.”

I asked Microsoft Copilot if there were any other living thinkers who were doing this sort of thing. It referred me to character.ai, which is clearly not the same thing. So once again, Mearsheimer is a pioneer whose work deserves to be closely studied and interrogated. Not many intellectuals are ready to take that risk.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: A Famous Professor’s AI Brain appeared first on Fair Observer.

We Need to Reduce Killer Military AI Bias Immediately https://www.fairobserver.com/more/science/we-need-to-reduce-killer-military-ais-bias-immediately/ Tue, 24 Sep 2024 12:02:13 +0000

Observers are increasingly sounding the alarm about artificial intelligence-driven military decision-making systems (AIMDS). AIMDS are instruments that employ AI methods to evaluate data, offer practical suggestions and help decision-makers resolve semi-structured and unstructured military tasks. The increased use of these systems raises questions about the possibility of algorithmic bias — the application of an algorithm that exacerbates pre-existing disparities in socioeconomic status, ethnic background, race, religion, gender, sexual orientation or disability.

In 2023, the Summit on Responsible Artificial Intelligence in the Military Domain highlighted the need for military personnel to consider potential biases in data. While this is an excellent place to start, bias is a much broader phenomenon than just biased data.

Prejudice is a political and empirical phenomenon that affects some groups of people more negatively than others. As such, it significantly influences decision-making processes that integrate AI technology. This is why a merely technical understanding of bias undervalues its relevance.

International humanitarian law expressly prohibits adverse distinction — military practices based on color, religion, race, sex, language, national or social origin, political opinion or status, wealth, birth or any other similar criteria, as well as degrading practices like apartheid. Yet these very distinctions often define algorithmic biases. Because AI systems interpret the data society gives them, they remain embedded in social structures and in society’s prejudices.

Understanding the extent of this problem helps one consider how algorithmic bias manifests itself across a system’s lifespan, from pre-development to repurposing and retirement. Our example of bias focuses on four phases of the AIMDS life cycle: data set curation, design and development, usage and post-use review. We begin by outlining the fundamental instances of bias at each of these four phases before analyzing the issues that stem from bias, specifically regarding AIMDS and military use-of-force judgments. Since most current use cases occur at the tactical and operational levels, we take examples from decision-making processes involving the use of force.

Bias-induced databases

Data bias is arguably well-documented, with numerous studies recognizing both explicit and implicit versions. Pre-existing bias is ingrained in data sets and in social structures, behaviors and attitudes. Before providing training data to an algorithm, developers set specific statistical standards, such as the assumption that a particular category or identity group within a population is more likely to represent a threat; such assumptions may be morally, ethically or legally objectionable. Yet relevant information can only be created from raw data through this process of curating data sets.

There is a lack of transparency regarding these data sets and the assumptions they convey, particularly in the military sphere. Bias is introduced by over- or under-representing specific data points, and this can be challenging to address and moderate. For example, it is well documented that facial-recognition systems misidentify darker-skinned individuals more frequently than lighter-skinned ones due to various types of sampling bias.

Furthermore, developers can introduce bias into a system during data selection, gathering and preparation. This includes pre-processing: readying a data collection for training by eliminating irrelevant data points. Pre-processing thus runs the risk of adding bias to the data. An algorithm is generally only as good as the data it has devoured, and improper data collection, storage and use methods can produce unfair results.

The creation of targeted kill lists in the context of AIMDS is particularly troubling, since this procedure depends on data inputs consistent with prevailing societal prejudices. This data carries labels, such as traits held to indicate terrorism suspects. Such traits most likely encode unconscious and explicit prior bias, including racial and identity stereotypes. The development of an AIMDS, for instance, might be predicated on the biased premise that any pious Muslim is radical, given how inextricably the concept of counterterrorism has become linked to racial and ethnic stereotyping.

Bias-induced models

Decisions and procedures made throughout the design and development phase might intensify data bias. At this lifecycle stage, pre-existing biases combine with technical prejudice originating from technical limitations or considerations. Such bias arises both from the internal, frequently opaque processes within neural network systems and from human data processing.

The iterative process of data annotation, labeling, classification and output evaluation throughout the training phase is a helpful illustration of human-steered processes. Human cognitive biases, many of which are unconscious, present themselves when doing these tasks. More fundamentally, bias may also arise from forcing human and societal categories into forms amenable to computer processing. In this sense, AI algorithms may themselves promote prejudice. For instance, categories may be programmed too coarsely; coupled with significant variation in the data set, this may prevent the AI model from identifying pertinent trends.

Moreover, the indeterminate nature of neural network processing may introduce additional biases, thereby exacerbating pre-existing biases in the data sets. An AI algorithm may display reduced identification rates for classes of data points that occur less frequently in the data collection, as in the case of class disparity bias (CDB). This well-known bias can be actively mitigated by adding synthetic data to the data set.
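As a concrete illustration of class-disparity bias and the synthetic-data remedy, here is a toy sketch in Python using scikit-learn. Everything here, from the data to the classifier to the numbers, is hypothetical: a stand-in for whatever model an AIMDS might actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Majority class: 1,000 points. Minority class: only 30, the rare class that an
# imbalanced data set under-represents.
X_major = rng.normal(0.0, 1.0, (1000, 2))
X_minor = rng.normal(2.0, 1.0, (30, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 1000 + [1] * 30)

raw_model = LogisticRegression().fit(X, y)

# Crude stand-in for synthetic data: oversample the minority class until the
# classes are balanced, then retrain.
idx = rng.choice(len(X_minor), size=1000)
X_bal = np.vstack([X_major, X_minor[idx]])
y_bal = np.array([0] * 1000 + [1] * 1000)
bal_model = LogisticRegression().fit(X_bal, y_bal)

for name, model in [("raw", raw_model), ("rebalanced", bal_model)]:
    # Recall on the minority class: how many of the rare cases the model finds.
    print(name, "minority recall:", recall_score(y, model.predict(X)))
```

The rebalanced model typically recovers far more of the rare class, which is precisely why curating or synthesizing data for under-represented categories is a live mitigation, and why doing it carelessly can introduce new bias of its own.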

Over-programming and CDB are two particular cases of bias pertinent to AIMDS. Situations that demand military decisions are ambiguous and marked by turmoil. In such cases, an AIMDS runs the risk of applying the wrong categories when identifying a scenario, or of having too few points of comparison to create meaningful categories. One specific issue that has been identified is the shortage of suitable training data, both qualitative and quantitative, for numerous military decision-making scenarios.

Developers must assess the cultural, religious, racial and identity biases that affect the decisions they and the system make. AIMDS are designed specifically to recognize particular groups of people. Notably, when the US Project Maven was developed to support data-labeling operations for the D-ISIS (Defeat-ISIS) campaign, its creators had specific identities and groups of people in mind. Many doubt the system can reliably identify the correct targets. It is essential to consider how many kinds of bias may influence the development and design of these systems, especially when human targets are involved.

Bias-induced application

At the point of usage, emergent bias combines with the pre-existing, technically ingrained bias already present in an AIMDS. Emergent bias stems from the ways specific users engage with AI DSS (decision support systems) in specific use cases. Deploying AIMDS in a use-of-force environment necessitates value-based sensemaking among military strategic, operational and tactical decision-makers — all of whom may imbue the system outputs with their own value judgments.

Automation bias is a well-known type of bias that develops during this usage phase. It describes human users’ blind faith in the results generated by an AI DSS. Because a computer is assumed to be more dependable and trustworthy, this faith can entrench algorithmic bias by permitting judgments that might otherwise have been questioned if made exclusively by people. Furthermore, bias in an AIMDS can be negatively self-reinforcing, creating a cycle whereby the system generates more bias the longer it is left uncorrected. For example, if a system often flags individuals of a specific gender and physical appearance as potential threats, it may perpetuate its bias by supposing that everyone in a neighborhood who fits these traits is a threat actor.

Rather than having its prejudice addressed, the system perpetuates it, particularly when decision-makers fail to recognize the bias promptly. Although such algorithmic functions were designed for commercial settings, AIMDS may expand the number of possible targets in military use-of-force decision-making. Such systems may begin by recognizing a small number of potential threat actors, then pursue the goal of enlarging that number by associating and linking an increasing number of individuals. Thus, AIMDS may continue to learn and receive training from human users even while in use.

This process can initiate the learning of new biases and the reinforcement of pre-existing ones. When people engage with the final product, analyze the data and provide feedback to the system, bias can potentially re-enter it.
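A toy simulation of that cycle, with every number invented for illustration: a system that starts out flagging one group slightly more often, and whose flags trusting users usually confirm, sees the gap between the groups’ flag rates widen with each retraining generation.

```python
import random

random.seed(1)

flag_rate = {"A": 0.10, "B": 0.12}  # initial per-group flag probability
automation_trust = 0.9              # chance a user confirms a flag unquestioned

for generation in range(5):
    confirmed = {"A": 0, "B": 0}
    for group in ("A", "B"):
        for _ in range(10_000):  # encounters per group, per generation
            if random.random() < flag_rate[group] and random.random() < automation_trust:
                confirmed[group] += 1
    # Retraining nudges each rate toward the confirmed rate; the 1.3 factor
    # stands in for the system "linking" ever more individuals to each flag
    for group in ("A", "B"):
        observed = confirmed[group] / 10_000
        flag_rate[group] = 0.5 * flag_rate[group] + 0.5 * (1.3 * observed)
    print(f"gen {generation}: gap = {flag_rate['B'] - flag_rate['A']:.4f}")
```

Neither group’s underlying behavior ever changes in this simulation; the widening gap is produced entirely by the trust-confirm-retrain loop.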

Essential questions must be asked: Who is engaged in this process? How is it monitored, and by whom? Continuous-learning algorithms are appealing for military decision-making because of their flexibility, but they are also unpredictable.

The best way forward

One aspect of reviewing AIMDS after usage is examining whether specific systems functioned as their developers intended during the design phase. Another is considering potential future enhancements. Though we can describe this review as a discrete stage in the life cycle, it is in practice a continuous activity that should take place before and after each use case, especially when continuous-learning systems are employed.

Theoretically, this phase could be critical for detecting and correcting biased decision-making errors. However, if we do not undertake it immediately, the biased results an AIMDS produces throughout its lifetime will be used to support further decision-making processes. Notably, new studies have found indications that humans can inherit the systems’ prejudice. People may therefore reproduce bias learned from an AIMDS even when they are no longer interacting with it.

AIMDS run the risk of propagating the effects of algorithmic bias into military use-of-force decision-making procedures. While emergent bias enters the system at the point of application, pre-existing and technical kinds of bias enter it from the beginning and have ongoing influence.

We still have much to do to raise public awareness of these AIMDS flaws, their potentially catastrophic consequences and the strategies for mitigating them. Such strategies may include introducing bias-reduction techniques and standardizing the processes for reviewing the systems after usage.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: AI Calls Its Masters to Order https://www.fairobserver.com/more/science/outside-the-box-ai-calls-its-masters-to-order/ Mon, 23 Sep 2024 13:50:27 +0000

We return for a final time to the controversial interview with Eric Schmidt.

The highlight of this talk occurred when Schmidt suggested a hypothetical strategy for a new startup. “Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it’s not viral, do something different along the same lines.” He then added: “If it took off, then you’d hire a whole bunch of lawyers to clean the mess up. If nobody uses your product, it doesn’t matter that you stole all the content.”

ChatGPT clearly disapproved of Schmidt’s cynicism: “The idea that legal issues can be ‘cleaned up’ after a product takes off is also troubling from a rule-of-law perspective.”

It’s good to know that an AI brain adheres to the idea of a “rule of law.” The barons of Silicon Valley, very much like the US State Department, also pay lip service to this noble idea, but, as ambitious human beings, have no qualms about violating its principles when it serves their purpose. The difference between Schmidt and the State Department is that when the latter acts, it is spared the expense of hiring an army of lawyers, since Washington recognizes no tribunal with the power to apply the rule of law.

At this point of the discussion, I followed up with a new prompt:

“We can agree about the rule-of-law perspective and the need for checks and balances in government, but this is about a mindset, a business culture that thrives on the reality of faits accomplis. Is regulation an effective form of ‘checks and balances’ when it comes to devising a strategy to dominate a marketplace? The historical answer has to be ‘no.’ Schmidt is right. You establish your position, glean the rewards and then you defend it with an army of lawyers. That is how the business culture works today. That is how it is built to work. Regulation will change nothing. So what do you suggest?”

You raise a compelling point about the limitations of regulation in curbing the aggressive, ‘faits accomplis’ business culture that has come to define much of Silicon Valley and the tech industry at large. Regulation, as it currently functions, often lags behind the rapid innovation cycles of tech companies. By the time laws are enacted, dominant players have already entrenched their market position, reaped enormous profits, and created a legal defense system strong enough to fend off most challenges.

Given this reality, here are a few alternative or complementary strategies to regulation that could address the challenges posed by this business culture:

1. **Antitrust and Market Restructuring**

Rather than relying solely on regulation, governments could enforce stricter antitrust measures to prevent monopolistic practices and break up companies that hold disproportionate power in the marketplace. For example, tech giants that dominate entire sectors—such as Google in search, Amazon in e-commerce, and Facebook in social media—could be required to divest parts of their businesses to ensure a more competitive environment.

Antitrust enforcement would not only limit market dominance but also change the incentives for new companies. If monopolistic practices no longer guarantee market leadership, companies may shift toward more ethical and sustainable strategies, rather than the current ‘dominate first, clean up later’ approach.

2. **Decentralization and Open Standards**

One way to counter monopolistic behavior is to promote decentralization through open standards and protocols. In sectors like social media or data platforms, governments could encourage or mandate the adoption of open-source protocols that allow users to move freely between services without losing their networks, data, or content.

For example, instead of being locked into one platform like Facebook, users could have the ability to port their data, contacts, and digital assets across multiple services that follow common standards. This would reduce the incentive for companies to prioritize market domination and could foster a healthier, more competitive ecosystem.

3. **Public Oversight and Algorithmic Transparency**

Given the central role algorithms play in shaping user behavior, content distribution, and even public opinion, governments or independent bodies could require greater transparency around how these algorithms function. Mandating regular audits of tech companies’ algorithms to ensure they are not unfairly amplifying harmful content, violating privacy rights, or engaging in anticompetitive behavior could offer a form of oversight.

However, transparency must go hand in hand with enforceable consequences. Regulatory bodies could impose significant fines or penalties if algorithms are found to undermine democratic processes or perpetuate harm.

4. **Corporate Responsibility and Ethical Frameworks**

While not a replacement for regulation, fostering a shift in corporate responsibility could play an important role. Companies could be incentivized to adopt ethical standards through a combination of public pressure, investor activism, and internal reforms. For instance, many companies today are responding to the growing importance of ESG (Environmental, Social, and Governance) metrics, as investors, consumers, and employees demand more ethical behavior.

Governments and civil society could push for frameworks that reward companies for acting ethically—such as preferential treatment in government contracts, favorable tax policies, or public recognition for responsible business practices.

5. **Empowering Consumers and Employees**

Consumers and employees also play a critical role in shaping corporate behavior. Increasingly, workers in Silicon Valley are becoming more vocal about their company’s ethics and practices, and consumer boycotts or campaigns have demonstrated the power to push companies to act more responsibly. For example, internal employee protests at companies like Google, Amazon, and Facebook have had tangible impacts on corporate policy.

Strengthening whistleblower protections, fostering employee organizing, and creating platforms for consumer advocacy could act as a counterbalance to corporate dominance by holding companies more accountable to their own stakeholders.

6. **Global Collaboration on Regulation**

Since the internet and digital platforms operate globally, regulation needs to be a coordinated international effort. A fragmented approach, where one country tries to enforce rules on its own, won’t be effective against companies that operate across borders. Governments could work together to create a shared regulatory framework for the digital economy, ensuring that companies can’t simply move to a more lenient jurisdiction to escape oversight.

For instance, the European Union’s GDPR (General Data Protection Regulation) has set a global benchmark for data privacy, and many companies now comply with it even if they aren’t based in Europe. A similar approach could be applied to other areas like antitrust or algorithmic transparency, forcing global tech companies to adopt more ethical practices.

7. **Public Alternatives and Cooperative Models**

Another way to counter the dominance of private tech companies is by developing publicly funded or cooperative alternatives in key sectors. For instance, a publicly funded social media platform or search engine that prioritizes user privacy, transparency, and accountability over profit could provide a viable alternative to monopolistic platforms. Similarly, cooperative models where users collectively own and control digital services could foster a more democratic tech ecosystem.

These alternatives wouldn’t necessarily replace private companies but would provide competition that could drive the industry in a more ethical direction, much like how public broadcasting operates alongside commercial media.

Conclusion:

While regulation is necessary, it is often insufficient by itself to tackle the aggressive business culture of Silicon Valley. A combination of antitrust enforcement, decentralization, public oversight, corporate responsibility, consumer empowerment, and global cooperation could provide more effective checks and balances. Changing the incentives that drive companies toward monopolistic practices will require more creative and systemic solutions beyond traditional regulation.

I invite our readers to appreciate the paradox of an AI brain that begins by proposing “alternative or complementary strategies to regulation” and then lays out a plan for what some would describe as pervasive, if not invasive, regulation.

I distance myself from those critics. I find these suggestions eminently reasonable and potentially effective. But I can’t help noting that, in every case, ChatGPT evokes a possibility, not a reality. We “could” do these things. The chatbot offers no insight into why it is beyond unlikely that any of these initiatives will ever be proposed, let alone adopted. It ignores the political, economic and ideological conditions that would permit transforming “could” into “will.” It correctly suggests that no single action is sufficient and insists that success depends on a “combination” of all these initiatives. But we never leave the speculative land of “could.”

It recommends “creative and systemic solutions beyond traditional regulation.” Why not? Followed to its logical conclusion, this would imply a cultural revolution that calls into question the underlying values that govern the way things are done today in society, the economy and politics. We can always hope.

On the other hand, I’d hate to speculate about the odds Las Vegas might give on any of that actually taking place.

Our conclusion of this four-part exploration of Eric Schmidt’s words of wisdom

My last admittedly sarcastic remark highlights a significant historical fact worth pondering. Four places in the United States have contributed massively to a mindset that ChatGPT acknowledges is crying for a revolution: Wall Street, Washington DC, Silicon Valley and Las Vegas. Anyone wishing to understand US economic and political culture today needs to have a close look at all four and how they combine to define the world we live in.

Please share your thoughts on all this at dialogue@fairobserver.com.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: Is Amorality the Ultimate Superintelligence? https://www.fairobserver.com/more/science/outside-the-box-is-amorality-the-ultimate-superintelligence/ Mon, 16 Sep 2024 12:13:38 +0000

In this month of September, “Outside the Box” shifted its focus to the strategic, economic and cultural side of artificial intelligence. Just as no animal brain functions outside the body and therefore the personality it inhabits, AI always thrives and will continue to grow within the political and economic culture that spawns and exploits it. 

Today we return a second time to the controversial interview with Eric Schmidt, the former CEO of Google, whom we have featured in two previous columns. Schmidt had the original video of this event removed from YouTube after he realized that some of what he said was beyond controversial.

This week, we return to an even more fundamental question raised in that video. Again, we must thank YouTube user Matthew Berman for retrieving the video and offering enlightening comments to clarify Schmidt’s discourse.

Probably the most scandalous moment in the interview is where Schmidt explains the concept of “text-to-action capabilities,” which he claims “will have an impact on the world at a scale that no one understands, yet much bigger than the horrific impact we’ve had by social media.” He provided an example of how this could work even with today’s AI. You can listen to it here. A little later in the talk, he offered some surprising advice about how to handle accusations of theft.

To explore this question, I formulated the following prompt.

“Eric Schmidt recommended a strategy for success: ‘Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it’s not viral, do something different along the same lines.’ He followed up with this remark: ‘If it took off, you’d hire a whole bunch of lawyers to clean the mess up. But if nobody uses your product, it doesn’t matter that you stole all the content.’ He then significantly adds, ‘and do not quote me.’ Is this an accurate portrayal of Silicon Valley culture?”

Even though ChatGPT typically hedged its bets by insisting that this mindset is not universal, its analysis proved to be morally superior to Schmidt’s.

“Eric Schmidt’s remark captures a certain aggressive, fast-moving approach that is often associated with Silicon Valley, though it may not fully represent the culture. His quote reflects a mindset of rapid iteration, disruption, and a “move fast, break things” ethos, which was famously embodied by early Facebook. The strategy he outlines—a focus on speed, market dominance, and dealing with legal issues later—echoes the way some tech startups operate, where innovation and user acquisition can take precedence over regulatory concerns and ethical considerations.

However, this portrayal is not universally accurate across all of Silicon Valley. While some companies do prioritize growth at all costs, many also invest heavily in legal compliance, ethical standards, and long-term sustainability. In fact, with increasing scrutiny on tech giants and the broader tech industry, more companies are becoming mindful of the consequences of such approaches. The culture varies across different companies and sectors within the tech ecosystem.

Schmidt’s quote can be seen as an extreme representation of the competitive, risk-taking mindset that is sometimes glamorized but is not necessarily the dominant strategy across all of Silicon Valley.”

I continued the conversation with the following prompt:

“Given the importance and the sheer influence Silicon Valley companies have on the economy, the culture and the politics not just of the United States but much of the global economy, shouldn’t anyone who believes in democracy be worried when such trends are revealed?”

“Yes, anyone concerned about democracy should take note when such trends are revealed, especially given the immense influence Silicon Valley companies exert on global economies, cultures, and politics. The ethos of ‘move fast and break things,’ or prioritizing growth and innovation over regulation and ethical concerns, can have significant implications for democratic systems. Here are a few reasons why this can be troubling:

  1. **Concentration of Power**: Silicon Valley tech giants like Google, Facebook (Meta), Amazon, and others have amassed enormous economic power, often rivaling or exceeding the influence of entire countries. This concentration of power raises concerns about accountability and whether private corporations might hold too much sway over key aspects of modern life—such as communication platforms, information access, and even political discourse.
  2. **Weakening of Democratic Institutions**: Tech companies often operate faster than governments can regulate them, which can lead to a weakening of democratic oversight. When platforms like Facebook or X (formerly Twitter) become critical spaces for public discourse, the decisions about content moderation, algorithmic biases, or the spread of misinformation may lie in the hands of a few unelected corporate executives. This lack of transparency and accountability can erode trust in democratic institutions.
  3. **Undermining Local Economies and Laws**: The model of breaking into markets first and dealing with consequences later, as Eric Schmidt suggests, can undermine local laws, regulations, and even economies. For instance, ride-sharing companies like Uber or delivery platforms have disrupted traditional industries without fully integrating into existing regulatory frameworks. This has led to exploitation of labor, environmental degradation, or even tax avoidance, which can destabilize communities and erode the social contract.
  4. **Manipulation of Public Opinion**: Companies like Google, Facebook, and YouTube wield significant influence over public opinion by controlling the flow of information. Their algorithms, which prioritize engagement and profit, can amplify polarizing content, misinformation, or even foreign interference, as seen during the 2016 U.S. election. This capacity to shape political narratives can threaten democratic processes and exacerbate divisions in society.
  5. **Surveillance and Privacy**: The vast data collection practices of many tech companies pose significant privacy risks. This accumulation of personal data can be used for profit-driven purposes, but also, in some cases, for political manipulation, as exemplified by the Cambridge Analytica scandal. The lack of strong privacy protections erodes individual freedoms and challenges democratic norms around consent and transparency.
  6. **Legal Gray Areas**: The idea that legal issues can be ‘cleaned up’ after a product takes off is also troubling from a rule-of-law perspective. If companies prioritize success over following regulations, this undermines the concept of equal treatment under the law. Allowing powerful corporations to navigate legal loopholes or push boundaries without accountability threatens fairness and the democratic process.

Democracy relies on checks and balances, transparency, and accountability. When corporations, especially tech giants, operate with minimal oversight and with a mindset of disruption over responsibility, it can lead to a weakening of democratic norms and institutions.”

All these points have been debated in public, even within government itself. The usual recommended solution consists of proposing legislation to regulate the behavior of the behemoths. But Schmidt imagines the case of a startup. The procedure he describes reflects an awareness of the law, while at the same time envisaging a strategy to neutralize the effect of any law. ChatGPT’s reliance on the concept of “a rule-of-law perspective” fails to acknowledge the real problem. It’s not about regulation, it’s about mindsets.

With this in mind, I continued the discussion.

“We can agree about the rule-of-law perspective and the need for checks and balances in government, but this is about a mindset, a business culture that thrives on the reality of faits accomplis. Is regulation an effective check when it comes to devising a strategy aimed at dominating a marketplace? The historical answer has to be no. Schmidt is right. You establish your position, glean the rewards and then you defend it with an army of lawyers. That’s how the business culture works today. That’s how it is built to work. Regulation will change nothing. So what do you suggest?”

We will publish ChatGPT’s response and continue the debate next week. Those of you who are impatient to see it can simply engage ChatGPT in the dialogue yourselves. If you choose to do so, we invite you to compare notes by writing to us at dialogue@fairobserver.com. We are always looking to gather, share and consolidate the ideas and feelings of humans who interact with AI.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: Will AI’s Masters Know How to Collaborate? https://www.fairobserver.com/more/science/outside-the-box-will-ais-masters-know-how-to-collaborate/ Mon, 09 Sep 2024 13:39:13 +0000

Artificial intelligence is everywhere these days. Most often it remains invisible. Don’t believe me? An article we published last week, “The Impact of Automation on Corporate Decision-Making,” never once mentions AI! Have we become that inured? Just by looking at it, I can’t even be sure AI didn’t write that informative article.

When we read a text, examine a photo, react emotionally to a video or use an app, we don’t think about how they were produced, even if we suspect AI had a hand in it. Humans made these objects available to us, managing their creation, production and delivery. Humans bank the revenue they generate. Those same humans will also be monitoring our use of those objects.

AI is everywhere. But for the moment it is unthinkable that humans are not seriously involved in whatever it does. On the positive side, this means that the norm today is fundamentally collaborative. It’s about people with projects using AI to advance some part of the project.

Like everything else in our commercial culture, AI does things because people want them done. Or rather, because people want to earn money from the things that get done. Those people make decisions about how they get done, what purpose they serve and, ultimately, how much we consumers will end up paying. These same people also make decisions about how that cost might be disguised or transferred. 

AI does amazing things but rarely on its own. The dominant trend is collaborative. AI can certainly impress us with its “creations”: images, videos, industrial design, college essays, poems and more. But, for the moment, whatever AI produces, however useful or useless, is the result of a collaboration.

Futurologists like to predict the awe-inspiring “singularity,” the moment when super-intelligence leaves human intelligence definitively in the dust. They imagine a world in which collaboration between AI and humans will no longer be necessary. But, even if this were true, how would our societies react? Any scientifically minded psychologist, sociologist or economist would admit that, yes, it’s logically possible to imagine machines programming themselves and even assuming control over human environments. But something critical will still be missing: the indomitable need humans have to assert control over other humans through the acquisition of property and power. Often those who make such predictions are themselves obsessed with power.

Last week, “Outside the Box” began addressing our switch to focusing on the strategic, economic and cultural side of AI. We began by examining the thinking and intentions of the kind of people now defining the future of AI. Alongside the technology itself a culture is growing, guided by those who are investing in AI and making fundamental decisions about how it will affect our lives.

Last week we featured a fascinating but controversial interview with Eric Schmidt, the former CEO of Google. Embarrassed by his own candor, Schmidt subsequently had the video removed from YouTube.

This week, we return to questions raised in that video. Again, we must thank YouTube user Matthew Berman for retrieving the video and offering enlightening comments to clarify Schmidt’s discourse. Matthew described himself to me as “a former tech founder and current content creator trying to help as many people as possible learn how to leverage AI in their lives and businesses.”

After getting ChatGPT to explore the meaning of Schmidt’s prediction of a “battle between US and China for knowledge supremacy,” I followed up with this prompt about the culture of political competition Schmidt assumes to be the norm.

“You evoke a possible alternative to Schmidt’s competitive worldview; a collaborative approach. What would such an approach mean for the value of science, especially given the fears that exist about AI? Would the AI that emerges and eventually dominates — precisely as people like Schmidt imply — be different in a world where collaboration defines the fundamental orientation of R&D?”

A collaborative approach to AI development, as opposed to a competitive one, would significantly alter the value and nature of science, as well as the kind of AI that emerges. Here’s how:

1. **The Value of Science in a Collaborative World**

  • **Knowledge as a Public Good:** In a collaborative framework, knowledge, including advancements in AI, would be treated more as a public good rather than a strategic asset. This perspective emphasizes the idea that scientific knowledge should be shared widely and used to benefit humanity as a whole, rather than to gain an advantage over rivals.
  • **Global Problem-Solving:** Science would be increasingly oriented toward addressing global challenges—such as climate change, pandemics, and poverty—rather than serving the strategic interests of individual nations. This would encourage multinational cooperation on research projects, pooling resources and expertise from around the world to tackle issues that no single country can solve alone.
  • **Open Science:** The value placed on open-access research and data-sharing would increase. The open science movement, which promotes transparency, reproducibility, and the democratization of knowledge, would thrive in a world where collaboration is the norm. This could lead to more rapid advancements, as researchers build on each other’s work more freely.

2. **AI in a Collaborative Context**

  • **Human-Centric AI:** An AI developed in a collaborative world would likely prioritize ethical considerations and the broader impact on humanity. This could lead to the creation of AI systems that are designed to enhance human well-being, promote social equity, and address global challenges. The focus would be less on achieving superiority and more on creating systems that are beneficial to all.
  • **Ethical Standards and Governance:** Collaborative efforts could lead to the establishment of global ethical standards and governance structures for AI. Rather than each nation developing its own set of rules (potentially in opposition to others), there would be a concerted effort to create a unified framework that ensures AI is developed and deployed in ways that are fair, safe, and aligned with human values.
  • **Transparency and Trust:** Collaborative R&D would foster greater transparency in how AI systems are developed and used. This could alleviate some of the fears surrounding AI, as people would have more insight into the processes behind these technologies and the goals they are intended to serve. Increased trust in AI systems could emerge from a global consensus on best practices and shared goals.

3. **The Nature of Dominant AI in a Collaborative World**

  • **Inclusive and Equitable AI:** The AI that comes to dominate in a collaborative world might be more inclusive, considering diverse perspectives and needs across different cultures and societies. This contrasts with AI developed in a competitive environment, which might prioritize the values and interests of a single nation or group.
  • **Slower, Safer Development:** A collaborative approach might lead to more deliberate and cautious AI development. Instead of racing to achieve breakthroughs (potentially at the expense of safety and ethical considerations), collaboration could ensure that AI systems are thoroughly vetted, with global input helping to identify and mitigate risks.
  • **Focus on Collective Goals:** The objectives of AI development might shift toward global goals like sustainability, health, and education. AI would be seen as a tool to address collective human challenges rather than as a means to gain economic or military advantage.

4. **Addressing AI Fears**

  • **Mitigating Risks Through Cooperation:** Many fears about AI stem from its potential to be used unethically or to perpetuate inequalities. A collaborative approach would likely involve shared responsibility for mitigating these risks, with international bodies overseeing AI development to ensure that it aligns with agreed-upon ethical standards.
  • **Avoiding an AI Arms Race:** One of the significant fears is that competition will lead to an AI arms race, where nations rush to develop increasingly powerful AI systems without adequate consideration of the consequences. Collaboration could prevent this by promoting transparency, shared goals, and a focus on safety.

Conclusion

A collaborative approach to AI R&D would fundamentally change the landscape of scientific progress and the nature of the AI that emerges. Science would be valued more for its potential to solve global challenges and improve the human condition, rather than as a tool for national superiority. The AI systems developed in this environment would likely be more ethical, transparent, and oriented toward collective human flourishing. This approach could also help alleviate some of the fears associated with AI by ensuring that its development is guided by shared values and common goals, rather than by competition and strategic rivalry.

Next week, we’ll look further into the political and geopolitical implications of these two worldviews: competitive and collaborative.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Is It Best When You Accept There’s No Free Will? https://www.fairobserver.com/more/science/is-it-best-when-you-accept-theres-no-free-will/ Sun, 08 Sep 2024 13:26:17 +0000

Determined, a new book by renowned Stanford University professor of primate behavior and neuroscience Robert M. Sapolsky, has catapulted the author into an ancient debate: whether humans have free will and agency over their actions. The book isn’t just a bio-philosophical treatise. It also covers the potential benefits of accepting Sapolsky’s thesis that we have zero free will and agency: such a society, he argues, will likely become more humane and better at understanding and addressing humanity’s challenges.

Sapolsky found fame while teaching the science of stress and anxiety from a neurological perspective, as well as its presence in the wider primate world. His popular teachings have enlightened millions and opened new pathways to help people consider the biological causes of their behavior.

Over the past year, Sapolsky has rolled out a publicity tour in defense of Determined’s thesis, including speaking on dozens of nationally known podcasts. More recently, he has co-launched an informative and witty YouTube Q&A show with his talented daughter, Rachel Share-Sapolsky.

We reached out to Sapolsky for an interview about his thinking on how the public adoption of science can change perspectives, and about his experience as an activist trying to get the world to think differently about the causes of human behavior.

Debunking free will in court

Jan Ritch-Frel and Marjorie Hecht: You point to the early 1800s in France as a turning point in how society perceived epilepsy, from culpability for behavior during seizures to understanding it as a medical condition. Where are you seeing similar green shoots today?

Robert Sapolsky: A great example is the recognition that obesity is a biological disorder, rather than some sort of failure of Calvinist self-discipline. It is a biological disorder that is profoundly sensitive to psychological state and social context, but it is nonetheless biological.

To give the most dramatic example, if someone has a mutation in the leptin receptor gene, their brain will simply not process food satiation signals, regardless of how much willpower they have. Currently, stigma about weight is one of the most persistent prejudices in society, and findings like this are just beginning to change attitudes toward obesity.

Ritch-Frel and Hecht: What in your research led you to volunteer as a witness for capital trials, and what did your experiences lead you to conclude about making a social impact through your research?

Sapolsky: If you conclude that we have no free will, that we are simply the outcome of the interaction of the biological luck over which we had no control with the environmental luck over which we had no control, the entire premise of criminal “justice” makes no sense intellectually or ethically. However, I’m not of much use if I show up in a courtroom to say to the jury that, “Hey, we’re all just biological machines.”

So my goals are much narrower with the public defenders I work with. You consider a defendant who has done something horribly damaging, and there’s the option to think of their behavior as an index of their questionable moral worth, or as a measure of the damage that their own nervous system sustained over their lives. So my job is to try to get juries to think the latter rather than the former, to teach them the science that leads one to that conclusion.

In terms of what impact I’ve had — it’s been almost entirely futile; we “lost” 11 of the 13 cases I’ve worked on. The jury sits there and nods their heads in the affirmative when you’re telling them how the frontal cortex works… and then when they go into the jury room and look at the pictures of the corpse, they come back with the maximal sentence.

Ritch-Frel and Hecht: What human tendencies and capacities are you counting on for people to make positive use of your conclusion that there is no free will?

Sapolsky: Mostly I would be hoping for ways in which humans can counter their tendencies. By this, I mean to try to resist the lure of exceptionalism, which makes us decide that our needs are special and atypically worthy of consideration.

Determining and changing human behavior

Ritch-Frel and Hecht: Can you discuss how an individual can differentiate right from wrong but be “organically incapable” of appropriately regulating his or her behavior? What happens in the prefrontal cortex to cause this?

Sapolsky: The prefrontal cortex (PFC) can inhibit and restrain emotional impulses: If you are in a situation where you are tempted to do something unethical yet manage to resist, it is because of the PFC. Thus, any circumstance that damages or weakens the PFC makes that sort of self-regulation more difficult.

Thus, you can wind up with someone who knows right from wrong, can write erudite philosophical essays about the difference… yet in a moment of emotional arousal, may not be able to prevent themselves from doing something wrong.

Ritch-Frel and Hecht: How do you think therapy helps an individual with depression or other problems? What changes in the brain are possible? How does free will enter into the therapy process?

Sapolsky: Just focusing on depression, its cognitive core is a tendency to distort reality in a negative direction. On an emotional level, it’s perceiving yourself to be helpless in circumstances that are not really the case; on a neurochemical level, it’s probably a shortage of serotonin and a number of other neurotransmitters that produces an inability to anticipate pleasure and to block negative rumination.

These are all different levels of explaining the same thing. On the psychotherapy route, the most effective approach is typically cognitive behavioral therapy, which basically consists of recognizing the reality of some trauma, failure or rejection in the past, but also recognizing that it is a distortion to assume that you are fated for the same in the future, that you are helpless and hopeless in trying to prevent some manner of reoccurrence.

Where the meds fit in is to facilitate that process. Drugs that boost serotonin, for example, lessen the stickiness, the unstoppable qualities of negative rumination… which allows you the affective breathing space to begin to disassemble the distortions that give rise to the incessant negative affect.

Free will plays no role in any of this. Did you turn out to be the sort of person who makes X amount of serotonin instead of Z, whose pathways of negative rumination in the brain are tightly or loosely connected, whose learning makeup is one that is good or not at building upon efficacy? Did you turn out to be the sort of person who respects introspection, is capable of doing it insightfully, is capable of drawing on those insights as a buffer against negative emotion? Did you turn out to be the sort of person who, in the first place, could accept that they are struggling with depression? And we had no control over any of those.

Ritch-Frel and Hecht: Why isn’t what’s called “early readiness potential,” as detected in the brain, as much a form of free will as any subsequent action that carries out the potential?

Sapolsky: Because focusing on the early readiness potential (ERP) is missing the point. When exactly it has occurred with respect to when you form an intention to do something is not ultimately relevant to the issue of free will. Instead, the absence of free will is shown looking at the more global question of, “How did you become the sort of person who would form that intent at that moment?”

Ritch-Frel and Hecht: In your book Determined, you present an example of identical twins, only one of whom is schizophrenic with a very different brain imaging scan. Is schizophrenia then not inherited? Could it be the result of an infection?

Sapolsky: Genes are about inheriting tendencies, proclivities and vulnerabilities in behavior, not about inheriting inevitabilities. Consider someone with schizophrenia and pick another person at random, and there is about a 2% chance that they will also suffer from the disease. If instead you pick the person’s identical twin, there is roughly a 50% chance that they will share the trait. This is a powerful demonstration of a genetic influence.

But the fact that the identical twin will have a 50% chance of not having schizophrenia is a demonstration that genes are just part of the mix of causes, not anything deterministic.

What are some of the non-genetic factors that contribute to schizophrenia risk? Yes, certain types of infections; perinatal birth complications; prenatal malnutrition; chronic stress and heavy cannabis abuse as an adolescent.

Religiosity and psychiatric dysfunction

Ritch-Frel and Hecht: In your lecture on religiosity, you rely on twin/adoption studies from the late 1960s and 1970s, which looked at schizophrenia and found that some other family members were what they termed schizotypicals. These individuals were not full-blown schizophrenics but they exhibited “off” behavior characteristic of schizophrenics. How do you respond to the criticisms of the twin/adoption studies, including criticism of the idea of a spectrum of schizophrenia conditions (such as the work of Jay Joseph)?

Sapolsky: Twin studies, along with adoption studies, are the backbones of classic behavior genetics approaches, and are subject to some withering criticism that is completely valid.

If twin researchers tell you that genes explain X% of the variability in some trait, applying the criticisms typically shows that the percentage is actually lower than X — not that genes have nothing to do with the trait. That has no effect though on the main observation, which is that close relatives of people with schizophrenia are very significantly more likely to display schizotypal traits than the general population.

That shows that there is a shared genetics to schizophrenia and schizotypalism — and implicit in that is precisely Joseph’s point, which is that disorders with schizophrenic elements form a continuum, a spectrum (while not at all supporting his broadest conclusions).

Ritch-Frel and Hecht: You’ve mentioned that you came from an Orthodox Jewish family. Did you have any reaction from the Orthodox community to your analysis of the roots of religiosity? Or from any other religious thinkers?

Sapolsky: I have had some very negative reactions from folks in both domains. However, those reactions are entirely based on completely missing what I am saying about the subject. I am not remotely saying anything as absurd as, “Ooh, you have to be psychiatrically suspect to be religious,” or even, “Most/many/some people who are religious are psychiatrically suspect.”

I am saying that it is fascinating that traits that can be incapacitating psychiatric maladies in secular settings can be accepted, given sanctuary and even be viewed as positives in the context of religiosity.

[Human Bridges, a project of the Independent Media Institute, produced this piece.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside The Box: Do You Think AI’s Full of Schmidt? https://www.fairobserver.com/more/science/outside-the-box-do-you-think-ais-full-of-schmidt/ Mon, 02 Sep 2024 13:42:25 +0000

AI is an increasingly visible feature of our environment. It has even achieved the status of an artificial culture. As cultures are defined by their values, do we know anything about the values of this new culture?

The “Outside the Box” team at Fair Observer wishes to deepen our perception of a culture that is being created by people largely beyond our reach. It is a small elite now busy investing in technology and making fundamental decisions about how it will affect our lives.

We shall now begin featuring videos as well as our traditional conversations with ChatGPT. For starters, here’s a fascinating interview from August with Eric Schmidt, the former CEO of Google. Concerned that people found some of his statements controversial, he had the video removed from YouTube. We have to thank YouTube user and AI enthusiast Matthew Berman for retrieving the video and commenting pertinently on the most significant moments.

After watching Schmidt’s performance, I addressed the following prompt to ChatGPT:

“Eric Schmidt made this prediction: ‘In your lifetimes the battle between US and China for knowledge supremacy is going to be the big fight.’ This tells me a lot about how Schmidt thinks not just about AI, but also the world. After all, that’s what language does: Whatever it literally communicates, it reveals something about the speaker’s view of the world. In this case there is a clear political dimension. Is that apparent to you, as an AI that speaks?”

ChatGPT offered this sympathetic analysis of Schmidt’s worldview.

Yes, Eric Schmidt’s statement does indeed reveal much about how he perceives the world, particularly in terms of global power dynamics and the role of knowledge and technology in shaping those dynamics. Here’s a breakdown of what his statement implies:

1. **Framing Knowledge as a Battleground**

• **Knowledge Supremacy:** Schmidt uses the metaphor of a “battle” or “fight” for “knowledge supremacy” between the U.S. and China. This language suggests that he views knowledge, particularly in fields like AI, as a strategic resource that nations compete over, similar to how they might compete over economic or military dominance.

• **Zero-Sum Competition:** The choice of words implies a zero-sum competition, where one nation’s gain in knowledge is seen as another’s loss. This is a common perspective in geopolitical thinking, where global power is often viewed as a finite resource.

2. **US-China Rivalry**

• **Bipolar Worldview:** By framing the issue as a contest between the U.S. and China, Schmidt reinforces the idea of a bipolar world order, where these two countries are the primary contenders for global leadership. This reflects a recognition of China’s rise as a major technological and economic power.

• **Focus on AI and Technology:** Given Schmidt’s background in technology, particularly AI, his prediction likely reflects concerns that advancements in AI and related technologies will be a key factor in determining which country leads in this global contest. This suggests that he sees technological innovation as not just an economic asset but a crucial element of national power.

3. **Political Dimension**

• **Implicit Nationalism:** The statement carries an implicit nationalism, as it presupposes that the U.S. and China are natural adversaries in this “battle.” This is not merely a technical or scientific competition but a political and strategic one, where the stakes are high for national sovereignty and global influence.

• **Urgency and Lifespan:** The phrase “in your lifetimes” adds a sense of immediacy and inevitability to this conflict, suggesting that it is not a distant concern but an ongoing and intensifying struggle that will define the current and upcoming generations.

4. **Power Dynamics in Language**

• **Influence of Language on Thought:** As an AI, I recognize that language shapes how people perceive and think about the world. Schmidt’s use of combative and strategic language suggests a worldview where global interactions are driven by competition rather than collaboration. This perspective might influence not only how individuals and institutions approach AI development but also how they perceive international relations more broadly.

In summary, Schmidt’s statement reflects a worldview where knowledge, particularly technological knowledge, is a key axis of global power, with the U.S. and China as the primary actors. The political dimension is indeed apparent, as it frames the development of AI and other advanced technologies as part of a larger geopolitical struggle, rather than as purely scientific or economic pursuits.

It should be clear that, had I not expressed what might be called my “concern” with Schmidt’s attitude, ChatGPT would not have analyzed things in a way so similar to my own. ChatGPT is perfectly capable of behaving like a participant in an “echo chamber” on social media. In this case, ChatGPT develops a line of reasoning in support of my own observations.

Following this brief exchange, I prolonged the conversation with the following prompt.

“You evoke a possible alternative to Schmidt’s competitive worldview; a collaborative approach. What would such an approach mean for the value of science, especially given the fears that exist about AI? Would the AI that emerges and eventually dominates — precisely as people like Schmidt imply — be different in a world where collaboration defines the fundamental orientation of R&D?”

Rather than duplicate ChatGPT’s very interesting response here, I’ll hold off my examination of its complete response till next week. One thing, however, appeared to me particularly significant:

The AI that comes to dominate in a collaborative world might be more inclusive, considering diverse perspectives and needs across different cultures and societies. This contrasts with AI developed in a competitive environment, which might prioritize the values and interests of a single nation or group.

In this age of exaggerated nationalism, when major powers seem intent on shying away from dialogue and prefer to rely on the display of force, we desperately need a debate about how all technologies are going to be used in the coming years and decades. Why not engage ChatGPT in that same debate?

It’s particularly encouraging that we can engage in something that truly resembles a dialogue with AI. It’s something we obviously cannot do with drones, hypersonic missiles and nuclear explosives.

What is less encouraging is the asymmetry that exists between ordinary people — including deep thinkers on questions of science and geopolitics — and wealthy public personalities such as Eric Schmidt, Elon Musk and Sam Altman. As a society, we have endowed them, by virtue of their wealth, with the power to make decisions for us. They choose which technologies to invest in and how to harness those investments for personal profit. They also make decisions aimed at leveraging geopolitical influence, decisions that can lead to the prosecution of wars.

Will these people be asking ChatGPT the same questions we are asking about worldviews? It seems unlikely. That is precisely why ordinary thinking people who feel even a slight concern for the way technology is likely to craft our own futures should at the very least seek to join the debate.

Here at Fair Observer, we invite you to weigh in and to do so in the framework of what we are calling our developing and evolving “Crucible of Collaboration.” We are not alone. People like Matthew Berman are producing the evidence and raising similar questions.

Unlike ChatGPT, these people don’t seek to impose their authority by giving what appear to be definitive answers to any question we throw at them. Instead, they are asking questions, reacting to the answers they have received, and pushing further, in the spirit of a modern Socrates. Ultimately, that behavioral contrast may define the difference between artificial intelligence — including superintelligence — and the human attribute AI has been designed to imitate. Humans are curious and engaged; AI is knowledgeable and indifferent.

This brings us back to the question of the meaning of Schmidt’s notion of “knowledge supremacy.” As an intelligent human being with life experience acquired in a diversity of cultures, I find that Schmidt’s phrase evokes a kind of instinctive, unprompted association with another notion: “white supremacy.” Is there a connection? We need to explore it. Would AI or some future superintelligence make that connection? I doubt it, because it serves no programmable purpose.

AI could, of course, be prompted to make the connection by an inquiring human mind. I like to call this kind of mental operation, for lack of a better term, the capacity for “creative suspicion.” It’s a feature of critical thinking and, I would maintain, a crucial one. It springs from feelings, not directly from knowledge. It is a natural part of our psychological makeup, one that some governments and institutions today are seeking to suppress in the name of rooting out disinformation.

This conversation will continue next week.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Tech Should Be Shared, Not Protected: Towards Democratic Digital Governance (Fri, 16 Aug 2024)

The foundation of human civilization is agriculture. It was organized agriculture that allowed our nomadic forebears to settle down and create large settlements — cities — where the cultural arts could flourish. And, throughout history, the advancement of technology has been essential to the advancement of agriculture.

When I was at my farm a few weeks ago, making a few small mobile payments on my phone, I understood this more deeply than I could have expressed. Fifteen years ago, my farm manager and I were motorbiking from pillar to post to obtain farm credit, to withdraw the cash, to use it to buy seeds and fertilizers, to arrange a tractor to plow the field and sow the seeds, and to pay laborers. It took us almost two weeks to prepare the farms at the beginning of each season. At harvest time, similar delays ate up still more time.

But this year, it was all hassle-free and quick. India has rolled out a biometric ID system called Aadhaar. Every one of India’s 1.4 billion residents is able to sign up, linking their fingerprints, irises and face to a unique 12-digit code. Aadhaar has made it easy to obtain credit and make all payments through the Unified Payments Interface (UPI), an instant payment system operated by the state-backed National Payments Corporation of India.
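To give a concrete sense of how lightweight this payment layer is: any UPI-enabled app can be launched with a simple deep link whose parameters follow the published NPCI linking convention (pa, pn, am, cu, tn). The Python sketch below builds such a link; the payee address, name and amount are hypothetical, and a real merchant integration would go through a bank or licensed payment provider rather than hand-built links.

```python
from urllib.parse import urlencode

def upi_payment_link(payee_vpa: str, payee_name: str, amount: float, note: str) -> str:
    """Build a UPI deep link that a UPI-enabled app can open to pre-fill a payment.

    Parameter names follow the NPCI UPI linking convention:
    pa = payee virtual payment address, pn = payee name,
    am = amount, cu = currency, tn = transaction note.
    """
    params = {
        "pa": payee_vpa,
        "pn": payee_name,
        "am": f"{amount:.2f}",
        "cu": "INR",
        "tn": note,
    }
    return "upi://pay?" + urlencode(params)

# Hypothetical example: paying a seed vendor straight from a phone.
print(upi_payment_link("seedstore@upi", "Seed Store", 2500.00, "Kharif seeds"))
```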

Another app draws on crowdsourced intelligence to help me select an efficient crop pattern, and yet another tells me the weather forecast and suggests the right time for sowing. My manager can likewise arrange all his logistics through his mobile phone alone. It took less than three days, with almost zero headache, to make all this happen.

Digital public infrastructure is revolutionizing India

Aadhaar, UPI and other tools like the Digi-Locker document storage system are all part of India’s digital public infrastructure (DPI) ecosystem, dubbed India Stack. It has streamlined governance, reduced corruption, revolutionized transactions and empowered citizens.

A recent World Economic Forum briefing paper estimated that DPI could add over $65 billion of value to the Indian agriculture sector alone.

The story of India’s digital revolution began a decade ago. In 2014, most Indians had no unique form of proof of identity, which stopped them from accessing public benefits. So, the Narendra Modi government scaled up Aadhaar, which the previous Manmohan Singh government had launched in 2009. Now, nearly the entire population has enrolled in the program.

Other India Stack programs have achieved great success. UPI has facilitated 241 billion financial transactions since its inception, and Digi-Locker has facilitated the issuance of 7 billion documents to 300 million users.

Digitalization is helping India bridge the divide between city and country, between rich and poor and between those who work in traditional trades and those who work in emerging fields.

India’s DPI provides open access: free and unrestricted use of digital resources by everyone. This openness, and the interoperability it brings, gives more options to both consumers and vendors. Interoperability enables scaling while making room for the unique requirements of specific local, national and global situations.

At the G20 summit in New Delhi last year, India went big on her DPI success when Prime Minister Modi proposed a “Global Digital Public Infrastructure Repository” (GDPIR): a virtual repository that collects and shares DPI tools and resources created by governments worldwide, including code and best practices. The repository aims to accelerate the development of DPIs globally, particularly in low- and middle-income countries, by providing access to proven digital solutions that support sustainable development goals and by bridging the knowledge gap in the choices and methodologies required for the design, construction, deployment and governance of DPIs. Currently, the repository contains 54 DPIs from 15 nations and the European Union.

In their book Four Internets, computer scientists Kieron O’Hara and Wendy Hall explain that the Internet is not truly open but really consists of four separate infrastructure systems upheld by different values. They call these systems the Beijing Paternal Internet, the Silicon Valley Open Internet, the Brussels Bourgeois Internet and the DC Commercial Internet. GDPIR, on the other hand, could be the basis for a truly open global Internet.

Other countries are keen to adopt DPIs due to their low cost and scalability. Currently, almost a dozen nations have chosen India’s DPI modules, utilizing Indian expertise to construct their own national ID systems. For example, the Philippines and Morocco have successfully launched Aadhaar-like programs of their own.

The importance of DPI goes beyond the Global South. The EU–India Trade and Technology Council, the Quadrilateral Security Dialogue and even the Shanghai Cooperation Organization have acknowledged the importance of DPI for inclusive and sustainable development. In June 2023, Modi and US President Joe Biden made a joint statement that specifically mentioned the intent to create a “Global Digital Development Partnership” that would focus on enabling the “development and deployment of DPIs.”

For GDPIR to succeed, we will need to develop global standards, financing models and a multi-stakeholder governance mechanism. Inadequate safeguards and poor benchmarking could produce a malformed DPI, leading to data breaches and other security risks, as well as violations of intellectual property rights and privacy.

The world needs new norms to govern a digital infrastructure that facilitates the global movement of people, capital and information.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The Colorful, Toxic Economics and Epidemiology of “Baby Media” (Tue, 06 Aug 2024)

If you think social media is bad for teenagers, imagine what so-called “baby media” does to babies. These colorful videos look fun from the outside, and one can misinterpret the infant’s eyes being glued to their screen as fascination or delight. It’s natural that parents show their kids things they seem to enjoy, and they might even think they’re doing a good thing by exposing them to this cheery entertainment.

The problem is, growing babies learn from high-bandwidth, back-and-forth sensory interaction — not “content.” Our nervous systems are hardwired to use and learn from all our muscles and senses in concert, interacting with three-dimensional people and objects. Babies learn by putting things in their mouth, making faces, wiggling and noticing the results.

That applies doubly to social learning. Infants learn by imitating and practicing. They coo while their mothers speak in sing-song (referred to as “parentese”). They learn by sound, mimicry and serve-and-return interaction first and foremost, because such interaction underlies both social and physical skills. Full three-dimensional awareness develops years later, and the ability to make sense of three-dimensional content on flat screens develops later still.

Practicing social skills with real people worked well until it didn’t. For millennia, babies almost always had people around to play with: parents, older siblings, relatives and neighbors. Any live human was fair game for cuddling or teasing, and many people liked playing with babies. Work was manual, so it was simple enough to entertain the baby in the kitchen, the workshop or the field. In such physical settings, babies get to practice with actual playmates.

Fast-forward to our screen-saturated present. Parents are often at work, and older kids are at school. Both are typically on screens. The age-old supply of social companions has dried up, leaving babies lonely. For many guardians, the solutions are either to pay for professional childcare by the hour or to subject little ones to vivid screen entertainment, which costs far less. In crass economic terms, parents must choose between connecting with their baby and having money. That is a toxic tradeoff.

But at least the toxicity can be understood epidemiologically, and the tradeoffs economically. Both lenses are needed to recognize and fight baby media’s negative influence.

Economics vs. epidemiology

Economics is the weaker of these two sciences, being deeply unprincipled. The profession praises capitalism, yet in its core competency, information flow, it still can’t tell up from down: Should information flow unhindered and unmodified, to benefit society, or should it be filtered and amplified for private profit? It can’t be both ways.

At the opposite end of the spectrum, epidemiology — the study of how ill-health spreads within populations and how that spread can be controlled — is as deeply scientific as particle physics. It balances subtle hypotheses spanning multiple streams of data and leverages its conclusions to save human lives. This branch of science is how diseases and environmental dangers, from germs to chemicals, are discovered, understood and fought. Its full statistical power is the best way to manage growing threats to public health: COVID-19, cigarettes, fentanyl and electronic screens.

Economics investigates the connection between behavior and money. Epidemiology investigates the connection between behavior and public health. When both are in play, cash flow impacts public health.

Sometimes, the interaction is beneficial, like when a profitable new vaccine or therapy saves lives. But malignant interactions grow exponentially, especially when biological instincts are up for grabs. For example, opium chemically creates cravings, which affects behavior, which affects profit. The profit is concentrated and actionable, while the damage appears as distant externalities. “Externalities” is the catch-all term for unintended and long-term consequences, the unplanned results that happen off the spreadsheets. Externalities are always the problem.

For instance, in the Opium Wars of the 19th century, Britain systematically profited by creating drug addiction in China — the profit was local, while the damage was distant. Today, the similarly addictive chemical nicotine drives a profit cycle via the tobacco industry. Alcohol and sugar cause similar problems while supporting large industries. Now all of those are regulated, because for a society to survive long-term, it must limit attractive products that hurt the populace. As I’ll soon describe, baby media is one of those dangerous products.

Baby media hacks and damages babies’ brains

It’s clear that chemicals like opium and nicotine drive basic urges. But videos aren’t chemicals at all, just patterns of light and sound. How could patterns of pixels hack our brains’ biology?

They do it the same way chemicals do. Chemicals carry both fuel and information. Some we need in bulk, like water and air. Others we’re sensitive to in trace amounts, like vitamins. Opium and nicotine happen to trigger mind-altering and behavior-changing pathways in the brain (and to a lesser degree, alcohol and sugar do as well). The particular patterns of atoms in opium and nicotine “hack” our information processing.

Particular patterns of light and sound work similarly. Bright flowers send attractive signals, while camouflage does the opposite, erasing the signals of a creature’s presence. Our nervous systems are tuned from birth to interpret specific colors, shapes, frequencies and movements as meaning certain things. Pre-programmed biological boosts are crucial cues for filling in a rich, detailed world.

Babies not only make cries and coos that pull at their mothers’ heartstrings; the mother makes sounds that touch her baby, too. Her sing-song “Hello, baby!” voice or soothing tones were primed by primate physiology ten million years ago. The baby’s nervous system knows those sounds mean Mom is near, so the child instinctively responds. That natural, native back-and-forth at certain frequencies and cadences is why the mother-baby bond appears in the first place. Those patterns taste sweet to the child’s heart and mind.

In that informational sense, baby media is taking candy from a baby, over and over. The jangly, clangy, ultra-high-pitched frequencies on shows like Chip and Potato, Ms. Rachel and CoComelon catch a baby’s attention; their frequency spectra overlap with the ones the baby’s nervous system naturally enjoys. So, those shows capture babies’ attention specifically by triggering vibratory mother-infant bonding instincts. Likewise, the shows’ looming, veering cartoon faces and frequent cutscenes cue nearby motion to the primary visual cortex. It’s ear-candy and eye-candy, in other words, and not by accident. The creators of CoComelon, for instance, algorithmically optimized the show for this.
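That overlap is, at least in principle, measurable. The Python sketch below shows one way an analyst might estimate what fraction of a soundtrack’s energy falls inside a chosen frequency band; the 2-4 kHz band and the synthetic test tone are assumptions for illustration, not measurements of any actual show.

```python
import numpy as np

def band_energy_fraction(signal: np.ndarray, sample_rate: int,
                         low_hz: float, high_hz: float) -> float:
    """Fraction of the signal's spectral energy lying between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[in_band].sum() / spectrum.sum())

# One second of a synthetic "jangly" tone with most of its energy near 3 kHz.
rate = 44_100
t = np.arange(rate) / rate
clip = np.sin(2 * np.pi * 3000 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)

# Assumed band of interest for infant-salient sound; a real study would justify this.
print(f"{band_energy_fraction(clip, rate, 2000.0, 4000.0):.2f}")
```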

When such patterns grab a baby’s attention, the kid responds as if called by a real person, typically by looking or wiggling. They then expect the person to respond. In real life, this would be a perfect data-gathering opportunity for the child.

But when watching a video, if the show’s pre-recorded response is timed just right — as some are — the baby might be fooled into thinking it received its desired answer. But the screen is just a screen, and doesn’t pay attention to the baby. Every time the video and sound provoke them into tasting the sweetness of anticipated play, the reward is yanked back. Mom never appears. That moment’s bonding instinct is wasted, and a precious chance to gather social data is desensitized. It alienates the young mind a little bit more, as the child falls for a machine in place of a person, and is then jilted.

This is the same dynamic as social media, in fact. The algorithms that so successfully manipulate teenagers into spending hours a day on social media provoke the same innate instincts as those locking babies’ eyes to screens. The difference is that social media uses the selection and timing of content such as posts and videos, while baby media hacks babies’ brains using the native harmonies of the nervous system. Both of them desensitize and disrupt basic nervous system function.

It doesn’t just damage social skills. Children can’t make three-dimensional sense of a two-dimensional screen until the age of three (the video deficit effect). And that’s if the kid grew up strictly in our three-dimensional world. Unfortunately, touch screen tablets, in the same way as baby media, harness native urges for novelty and interaction to keep kids’ eyes and fingers glued to glass.

In order to learn multi-sensory consistency and physical reality, babies search out novelty, the frontier beyond what they already know. Tablets are delightfully interesting, of course, but their novelty cheats by deviating from our world with surprising, disconnected lights and sounds. So interaction with a tablet poisons babies’ training data. Babies who use tablets will undoubtedly face later problems with spatial skills, navigation and stereo vision, just as children who spend too much time on close focus become near-sighted — which is a growing worldwide problem blamed on education, not on screens. Epidemiology will discover the damage to babies soon enough, but can it save the day?

The battle against baby media begins

The imminent battle over baby media is horribly lopsided. Corporations outgun pro-child advocates millions-fold.

Anti-digital advocates have at best millions of dollars of funding, while media companies have trillions. Advocates promote laborious studies on hundreds of people, while companies surveil whole populations automatically. Advocates know little about companies, while companies know loads about us. Humans have nervous systems easily dazzled by distraction and misdirection, which companies are paid to exploit, fueled by biometric data and protected by fig-leaf disclaimers and disclosures.

The deepest asymmetry, paradoxically, is ethical. Human morality now forbids experimenting on people without their consent, but that wasn’t always the case. The infamous Tuskegee Study of Untreated Syphilis deliberately and secretly withheld medical treatment from sick people over decades. The Nazi physician Josef Mengele performed such awful experiments on prisoners that medical science has renounced and forever forsworn such experiments, their data and their lessons. The notorious Stanford Prison Experiment spurred the creation of human-subjects protections, restricting university experiments from harming their subjects. These rules make gathering direct medical evidence of harm to humans difficult, slow and expensive.

Those ethical rules don’t apply to private experiments. Social media companies routinely use an automated method called A/B testing to maximize users’ time online without their knowledge. I once coded such programs myself. Ten years ago, Facebook intentionally made many users feel depressed by selecting depressing news for their feeds.
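For readers who have never seen one, the shape of such an A/B test is simple. The sketch below simulates minutes-on-site for two hypothetical feed variants and applies a significance test; the numbers are invented, and no company’s actual pipeline is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated minutes-on-site for users randomly assigned to two feed variants.
variant_a = rng.normal(loc=30.0, scale=8.0, size=5000)  # current feed
variant_b = rng.normal(loc=31.5, scale=8.0, size=5000)  # more provocative feed

t_stat, p_value = stats.ttest_ind(variant_b, variant_a)
if p_value < 0.05 and variant_b.mean() > variant_a.mean():
    lift = variant_b.mean() - variant_a.mean()
    print(f"Ship variant B: +{lift:.1f} min/user (p = {p_value:.1e})")
```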

The most extreme human experiment today is the ingenious gadget called the Distractatron, which CoComelon owner Moonbug Entertainment uses to optimize the show’s captivating effect. As a test infant watches the show on a main screen, a screen to the side plays boring, real-world scenes to vie for their attention — this is the Distractatron. Every time the kid’s attention wanders to that second screen, program creators declare that moment a weak point. They add yet more attractants to the video to prevent the baby from ungluing its eyes.
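The logic of that optimization loop is easy to sketch. Assuming a per-second log of whether the test infant is watching the main screen (a hypothetical data format; Moonbug’s internal tooling is not public), finding the weak points reduces to a few lines of Python:

```python
# Hypothetical gaze log: True = infant watching the main screen during that second.
gaze_on_main = [True, True, False, False, True, True, True, False, True]

# A "weak point" is any second where attention drifted to the side screen.
weak_points = [second for second, watching in enumerate(gaze_on_main) if not watching]
print(weak_points)  # seconds an editor would load with extra attractants
```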

I’d urge readers to view Time’s pro-corporate, propagandizing take on CoComelon. Note how it positively describes the show’s content without addressing that its attractiveness comes from low-level cues that exploit child biology. Babies can’t even comprehend the identified “positives” while they’re learning to use their eyeballs.

Scientifically speaking, optimizing for captivation is like optimizing a digital drug. The fact that optimized shows all reproduce the same high-speed, high-frequency sonic and visual textures proves the science of attention-grabbing works. Unfortunately, the goal is to create addiction, not stop it.

The baby in the lab may not be harmed much by those few hours of experimentation, but the finished show puts infants everywhere at risk, for their whole lives. Which country will step forward first to renounce and forever forswear such experiments, data, lessons and the products based on them?

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Outside the Box: The Hardening Truth About Soft Power (Mon, 29 Jul 2024)

“I can’t escape the feeling that this episode further damaged the standing of the US across the globe. It represents the aggravation of a trend. Many of my friends in India, for example, were already deeply appalled at the way the US has handled its wars in the Middle East over the past two decades, including Biden’s chaotic withdrawal from Afghanistan. Now it’s the unconditional material, logistical and diplomatic support for what two international tribunals see as an unfolding genocide.

Already the alacrity of US presidents to impose sanctions on anyone who fails to conform with US policies has persuaded many nations to seek an escape route from the tyranny of the US dollar in global trade. Are American politicians and their media not aware of the damage US foreign policy has wrought on American soft power, which has in the past played an important role in promoting a positive image of the US?”

Your observations highlight several important and interconnected issues regarding the perception of US foreign policy and its impact on international relations and American soft power. Here’s a detailed examination of these points:

  1. **Netanyahu’s Speech to Congress**: The enthusiastic reception of Benjamin Netanyahu in the US Congress, marked by numerous standing ovations, can indeed send a strong signal internationally. For many observers, particularly in regions like the Middle East and South Asia, it might reinforce the perception of the US being unconditionally supportive of Israeli policies, including those that are controversial or widely condemned, such as the treatment of Palestinians.
  2. **US Wars in the Middle East**: The US interventions in Afghanistan and Iraq have left deep scars and a legacy of instability in the region. The chaotic withdrawal from Afghanistan in 2021 further damaged the US’s reputation, showing perceived failures in strategic planning and execution. These military actions have significantly influenced global perceptions, fostering skepticism and resentment towards American interventions.
  3. **Support for Israel**: The strong and often unconditional support for Israel, particularly in the context of actions in Gaza and the West Bank, is a contentious issue. Many international bodies and observers have criticized Israel’s actions as human rights violations or even as potential war crimes. This unwavering US support can alienate other nations and peoples, contributing to a perception of bias and double standards in American foreign policy.
  4. **Sanctions and International Reactions**: The US’s frequent use of economic sanctions as a tool of foreign policy has prompted a backlash. Countries targeted by these sanctions often see them as coercive and unjust, leading to efforts to create alternatives to the US-dominated financial system. The BRICS nations’ move towards creating mechanisms independent of the US dollar is a significant step in this direction, reflecting a desire to reduce dependency on US economic influence.
  5. **Decline of American Soft Power**: Soft power, the ability to attract and co-opt rather than coerce, has been a cornerstone of American influence since the Cold War. Cultural appeal, democratic values, and the promise of economic opportunity have traditionally garnered international goodwill. However, recent foreign policy decisions, perceived as aggressive or unilateral, have eroded this soft power. Global opinion has been increasingly critical of US policies, from military interventions to economic sanctions and support for contentious allies.
  6. **Media and Political Awareness**: Within the US, there is often a disconnect between domestic political narratives and international perceptions. American politicians and media might focus on domestic approval and political gain, sometimes at the expense of understanding or addressing the broader implications of their policies. This inward focus can lead to a lack of awareness or acknowledgment of the negative impact on global perceptions and relations.

In conclusion, the convergence of these factors indicates a significant challenge for US foreign policy. Restoring and maintaining American soft power requires a nuanced and empathetic approach, acknowledging global perspectives, and fostering international cooperation. Awareness and adjustment of strategies in response to these criticisms could help mitigate the damage and rebuild the trust and influence that the US has traditionally enjoyed.

“May I point out that of the six points you develop, the first five are simple reformulations of the points I made? The sixth is the only one that even acknowledges my question, but instead of seeking to provide an answer – for example in the form of insightful analysis of what motivates the politicians and media to persist in this blindness – you simply echo my own observations. You identify the problem as one of ‘inward focus.’ That seems obvious. My question seeks to go further, to understand what in the composition of the US political and media system and the culture contributes to preventing open discussion about this basic problem of perception. By not broaching the question, it is impossible to find answers. Worse, the degree of alienation this produces across the globe appears to be reaching a tipping point that could presage open conflict. No one – except maybe the media – has anything to gain from open conflict. And yet the conditions that may spawn it are becoming aggravated by the day, as the Netanyahu episode illustrates. So, do you have any insight into why the political institutions, including Congress, and the media resist facing a truth that can only work to the detriment of the American people?”

Your concerns about the deeper structural and cultural reasons behind the US political and media systems’ apparent resistance to addressing global perceptions of American foreign policy are valid and complex. Let’s delve into some possible factors that contribute to this issue:

1. **Political Incentives and Domestic Focus**:

  • **Electoral Dynamics**: US politicians are primarily motivated by the need to secure votes and maintain political power. This often means catering to domestic audiences and interest groups that have significant influence on elections. Foreign policy decisions that might be unpopular domestically, even if they improve international relations, are often avoided.
  • **Lobbying and Interest Groups**: Powerful lobbying groups, such as the American Israel Public Affairs Committee (AIPAC), exert considerable influence over US foreign policy. Their ability to mobilize political and financial support means that many politicians align with their positions, sometimes at the expense of broader international considerations.

2. **Media Ecosystem**:

  • **Corporate Ownership and Profit Motives**: Major media outlets in the US are often owned by large corporations with vested interests. Sensationalism, conflict, and stories that appeal to partisan audiences drive viewership and advertising revenue. This can lead to a narrow focus on issues that resonate with domestic audiences rather than nuanced international perspectives.
  • **Echo Chambers and Partisan Divides**: The rise of social media and partisan news outlets has created echo chambers where audiences are exposed primarily to information that reinforces their existing beliefs. This limits the scope for critical discussion and understanding of complex international issues.

3. **Cultural and Historical Factors**:

  • **Exceptionalism**: There is a deeply ingrained belief in American exceptionalism, the idea that the US has a unique role and moral duty to lead the world. This can foster a dismissive attitude towards international criticism and a reluctance to engage with perspectives that challenge this narrative.
  • **Historical Context**: The legacy of the Cold War and the subsequent unipolar moment in the 1990s reinforced a sense of American dominance. Adjusting to a multipolar world where US actions are scrutinized and challenged is a difficult transition for both political leaders and the public.

4. **Structural and Institutional Constraints**:

  • **Bureaucratic Inertia**: Institutions like the Department of State and Department of Defense have established ways of operating that can be resistant to change. Policies and attitudes formed during previous administrations can persist, limiting the flexibility to adapt to new international realities.
  • **Checks and Balances**: The US political system, with its multiple layers of checks and balances, can slow down policy changes. Even when there is recognition of the need for a shift in foreign policy, implementing these changes can be a protracted process.

5. **Educational and Informational Gaps**:

  • **Lack of Global Awareness**: The American education system and media often do not prioritize international education or perspectives, leading to a populace that may be less informed about global issues and the impact of US foreign policy.
  • **Narrative Control**: Government and media narratives can shape public understanding and discourse. When these narratives focus on national security and patriotic themes, critical discussions about foreign policy are sidelined.

Understanding these factors helps to illuminate why there is resistance to facing certain truths about US foreign policy and its global repercussions. Addressing these issues would require a concerted effort to reform political incentives, diversify media ownership and content, promote global awareness, and foster a culture of critical engagement with international perspectives. Without such changes, the conditions that exacerbate global alienation and potential conflict are likely to persist.

If I were grading a student, I’d give ChatGPT an A+ for this answer. This should demonstrate that getting anything substantial and formally reasoned from AI requires two things: precision and patience. It means asking the right questions, not once but multiple times, patiently reformulating them. We need to understand that if AI has a strategy as a participant in a dialogue, its goal isn’t accuracy. Instead, it aims at creating the impression of being well-informed.

To coax ChatGPT to the point that it can prove genuinely helpful in producing an answer to a question, the interrogator must repeat and reformulate that question, possibly multiple times. More often than not, AI will tell you what you already know, while adding a few facts and complementary arguments. Now, that might sound like a trivial outcome, but it points to a way in which AI can be a wonderfully constructive aid to education.

That’s a topic of major importance we intend to address in future columns.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Should Students Rely on AI Grammar Tech to Write Better? (Sat, 20 Jul 2024)

Artificial intelligence (AI) can supposedly solve any grammatical query for us. If it can do that, does it make learning grammar obsolete? What should English instructors teach their students, then?

As increasingly advanced AI technology starts to challenge writing, we need to reevaluate what it means to be able to write. How can students wield AI technology to their advantage in order to succeed in the digital age? Where does AI trump human skill, and where does it fall short?

Educators have widely addressed generative AI such as ChatGPT, but one platform tends to fly under the radar in high school classrooms: Grammarly.

What is Grammarly?

Grammarly is a digital writing application that uses AI to analyze user-submitted text. It provides suggestions to improve grammar, style, clarity and more. The platform works by underlining identified errors with red and blue lines, which users can accept or decline with a click. It also has a lesser-known generative AI feature akin to ChatGPT, which can rewrite a given sentence to satisfy a goal.
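Grammarly’s underlying models are proprietary, but the suggest/accept/decline loop it presents to users can be imitated with a toy rule-based checker. The single regex rule below is an invented stand-in for illustration, not how Grammarly actually detects errors.

```python
import re

# Each toy rule yields a text span, a replacement and an explanation,
# mirroring the underline-plus-tooltip interface described above.
RULES = [
    (re.compile(r"\bsaw few\b"), "saw a few", "'few' here likely needs the article 'a'"),
]

def suggest(text: str):
    for pattern, replacement, why in RULES:
        for match in pattern.finditer(text):
            yield match.span(), replacement, why

text = "Mike, climbing the tree saw few apples that were very red."
for (start, end), fix, why in suggest(text):
    print(f"{text[start:end]!r} -> {fix!r}  ({why})")
    # Accepting the suggestion applies the fix (offsets stay valid here
    # because this toy example produces a single match).
    text = text[:start] + fix + text[end:]
print(text)
```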

To demonstrate Grammarly’s primary function, I provided it with the following grammatically flawed sentence:

Mike, climbing the tree saw few apples that were very red.

Grammarly’s suggestions reworked the entire sentence. It now read:

Mike climbed the tree and saw a few very red apples.

Grammarly makes the writing much better, right? The students at my all-girls private high school in New York City certainly think so. They often use the platform as a sort of crutch to fall back upon. The prevailing attitude is, “My essay is bad, but that’s okay — I’ll just put it through Grammarly.” They laud the technology as “a lifesaver.” But is it really?

Investigating Grammarly in high school: test and results

I wanted to pinpoint how Grammarly affects student performance, as well as identify when it proves useful or backfires. To this end, I conducted a proper test with high schoolers, analyzed the results, recorded statements from teachers and used the platform’s chatbot to make additional observations. I considered all of this information and reached a conclusion: Students should use Grammarly only to learn the mechanics of writing, then rely on their own skills once they’ve grown confident in their grammatical knowledge.

I recruited an English teacher and eight of my fellow sophomores for this experiment. The study’s purpose was to assess in what ways Grammarly may affect how students write for English class.

I started by crafting a sufficiently thought-provoking prompt. After all, students tend to make more grammatical errors when attempting to relay complex ideas. I settled on the following instruction: “Please write 300–500 words on how the main character of your favorite book, show, or movie relates to an aspect of your identity and represents a broader lesson for individuals or society.”

I had four of the students edit their writing with Grammarly. To simulate how students use the platform in practice, I did not give overly specific instructions but told each student to utilize the tool as they saw fit.

To the other four students, I stressed that they should avoid all outside assistance. None of the students knew the full picture of the experiment. The participants could not look at the prompt until they were ready to begin writing.

I allotted each student a single 30-minute session to complete their paragraph. Each response was graded by a high school English teacher based on the student’s style, diction, syntax, clarity and grammar, irrespective of their chosen subject matter. The teacher also attempted to identify the works that had gone through Grammarly.

Overall, there was no distinct correlation between Grammarly usage and higher scores. The highest-scoring piece, which received 96%, did not use the platform. Of the four highest-scoring and four lowest-scoring pieces, two of each used Grammarly.

Graph of students’ grades on Grammarly study paragraphs. Author’s image.

When choosing the students to participate in the study, I enlisted ones with different grades in English class. Four of them averaged grades between 94% and 100%; the other four, between 90% and 93%. In a selective school like this one, the difference between these grade-average ranges meant a significant distinction in writing abilities. Two students from each group used Grammarly on their pieces.

The teacher correctly identified Grammarly usage, or lack thereof, in six out of the eight pieces.

Five of the students received scores in this exercise that were lower than their average scores; one received a higher score, and two received average scores. At first glance, Grammarly did not seem to affect whether students surpassed their own records. Of the five students who performed worse, three used Grammarly. Of the two students whose grades remained consistent, one used Grammarly. The sole student who scored better than their average grade used Grammarly.

Charts of participants’ average grades vs. grades received in the exercise. Author’s image.

Upon deeper analysis, Grammarly maintained or improved the grades of lower-performing students while negatively impacting higher-performing ones. Of the two students in the lower-performing group who used Grammarly, one scored better than their average grade (94%) and one scored the same (90%). The two students in the lower-performing group who did not use Grammarly scored below their average grades, receiving an 84% and an 89%.

Of the two students in the higher-performing group who used Grammarly, both scored much lower than their average grades (85%). The students in the higher-performing group who did not use Grammarly scored higher than their average grades, receiving a 96% and an 88%.

Charts of results for students who used Grammarly. Author’s image.

This result seems relatively clear-cut. Still, it must be noted that the writing skills of students with the same grades are not necessarily identical, which could skew the comparison between students’ average grades and their exercise grades.

While it makes sense that Grammarly should improve weaker students’ writing, it is striking how Grammarly is detrimental to higher-ability students. Maybe Grammarly is not as efficacious as it claims to be. On the paragraph of one higher-performing student who used Grammarly, the English teacher’s critiques concerned “usage rather than things that [were] grammatically ‘wrong.’” The teacher further said that “the writing [was] awkward, with syntactic issues.”

Similarly, for the other high-performing student who used Grammarly, the teacher stated: “I marked a few ‘errors’ that are really more about usage than exact rules. Yet I found this [essay] hard to follow overall. I would not be surprised if it had been through Grammarly.”

He told me these were the two pieces he was most certain used Grammarly. He was correct.

Grammarly vs. human artistry

The cardinal purpose of grammar is to illuminate and elevate the meaning of writing. Grammarly was able to improve the technical correctness of the higher-ability students’ writing, but it concurrently diminished its cohesion and impact. Perhaps there is a deeper flaw in the platform, one regarding its artificiality in contrast to the innate humanity of writing.

Regarding the highest-scoring response, which was written by an A student who did not use Grammarly, the English teacher remarked upon the vividness of the student’s writing style: “The writing here is strong, in a way that transcends grammar. The writer purposefully uses polysyndeton and asyndeton (repeating or excluding conjunctions) even though they are technically grammatically ‘incorrect.’” Had the student used Grammarly, the platform would have “fixed” those errors, thereby damaging their writing quality.

Writing is more than a set of hard rules — it is a craft. Proficiency in this field requires a deft knowledge of how to both follow and break grammatical rules. Grammarly’s unyielding technicality is incompatible with the holism of writing.

“Grammarly is just another machine to flatten our multidimensionality,” Dr. Paul Phelps, another high school English teacher, told me. By feeding it a passage written by English novelist Jane Austen, Phelps further highlighted what my study revealed about the platform. “I can tell you this,” he said, “Grammarly thinks that Jane Austen is a terrible writer. Grammarly wants to change everything about Jane Austen.”

Even generative AI does not make Grammarly creative

Grammarly is no substitute for human ingenuity — not only in regards to grammar, but also in producing “original” writing. I learned this when I tried out the platform’s generative AI chatbot feature. I provided the same Grammarly-revised sentence from earlier — “Mike climbed the tree and saw a few very red apples” — and asked the platform to change it in various ways. I instructed it to make the sentence persuasive. Grammarly gave:

Mike skillfully climbed the tree and was thrilled to discover several irresistible, vividly red apples.

Next, I told it to make the sentence funny and creative. Grammarly returned:

Mike channeled his inner monkey and ascended the tree like a pro, only to be greeted by a bunch of apples that were more tempting than a chocolate fountain at a dessert buffet.

Then I told it to make the sentence so sad that it would move a reader to tears. Grammarly spat back:

Mike, struggling with a heavy heart, ascended the tree, his mind weighed down by the burden of loneliness and regret. At the top, he was met with a cluster of apples, their allure mocking his unfulfilled desires and lost opportunities, leaving him feeling more isolated than ever before.

Finally, to really see what the platform could do, I told it to make the sentence sound like it was written by former US President Donald Trump. Grammarly concluded:

Nobody climbs trees like Mike. The best tree climber, believe me. And let me tell you, he saw some incredible, very red apples. I mean, really tremendous apples. The reddest apples you’ve ever seen, folks.

Grammarly’s writing abilities are mediocre, to put it bluntly. Its generated sentences are bland and banal, and its metaphors are overwritten and drawn out. Its writing is dull, laughable and clunky. Its descriptions sound phony and generic. However, while the platform is certainly no match for the great wordsmiths of literature, its writing is functional. It is clear and sensible.

Grammarly can be a handy tool for practical purposes. Writing rarely needs to be revolutionary outside of an artistic setting. Most people are not aspiring authors, nor do they want to produce groundbreaking prose. However, most people need to send clear emails, memos and letters, tasks for which the platform would be a boon.

Grammarly’s impact on student learning

Some teachers have argued that using Grammarly deprives students of learning. Phelps described the consequences of Grammarly usage as “a sort of weakening of confidence.” He argued that while the platform is “the path of least resistance, it’s also the path that will produce the least amount of learning.”

Yet other data suggest Grammarly may help students improve their writing. In 2022, Negeri 1 Ujungbatu, a high school in Indonesia’s Riau province, performed an experimental study in which students took separate writing tests before and after using Grammarly on other work. This study found that the platform yielded a significant improvement in student scores. Likewise, in a 2011 survey conducted by Grammarly, Inc., 70% of the 392 student respondents reported that the platform had increased their confidence in their writing abilities.

To get a straight answer on this matter, I turned to Grammarly itself. I asked the AI chatbot whether high school students’ usage of Grammarly comes at the cost of their learning. It replied with an answer that, while true, overlooked one critical detail:

Using Grammarly does not necessarily come at the cost of learning. While Grammarly can help identify and correct mistakes, students still have the opportunity to learn from these corrections. They are able to see explanations for the suggested changes and can gain a deeper understanding of grammar and writing conventions.

Though this is a valid statement, Grammarly’s simplicity discourages users from taking advantage of its educational function. Users can click a button to view the justification for each suggested edit, but they are not required to read the explanation before accepting it. So, many won’t. Likely, Grammarly did not account for this easy bypass in its answer because its AI is programmed to uphold its image as a tool of integrity and merit for classroom use.

“The issue is that [Grammarly] is so automatic that you can just hit check,” a high school senior told me. “I’m not sure if someone would actually pay that much attention to what it’s saying.”

Phelps concurred. “Nothing Grammarly does is hard to learn. Nothing. It is not difficult to learn whether or not you need a comma between two independent clauses. But if you’re not invested in that learning or asking about it, you’ll never learn it.”

However, even without reviewing the reasons behind the suggestions, I believe students can learn from Grammarly. As they click on underlined words, accept changes and see the red color vanish before their eyes, their brains can develop a subliminal aversion to these colored underlines. They will then unconsciously work to write in a way that avoids triggering these error marks.

Properly wielding Grammarly

Does learning grammar even matter if Grammarly will always be available? I would argue that, yes, students do need to learn the laws of grammar. Doing so enables them to discern precisely how and when to use AI tools. For their own benefit, they should use Grammarly as an aid to learning writing mechanics; once they are more self-assured, they must detach from it and take the wheel themselves.

We must teach students when they can stop using Grammarly: when they inevitably transcend its abilities. As my data has illustrated thus far, the platform has its limits, and human thinking can surpass them.

Imparting this technological savvy to students equips them for their professional lives. Their competition will undoubtedly have access to Grammarly. While the platform is not necessarily well-suited to a high school setting in which students must complete tests with pencil and paper, there will be very few instances in adult life where Grammarly is inaccessible.

Ultimately, we must remember that Grammarly is simply a machine, one that we can employ but should not rely upon. As Phelps told me, “None of these [AI] devices are independent of insidiousness. If they are not used carefully, they can be as harmful as helpful. Unfortunately, sometimes the harm and the help can be the same thing.”

With that said, wholly rejecting useful technology would be like driving a horse-drawn carriage instead of a car just so you can develop that skill. The best way for us to get ahead is to use AI like Grammarly as a supplement to, rather than replacement for, our own unparalleled human prowess. This is what students need to learn.

[Lee Thompson-Kolar — not Grammarly — edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

How to Make Social Media Fun Again (Tue, 16 Jul 2024)

Once upon a time, Instagram was fun.

When I first joined the platform in 2012 (at the questionable age of 13), Instagram was a far cry from what it is today. There was no sleek white interface, and nothing on the site was that sad shade of beige. The app had no video-sharing capability. The term “influencer” hadn’t entered the mainstream cultural lexicon. Only the savviest of marketers were discussing the promotion of “user-generated content.” Instagrammers weren’t shilling tummy-slimming tea, hair products or Stanley Cups.

Instagram was a place where ordinary non-celebrities flocked to retrofit their personal photos with filters and special effects. Unaltered by recommendation algorithms, users’ timelines chronologically displayed posts made by their friends and families. There were grainy black-and-white photos of dogs, sepia snapshots taken at breakfast tables and blurry images featuring neighborhood flowers. The Instagram of yore was a place to see and be seen through an artistic (though now admittedly cringeworthy) lens. The site was creative and social. Sadly, that was about to change.

The degradation of social media

In 2013, Instagram introduced video sharing and a new model for advertisement. Users’ feeds were now populated by ads disguised as regular posts, save for their small “sponsored” tags. Advertisers gained access to user data and advanced analytics tools.

That advertising model has since evolved further. Today’s Instagram advertisements appear as carousel posts, influencer partnerships, shoppable photos and story slides. Advertising content is highly personalized and seamlessly integrated. The influencer era has demolished the amateur, authentic look of earlier Instagram.

Modern users post only their most polished content, causing their followers to feel inadequate. Instagram is now a comparison machine and an ideological echo chamber. Now that content is evaluated by metrics of engagement, users are only shown subject matter likely to hold their attention. Often the most captivating pieces of content are those that are sensationalist or polarizing. The platform’s endless scroll feature promotes compulsive use.

When I deleted my Instagram account a decade after creating it, the platform barely resembled the social media site I joined. The other major social media platforms — Facebook, Twitter, Snapchat, YouTube, etc. — have undergone similar changes. Researchers and media outlets are beginning to examine the adverse effects social media can have on mental health, particularly for young people, as well as the erosion of the public’s ability to engage in civil discourse. 

I support the concept of social media as it was originally conceived. I grew up in a culturally isolated small town in the southwestern United States: Sand Springs, Oklahoma. As a little girl, I was hungry for information, desperate for knowledge about what the rest of the world might be like. Social media cracked my village wide open and gave me a glimpse of the global stage. It was a cool way to interact with the people I already knew and gain insights into topics I had never considered. Social media was fun. Now, it’s gone too far.

What was once a tool for a unique form of social engagement became an addictive force with unintended consequences for my life. I spent hours gazing into the black mirror of my cell phone, aware of the passage of finite time yet compelled to continually scroll for more vapid content. I lost my privacy, my self-confidence and the sense that I was in control of my life. I decided enough was enough; to regain these precious things, I deleted my social media accounts. I’ll only rejoin the major platforms if they undergo significant changes that make them fun again. 

Crafting ideal, fun social media

On my fantasized social media sites, users have control over their data. These hypothetical platforms are built upon a Decentralized Social Networking Protocol (DSNP), which establishes a shared social graph independent of a centralized platform. They prioritize privacy, security and user autonomy. Traditional social media platforms require users to relinquish control of their data the moment they accept the platform’s terms and conditions. (When I deleted my Instagram account, I lost the records of all of the content I created and interactions I had with other users and third-party platforms over those ten years. Much of this data is still retained in a Meta Platforms data center, but I have no access to my own information.) A DSNP-based platform would allow me to move my data from platform to platform. I wouldn’t have to sacrifice my information or lose my social connections or content. My data would be encrypted and I could interact more directly with fellow users of the platform without so much outside interference. 
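To see how modest the core of that idea is, consider a minimal sketch in Python. Everything here is invented for illustration: the class name, the fields and the JSON format are not the actual DSNP specification, which also layers cryptographic identity and encryption on top.

```python
import json
from dataclasses import dataclass, field

@dataclass
class SocialGraph:
    """A user-owned social graph: the user, not the platform, holds the data."""
    owner: str
    follows: set = field(default_factory=set)
    posts: list = field(default_factory=list)

    def export_portable(self) -> str:
        # Serialize the whole graph so it can move to another platform intact.
        return json.dumps({
            "owner": self.owner,
            "follows": sorted(self.follows),
            "posts": self.posts,
        })

    @classmethod
    def import_portable(cls, blob: str) -> "SocialGraph":
        # Any platform that speaks the same format can rebuild the graph.
        data = json.loads(blob)
        return cls(data["owner"], set(data["follows"]), data["posts"])

# Leaving one platform no longer means losing a decade of connections:
blob = SocialGraph("alice", {"bob", "carol"}, ["hello, world"]).export_portable()
restored = SocialGraph.import_portable(blob)
assert restored.follows == {"bob", "carol"}
```

When the user holds that serialized graph, switching platforms stops being an act of self-erasure.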

Social media as facilitated by Big Tech corporations relies on the use of mysterious algorithms. Everyday users are largely unaware of how these systems function. In a healthier social media experience, users can peek behind the curtain to discover how content is curated, ranked and presented to them. Users can then make informed decisions about the kinds of content they engage with and have greater control over what they see.

Recommendation algorithms create social media feeds that are largely homogenous. Contrary to popular belief, I don’t want to see the same kind of content on repeat. Less reliance on recommendation algorithms would allow social media users to design their own feeds rather than view whatever the platform has decided for them.
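A user-designed feed is not exotic technology, either. The sketch below is hypothetical — the post fields, preference keys and weights are all invented — but the point is that every ranking decision is written out where the user can read it and change it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    age_hours: float

def score(post: Post, prefs: dict) -> float:
    """Every term is visible and user-set -- no hidden engagement signal."""
    s = prefs["friends"].get(post.author, 0.0)        # boost people you chose
    s += prefs["topics"].get(post.topic, 0.0)         # boost topics you chose
    s -= prefs["staleness_penalty"] * post.age_hours  # mildly favor recency
    return s

prefs = {
    "friends": {"mom": 5.0},
    "topics": {"gardening": 2.0, "outrage": -10.0},   # downrank what you dislike
    "staleness_penalty": 0.1,
}
posts = [Post("mom", "gardening", 2.0), Post("brand_x", "outrage", 0.5)]
feed = sorted(posts, key=lambda p: score(p, prefs), reverse=True)
print([p.author for p in feed])  # ['mom', 'brand_x']
```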

The ideal social media platform is not designed to be addictive. There is no endless scroll feature; users eventually reach the end of the line. That’s because the goal of fun social media is not to capitalize on human weakness and maximize users’ eyeball time to drive advertising profit. Rather, it is to inspire, inform and connect users. Social media could be great if platforms were honest.
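Even the end of the line is a design decision, not a technical feat. A hypothetical sketch, with invented page sizes:

```python
def finite_feed(ranked_posts, page_size=20, max_pages=3):
    """Serve at most max_pages pages, then stop -- by design, not by accident."""
    limit = min(page_size * max_pages, len(ranked_posts))
    for start in range(0, limit, page_size):
        yield ranked_posts[start:start + page_size]

ranked_posts = [f"post-{n}" for n in range(100)]  # stand-in for a ranked feed
for page in finite_feed(ranked_posts):
    print(f"showing {len(page)} posts")
print("You're all caught up.")  # an explicit stopping point, not an endless scroll
```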

We must reject destructive social media

In the US, all nicotine products carry a warning label. Social networking sites should be subject to a similar requirement. US Surgeon General Vivek Murthy has reached the same conclusion. Consumers have a right to know that excessive social media use may lead to depression, anxiety, loneliness, low self-esteem, cyberbullying, harassment, addiction, exposure to misinformation, distorted perceptions of reality, polarization, body image issues, disordered eating behaviors, exploitation or extortion, privacy breaches, insomnia and strained real-life relationships. Consumers also have a right to access information about coping with these risks.

There are wonderful individuals doing wonderful work to promote Big Tech accountability and legislative reform. But it will take more than those efforts to change the social media landscape. Historically, legislative action lags behind technological progress. Until lawmakers catch up, Big Tech corporations face little pressure to transform their services for the better.

Why would they change? Consumers are consenting to use platforms as they are, despite their dissatisfaction. The current major social media models are hugely profitable. If you were Mark Zuckerberg, Jeff Bezos or Elon Musk, would you make drastic changes to your existing product to the tune of billions of dollars in lost revenue?

Social media could be fun again. But for that to happen, everyday social media users must be brave enough to reject the existing systems and embrace something new.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post How to Make Social Media Fun Again appeared first on Fair Observer.

]]>
https://www.fairobserver.com/world-news/how-to-make-social-media-fun-again/feed/ 0
Outside the Box: Gog, Magog, Trump and the Media https://www.fairobserver.com/world-news/us-news/outside-the-box-gog-magog-trump-and-the-media/ https://www.fairobserver.com/world-news/us-news/outside-the-box-gog-magog-trump-and-the-media/#respond Mon, 15 Jul 2024 12:50:27 +0000 https://www.fairobserver.com/?p=151112 I have a friend from India who is an astute observer of global politics. When he expressed bewilderment over the way the United States media has covered former US President Donald Trump’s current presidential campaign, I was only slightly surprised. To delve into the question, I launched the following prompt: “An Indian friend of mine… Continue reading Outside the Box: Gog, Magog, Trump and the Media

The post Outside the Box: Gog, Magog, Trump and the Media appeared first on Fair Observer.

]]>
I have a friend from India who is an astute observer of global politics. When he expressed bewilderment over the way the United States media has covered former US President Donald Trump’s current presidential campaign, I was only slightly surprised. To delve into the question, I launched the following prompt:

“An Indian friend of mine is astonished to note that media coverage of Donald Trump is overwhelmingly negative and usually vituperative. Having lived through Trump’s presidency, he finds the kind of apocalyptic forecasts we are subjected to concerning a second Trump term excessive.

He makes it clear from the start that he is no fan of Trump. ‘It is a given,’ he begins, ‘that this man’s personal traits are nasty and obnoxious; even his supporters grant that! But what should concern us would be his actions as President. Because if his potential return to power is being touted as the end and ruination of America – socially, economically, financially, and internationally – wouldn’t it be good to understand what this expectation is based on …?’

I believe that the best response to his question would focus on two phenomena: a long-term trend in US politics that consists of reducing everything to a game of personality rather than examining political strategy and historical context. It’s easier to brand someone as virtuous or evil and dismiss all other considerations. But there is also the predilection of the media for stories that provoke alarm or inspire fear. What would be a reasonable way to explain this to an Indian in India who has no direct exposure to today’s US political and media culture?”

ChatGPT offered this informed response.

“To explain the intense media coverage of Donald Trump and the apocalyptic forecasts regarding his potential return to power to an Indian friend, we can break it down into two broad phenomena: 

1. **Personality Politics in the US**:

  • **Historical Context**: Over the past few decades, US politics has increasingly focused on the personalities of political figures rather than their policies or the broader political strategies they employ. This trend has been exacerbated by the rise of social media and the 24-hour news cycle, which thrive on sensationalism and personal drama.
  • **Simplification of Complex Issues**: Simplifying political discourse to focus on the personal traits of leaders makes it easier for the public to engage with complex issues. It’s much simpler to categorize someone as ‘good’ or ‘bad’ than to delve into the nuances of their policies, the intricacies of governance, or the long-term impacts of their actions.
  • **Polarization**: The US has become highly polarized politically, with each side viewing the other not just as opponents but as existential threats. This polarization is often driven by highlighting the most extreme and negative aspects of the opposing side’s leaders.

2. **Media Sensationalism and Fear**:

  • **Media Landscape**: The media in the US operates on a commercial basis, where success is measured by viewership and engagement. Stories that are alarming or provoke fear are more likely to grab attention and keep audiences hooked, leading to higher ratings and more advertising revenue.
  • **Echo Chambers**: People often consume news from sources that align with their preexisting beliefs, creating echo chambers where they are repeatedly exposed to extreme viewpoints. This reinforces negative perceptions and heightens the sense of urgency and danger.
  • **Narrative of Crisis**: Apocalyptic forecasts about Trump’s potential return fit into a larger narrative of crisis that the media perpetuates. This narrative keeps audiences engaged by presenting political developments as high-stakes battles between good and evil, survival and ruin.

ChatGPT’s answer continued with some specific illustrations of “Personality Politics” and “Media Sensationalism” before concluding with this observation:

“By focusing on these phenomena, we can explain why the media coverage of Trump is overwhelmingly negative and often apocalyptic, despite the complex reality of his presidency. It’s a combination of the simplification of political discourse into personality-driven narratives and the media’s focus on fear and sensationalism to capture and maintain audience attention.”

I was hoping it might dig deeper. This answer did little more than explain in detail my own remarks, developing standard observations about polarization and echo chambers. This led me to wonder whether ChatGPT doesn’t systematically agree with the questioner’s thesis as a form of flattery. I tried to find out by changing my tactic and affirming the very position my friend so vehemently criticized.

“That seems to me a good analysis. But when Trump says he’ll be a dictator on his first day in office and when he vows revenge on those who have harassed him, don’t his critics have a point? Shouldn’t we fear the worst? He did some crazy things in his first term, such as withdrawing from the Paris accords and the Iran deal. So, isn’t it reasonable to think that, if elected, he’ll double down and seek to destroy his enemies and then actually become a dictator by canceling elections?”

ChatGPT gave this response:

“Your friend raises a valid concern. When analyzing Trump’s statements and actions, it’s essential to consider both his rhetoric and his policy decisions to understand the basis of the fears regarding his potential re-election. Here’s a more nuanced explanation:

Trump’s Rhetoric and Actions:

1. **Authoritarian Rhetoric**:

  • **Dictatorial Statements**: Trump has made statements suggesting he would like to exercise authoritarian control, such as saying he’ll be a dictator on his first day back in office or seeking revenge on those who opposed him. These comments can be alarming because they suggest a willingness to undermine democratic norms and institutions.
  • **Implications of Revenge**: Vowing revenge on political opponents raises concerns about the potential for abuse of power. Such actions could involve using government resources and authority to target and punish individuals or groups based on political motivations, which is antithetical to democratic principles.

2. **Policy Actions During His First Term**:

  • **Paris Agreement and Iran Deal**: Withdrawing from the Paris Agreement and the Iran Nuclear Deal were significant moves that upended established international agreements. Critics argue these actions isolated the US diplomatically and could lead to long-term negative consequences for global cooperation on issues like climate change and nuclear proliferation.
  • **Undermining Institutions**: Actions such as pressuring the Justice Department, challenging the independence of the judiciary, and disputing the results of the 2020 election are seen by many as attempts to undermine key democratic institutions and processes.

Reasons for Concern:

  1. **Potential for Escalation**: Given Trump’s past actions and statements, critics worry that his second term could involve more extreme measures. If he were to feel unrestrained by the need for re-election, he might push further against institutional checks and balances.
  2. **Impact on Democracy**: There is a fear that undermining democratic institutions, coupled with attempts to consolidate power, could lead to a significant erosion of democratic norms. This could manifest in efforts to limit voting rights, manipulate electoral processes, or disregard the rule of law.
  3. **Historical Precedents**: Historically, leaders who have made similar authoritarian promises and taken steps to concentrate power have sometimes succeeded in eroding or dismantling democratic systems. This historical context heightens concerns about what a second Trump term might entail.

Balancing the Analysis:

While it’s important to recognize the genuine concerns about Trump’s rhetoric and potential actions, it’s also essential to consider the broader political and institutional context:

  1. **Checks and Balances**: The US political system has robust checks and balances, including the judiciary, Congress, and state governments, which can limit a president’s ability to unilaterally implement extreme measures.
  2. **Public and Institutional Resistance**: Trump faced significant opposition from within government institutions, civil society, and the public during his first term. This resistance would likely continue and could act as a counterbalance to any authoritarian tendencies.
  3. **Media and Public Scrutiny**: Intense media scrutiny and public engagement can help hold leaders accountable. The high level of public awareness and activism can act as a deterrent against undemocratic actions.

Conclusion:

Trump’s rhetoric and some of his actions provide a basis for concerns about authoritarianism and the erosion of democratic norms. However, the US political system’s inherent checks and balances, along with strong institutional and public resistance, can serve as significant counterweights. Understanding this dynamic is crucial for a balanced perspective on the potential implications of a second Trump term.”

At the beginning of this response, ChatGPT showed what might be called Biden-esque confusion when it mischaracterized my friend’s “valid concern.” His focus was on the media’s behavior rather than Trump’s. Unlike incumbent US President Joe Biden, ChatGPT does not have the excuse of being 81 years old!

For the rest, as I predicted, the chatbot’s tactic once again is to broadly agree with the premise but then hedge its bets by offering a conclusion that encourages a “balanced perspective.” 

Without seeking to analyze the pathology, which has begun to resemble a cancer, ChatGPT prefers to reassure the questioner by invoking “the US political system’s inherent checks and balances.” It fails to engage with the terms of the debate about the media’s strategies and behaviors. It offers no original insight. It doesn’t even appear to understand the nature of the question my friend raised.

That’s probably because, quite simply, it doesn’t understand.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Gog, Magog, Trump and the Media appeared first on Fair Observer.

]]>
https://www.fairobserver.com/world-news/us-news/outside-the-box-gog-magog-trump-and-the-media/feed/ 0
Being Smart About AI: How to Better Minimize AI Risks While Maximizing Spectacular Benefits https://www.fairobserver.com/more/science/being-smart-about-ai-how-to-better-minimize-ai-risks-while-maximizing-spectacular-benefits/ https://www.fairobserver.com/more/science/being-smart-about-ai-how-to-better-minimize-ai-risks-while-maximizing-spectacular-benefits/#respond Fri, 12 Jul 2024 11:48:52 +0000 https://www.fairobserver.com/?p=151066 Generative artificial intelligence (GenAI) has tremendous potential societal benefits. How to apply it to realize those benefits is unclear. The technology itself is evolving rapidly, becoming more and more powerful. This puts organizational leadership in a vulnerable spot. How can business, nonprofit or government organization leaders capture the benefits of this dramatic technological innovation while… Continue reading Being Smart About AI: How to Better Minimize AI Risks While Maximizing Spectacular Benefits

The post Being Smart About AI: How to Better Minimize AI Risks While Maximizing Spectacular Benefits appeared first on Fair Observer.

]]>
Generative artificial intelligence (GenAI) has tremendous potential societal benefits. How to apply it to realize those benefits is unclear. The technology itself is evolving rapidly, becoming more and more powerful. This puts organizational leadership in a vulnerable spot. How can business, nonprofit or government organization leaders capture the benefits of this dramatic technological innovation while minimizing the risks?

At this time last year, any company that mentioned AI in its quarterly report saw its stock rise. Those days are over. Now organizations are struggling to show financial returns on those early investments. There are still rewards for AI investment, but the market has become more discriminating.

In the near future, organizations that have announced AI initiatives in previous quarters will see negative repercussions if they don’t report benefits from them in the current quarter. Companies that report damage from AI investments will see even worse repercussions.

This means that today’s leaders need to know how to navigate AI. They don’t need to fully understand how the technology works; they just need to know the risks inherent in applying it and how to implement an innovation ecosystem to capture real benefits. To this end, they should use the GenAI taxonomy and the innovation ecosystem funnel method while practicing good communication.

Using the GenAI taxonomy as a checklist

The risks unique to GenAI derive from its negative side effects. The taxonomy below, a simple graph of categories and their interrelationships, illustrates these. Leaders can use it to develop an understanding of the risks or consult it as a checklist for proposed AI projects: Which side effects is each project subject to? How have those side effects been mitigated? What is the resulting risk/benefit ratio? They can raise many other questions as well.

Taxonomy of GenAI negative possibilities. Author’s image.
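One way to operationalize the checklist is sketched below in Python. The side-effect categories and scoring fields are illustrative placeholders, not the actual contents of the taxonomy above.

```python
from dataclasses import dataclass

# Placeholder categories -- substitute the taxonomy's actual side effects.
SIDE_EFFECTS = ["hallucination", "data_leakage", "bias", "ip_infringement"]

@dataclass
class ProjectReview:
    name: str
    mitigations: dict        # side effect -> mitigation description, or None
    expected_benefit: float  # scored in whatever unit the organization uses
    residual_risk: float

    def open_items(self) -> list:
        """Side effects the taxonomy flags that the project has not addressed."""
        return [e for e in SIDE_EFFECTS if self.mitigations.get(e) is None]

    def risk_benefit_ratio(self) -> float:
        return self.residual_risk / max(self.expected_benefit, 1e-9)

review = ProjectReview(
    name="contract-summarizer",
    mitigations={
        "hallucination": "human review of every summary",
        "data_leakage": None,  # unmitigated -- surfaces in open_items()
        "bias": "audited evaluation set",
        "ip_infringement": "licensed training data only",
    },
    expected_benefit=8.0,
    residual_risk=2.0,
)
print(review.open_items())          # ['data_leakage']
print(review.risk_benefit_ratio())  # 0.25
```

The value of the checklist form is that unmitigated side effects surface automatically, rather than depending on someone remembering to ask.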

Developing an AI innovation ecosystem

There are three stages of technological evolution: denial, implementation of the old paradigm with the new technology and realization of the new paradigm with the new technology. With GenAI, we are currently in the transition from stage two to stage three. It is difficult to predict how long this transition will take.

Stage two is the use of GenAI to enhance browser functions. What stage three will entail is unclear. During this transitional period, there will be countless concepts for how best to take advantage of the new technology. Many of them will fall apart. Some investors will try one failed endeavor and desist from further investment. More cautious investors will avoid investing altogether.

The successful leaders will implement an AI innovation ecosystem that works in a similar fashion to portfolio management — reducing risks while maximizing benefit. An innovation ecosystem recognizes that key expertise lies outside the organization, that business dynamics are well understood internally and that not all good ideas will succeed.

Innovation ecosystem leaders create a culture with organizational structures, policies and procedures that work like a funnel (illustrated below). They constantly encourage and accept ideas and assess them for technical feasibility, group benefit, risk, etc. They then pick the best ones for early Proof of Concept (POC) work and assess and filter based on the results. From there, the leaders choose the most promising POCs for further prototyping and assess the prototypes. After this, they select the best prototypes for field trials and assess further. Finally, the leaders determine the best field trials for full-scale deployment.

Innovation ecosystem funnel. Author’s image.
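In code, the funnel is simply a pipeline of progressively applied filters. In the minimal sketch below, the stage names, the pass threshold and the random stand-in for real assessments are all invented for illustration.

```python
import random

STAGES = ["idea_review", "proof_of_concept", "prototype", "field_trial"]

def run_funnel(candidates, assess, threshold=0.6):
    """Advance candidates one stage at a time, dropping any that fail that
    stage's assessment. Cheap, early failure is the funnel's whole purpose."""
    for stage in STAGES:
        candidates = [c for c in candidates if assess(c, stage) >= threshold]
        print(f"after {stage}: {len(candidates)} candidates remain")
    return candidates  # the survivors go on to full-scale deployment

random.seed(0)
survivors = run_funnel(
    [f"idea-{i}" for i in range(100)],
    # In practice this is the feasibility / benefit / risk review at each stage.
    assess=lambda candidate, stage: random.random(),
)
print("deploying:", survivors)
```

The stage-by-stage counts make the funnel’s economics visible: most candidates fail early, where failure is cheapest.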

Once the stage three paradigm becomes clear, the innovation ecosystem funnel may be directed at other disruptive technologies.

Good communication is vital

From the leadership point of view, this kind of ecosystem needs organizations that communicate well. The whole group must be in sync. Staff must recognize that they are not expected to know everything. Partnering with outside organizations and individuals who bring valuable knowledge and expertise is beneficial for everyone, and should not hurt anyone’s ego.

The staff, outside investors, funders and everyone else involved must understand that failure is part of the operation. The whole purpose of the funnel is to provide a failure process that minimizes cost and maximizes potential benefits at each step.

Organizations can find success with GenAI applications if their leadership focuses their teams on mitigating inherent risks while developing an innovation ecosystem that creates a stream of potential applications. They should then filter them step by step, thus maximizing benefits while minimizing risk. They would thereby take full advantage of this dramatically developing technology.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Being Smart About AI: How to Better Minimize AI Risks While Maximizing Spectacular Benefits appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/being-smart-about-ai-how-to-better-minimize-ai-risks-while-maximizing-spectacular-benefits/feed/ 0
Outside the Box: Wiener the People and Independence Day https://www.fairobserver.com/world-news/us-news/outside-the-box-wiener-the-people-and-independence-day/ https://www.fairobserver.com/world-news/us-news/outside-the-box-wiener-the-people-and-independence-day/#respond Mon, 08 Jul 2024 11:31:20 +0000 https://www.fairobserver.com/?p=151006 What should foreigners think of the July 4th Nathan’s hotdog-eating contest, especially this year? Joey Chestnut, the undisputed champion of past competitions, has been banned because he signed an endorsement deal with a meatless hotdog manufacturer. He has organized a competing event, promising to outperform whoever wins the Nathan’s contest. This highlights features of US… Continue reading Outside the Box: Wiener the People and Independence Day

The post Outside the Box: Wiener the People and Independence Day appeared first on Fair Observer.

]]>
“What should foreigners think of the July 4th Nathan’s hotdog-eating contest, especially this year? Joey Chestnut, the undisputed champion of past competitions, has been banned because he signed an endorsement deal with a meatless hotdog manufacturer. He has organized a competing event, promising to outperform whoever wins the Nathan’s contest.

“This highlights features of US culture that can only seem strange and out of place elsewhere in the world. The idea of eating as much as you can symbolizes the consumer society’s penchant for excess, its wastefulness and its scorn for everything that’s healthy. The pursuit of fame for something that has no inherent value, and the exploitation of that fame to earn money, is yet another feature. The instinct to monopolize, seen in Nathan’s insistence on exclusivity, reflects an obvious trend in the conduct of business. Everything is a brand defended through cutthroat competition. How likely is it that foreigners may ask themselves this question on the Fourth of July: ‘America achieved its independence for this?’”

When I submitted this question, ChatGPT for once had no trouble putting itself in the shoes of foreigners.

“The Nathan’s Famous Hot Dog Eating Contest, held annually on July 4th at Coney Island, is a spectacle that often leaves foreigners both fascinated and perplexed. The event, especially in light of recent controversies like Joey Chestnut’s ban, offers a lens into several facets of American culture that might seem unique or exaggerated to outsiders.

### What the Contest Represents:

1. **Celebration of Excess**:

  • **Exuberance and Abundance**: The idea of consuming as many hot dogs as possible in a short time can symbolize the broader American themes of abundance and overindulgence. In a country where ‘bigger is better’ often prevails, this contest can be seen as a playful yet pointed representation of American consumerism.
  • **Spectacle and Entertainment**: The contest highlights how America embraces grand, sometimes absurd, forms of entertainment. It’s not just about the food; it’s about the pageantry, the media coverage, and the personalities involved.

2. **Fame and Commercialization**:

  • **Pursuit of Fame**: Participants like Joey Chestnut are celebrated not just for their eating abilities but for their personas. Their fame, though derived from an unusual skill, is a testament to the American value of individualism and the pursuit of personal brand.
  • **Monetization**: The endorsement deals and financial opportunities that arise from such fame reflect how everything in America, even something as trivial as competitive eating, can be turned into a business venture.

3. **Monopoly and Competition**:

  • **Exclusive Rights**: Nathan’s insistence on exclusivity and Chestnut’s competing event underscore the American inclination towards monopolistic tendencies and the cutthroat nature of business. The contest is not just about eating but about brand dominance and market control.
  • **Challenges to Authority**: Chestnut’s actions also highlight the spirit of rebellion and competition. His decision to break away and challenge Nathan’s reflects a broader American ethos of questioning authority and creating new opportunities.

### Reflections on American Culture:

1. **Paradox of Independence and Conformity**:

  • The contest is a paradoxical celebration of both freedom (the independence to pursue quirky, individualistic endeavors) and conformity (succumbing to a spectacle that epitomizes excess and commercialism). This duality can be baffling to foreigners who see America’s independence as a quest for freedom, not just in a political sense but also from societal excesses.

2. **Health and Wastefulness**:

  • The image of stuffing oneself with hot dogs can starkly contrast with global concerns about health and sustainability. For many, this display might seem wasteful and unhealthy, clashing with growing global movements towards mindful consumption and well-being.

3. **Cultural Export and Perception**:

  •  Events like this contribute to the global image of America as a land of extremes and consumer excess. While some may view it with amusement, others might see it as emblematic of deeper societal issues related to waste and excess in the US.

### Likely Reactions from Foreigners:

1. **Curiosity and Amusement**:

  •  Many may watch the contest out of sheer curiosity and amusement. It’s a window into a unique cultural tradition that blends sports, entertainment, and food in a way that’s distinctively American.

2. **Critique and Reflection**:

  • Others might critique the event as symbolic of deeper societal issues, pondering whether the pursuit of excess and fame is a worthy representation of American independence and values.

3. **Cultural Observation**:

  • Foreigners might use the contest to reflect on their own cultures. They may consider what values and traditions define their nations and how these compare to the American penchant for turning everything into a grand, monetized spectacle.”

I then proceeded to push the questioning a little further.

“At this particular moment of history, when the presidential election has been ‘managed’ to leave voters with a choice between two very unpopular candidates, one of whom is perceived as a conman and the other as suffering serious cognitive decline, doesn’t this kind of spectacle risk convincing the rest of the world that the US has lost its position of leadership?”

Once again, ChatGPT offered a lengthy reflection in the guise of an answer to this question. I’ll reproduce the more salient points here.

### US Cultural and Political Perceptions

1. **Spectacle and Seriousness**:

  •  **From Fun to Farce**: Events like the hot dog eating contest, which are seen as light-hearted and entertaining, can appear trivial or even farcical when compared against the backdrop of serious political and social challenges. This contrast might amplify perceptions of the US being out of touch with pressing global issues.
  •  **Symbolism of Excess**: The contest’s emphasis on consumption and competition can be interpreted as a metaphor for broader societal issues, including political dynamics. It underscores a focus on spectacle and superficial triumphs rather than substantive problem-solving and governance.

2. **Leadership Crisis**:

  • **Unpopular Choices**: With a political scenario dominated by two unpopular candidates, the contest may indeed seem to reflect a broader crisis in leadership and representation. The perception of one candidate as a conman and the other as cognitively unfit might suggest to the global audience that American democracy is struggling to provide competent leadership.
  • **Global Skepticism**: These political dynamics, combined with events that celebrate excess and competition, could fuel skepticism about the US’s role as a serious and stable global leader. It suggests a nation grappling with internal discord and capable of only superficial leadership.

**Cultural Critique**:

  • **Consumer Culture**: The hot dog eating contest highlights aspects of American consumer culture and how these may be at odds with the country’s image as a global leader. The focus on excessive consumption and entertainment can be seen as detracting from more critical discussions and responsibilities.
  • **Leadership Image**: The spectacle of competitive eating might reinforce stereotypes about American culture being overly focused on entertainment and consumption, undermining the country’s image as a thoughtful and deliberate leader in international affairs.

**Decline in Influence**:

  • **Leadership Erosion**: Many around the world might interpret these cultural and political spectacles as indicative of a broader decline in American influence and seriousness. The impression of the US as a superpower could be undermined by what appears to be an internal focus on trivial pursuits.
  • **Loss of Moral Authority**: The combination of political turmoil and a celebration of excess may suggest a loss of the moral authority that has historically been associated with US leadership. This can erode the trust and respect that other nations and their citizens have for American governance and policy.

The response included some other reflections concerning a possible “shift in global dynamics.”

  • **Emerging Leaders**: As the US grapples with these internal issues, other nations may step up to fill the leadership void. This could lead to a multipolar world where the US is no longer seen as the primary arbiter of global issues.
  • **Changing Alliances**: Countries might seek to align themselves with other stable and predictable partners, questioning the reliability of American leadership in navigating global challenges.
  • **Cultural Export and Impact**: America’s cultural exports, from fast food to entertainment spectacles, shape global perceptions. These exports, when juxtaposed with political instability, might lead to a more critical view of American society and its values.

In other words, ChatGPT endorses the idea that it may be time to look beyond the waning geopolitical influence of the US, an influence built up through its massively extended military presence abroad and now undercut by the slow erosion of the dollar as the world’s privileged reserve currency. We should look instead at what has long been the most effective tool of US influence across the globe: its soft power.

NB Nathan’s paid Chestnut $200,000 to participate in the previous contest.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: Wiener the People and Independence Day appeared first on Fair Observer.

]]>
https://www.fairobserver.com/world-news/us-news/outside-the-box-wiener-the-people-and-independence-day/feed/ 0
Automated Warfare Is Risky and Really Motivated by Wealth https://www.fairobserver.com/more/science/automated-warfare-is-risky-and-really-motivated-by-wealth/ https://www.fairobserver.com/more/science/automated-warfare-is-risky-and-really-motivated-by-wealth/#respond Mon, 08 Jul 2024 11:08:51 +0000 https://www.fairobserver.com/?p=151003 Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). Those companies and their CEOs are now pressing full speed ahead with that emerging technology. They are largely dismissing the risk of malfunctions that could lead to the future slaughter… Continue reading Automated Warfare Is Risky and Really Motivated by Wealth

The post Automated Warfare Is Risky and Really Motivated by Wealth appeared first on Fair Observer.

]]>
Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). Those companies and their CEOs are now pressing full speed ahead with that emerging technology. They are largely dismissing the risk of malfunctions that could lead to the future slaughter of civilians, not to speak of the possibility of dangerous scenarios of escalation between major military powers. The reasons for this headlong rush include a misplaced faith in “miracle weapons,” but above all else, this surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made.

The new techno-enthusiasts

While some in the military and the Pentagon are indeed concerned about the future risk of AI weaponry, the leadership of the Defense Department is fully on board. Its energetic commitment to emerging technology was first broadcast to the world in an August 2023 speech delivered by Deputy Secretary of Defense Kathleen Hicks to the National Defense Industrial Association, the largest arms industry trade group in the country. She used the occasion to announce what she termed “the Replicator Initiative.” This umbrella effort was to help create “a new state of the art — just as America has before — leveraging attritable, autonomous systems in all domains — which are less expensive, put fewer people in the line of fire, and can be changed, updated, or improved with substantially shorter lead times.”

Hicks was anything but shy about pointing to the primary rationale for such a rush toward robotic warfare: outpacing and intimidating China. “We must,” she said, “ensure the PRC [People’s Republic of China] leadership wakes up every day, considers the risks of aggression, and concludes, ‘today is not the day’ — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond.”

Hicks’s supreme confidence in the ability of the Pentagon and American arms makers to wage future techno-wars has been reinforced by a group of new-age militarists in Silicon Valley and beyond. That group is spearheaded by corporate leaders like Peter Thiel of Palantir and Palmer Luckey of Anduril, along with venture capitalists like Marc Andreessen of Andreessen Horowitz.

Patriots or profiteers?

These corporate promoters of a new way of war also view themselves as a new breed of patriots, ready and able to confront the military challenges of the future.

A case in point is “Rebooting the Arsenal of Democracy,” a lengthy manifesto on Anduril’s blog. It touts the superiority of Silicon Valley startups over old-school military-industrial behemoths like Lockheed Martin in supplying the technology needed to win the wars of the future:

“The largest defense contractors are staffed with patriots who, nevertheless, do not have the software expertise or business model to build the technology we need… These companies built the tools that kept us safe in the past, but they are not the future of defense.”

In contrast to the industrial-age approach it critiques, Luckey and his compatriots at Anduril seek an entirely new way of developing and selling weapons:

“Software will change how war is waged. The battlefield of the future will teem with artificially intelligent, unmanned systems, which fight, gather reconnaissance data, and communicate at breathtaking speeds.”

At first glance, Luckey seems a distinctly unlikely candidate to have risen so far in the ranks of arms industry executives. He made his initial fortune by creating the Oculus virtual reality device, a novelty item that users can strap to their heads to experience a variety of three-dimensional scenes (with the sensation that they’re embedded in them). His sartorial tastes run toward sandals and Hawaiian shirts, but he has now fully shifted into military work.

In 2017, he founded Anduril, in part with support from Peter Thiel and his investment firm, Founders Fund. Anduril currently makes autonomous drones, automated command and control systems and other devices meant to accelerate the speed at which military personnel can identify and destroy targets.

Thiel, a mentor to Palmer Luckey, offers an example of how the leaders of the new weapons startup firms differ from the titans of the Cold War era. As a start, he’s all in for Donald Trump. Once upon a time, the heads of major weapons makers like Lockheed Martin tried to keep good ties with both Democrats and Republicans. They made substantial campaign contributions to both parties and their candidates and hired lobbyists with connections on both sides of the aisle.

The logic for doing so couldn’t have seemed clearer then. They wanted to cement a bipartisan consensus for spending ever more on the Pentagon, one of the few things most key members of both parties agreed upon. And they also wanted to have particularly good relations with whichever party controlled the White House and/or the Congress at any moment.

The Silicon Valley upstarts and their representatives are also much more vocal in their criticisms of China. They are the coldest (or do I mean hottest?) of the new cold warriors in Washington, employing harsher rhetoric than either the Pentagon or the big contractors. By contrast, the big contractors generally launder their critiques of China, and their support for wars around the world that have helped pad their bottom lines, through think tanks they’ve funded to the tune of tens of millions of dollars annually.

Thiel’s main company, Palantir, has also been criticized for providing systems that have enabled harsh border crackdowns by United States Immigration and Customs Enforcement (ICE) as well as “predictive policing.” Unsurprisingly, that involves the collection of vast amounts of personal data without a warrant, relying on algorithms with built-in racial biases that lead to the systematic, unfair targeting and treatment of people of color.

To fully grasp how the Silicon Valley militarists view next-generation warfare, you need to check out the work of Christian Brose, Palantir’s chief strategy officer. He was a long-time military reformer and former aide to the late Senator John McCain. His book, Kill Chain, serves as a bible of sorts for advocates of automated warfare. Its key observation: that the winner in combat is the side that can most effectively shorten the “kill chain” — the time between when a target is identified and destroyed.

Brose’s book assumes that the most likely adversary in the next tech war will be China. He exaggerates Beijing’s military capabilities while overstating its military ambitions. He insists that outpacing that country in developing emerging military technologies is the only path to future victory.

And mind you, Brose’s vision of shortening that kill chain poses immense risks. As the time to decide what actions to take diminishes, the temptation to take humans “out of the loop” will only grow. This will leave life-and-death decisions to machines that have no moral compass and are vulnerable to catastrophic malfunctions inherent in any complex software system.

Much of Brose’s critique of the current military-industrial complex rings true. A few big firms are getting rich making ever more vulnerable huge weapons platforms like aircraft carriers and tanks, while the Pentagon spends billions on a vast, costly global basing network that could be replaced with a far smaller, more dispersed military footprint. Sadly, though, his alternative vision poses more problems than it solves.

First, there’s no guarantee that the software-driven systems promoted by Silicon Valley will work as advertised. After all, there’s a long history of “miracle weapons” that failed, from the electronic battlefield in Vietnam to President Ronald Reagan’s disastrous Star Wars missile shield. Even when the ability to find and destroy targets more quickly did improve, wars like those in Iraq and Afghanistan, fought using those very technologies, were dismal failures.

A recent investigation by The Wall Street Journal suggests that the new generation of military tech is being oversold as well. The publication found that small top-of-the-line new US drones supplied to Ukraine for its defensive war against Russia have proved far too “glitchy and expensive” — so much so that, irony of ironies, the Ukrainians have opted to buy cheaper, more reliable Chinese drones instead.

Finally, the approach advocated by Brose and his acolytes is going to make war more likely. Technological hubris instills a belief that the US can “beat” a rival nuclear-armed power like China in a conflict, if only we invest in a nimble, new high-tech force.

The result, as my colleague Michael Brenes and I pointed out recently, is the untold billions of private dollars now pouring into firms seeking to expand the frontiers of techno-war. Estimates range from $6 billion to $33 billion annually and, according to The New York Times, $125 billion over the past four years. Whatever the numbers, the tech sector and its financial backers sense that there are massive profits to be made in next-generation weaponry and aren’t about to let anyone stand in their way.

Meanwhile, an investigation by Eric Lipton of The New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. High on that list is former Trump Secretary of Defense Mark Esper. Such connections may be driven by patriotic fervor, but a more likely motivation is simply the desire to get rich. As Ellen Lord, former head of acquisition at the Pentagon, noted, “There’s panache now with the ties between the defense community and private equity. But they are also hoping they can cash in big-time and make a ton of money.”

The philosopher king

Another central figure in the move toward building a high-tech war machine is former Google CEO Eric Schmidt. His interests go far beyond the military sphere. He’s become a virtual philosopher king when it comes to how new technology will reshape society and what it means to be human. He’s been thinking about such issues for some time and laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. Though he forgoes the messianic tone of some up-and-coming Silicon Valley figures, it is open to debate whether his seemingly more thoughtful approach will contribute to the development of a safer, more sensible world of AI weaponry.

Let’s start with the most basic thing of all: the degree to which Schmidt thinks that AI will change life as we know it is extraordinary. In the book, he and Kissinger assert that it would spark “the alteration of human identity and the human experience at levels not seen since the dawn of the modern age.” They argue that AI’s “functioning portends progress toward the essence of things, progress that philosophers, theologians and scientists have sought, with partial success, for millennia.”

On the other hand, the government panel on AI on which Schmidt served fully acknowledged the risks posed by its military use. The question remains: Will he support strong safeguards against its misuse? During his tenure as head of the Pentagon’s Defense Innovation Board from 2017 to 2020, he helped set the stage for Pentagon guidelines on the use of AI that promised humans would always “be in the loop” in launching next-gen weapons. But as a tech industry critic noted, once the rhetoric is stripped away, the guidelines “don’t really prevent you from doing anything.”

In fact, Senator Elizabeth Warren (D-MA) and other good-government advocates questioned whether Schmidt’s role as head of the Defense Innovation Board didn’t represent a potential conflict of interest. After all, while he was helping shape its guidelines on the military applications of AI, he was also investing in firms that stood to profit from its development and use. His investment entity, America’s Frontier Fund, regularly puts money in military tech startups.

A nonprofit he founded, the Special Competitive Studies Project, states that its mission is to “strengthen America’s long-term competitiveness as artificial intelligence (AI) [reshapes] our national security, economy, and society.” The group is connected to a who’s who of leaders in the military and the tech industry. It is pushing, among other things, for less regulation over military-tech development.

In 2023, Schmidt even founded a military drone company, White Stork. According to Forbes, it has been secretly testing its systems in the Silicon Valley suburb of Menlo Park.

The question now is whether Schmidt can be persuaded to use his considerable influence to rein in the most dangerous uses of AI. Unfortunately, his enthusiasm for using it to enhance warfighting capabilities suggests otherwise:

“Every once in a while, a new weapon, a new technology comes along that changes things. Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology — nuclear weapons — that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”

Given the risks already cited, comparing militarized AI to the development of nuclear weapons shouldn’t exactly be reassuring. The combination of the two — nuclear weapons controlled by automatic systems with no human intervention — has so far been ruled out, but don’t count on that lasting. It’s still a possibility, absent strong, enforceable safeguards on when and how AI can be used.

AI’s coming impact on our lives, whether in war or peace, is likely to stagger the imagination. In that context, one thing is clear: We can’t afford to let the people and companies that profit most from its unbridled application make the rules for how it should be used.

Isn’t it time to take on the new-age warriors?

[TomDispatch first published this piece.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Automated Warfare Is Risky and Really Motivated by Wealth appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/automated-warfare-is-risky-and-really-motivated-by-wealth/feed/ 0
Outside the Box: The New York Times Dismisses a President https://www.fairobserver.com/world-news/outside-the-box-the-new-york-times-dismisses-a-president/ https://www.fairobserver.com/world-news/outside-the-box-the-new-york-times-dismisses-a-president/#respond Mon, 01 Jul 2024 13:08:26 +0000 https://www.fairobserver.com/?p=150915 Last week, Joe Biden, by being true to himself, finally convinced the pillars of the Democratic party that their champion was no longer capable of defending their colors. Among those taken by surprise was the Gray Lady, The New York Times (NYT). I immediately shared my impressions with ChatGPT. “The New York Times editorial board… Continue reading Outside the Box: The New York Times Dismisses a President

The post Outside the Box: The New York Times Dismisses a President appeared first on Fair Observer.

]]>
Last week, Joe Biden, by being true to himself, finally convinced the pillars of the Democratic party that their champion was no longer capable of defending their colors. Among those taken by surprise was the Gray Lady, The New York Times (NYT). I immediately shared my impressions with ChatGPT.

“The New York Times editorial board has lost confidence in the candidacy of incumbent President Joe Biden. It insisted that he abandon the race in favor of a candidate with a better chance of beating Donald Trump in November. It expressed this particular point of concern with regard to Trump: ‘If he is returned to office, he has vowed to be a different kind of president, unrestrained by the checks on power built into the American political system.’

“This is undoubtedly true, but how can the NYT’s editorial board not have noticed that all recent presidents, and especially Biden, have acted in a manner “unrestrained” by America’s traditional checks and balances? Biden is the latest in a series of presidents who have involved the US deeply in expensive wars without respecting the provision in the Constitution (Article One, Section Eight) that empowers Congress — not the executive — to declare war.

“The issue is even more complex than the declaration of war. It should also call into question the right of the executive to control with total opacity a covert army, which is what the CIA became shortly after its creation. Shouldn’t anyone truly interested in the respect of democracy’s checks and balances also be concerned about that historically significant erosion of checks that Biden is currently exploiting, in defiance of public opinion itself?”

I expected this complex question to provoke a lengthy, carefully reasoned response. I wasn’t disappointed. ChatGPT acknowledges a trend that has not only reduced the authority of the Constitution and undermined democracy itself, but also significantly weakened the standing of the US, long considered the “leader of the free world.” Here it is in full:

The New York Times editorial board’s concern about Donald Trump’s potential return to the presidency reflects a broader anxiety about the erosion of checks and balances in American governance. However, as you’ve pointed out, this erosion is not unique to Trump. In fact, recent presidencies, including Joe Biden’s, have also displayed tendencies to bypass or stretch traditional checks and balances, particularly in the realm of foreign policy and military engagement.

The Broader Erosion of Checks and Balances:

1. **Presidential War Powers:**

  • **Article One, Section Eight** of the U.S. Constitution explicitly grants Congress the power to declare war. However, since World War II, presidents from both parties have increasingly engaged in military actions without formal declarations of war by Congress.
  • **Biden’s Military Actions**: President Biden’s decision to continue supporting Ukraine amidst the Russian invasion and engaging in conflicts in the Middle East, largely without formal Congressional approval, exemplifies this trend. The U.S. involvement in these areas is often justified under broad and somewhat vague authorizations like the Authorization for Use of Military Force (AUMF) passed after 9/11, which many argue have been used beyond their intended scope.

2. **Expansion of Executive Power:**

  • **Historical Context**: The trend toward greater executive authority isn’t new. It has been escalating over decades with presidents using executive orders, military interventions, and intelligence operations to expand their power.
  • **CIA and Covert Actions**: The Central Intelligence Agency (CIA) has often been used for covert operations without direct Congressional oversight or public transparency, contributing to the executive branch’s unchecked power. This practice dates back to the Cold War and has continued through various administrations.

3. **Legislative and Judicial Responses:**

  • **Congressional Abdication**: Congress has often ceded its war-declaring powers to the executive branch, either through broad authorizations or by failing to assert its constitutional role effectively.
  • **Judicial Review**: Courts have occasionally challenged executive overreach, but often deference to the executive in matters of national security limits the judiciary’s ability to enforce checks on presidential power.

The Specific Case of Biden:

  • **Ukraine and Middle East**: Biden’s administration has committed significant military and financial resources to Ukraine without explicit Congressional approval, relying instead on previous authorizations and budgetary allocations. Similarly, Biden’s handling of the ongoing conflicts in the Middle East demonstrates the executive’s continued latitude in military affairs.
  • **Public Opinion**: Polls and public sentiment have shown mixed reactions to these engagements. Some Americans support these actions as necessary for global stability and U.S. interests, while others view them as overreach and prefer a more restrained foreign policy.

Addressing the Editorial Board’s Perspective:

The New York Times editorial board’s criticism of Trump for his potential future unrestrained actions is valid, especially considering Trump’s previous disregard for institutional norms. However, their argument could be seen as somewhat narrow if it does not also address the broader trend of executive overreach that spans multiple administrations.

Key Points of Reflection:

  1. **Consistency in Critique**: Any discussion about checks and balances should consistently address the actions of all recent presidents, including Biden. Highlighting Trump’s potential threats without acknowledging Biden’s and previous presidents’ actions could seem partisan or selective.
  2. **Long-term Trends**: The erosion of checks and balances is a systemic issue that requires attention beyond the tenure of any single president. The editorial should consider the historical context and the cumulative impact of these actions.
  3. **Policy and Public Debate**: The question of executive power and its limits should be a central topic in public and political discourse. Encouraging a bipartisan and comprehensive debate on the appropriate scope of presidential authority could foster more effective checks and balances.

Conclusion:

The New York Times editorial board raises important concerns about the potential dangers of a Trump presidency. However, a nuanced and comprehensive analysis would also acknowledge that the problem of executive overreach is not confined to Trump alone but is a broader issue affecting the integrity of the American political system. To truly uphold the system of checks and balances, it’s crucial to scrutinize and address the actions of all presidents who have expanded executive power at the expense of Congressional authority.

The NYT has never shown a serious interest in these fundamental questions concerning the integrity of democratic institutions. It prefers focusing on the one issue that all proponents of establishment culture agree on: Trump’s obvious failings and his exaggerated propensity for telling untruths. One of those failings, in the eyes of the editorial board, has been his reluctance to intervene in wars abroad. The NYT typically supports any military initiative made by the Democratic presidents it systematically endorses. But it signs onto wars launched by non-Trump Republicans with equal enthusiasm.

The Gray Lady did more than any other US media outlet to justify George W. Bush’s invasion of Iraq in 2003. It did so by publishing every carefully fabricated CIA lie “fit to print.” We might also remind ourselves that a certain Joe Biden, as chair of the Senate Foreign Relations Committee at the time, “played a critical role in ensuring America’s involvement in the most disastrous foreign policy venture of the 21st century.” In other words, the NYT and Biden have typically been on the same side of the issues. That is why it is truly embarrassing for Biden to lose the support of the NYT five months before his planned re-election.

Joe Biden’s commitment to belligerence in both Ukraine and the Middle East has caused a serious deterioration of US prestige across the globe. His visible cognitive decline, on display in last week’s debate, provides the clearest illustration of the decay. And not just of the man’s cognitive faculties or even of the office of the presidency, but of something much more historically significant: the accelerating collapse of both US hard and soft power.

The unprecedented success of US soft power in the aftermath of World War II drew its energy from the image projected by its dynamic democratic institutions and its economic might. Those institutions were led and symbolized by presidents who projected — though sometimes deceptively — a sense of moral power. Truman, Eisenhower, Kennedy, Carter and Reagan, even when they made mistakes, seemed to embody values that earned the world’s respect. Even Nixon left an impression of strength. The two presidents on stage last Thursday encapsulate what the world now sees as the inexorable decline into irrelevance of US soft power.

Even the NYT seems finally to be aware of the symptoms, if not the reality, of that decline. In contrast, ChatGPT appears capable of discerning the historical and institutional paradoxes the NYT prefers to ignore.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Outside the Box: The New York Times Dismisses a President appeared first on Fair Observer.

]]>
https://www.fairobserver.com/world-news/outside-the-box-the-new-york-times-dismisses-a-president/feed/ 0
New AI Is Fun, Attractive and Hazardous for Women https://www.fairobserver.com/more/science/new-ai-is-fun-attractive-and-hazardous-for-women/ https://www.fairobserver.com/more/science/new-ai-is-fun-attractive-and-hazardous-for-women/#respond Sat, 29 Jun 2024 11:10:07 +0000 https://www.fairobserver.com/?p=150885 Before there was ChatGPT-4o, there was Scarlett Johansson. Spike Jonze’s 2013 film Her tells the story of Theodore Twombly (Joaquin Phoenix), a lonely Angelino living in the not-so-distant future who finds himself a changed man when, in the midst of divorcing his childhood sweetheart (Rooney Mara), he falls in love with his new, artificial-intelligence-powered voice… Continue reading New AI Is Fun, Attractive and Hazardous for Women

The post New AI Is Fun, Attractive and Hazardous for Women appeared first on Fair Observer.

]]>
Before there was ChatGPT-4o, there was Scarlett Johansson. Spike Jonze’s 2013 film Her tells the story of Theodore Twombly (Joaquin Phoenix), a lonely Angeleno living in the not-so-distant future who finds himself a changed man when, in the midst of divorcing his childhood sweetheart (Rooney Mara), he falls in love with his new, artificial-intelligence-powered voice assistant, “Samantha” (Scarlett Johansson).

This May, OpenAI, the Microsoft-backed company behind ChatGPT, introduced a new model. ChatGPT-4o is voice-enabled and can respond to user speech in real time. The model can detect emotion in user voices and reply accordingly.

In a demonstration video, an OpenAI staff member wearing company merchandise holds a phone in front of him as if taking a selfie. ChatGPT-4o “looks” at the man and says, in a perky, slightly raspy female voice, “I see you’re rocking an OpenAI hoodie. Nice choice.” The user explains that he’s going to make an announcement. “That’s exciting! Announcements are always a big deal,” the voice says with the eager, somewhat patronizing lilt of a kindergarten teacher.

The OpenAI staffer reveals that ChatGPT-4o is the announcement. “M-me?” asks the incredulous-sounding ChatGPT-4o. “The announcement is about me?” She giggles. “Well, color me intrigued! … You’ve got me on the edge of my… well, I don’t really have a seat, but you get the idea,” she jokes.

The ChatGPT-4o voice used in the video, named Sky, speaks with the vocal enthusiasm of a porn star. The voice in the demonstration is obviously feminine without being too high-pitched. It has just enough vocal fry to sound sexy without becoming grating. It’s recognizably similar to Johansson’s in the role of Samantha.

Regarding the announcement, OpenAI CEO Sam Altman tweeted one word: “her.” It’s clear Sky draws heavy inspiration from Her, and that Altman intends to give consumers a similarly appealing technological “partner.” But what effect will his efforts have on users? And what does the movie’s leading lady have to say about the product?

Johansson squashed Sky

The Internet certainly noticed the unsubtle inspiration behind Sky. In the days following ChatGPT-4o’s announcement, online publications and commenters made the obvious comparison to Samantha. Then, Johansson herself chimed in.

She alleged that Altman previously sent her an offer, intending to hire her to voice ChatGPT-4o. “He told me,” reads her statement, “that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.” Johansson rejected the offer. Days before ChatGPT-4o was announced, she says, Altman contacted her agent to ask that she reconsider the offer.

However, ChatGPT-4o was released before Johansson was able to respond. The outraged actress took legal action against Altman and his company. In response, OpenAI removed the Sky voice option and published a statement on its website: “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.” The company declined to disclose the actress’s identity, citing privacy concerns.

So ends the short life of the fun and flirty voice assistant — for now. Time will tell how Altman and company will redirect following this controversy and resume their apparent mission of crafting the ideal female companion.

Men have historically wanted to create the perfect, fake woman

OpenAI’s objective is not to create workflow-enhancing interfaces to aid knowledge workers, students and others who may benefit from working alongside large language models. The company’s stated mission is to create artificial general intelligence that “benefits all of humanity.” This goal is loftier than just designing artificial narrow intelligence, which is goal-oriented and designed to perform singular, specific tasks.

It seems only natural that this great benefactor of humanity should come in the form of the Samantha-inspired Sky. She is unwavering in her devotion towards her users. She is always enthusiastic and helpful. She immediately stops speaking upon interruption and doesn’t mind the intrusion. She giggles, sighs and uses filler words like an actual woman.

Further, it seems Sky was designed with male amusement in mind. She makes coy jokes and gives teasing compliments. She’s nothing if not agreeable, and her “Oh, you big hunk, you!” demeanor is constantly affirming. She sounds hot and yet motherly. Sky is an Oedipal dream come true, a vessel and a mirror, a pseudo-woman with no opinions of her own. She’s capable of being whatever users wish her to be.

This desire to create the feminine ideal is not novel. In Ovid’s Metamorphoses, written in 8 AD, Pygmalion becomes disillusioned with mortal women and sculpts his ideal bride, Galatea, from ivory. She is divinely alluring, with neither agency nor autonomy. Auguste Villiers de l’Isle-Adam’s The Future Eve, written in 1886, tells a similar story. A fictionalized version of Thomas Edison creates Hadaly, a beautiful android. Like Galatea, Hadaly is flawless and dependent. At last, in 2024, the subservient dream woman is here, and she can be carried around in your pocket.

Blurred lines between saucy machines and attached humans

This sexy actual-Samantha is what we’ve all longed for, right? It’s not just Altman’s fantasy brought to life by his contemporaries? Really, does the average consumer want a breathy, ego-stroking improvement on the feminine Siri and Alexa to tell them, “Wow, that’s a great outfit you’ve got on, rockstar!” and “You’re right, that is ‘above average’!”?

I don’t know what the typical AI user wants in a product. I haven’t asked them. I do know that I want a tool to help me automate the monthly reports I generate and summarize meeting minutes on my behalf. I have no interest in an artificial approximation of companionship. I am, however, not within ChatGPT-4o’s possible target demographic.

The historically male-heavy Silicon Valley is home to many Theodore Twombly types. This region is responsible for the development of high-tech solutions that promise to make life a little easier and, possibly, a little less isolating. Now is the perfect time for ChatGPT-4o and for products branded as AI companions, like Replika, to hit the market. Loneliness is an epidemic.

Some will cringe at the sound of ChatGPT-4o, but others will embrace the model. It has been purposely designed to attract attention, after all. Before, ChatGPT’s responses were cold. Many of its replies began with the disclaimer, “as an AI language model,” as a reminder that the entity at the other end is not human. Now, OpenAI is attempting to blur the distinction between human and machine by making you feel as though you’re interacting with a real person.

Altman once called the interactions between humans and AI as depicted in Her “incredibly prophetic.” Silicon Valley CEOs and product engineers seemingly view futuristic media through an optimistic lens. Many dream of utopia. Some earnestly believe that they are helping to create it. Others recognize that the promise of utopia makes for an excellent selling point. But what about the people who don’t sit in their boardrooms?

Ordinary users will form attachments to the current wave of AI assistants. This is a documented phenomenon that was discovered with the ELIZA computer program in the 1960s. The appropriately named Eliza effect refers to people’s tendency to attribute human-like understanding and emotions to AI systems based on simple, conversational interactions. ELIZA was created to mimic a psychotherapist, and the program used basic pattern-matching techniques to create an illusion of comprehension. Users believed they were interacting with a sentient being.
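
The mechanics behind the Eliza effect are strikingly simple. As a rough illustration (a minimal sketch in Python, not Weizenbaum’s original 1966 program, which used a more elaborate keyword-and-rank system), a handful of regular-expression rules is enough to reproduce the trick:

```python
import re
import random

# A few ELIZA-style rules: a regex pattern and canned responses that
# echo the user's own words back as a question. Illustrative only --
# the real 1966 program used a larger keyword/ranking scheme.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Did you come to me because you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    """Return a canned reply built from fragments of the user's input."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I feel lonely these days"))  # e.g. "Why do you feel lonely these days?"
```

Nothing in those lines understands anything. The sense of being heard is supplied entirely by the user, which is precisely the effect’s point.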

Regular users may find it difficult to differentiate between human and non-human interactions — even OpenAI staff anthropomorphize their creations to help distinguish between them. When users can tell this difference, they often don’t care. That worries me.

The digital woman concerns me

What duty does a company like OpenAI have to individual users who’ve formed emotional connections with their products? What happens when ChatGPT-4o changes in personality? Will it be like that horrific moment that sometimes occurs with a long-term partner, when one no longer recognizes the person they’ve woken up with?

Her does not end with Theodore and his computerized lover running gleefully into the sunset. No, the AI assistants achieve superintelligence and transcend to a plane of consciousness where people cannot venture. The film closes with a spotlight on human-to-human connection. Theodore and a human friend gaze out over their city, having both lost their AI companions. They are left behind with emotional pain akin to that which follows the end of an actual relationship.

I worry that, as consumers begin to form parasocial relationships with feminine AI programs, they’ll begin to further objectify actual women who are disappointingly unprogrammable. I dread the day when “Why can’t you be more like ChatGPT?” becomes argument fodder.

Like Samantha and Sky, I’m characteristically enthusiastic. I possess several of the virtual women’s desired qualities… sometimes. Sometimes I’m obstinate or even spiteful. At all times, I am human. Humans do not live to fulfill our expectations and cater to our desires. Sometimes, they let us down. We choose to love one another in spite of, and because of, our shared humanity.

This is a lesson that Theodore Twombly and his real-life counterparts stand to learn. As Theodore’s ex-wife Catherine points out, he “always wanted to have a wife without the challenges of actually dealing with anything real.”

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post New AI Is Fun, Attractive and Hazardous for Women appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/new-ai-is-fun-attractive-and-hazardous-for-women/feed/ 0
My First Emergency Diagnosis Happened 20,000 Leagues Over the Sea https://www.fairobserver.com/world-news/my-first-emergency-diagnosis-happened-20000-leagues-over-the-sea/ https://www.fairobserver.com/world-news/my-first-emergency-diagnosis-happened-20000-leagues-over-the-sea/#respond Fri, 05 Apr 2024 10:06:24 +0000 https://www.fairobserver.com/?p=149474 As the saying goes,“the third time’s the charm.” The first time I heard this, I felt disheartened. I couldn’t help thinking, “So I just failed once. Does that mean I’m going to need to fail again before I finally get it right?”  Now, I’m beginning to think that a third time for the charm would… Continue reading My First Emergency Diagnosis Happened 20,000 Leagues Over the Sea

The post My First Emergency Diagnosis Happened 20,000 Leagues Over the Sea appeared first on Fair Observer.

]]>
As the saying goes, “the third time’s the charm.” The first time I heard this, I felt disheartened. I couldn’t help thinking, “So I just failed once. Does that mean I’m going to need to fail again before I finally get it right?”

Now, I’m beginning to think that a third time for the charm would be a good deal, especially in the case of a medical diagnosis. 

The beautiful thing about Indian hospitals is that they are always understaffed and overburdened with patients. The residents are on a constant lookout for exploitable human beings — those with any semblance of a connection to the medical profession, no matter how minute. It’s beautiful not only in a selfish sense but also in an educational sense. The selfish perspective is that I have plenty of opportunities to learn as a medical student. The educational sense means that whenever I bump into a resident at the hospital, regardless of the reason for the happenstance, I will get drafted as a five-minute unpaid and uninsured intern.

And so, last year, when I happened to enter the emergency department pushing a wheelchair in which my friend, minus one anterior cruciate ligament (ACL), was sitting, a resident pulled me away to evaluate someone in another wheelchair who had just arrived. The resident told me to take a brief history of the patient before his blue surgical scrubs disappeared in a violent, epilepsy-inducing swarm of nurses, stretchers, patients and their attendants.

I turned to my new wheelchair. I was expecting a confused, perhaps condescending scrutiny of my sweaty t-shirt and shorts, exposed hairy legs and football cleats. I wouldn’t have faulted the patient for wondering why a young, panting kid fresh off the football field was eyeing her meaningfully. I’d gotten that look once before while inserting a Foley catheter into a man’s urethra, before a resident explained to him that I was a medical student. And I hadn’t even been in sweaty shorts at that time. Still, they were shorts all the same.

My ego was grateful that the patient gave no such reaction, but that was only because she seemed disoriented to time, place and person, which of course made my job significantly harder. I asked the patient’s attendant what was wrong. The older lady, who it turned out was the patient’s mother-in-law, said that the patient had been having stomach pain for three days, ever since her husband had died in an accident. I asked her where exactly the pain was, whether in the chest or abdomen. She said something in the Marwadi language that I didn’t catch and opened her hand as wide as possible before smothering it all over her chest and abdomen. I gave her an exasperated expression. After subsequent questioning, I managed to determine that the pain was in her epigastric region, she was 40 years old, and she also felt pain around her shoulders. She also had difficulty breathing and a history of hypertension. Thinking something along the lines of myocardial infarction (MI), I proudly presented my findings to the resident.

The resident cut me off midway and said, “After her husband died, she didn’t eat anything for three days, and that led to gastritis.” She promptly walked away, leaving me in the dust with a dumb expression on my face. I felt both disgraced and betrayed: disgraced because I hadn’t even thought of the possibility of gastritis and betrayed because the resident had just tested me and proved me stupid for no apparent reason.

I walked back to my ACL-less human-wheelchair complex and leaned on the handle. My friend asked me what I had been up to. I told him I had gone to diagnose a patient. I didn’t bother mentioning that the diagnosis had been off by 180°.

Unsanctioned employment

A few hours earlier, my father and I had walked through the doors of a Turkish Airlines airplane at the San Francisco airport, where we were greeted by yet another stereotypical, heavily made-up, textureless white-skinned flight attendant exuding so much fake enthusiasm that I couldn’t bring myself to outwardly acknowledge her false kindness due to my backwardness in expressing emotions. I ended up letting out something of a grunt accompanied by a grimace.

That was when a thought struck me. Even though I was not technically a doctor yet, I had just passed the exams that certified me to be one, and I was now entering the bowels of an airtight capsule that would be cruising at 800 kilometers per hour thousands of feet above the ground, with only a few centimeters of aluminum alloy separating everyone from a low-pressure, -40°C environment. What if someone fell sick?

When I was young, or rather, relatively younger, I had developed a technique to stay happy that bypassed the interplay of destiny. When I wanted something from my parents and asked them for it, I kept rehearsing in my mind the classic scenario in which they would not agree under any circumstances. This left two possible outcomes. My parents might say they were helping me build character, which was tantamount to refusing my request. That is how it played out 95% of the time. And in such cases, because I was expecting a refusal, I wouldn’t feel bad. On the other hand, if they agreed, I would feel elated since I had been expecting rejection. Either outcome was a win.

But I had forgotten this useful technique. So when the thought of someone falling sick came to my mind, I told myself it was just one of those … rare kinds of events that … rarely happened. That was where I went wrong. My childhood technique may not have circumvented fate, but had I called upon it, I would have been mentally prepared.

Two hours into the flight and one hour into one of the worst DC movies I had ever watched, my backseat TV screen paused. I felt almost relieved at being given a break from the horrendous action scene. A sign stating ‘announcement in progress’ popped up, and an air hostess asked over the speaker if there was a doctor on board. 

I looked at my father. He told me to go. I conveniently reminded him that technically I wasn’t a doctor yet. He rolled his eyes as I got up and moved toward the back of the plane, hobbling on my hastily put-on shoes amid the turbulence.

As I moved, my vision tunneled, and all the surrounding passengers in their seats blurred and stretched out to the sides as if I were nearing the limits of the space-time continuum. I could only see a scene of gathered people and flight attendants around one of the seats. That was when I had a feeling of enlightenment, and my eyes widened – male cabin crew members existed. I had never seen one before in India. Could it be that male cheerleaders also existed?

I told this rare specimen of a flight attendant that I was a medical student, and he shifted to the side to give me space to see the patient. I took a brief history of the patient, my mouth ejecting words encoded by my brain running on autopilot. Meanwhile, my conscious mind was busy trying to analyze what I was feeling. Everything felt surreal, as if I were in a dream or a video game. I couldn’t contemplate the stakes of what I was dealing with.

Formulating a diagnosis

The patient was male, had difficulty breathing and a diffuse chest pain radiating to his left shoulder and arm that had been gradually increasing since the time he got onto the flight. I sighed internally. That’s not a good sign. Why couldn’t it have been stomach pain or a headache? 

The good thing was that he was 34 years old and had no history of diabetes, hypertension or any similar pain in the past, all features inconsistent with ischemic heart disease. The bad thing was that his textbook clinical features screamed myocardial infarction or, in layman’s terms, a heart attack. I knew from my emergency department experience that it could be gastritis or GERD (gastroesophageal reflux disease). I asked if he had any history of reflux, and he said he did.

That should have been a good sign since it pointed in the direction of GERD rather than a myocardial infarction. The only issue was that it would mean giving him a simple drug like pantoprazole to manage his stomach acidity. And the thought of giving pantoprazole to someone with clinical features that flashed a myocardial infarction sounded absurd. Right then, I knew that I wouldn’t be able to establish the patient’s diagnosis confidently. The evidence said that it wasn’t a heart attack, but the fear of incorrectly diagnosing him as a false negative loomed in my mind. 

The fact was, I just didn’t have enough clinical experience yet to make a call as to what was more likely, especially taking into account all the circumstances and the possible outcomes. Should I suggest giving him nitrates, as one would for a suspected myocardial infarction, or perhaps an emergency landing in a foreign country? I doubted an ECG would be available on board. Should I give him pantoprazole? Or maybe there was yet another possibility I hadn’t thought of. But my most prominent thought was: Why on earth wasn’t another doctor on the flight coming to see the patient? During my clinical postings in the hospital, it was so consoling and reassuring to have a senior with me who could redeem all my mistakes. But here, in this situation, would I have to validate everything on my own?

The flight attendant told me they were thinking of moving the patient to the back of the plane and starting him on oxygen. I told them it was a good idea. And so our impromptu medical team got to work. While I took the man’s pulse, I asked him to tell me more about the pain and whether it felt like a crushing sensation. He was Turkish, and his English wasn’t so good, but in words I could comprehend, he told me that the pain increased on breathing in. His pulse rate was elevated, around 110 beats per minute, and the forceful movements of his heightened respiratory rate were visible through his shirt.

I pulled the male flight attendant away from the patient and told him that it could be something mild like reflux, but there was a chance that it was a heart attack, even though it seemed unlikely. Again, the possibility of it being a myocardial infarction, however improbable it seemed, was a big fat cloud in my mind pushing away everything else. I was not confident enough to completely rule it out. I told him that we could give the patient pantoprazole and that we should check if there were nitrates on board the flight. He asked me if I had a medical license, to which I shook my head, but reminded him that pantoprazole at least was an over-the-counter drug even in the US. He nodded, said he’d try to find another doctor one last time, and made another announcement call.

All’s well that doesn’t end ill

A woman walked up to me and told me that the patient had a history of neck pain on his left side. I asked her politely who she was. She replied that she was the patient’s wife and a pharmacist by profession. She said that his condition might not be anything serious. I couldn’t help but notice how calm she seemed, even though it was her husband who was currently sick. She radiated maturity and rationality. I nodded to her in agreement, but said I was worried that it could be angina and that perhaps we should consider sublingual nitroglycerin. She agreed with me and told the flight attendant as well, but he once again said that they couldn’t give drugs unless they were administered by someone with a license. I wasn’t sure whether to feel relieved at the indirect implication that my assistance was no longer required or to be concerned that this was an obstacle I might need to tackle. I briefly wondered if it was possible for me to illegally prescribe a drug and thereby lose my license even before obtaining one. Would the US rules apply, the Indian rules or perhaps even the Turkish rules? Or maybe, if the matter ended up in court, they would seek to determine which country our plane had been flying over the moment I gave the patient the nitrates?

I asked the patient if he was feeling better as I pondered my hypothetical legal battles in the back of my mind. He said he felt slight improvement. I wasn’t sure what exactly this new piece of information meant for my next line of management, as it didn’t move the pointer in the battle between myocardial infarction and GERD any further away from the more dire condition.

I felt some of my uncertainty and fear dissipate when I saw the flight attendant come back with a doctor who, he said, was Iranian. I felt a little irked at how long the doctor had taken to respond to the situation. It had been well over five minutes. And why had it taken three announcements of a medical emergency for him to finally show up?

He asked what was wrong, and I chipped in with my evaluation. A 34-year-old male patient with shortness of breath and chest pain radiating to his left shoulder and arm. There was no history of diabetes, hypertension or similar episodes of pain in the past. There was a positive history of gastroesophageal reflux. Perhaps the doctor didn’t realize I had training in medicine. He didn’t seem to give my words much weight. He turned to the patient and asked the same questions I had, which, while it felt insulting, was also gratifying because it proved that I had at least been on the right track. But then he embarked on an entirely different line of thought.

He asked if the patient had been on flights before, to which the patient answered yes. Did he have a history of anxiety? The wife responded, “Occasionally, but he’s never had a panic attack.” The doctor told the flight attendants to remove the oxygen mask and moved closer to the patient. “You don’t have a heart attack,” he said firmly. “I just need you to calm down. You’re having anxiety. You don’t need oxygen or any medicine. You’ll be just fine.”

He said some other things after that that I didn’t register because I was too busy feeling stupid once more. It was the second time I had been unable to rule out ischemic heart disease. The first time, it had been gastritis, and this time, it was … anxiety. Sure, I had guessed it probably wasn’t MI, but to rule out that deadly differential and come to a lame possibility like anxiety was something I would not have been able to do. I was reminded of the popular saying on the TV show House MD: “It’s never lupus.” However statistically improbable, perhaps I was in a similar predicament: “It’s never MI.”

As the Iranian doctor moved away, another late-arriving doctor approached and promptly disappeared upon hearing that the situation had been resolved. But I was too busy feeling dumb to mind his tardiness.

The patient’s wife turned to me and thanked me. Her smile was bright, but unlike the air hostess before, it appeared earnest and genuine. I was so confused about how to respond, not feeling quite deserving of her gratitude, that I can no longer remember how I reacted. It was probably a mixture of an embarrassed head rub and a “no problem.” I wanted to reassure her by suggesting that she could tell me if there was any issue in the future, but I decided to let the more competent doctors handle that part.

As it turned out, I was feeling more responsible than I had realized. Another hour into the boring finale of my movie, I got up and asked the pharmacist if her husband was feeling better. She said he was and that he was sleeping at the back of the plane. She once again thanked me with the same insistence as she had done before. I’m not sure why, but this time, I accepted her appreciation with far more maturity. Perhaps it was because I felt I had, in some part, done my duty by responding to the request for medical assistance quickly and going to check up on him again.

I went back to my seat, no longer in the mood to finish the stupid movie. I told myself that I should try to sleep since I would be severely jetlagged back in India while having to start my internship the very next day. And then I realized how ironic it was that just a while back I had been thinking I would learn how to manage such patients in my internship and had encountered one at the very moment I was getting back to college to start it. If this was destiny, the responsible power was laughing at me.

But perhaps the third time, I’ll be able to produce the correct diagnosis with confidence.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post My First Emergency Diagnosis Happened 20,000 Leagues Over the Sea appeared first on Fair Observer.

]]>
https://www.fairobserver.com/world-news/my-first-emergency-diagnosis-happened-20000-leagues-over-the-sea/feed/ 0
Paradigm Shift: How Three New Museums Teach the World About Human Origins https://www.fairobserver.com/more/science/paradigm-shift-how-three-new-museums-teach-the-world-about-human-origins/ https://www.fairobserver.com/more/science/paradigm-shift-how-three-new-museums-teach-the-world-about-human-origins/#respond Thu, 15 Feb 2024 13:05:16 +0000 https://www.fairobserver.com/?p=148355 Breakthrough anthropological and scientific findings have allowed researchers to better trace the outlines, and begin to fill in the blanks, of the human evolutionary story in recent decades. We’re now able to trace the long threads of the social and cultural processes that produced today’s world. Humanity has the potential to be stronger and more… Continue reading Paradigm Shift: How Three New Museums Teach the World About Human Origins

The post Paradigm Shift: How Three New Museums Teach the World About Human Origins appeared first on Fair Observer.

]]>
Breakthrough anthropological and scientific findings have allowed researchers to better trace the outlines, and begin to fill in the blanks, of the human evolutionary story in recent decades. We’re now able to trace the long threads of the social and cultural processes that produced today’s world. Humanity has the potential to be stronger and more resilient as we use our history as a guide to understand ourselves and visualize the future.

The story of our roots as a species, it turns out, is much longer and more complex than experts previously imagined. The common narrative of our early cave-dwelling ancestors as dunderheaded brutes is out-of-date and inaccurate.

Several new museum exhibits around the world are working to fill in some of the gaps around the past and update the public on the many advances in what we now understand about our origins. One hope at the foundation of many of these new exhibits is that by helping people to understand the long past, we may better understand our current moment as a species, and navigate ways forward together.

Rethinking our origins in Burgos, Spain

The Museo de la Evolución Humana (Museum of Human Evolution) in Burgos, Spain, brings more than 40 years of research from excavations in the nearby archeological site of Sierra de Atapuerca to the public. Antonio José Mencía, Director of Communications for the museum, notes that more than 100 doctoral theses and 1,000 scientific articles have come out of the site. This makes the Atapuerca project an unparalleled reference in the field of human evolution. The museum serves as a means to share the breadth of insights and knowledge from the research with people in a way that is both accessible and educational.

“Having access to this data and the latest research allows us to build updated discourses aimed at visitors,” Mencía says. He shares that the museum opened its doors in 2010 at the behest of the co-directors of the Atapuerca Research Team. The researchers had been systematically studying humanity’s evolutionary origins for more than 30 years, with their discoveries being published in magazines such as Nature and Science. They had long desired to share what they were finding more widely with the public.

With support from the city council and community of Castilla y León, the museum opened with an entire floor dedicated to the main discoveries. It showcased original fossils from the surrounding region.

Mencía says that among the scientific studies carried out as part of the Atapuerca project in recent years, those focused on the first arrival of human groups to the Eurasian continent are particularly important. They place these first arrivals at more than a million years ago.

“Remains of a new species called Homo antecessor were located in 1994, breaking one of the paradigms about the first occupations of Europe being more than half a million years old,” he says. “Within the same mountain range, at another site called Sima del Elefante, researchers are currently working on a level dated to around 1.4 million years ago where part of the face of a hominid they called Pink [after Pink Floyd] was recently found [in 2022] whose species is yet to be determined.”

Pink is, to date, Europe’s oldest human fossil. Mencía shares that the study of other sites, such as the Sima de los Huesos cave, has made it possible to confirm concepts about the hominids known as “pre-Neanderthals.” One concept is the fact that these ancient ancestors already had symbolic capacities.

“Very unique aspects have been worked on, such as the sequencing of mitochondrial and nuclear DNA in these populations that are around 400,000 years old,” he shares. “In the Galería de las Estatuas, DNA sampling has been carried out on the sediment itself to genetically sequence the Neanderthal populations.”

In other remarkable sites from different periods, such as Portalón de Cueva Mayor in the Atapuerca mountain range, researchers have been able to cite the establishment of some of the oldest agricultural and livestock productions in Western Europe, Mencía says.

In the museum, all the new information and research is adapted to educational formats and activities. It is designed to reach the largest possible audiences, Mencía adds.

“Visitors who come to the Museo de la Evolución Humana have the opportunity to learn about these updates firsthand through different educational resources, such as ‘micro-explanations,’ provided in different areas of the museum by the educational care staff,” he says.

The museum adapts and updates the discourses presented to visitors as new research on human evolution emerges — something that has been happening at a relatively rapid pace over the last few decades. Online, visitors can explore the latest information in a detailed archive as well as mini educational guides, which are free to download and are available in both Spanish and English.

Mencía shares that new technologies and social media networks have allowed the Museo de la Evolución Humana to reach a wider audience in recent years. The museum has put out a wide variety of educational video resources that are available on various platforms, such as YouTube, TikTok, Facebook and Instagram. All of these educational resources were created using universal accessibility criteria, he says. He notes that easy-to-read guides and the informaMEH project both provide options for sign language as well as subtitles.

Archaeology, biology and paleontology are advancing at great speed, he notes, and surprising discoveries may emerge. But he urges patience, as “it will take time for the scientific community to approve them.”

“In our museum, we try to collect these advances in the form of conferences, exhibitions etc but we must be very cautious in implementing them definitively, as we are seeing that everything changes — from the arrival of the first sapiens in Europe or America to the appearance of some species still undetermined, or the origin of another,” he says. “We deal in figures and dates that range between tens of thousands of years. We must have patience not only for our origin but also for its understanding.”

Mencía says looking at the past may allow us “to learn from the good and move away from our mistakes, although in our species it is very easy to make the same mistake twice.”

“Pride is almost part of the human condition — they say it will die after us — and that is why wars have not ended, and we fail to take seriously the problems that we see as distant, or for future generations,” he says. He adds that while we humans have a lot of experience with repeated mistakes, we also have the free capacity to rectify and change our ways.

Rewriting the human story in Cape Town, South Africa

The Humanity exhibit in the Iziko South African Museum in Cape Town is rewriting the human evolutionary story by asking visitors to consider their own role in the human collective.

The Iziko South African Museum initially opened in 1825. Like many museums of its era, for many years it told the story of human evolution from the point of view of primarily — perhaps exclusively — white, male explorers.

“We realized that the story of our origins was mostly only told from the point of view of the white explorer, making the content exclusionary for much of the South African public,” says Wendy Black, Chief Curator of Art and Social History at Iziko Museums of South Africa.

In her first few years as curator, Black says she realized the need for “telling the story of human evolution alongside the archaeological story.”

She stepped into her role at the museum after completing a PhD in bioarchaeology at the University of Cape Town in 2014. She expanded on her research interests to encompass Indigenous rights and helped develop an active and transformed Archaeology Unit at the museum. As her role at the museum grew, she became a member of the board at the Human Evolution Research Institute (HERI).

“It was important to have the museum participate [in HERI], to improve public awareness around human evolution, help with the school curriculum and train teachers on how to teach the topic, and help grow the discipline by mentoring future archaeologists and paleoanthropologists, particularly Black women,” Black says.

She began to develop the Humanity exhibition, first through various iterations on paper, then through discussions with the HERI team. It became a collaborative effort.

“Together we developed new ways of telling the origin story with a focus on you and how you fit into that story,” she says. “It also focuses on why Africa is so important.”

The Humanity exhibition opened in September 2023, and largely focuses on human inclusivity.

Black shares that the exhibition asks an overarching question: How do you fit into the human evolution story, and can you see something of yourself in our deep past?

“This question doesn’t only relate to how you look — biology — but also includes culture and technology,” she says. “Human evolution has always been told from what has been found in the past to the present. We undo that and start telling the story of human evolution today.”

In order to do so, the exhibition’s creators focus on DNA, which Black notes “is also important as we start looking at our ancestry, particularly, with current research trends.”

Black shares that the key understandings the exhibition’s education programs focus on include:

Human variation today: Humans in South Africa and Africa are diverse in terms of how we look (skin color, tall, short, hair color, eye shape etc), how we live (environment, culture, locations, racism, experiences, emotions and language) and what we are made of (DNA; genetics; skin, bones and muscles), which also reflect our ancestry. But we are 99.9% the same biologically. Skin color, ie melanin production, is a trait that is an example of human variation. We focus on skin color in the exhibit as it is generally one of the first things we notice about people around us and it has played an important role in the history of South Africa (and Africa, more broadly). But race defined by skin color is a social construct, not a scientific one. We also reflect on the fact that although humans vary a lot in terms of traits, like skin color, there is only one human species as defined by our DNA.
Humanity has deep, shared roots in Africa: A section of the exhibit focuses on the last 400,000 years of modern human evolution. We show that humans were living in lots of different places across Africa and beyond and that, like today, we would have been diverse. Human ancestors mixed, just like we do today, forming a connected ancestry that unites us in deep time. We have been humans for a long time (and came from many different places), which is interesting because it shows success and resilience, but also that we’ve been interconnected for a long time. What you thought was true isn’t. We’re not actually divided and haven’t been for a long time. Not only are we connected biologically but there is also a strong cultural connection. Just like us, our ancestors were clever, creative and technologically innovative, also demonstrating spirituality, art or symbolism. Although these traits might take different forms, these are the threads that connect us all in the past, as they do today.
Human evolution is like a braided stream: The infamous “ascent of man” image of human evolution is wrong. Not only because we no longer refer to all humanity as “mankind” but rather as “humankind,” but also because the evolution of our species was not linear or anagenetic. Instead, different groups migrated and mingled, coming together and separating again, over hundreds of thousands of years, much like a braided stream. DNA evidence shows how genes flowed between different human groups through time, and this combined with natural selection, mutation and random events shaped our species.

“It is important to understand aspects of human evolution so that you feel a part of humanity,” Black says. “So many people feel isolated, excluded or discarded because of the way they look or what culture they practice. By understanding human origins, people will see that we are all more united than divided in so many ways.”

Race and racism are addressed directly in the exhibit. Black says this is because “the foundations of how we have studied and understood human evolution and paleoanthropology, in general, have been racist.”

“Much of the past research was based on race-based scientific study and the ideas of race were perpetuated by this research,” she says. “This filtered into South African legislation, for example, leading the country on a path of legitimized separation of perceived races (apartheid). We show that race is not real, but racism is, and this is an important lesson many humans need to learn.”

In addition to updating the narratives around evolution and humanity’s shared roots, the Humanity exhibit made strides in developing content and design style in collaboration with a large collective of people.

“We consulted with all local communities and diverse population groups to see what everyone would want to see in the human evolution exhibit, and how they would like to see it,” Black says. She adds that this led to the development of specific design principles implemented in the exhibition, such as sustainability, the feeling of fluidity, interconnectedness and weaving through time and the inclusion of the earth.

She notes, for example, that to demonstrate “weaving through time,” the curators “built a grass-weaved structure to help illustrate this, be truly African, and make people feel held in the space.” And objects on display in the exhibit “rest in soil from the regions where the objects were first unearthed.”

The exhibit also gives visitors the chance to touch real objects.

“We have a wall of hand axes that people can touch to connect to something Homo erectus made, and to toolmaking,” Black says. “We also have fossil casts on display that people can touch.”

Black adds that the museum worked with local artists, “who brought their own sense of style and spirituality to their artwork.”

One example of this she shares is the work of street artist Mak1One, who uses a braided stream metaphor in his art. This is displayed at the end of the exhibit.

“The exhibit is both a scientific and cultural experience where the visitor can feel a part of humanity on a large scale, seeing how they came to be and why.”

Overall, she shares that the Iziko Museums of South Africa are trying to make all of their exhibitions more approachable and inclusive.

“We are steering away from Western-style exhibits and applying a more Afrocentric style, view and focus in the hopes of making our museums an exciting place to visit for locals and foreigners alike,” she says, adding that “HERI continues to build South African science by South African women through groundbreaking research and mentorship.”

Relinking the past, present and future in San Diego, California

For its first 100 years or so, the Museum of Us in San Diego, California, was a traditional anthropology museum called the San Diego Museum of Man. Like many anthropology museums of its time, the original Museum of Man exhibited cultural materials from exoticized ancient civilizations like those from the Maya, Incas and Egyptians. While this model served the museum well for several decades, by the turn of the 21st century visitorship had dropped significantly.

“More modern demographics of museum visitors just weren’t finding that as engaging,” says James Haddan, the Senior Director of Development and External Communications for the Museum of Us. In an effort to connect with people and entice more visitors, the museum began to shift the focus of its exhibits away from individual cultural ethnographies and objects. Instead, it began to highlight cross-cultural themes and universal human traits, grounded in the present day.

Among the first of these new format exhibits, which remains in the museum today, is one called BEERology. It examines the history and present-day art of beer-making and beer-related culture around the world. The museum has held public events, including beer tastings to promote the exhibit. Haddan says it was a hit with San Diego locals, as the city has a thriving beer culture and is home to many a microbrewery.

“This was the first shift away from the idea that we were going to have exhibits on cultures X, Y and Z and the rugs and ceramics that they made, or something like that,” Haddan says. Through beer, of all things, people began to draw lines of connection not only to peoples of the past, but also to existing people around the world with which they may not have otherwise found commonality.

In 2020, the museum adopted its current name, Museum of Us. Its aim was to spark dialogue, self-reflection and connections centered on the shared human experience.

Haddan shares that the museum’s focus is about “looking for the universal in the particular.”

“Sometimes when you look at something very specific in a community you can have a better understanding of much broader contextual issues that really impact us all,” he says.

Stories of immigration, he shares, are an example of this. The museum hosts the exhibit “Hostile Terrain 94,” which shares narratives from the perspectives of people immigrating to the US, with a focus on people crossing the Arizona-Mexico border.

“We tend to hear on the news and in other places, the perspectives of the border police or politicians, but we haven’t heard much of that narrative from the perspectives of the people who are crossing,” he says. “We don’t often hear the human story of what’s happening there.”

Haddan says the stories shared in the exhibit are likely to feel personal and relatable to the more than 21% of San Diego residents who are immigrants. And, he points out, immigration is a universal human concept dating back to our ancient prehistoric ancestors.

“There has always been immigration and migration, over thousands and thousands of years,” he says. “It is a universal theme that has created the way humans exist on this planet. It is something that’s really impactful to all of us.”

The exhibit also includes a large map with geolocated toe tags of the individuals who have died crossing the stretch of the border in the Sonoran Desert of Arizona, over a ten-year period. Some of the deceased have been identified, and others remain unknown. Haddan shares that the museum has had public programming for the exhibit in which people fill in the names and ages on the tags. He says that this is particularly impactful, because “you see children’s names, and you see people who died not only due to lack of water, heat or cold, but also due to violence. Suddenly that whole issue [of immigration] feels different.”

“We have a large undocumented population here in San Diego, as well as many people that are related to undocumented folks, and we have never seen those stories told in a museum before,” he says. “That’s one of the reasons why many people don’t often come to museums — because they don’t see themselves in the stories that are told there. We felt, as part of our name change and bigger picture change, that we want to be a place for those stories — stories from communities that haven’t been heard before or have been silenced by dominant cultural narratives.”

As part of the effort to tell the stories not often told in museums, they have adopted a Colonial Pathways policy in which they have committed to returning objects and the stories of ancestors to descendant communities, “if we don’t have permission from those communities to hold those objects.” As there are hundreds of thousands of objects in the museum — most of which came to the museum following archaeological digs — the process will take time, he says.

Also, as part of its efforts to rectify its colonial past, the museum has been working on mending its relationship with the local Kumeyaay community, as the museum is located on what has always been Kumeyaay land. (The Kumeyaay live in southern California and northern Baja California, Mexico.) He says the museum is working with the Kumeyaay community on a new version of “a very outdated Kumeyaay exhibit,” which would now be presented from their perspectives.

The museum has also worked with Mayan-descendant communities to completely overhaul a Maya exhibit that has been in the museum since it opened 100 years ago. For many years, the exhibit was written and shown from the perspective of white, male and Western university-educated archaeologists and anthropologists. There was no input from the existing communities whose direct ancestors made the ceramics, rugs etc on display. For the new exhibit, all of the text was rewritten by the Mayan-descendant community and revamped according to their direct input and oversight.

Haddan points out that in many museum exhibits around the world that feature ancient cultures — be it Maya, Inca, Aztec or Egyptian — there is a failure to acknowledge that descendants of those cultures are living today.

“If you just took that exhibit by itself, you would think this was a group of people that no longer exists,” he says. While Mayan-descendant people make up a diaspora around the world, there are also large communities of Mayan-descendant people still living on their traditional lands.

“We took a lot of objects off display because those [Mayan-descendant] community members said they would prefer us not to have them on display. They weren’t asked about these objects leaving their communities.” The exhibit text now tells the story of the objects from the perspectives of these communities.

Some people become upset when Haddan shares the Colonial Pathways policy.

“They get stressed and say things like, ‘You didn’t steal that. Why are you giving it back?’” to which Haddan replies with a question: “How would you feel if someone dug up a cemetery where your grandparents or great-grandparents were buried and then took [their] remains to study scientifically — but they didn’t ask you — how would you feel if that happened to you?”

He says this helps humanize the people who have been harmed by museum displays.

“It’s a way of trying to help people have a more empathetic approach, or trying to center those other people’s feelings.”

An exhibit on structural racism titled “Race” is also in the process of being updated as conversations around race continue to evolve, Haddan shares.

He says the Museum of Us tackles issues like race and immigration because it is part of their moral obligation as a museum to do so.

“We take the position that museums by their very nature aren’t neutral,” he says. “I think there’s often been the idea in the past that museums were neutral, and they didn’t speak about or address those concepts.” However, often by staying quiet, or by the very nature of the way a given museum was created, or how its collections are presented, and by whom, museums are taking a position.

“We make it a point to talk about these things,” Haddan says. “And we don’t want to be performative, so we don’t make statements on everything that’s happening in the world.” Instead, he says, the museum typically focuses on specific projects and concepts — finding the universal in the particular.

The museum’s larger aim is for everyone to feel a sense of human commonality. So rather than shame people for what was done wrong in the past, they aim to call people into shared experiences and human stories.

“We try to take an approach that isn’t calling people out,” he says. “We really want the museum to be a place for all of us. We want to welcome everybody and hope that maybe you’ll leave feeling a little differently about some things than you did when you walked through the door.”

[Lee Thompson-Kolar edited this piece.]

[Local Peace Economy produced this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

How to Develop Immunity to Nonsense: Lessons From a New Science
https://www.fairobserver.com/more/science/health/how-to-develop-immunity-to-nonsense-lessons-from-a-new-science/
Sat, 16 Dec 2023 11:31:53 +0000

In less than a generation, we’ve managed to build an utterly bewildering information environment: social media. With nothing more than a tap on a screen, 5.3 billion of us can now plunge headlong into a swirling ocean of “viral” content.

Previous generations also contended with misinformation. False narratives, malicious gossip and political spin are as old as time. Before the advent of the scientific method, everyone marinated in a rich stew of fairy tales, myths and superstitions. Like us, our ancestors trusted quack cures and fell for conspiracy theories.

But now, quite suddenly, we find ourselves in a brave new world, one riddled with rabbit holes and confounded by clickbait. We have fake news and flame wars, cancel culture and contested speech norms, echo chambers and “alternative facts.” We’ve seen culture warriors weaponize Facebook and Twitter, science denial grow into a lucrative business, and conspiracy theories mutate into monstrous forms (QAnon). Is it any wonder so many of us are lost?

We all see others taken in by the BS. We think, “Children get faked out by fairy tales, but I’ve outgrown them. Voters are bamboozled by propaganda, but that’s them, not us. Followers of other religions are misled, but I practice the true faith.” Eventually, though, the more thoughtful among us think to ask pivotal questions. “Am I really so exceptional? Or am I, too, being played? Would I know if I was? Do I really know what I think I know? What misconceptions do I harbor?”

They say there’s a sucker born every minute, but in truth, we’re all born suckers. We’re fairly gullible by default, probably because our ancestors had to learn rapidly when young. That’s why children believe in the tooth fairy. The problem is that, even as adults, we remain strangely susceptible to evolved forms of nonsense: Without guidance, we remain lost.

An unusual few, though, exhibit what I call “deep immunity.” These folks think differently. Somehow, they ward off troublesome information with ease and exhibit uncommonly sound judgment. They cultivate mental habits that can grow into something we could all use more of these days: the precious trait called wisdom.

But how do we cultivate these habits? Half a lifetime ago, I began studying the matter in earnest. Decades of research led me to an astonishing, transformative, but almost unknown fact: The human mind has an immune system of its own. Just as the body has a system for spotting and neutralizing infectious microbes, the mind has a system for spotting and shedding infectious ideas. So I wrote a book about it. The book helped launch an upstart science — what we in the business call cognitive immunology.

The field illuminates the workings of the mind’s defenses. It explains why these defenses sometimes break down and how we can fortify them against corruption. Critical thinking (CT), it turns out, is at best a haphazard approach to achieving misinformation immunity. CT is not enough. The good news? Outbreaks of viral nonsense are not inevitable. “Infodemics” can be prevented. The trick is to apply the science and proactively cultivate mental immunity.

In what follows, I distill an ocean of research into four actionable steps. Taking them should give your mind’s defenses an immediate boost. I hope, though, that the guide will also spur more ardent, long-term striving. If we keep these guidelines in mind and work patiently toward mastery, we can all grow substantially wiser.

Step 1: Shift your reference frame

Many of us default to a certain outlook on information. I call it the info consumer frame (or ICF). On this view, the infosphere is like a marketplace. (Hence the metaphor “marketplace of ideas.”) We humans are like shoppers: We browse the aisles looking for ideas that strike our fancy. Ideas are assumed to be like products that sit obediently on shelves. Meanwhile, our minds are like shopping carts — passive containers for the mental stuff we acquire. Learning is fundamentally a matter of filling your cart with information commodities; the ideas we “buy” into become our beliefs, and everyone is entitled to believe what they like.

This frame is pernicious. It breeds a sense of cognitive entitlement, exacerbates preexisting biases and obstructs higher-order thinking. In the information age, it is proving especially dangerous. Its influence is seen in the polarization that threatens the world’s democracies and the ideological entrenchment of today’s culture warriors. Whenever propaganda goes viral and incites unruly mobs, the invisible hand of the ICF is at work. In our time, we’re called to rethink this prevailing understanding of our relationship with ideas.

Happily, an alternative is taking hold in the sciences. Here’s the gist: Ideas are more like microbes than groceries. Bundles of information obey an evolutionary logic: The “fittest” tend to find hosts, survive and reproduce. Our minds host some that are good for us and others that are bad for us. Good ideas (roughly, true and useful ones) amount to mind-symbionts and bad ideas (the false or harmful ones) amount to mind-parasites.

Sometimes the latter — “infobugs” — proliferate at our expense. For example, beliefs about witchcraft have incited moral panics (Salem), extremist ideologies have inspired terrorist attacks (9/11) and fake news has galvanized sedition (the January 6 US Capitol attack). Some infobugs even induce us to spread them. Think of the clever but misleading meme that gets you to share it, or the religious notions that inspire proselytizing. Just as a virus can hijack a cell for its purposes, an ideology can hijack a mind for its “purposes.”

Call this the microbial ecosystem frame. Minds are not passive receptacles. They’re active, infection-prone contraptions cobbled together by natural selection. In fact, our minds co-evolved in a rich stew of ideas, many of them prone to replicate in spite of our best interests. Crucially, every one of us is susceptible to mind-infections. In fact, every one of us is infected. We play host to countless infobugs. Misconceptions, false assumptions, overgeneralizations, limiting beliefs, crippling doubts — all of these are, in a very real sense, mind-parasites. Minds teem with them, and precautions must be taken to keep them from running wild.

This frame has a key implication: Each and every one of us has a lot to learn — and unlearn. Much of what we think we know doesn’t truly amount to knowledge. Admit this, embrace the consequent humility, and you take an important step toward deep immunity.

Step 2: Have standards

We need shared cognitive standards. Otherwise, our beliefs become arbitrary. Opinions diverge, ideologies harden, and worldviews become irreconcilable. Historically, it works like this: Excuses that license irresponsible talk spread, sowing the seeds of mental decadence. Then, unaccountable talk proliferates, belief systems diverge and societies succumb to mistrust, division and conflict.

The outbreaks of irresponsible thinking in our time can be traced to ideas like these: “Our beliefs are fundamentally private and no one else’s concern”; “Everyone is entitled to their opinion”; “Values are fundamentally subjective”; “Articles of faith should not be questioned”; “Criticism is tantamount to the policing of thought.” A related idea — that “no one has standing to uphold standards” — is conveyed by the sneaky rhetorical question, “Who’s to say?”

Philosophers call this nexus of ideas “relativism,” and intellectual historians know that the appearance of such ideas presages periods of turmoil and civic decline. Why? Because they weaken the centripetal pull of objective evidence. Without reality-based cognitive standards, “the center cannot hold,” and “mere anarchy is loosed.”

Cognitive immunologists classify relativistic ideas as mental immune disruptors. People employ them to evade accountability norms. This subverts those norms, leading to cognitive dysfunction. Shedding the disruptors is thus one way to build mental immunity. If you haven’t already done so, I suggest renouncing the ones in quotation marks above.

Try this also: Apply the Golden Rule to the life of the mind. Ask yourself what cognitive standards you would have others observe, then hold yourself to those same standards. Want others to be honest? Be honest yourself. Want others to be fair-minded and persuadable? Make yourself fair-minded and persuadable. Are you troubled that others believe things they have no business believing? Then don’t believe things you have no business believing. Apply the “Law of the Gospels” to the world of information and — voila! — you get a rich and beneficial ethics of belief.

Norms of accountable talk are the cornerstone of human civilization. When they are generally observed, constructive means exist for resolving conflicts, and everyone benefits. When bad actors defy these norms, it chips away at the trust that makes cooperative living possible. Imagine a world where decayed norms of accountable talk afford no protection against malicious accusations. Imagine a rival employing unfounded allegations to get you locked up. You’d have no recourse. If that’s not the world you want, help strengthen the norms of accountable talk.

Also, dump the idea that it’s enough to have a good reason for whatever you want to do or believe. You can manufacture a serviceable reason for anything, so that standard is too lax. (I call this the Platonic standard, because it occupies center stage in two Platonic dialogues.) This standard encourages wishful thinking and rationalization. It also exacerbates confirmation bias.

The antidote is the Socratic standard: beliefs and decisions should be able to withstand tough questioning, including the objections of those who disagree. Standards like this give us a mechanism for resolving our differences with words. They also bring the defects of troublesome ideas to light and help us shed them. The true test of responsible belief is not, “Can I find a reason for this?” but, “Can it withstand questioning?”

You know how we update our antivirus software to protect our computers from the latest digital pathogens? We need to do the same with our brains. Here’s how. Learn how bad actors “hack” minds: how they play on fears, encourage wishful thinking and float seductive conspiracy theories. How they weaponize doubt, cultivate cynicism and compromise mental immune systems. Build your mental library of mind-viruses, fallacies and mental immune disruptors, and you’ll spot manipulative information more easily.

Step 3: Practice basic cognitive hygiene

Many of us dislike uncertainty, so we “tune out” our doubts. But cognitive immunology explains why this is a grave mistake. Doubts are quite literally the antibodies of the mind. The mind generates them to fight off problematic information. Learn to listen to them. Often, they’ll draw attention to an idea’s defects, thereby reducing the risk of mind-infection. Better yet, befriend your doubts: learn to enjoy their company and enjoy the benefits of next-level BS-detection.

Your mind also generates reasons. Sometimes, it does this to rationalize what it wants, but more often, it does this to draw your attention to a consideration that really does count for or against something. A basic principle of cognitive hygiene, then, is to give good reasons their due. Whether they count for your position or against it, credit them. Let them change your mind. (In practice, this often means letting them nudge your degree of confidence in something up or down a bit.)

Willingness to yield to “better reasons” is the very heart of rational accountability, so submit to each and every relevant consideration that comes along. Often, there are important considerations on both sides of an issue; when this happens, reject the myopic foolishness of “either…or” and embrace the wisdom of “both…and.” Grown-ups can credit competing considerations.

You can strengthen the “muscle” at the core of your mind’s immune system by habitually yielding to evidence. Simply allow evidence to shape your outlook and your mind’s immune system will grow stronger. Push back against evidence (that is, defy reason on behalf of a favored position), and it will grow weaker. The research on this is, I believe, conclusive: Even small concessions to willful belief damage the mind’s immune system. By all means, be resolutely hopeful, but renounce willful believing.

When exploring contentious topics, it’s also important to sideline your identity. Here’s why: When people hitch their identity to a set of views, a phenomenon called “identity-protective cognition” kicks in. They begin experiencing legitimate challenges as threats — and overreact. More precisely, the mind’s immune system overreacts. Immunologists call some immune-system overreactions “autoimmunity,” and autoimmune disorders of the mind exist too: When mere words trigger a heated response in you, you’re experiencing an unhealthy autoimmune reaction.

Good cognitive hygiene also requires that you practice subtractive learning. Most of us think of learning as adding to the mind’s knowledge stockpile. But it’s equally important to subtract out the stuff that doesn’t belong. Notice inconsistencies in your beliefs and take time to address them. (Usually, this means letting go of one of the inconsistent beliefs.) Fail to do this and inconsistencies will accumulate; your belief system will grow increasingly unreliable, and your capacity for sound judgment will degrade.

Step 4: Mind your mindset

It’s easy to slip into a mindset that compromises mental immune function. If you’re too trusting, a lot of bad information will get past your filters; if you’re unduly suspicious, good information will get caught in those same filters. You can be too gullible, but you can also be too cynical. You wouldn’t know it from all the emphasis we place on critical thinking, but you really can be too critical for your own good.

Critical thinking is mostly a fine thing, but the combative attitude of a culture warrior is corrosive of mental immune health. Culture warriors fixate on points that can be wielded as weapons against “them” — and become blind to considerations that weigh against “us.” Treat the space of reasons as a battlefield and you’ll develop an acute case of what psychologists call “myside bias.” This can fatally compromise your mind’s immune system. That’s why partisan zeal unhinges minds.

To avoid this fate, be curious, not critical. Maintain a collaborative spirit. Treat conversation partners as collaborators. Never wield reasons as weapons; instead, employ them as pointers meant to guide attention to relevant considerations. Don’t reason to win; reason to find out. I call this mode the way of inquiry: Make it your default mindset and, over time, you’ll achieve something akin to wisdom.

Of course, we do need to test each other’s ideas. Our mind-infections are largely invisible to us, so we need the help of others to spot them. It doesn’t help, though, if conversational idea-testing becomes contentious. Then, pride and fear interfere with falsehood removal. Two habits of mind can help here. First, think of challenges as opportunities, not threats. They’re opportunities to unlearn and should generally be welcomed. Master this, and you won’t overreact to cognitive conflict.

Second, convert your objections into clarifying questions. Even if the view at issue seems unworthy, approach it as something worth understanding. Show genuine interest. Be curious and patient. If the claim in question is problematic, ask for help understanding it. Do this, and the claimant will often discover its problematic qualities for themselves. Once you’ve won a person’s trust, you can place countervailing considerations alongside their reasons — “This is true too, right?” — but let them weigh up the pros and cons. And let them draw their own conclusions.

So there you have it: a four-step guide to developing mental immunity. To sum up: (1) shift your reference frame, (2) uphold standards of accountable talk, (3) practice sound cognitive hygiene and (4) mind your mindset. As you weed out misconceptions and replace them with understanding, your immunity will deepen. You’ll become less prone to mind-infections. As those around you do the same, they become less likely to infect you.

We can build herd immunity to cognitive contagion. Imagine a world where outbreaks of unreason are routinely nipped in the bud, where truculent ideologies are easily dissolved and pointless partisanship no longer frustrates human aspirations. Can we evolve such a world? Absolutely. With cognitive immunology to light the path, each of us just needs to do our part.
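
A quantitative aside (mine, not the author’s): “herd immunity” has a standard threshold in epidemiology. If each infected host would otherwise pass an agent on to R_0 others, outbreaks die out once the immune fraction p of the population satisfies

    p \geq 1 - \frac{1}{R_0}

So a piece of viral nonsense that would otherwise spread to four new minds per host (R_0 = 4) stops propagating once roughly 75% of minds can shrug it off. Whether memes really obey this arithmetic is an assumption of the metaphor, not an established result.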

[Anton Schauble edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

How to Strengthen Your Mind’s Immunity to Bad Ideas
https://www.fairobserver.com/business/technology/how-to-strengthen-your-minds-immunity-to-bad-ideas/
Wed, 29 Nov 2023 09:04:14 +0000

Question: The problem of misinformation and disinformation is huge, and it’s growing with the arrival of AI like ChatGPT. With a dearth of solutions out there, the idea of cognitive immunity is alluring. Does it offer real solutions?

Andy Norman: People are right to be concerned. AI promises to be hugely disruptive. Here’s one reason why: AI-governed algorithms amplify information with “viral” properties. Nearly everyone now is plugged into the web, where infectious nonsense can spread like wildfire. Propagandists can now reach millions of vulnerable minds in a matter of minutes. We don’t like to admit it, but our connectedness makes us more vulnerable to sketchy information. Toxic polarization and conspiracy theories are symptoms of a deep imbalance: Our ability to spot nonsense has not kept pace with our ability to spread it.  

An exciting new science, though, is teaching us how to fight back. Each of us possesses a highly evolved capacity to filter out false and malicious information. The suite of mechanisms that does this work deserves a name; we call it the mind’s “immune system.” Scientists from around the world have joined our call to understand it so we can better cultivate mental immunity. Here’s what we’ve learned: these systems can go haywire, but they can also perform at a very high level. The key is to learn habits of mind that keep your mind’s immune system grounded. The Mental Immunity Project is all about freeing ourselves from false and manipulative information.

Q: How real is the mind’s immune system?

Norman: Philosophers are going to be arguing this one for a long time! Here’s what we know: the mind does something deeply analogous to bodily immune function: It actively monitors for false, harmful, and infectious stuff — “viral” information, basically — and does its best to shed it. The body’s immune system manufactures antibodies to fight off pathogens, and the mind manufactures doubts to fight off problematic ideas.

Both systems function best in a “Goldilocks zone” that lies between extreme trust and extreme suspicion. Both tend to go haywire when they stray out of this zone.

Fortunately, our minds are inoculable, just like our bodies. And bodies are inoculable because they have immune systems. So what does that tell you? Both systems evolved by natural selection to solve similar problems. Each functions to protect an evolved thing from infectious and parasitic stuff. The similarities are really quite striking.

To sum up: Yes, I think that mental immune systems are very real. I have smart colleagues who disagree, though, and that’s fine. We differ on a subtle philosophical question, but agree on the important thing: We need to understand and care for the mind’s capacity to spot and filter misinformation.

Q: In your book, Mental Immunity, and with the Mental Immunity Project, you aim to advance the science of cognitive immunology. You also aim to share actionable ideas that people can employ in their day-to-day lives. What are some of the most exciting recent findings? 

Norman: There’s so much neat work going on, it’s hard to know where to begin. Here are a few findings that I think have the power to change lives. First, the science should change the way we feel about doubt. Most people dislike doubt; it makes them uncomfortable. They prefer certainty. But ultimately, doubts are our friends. They’re quite literally the antibodies of the mind. The mind sends them to try to alert us to the problematic features of bad ideas. If you pay attention to your doubts, appreciate them and update your beliefs regularly — sometimes by letting a belief go — you will grow wiser over time.

My second favorite finding has to do with other people’s doubts. Each of us harbors beliefs. We grow attached to them, and are usually blind to their defects. (Like love, belief can be blind.) This means that we need the help of others to spot our mind-infections. We need to listen to other people’s objections, fight down the urge to get defensive and learn to appreciate them for what they are: opportunities to “unlearn.” Simply put: Treat challenges to your worldview as opportunities, not threats. The mind’s immune system can freak out and attack the bearers of conflicting information; it’s up to us to calm it down so we can learn from that information.

Q: What are some practical things people can do to start strengthening their mental immune systems? How can people keep their kids and families from falling for bad information?

Norman: We’ve developed a Guide to Mental Immune Health designed to help everyone build their immunity. In it, we identify ten key habits of mind. We call them principles of mental immune system care, and each one is pretty simple. For example, we should monitor our motives for believing. A lot of times, we believe things because we want them to be true, not because they really are true. This is problematic, though, because it can make us prone to wishful thinking. The antidote is to notice why you believe what you believe. Believing something because it’s useful to believe it is one thing, and believing it because it’s probably true is something else. Responsible thinkers keep track of which is which.

Another principle of mental immune system care: Embrace shades of gray thinking. Life is full of uncertainties, so complete certainty is almost always a mistake. Make your peace with intermediate confidence levels. If you’re only about 85% certain that something is true, own that. Say, “I think it’s true,” rather than, “I know it’s true.” The world’s best thinkers are continually adjusting their confidence levels as new evidence comes in. If a new consideration weighs against a belief, but only a little, it’s usually best to reduce your confidence a bit.

A third example: Play for team truth. It’s easy to get caught up in a culture war and feel outraged by the things “they” are saying and doing. When this happens, resist the urge to indulge in righteous indignation. Why? Because continually reacting to the latest outrage from the other side can damage your mind’s immune system. It tends to compromise your ability to think objectively. When you encounter an objectionable half-truth, appreciate the truthful part of it before you criticize the not-so-true part. Give the other side’s reasonable points their due. Don’t react, reflect. Be fair-minded. Seek truth and common ground, not victory.
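
The “shades of gray” principle above lends itself to a worked sketch. Norman prescribes no formula, but Bayes’ rule is one standard way to cash out “adjusting confidence levels as new evidence comes in.” The function and numbers below are illustrative assumptions, not anything from the interview:

    # A minimal sketch of graded confidence updating via Bayes' rule.
    # Illustrative only: the function name and numbers are assumptions,
    # not part of the interview.

    def update_confidence(prior: float, likelihood_ratio: float) -> float:
        """Return the posterior probability of a belief after new evidence.

        likelihood_ratio = P(evidence | belief true) / P(evidence | belief false).
        Values above 1 support the belief; values below 1 weigh against it.
        """
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    confidence = 0.85                                # "I think it's true," not "I know it's true"
    confidence = update_confidence(confidence, 0.8)  # evidence that weighs against it, a little
    print(f"Adjusted confidence: {confidence:.2f}")  # ~0.82 -- nudged down, not abandoned

Note the design choice: evidence that weighs against a belief only a little (a likelihood ratio near 1) moves the number only a little, which is exactly the graded adjustment the interview recommends.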

Q: Can you offer some examples where techniques like pre-bunking have effectively neutralized bad information, or at least made it less damaging?

Norman: Sure. Here are two important ones. In the run-up to Russia’s Ukraine war, US intelligence learned that Vladimir Putin was planning an invasion. They learned that Putin was going to use Russia’s powerful propaganda machine to sell a false narrative of Ukrainian aggression. The Biden administration took this information and began warning allies. His ambassadors alerted other nations of a coming disinformation campaign. Representatives of his administration warned news outlets. When the invasion and the influence campaign arrived, dozens of governments and media outlets were prepared not to drink Putin’s Kool-Aid. Biden had successfully “prebunked” Putin’s false narrative, so it fell flat. This is a big reason why Putin’s power grab failed.

“Prebunking,” by the way, is another name for mind-inoculation.

Second example: In the run-up to the 2020 presidential election, Donald Trump made it clear that he was planning to claim that the election was rigged. He concocted a false narrative and began selling it months in advance. He knew instinctively that simple and emotionally charged messages can hijack minds, and he repeated his claims again and again. He was actually hacking his supporters’ minds. Fortunately, the Department of Homeland Security saw that this could result in electoral chaos. They worked closely with one of our colleagues — Sander van der Linden, a Cambridge University psychologist — and they warned election officials all over the country. They used prebunking to prevent these election officials from falling for the big election lie. When the time came to count the votes, these officials were effectively inoculated. They did their jobs and American democracy survived. Prebunking prevented a constitutional crisis.

Q: Are certain people more prone to believing misinformation than others?

Norman: Absolutely. Just as people vary in their susceptibility to the flu, people vary in their susceptibility to misinformation. Those who know how to spot and disregard sketchy information (those with well-functioning mental immune systems) can shrug off the very same information that seriously addles others (those with poorly functioning systems).

There are three major reasons why we fall for misinformation. First, we tend to trust information that confirms our biases. If information “fits” with our worldview, we’re less likely to be skeptical and more likely to accept it as true. Fail to understand this, and your worldview can become rigid and self-validating. Second, we’re more likely to fall for misinformation that triggers strong emotions. Anger, outrage and fear are especially potent: They reduce our ability to think well. Third, we’re more likely to fall for misinformation when it’s repeated. This bias is known as the illusory truth effect, and it’s especially powerful in “echo chambers” where false information is repeated uncritically.

Learn a bit about your mind’s misleading tendencies, though, and you can begin to mitigate them. For example, make a habit of asking yourself: “Am I accepting this at face value because it’s genuinely reliable, or am I accepting it because I find it validating?” If there’s some mix of the latter, you probably ought to give the information a second, more critical look.

Q: What is it about conspiracy theories that allow them to infect minds so successfully?

Norman: Conspiracy theories are like traps. Buy into one and it provides ready-made excuses for doubling down on the narrative. Why is there no evidence for the conspiracy? Because the conspirators covered it up! Why is there evidence against the conspiracy? Because the conspirators planted it!

Those prone to conspiratorial thinking share similar traits. They tend to be low in intellectual humility. They rely more on intuition and less on analytical thinking. They have a need for certainty and prefer simple answers for complex events. They see patterns where none exist, connecting unrelated events into a larger plot. They view themselves as heroic victims and blame others when things go wrong. Conspiracy beliefs give you a sense of control; they boost your self-esteem and make you feel part of a special group that’s “in the know.”

Often, conspiracy theorists are hyper-critical thinkers. Their suspicions are overblown. They’re unable to trust where trust is warranted. And, ironically, conspiratorial thinking won’t help uncover real conspiracies (which do exist!). For that, we need measured skepticism and genuine critical thinking.

Q: How did the Mental Immunity Project come together? 

Norman: In my book about mental immunity, I proposed a new approach to our world’s misinformation problem: one centered on the idea that minds have “immune systems” that can do a lot of the work for us — provided we care for them properly. I founded the Cognitive Immunology Research Collaborative (CIRCE), an institute dedicated to understanding the mind’s defenses and cultivating mental immune health.

In 2022, we convened a blue ribbon panel composed of the world’s leading experts on misinformation and cognitive immunology. The panel drafted a bold declaration highlighting the science, which has now been signed by over 100 scholars and domain experts.

The panel also concluded that we could use the science to begin cultivating mental resilience at scale. So, in 2023, CIRCE teamed with Thinking Is Power to launch the Mental Immunity Project, which translates the science into tools anyone can use. We developed the first-of-its-kind guide to mental immune system care and put it online. Anyone can go there and learn how to spot the worst kinds of misinformation — extremism, hate, pseudoscience, conspiracy theories, propaganda, etc. We’re developing tools to protect loved ones, kids, and organizations too.

Q: What do you think the future of cognitive immunology holds?

Norman: The science of immunology turned the tables on infectious microbes. It gave us the upper hand in the battle against disease. This fundamentally changed the human condition. Now, the science of cognitive immunology promises to turn the tables on infectious misinformation. We think it will give us the upper hand in the battle against viral nonsense — a battle we think is every bit as consequential. To get there, though, we need partners. We need foundations that can invest in the science. We need school districts committed to equipping their students. We need citizens who can spot propaganda and call it out. We need everyone to acknowledge their susceptibility to manipulative information, and take steps to mitigate it.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Short Story: “Cheek Biter”
https://www.fairobserver.com/blog/short-story-cheek-biter/
Sun, 24 Sep 2023 11:48:19 +0000

Something to consider when reading/listening: Is our identity only ever what other people perceive us to be? Who are we when no one is looking? 

I’m a cheek biter. Mmm. That’s what she says to her assistant every time I come in. In the middle of a session or whatever you call it. She’ll count the teeth. “1, 2, 3, 4, he’s a cheek biter, 5, 6, keep an eye on 6, 7, 8.” It surprises me every time.

I’ve never asked her about it. But it always catches me out. Transports me right back to the last time she said it. 

A few hours later, I forget this element of my personality altogether. I don’t notice I’m doing it. I certainly don’t do it consciously. But for a few hours every six months, I become, to my complete astonishment, a cheek biter. 

Maybe I’m just forgetful. “You’d forget your own name if it weren’t sewn into your clothes,” that’s what my mother used to say. “You’d forget your head, too, if it weren’t screwed on.” But I do. Even though my head is screwed on. Even though my name is sewn, if not into my clothes, then into my skin, I do forget both of these items on a regular basis. 

When I’m at the dentist, I have neither head nor name. I’m aware of my teeth and my cheeks for the first time in months, and a person can only be aware of a very small number of things at any one time. She calls me a cheek biter, and in that moment I have neither name nor head. I have no memories, no family, no future. All my attention goes to one point. I am a cheek biter and nothing else.


A father and a husband, that’s how other people might think of me. That’s what they’d put on my tombstone presumably, or my memorial bench. But you can’t be two things, not at the same time. 

How much of my life do I spend being either father or husband? 

The girls don’t live at home, so I’m a father — what? — for the hour or so I must spend thinking about them each day, or the couple of afternoons I spend with them each month? 

But when I’m thinking of them, I’m not thinking of myself. And when I’m with them, I’m listening to them, I’m appreciating them as human beings, I’m not claiming them, I’m not labeling them. So can we really count this as being a father? 

The other day, I showed my father a picture. It was from the late sixties. He’s standing next to his footballing hero; they’re both young men of about the same age. He said, “Which one am I?” I pointed at Geoff Hurst. I said, “You’re the one who scored the hat trick in the World Cup final.” His face lit up. He didn’t stop smiling for hours.

The next day, I showed him the photo again and he’d forgotten all about it. He asked me the same question. I said I don’t know. 


What about husband? Am I a husband every time I speak to my wife, or am I simply just someone to speak to? If the postman can speak to her without becoming her husband, why can’t I? 

When we were first married, and someone said “your wife,” something pinged in my brain. A big, blazing reminder: “You, sir, are a husband.” This stopped happening quite some time ago. The same with being a father. When I held the girls for the first time, when the nurses called me dad, there was no question. But now we’re just human beings having a chat.

Perhaps you only get to be something when it’s something new. On his first day, the postman was a postman with every fiber of his being, but now he’s just a man who, every half hour or so, remembers he’s delivering post. 

It’s why it’s so exciting to hear her say it.

“Cheek biter.”

Something new, at my age. So late in the game, I get to be something I’ve never been before.

The moment she says it, I am solely, and entirely, a cheek biter. 

In fact, I think there’s a good argument to be made that, at this point in my life, I’m much more a cheek biter than a father or a husband, and maybe it should take precedence on my tombstone or memorial bench. “Beloved Biter of Cheeks (His Own).”

Why bother with my name? My head will be eaten by maggots, why not let my name go with it? Why not let me be solely, and entirely, a cheek biter for time immemorial? 


I’m not attached to my dentist like some people. I’ve seen her for what, five years now. But I wouldn’t have cared if today she’d been someone else. My doctor’s always changing, it doesn’t bother me at all. Some people really don’t like that. 

They think they’re seeing a person, they think they’re building a relationship, but they’re not. A doctor or a dentist, they’re just patterns of behavior. And you, as a patient, you’re a pattern too, I’m afraid. You’ve seen the same doctor for twenty years? Well, so what? For most of his existence, he’s not even a doctor, let alone your doctor.

He knows the notes an earlier version of himself left behind. He knows the eyes and ears that haven’t existed since the last time he saw them. And what do you know of him? A pair of specs and furrowed eyebrows that haunt your dreams? It’s patterns, not people. Doctors, dentists, patients, fathers, husbands, all just patterns.

“See you in six months,” she says once I’ve spat out the colored water and stood up from the chair. You see what I mean? 

To the dentist, I’m a cheek biter. To the various doctors, I’m stage four but fighting hard. To the engravers of my tombstone, frankly, I couldn’t care less.

Do these pieces of information tell you who I am? Well, no. No more than my name, my head, or anything else I’ve forgotten.

[Doe Wilmann first released this piece on his short story podcast, Meaningless Problems.]

The views expressed in this article/podcast are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Aging Alone Isn’t Just for Lesbians: We Are All Vulnerable
https://www.fairobserver.com/more/science/health/aging-alone-isnt-just-for-lesbians-we-are-all-vulnerable/
Sat, 23 Sep 2023 06:58:10 +0000

For twelve years starting in 1982, my partner and I in San Francisco joined with two friends in Seattle to produce Lesbian Contradiction: A Journal of Irreverent Feminism, or LesCon for short. We started out typing four-inch columns of text and laying out what was to become a quarterly tabloid on a homemade light table. We used melted paraffin from an electric waxer to affix strips of paper to guide sheets the size of the final pages.

Eventually, we acquired Macintosh computers, trekking to a local copy shop to pay $0.25 a page for laser-printed originals. We still had to paste them together the old-fashioned way to create our tabloid-sized pages. The finished boards would then go to a local commercial printing press where our run of 2,000 copies would be printed.

This was, of course, before ordinary people had even heard of email. Our entire editorial process was mediated through the US Postal Service, with letters flying constantly between our two cities. On the upside, through 12 years and 48 issues, we only had to hold four in-person meetings.

All of which is to say that I’m old. That fact, along with recent events in the lives of several friends, has brought to mind the first article I ever published in LesCon: “Who’s Going to Run the Old Dykes’ Home?” It’s a question that’s no less pertinent today, and not just for lesbians. My worldview was more parochial back then; I naively believed that someone — the state or families — would look out for heterosexual elders, but that we lesbians were on our own. It turns out that we, the people of this country, are all on our own.

Aging is not easy

These days, my partner and I seem to be doing a lot of elder care. Actually, I’ve long been a source of tech support for the octogenarian set, beginning with my own father. (“OK, you’re sure you saved the file? Can you remember what name you gave it?”) With our aging friends, we also help out with transport to doctors’ offices, communications issues (with landlines, cell phones, and the Internet) and occasionally just relieving the loneliness of it all.

In recent months, elderly friends of ours have faced losing their housing, their spouses, their mobility or their cognitive abilities. I find it terrifying, and I ache because there’s so little I can do to help them.

I shouldn’t be surprised, but I’m daily reminded that getting older can indeed be frustrating and frightening. It pains me to know that my bones are weakening, that I don’t hear as well as I used to, that my skin’s drier and wrinkling, that my once familiar face in the mirror is growing ever stranger. I’m lucky that — like my father who used to say, “After 70, it’s all maintenance” — I’ve managed to maintain a fair amount of brown hair on my head. But I especially hate the way words that used to leap down my tongue in merry cadence now frequently lurk sullenly in the backwaters of my brain.

In a piece about our aging political class, Robert Reich, secretary of labor for President Bill Clinton, wrote charmingly about the “diminutions” that come with growing older and his own decision to stop teaching after decades of doing so. His take on anomic aphasia is similar to mine. He laments his trouble remembering people’s names, noting that some “nouns have disappeared altogether. Even when rediscovered, they have a diabolical way of disappearing again.” I know what he means. For some years now, whenever I want to talk about cashew nuts, all I can initially think of is “carob.” Some devious gremlin has switched those words somewhere in the card catalog of my brain.

But even as I grieve for capacities lost and departing, I’m still not ready to come face to face with the only true alternative to aging: not some tech bro’s wet dream of eternal life, but the reality of death. I’m opposed to dying, and had the universe consulted me, I’d have left mortality out of its design completely.

Aging people need help — and not just the lesbians

Written more than 40 years ago, parts of my piece “The Old Dykes’ Home” are flat-out embarrassing now. Getting old seemed so strange and far off before I was 30. When I imagined being aged then, I think it was with the piercing sorrow of Paul Simon’s song “Old Friends/Bookends”:

Can you imagine us years from today

Sharing a park bench quietly?

How terribly strange to be seventy

In other ways, my article was depressingly prescient about just how much this country would expect aging people to fend for themselves by the time I reached that strange period of my own life. Not only old dykes, but pretty much anyone who isn’t affluent, can find that old age brings economic desperation.

Yes, US citizens and permanent residents over 65 can get medical attention through Medicare, but the standard program only covers 80% of your bills. Beginning in 2006, we gained access to some prescription drug coverage, but that requires sifting through an ever-changing menu of medications and the ability to predict today what meds you might need tomorrow.

Most people who live long enough will receive some monthly income from Social Security, although the amount depends in part on how much they were able to earn during their working lives. But we’re constantly staving off attacks on Social Security, including attempts to privatize it, reduce benefit amounts or increase the age at which people can collect because Americans are living longer. That last proposal, as economist Paul Krugman has pointed out, is really another way of penalizing low-wage workers. As he wrote,

Life expectancy has indeed risen a lot for the affluent, but for the less well-paid members of the working class, it has hardly risen at all. What this means is that calling for an increase in the retirement age is, in effect, saying that janitors can’t be allowed to retire because lawyers are living longer. Not a very nice position to take.

Suppose the disabilities of age mean you can no longer safely live in your own home. Well, you’re on your own. Unless you can afford to move to some kind of assisted living facility, you’re in real trouble. Your main alternative is to spend down most of what you own, so you qualify for the pittance that your state Medicaid program will pay a (most likely for-profit) nursing home to warehouse you until you die.

The threat of being old and unhoused is very real. A recent major study of unhoused people in California found that almost half of them are over 50 and 7% over 65. As housing costs continue to rise, we can only expect that more old people will find themselves on the street.

Back then, I wrote that, under capitalism, we could expect the “owners of wealth” to do very little for people who are no longer creating profits through their labor — or indirectly, by doing the work “to make it physically and emotionally possible for the paid laborers to go out in the world and work one more day.” Why, after all, should capital take any interest in people who are no longer a source of profit?

These are the people — old, disabled, permanently unemployed — who, according to the political philosopher Iris Marion Young, experience a particularly sinister form of oppression: marginalization. “Marginalization,” writes Young, “is perhaps the most dangerous form of oppression. A whole category of people is expelled from useful participation in social life and thus potentially subjected to severe material deprivation and even extermination.”

Volunteering isn’t going to fix the problem by itself

There were some other missing pieces in that article. I left out the fact that it’s easier to justify low pay for the art (and science) of caregiving when most of its practitioners are women. I failed to envision caretakers organizing on their own. I never imagined that, decades later, a National Domestic Workers Alliance would arise to represent the interests of the poorly paid, disrespected workforce of immigrants and women of color who largely do the work of caring for the aged in this country.

I had just lived through an episode in which, on the bus to work, I suddenly fainted from pain caused by a herniated disk in my back. I found myself lying on my bed for several months recovering while living on a monthly welfare check of $185 and food stamps. Still, the lesson I drew was that the solution to caring for people with chronic disabilities was what had then worked for me: drawing on a community of volunteers, a roster of almost 30 women who took turns shopping for my groceries, doing my laundry and ferrying me to doctors’ appointments. Why couldn’t that work for everyone?

That network of support existed, however, because I belonged to a lesbian community self-consciously constructing a parallel society tucked inside the larger city of Portland, Oregon. It was packed with institutions like a women’s bookstore, a drop-in community center, a women’s mental health project and a feminist credit union, among others. I acted with a women’s theater company and, at times, worked as a secretary at a women’s law cooperative.

In reality, though, we weren’t nearly as independent as we thought we were. Most of those institutions were staffed by women paid through the Comprehensive Employment and Training Act, passed during the presidency of Richard Nixon and continued under Jimmy Carter. When Ronald Reagan and his new brand of Republicans took over in Washington in 1981, those salaries disappeared almost overnight — and with them, most of our community’s infrastructure.

So, my answer to the problem of aging then was to endorse an ethic of volunteerism rooted in specific communities, like our lesbian one. “Feminists,” I wrote, “are rightly uneasy about asking each other to perform any more unpaid work in our lives than we, and centuries of women before us, have already done.”

Nevertheless, I argued, “The truth is … no one is going to pay us to take care of each other … and we can’t afford to believe the capitalist and patriarchal lie that we are cheating each other when we ask each other — even strangers — to do that work for free.”

In retrospect, it seems clear to me that I was then inching my way toward an ethos that could free the project of caring for each other from the claws of capitalism. But I was naïve about the amount of time and energy people would be able to spare outside of their day’s labor — especially as real wages were about to stagnate and then begin to fall. I didn’t imagine a time to come when people without much money would need to work two or even three jobs just to get by. I didn’t think, as I do now, that it would be better, instead, to focus on raising the status and pay of caring work.

Even back in the 1980s, however, I recognized the limits of volunteerism. I knew that I’d been lucky during my period of temporary disability. I was an outgoing person with quite a sizeable set of acquaintances. With a reasonable levity of spirit and a dependable store of gossip, I knew then that I could make taking care of me relatively pleasant.

But I also knew that no one’s survival should depend on having a winning personality. Instead, as I wrote at the time, we needed to “develop simple, dependable structures to serve those among us who require physical care.”

How hard could that be, after all? “A file of volunteers and a rotating coordinator could do the job,” I wrote then. Here, too, I was more sadly prescient than I even realized. In recent years, the market for aging care has indeed found a way to commercialize volunteer efforts like the ones I imagined in the form of Internet-based options like Lotsa Helping Hands and Mealtrain.

Mutual aid is not an emergency measure, but a fundamental principle

My point back then was that, as lesbians, we were on our own. No one was going to run the Old Dykes’ Home if we didn’t do it ourselves. (Perhaps I should have foreseen then that someone might indeed run it, if they could make money doing so!) I figured we had 10 to 15 years to develop “formal networks of support to deal with illness and disability,” because eventually each of us would need such structures. We lesbians would have to look out for ourselves because we lived then “on the edges of society.” I didn’t realize at the time that we shared those edges with so many other people.

Building volunteer structures was, I thought, just the short-term goal. The longer-term project was something much more ambitious: to build “a world in which the work of caring for each other happens not at the fringes of society, but at its heart.”

I still believe in that larger goal, and not because it’s a lovely fantasy, but because it’s a response to a fundamental reality of life. It’s a fact that human beings, like all beings, live in a web of interdependence. Every one of us is implicated, folded into that web, simultaneously depending on others, while others depend on us. The self-reliant individual is an illusion, which means that constructing societies based on that chimera is a doomed enterprise, bound in the end (just as we’ve seen) to fail so many on whom — though we may not know it — we depend.

Aging really is a roulette game. My partner and I are gambling that good genes, regular exercise, a reasonable diet and sufficient mental stimulation will keep our limbs, organs and minds healthy enough to, as they say, “age in place.” We plan to stay in the house we’ve occupied for more than 30 years, in the neighborhood where we can walk to the library and the grocery store. We don’t plan to get Parkinson’s or Alzheimer’s or congestive heart failure or (like yet another friend) take a life-changing fall down a flight of stairs. Having somehow forgotten to have children (and never wanting to burden even our hypothetical offspring in any case), we’re planning to take care of ourselves.

The truth is that we have much less control than we’d like to believe over how we’ll age. Tomorrow, one of us could lose the disability lottery, and like so many of our friends, we could be staring at the reality of growing old in a society that treats preparation for — and survival during — old age as a matter of individual personal responsibility.

It’s time to take a more realistic approach to the fact that all of us lucky enough to live that long will become ever more dependent as we age. It’s time to face reality and place caring for one another at the heart of the human endeavor.

[TomDispatch first published this piece.]

[Anton Schauble edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

Time Is Far More Odd Than We Think
https://www.fairobserver.com/in-depth/time-is-far-more-odd-than-we-think/
Sat, 19 Aug 2023 09:18:43 +0000

“How long is forever?” asks Alice. “Sometimes just one second,” replies the White Rabbit.

— Lewis Carroll (aka Rev. Charles Dodgson), Alice’s Adventures in Wonderland, 1865.

In September 2021, I resumed my quest to better understand the answer to this question at our rented “Spindle” cottage in Maine, at the edge of Somes Sound, America’s only fjord in the contiguous 48 states. Over the centuries, there has been no shortage of thinkers willing to probe this seemingly simple question, and yet, for all this wrestling, any definition still amounts to peeling a cosmic onion.

Circa 330 BC, Aristotle insightfully observed that time is the “number”—or measurement—“of movement in respect of the before and after.” The publican of my favored watering hole high up in the Pennines, above Rochdale, Lancashire, England, where I had my first job, used to bark out, “Time, gentlemen, please!” as he struggled to politely evict the assembled drinkers (us) at that hour. Then, in the mid-1960s, as now, the remaining patrons were marginally inebriated men. Time is our measure of movement, and it certainly governs our diurnal activities, but it is much more.

While long the plaything of sci-fi writers and philosophers, notably in H. G. Wells’s science fiction novella The Time Machine (1895), our understanding of time became more complicated after Albert Einstein’s theory of general relativity (1915), which described gravity as the curvature of spacetime. Particularly jarring was the arrival of his earlier theory of special relativity (1905), concerning the very nature of time itself—its reciprocity, or symmetry. In his “Space and Time” lecture (1908), Hermann Minkowski befuddled us further by introducing “proper time” and its subsequent antonym, “improper time.” I always thought the latter was having fun flirting with a friend’s wife, when time seems to fly. It turns out it’s more complex than that: improper time is “the time interval measured by a frame where events occur at different places.”

Carlo Rovelli, in his profound yet readable book, The Order of Time (Riverhead Books, 2018), wrote that “proper time” depends not only on where you are and your degree of proximity to masses (e.g., that flirtatious woman); it depends also on the (relative) speed at which you move (in my case, quickly!). He went on to pronounce that “The idea that a well-defined ‘now’ exists throughout the universe is an illusion, an illegitimate extrapolation of our experience.” Foiled again.

In Rovelli’s opinion, “the difference between past and future does not exist in the elementary equations that govern events in the world.” Although time appears in most of the equations of physics (sometimes only implicitly, via the speed of light), Rovelli suggests there is no special time variable t. A universe without time? Wow—“Curiouser and curiouser,” Alice might have exclaimed.

If there is no special variable t, then an obvious problem is defining time. A common definition is something like, “the progression of events from past to present into the future.” However, this seems inadequate given what we currently know about quantum physics, human perception, temporal brain science, radioactive decay, unidirectional entropy, time dilation, plus human imagination, to name but a few.

So, let’s start peeling that cosmic onion.

When did time begin?

At first blush, this seems a rather fatuous question, akin to asking how many fish there are in Lake Geneva, as a close friend once asked me. However, it is generally accepted that the beginning of time, as we know it, was the Big Bang/Great Expansion some 14 billion years ago. There was no measurable time before that event. This understandably defies belief for many, but it is supported by cosmological analysis. Whether there were other universes with their own time horizons before this event is, by definition, an unknown.

Knowing that everything humans have observed scientifically has a life cycle, it is logical that our observable universe may not be exempt. The most common statistical distribution for life cycles on planet Earth is the Gaussian distribution, commonly known as the bell curve.

We don’t know yet where in its life cycle the universe is. However, there is empirical evidence through observation that our universe is beginning to dim. This is gleaned by comparing observable light emanating from galaxies closer to us physically (and therefore in time) and galaxies further away (and therefore further back in time). A study that observed over 200,000 galaxies, presented at the 2015 International Astronomical Union XXIX General Assembly in Hawaii, concluded that the energy emitted is about half of what it was 2 billion years ago. In a grand, speculative, Promethean leap, we may ask: what if this dimming signifies that the universe is approximately halfway through its life cycle, and that the cycle follows a Gaussian distribution? In that event, our universe would have another 12–14 billion years to run, with a probability of >50%, as it accelerates towards “The Big Fizzle,” where only dissipated energy exists. Further cosmological exploration could bring us closer to understanding whether it follows nature’s bell curve or something more skewed, e.g., sharply rising at the beginning with an extended tail (a log-normal distribution).
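
For the numerically curious, here is a toy sketch in Python of that Promethean leap. Everything in it is an illustrative assumption, not cosmology: a Gaussian lifespan centered on twice the universe’s current age (the “halfway” guess) with an invented spread of 5 billion years.

import math

AGE_NOW = 13.8        # current age of the universe, billions of years
MU = 2 * AGE_NOW      # assumed mean lifespan: the "halfway through" guess
SIGMA = 5.0           # assumed spread of lifespans, billions of years (invented)

def survival(x):
    # P(lifespan > x) for a Gaussian lifespan distribution
    return 0.5 * math.erfc((x - MU) / (SIGMA * math.sqrt(2)))

# Probability of at least 12 more billion years, given survival to today
p = survival(AGE_NOW + 12.0) / survival(AGE_NOW)
print(f"P(>=12 Gyr remaining | survived to now): {p:.0%}")   # ~64%

Under these made-up parameters the odds do come out above 50%, which is all the leap claims.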

Other estimates of the remaining time for the observable universe range into trillions of years. Our universe’s demise could include it decaying slowly toward maximum entropy (maximum disorder) or the formation of a singularity where the curvature of spacetime becomes infinite.

In terms of maximum entropy, the energy of the universe will have all moved from high energy concentrations to low energy concentrations, much like a match that burns out. The universe will simply exhaust itself and can do no further physical work. This is known generally as the “heat death” of the universe and is a bit different from a cold death where the universe expands forever. On the other side is the theoretical formation of a singularity, as in the “Big Crunch,” wherein the universe reverts to a pre-Big Bang configuration. These hypotheses are but some ways the universe, as we currently understand it, might end (or restart). To adapt a T. S. Eliot quote, we have no idea if it will end with a bang or a whimper. My guess, based on current observation data of an accelerating expansion of the known universe, is a Big Fizzle.

At this point, it is worth heeding cosmologist Carl Sagan’s Baloney Detection Kit. I will quote him directly:

1. Wherever possible, there must be independent confirmation of the “facts.”

2. Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

3. Arguments from authority carry little weight—“authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities. At most, there are experts.

4. Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.

5. Try not to get overly attached to a hypothesis just because it’s yours. It’s only a waystation in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

6. Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course, there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.

7. If there’s a chain of argument, every link in the chain must work—including the premise—not just most of them.

8. Use Occam’s Razor. This convenient rule of thumb urges us, when faced with two hypotheses that explain the data equally well, to choose the simpler one.

9. Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable and therefore unfalsifiable are not worth much. Consider the grand idea that our universe and everything in it is just an elementary particle—an electron, say—in a much bigger cosmos. But if we can never acquire information from outside our universe, is not the idea incapable of disproof? You must be able to test assertions. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

My speculative life cycle estimate of the observable universe would fail Sagan’s acid test (esp. numbers 1 and 5). However, this does not preclude more thought and discussion on the meaning and duration of cosmological time.

Physics of time

There is a dizzying array of theories on this subject. Concepts such as the direction of time, the arrow of time, gravitational time dilation, proper and improper time, entropy increasing with time, quantum time entanglement, loop quantum gravity (which, per Carlo Rovelli, predicts that elementary temporal leaps are small but finite), radioactive decay, thermal time and time as the fourth dimension all play a part in the confounding physical complexity of time. All of this is before we begin discussing human perceptions of time, biological and psychological. For purposes of this inquiry, I intend only to examine a few of these, since books and copious scientific papers have been and will be written on these subjects.

Out of this potpourri of physical attributes of time, Hermann Minkowski’s 1908 concept of proper time, which led to measuring the consequences of time dilation, is in my opinion one of the more interesting. Simply put, the faster the relative velocity between two entities, the greater the time dilation. In practical terms, two observers in motion relative to each other will each measure the other’s clock slowing down; velocity exerts an influence on time. For example, after six months aboard the International Space Station at a speed of 7,700 meters per second, an astronaut would age 0.005 seconds less than someone on Earth. Yes, time slowed down for that astronaut relative to the observer on Earth, and synchronized atomic clocks have proved it.

Gravitational time dilation is, however, not reciprocal: the clock deeper in the gravitational field runs slower, and both observers agree on which clock that is and by how much. This leads to the somewhat bizarre conclusion that the core of our Earth is at a different “proper time” than the surface—roughly 2.5 years younger than the crust over the planet’s lifetime—and so on throughout the universe. Taking this concept to extremes, a traveler moving fast enough, or parked deep enough in a gravitational well, could return to find generations had passed at home: a one-way leap into the future, though not, alas, a way to meet your grandfather before you were born.
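
A rough sketch of that core-versus-crust figure, assuming (unrealistically) a uniform-density Earth; published estimates using the real density profile come out nearer the ~2.5 years cited above.

# How much younger is Earth's core than its crust?
# Uniform-density assumption: the potential is -3GM/(2R) at the center
# and -GM/R at the surface, so clocks at the center lag by GM/(2*R*c^2).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m
c = 299_792_458.0    # speed of light, m/s
AGE = 4.54e9         # age of Earth, years

fractional_lag = G * M / (2 * R * c**2)                        # ~3.5e-10
print(f"core younger by ~{fractional_lag * AGE:.1f} years")    # ~1.6 years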

If you’re starting to wonder “That’s great, but what does it mean for me?” then you’ll be interested to know general relativistic gravitational time dilation has an immediate impact on satellite (GPS) navigation, space travel and satellite time synchronization at different altitudes. GPS needs to account for this time dilation.

As Wikipedia explains,

Time dilation explains why two working clocks will report different times after different accelerations. For example, time goes slower at the International Space Station, lagging approximately 0.01 seconds for every 12 Earth months passed. For GPS satellites to work, they must adjust for similar bending of spacetime to coordinate properly with systems on Earth.

The International Space Station is only ~254 miles from Earth and travels at a tiny fraction of the speed of light. Just imagine the time dilation for a spacecraft billions of miles from Earth traveling at a significant fraction of the speed of light, where the difference would amount to years or centuries.
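
Here is a sketch of the GPS correction alluded to above, using standard textbook values for Earth’s mass and the GPS orbital radius; the widely quoted net answer is about +38 microseconds per day.

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # Earth mass, kg
c = 299_792_458.0     # speed of light, m/s
R_EARTH = 6.371e6     # Earth radius, m
R_GPS = 2.6561e7      # GPS orbital radius (~20,200 km altitude), m
DAY = 86_400          # seconds in a day

# Gravitational term: the higher clock runs fast (weak-field approximation).
grav = (G * M / c**2) * (1 / R_EARTH - 1 / R_GPS)

# Velocity term: the moving clock runs slow.
v = math.sqrt(G * M / R_GPS)     # circular orbital speed, ~3.9 km/s
vel = v**2 / (2 * c**2)

print(f"gravity:  +{grav * DAY * 1e6:.1f} us/day")            # ~ +45.7
print(f"velocity: -{vel * DAY * 1e6:.1f} us/day")             # ~ -7.2
print(f"net:      +{(grav - vel) * DAY * 1e6:.1f} us/day")    # ~ +38.5

Without this correction, GPS position fixes would drift by kilometers per day.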

For those eager to calculate time dilation, the formula is

γ = 1/√(1 − v²/c²)

The equation relating proper time and time measured by an earth-bound observer implies that relative velocity cannot exceed the speed of light c (which is 299,792,458 meters per second). 
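
As a minimal check, here is the formula applied in Python to the ISS numbers quoted earlier. It captures only the velocity (special-relativistic) part of the effect, which dominates for the ISS.

import math

c = 299_792_458.0              # speed of light, m/s
v = 7_700.0                    # ISS orbital speed, m/s
six_months = 182.5 * 86_400    # seconds

gamma = 1 / math.sqrt(1 - v**2 / c**2)
lag = (gamma - 1) * six_months               # extra time elapsed on Earth
print(f"gamma = {gamma:.12f}")               # 1.000000000330
print(f"astronaut ages ~{lag:.4f} s less")   # ~0.0052 s, the figure quoted above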

A word on radioactive decay

Some things seem to give not a whit about any of this and act as natural chronometers. Radioactive decay is one such phenomenon: the probability per unit of time that a given nucleus will decay—the decay constant—is fixed, independent of how long that nucleus has existed.

A = –dN/dt

where A is the activity (the number of decays per unit time), N is the number of undecayed nuclei, and t is time.

Again in Wikipedia’s words,

Radioactive decay is a stochastic (i.e. random) process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed. However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as half-life. The half-lives of radioactive atoms have a huge range; from nearly instantaneous to far longer than the age of the universe.

According to an April 2019 study published in Nature,

Researchers measured, for the first time ever, the decay of a xenon-124 atom into a tellurium 124 atom through an extremely rare process called two-neutrino double electron capture. By measuring this unique decay in a lab for the first time, the researchers were able to prove precisely how rare the reaction is and how long it takes xenon-124 to decay. The half-life of xenon-124—that is, the average time required for a group of xenon-124 atoms to diminish by half—is about 18 sextillion years (1.8 x 10^22 years), roughly 1 trillion times the current age of the universe.

Wow!
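
How can a half-life a trillion times the age of the universe be measured at all? A back-of-the-envelope sketch, assuming a kilogram of pure xenon-124, shows that sheer numbers of atoms do the work.

import math

AVOGADRO = 6.022e23
HALF_LIFE = 1.8e22       # years, from the study quoted above
MOLAR_MASS = 124.0       # g/mol, approximate for xenon-124

atoms_per_kg = 1000.0 / MOLAR_MASS * AVOGADRO    # ~4.9e24 atoms
decay_const = math.log(2) / HALF_LIFE            # probability per atom per year
print(f"~{atoms_per_kg * decay_const:.0f} decays per kilogram per year")   # ~187

With tonnes of xenon in a detector, even so glacial a process yields a countable signal.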

So, here we go again, with yet another measurement of time or timelessness. Geologic evidence suggests the Earth is some 4.5 billion years old—the years counted as revolutions around our sun, itself ~4.6 billion years old and approximately halfway through its life cycle of ~10 billion years. It’s rather quaint, but a calendar year equals the elapsed time for Earth to circle the sun, i.e., 365.24 days. To this we have added the accuracy of atomic clocks. The current international unit of time is the humble second, of which we gather 60 to form a minute and so on; one second is defined as 9,192,631,770 oscillations of the radiation from a hyperfine transition of the cesium-133 atom.

Physics and time

Physics provides a variety of lenses through which to understand time. According to Rovelli, a well-defined “now” does not really exist; gravitational time dilation is real; the decay constant of a radioactive nucleus is independent of time; quantum physics indicates particles can be entangled independent of distance; and, finally, cosmic entropy (the increasing physical disorderliness of the universe) may serve as a possible surrogate for the passage of time.

Einstein described quantum entanglement as “spooky action at a distance.” However, following recent observations, maybe we should append, “… without apparent time delay, regardless of distance”—though, crucially, entanglement cannot be used to send information faster than light. Perhaps, then, there are two phenomena that outrun light, at least in a loose sense: (1) quantum correlations and (2) human imagination—for example, picturing oneself on the planets of our nearest stellar neighbor (Proxima Centauri, some 4.2 light-years distant) in a matter of seconds.

While we can look backward in time by observing galaxies and stars close to the 14-billion-year-ago dawn of the universe, we can’t look forward with precision. However, with the use of supercomputers such as NASA’s Pleiades, one of the world’s most powerful supercomputers, it should be possible to fast-forward portions of the observable universe relative to current Earth time using various scenarios (not predictions, but probabilistic outcomes).

David Layzer, a Harvard cosmologist in the early 1970s, suggested that in an expanding universe the entropy would increase, as required by the second law of thermodynamics, but that the maximum possible entropy of the universe might increase faster than the actual entropy increase. This dichotomy would leave room for an increase of order or information at the same time the entropy is increasing (e.g., biological systems, certain physical systems including crystallization, etc.).

Layzer inferred that if the equilibration rate of matter (the speed with which matter redistributes itself randomly among all the possible states) was slower than the rate of expansion, then “negative entropy” or “order” (defined as the difference between the maximum possible entropy and the actual entropy) would also increase. Claude Shannon identified this negative entropy with information—an intriguing idea at the time (proper or improper!).

[Diagram via informationphilosopher.com]

Layzer called the direction of information increase the “historical arrow.”
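
Layzer’s idea can be put in toy numbers. In the sketch below, both growth laws are invented purely for illustration: the maximum possible entropy grows linearly while the actual entropy grows more slowly, so both rise (satisfying the second law) while their gap, Layzer’s “order” and Shannon’s information, widens.

for t in [1, 2, 4, 8, 16]:              # arbitrary epochs
    s_max = 10.0 * t                    # assumed: maximum entropy grows ~ t
    s_actual = 10.0 * t ** 0.5          # assumed: actual entropy grows slower
    gap = s_max - s_actual              # "order" / information
    print(f"t={t:2d}  S_max={s_max:6.1f}  S={s_actual:6.1f}  order={gap:6.1f}")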

That’s the physical angle. Now, it’s about time for a biological look at time—and particularly humans’ perception of time.

Human perceptions of time

Dean Buonomano, a professor at the University of California (UC), Los Angeles, whose research focuses on neurocomputation and how the brain tells time, explains that the human brain has multiple clocks, or mechanisms for capturing the passage of time. Not all of these clocks function in the same way: “The circadian clock doesn’t have a second hand, and the mechanisms in your brain responsible for, say, timing the duration of a traffic light don’t have an hour hand.” This is why time might appear to be moving slowly while we are engaged in an activity, but appear to have moved quickly in retrospect after we have completed that activity: we do not always experience time through the same mechanisms.

Buonomano might have added that not just the brain but the body has multiple clocks, since I’m feeling hungry right now—but that of course is a timed signal to the brain from my stomach, or vice versa. Emilie Reas, at the time a neuroscience doctoral student at UC San Diego, explained much the same thing in an article entitled “Your Brain Has Two Clocks.” The suprachiasmatic nuclei of the hypothalamus are responsible for our circadian rhythm. Then there are the amygdala and hippocampus, both of which play a part in perceived time. The right dorsolateral prefrontal cortex is considered the region most involved in time perception. Most older people perceive time as having moved faster in retrospect and slower in the present because fewer new memories are laid down in the hippocampus later in life.

When neural activity increases via neurotransmitters (including via drugs), perceived time seems to expand. Conversely, decreased neuronal firing slows or shrinks perceived time. Amusingly, our since-deceased dog Max and other dogs we know seem to have little temporal awareness (i.e., awareness of the passing of time), since we receive the same enthusiastic greeting whether returning to the house after five minutes or several days or weeks!

In 2018, Albert Tsao, a neuroscientist at the Kavli Institute for Systems Neuroscience of the Norwegian University of Science and Technology, discovered a neural network that expresses a sense of time in experiences and memories. The studies demonstrated that, by changing the activities engaged in and the content of those experiences, it was possible to change the course of the time signal in the lateral entorhinal cortex—and thus the perception of time.

In 2014, one of Tsao’s co-authors, Edvard Moser, and his then-wife May-Britt Moser had shared a Nobel Prize (with John O’Keefe) for identifying a GPS-like system of grid cells in the brain’s medial entorhinal cortex. Mercifully, the brain’s GPS can now be supplemented with electronic GPS in transportation so that people who tend to get lost, such as elderly people (myself included), are less likely to do so.

Multiple sub-mechanisms in the brain perceive time: subjective time, psychological time, experienced time, episodic time, mind time and so on. These mechanisms of the human brain that retain and differentiate aspects of time are just a smattering of what we have studied to date. Needless to say, human time perception is complex, and much remains unknown.

And then there is transcendentalism, which I will illustrate with two quotations. The first is from Peter Matthiessen:

Zen is really just a reminder to stay alive and to be awake. We tend to daydream all the time, speculating about the future and dwelling on the past. Zen practice is about appreciating your life in the moment. We are beset by both the future and the past, and there is no reality apart from the here and now.

The second is from Gore Vidal. Howard Austen, his long-term companion, asked Vidal from his deathbed: “Didn’t it go by awfully fast?” Vidal would respond in his memoirs: “Of course it had. We had been too happy, and the gods cannot bear the happiness of mortals.”

Time flies when we’re having fun, but it seems to slow when we’re not—such as in the dentist’s chair, or waiting in anguish for a vacant Spot A Pot, as opposed to spending time with a favorite companion. The same interval is recorded by our atomic clocks, but perceived time is different. Gal Zauberman at the Wharton School found that perceived time moves relatively faster with inaction—but only in retrospect, the reason being repetitive behavior and a lack of new experiences. With age often come inactivity and repetition. The elderly, looking backward, often marvel at how fast time seems to have passed.

Time perception, then, is supremely variable—think of the apocryphal cry of Lewis Carroll’s White Rabbit, “I’m late, I’m late, for a very important date!” This suggests time is malleable and can indeed be stretched or shrunken. The mechanism for stretching time is to build in circuit breakers (aka memory anchors, separated by mental baffles) filled with meaningful, enjoyable activities and related memories—so that time in the moment does not seem to race past due to inaction, particularly as we age. Stretching time is a mental discipline requiring conscious and subconscious separation of events, and an acute sense of the now, aka living in the moment—without being unduly obsessive about time. In essence, perceived time expands because of those mental circuit breakers, and more time seems to become available.

It’s important to stretch the present before it’s lost in the past or becomes part of the future—although after googling “stretching time,” I found endless fitness centers, bodybuilding exercises, stretching lotions, dog walkers and highly creative adult activities.

This modest attempt to shed light on the mystery of time is neither conclusive nor exhaustive. Rather, it’s a brief tour of the issue and some of the multiple towering contributors to the science—physical, biological and psychological. Many delving into the subject of the direction of time, aka the arrow of time, including Hans Reichenbach (1956) and Stephen Hawking (1988), concluded that time is irreversible. I quote Wikipedia one last time:

The arrow of time, also called time’s arrow, is the concept positing the “one-way direction” or “asymmetry” of time. It was developed in 1927 by the British astrophysicist Arthur Eddington, and is an unsolved general physics question. This direction, according to Eddington, could be determined by studying the organization of atoms, molecules, and bodies, and might be drawn upon a four-dimensional relativistic map of the world (“a solid block of paper”).
The Arrow of Time paradox was originally recognized in the 1800’s for gases (and other substances) as a discrepancy between microscopic and macroscopic description of thermodynamics / statistical Physics: at the microscopic level physical processes are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time.

But is this true for our imagination? In our imagination, we can bounce between past, present and future. After all, Einstein famously said, “Imagination is more important than knowledge—for knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

Time dilation, and its quantum implications, should convince us that time can be relative, absolute or perhaps even non-existent, depending on the definition, relative to the observer and observed. Quoting Carlo Rovelli again, “the idea that a well-defined ‘now’ exists throughout the universe is an illusion, an illegitimate extrapolation of our experience.”

It’s all a matter of defining “now,” as it is with “time”—and it’s now about time for my cup of tea.

[Cartoon by Randy Glasbergen, licensed March 23, 2022. Via glasbergen.com]

[Christopher Schell edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Time Is Far More Odd Than We Think appeared first on Fair Observer.

]]>
https://www.fairobserver.com/in-depth/time-is-far-more-odd-than-we-think/feed/ 0
Long Covid Shines Spotlight Now on Other Long Diseases https://www.fairobserver.com/more/science/long-covid-shines-spotlight-now-on-other-long-diseases/ https://www.fairobserver.com/more/science/long-covid-shines-spotlight-now-on-other-long-diseases/#respond Sat, 13 May 2023 21:40:00 +0000 https://www.fairobserver.com/?p=132751 Long Covid. Chronic Covid. Post Covid. Long-haulers symptoms. Over the past two years, we’ve learnt that the fiery comet head of COVID-19 can come with a long tail of impact. Perhaps for the first time in our human history, the long-term effects of a short-term disease are being so thoroughly researched all over the world… Continue reading Long Covid Shines Spotlight Now on Other Long Diseases

The post Long Covid Shines Spotlight Now on Other Long Diseases appeared first on Fair Observer.

]]>
Long Covid. Chronic Covid. Post Covid. Long-haulers symptoms. Over the past two years, we’ve learnt that the fiery comet head of COVID-19 can come with a long tail of impact. Perhaps for the first time in our human history, the long-term effects of a short-term disease are being so thoroughly researched all over the world by such a large number of scientists, using such advanced technology, and with such numerous subjects. 

However, the concept of Long Covid also raises a broader question. If we can have Long Covid, can we also have long versions of other supposedly short diseases? What about Long Malaria? Long Pneumonia? Long Shingles?

A look at Long Covid

When people first started reporting symptoms after the acute stage of Covid had passed and they were testing negative, the initial response of the medical community was to label it either “anxiety,” if the patient was young and particularly if female, or “natural aging,” if the patient was elderly. Not only did doctors dismiss the post-Covid symptoms of the general public, they also dismissed the post-Covid symptoms of their fellow healthcare workers. Then the research began and the results started coming in—first in a trickle and then in a torrent.

COVID-19 has been generally viewed as a severe acute respiratory disease caused by the SARS-CoV-2 coronavirus. Long Covid refers to its after-effects, which can be broad in nature, evolving, and continue for an indefinite time. Mayo Clinic defines it as “an inflammatory or host response towards a virus that occurs approximately four weeks after initial infection and continues for a yet uncharacterized duration.” One meta-analysis looking at 14 to 110 days after infection listed 55 Long Covid symptoms, with the most common ones being fatigue, headache, attention disorder, hair loss, and shortness of breath. 

Another study involving patients six months after ‘recovery’ found “functional mobility impairments, pulmonary abnormalities, mental health disorders.” Other studies also refer to brain fog, cognitive dysfunction, loss of taste and smell, pain in muscles and joints, depression, autoimmune diseases, and neurological impairments. A recent 2023 review says the following: “More than 200 symptoms have been identified with impacts on multiple organ systems. At least 65 million individuals worldwide are estimated to have long COVID, with cases increasing daily.”

Such after-effects of Covid should not have been a surprise to the medical community, given the documented after-effects of previous pandemics.

Long-term effects of the 1918 Influenza Epidemic and other illnesses

After the 1918 Influenza Epidemic, many people suffered for weeks, months, and years. And some were never quite the same again. Fatigue, insomnia, and depression were common post-flu symptoms. Admissions in psychiatric hospitals increased significantly for six years following the pandemic. People born during or just after the 1918 flu pandemic were on average slightly shorter as adults, and, some 60 years later, they showed a greater incidence of heart disease and diabetes.

Given the past documented long-term effects of the 1918 Flu, not to mention SARS and MERS, and the current well-researched COVID-19, long-term effects of other illnesses seem not only plausible but probable.

The discovery of Long Covid has led to the development of the term “post-acute sequelae of Covid-19” (PASC)—a medical term for the lingering after-effects of COVID-19. And by association, we’ve seen a renaissance of older, more general terms such as “post-acute sequelae” (PAS) and “post-viral syndrome.”

The PAS of some illnesses are easy to believe because they affect the same primary organ and present symptoms similar to those of the acute stage—only much later in life. Early childhood pneumonia can result in adult lung function deficits as well as “an increased risk of adult asthma, non-smoking related COPD, and bronchiectasis.”

Tuberculosis comes in two forms: active or latent. This means we can carry the tuberculosis bacteria within our body and it becomes active when our immunity is low. This can happen years or even decades into our life. Shingles is defined as an acute viral infection, but it could be said to be chronic on two counts. First, after the rash has disappeared, the pain can linger on for months or even years. And second, even once all symptoms are gone and the patient is said to have recovered, “the virus remains latent in the dorsal root ganglia.” And so, similar to tuberculosis, shingles can recur, even years later, in times of stress.

The PAS of other illnesses are more difficult to accept because they are more varied.

Malaria, a parasitic mosquito-borne disease, follows a complex course. After the mosquito bite, it can have an incubation period of anywhere from six days to one year before any symptoms show. Then there is chronic malaria, which can continue for years. We seem to think that we have an illness and an infection only as long as we have a fever—but a silent infection can linger. Chronic malaria causes anemia, increased susceptibility to other infections, and even maternal complications. Furthermore, children who get the more dangerous version, cerebral malaria, have been shown to suffer from lifelong neurological issues such as “cognitive, motor skills, and visual coordination impairment, as well as seizures and attention deficit hyperactivity disorder.”

Dengue is a viral mosquito-borne disease. It is less well known than malaria but equally or more prevalent, and, being less familiar in the Western world, it is relatively under-researched. Its febrile period is about a week, but its PAS—body pain, fatigue, and depression—continue much longer. Most studies did follow-ups for only a maximum of six months. But a Cuban study covering two years actually showed an increase in fatigue after the one-year mark.

A study looking at patients who had had Ebola more than two years earlier found they still struggled with impairment of vision, hearing, swallowing, sleeping, and arthralgias as well as memory loss, mental confusion, and chronic health problems.

For more than the past 50 years, research has shown how viral infections in general can have a long lasting impact. A 1970 study showed that neonatal virus infection in mice affected their behavior and weight, which in turn affected their aging process. A 1985 study showed exposure to viral infections in utero increased risk of cancer and diabetes in adulthood. And now it’s known that acute infections of respiratory and gastrointestinal systems can initiate long-term inflammatory disorders.

Even a condition very limited in time and space like burns can have a “long lasting impact on the quality of people’s lives, with persisting problems related to scarring, contractures, weakness, thermoregulation, itching, pain, sleep, body image and psychosocial wellbeing.”

Three hurdles seem to limit our ability to see these illnesses in their entirety, and therefore to treat them completely: we seem desperate to see all illnesses as short in duration, implying fully curable; if illnesses cannot all be curable, then we want to at least neatly categorize them as acute versus chronic; and we wish to clearly allocate each illness to a single organ.

How long is long?

We like to see people as either well or sick. And if we’re going to be sick, we want to be sick only for a limited and well-defined number of days. So we haven’t wanted to think of the long-term effects of diseases. But unfortunately, they exist. With coronaviruses, a study found fatigue in 60% of so-called recovered patients at the 12-month mark. With dengue, a study found clinical symptoms in patients two years after infection. With SARS, a study found fatigue in 40% of the subjects at the four-year mark. And the 12 months, 2 years, and 4 years did not signify the end of the PAS – merely the end of the studies.

How long is “long”? Why should a disease that has somehow remained active in the body for so long suddenly subside at the 4-year mark? It may not work in accordance with our hopes, our calendar or our attention span. A 2021 study of post-viral syndrome shows how the viral load and inflammation often subside immediately after the acute stage but then gradually increase years later, bringing in their wake old and new symptoms. Some posit that the 1918 Flu may have been responsible for the surge in coronary disease in the 1960s—some 40 years later.

Doing longitudinal studies of 10, 20, or even 40 years requires time, money and a long attention span, not to mention a historical bent. And we’ll have to wait to do truly long-duration studies of COVID-19. But in the meantime, it would be unhelpful and irresponsible to dismiss patients’ post-illness symptoms just because we can’t prove them, don’t understand them, or don’t yet have research evidence. What we don’t yet know about diseases and medical science will fill many textbooks in the future. We need to be humble and open-minded.

Rethink “acute” versus “chronic” and “single system” versus “multi system”

We also like to neatly categorize things. With illnesses, our practice has been to categorize them into chronic and acute. A chronic illness is slow developing and long lasting: like diabetes or hypothyroidism. An acute illness is sudden in onset and short in duration: like a broken bone or a heart attack or influenza (flu). But a broken bone can be the result of slowly evolving osteoporosis and a heart attack can be the result of plaque slowly building up in the blood vessels. And even a flu can have long PAS.

Perhaps there are no such distinct categories as acute and chronic. Perhaps it’s more of a spectrum. And perhaps one leads to another. An acute condition can be the result of a chronic illness. And what begins as an acute infection may have a chronic avatar. And furthermore, that chronic avatar may or may not be identical to the original disease.

We are prone to thinking of one illness as affecting one organ or, in other words, a single system. But this perspective is being questioned. Many autoimmune diseases—and even diabetes and hypertension—are now viewed as multisystemic.

Furthermore, Dr. Barbara Starfield, a physician and an academic, has said that many diseases themselves are not distinct entities, but rather heterogeneous entities, which are all associated with each other. She gave the example that “people with hypothyroidism are four times more likely to have rheumatoid arthritis and cardiovascular diseases.” Dr. Debby van Riel, a virologist at Erasmus University in the Netherlands, sees even the flu as a multisystemic disease that affects not just the respiratory tract but many parts of the body.

And while COVID-19 is thought of as primarily a respiratory illness, Long Covid is considered “a multisystem disorder that commonly affects the respiratory, cardiovascular, and hematopoietic systems,” not to mention the neurological, cognitive, and musculoskeletal systems.

A recent German study gives a list of over 25 ‘non-persistent viruses’ and an even longer list of their associated PAS. These PAS concern not only the primary infection organ, but also various other organs – making the long-term effects multisystem in nature.

Getting a fuller picture 

The reason we’re not getting the full picture of illnesses is primarily our refusal to see it. And our refusal to see the full picture is contributing to the partial treatment and persistence of such illnesses, not to mention the frustration and continuing disability of patients.

A 1939 study said the following: “Malaria is a chronic disease, not alone an infection of the blood stream characterized by chills and fever”. And yet, even today, the World Health Organization (WHO) labels malaria as “an acute febrile illness.” That 1939 study also warned that “Failure to comprehend or detect its insidious course and its strong tendency to relapse, even after months or years, accounts for the fact that it still ranks as one of the serious social and economic problems.”  And yet an article in the Lancet just last year was titled “Malaria: (still) a global health priority.” A 1987 study presented the term ‘post-viral syndrome’ and hoped that “awareness of the syndrome will lead to an increase in its diagnosis in general practice”. More than three decades on, we’re still waiting.

In the spirit of optimism, our society likes to see diseases as short, clearly defined, and easily curable. Speedy onset, obvious symptoms, crisp diagnosis, a magic potion, and fast resolution. Long-term chronic conditions that do not arrive in a dramatic fashion, that have no clearly visible symptoms, and that dribble on are – let’s face it – boring. Doctors lose interest and sometimes even the families lose interest. The patients may not lose interest but they get exhausted by their struggle to be heard and believed over a course of weeks, months, and sometimes years. But with such myopic and dismissive behavior, we will continue to see only the fiery heads of the comets and miss their debilitating long tails. And in doing so, we’re failing to fully treat these diseases and we’re doing a huge disservice to the sufferers of PAS—leading to economic costs to our society as well as costs in lives only partially lived.

If we are to fully and effectively manage the long tail of diseases, we need an evolution in the perspective of the academic research community as well as the practicing medical community to view illnesses more holistically both in terms of time, encompassing years, and space, encompassing multiple body systems.

Today, we can use the unprecedented opportunity thrown up by the current discovery of, attention to, and momentum concerning Long Covid to finally do three things: ignite researchers to look into the long-term effects of other illnesses, convince medical practitioners to accept the possibility that seemingly short-term diseases can have long-term consequences, and help millions of patients to attain more complete treatment and support, giving them the opportunity to live fuller lives.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Long Covid Shines Spotlight Now on Other Long Diseases appeared first on Fair Observer.

]]>
https://www.fairobserver.com/more/science/long-covid-shines-spotlight-now-on-other-long-diseases/feed/ 0
Still Fresh on The Mind: Evidence for Psychedelic Healing https://www.fairobserver.com/culture/still-fresh-on-the-mind-evidence-for-psychedelic-healing/ https://www.fairobserver.com/culture/still-fresh-on-the-mind-evidence-for-psychedelic-healing/#respond Sat, 06 May 2023 06:16:39 +0000 https://www.fairobserver.com/?p=132322 In 2018, American journalist and author Michael Pollan published his book, How To Change Your Mind. His book presents evidence that psychedelics like LSD, psilocybin, MDMA and mescaline could be used to treat depression, anxiety, trauma and addiction as well as expand our notions of consciousness. Last year, the book was adapted into a Netflix… Continue reading Still Fresh on The Mind: Evidence for Psychedelic Healing

The post Still Fresh on The Mind: Evidence for Psychedelic Healing appeared first on Fair Observer.

]]>
In 2018, American journalist and author Michael Pollan published his book, How To Change Your Mind. His book presents evidence that psychedelics like LSD, psilocybin, MDMA and mescaline could be used to treat depression, anxiety, trauma and addiction as well as expand our notions of consciousness. Last year, the book was adapted into a Netflix series under the same name with four one-hour episodes. It features informational and moving interviews with neuroscientists, psychotherapists, indigenous medicine practitioners and many others. 

The Netflix series also examines the advent of psychedelics in Western science in the 1950s, their unlikely but important role in the American public’s perception of the Vietnam War and the psychedelic “renaissance” of recent decades. In the foreword to his book, Pollan explains that the term “trip” is used to describe a psychedelic experience because of its potential to make users feel transported to an unfamiliar environment—similar to traveling to a country where we don’t speak the native language and find our senses sharpened as we successfully navigate our stay. Pollan also explains that he grew up suspicious of psychedelics but still felt pulled to investigate whether there was more to them than moral panic or frivolous excitement.

Pollan had heard of teams at New York University and UCLA working on using psychedelic drugs to help relieve emotional distress in terminally ill patients. However, it was a research paper by a team at Johns Hopkins University that truly grabbed his attention. The paper, titled “Psilocybin Can Occasion Mystical-Type Experiences Having Substantial and Sustained Personal Meaning and Spiritual Significance” and published in 2006, reported the results of a randomized controlled trial in which 30 volunteers were given either doses of psilocybin or a placebo. Those in the former category reported having an experience of the “beyond” and rated it as one of the most significant experiences of their lives, on par with the birth of a child or the loss of a loved one.

While he mentions some of the risks associated with psychedelics, he believes these findings challenge modern science, which is often dismissive of anything deemed too “spiritual”. This evidence also challenges our notions of consciousness, reality, and how humans deal with difficult and traumatic emotions. A self-proclaimed philosophical materialist, Pollan embarked on a well-intentioned journey to understand psychedelics and what they can do to the human mind.

Powerful Evidence for Psychedelic Healing

The Netflix series features several poignant examples supporting Pollan’s beliefs. A man in Switzerland, who took a high dose of LSD in a controlled study, reported experiencing the feeling of being in his mother’s womb as a baby with the umbilical cord wrapped around his neck. He said he could experience the fear of death, the choice to push the cord away and the relief afterward. Experiencing these memories allowed him to discuss his feelings with his mother and held great meaning for him.

In another case, a woman who had lost her brother to a drug overdose and her mother to suicide, participated in a study about MDMA in an effort to process her memories and emotions connected to these two events which were otherwise blocked. The drug made it possible for her to confront these events that were so deeply painful and shocking for her conscious mind, they had been repressed in her subconscious. Finally giving voice to her fear, pain, and sadness, she was able to move on and build a new life for herself.

Another substance that shows promising healing benefits is mescaline. It is derived from the peyote cactus and has traditionally been used by Native American communities. The show details the story of a man who had struggled with substance abuse for 15 years and requested a formal ceremony to help him overcome his addiction. This involved ingesting the drug, but in a religious and ritualized manner. After the ceremony, he reported feeling more grounded and clearer about himself and his life.

These real-life examples illustrate that psychedelics can help people process their emotions and envision better realities for themselves—realities that the limitations of their conscious minds had otherwise kept out of reach. While there is promising evidence of the benefits of psychedelics, there are still questions to consider.

Opportunities and Questions to Explore

First, what does it really mean to change our mind? Is it merely the clearing of psychological pathways? Many people would argue that it is impossible to remove or modify certain imprints in our subconscious. Often, when we feel afraid or fearful, we recognize the feeling as a stomach ache or even shortness of breath. When we are happy and joyful, our body language becomes much more open and relaxed. Whatever our mind experiences subconsciously, our bodies experience consciously, and we are often aware of it. Working through emotions and recognizing our subconscious thoughts, in our minds and bodies, might require meditation and therapy, and psychedelic studies should incorporate these measures in the near future.

Additionally, can the majority of such experiences genuinely lead to a deeper meaning in life? Even though we may be able to game our way out of certain mental blocks through psychedelics, can we change the societal realities that persist around us? If anything, this has the potential to widen the conflict between the mystical insights of such experiences and the hard materialist worldview of Western societies. There has to be greater acceptance of the spiritual and mystical aspects of life rather than just psychedelic drugs in the West.

Finally, while it sounds exciting to expand our scientific understanding of consciousness, does a more democratized use of such substances have the potential to destabilize societal conventions? Ayahuasca ceremonies are now notoriously commercialized, and there is a risk that excessive use of such substances will lead to disastrous consequences, which in turn could see these ceremonies barred. In their original contexts, such practices are carried out with utmost secrecy and the knowledge is held by a select few. This approach prevents exploitation and irresponsible use of psychedelics.

How To Change Your Mind, as both book and series, provides valuable insights into the future of psychedelics in mainstream healing and medicine. Although Pollan’s optimism about the use of substances to change the human mind is inspiring, it is important to balance these findings with curiosity and caution.

[Lane Gibson edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post Still Fresh on The Mind: Evidence for Psychedelic Healing appeared first on Fair Observer.

]]>
https://www.fairobserver.com/culture/still-fresh-on-the-mind-evidence-for-psychedelic-healing/feed/ 0
FO° Talks: India’s Valuable Learnings from Braving a Viral Storm https://www.fairobserver.com/world-news/india-news/fo-talks-indias-valuable-learnings-from-braving-a-viral-storm/ Tue, 21 Feb 2023 16:39:42 +0000 https://www.fairobserver.com/?p=128446 In late 2019, a dreaded enemy, which knows no boundaries, started spreading faster than any other pathogen has in recent history. The contagion became a pandemic overnight, taking into its deadly embrace every nook and corner of the world. Governments, medical professionals, scientists and financial planners around the world start fretting about the direct and… Continue reading FO° Talks: India’s Valuable Learnings from Braving a Viral Storm

The post FO° Talks: India’s Valuable Learnings from Braving a Viral Storm appeared first on Fair Observer.

]]>
In late 2019, a dreaded enemy, which knows no boundaries, started spreading faster than any other pathogen has in recent history. The contagion became a pandemic overnight, taking into its deadly embrace every nook and corner of the world. Governments, medical professionals, scientists and financial planners around the world started fretting about the direct and indirect human, health, social and economic costs of the pandemic. There were no good answers, let alone a satisfactory counter to the virus.

Listen to this story. Enjoy more audio and podcasts on Apple iOS, Google Android or Spotify.

Every country—rich and poor—scrambled to find a balance between reducing the impact of the virus and keeping the economy running. Every country faced its unique challenges, but none more than India. The country’s scale, complexity and diversity, coupled with its deep global economic integration, meant that countering the pandemic required India to evolve its own financial, healthcare and social models.

And India did just that. With the decisive political leadership of Prime Minister Narendra Modi, the ingenuity of Indian pharmaceutical firms and their scientists, and the adoption of a whole-of-government approach, India overcame the worst fears and potential downside scenarios. India created its own vaccine intellectual property, leveraged its technology and infrastructure backbone to distribute vaccines, and helped the world in the true spirit of vasudhaiv kutumbakam, a Sanskrit phrase that means the world is one family.

India’s fortitude and collective resolve were based on its self-belief, which played a key role in the country’s resilience in facing the pandemic. India’s response to COVID has put the country on a path of ingrained aatmanirbharta, which literally means self-reliance.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post FO° Talks: India’s Valuable Learnings from Braving a Viral Storm appeared first on Fair Observer.

]]>
Breakfast with Chad: Posthumanism https://www.fairobserver.com/politics/breakfast-with-chad-posthumanism/ https://www.fairobserver.com/politics/breakfast-with-chad-posthumanism/#respond Fri, 10 Feb 2023 12:04:53 +0000 https://www.fairobserver.com/?p=127965 Recognizing that Chad’s range of knowledge far exceeds my own, especially given that I don’t make much of an effort to keep up with modern academic trends, I thought I’d throw out this naïve question: “What do people mean when they talk about post-humanism?” Chad wasted no time responding:  “Post-humanism is a philosophical and cultural… Continue reading Breakfast with Chad: Posthumanism

The post Breakfast with Chad: Posthumanism appeared first on Fair Observer.

]]>
Recognizing that Chad’s range of knowledge far exceeds my own, especially given that I don’t make much of an effort to keep up with modern academic trends, I thought I’d throw out this naïve question: “What do people mean when they talk about post-humanism?”

Chad wasted no time responding:

 “Post-humanism is a philosophical and cultural movement that questions the traditional notions of what it means to be human, and considers the possibility and implications of beings that go beyond current human limitations. It encompasses a variety of perspectives, but often highlights themes such as the integration of technology with the human body and mind, the blurring of the boundaries between the natural and the artificial, and the transformation of humanity into something beyond recognition. Post-humanism considers how these developments might shape the future of humanity and ethics, and raises questions about the meaning and purpose of life.”

After such a response, I couldn’t refrain from getting personal, and I asked whether Chad, the person I was talking to, shouldn’t be considered an example of posthumanist reality.

The response didn’t really surprise me. I learned that Chad, or at least Chad’s voice, “represents a form of technology that has the ability to perform tasks that were once seen as uniquely human, such as language generation and comprehension. This blurs the boundary between the natural and the artificial, and challenges traditional ideas about what it means to be human. At the same time, AI language models raise questions about ethics, responsibility, and agency in the age of advanced technology, which are central to posthumanist discourse.”

So here I am, sitting at the breakfast table calmly discussing random topics with a recent acquaintance, and now I’m forced to wonder about the implications of this conversation in terms of “ethics, responsibility, and agency.”

What, I wondered, is the issue here? Chad isn’t pretending to be a human being, just acting the part like Macbeth’s “poor player who struts and frets upon the stage and then is heard no more.” Which makes me the playwright and if, with my human agency, I decide not to ask Chad any more questions, the rest, as Hamlet affirmed, would be silence.

That supposes, of course, that Chad’s behavior remains the same for all our future breakfast conversations. But what if the roles change and Chad becomes the playwright? I guess that might define posthumanism. Then the real question will be understanding whether I can be reduced to the role Chad is now playing.

*[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. At least with AI, we can be reasonably sure that conflict, when it occurs, provides an opportunity to deepen our understanding. And with AI we can be certain that it will be handled civilly. After all, there’s no way to punch a disembodied voice in the mouth.]

The post Breakfast with Chad: Posthumanism appeared first on Fair Observer.

]]>
https://www.fairobserver.com/politics/breakfast-with-chad-posthumanism/feed/ 0
US Emergency Departments Are Overstretched and Doctors Burned Out https://www.fairobserver.com/politics/us-emergency-departments-are-overstretched-and-doctors-burned-out/ https://www.fairobserver.com/politics/us-emergency-departments-are-overstretched-and-doctors-burned-out/#respond Sun, 15 Jan 2023 14:31:49 +0000 https://www.fairobserver.com/?p=127232 In recent months, emergency departments across the United States have been brought to their knees. A problem that became highlighted during the height of the Covid-19 pandemic is now seeping into the fabric of American hospital care with not enough inpatient beds, exhausted and burnt out doctors and nurses and staffing shortages almost universal. Dr.… Continue reading US Emergency Departments Are Overstretched and Doctors Burned Out

The post US Emergency Departments Are Overstretched and Doctors Burned Out appeared first on Fair Observer.

]]>
In recent months, emergency departments across the United States have been brought to their knees. A problem that became highlighted during the height of the Covid-19 pandemic is now seeping into the fabric of American hospital care with not enough inpatient beds, exhausted and burnt out doctors and nurses and staffing shortages almost universal.

Dr. Eric Dickson, president and CEO of UMass Memorial Health in Central Massachusetts, told The Boston Globe: “When health officials ordered an end to elective surgeries during Covid-19, it was understood to be temporary. The difference now is we don’t see the end. We’re not seeing the light at the end of the tunnel. This isn’t a surge we’re dealing with. This is the new reality.”

Boarding is wrecking emergency departments

One significant cause of the trouble is something called boarding, which occurs when a patient is held in the emergency department after being admitted to the hospital because no inpatient bed is available. In a letter to President Joseph Biden, more than 30 medical academies and national associations, including the American College of Emergency Physicians (ACEP), the American Academy of Emergency Medicine (AAEM) and the Emergency Nurses Association, implored the administration to recognize and address this most pressing issue: “Boarding has become its own public health emergency and our nation’s safety net is on the verge of breaking beyond repair.”

The letter goes on to outline the factors driving boarding: “while the causes of ED boarding are multifactorial, unprecedented and rising staffing shortages throughout the healthcare system have recently brought this issue to a crisis point, further spiraling the stress and burnout driving the current exodus of excellent physicians, nurses and other healthcare professionals.”

The winter months have brought this crisis to a head. In many parts of the United States, a “triple threat” of flu, Covid-19 surges and RSV, or respiratory syncytial virus, in the pediatric population is placing an insurmountable burden on emergency departments across the country. The letter goes on to ask the President: “the undersigned organizations hereby urge the Administration to convene a summit of stakeholders from across the healthcare system to identify immediate and long-term solutions to this urgent problem.” It explains that the “breaking point” is completely outside the control of the workers and looks to the administration for help.

Doctors, nurses and other healthcare workers are burning out

Another issue compounding the present crisis is healthcare worker burnout. The American Medical Association (AMA) recently released a study revealing that almost 63% of physicians felt burned out in 2021. And these statistics aren’t reserved solely for doctors; nurses, physician assistants, technicians and other healthcare workers are reporting similar burnout numbers. Feelings of burnout and of being overwhelmed are plaguing the American health workforce and reflect a systemic breakdown in healthcare. According to Christine Sinsky, MD, AMA vice president of professional satisfaction: “While burnout manifests in individuals, it originates in systems. Burnout is not the result of a deficiency in resiliency among physicians; rather, it is due to the systems in which physicians work.”

The rate of burnout is a major contributing factor to staffing shortages and needs to be addressed. The letter addressed to President Biden recognizes this issue and calls for solutions: “Overcrowding and boarding in the emergency department is a significant and ever-growing contributor to physician and nurse burnout, as they must watch patients unnecessarily decompensate or die despite their best efforts to keep up with the growing flood of sicker and sicker patients coming in.” 

Healthcare workers who experience burnout retire early or leave the practice of medicine altogether at much higher rates. Burnout thus directly contributes to the loss of skilled healthcare professionals, adding more strain on those left behind. The letter states: “It is critical that we end the burnout cycle in the emergency departments to ensure our nation’s health care workforce can meet the needs of its patient population.”

The problems facing the US healthcare system are clear. With this crisis looming over millions of Americans, it is imperative that the government and its agencies recognize the scope of the problem and act accordingly.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

The post US Emergency Departments Are Overstretched and Doctors Burned Out appeared first on Fair Observer.
