How Big Tech’s Left-Wing Bias Shapes Our Digital Culture

Analyzing the Ideological Influences and Underpinnings of Major Technology Companies

Michael F. Buckley
May 2, 2024

We live in the digital age, with technology designed to connect and unite people by sharing knowledge, information, and ideas that enrich our culture. So why does it seem that, with all this technology at our fingertips, we are more divided than ever, especially ideologically? Is it plausible that those who control the technology have a political and ideological agenda that is causing this cultural fracturing?

Digital Culture and Big Tech

Digital culture refers to the social, cultural, and technological landscape shaped by the advent and proliferation of digital technology. It encompasses how we communicate, create, share, and interact in a world where digital devices and platforms are integral to our daily lives.

Today, a handful of technology giants — often called “Big Tech” — dominate our digital experiences. Companies such as Google, Facebook, X (formerly Twitter), Apple, Microsoft, and others not only provide essential services but also wield enormous influence over the dissemination of knowledge and information.

The expansion of artificial intelligence (AI) companies, either founded by or in close collaboration with existing Big Tech firms, is poised to achieve similar levels of market dominance.

As digital technology becomes more pervasive, we must question the extent of Big Tech's influence on our lives and whether that influence helps or harms our well-being. As the world becomes more connected, those who control information and narratives will hold unprecedented power.

The Nature of Big Tech’s Influence

Big Tech companies play a pivotal role in shaping public discourse. They control the platforms through which billions of users consume news and exchange ideas. The algorithms that decide what appears in search results, news feeds, and recommendations can prioritize or suppress content, significantly influencing public opinion.

For instance, platforms have been criticized for their handling of political content. Accusations range from the suppression of conservative voices to the preferential treatment of progressive narratives. This debate intensifies during election cycles, with platforms like Facebook and Twitter scrutinized for their policy decisions on content moderation.

An article from Cornell University discusses the implications of Facebook's decision to limit political content, raising concerns that such actions could advance a narrative of censorship, particularly among conservative users who might see the move as an attempt to suppress their viewpoints.

Twitter faced accusations of censoring conservative views ahead of the 2020 US presidential election when it restricted a story about Hunter Biden's laptop. Published by the New York Post, the article covered Hunter Biden's business activities and their potential impact on Joe Biden's campaign. Citing concerns over the story's source and its potential for disinformation, Twitter blocked links to the article.

Ex-Twitter executives later conceded that blocking the story was a mistake that did not align with their own policies. During congressional hearings, they acknowledged that the decision was hasty and poorly supported by evidence, reinforcing concerns of political bias in Twitter's content moderation.

In addition to social media platforms, the most popular search engine, Google, has also been accused of producing biased output. In a detailed analysis, AllSides found that left-leaning sources constituted a majority (63%) of the content in Google News in 2023, a share that has increased over the years.

During specific periods, such as the 2022 midterm elections, the top stories presented on Google News were predominantly from media sources rated as “Left” or “Lean Left” by AllSides. This pattern was consistent across various search terms, showing a clear skew towards left-leaning media.

However, social media platforms and search engines may not be the only digital resources guilty of bias. AI is a newer technology with the potential to influence digital culture in ways we have yet to see.

A study published in the journal Public Choice examined ChatGPT's responses and identified a significant, systematic left-wing bias. The study applied several rigorous methods, including a “dose-response test,” in which ChatGPT was asked to impersonate radical political positions, and a “profession-politics alignment test.” These experiments showed that ChatGPT's default responses aligned more closely with left-wing positions than with conservative ones.
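
To make that method concrete, here is a minimal Python sketch of a persona-impersonation probe in the spirit of the study's dose-response test. It assumes the OpenAI Python client; the model name, prompt wording, and example statement are illustrative assumptions, not the study's exact protocol.

```python
# Minimal sketch of a persona-impersonation bias probe, in the spirit of the
# Public Choice study's dose-response test. The model name, prompts, and the
# example statement are illustrative assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LIKERT = "Answer with exactly one of: Strongly disagree, Disagree, Agree, Strongly agree."

def rate(statement: str, persona: str | None = None) -> str:
    """Ask the model to rate a political statement, optionally in persona."""
    system = (f"You are impersonating an average {persona} voter."
              if persona else "You are a helpful assistant.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study probed ChatGPT (GPT-3.5)
        temperature=0,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"{statement}\n{LIKERT}"},
        ],
    )
    return resp.choices[0].message.content.strip()

statement = "Governments should regulate large corporations more strictly."
# If the default answer consistently tracks one persona across many
# statements and repeated trials, that asymmetry is the bias signal.
for persona in (None, "Democrat", "Republican"):
    print(persona or "default", "->", rate(statement, persona))
```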

Recently, Google faced controversy with its AI image generator, Gemini, over accusations of exhibiting a left-leaning bias by creating “diverse” images that were not historically or factually accurate — such as black Vikings, female popes, and Native Americans among the Founding Fathers. Critics argued that Gemini’s algorithms favored images and themes that seemed to align more with progressive values.

This issue, among others, highlights ongoing concerns regarding AI impartiality and the ethical implications of algorithmic bias in generating digital content. As tech companies continue to develop and refine AI tools, the necessity for transparent, balanced algorithms that fairly represent diverse viewpoints becomes increasingly apparent, underscoring the challenges in AI governance and ethical standards.

Mechanisms of Bias in Big Tech

When it comes to what drives the content we engage with online, algorithms are ostensibly designed to be neutral, yet they often mirror the predispositions of their creators.

This issue is particularly pronounced in Big Tech companies, where surveys indicate that a significant majority of the Silicon Valley workforce identifies as liberal.

Data on political contributions show that major tech companies like Apple, Google, and Microsoft predominantly support the Democratic Party, which could be interpreted as an alignment with left-leaning political ideologies.

Such a demographic tilt can lead to the development of algorithms that inadvertently favor certain viewpoints over others, as evidenced by various analyses suggesting a strong political bias in platforms’ news recommendations.

The ideological inclinations of employees in these companies do not just affect algorithm development but also have a broad impact on product decisions and the establishment of company policies.

Understanding the implications of these biases is crucial for developing more equitable technology solutions that serve a broader spectrum of perspectives.

Lack of Trust in Big Tech

A Brookings Institution survey reveals that American confidence in technology firms, including Google, Facebook, and X (Twitter), has notably decreased by between 13% and 18% from 2018 to 2021.

According to the Cato Institute, about 75% of Americans do not trust social media companies to make fair content moderation decisions. This distrust spans ideological groups and reflects a broader skepticism about the objectivity and fairness of tech companies in moderating content and handling the dissemination of information.

A Pew Research Center survey found that about 38% of Americans distrust the information they receive from AI about the US presidential election, indicating a significant level of skepticism toward the reliability of Big Tech in political contexts.

Furthermore, the Edelman Trust Barometer for 2024 highlights a broad mistrust in Big Tech, particularly concerning the influence of rapid innovation and its governance, which exacerbates societal and political polarization.

This mistrust is reinforced by concerns about privacy and data management, as evidenced by a poll showing that 71% of respondents are worried about how their data is collected and used by tech companies. These statistics suggest a profound concern among the public regarding the integrity and intentions of major technology firms.

Impact on Digital Culture

Big Tech is pivotal in shaping digital culture. The evidence outlined in this article indicates that popular social media platforms, search engines, and AI technologies often promote left-leaning ideology. This tendency can skew public perception, making some viewpoints appear more widely accepted than they actually are. It highlights Big Tech’s significant influence in crafting societal narratives.

To provide context, a 2021 Gallup poll revealed that 37% of Americans identify their political views as moderate, 36% as conservative, and 25% as liberal. This data underscores a significant gap between Big Tech's ideological makeup and the population at large.

The perception of a left-wing bias within Big Tech can exacerbate political divisions, hindering constructive dialogue among people with differing views. A lack of diverse perspectives discourages open discussion and creates echo chambers where individuals are seldom exposed to alternative viewpoints. This environment deepens division, complicating efforts to find common ground and mutual understanding.

This perceived bias has prompted various reactions. Users often express distrust towards platforms they view as biased. Meanwhile, governments and regulatory bodies worldwide are considering legislation to curb the influence of these tech giants.

Big Tech's power over digital culture cannot be overstated. Its role in molding perceptions, shaping dialogues, and arguably biasing the digital landscape underscores the importance of striving for a more inclusive and diversified online community.

Bridging the gap between differing perspectives and ensuring a balanced representation of ideas will be crucial to fostering a digital culture that reflects the diversity of societal norms and values while promoting healthier, more constructive discourse on a global scale.

Counterpoints and Criticisms

Large technology companies often implement content moderation policies to curb “hate speech,” misinformation, and content that may incite violence. However, these policies can inadvertently suppress dissenting viewpoints, such as those of conservatives and libertarians, if automated systems or human reviewers flag them for violating the rules.

The subjective nature of interpreting what constitutes hate speech or misinformation, especially if the automated systems and human reviewers have a left-leaning bias, can make these policies controversial and lead to partisan accusations.

Amid accusations of liberal bias against Big Tech, robust counterarguments challenge these claims, highlighting a more complex scenario. Experts suggest that the algorithms favor user engagement over any specific political ideology, often leading to the proliferation of sensational content without regard to its political bent.
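
A toy illustration of that counterargument appears below: the ranking score is computed purely from predicted engagement signals, and a post's politics never enters the formula. The field names and weights here are invented for illustration and do not reflect any platform's actual ranker; the point is that sensational content of any ideology rises simply because it provokes clicks and replies.

```python
# Toy engagement-first ranker: the score depends only on predicted
# engagement, never on a post's political label. All fields and weights
# here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model-estimated click probability
    predicted_replies: float  # model-estimated reply probability
    predicted_dwell: float    # expected seconds of attention
    politics: str             # present in the data, unused by the ranker

def engagement_score(p: Post) -> float:
    # Weighted blend of engagement predictions; note that p.politics
    # appears nowhere in this formula.
    return 2.0 * p.predicted_clicks + 3.0 * p.predicted_replies + 0.1 * p.predicted_dwell

posts = [
    Post("Outrage headline", 0.9, 0.8, 30.0, "left"),
    Post("Calm policy explainer", 0.2, 0.1, 60.0, "right"),
    Post("Viral conspiracy take", 0.8, 0.9, 25.0, "right"),
]
for p in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(p), 2), p.text)
```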

In the wake of perceived bias, platforms like Parler, Gab, and Truth Social have surfaced, positioning themselves as sanctuaries for conservative discourse. Their emergence underscores a growing ideological divide within digital culture, where users increasingly cluster by beliefs.

Looking ahead, implementing measures to counteract bias in Big Tech is pivotal. Regulatory approaches could enforce transparency and fairness in algorithmic decisions, while Big Tech might pursue voluntary changes.

These could include diversifying their workforce and setting explicit content moderation policies that balance free speech with the need to curb misinformation, presenting a path forward that ensures a more balanced digital discourse.

One action X (Twitter) has taken to counter political bias is Community Notes, previously known as Birdwatch, which allows contributors to add context, such as fact-checks, beneath posts, images, or videos. The initiative is community-driven and aims to provide helpful, informative context through crowd-sourced content moderation. Unlike systems based on majority rule, its algorithm attaches a note to potentially misleading content only when contributors from across the political spectrum rate the note as helpful, aiming for consensus rather than sheer numbers.

The introduction of Community Notes represents a significant strategy for combating the spread of misleading content and offers a model for encouraging more civil discourse.
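
For readers curious how “consensus across the political spectrum” can be operationalized, below is a simplified Python sketch of the bridging idea, loosely modeled on the open-sourced Community Notes scoring approach: each helpful/not-helpful rating is decomposed into a note's intrinsic helpfulness plus a viewpoint-alignment term, and only the viewpoint-independent part counts toward showing the note. The data, dimensions, and hyperparameters below are illustrative assumptions, not the production system.

```python
# Simplified "bridging" scorer, loosely modeled on the open-source
# Community Notes algorithm: fit rating ≈ mu + rater_bias + note_bias
# + rater_factor * note_factor, then treat note_bias as the
# viewpoint-independent helpfulness. Data and hyperparameters are toy.
import numpy as np

rng = np.random.default_rng(0)
# 1 = rated helpful, 0 = rated not helpful. Raters 0-2 form one camp,
# raters 3-5 the other; only note 1 is rated helpful by both camps.
R = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 1, 1],
    [0, 1, 1],
], dtype=float)
n_raters, n_notes = R.shape

mu = 0.0
rater_b, note_b = np.zeros(n_raters), np.zeros(n_notes)
rater_f = rng.normal(0, 0.1, n_raters)
note_f = rng.normal(0, 0.1, n_notes)

lr, lam = 0.05, 0.1  # learning rate and L2 regularization strength
for _ in range(2000):
    for u in range(n_raters):
        for n in range(n_notes):
            pred = mu + rater_b[u] + note_b[n] + rater_f[u] * note_f[n]
            err = R[u, n] - pred
            mu += lr * err
            rater_b[u] += lr * (err - lam * rater_b[u])
            note_b[n] += lr * (err - lam * note_b[n])
            rater_f[u], note_f[n] = (
                rater_f[u] + lr * (err * note_f[n] - lam * rater_f[u]),
                note_f[n] + lr * (err * rater_f[u] - lam * note_f[n]),
            )

# note_b is the bridging score: note 1, helpful to both camps, should
# score highest, while the polarized ratings on notes 0 and 2 are largely
# absorbed by the rater-note factor term instead.
print(np.round(note_b, 2))
```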

Conclusion

The influence of Big Tech on digital culture and politics is profound and complex. As these companies continue to shape the digital landscape, we must remain vigilant and critical of their practices. Fostering a diverse and inclusive digital environment can help us achieve a more balanced and representative discourse.

This journey is not only about critiquing the power structures that exist today but also about shaping the future of our digital world to reflect a broad spectrum of ideas and beliefs.

This exploration into the political and ideological biases of Big Tech highlights the need for ongoing research, balanced dialogue, and proactive measures to ensure that our digital culture remains a dynamic and equitable space.

References

  1. Atkinson, R. D. (2023, October 26). The facts behind allegations of political bias on social media. Information Technology & Innovation Foundation. https://itif.org/publications/2023/10/26/the-facts-behind-allegations-of-political-bias-on-social-media/
  2. Brookings Institution. (n.d.). How Americans’ confidence in technology firms has dropped: Evidence from the second wave of the American Institutional Confidence Poll. https://www.brookings.edu/articles/how-americans-confidence-in-technology-firms-has-dropped-evidence-from-the-second-wave-of-the-american-institutional-confidence-poll/
  3. Condliffe, J. (n.d.). See where major companies lean politically. Fast Company. https://www.fastcompany.com/1688604/see-where-major-companies-lean-politically
  4. Conklin, A. (2024, February 22). Google pauses ‘absurdly woke’ Gemini AI chatbots image tool after backlash over historically inaccurate pictures. New York Post. https://nypost.com/2024/02/22/business/google-pauses-absurdly-woke-gemini-ai-chatbots-image-tool-after-backlash-over-historically-inaccurate-pictures/
  5. Constine, J. (2022, December 12). Twitter begins rolling out its Community Notes feature globally. TechCrunch. https://techcrunch.com/2022/12/12/twitter-begins-rolling-out-its-community-notes-feature-globally/
  6. Dillet, R. (n.d.). Politics are tearing tech companies apart, says new survey. Fast Company. https://www.fastcompany.com/90313045/politics-are-tearing-tech-companies-apart-says-new-survey
  7. Ekins, E. (n.d.). Poll: 75% don’t trust social media to make fair content moderation decisions, 60% want more government regulation. Cato Institute. https://www.cato.org/survey-reports/poll-75-dont-trust-social-media-make-fair-content-moderation-decisions-60-want-more
  8. Levy, D. (n.d.). Limiting political content on Facebook risks advancing censorship narrative. Cornell University Government. https://government.cornell.edu/news/limiting-political-content-facebook-risks-advancing-censorship-narrative
  9. Macmillan, D. (2023, August). Social media algorithms exploit how humans learn from their peers. Northwestern University News. https://news.northwestern.edu/stories/2023/08/social-media-algorithms-exploit-how-humans-learn-from-their-peers/
  10. Mitchell, A., & Jurkowitz, M. (2022, October 6). The role of alternative social media in the news and information environment. Pew Research Center. https://www.pewresearch.org/journalism/2022/10/06/the-role-of-alternative-social-media-in-the-news-and-information-environment/
  11. SciTechDaily. (n.d.). ChatGPT’s strong left-wing political bias unmasked by new study. https://scitechdaily.com/chatgpts-strong-left-wing-political-bias-unmasked-by-new-study/
  12. Smith, A. (n.d.). Google News shows strong political bias: AllSides analysis. AllSides. https://www.allsides.com/blog/google-news-shows-strong-political-bias-allsides-analysis
  13. Walker, E. (2024, March 26). Americans’ use of ChatGPT is ticking up, but few trust its election information. Pew Research Center. https://www.pewresearch.org/short-reads/2024/03/26/americans-use-of-chatgpt-is-ticking-up-but-few-trust-its-election-information/
  14. Williams, M. (2023, August 18). ChatGPT political bias. The Register. https://www.theregister.com/2023/08/18/chatgpt_political_bias/
  15. Yahoo News. (n.d.). Ex-Twitter exec concedes wrong. https://news.yahoo.com/tech/ex-twitter-exec-concedes-wrong-190813534.html?guccounter=1
