Online algorithms, biases and incorrect information
Digital algorithms are amazing! All computer programs rely on algorithms. These invisible sets of instructions work behind the scenes in websites, social media platforms, apps and many other digital spaces.
Algorithms are instructions designed to solve specific problems, while computer programs implement those algorithms in a form that a computer can carry out.
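For example, a simple algorithm – 'look at each number in a list and remember the biggest one seen so far' – might be written as a short Python program like this minimal sketch:

```python
def find_largest(numbers):
    # Algorithm: look at each number and remember the biggest seen so far.
    largest = numbers[0]           # start with the first number
    for n in numbers[1:]:          # step through the rest
        if n > largest:            # found a bigger one?
            largest = n            # remember it
    return largest

print(find_largest([3, 41, 7, 19]))  # prints 41
```

The algorithm is the idea (compare each number with the biggest seen so far), while the program is a concrete, runnable expression of that idea.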

Image: 'Amazing algorithms' – an article in the 2018 Level 2 Connected journal published by the Ministry of Education, New Zealand. Illustration by Beck Wheeler.
Algorithms have many uses, from helping search engines serve up hundreds of links in seconds to helping scientists sort and analyse data. Many algorithms are enhanced with artificial intelligence (AI) to improve the performance of the app or website you are using.
Algorithms and AI
Search engines and social media platforms use AI-enhanced algorithms to analyse our habits and interests. Everything you do online is collected as data points and used to build a unique personal profile. This includes:
reacting to social media posts
commenting or arguing in comment sections or on social media
the videos or articles you choose to open
how much of a video you watch
links you scroll right past, videos you stop watching halfway through and posts you ignore!
Some of this data can be very personal – like social media posts intended for only a few close friends. These data points are used to refine the algorithm to serve up materials and search results that are especially tailored to our likes and dislikes.
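As a much-simplified sketch of this idea (real platforms collect far more data and use far more complex models), engagement events might be tallied into an interest profile something like this:

```python
from collections import Counter

# Hypothetical engagement events logged as a user browses.
events = [
    {"action": "like",  "topic": "cats"},
    {"action": "watch", "topic": "cats"},
    {"action": "like",  "topic": "baking"},
    {"action": "skip",  "topic": "politics"},
]

# Simple scoring: positive engagement raises interest, skipping lowers it.
weights = {"like": 2, "watch": 1, "skip": -1}

profile = Counter()
for event in events:
    profile[event["topic"]] += weights[event["action"]]

print(profile.most_common())  # [('cats', 3), ('baking', 2), ('politics', -1)]
```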
Some algorithms go as far as personality typing users – and they're very effective. One study showed that just 10 likes on Facebook enabled an algorithm to judge a person's personality as accurately as a co-worker could, while 150 likes let it match the accuracy of a family member!
Further, when algorithms determine what we see and don’t see, this can manipulate our feelings and influence how we think.

User behaviour analytics
The user behaviour analytics (UBA) process involves social media sites monitoring and collecting data on how users interact with the platform. This can include the users’ actions, clicks and engagement levels. It enables the platform to personalise content for the user.
If you engage and like a lot of posts about cats, you’ll see a lot more in your personalised feed.
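Building on the interest profile sketched earlier, a feed might then be personalised by ranking candidate posts against that profile – again, a toy illustration only:

```python
# A hypothetical interest profile built from past engagement (see earlier sketch).
profile = {"cats": 3, "baking": 2, "politics": -1}

candidate_posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "cats"},
    {"id": 3, "topic": "baking"},
]

# Posts about topics the user engages with most float to the top of the feed.
feed = sorted(candidate_posts,
              key=lambda post: profile.get(post["topic"], 0),
              reverse=True)
print([post["topic"] for post in feed])  # ['cats', 'baking', 'politics']
```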
Algorithms are not inherently good or bad. They don't think and feel. They're coded instructions created by humans – and therein lies the issue. People can intentionally or unintentionally create algorithms that cause problems or harm.
Bias and algorithms
As humans, we all have different beliefs and values and different ways of perceiving the world around us, which means we hold conscious or unconscious biases.
Much of the evidence shows that people suffer from a form of cognitive bias known as confirmation bias – seeking out information that confirms their views rather than looking for ways to disconfirm them. Doing the latter is undoubtedly hard but absolutely critical. Nobody wants to be wrong, but trying to prove yourself wrong makes you a better thinker.
There are many different types of bias with a variety of classification systems for them:
Confirmation bias – looking for information that supports our beliefs while rejecting information that doesn’t.
Stereotyping – making assumptions or guessing the reason behind events or the way people behave using stereotypes.
Normalcy bias – thinking the situation we are currently in will always be the same. An example is thinking the climate crisis is overstated and things will be OK.
Motivated reasoning – believing arguments that favour our thinking are stronger than arguments that conclude the opposite to what we think. This is partially because arguments seem more plausible when they align with our existing beliefs and ideas.
Anchoring bias – when we rely heavily on one trait or piece of information, often the first we learned or viewed on a subject, when making a decision. This is also known as first impression bias.
An algorithm can therefore be imbued with the biases of the person or people who created it. These biases can be minor, or they can be harmful. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, used by the criminal justice system in some areas of the United States, was found to discriminate against certain groups of people – it overestimated the likelihood that black offenders would reoffend and underestimated the likelihood that white offenders would continue to commit crime.
Our own biases can also affect the information that algorithms sort for us. Search engine algorithms track our every click and scroll and use that information to hone the results towards what the algorithm perceives we are looking for, ranking them accordingly. Our biases also affect how we word a search, and this again alters the results to reflect those biases.
For example, if you wanted to get a reasonably fair and unbiased search result, which of the following statements do you think would best deliver that?
Is social media destroying democracy?
Is social media helping democracy?
How does social media impact on democracy?

Image: Biases and ChatGPT – there are concerns and ethical issues around biases that may be built into LLMs like ChatGPT. Text generated by ChatGPT.
Human bias is also an ethical concern for large language models (LLMs) like ChatGPT. The people who choose the data used to build these models also determine the biases that will appear in their outputs.
There’s no such thing as a free lunch!
Why are we being tracked and profiled by algorithms online? In many cases, it’s simply because our attention equates to money.
Have you heard the saying 'there's no such thing as a free lunch'? It means it's hard to get something for nothing. Access to websites and social media may feel free, but the reality is that those sites are tracking us to build valuable personal digital profiles. The platforms then use this data to sell advertising that can be better targeted at individuals.
Our data is also used to keep us online. If you hear someone say they're addicted to social media, that's because social media platforms are addictive – they're configured to keep you online, endlessly scrolling. The longer you're online, the more opportunities there are to target you with advertisements, and the higher the prices the platform can charge those wanting to advertise products, services and ideas.
Our attention is a valuable resource in the digital age. The concept even has a name – the attention economy!
Targeted advertising is not inherently bad, but it can be used to manipulate users. For example, many experts are concerned about the use of algorithms and AI to politically persuade people, such as in the Facebook/Cambridge Analytica scandal.
Algorithms can also increase the spread of false information and hateful content, and they can be manipulated to deepen political polarisation. An example was the rampant spread of online misinformation and disinformation about the ‘stolen election’ in America in 2020 that contributed to an attack on the US Capitol on 6 January 2021.
Bubbles, echo chambers and positive feedback loops
Social media platforms design algorithms to boost user engagement by showing us content that encourages us to keep interacting. Emotionally charged content that aligns with users’ existing beliefs tends to perform better than factual information. This fuels a cycle where people are exposed to more of the same.
When people no longer seek out or are exposed to different perspectives, ideas and knowledge systems, the divides between opposing political or ideological groups can become entrenched. A bubble is when we unintentionally become part of a like-minded group, while an echo chamber is more intentional.
In a bubble, we only follow those with similar views, and we block any opposing perspectives. Algorithms then reinforce this. Bubbles can create a safe space for users, but they can also stop people being exposed to a diversity of ideas and perspectives. People can be unaware they’re in a bubble.
When we engage on social media by reacting or joining others in reinforcing or deriding posts, we can become part of an echo chamber. An echo chamber is an environment where ideas, beliefs or opinions are amplified and reinforced by communication and repetition.
This exposure to similar content becomes a positive feedback loop. On social media platforms, when a user posts something that gets a lot of likes, shares or comments, the algorithm notices this high engagement and shows that post to more people. As more people see it and interact with it, the post gains even more visibility, generating even more interactions, which in turn makes the algorithm push it further. This cycle continues, leading to increased visibility and engagement for that post or content.
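A toy sketch of this loop (real ranking systems use many more signals): each round, a fixed fraction of viewers engage with a post, and the algorithm rewards that engagement with extra reach, so views snowball:

```python
# Toy model of an engagement-driven positive feedback loop.
views = 100               # initial audience for a new post
engagement_rate = 0.05    # 5% of viewers like, share or comment
boost_per_engagement = 8  # extra viewers the algorithm adds per engagement

for round_number in range(1, 6):
    engagements = int(views * engagement_rate)
    views += engagements * boost_per_engagement   # engagement buys more reach...
    print(f"Round {round_number}: {engagements} engagements, {views} total views")
# ...and more reach produces more engagement, so growth accelerates each round.
```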
Positive feedback loops can be both beneficial and problematic:
They can be beneficial by helping to highlight popular content or ideas, amplifying voices or causes that resonate with many people.
They can be problematic by amplifying polarising or sensational content, creating echo chambers and reinforcing existing biases or misinformation.
Be aware of algorithms!
Algorithms are useful, but if we value objective and accurate information and being able to think for ourselves, we need to be conscious of this invisible coding at work when we’re online.
This means being vigilant and questioning what we see and share. Understanding that these algorithms are constantly at play is an important part of countering false information.
Learn more in Recognising false information online and Misinformation, disinformation and bad science.
Nature of science, technology and social sciences
Algorithms and artificial intelligence are examples of scientific and technological innovations. Social sciences aid understanding of the societal impacts of these innovative technologies and can help guide their responsible use.
Supporting resources
The following resources provide additional information and learning around countering false information online:
Countering false information – article
Countering false information – key terms – article
Recognising false information online – article
Common logical fallacies – interactive
Examples of bad science and countering false information – download
Activities
In this activity, students are presented with statements containing logical fallacies. Through discussion or discovery, they work through the statements, identify specific vocabulary or characteristics and match the statement with a common logical fallacy technique.
In Manipulation tactics – create an inoculation campaign, students watch videos and use a template to analyse the inoculation messages the videos explain. Students then use the template to plan and create their own inoculation campaigns.
Related content
Use the Connected article Amazing algorithms to learn more about algorithms.
Algorithms are useful. Scientists use algorithms to help interrogate and analyse data. Dr Adele Williamson codes algorithms to identify unique protein sequences in her research looking for novel enzymes in extremophiles.
Aquatic remote sensing scientist Dr Moritz Lehmann works with an international community of scientists. They’re building an algorithm that will remove atmospheric scattering from satellite images of lakes. Watch Calibrating and validating satellite data.
In Do our biases affect what we protect?, a scientist researched bias and conservation decisions. And in Interpreting microscope data, a microscopist talks about considering your biases when collecting sample materials.
Explore ethical concerns around bias and large language models in ChatGPT – generating text and ethical concerns.
The Ethics thinking toolkit provides a structured framework for scaffolding student thinking about an ethical issue.
The Futures thinking toolkit can be customised to explore how changes in technology may impact our lives and the lives of future generations.
Useful links
The University of Auckland has a suite of easy-to-understand resources on bias. They include What is an unconscious bias?, Common decision-making biases, with information on how to overcome these biases, and Overcoming unconscious bias and implicit associations.
The newspaper article Why people fall for pseudoscience (and how academics can fight back) has some useful information on biases.
The News Literacy Project has an in-depth interactive lesson Introduction to algorithms on its Checkology site – a free e-learning platform with lessons on subjects like news media bias, misinformation, conspiratorial thinking and more.
Acknowledgement
This resource has been developed with the help of The Workshop, who are experts in framing – the conscious and unconscious choices people make about how to present an issue. They conduct research and draw on data and insights from various disciplines, including psychology, linguistics and oral storytelling. Their work on false information draws specifically on the work of Dr Jess Berentson-Shaw from her book A matter of fact: Talking truth in a post-truth world.
The Workshop shares their work under a Creative Commons Attribution Non-Commercial Share Alike International Licence, encouraging people to pick up and use it for non-commercial purposes.
