Ads, recommendations and headlines — not just clickbait

Should businesses take ethics into account when making online recommendations? (Photo © Sayuri Moodliar)

Take one worldwide pandemic, add social distancing and economic lockdown with restrictions on schools, restaurants, nightclubs and movies … and it seems inevitable that online activity and e-commerce transactions will increase drastically.

Sounds like the perfect time for businesses to take advantage of people’s boredom, insecurities, fears, or addictions … Right? Wrong!

At a time when the world is being rocked by economic uncertainty and a fight for physical and emotional survival, it is not appropriate to be encouraging unbridled spending, driving addictive consumer behaviour, or reinforcing stereotypes based on biases and prejudices.

Ethical content … whose responsibility is it?

Behaving in an ethical manner is part of every business’s social license to operate. Social license refers to stakeholders’ acceptance of a business’s practices and operating procedures. It is created and maintained over time, by building trust between the business and the community in which it operates.

Service providers and developers cannot avoid taking responsibility for the consequences of recommendations and advertisements on vulnerable consumers.

You may have heard of the talem qualem rule in law (also known as the eggshell rule), under which a person is liable for damage or harm to his or her victim even if that victim is more vulnerable than the general public.

Even without such a rule, there is the ethical issue that online consumers are vulnerable or at a disadvantage in comparison to service providers — especially those organisations that have access to large amounts of data about their users’ behaviour and preferences.

Online and mobile users include children, people with addictions, those who are prone to depression or anxiety, financially vulnerable consumers, users who do not understand how algorithms work, and people who make decisions based on emotions … in other words, everyone who is human.

The awareness that decisions are made based on emotions rather than rationality gives businesses and developers even more reason to ensure that online systems are not deliberately designed to take advantage of people’s vulnerabilities.

Statistics, ‘post-truth’ and outright lies … who can we trust?

In the digital and internet era, novel words and concepts enter our vocabulary almost every day, and are often a reflection of the direction in which society is moving. Dictionary publishers award annual accolades to new words that are accepted into the English language.

‘Every year, we debate candidates for word of the year and choose a winner that is judged to reflect the ethos, mood, or preoccupations of that particular year and to have lasting potential as a word of cultural significance.’ — Oxford Languages

In 2016, the Oxford Word of the Year was ‘post-truth’. The word was defined as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’. The assertion that we live in a post-truth society rests on the observation that truth itself has become irrelevant to decision-making in the realms of politics, media coverage and public influence.

Is this true in consumer decision-making too? It is concerning that a consumer is more likely to purchase an item based on a recommendation than to take any steps to verify the ratings or statistics backing up that recommendation.

And yet this has been identified as one of the most critical ethical issues with regard to online reviews and recommendations — their authenticity is not guaranteed and usually cannot be verified. The consumer therefore relies on the integrity of the service provider and its advertisers.

Clickbait and fake news — who is the gatekeeper?

It has become commonplace when browsing the internet to see controversial or inflammatory headlines that are designed to elicit curiosity and get readers to click on hyperlinks.

Recent studies of legitimate online news sites have found that about half of them generate clicks through provocative and sensationalist headlines, and not through quality content. Even more concerning is the proliferation of fake news and hoaxes spread through social media and the internet.

Over the last decade, there has been an ongoing debate regarding whose responsibility it is to ensure that the public is not exposed to fake news — newspapers, social media sites like Facebook and Twitter, governments, or other members of the public? And who determines whether this constitutes the removal of illegitimate content or the taking away of freedom of speech?

The other ethical conundrum … who has access to our data?

Privacy and data security breaches are among the biggest risks of using the internet. Every search, click and like is recorded when we use internet sites or mobile applications, and even the slightest activity results in recommendations and ads.

Some memorable quotes from the Mark Zuckerberg testimony before the United States Congress in 2018 included the following:

  • “If I’m emailing within WhatsApp … does that inform your advertisers?” (Senator Brian Schatz)
  • “I’m communicating with my friends on Facebook, and indicate that I love a certain kind of chocolate. And, all of a sudden, I start receiving advertisements for chocolate. What if I don’t want to receive those commercial advertisements?” (Senator Bill Nelson)
  • “How do you sustain a business model in which users don’t pay for your service?” (Senator Orrin Hatch)

While many media sites dismissed these questions as an indication of the lack of digital savvy of a few ‘boomers’, the truth is that most internet users are somewhat nervous and confused about how content, reviews, behavioural data, and algorithms all fit together to enable relevant recommendations and advertisements to pop up as they browse.

The more targeted these are, the scarier it is — advertisers, social media sites and online content providers appear to ‘know’ more about our likes, interests and behaviour than even our closest friends and family.

The core issue … can the public trust you?

With datasets constantly increasing, members of the public curating content, and machine learning algorithms becoming more sophisticated, we should expect that models will be more efficient and more useful. But it seems that we have become so focused on finding links between consumer behaviour and sales or hits that we may have forgotten that being trustworthy is a basic requirement of succeeding in business.



Sayuri Moodliar

Writer, explorer and lifelong learner