
The Dilemma

Karl Johansson · 2 days ago · 25 min read

This is part three of a five part series on Elon Musk, Twitter, and how social media does and should work in a modern society. You can find the previous part here:




The third chapter of ‘The Bird & The Technoking’ will grapple with the core issues of social media. As the name implies, social media presents a dilemma for societies. On the one hand, it is a genuinely great technology for keeping in touch with distant friends and relatives, for connecting with new people who share your interests, and for providing an open platform where regular people can discuss and debate. On the other hand, the anonymity it offers actively attracts bad-faith users; mis- and disinformation are commonplace, as are scams, racism, bigotry, and all forms of bullying. Do the benefits outweigh the drawbacks? Can we preserve all the strengths while simultaneously getting rid of all the weaknesses? Before we get to these grand questions it is worth remembering how recent a phenomenon social media is, and that there are few if any direct historical parallels for us to draw from when formulating thoughts and policy.


Before we get to the meat of the questions outlined above, we need to establish what a social media platform is. There are a variety of definitions, depending on when each was formulated and whether it treats social media as a feature set, a business venture, or a space for public discourse. In the context of ‘The Bird & The Technoking’ the most relevant conception of a social media platform is as a distinct island of internet culture, as you can – and many people do – argue politics on Facebook, Instagram, Snapchat, Youtube, Reddit, 4Chan, Discord, TikTok, Twitch, Tumblr, and Twitter. Obviously, ‘a distinct island of internet culture’ is hardly a workable definition, so to be a bit more specific let’s add the following four core features: a personal profile, an instant messaging system, a friend/follower list, and some form of ‘public space’ where users can post so that others can see. That is a broad definition, and it still excludes some of what we would probably want to include, but it will have to do.


At the time of writing, I am 28 years old, born just before the new millennium, and can reasonably claim to be part of the first generation to grow up with social media, so this chapter will draw more on my personal experiences than “The Man” and “The Bird” did. Coincidentally, social media is also 28 years old this year, as the two pioneers who laid the groundwork for the internet we now know and love or hate were both founded in 1997: sixdegrees.com and AOL Instant Messenger (AIM). We should think of these two as proto-social media, though, since many of the defining features we now associate with social media would not become standard until Facebook started the social media revolution. Between them, sixdegrees and AIM brought the key features of early social media – instant messaging and a profile page, as well as the friend or follower list. What makes them proto-social media is that they lack the crucial public space of later platforms. Proto-social media can be thought of as social networks rather than as media, as they primarily filled a social role. No one got their news on sixdegrees or AIM.


As I remember it, early social media – which in my case meant primarily Facebook and Instagram – was genuinely fun. The broad arc of social media’s development has bent away from the intimate, interpersonal interactions which defined early usage towards the impersonal, algorithm-driven social media we have today. When I started using Facebook I only added people I knew personally, and since there were no pre-established customs on Facebook many of my friends posted openly and actually responded to each other’s posts with more than platitudes. Partly this rose-tinted view of early social media is shaped by the stage of my life at which social media became widespread. For many – myself included – the mid to late teenage years are a time when social networks are at their largest, which naturally makes social media more appealing.


What ruined my idyllic, imagined golden age of social media was the boring reality that hosting websites with a lot of traffic is expensive, and social media firms are businesses with a profit motive. Almost all the problems social media would go on to spawn can be traced back to the core of its business model. In the immortal words1 of Mark Zuckerberg: “Senator, we run ads.” Running ads is not in itself a bad thing; journalism has relied on advertising for over a hundred years, and the modern entertainment business that gave rise to radio and TV was built on advertising. The difference between traditional ad-supported businesses and social media is what is being supplied. An ad-supported TV channel provides free scripted or unscripted entertainment in exchange for running ads at set intervals, but Facebook has no entertainment to provide. Facebook builds the platform for you to use, which deputises users into creating the entertainment they get in exchange for the ads. To put it another way, you are not on Instagram to see what the Tesco Instagram page posts, you are on Instagram to see pictures of your aunt’s cat and your friend’s meals. So if you run a social media platform you make money by having as many users as possible spend as much time on your platform as possible: more users create more posts, and more time on the platform means more ads shown. The problem early social media faced was that if you as a user only follow your real-life friends then the time you spend on the platform is primarily governed by how much your friends post, and how interesting those posts are. That is outside the firm’s control.


At the same time marketers realised that social media is an optimal vehicle for word-of-mouth advertising; there are academic papers from the late noughties investigating the now banal question of whether2 it would be good for brands to have a Twitter account. Social media firms needed more posts, and marketers realised that advertising not just on social media but through social media users is an effective strategy, so the economic model for not just social media platforms but also for popular social media users increasingly came to rely on advertising. It became beneficial for platforms to push the posts made by their most popular users so that people stayed on the platform longer; this made the popular users more like micro-celebrities than regular users, which made them more attractive to brands wanting to advertise through electronic word-of-mouth. That in turn gave the popular users more incentives and resources to create better posts, which continued the feedback loop. In essence, the economic model social media platforms have created inevitably leads to the semi-professionalisation of posting; in other words, the rise of the influencer.


This wrecks the old dynamic of regular people posting about their regular lives, partly because it encourages people to aspire to a career in influencing, which changes how they post, and partly because it is no longer in the interests of the platform to promote the ‘family and friends keeping in touch’ usage model. Early social media felt to me like a digital version of hanging out on the schoolyard – full of low-stakes gossip about people you personally know – whereas the new version is too stuffed with ads and driven by algorithms to be anything but a commercial venture. The classic sign of a platform entering its mature, impersonal phase is when it changes its homepage, or whatever its version of the ‘public space’ is, to be algorithmically driven, with plenty of recommendations from users you do not follow, instead of being primarily for seeing what your friends have posted.


Algorithms are important to social media for two reasons3: first, they deliver posts a user is likely to find interesting, based on the data the platform holds on that user, and second, the data on how the user engages with the suggested posts helps refine the algorithm. In other words, the more recommendations the algorithm makes, the better it gets. This means there are both financial and technological incentives for platforms to keep users on the platform. Financial in the sense that the longer a user spends on the platform the more ads they see, which is how the platform makes its money, and technological in the sense that more time on the platform makes the algorithm better at serving up posts the user likes, which naturally makes the user spend longer on the platform still.


So what is a machine learning algorithm, and how does it work? A general explanation4 is that machine learning is “the field of study that gives computers the ability to learn without explicitly being programmed.” This is done by giving a programme a lot of data in which to find patterns, a process known as ‘training’, where the programmer instructs the programme what counts as a good result and what counts as a bad result. Machine learning generally comes in three forms – descriptive, predictive, and prescriptive – but for our purposes predictive is the most relevant. To explain it with a practical example, the Instagram algorithm recommends you nine pictures and three videos on your explore page. When you press a picture of a cat the algorithm learns that you might be interested in cats, and that you are probably not interested in the other eight pictures. Repeat this process a couple of thousand times and it starts to build a reasonably accurate picture of what types of posts a user spends time on. What is crucial to understand about machine learning, though, is that a computer programme cannot see things the way a human can; it only understands that pictures are similar, without understanding why. In other words, it knows that you are interested in pictures which resemble the picture of the cat, not that you are interested in cats, and accordingly it will not recommend you pictures of text about cats just because you liked the cat picture.
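The feedback loop described above can be sketched in a few lines of code. This is a deliberately simplified toy, not how Instagram actually works: all the names here (`ToyRecommender`, the `cluster_A/B/C` labels) are invented for illustration, and note that the programme reinforces opaque similarity clusters, not concepts like ‘cats’.

```python
import random
from collections import defaultdict

# A toy predictive recommender. Posts are grouped into opaque similarity
# "clusters" - the programme never knows what a cluster depicts, only that
# clicks on it should be reinforced. All names here are hypothetical.
class ToyRecommender:
    def __init__(self, clusters):
        self.clusters = clusters
        self.scores = defaultdict(lambda: 1.0)  # prior: every cluster equally likely

    def recommend(self, n):
        # Sample an n-item "explore page", weighted by learned engagement scores
        weights = [self.scores[c] for c in self.clusters]
        return random.choices(self.clusters, weights=weights, k=n)

    def feedback(self, shown, clicked):
        # Reinforce whatever was clicked; slightly penalise what was ignored
        for c in shown:
            if c == clicked:
                self.scores[c] += 1.0
            else:
                self.scores[c] = max(0.1, self.scores[c] - 0.05)

random.seed(0)
rec = ToyRecommender(["cluster_A", "cluster_B", "cluster_C"])
for _ in range(1000):
    page = rec.recommend(12)         # 12 items, like the explore page example
    if "cluster_A" in page:          # this user only ever taps cluster_A posts
        rec.feedback(page, "cluster_A")
    else:
        rec.feedback(page, None)

# After many rounds, cluster_A's score dwarfs the others, so the feed
# converges on one kind of post - the "couple of thousand" repetitions above
print(dict(rec.scores))
```

Run it and the score for `cluster_A` ends up orders of magnitude above the rest: a crude version of how repeated taps narrow a feed towards one cluster of similar-looking posts.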


The takeaway is that while machine learning algorithms are necessary for social media platforms’ current business models, they cannot understand context, which creates a lot of issues with content moderation. Here is an example: a picture of Hitler in someone’s home is a massive red flag, but it is appropriate in a museum about the Holocaust. It is trivial for a person to understand why and how these different contexts shape what the picture is supposed to mean, but it is difficult for a computer programme. This creates dual problems: on one hand, the algorithm might recommend communist propaganda to a user interested in Chinese history because that user is known to have engaged with posts about Mao Zedong. On the other hand, it might ban or otherwise suppress a post informing people about the Bucha massacre in the war in Ukraine because it features graphic images which would be inappropriate in other contexts. These are thorny issues, and that is before we even get into how this interacts with platforms’ profit motives. To get a perfect illustration of how much power platforms like TikTok and YouTube and their algorithms have over modern culture, let us consider two case studies: first, misogyny on TikTok, then, religion on YouTube.


In 2022 a new star was born on TikTok: Andrew Tate. Here is how The Guardian’s Shanti Das summed up Tate in 20225: “Andrew Tate says women belong in the home, can’t drive, and are a man’s property. He also thinks rape victims must “bear responsibility” for their attacks and dates women aged 18–19 because he can “make an imprint” on them, according to videos posted online. […] Tate’s views have been described as extreme misogyny by domestic abuse charities, capable of radicalising men and boys to commit harm offline. But the 35-year-old is not a fringe personality lurking in an obscure corner of the dark web. Instead, he is one of the most famous figures on TikTok, where videos of him have been watched 11.6 billion times.” How does one of the world’s largest and most popular social media platforms allow someone like Tate – who at the time of writing stands convicted in Romania6 of “rape, human trafficking and forming an organised crime group to sexually exploit women” – to be on the platform at all, let alone become one of its most famous stars?


The answer is misaligned incentives and, if you are being uncharitable, greed. Controversy sells. Or more accurately, controversy sparks feelings and gets people watching. As mentioned earlier, TikTok has a complicated and semi-detached relationship with its users and influencers. TikTok did not commission Tate’s misogynistic rants for the platform to show to its users; Tate independently started posting on TikTok. Advertisers cannot blame TikTok for this, as any platform open to everyone is bound to attract some bad eggs. But TikTok also directly profits off the controversy Tate generates, as it makes more people watch more videos and thus more ads. It is also important to note that a lot of the attention Tate was getting was generated by feminist posters objecting to and discussing Tate’s rhetoric, leading to complicated content moderation questions. Is a video featuring misogynist language inappropriate if it is used as an example in a video explaining feminist theory? Is there such a thing as bad press, or does talking about Tate, even when condemning him, help spread his message? An optimistic view of TikTok’s actions is that they were thinking hard about how to handle Tate and the problem people like him pose when running a social media platform. Perhaps they were slow to act due to grappling with thorny and deeply political questions about the limits of free speech and the contexts in which showing clips of him was appropriate, if any. The cynical view is that the company held off on banning Tate for as long as possible in order to make money from the views he generated.


One of the key features of TikTok compared to other video platforms like YouTube, which we will discuss later, is that TikTok’s ads are not tied to a specific video. Ads are shown after a certain number of videos on TikTok rather than before a specific video. This is important because any advertiser upset that its product was shown right before a misogynistic rant by an alleged human trafficker – hardly good for the brand image – can be assuaged by the platform. There was almost certainly some discussion where TikTok told advertisers something along the lines of: “We have no control over which video is played after the ads, that’s down to the algorithm. We’re sorry that your brand was shown before that video and we’ll investigate internally how to improve our processes, but we don’t select which ads play before which video.” Ultimately public pressure became intense enough that TikTok banned Tate in August 2022, but make no mistake: its business model incentivises keeping controversial users on the platform for as long as possible.


TikTok’s muddled stance on what is and is not allowed on its platform extends further than hate directed at women. The app has also been great for criminals. In 2023 TikTok was not just the home of makeup and dance tutorials but also of “Kia boys” tutorials7, wherein people teach you how to steal and hotwire Kia and Hyundai cars. There were also videos openly advertising8 smuggling people illegally from Albania to Britain. Some who chose to emigrate through these illegal channels advertised on TikTok were forced to work in cannabis farms run by Albanian gangs when they arrived in England. One can argue about the extent to which TikTok is to blame, but it surely deserves some blame.


While the story of Tate being banned from TikTok and Meta’s platforms, among others, was widely reported, few media outlets examined TikTok’s role in letting Tate’s message spread. The app is routinely argued9 to be a counterintelligence threat in Washington DC due to being owned by a Chinese company. It is also notorious for its censorship regarding Hong Kong10, LGBTQ11 issues, and people with disabilities12, just to name a few. And that is ignoring how the Chinese version of the app, Douyin, censors topics like the treatment of Uighurs in Xinjiang, the pro-democracy movement in Hong Kong, and Taiwan. TikTok has the tools and infrastructure necessary to suppress videos it finds distasteful, such as videos from “users deemed too ugly, poor, or disabled for the platform”, to quote The Intercept’s reporting13 on TikTok censorship. The fact that TikTok had both the means and the opportunity to remove Tate’s videos before he became an online sensation means that the platform effectively deemed his open misogyny to be less dangerous to its users than seeing LGBTQ users or users it deemed “ugly”. It is easy to see Tate’s ban as a success – TikTok’s users put enough pressure on the platform to ban him in spite of the revenue he generated – but that rings hollow when TikTok had the power to remove him at any time, and chose not to.


Another dramatic example of the power social media algorithms can have is the rise of the Flat Earth movement on YouTube. According to research14 by Ashley Landrum of Texas Tech University, people who attend flat earth conventions learned about the “theory” primarily through YouTube. Before blowing this out of proportion it is important to note that, as Dan Olson of Folding Ideas put it15: “Flat Earthers are not otherwise normal people with one crazy belief. They believe Flat Earth because it fits the world view they already hold.” Flat Earth is fairly easy to disprove; Dan Olson in his documentary explains how faulty maths, a misunderstanding of how big mountains are, and a distrust of science form the evidentiary foundation of Flat Earth. In short, most people would not and do not fall for their supposed “evidence” or “proof”.


Still, even if the majority of people are not susceptible to conspiracy theories like Flat Earth, the increased reach platforms like YouTube offer dramatically increases the pool of potential recruits. Mark Sargent, perhaps the key figure in the Flat Earther movement, is quoted in a Wired article16 as saying: “We were recommended constantly, people getting into flat earth apparently go down this rabbit hole, and so we're just gonna keep recommending.” All of this changed with a tweak to YouTube’s algorithm in early 2019, and Sargent said the following regarding the change: “You will never see flat-earth videos recommended to you, basically ever”. YouTube fixed the issue, right?


It is a bit more complicated than that. According to a study commissioned by YouTube and published in 2016, US adults spend more time watching YouTube than TV, and more adults under 50 watch YouTube during prime time than watch the top ten TV programmes put together. Just like TikTok, YouTube has incredible reach and thus needs to be careful about what it is showing people. Like TikTok, YouTube does not produce its own videos and might therefore be thought of as a neutral host rather than an active curator. But when its algorithm can make or break not just a channel like Mark Sargent’s but an entire genre of videos, it cannot get credit for removing harmful videos without also getting the blame for spreading them in the first place.


Clive Thompson in the aforementioned Wired article16 writes the following about a 2014 shift at YouTube to prefer watch time over simple clicks on a video: “The algorithmic tweaks worked. People spent more and more time on the site, and the new code meant small creators and niche content were finding their audience. It was during this period that Sargent saw his first flat-earth video. And it wasn't just flat-earthers. All kinds of misinformation, some of it dangerous, rose to the top of watchers' feeds. Teenage boys followed recommendations to far-right white supremacists and Gamergate conspiracies; the elderly got stuck in loops about government mind control; anti-vaccine falsehoods found adherents. In Brazil, a marginal lawmaker named Jair Bolsonaro rose from obscurity to prominence in part by posting YouTube videos that falsely claimed left-wing scholars were using “gay kits” to “convert kids to homosexuality.””


Flat Earth is a good example for our purposes because it is a relatively harmless conspiracy theory, mostly based on a fundamentalist and literal interpretation of the Bible rather than the violently apocalyptic beliefs of, say, Q-Anon. But conspiratorial thinking is a way of viewing the world. When the seed of doubt is planted that the government and the powerful classes in society are actively deceiving you and trying to keep you down, it can blossom into a variety of conspiracy theories. Even if someone is not convinced by the whole package of Flat Earth, they could be convinced by one of the predicate conspiracies which make up the whole. You might reasonably come to the conclusion that the earth is indeed round but be convinced by the movement’s “debunking” of the moon landing, which waters the seed of conspiratorial thinking. It does not matter whether the seed blossoms into Q-Anon, the Great Replacement, or 9/11 Trutherism; what matters is that social media may have actively helped create asocial or in some cases anti-social conspiracy movements by acting like a profit-seeking enterprise instead of a social institution.


Of course, social media is not all bad. It has led to genuinely transformative social movements like #MeToo and heroic and laudable attempts at social change like the Arab Spring. As discussed in ‘The Bird’, Twitter and other platforms have changed our politics and changed how and where we discuss important issues. There have always been public outlets for discussion in free societies, whether the literal town square or the opinion pages. What sets social media apart is twofold: it is a space where everyone is encouraged to post, and the corporations running these platforms see themselves as neutral caretakers who only step in when something goes wrong. Imagine if Mark Sargent had voiced his theories in an op-ed instead of on YouTube. Would the Financial Times have accepted his submission and published it, only to stop printing conspiracy theory nonsense after academics had produced research indicating that the op-eds the Times printed were a major booster of these “theories”? Obviously not. When we put it in terms of a newspaper, YouTube and TikTok’s actions are patently absurd.


A free society in the social media age has to grapple with what free speech is, and how to regulate it. We know what happens if we shy away from it; people like Sargent and Tate become celebrities and spread their misinformation and hate. In ‘The Bird’ we discussed the Silicon Valley idea that technology is inherently good, that better technology leads to better outcomes, and Sargent and Tate are two powerful arguments against that idea. As has been alluded to several times already, algorithmically driven social media’s main power is its ‘agenda setting power’. By changing what algorithms are supposed to recommend, firms like Twitter, YouTube, and TikTok are able to shift the public conversation; and they have not been prudent in exercising that power. Some may assume, then, that building a pro-social social media is an intractable problem. But other mass media such as newspapers, radio, and TV have managed to be profit-seeking enterprises while still filling the social functions of spreading news and serving as spaces for public debate. The issue is that social media was never designed to be a replacement for the local newspaper.


The original social media platforms were designed to be spaces where people could connect digitally with friends and family; it was the need to expand time on the platform, to be able to show more ads, which made the platforms accidentally stumble into their current societal role. Their new power must be accompanied by more responsibility, otherwise platforms will continue minting questionable stars like Tate and Sargent. In a culture where technology is sometimes revered for its own sake rather than for what it enables humans to do, there is a risk that the people in charge see the harm their platforms cause as transitory rather than structural. YouTube did change its algorithm in response to the Flat Earth movement, which was essentially created through a quirk of that algorithm, but as long as the algorithm decides what to recommend instead of a human, the hunt for a new exploit will remain. Modern social media is a winner-takes-all lottery for social prestige, political clout, and cold hard cash. If there is a trick to gaming the algorithm people will use it; refuse, and you will always be behind your rivals. And after 20 years of social media the competition for views, clicks, likes, and subs is fierce. The pros use professional cameras on Instagram and professional editors on YouTube; if you are starting out with just a smartphone and ambition you will need all the tricks you can get your hands on.


If social media were purely an economic venture for the users then the problem of disinformation would solve itself – it is simply more profitable to do fashion videos than fringe conspiracy theories. But so much of social media is just that: social. As misguided as someone like Sargent is, and as loathsome as someone like Tate is, they are broadcasting their messages not out of a cynical profit motive but from genuine conviction. Social media is social for good and ill, and it is this that the platforms themselves seem not to understand. Pejorative nicknames for social media sites abound; hellsite, cesspool, or any number of other terms are routinely thrown around to describe the platform users wish they could quit. But social media is where it happens. Because it is where things happen, the platforms get away with treating their userbase poorly while pandering to the brands and institutions whose advertising money keeps the site running. It is common in business to say that the customer is always right. When discussing social media, remember that the users are not the customer; the advertisers are. And for the advertisers the platform provides an economic service, not a social one.


Modern social media, then, is an arms race. The active users create things in the hope of striking the mother lode of clout, prestige, and money the platforms can offer, while the platforms use the creators’ work to attract passive users they can show ads to. The goal is always engagement, the fancy term for time spent and ads watched. Engagement is the issue, though. Engagement is what makes these sites so frustrating to deal with. The key insight about engagement is that maximising it is almost always in direct opposition to the users’ interests. Unlike a movie theatre or a book, a social media post is, from the platform’s point of view, never supposed to be a complete and satisfying experience. After finishing Blade Runner, Dune Messiah, or The Last of Us Part 2 I have had an experience of art which I need time to digest; I am not going to immediately jump into the next movie, book, or game. But that is exactly what social media platforms want you to do. You cannot be given one great, artful, and complete experience, because then you would want to take a break before the next post, and by extension the next ad.


Engagement-oriented recommendation algorithms effectively function as a sort of dopamine slot machine. With an endlessly scrolling feed, the algorithm will only give you what you are really looking for after four or eight or even twenty posts which it knows you will not like very much, to keep you hooked. Since 2022 tech journalists at The Washington Post17, VICE18, and Mashable19 have complained about Instagram’s deteriorating user experience, and that is just a small drop in the bucket compared to the legions of people, journalists and otherwise, who have complained about worsening user experiences on other platforms. That deteriorating user experience is what makes the hunt for the next good post so potent; it is what makes you want to pull the slot machine lever. In an article20 for the Brown Undergraduate Journal of Public Health Sophia Petrillo writes: “Although the similarity may not be immediately evident, analysis of social media apps reveals that they are designed to function like slot machines — the “swipe down” feature required to refresh one’s feed mirrors pulling a slot machine lever, and the variable pattern of reward in the form of entertaining videos on TikTok simulates the intermittent reward pattern of winning or losing on a slot machine; this pattern keeps individuals engaged under the impression that the next play might be “the one.””
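The ‘variable pattern of reward’ Petrillo describes can be made concrete with a small simulation. This is an illustrative sketch only – the hit rate and the `feed` function are invented for the example, not taken from any real platform – but it shows why the reward schedule feels like a slot machine: the good post arrives at unpredictable intervals.

```python
import random

# Hypothetical feed: mostly filler, with a "great post" appearing at random.
# A 15% hit rate is an assumed number purely for illustration.
def feed(hit_probability=0.15, seed=42):
    rng = random.Random(seed)
    while True:
        yield "great post" if rng.random() < hit_probability else "filler"

posts = feed()
scrolls_between_hits = []
count = 0
for _ in range(10_000):
    count += 1
    if next(posts) == "great post":
        scrolls_between_hits.append(count)
        count = 0

# The reward arrives on a variable-ratio schedule: sometimes the very next
# post, sometimes only after a long run of filler. The unpredictability,
# not the average rate, is what keeps the lever being pulled.
print("shortest wait:", min(scrolls_between_hits))
print("longest wait:", max(scrolls_between_hits))
print("average wait:", sum(scrolls_between_hits) / len(scrolls_between_hits))
```

The average wait settles near 1/0.15 ≈ 6.7 scrolls, but the spread between the shortest and longest waits is wide – and it is that spread, in behavioural terms, that mirrors the intermittent reinforcement of a slot machine.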


It is not clear whether or not this is the intent behind the TikTok or Instagram algorithm; as discussed the algorithms are black box functions meant to “learn” through reinforcement. It could be that the engineers in charge of the algorithms realised the power of a dopamine slot machine or it could be that the algorithm accidentally stumbled across the irrational part of human psychology which makes us vulnerable to gambling. Intent hardly matters though. These systems appear to work like slot machines irrespective of what their designers wanted them to be, and they cause harm no matter what the intent behind their design was.


Just like with Tate and Sargent, the gambling-like nature of algorithmically driven social media is a problem caused by misaligned incentives and unfortunate chance if you are being charitable, and by cynical greed if you are not. The people in charge of Instagram and TikTok have the power to stop their platforms from preying on people’s addictive personalities. There was a time, not that long ago, when social media did not use underhanded methods like engagement algorithms to attract users. We do not have to accept things as they are, and we need to acknowledge that even if the algorithms came up with this system, the people in charge are culpable for the harms caused, as they had the power to stop it at any time and chose not to.


So far we have focused on case studies like Sargent and Tate because they dovetail well with the core argument of this chapter: that social media platforms have a lot of power which those platforms use irresponsibly. Tate and Sargent are interesting as social media phenomena because they are figures who, at least at one point, had a large following as a result of their proselytising of fringe beliefs and values. They are interesting not as overnight successes on social media, but as case studies of using social media to push fringe politics. They pose thorny issues for an open society which grants people the right to free speech while also trying to limit the harm hate speech and misinformation cause. But the harm caused by social media platforms is not limited to spreading hate; there are plenty of studies about how Instagram affects users’ self-esteem21 and body image22 negatively. Even Meta itself admits as much in internal research23.


Moving fast and breaking things is the motto of the Silicon Valley technology industry, a phrase coined by Mark Zuckerberg to describe a business ethos which prioritises innovation at the expense of almost anything else. Disruption, innovation, digitalisation, and other business buzzwords can be appropriate in certain industries. No one would mind having faster washing machines and safer cars; indeed the development of phones from cord-bound communication devices to mini computers that can do almost everything has been great, and is a great example of what Silicon Valley can do at its best. We would never have gotten the modern smartphone without innovation and disruption. But not every industry is ripe for, or a good candidate for, disruption. Sometimes an industry serves a social or political role in addition to an economic one. Shutting down rural hospitals might make business sense, but it does not make sense from a public health perspective. Shutting down local newspapers might be good for protecting profit margins, but it also makes it more difficult to hold local government officials accountable. Moving fast and breaking things is a bad idea when your business is ultimately about people. Just like healthcare and news, social media is not just a business – it is everyone’s business.


The point here is not to make a reductive ‘business bad, the profit motive is inherently corrupting’ argument. Running servers, hiring developers, and hiring moderators is expensive; social media has to generate revenue to exist, and despite everything we want it to exist. The argument is rather that the fact that a company can own the places where so much of modern life happens needs to come with strings attached. Controlling social media confers power: agenda-setting power, the power to decide who can and cannot get their voices heard, the power to educate and the power to radicalise, the power to create stars and topple regimes. Why do we let that power be wielded with so little oversight?


It makes perfect sense that social media firms could not have anticipated problems like worsening self-esteem or the spread of fringe modern faith movements; who could possibly have? The main issue is not the problems caused, but the way these firms have been unable or unwilling, possibly both, to fix the issues they cause and to learn from them. YouTube may have fixed the specific problem of spreading Flat Earth conspiracy theories, but as long as the algorithm optimises for “engagement” there is a real risk that some other pernicious “theory” gets boosted. And that is before even discussing how the lessons of YouTube’s experience with Flat Earth went unheeded by other platforms when QAnon started to gain momentum. Social media as an industry will never get to grips with these issues on its own. Leaders in the industry are concerned with running a business, not a public institution, and again, it is difficult to tell what the second- or third-order effects of a change to a content algorithm will be.


The situation is far from hopeless, though. If we recognise social media for what it is – a social institution – then we can design systems for controlling it. Mandating that people whose primary concern is the effect on society, rather than the bottom line, hold leadership positions at, say, Instagram could go a long way towards ameliorating these issues. If current trends in internet usage and online socialisation continue unabated, we are moving towards a society where social media is a load-bearing institution. Being in charge of such important institutions entails a lot of responsibility, responsibility which has so far been lacking. When the only accountability mechanism for Instagram is Meta’s share price, is it really surprising that the company has not changed radically after the discovery that it knowingly designed a platform that dismantles teenagers’ self-esteem on an industrial scale?


Social media may have started as a fun way to connect with family and friends, but in the 28 years since AIM and Sixdegrees.com, platforms have branched out and become more and more important. These platforms are the primary place for discussing news, politics, art, and culture; and most of them have distinct cultures of their own. Social media is the conduit for and enabler of public life as we know it in the 21st century. That confers power. That power cannot be handed to parties unaware of, or unconcerned with, how consequential their decisions are.




Next Chapter: The Bird & The Technoking





Sources:

The Dilemma

1 ”Senator Asks How Facebook Remains Free, Mark Zuckerberg Smirks: ‘We Run Ads’”. YouTube video, NBC News, https://www.youtube.com/watch?v=n2H8wx1aBiQ. Accessed 1 October 2024.

2 Jansen, Bernard J., et al. ”Twitter Power: Tweets as Electronic Word of Mouth”. Journal of the American Society for Information Science and Technology, vol. 60, no. 11, 2009, pp. 2169–88, https://web.archive.org/web/20160305013618/https:/faculty.ist.psu.edu/jjansen/academic/jansen_twitter_electronic_word_of_mouth.pdf.

3 ”Algorithms in Social Media Platforms”. 24 April 2021, https://www.internetjustsociety.org/algorithms-in-social-media-platforms.

4 ”Machine Learning, Explained”. MIT Sloan, https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained. Accessed 1 October 2024.

5 Das, Shanti. ”Inside the Violent, Misogynistic World of TikTok’s New Star, Andrew Tate”. The Observer, 6 August 2022. The Guardian, https://www.theguardian.com/technology/2022/aug/06/andrew-tate-violent-misogynistic-world-of-tiktok-new-star.

6 ”Andrew Tate Charged with Rape and Human Trafficking”. BBC, 20 June 2023, https://www.bbc.com/news/world-europe-65959097.

7 ”American Cities Are Suing Car Manufacturers over Auto Theft. They Have a Case”. The Economist, 31 August 2023, https://www.economist.com/cities-are-suing-car-manufacturers-over-auto-theft-they-have-a-case.

8 ”The TikTok Exodus: How an Albanian Town Was Emptied”. The Economist, https://www.economist.com/1843/2023/08/24/the-tiktok-exodus-how-an-albanian-town-was-emptied. Accessed 1 October 2024.

10 ”TikTok Suspends Libertarian Think Tank That Posted about Hong Kong and Jimmy Lai”. National Review, 3 May 2023, https://www.nationalreview.com/corner/tiktok-suspends-libertarian-think-tank-that-posted-about-hong-kong-and-jimmy-lai/.

11 ”TikTok Admits Restricting Some LGBT Hashtags”. BBC, 10 September 2020, https://www.bbc.com/news/technology-54102575.

12 Kelion, Leo. ”TikTok Suppressed Disabled Users’ Videos”. BBC, 3 December 2019, https://www.bbc.com/news/technology-50645345.

13 ”TikTok Told Moderators: Suppress Posts by the ‘Ugly’ and Poor”. The Intercept, 17 March 2020, https://web.archive.org/web/20200317011112/https://theintercept.com/2020/03/16/tiktok-app-moderators-users-discrimination/.

14 Landrum, Asheley R., et al. ”Differential Susceptibility to Misleading Flat Earth Arguments on YouTube”. Media Psychology, vol. 24, no. 1, 2021, pp. 136–165, DOI: 10.1080/15213269.2019.1669461.

15 In Search of a Flat Earth. Directed by Dan Olsen, YouTube video, Folding Ideas, 2020, https://www.youtube.com/watch?v=JTfhYyTuT44.

16 Thompson, Clive. ”YouTube’s Plot to Silence Conspiracy Theories”. Wired, https://www.wired.com/story/youtube-algorithm-silence-conspiracy-theories/. Accessed 1 October 2024.

17 Hunter, Tatum. ”How to Fix Your ‘Trash’ Instagram Feed – at Least Temporarily”. Washington Post, 18 July 2022, https://www.washingtonpost.com/technology/2022/07/18/turn-off-suggested-posts-instagram/.

18 ”Instagram Sucks Now, Sorry”. Vice, 26 July 2022, https://www.vice.com/en/article/why-everyone-hates-instagram-now/.

19 Schroeder, Stan. ”Instagram Seems to Have Completely Stopped Caring about Its Users”. Mashable, 8 June 2022, https://mashable.com/article/instagram-too-many-ads.

20 ”What Makes TikTok So Addictive? An Analysis of the Mechanisms Underlying the World’s Latest Social Media Craze”. Brown Undergraduate Journal of Public Health, 13 December 2021, https://sites.brown.edu/publichealthjournal/2021/12/13/tiktok/.

21 Alfonso-Fuertes, Isabel, et al. ”Time Spent on Instagram and Body Image, Self-Esteem, and Physical Comparison Among Young Adults in Spain: Observational Study”. JMIR Formative Research, vol. 7, April 2023, p. e42207. PubMed Central, https://doi.org/10.2196/42207.

22 Pedalino, Federica, and Anne-Linda Camerini. ”Instagram Use and Body Dissatisfaction: The Mediating Role of Upward Social Comparison with Peers and Influencers among Young Females”. International Journal of Environmental Research and Public Health, vol. 19, no. 3, January 2022, p. 1543. PubMed Central, https://doi.org/10.3390/ijerph19031543.

23 Wells, Georgia, et al. ”Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show”. Wall Street Journal, 14 September 2021, https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739?mod=hp_lead_pos7.


