Keynote 4: The connected world with Elizabeth Linder
19th May 2021
Registered attendees can watch again on demand here.
With first-hand experience of the influence of social media, for both good and ill, she considers the realities, hype and consequences of an increasingly connected society. Working at the intersection of business, politics, society and technology with both Facebook and YouTube, Elizabeth has helped leaders around the world understand how social media has changed the balance of power in an era where the public’s need for authenticity has led to Donald Trump’s Twitter presidency.
We have seen much discussion about the impact of social media – sadly mostly negative of late – and a poll running through the session asked: “Should social media platforms be forced to do more to rid their sites of sexist, racist and anonymous postings?” It resulted in a resounding 98% voting yes!
Elizabeth opened with an acceptance that in 2021 the connected world makes many people feel nervous, but reflected that 10 years ago a keynote on the connected world would have been exhilarating – it was seen as an exciting space for democracy to be ushered in, creating a robust dialogue.
What has changed since then? Why less optimism and more concern?
Going back to when social media companies began to emerge, in 2008 Facebook’s mission was about connecting with people you already knew. It created a different way to connect with those already in your life.
She remembered a discussion in Spain when the concept was laughed at! Tech wasn’t needed to connect with friends and family, she was told – they are only down the road! Culturally and geographically the situation was different in the USA, with friends moving from high school to college to work, often relocating to different states yet still wanting to remain connected. Social media did that.
It subsequently and quickly evolved into a movement that brought together those who didn’t already know each other. And more: it began to be used for campaigns and for politics.
A force for good
Barack Obama used social media to help him get elected – that was seen by many in Silicon Valley as positive. Subsequently there were examples of platforms being used to rally support for causes for change, and social media was seen as a force for good. It was, for example, a driver for the rallies leading to the Arab Spring, which originally aimed to create positive change in national leadership.
Elizabeth explained that this was a roller coaster of change. She spoke to the then new leader of Egypt, who was concerned about being overthrown. He said that he himself had previously been a person on his phone called to action, but now he was in office, a second revolution was coming, and he was afraid!
This raised other questions about sides: how does one determine what is right and what is wrong in a connected world?
Observers or protagonists
Elizabeth explained that in the early days Facebook and other social media companies were not in the business of taking credit for actions or outcomes; they were observers allowing conversations to happen. Commentators would say that without Facebook a particular outcome may not have happened, which could be both positive and frightening. Some might say it was the platform that was de-stabilising a country. Who decides what is a good and what is a bad outcome? And who determines whether it would or would not have happened in any case?
Techs may say they are on the side of freedom of speech and that there is unabashed allowance of it on social media. Platforms publishing speech are, however, regulated spaces: right from the beginning there were things that could not be said, so users could not bully, harass or glorify terrorism.
How do techs take policing in the real world and apply it to the tech space to reform users or ban them? What has happened in the US with Trump will have ramifications around the world, where leaders are watching it.
Policing of content has been a concern and is costly. Techs do need to get better at it, with more resources. At present, given the number of Facebook users, the policing is the equivalent of having only one police officer for the whole of New Jersey. But is scaling up really feasible?
However, focusing effort on looking at content can be useful. For example, in border conflicts – obviously, in the real world there is not 24-hour total guarding across the whole border. Can tech get cleverer at monitoring situations by using monitors who understand the cultural issues in a particular area and might identify flashpoints through conversations?
In addition, the growth of the gig economy could help content moderation, but this hasn’t been applied yet.
Could the solution be AI? AI is used, but the nuances are missed! How can a machine currently distinguish between a teen commenting “I could kill my teacher for that homework” and a comment from someone actually wanting to kill someone?
Tech companies are trying to get it right, but this should have been addressed earlier.
Liability and responsibility
Without giving a legal opinion, Elizabeth addressed the question of posts causing anguish, suggesting there was more to be done to understand the implications for mental health. This needs more discussion within the tech companies, with the public involved too.
On spam, false adverts and ID theft, Elizabeth felt that this was a big issue, particularly on the identity side, where there are examples of whole profiles being stolen and views being expressed in your name which are not yours.
Can injury caused be insurable? There are emerging conversations about the damage being caused – for example, female politicians feeling forced to come off social media. Techs are now giving users better control over what they see, who can comment on their posts, and so on, and we are likely to see more of that as tech companies are held more to account over the power they have.
Techs are now huge companies, so there is an argument to break them up to force competition. But people use the likes of WhatsApp because they know they can find the people they want to connect with on it. Big tech works because it is big.
What gets missed is that a lot of start-ups developing applications have a clear strategy to be acquired by a big tech, and that creates competition at a lower level.
What the pundits said
Considering that in 2018 the Cambridge Analytica scandal affected opinion, and that even Sir Tim Berners-Lee said he thought the web was becoming more of a force for bad, the space should be more heavily regulated, according to Graeme Trudgill. Huw agreed that tech companies have at least woken up to the impacts they can have on the democratic process. Steve White said our nervousness about platforms is brought about by what we have experienced. He asked whether the firms were now too big to control, and noted that sports teams coming off social media for a weekend sent a very strong message to them.
Might a global authority be needed to police big tech, and how would you get consensus for that? The poll, though, showed 98% support for heavier regulation. Something does need to be done to keep these free and useful platforms fair and appropriate.