Programme Updates

Social Justice in Tech: Big Tech – the regulation battle lines are being drawn


The following article is based on a roundtable conversation held at The Delaunay in London on 23 September 2019.  It is the second in a series – supported by Stifel – looking at the reputational and regulatory challenges faced by Big Tech on both sides of the Atlantic. A write-up from the first discussion can be found here.

The series seeks to explore the considerable power that such companies now possess and how it is wielded. A podcast covering some of these issues was recorded before the roundtable and can be listened to here.

Big Tech has a regulation struggle on its hands. Even if Apple, Amazon, Facebook and Alphabet don’t outwardly appear to be at battle stations, insiders acknowledge that what is going on at the moment on both sides of the Atlantic amounts to manoeuvres that could blow up into all-out conflict.

Margrethe Vestager has been re-appointed as Competition Commissioner in Europe and, on the other side of the Atlantic, the Department of Justice, the Federal Trade Commission and the House Antitrust Subcommittee all have enquiries underway; on 9 September this year, 48 US state Attorneys General launched an antitrust probe into Alphabet. No wonder the lobbyists in Washington and Brussels have never been busier, making sure Big Tech’s side of the story is whispered into the correct ears.

In response to a growing disquiet – at least among the chattering classes – Facebook has suggested it requires its own Supreme Court to regulate its behaviour. The company has said that it wishes to fill its quasi-judicial panel with people from a variety of different backgrounds. “There’s going to be a set of people who serve on this board who make different people within that group uncomfortable,” noted Facebook’s Director of Governance and Global Affairs, Brent Harris.

In a blog post Mark Zuckerberg added: “The board’s decision will be binding, even if I or anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people’s privacy.” But, much as the US President nominates justices to the actual Supreme Court in Washington, members of the panel will all be approved by Zuckerberg.

When Google tried something similar with an Artificial Intelligence Ethics Committee, the whole process unravelled within a fortnight after a howl of protest. Retreating, temporarily hurt, Google said: “It’s become clear that in the current environment, the Ethics Committee can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”

At the second Social Justice in Tech roundtable, which took place in London in late September 2019, we sought some opinions about where next for Big Tech from a number of experienced and well-informed stakeholders.

The roundtable heard opening provocations from two contributors: Baroness Denise Kingsmill, a lawyer by training and one-time Deputy Chair of the UK Competition Commission, and John Thornhill, Innovation Editor at the Financial Times.

Denise began: “From my experience, it’s always preferable if companies are involved in their own regulation. Regulating from on high isn’t the best sort of regulation because it tends to involve acting on a crisis that is already past rather than what is coming down the track. And tech is moving very fast all the time. Fourteen years in the House of Lords have shown me, to my continuing astonishment, that business people don’t understand the pressures on and concerns of politicians, and vice versa.

“Politicians are supposed to be expert in establishing what the public interest is, but the public interest remit is now off the agenda in the realm of regulation. So you are left with consumer choice as the sole point of focus. The critical issue is to strike the balance between the commercial requirements of a company to do business legitimately, to grow and to reward its employees and owners, and what the public interest might be in an area such as, for example, climate change.”

Several members of the panel agreed that, with tech, it is very hard to specify what, precisely, the harms governments might like to remedy actually are. What Facebook, Google and Microsoft do is often highly efficient, popular and free of charge. Nobody is forced to upload pictures of their kids on holiday to the Facebook platform. What is the public interest? What does the broad population want?

Eithne O’Leary from Stifel thought, however, that a marked change in public opinion has occurred in recent years: “I think public sentiment is moving rather faster than many have noticed, whether on climate change or the fundamental nature of capitalism. I for one would never have expected to see the FT debate questioning whether capitalism is a universal good. But that’s happening. The remit of, and therefore the focus on, the tech world is quickly becoming wider and regulatory attention will inevitably result.”

However, John Thornhill made the point that the most effective regulation is free market competition and, “the nature of a data economy is such that the people who have data on a massive scale in the way that Amazon does are both creating and hosting markets themselves. Amazon is, in effect, an incumbent insider trader in its own market place because it has more information about the market, how it’s going and the pricing structure, than anyone else.”

In his provocation John laid out several concerns: “Technology is a phenomenal engine of improvement which has enabled progress. The Oxford Internet Institute has recently reported that 79% of the UK population think it’s a positive thing, a force for good. However, I have three areas of deep concern, which are taken from three books. They frame the debate well.

“The first question is: has technology changed the very nature of capitalism and, if it has, what does that mean for the institutions by which we govern our world? That argument is made very powerfully by Shoshana Zuboff in her book The Age of Surveillance Capitalism. Her argument is that just as labour, land and money were turned into commodities, now we are monetising human behaviour and human attention. This is a new and unprecedented form of capitalism.

“You can kick back against that and argue, as Tim Wu did in The Attention Merchants, that other people have long monetised attention, whether through radio, television or newspapers. That’s what we do at the Financial Times. We try to monetise people’s attention.

“However, the argument is that this is now happening at a scale, a speed and a level of intrusiveness that are unprecedented, that this is a different level of capitalism, and if that argument is right then we need different institutions and different forms of regulation.

“The second question is: are our technologies, on balance, labour-replacing or labour-enhancing? The Technology Trap by Carl Benedikt Frey addresses this well – he contrasts what happened in the Industrial Revolution with what happened in the 20th century. His argument is that in the Industrial Revolution technology was overwhelmingly labour-replacing. The Luddites were right. The Industrial Revolution led to massive progress, but it also meant a miserable life for the three generations of people who had to live through it, and the workers in particular, whose jobs were displaced by technology, were not the beneficiaries.

“Contrast that with what happened in the 20th century, where most technology was labour-enhancing: electricity, the washing machine and flight enabled living standards to rise, enabled women to enter the workforce on a mass scale and enabled incomes to rise, and pretty much everyone benefited from 20th-century capitalism. What are we now experiencing in the 21st century? Are we going back? Is the fourth industrial revolution a replay of the first, or is it something very different?

“The third question – and maybe this is a little too abstract, but I find it the most fascinating question of all – is: what happens if we really do create a super-intelligence that leads to an intelligence explosion? That question, I think, is posed brilliantly by Stuart Russell, a professor of computer science at Berkeley, who has just written a book called Human Compatible. He asks what happens if we succeed in creating artificial general intelligence. That would be the last invention that man need make, because it would change all the rules of the economic game and all the political rules as well.

“Russell argues that we need to come up with properly beneficial AI. We should use AI as a way of optimising human preferences, but the killer question is: how on earth do you do that? We are not very good at articulating human preferences. They may vary over time; they differ between cultures; they differ between generations – the young, the old, the unborn. And utilitarianism – the greatest happiness for the greatest number of people – is not a terribly useful way of looking at this, because an AI might interpret human preferences in very different ways from the ones we envisage. The classic case is King Midas, who had a very strong human preference which really didn’t end well. All of these questions involve relationships of power, economic or political, and the Big Tech companies are at the centre of all of them, given that they have the means, the resources and the data to shape them.”

John’s podcast interview recorded after the dinner can be heard here:

The issue of tax was brought up by Spencer Hyman, who acts as an independent advisory board member at LinkedIn. “What worries me is that we are possibly missing the really, really big question here. If you really want to influence these companies, it’s actually not through regulation that you will do it. I actually think you need to change tax policy. The biggest problem at the moment with the tech industry is the whole way the tax system is structured. Basically, tax is based on the idea that you tax profits, because that’s what companies were meant to make. Amazon has found some unique ways around that, by never making a profit and by just destroying everybody else. It has achieved this by playing governments off against one another, but we are in a crazy position whereby, if you could fix some of the tax issues, we might get some of these companies to behave in a slightly more responsible way.”

Indeed, a week after this event the OECD came forward with long-anticipated proposals on international tax reform. The aim, the OECD said, was to create a new and “stable” international corporate tax system, because “the current rules dating back to the 1920s are no longer sufficient to ensure a fair allocation of taxing rights in an increasingly globalised world”. Jericho Chambers’ Responsible Tax project, run with KPMG, is also tackling these issues.

Just how capable the UK government is of bringing in coherent and effective regulation was questioned by both Matthew Painter of Ipsos MORI and Steve Moore of Volteface. Matthew pointed out that his organisation’s research suggests that government is currently trusted far less than the tech sector. Steve Moore added: “These technology companies can’t be relied upon to regulate themselves, but good regulation has to be sensitive to the capacity and capability of government as well. I’m involved in an exercise on regulation [of the use of medicinal cannabis] at the moment where there is no institutional capacity to regulate effectively. The Government cannot do it. In fact my industry members are carrying the burden of that, but I think any good regulatory process requires more intelligent consumers and a proportionate response from government, based on public debate. I’m also old enough to remember going to seminars where everyone was rhapsodising about information being free and how those kinds of ideas shaped business models. At the time there was very little challenge to that idea.”

Steve’s podcast interview can be heard here:

Joanna Arnold of Access Intelligence made the important point that the frictionless nature of much of what Big Tech does means it stays below the radar of public concern. Although a former director of GCHQ said recently that Google and Facebook now hold far more information on users than his previous employer could have dreamed of acquiring, there is no discernible widespread public disquiet.

“There are swathes of demographics,” said Joanna, “that actually don’t care about the use of their data. It’s free and convenient – they don’t mind what is done.”

Joanna’s podcast interview can be heard here:

The sense of consumer unawareness of, and lack of concern about, what is being done with their data was noted by Sonia Livingstone. “I am intrigued by the idea that it’s very hard to clarify what harms we fear. I work in the area of children’s online experiences. We’re in the land of prognostications and predictions, but we don’t have evidence about anything to do with the future. We just don’t know what could go wrong and what could be good. But what does concern me are path dependencies. There is a sense that what we’re doing now will place us in a straitjacket for the future. We’re making possibly fateful decisions whose consequences are unknown and unintended. Twenty years back, a freedom of speech argument was applied to Big Tech companies and their platforms – one that suggested they should not be responsible for content on their platforms – and look where that has led us. And now we are desperately trying to backtrack from the results.”

Finally, Louis Jeng from Stifel made an important point: “I have little doubt that, given that the Big Tech companies now hold data on almost 3 billion of us, they will be regulated – and they expect it. However, no mention has been made of China, which presents a marked contrast to what is occurring, and to the difficult conversations that are happening, in the West. Some of the most advanced AI processes and companies are actually in China, and they proceed, unencumbered by questioning or regulation, to use AI in mass surveillance of populations to garner insights and therefore control. We talk about protecting our crown jewels in the Western hemisphere, but make no mistake: there is an arms race in AI, there is a war for the talent to help win it, and for those who win this race for supremacy the rewards will be very great.”

A short while after the supper, Apple removed a mapping application used by Hong Kong protesters from its App Store, just days after being criticised by Chinese state media for allowing it to be downloaded. Thinking differently, the Apple way.

The next roundtable discussion will be held in November.

For further information about the Social Justice in Tech programme or to get involved, please contact becky.holloway@jerichochambers.com.

The discussion was attended by:

  1. Joanna Arnold, CEO, Access Intelligence
  2. Stephen Avila, Chief Commercial Officer, Project Etopia
  3. Jenny Burton, Commercial Director, Aspirant Analytics
  4. Adam Grodecki, Founder and Director, Forward Institute
  5. Matthew Gwyther, Partner, Jericho Chambers
  6. Spencer Hyman, Partner and Co-founder, Navigator Commerce and Independent Advisor/Board Member LinkedIn, YoyoGames and EPOS
  7. Louis Jeng, Managing Director, Stifel
  8. Nader Khosrovani, Director Marketing Communications, Five AI
  9. Baroness Denise Kingsmill
  10. Rebecca Lawrence, MD, F1000
  11. Sonia Livingstone, Professor of Social Psychology in the Department of Media and Communications, LSE
  12. Rose Luckin, Professor of Learner Centred Design, UCL
  13. Steve Moore, Strategic Counsel, Volteface
  14. Eithne O’Leary, President, Stifel
  15. Matt Painter, Research Director, Ipsos MORI
  16. Robert Phillips, Co-Founder, Jericho Chambers
  17. John Thornhill, Innovation Editor, FT
