This is the age of artificial intelligence (AI). And one of the biggest questions the world is grappling with is how AI should be regulated. Then there’s social media, which humankind can’t seem to get enough of. And the behemoth straddling these worlds is Meta, with its bouquet of three popular social media services—WhatsApp, Instagram and Facebook. Shepherding the company through the complex process of global regulations and the pressing issues of the day is Nick Clegg, President of Global Affairs at Meta. Clegg, who is also a former Deputy Prime Minister of the UK, in an interaction in Delhi with Rahul Kanwal, News Director of India Today and Aaj Tak, and Executive Director of Business Today, talks about AI, regulation, Meta’s latest product Threads and innovation, among other things.
A: AI is not new. It’s been around for decades. There’s a lot of hype at the moment around something called generative AI… But I think it somewhat obscures the reality that AI has been around for years. At Meta, for instance, we’ve been using AI for ages; anything you see on Facebook or Instagram has, in one way or another, been touched by AI already. I think how to regulate it is a question of first working out what harms and problems you’re trying to deal with. Is it intellectual property and copyright? Is it misinformation? Then ask yourself whether the current laws we have on the statute books are sufficient or not. And I suspect it will be a mixture of the two; some of the existing laws we have will be able to be applied to AI, and some new laws will be required. I hope that as those new laws develop, they are developed as internationally as possible, because this technology is bigger than any country, bigger than any company.
A: I think there is as much danger in rushing to regulate something that hasn’t been properly analysed yet as there is in being too slow. Being too fast could also create problems because it means that you suffocate a lot of the innovation that will come from AI, or at least that will be the risk, which would be a great shame, particularly for countries like India. [For] India, it’s not a question of if; it is a question of when India becomes one of the great digital superpowers of the world. It already has the world’s second-largest community of developers. And there are fantastic innovators, entrepreneurs and developers in India who are using AI today. And I think that culture of innovation [that] is strong in India… is something you don’t want to stymie by rushing to pass laws when it’s not always obvious that new laws are necessarily the answer. I do think new laws will be necessary, but I think it’s not a bad thing to take a little bit of time to get it right.
A: Not really, on several counts. First, you can’t build the so-called metaverse… without AI, that umbilical link. And that’s the reason why, far from catching up, we’ve actually been leaders in AI research for years; over the last decade, Meta has open-sourced, shared over a thousand AI databases and models, including very powerful AI models, which help with the automatic translation of many languages, including the numerous languages in India. And recently, we did something that none of the big US tech companies have done so far: We have open-sourced our latest large language model (LLM) called Llama. What does that mean? That means that any academic, any researcher, any developer, any entrepreneur, any budding businessperson here in India—instead of having to build their own LLM at the expense of billions of US dollars—can just download it. It runs directly on Windows… and you can create new large language [model-based] tools in finance, financial services, education, health, [among other things]. I think that approach to open innovation is something we’ve always believed in, and it will really help going forward as well.
A: When you get new apps, you always get this eruption of interest; lots of people use it two or three times and then it falls off... And then you get a core base of users and build from that. And we’ve done that before, multiple times on Instagram, on Facebook with new features. And remember, Threads is a sort of work in progress; lots of new features will be added over time.
But why Threads? Because I think there are a lot of people who are looking for a microblogging site where they can share news and views… particularly when it’s led by people you admire—creators, influencers and so on; they don’t necessarily find Twitter particularly attractive right now and want something that is a slightly kinder alternative. There’s space for more than one kind of microblogging site. The interesting thing about Threads is that we’re building it very, very differently from things like Twitter... so that it will become part of something called the fediverse—where you will be able to interoperably share your content on Mastodon, for instance. It’ll be a much more open platform where people will be able to share content across different sites.
A: I don’t think anyone can say that Facebook itself is not one of the most innovative technologies of the last decade… [and our] huge investments in building a new computing platform are something that we’re pioneering in a way that nobody else is. And by the way, car manufacturers will look at each other’s products; of course, people compare notes and see what is moving and shaking in the market. But look at our big bets—whether they’re on social media platforms, the metaverse, or indeed, our long-standing investments in AI well before it became a major talking point. And just to give you an example of that, one of the foundational AI libraries that everybody in the AI industry now uses is called PyTorch—something that Facebook engineers and researchers came up with. I think you can both innovate and, at the same time, look at how people use technology as it evolves. And then evolve yourself, and that’s exactly what we do as a company.
A: I haven’t seen the latest version of the legislation you refer to, but I very much hope that it will not include provisions to sort of divide up the data cake. Because one of the great things about the internet—particularly the internet outside China—is that it is so fluid, it doesn’t recognise geography. The internet is something that everyone can relish and partake in, build businesses on and communicate with each other through. And that’s also true for social media. And I think the great risk would be if India were to say, ‘Oh, well, we’re going to hoard all this data for ourselves’; and then Vietnam will say, we’ll do that next; and then the European Union; and the US. And before you know it, the global internet, as we know it, will have disintegrated, will have fragmented. That is why we believe that it is in India’s own interest to keep the data flows open, particularly at a time when Europe and the US have just recently entered into a new agreement to ensure continued open data flows across the Atlantic. And I think India, Europe and the US are the tripod for the future governance of the online world. And the more that India, Europe and the US can align and work together, the better for us all.
A: You mentioned research. As it happens, the research is not conclusive. [Because there is] quite a lot of research that suggests that for the vast majority of youngsters, being able to find a community... find people they can associate with and share their experiences with is a very good thing for their own sense of well-being. But of course, for people who are not feeling great about themselves or dealing with challenging issues in their lives anyway, and particularly if they are passively scrolling and not interacting with other people, then it’s not always a great experience. What we try and do is understand that and then find and build features in Instagram which will help both parents and kids get the best experience. Over the last several months, we’ve rolled out 30 new features… you can limit the amount of time on Instagram, with far greater parental controls… I think both with the research and with the new features that we’re rolling out, everyone, whether it’s governments, parents, families, kids, ourselves, [we will] make sure that any experience online for young people is as wholesome and as positive as it can be.
A: The thing to remember about AI and misinformation, or indeed any undesirable, deepfake disinformation, anything that we don’t want on the platform, is, yes, it is true that AI might make it a bit easier for someone to produce a fake image… that’s not new, but you might be able to do it more quickly now. But conversely, AI is [also] our best defence. I’ll give you one very concrete example. The prevalence of hate speech, as a proportion of the total content on Facebook, is today as low as 0.02 per cent.
That means if you’re scrolling endlessly through your news feed and you saw 10,000 bits of content, [only] two bits of content might be hate speech… it has fallen by over 50 per cent over the last couple of years, precisely because of AI. And the thing to remember about content moderation systems on platforms like Facebook is that, from our point of view, it doesn’t matter whether it’s a human being or a robot that produced the bad content; our systems will still try to pick that up, regardless of how it’s been generated… I’m quite optimistic that the latest advances in AI will help strengthen our defences as much as, if not more than, they help people produce bad content.
Interview : Rahul Kanwal
UI Developer : Pankaj Negi
Producer : Arnav Das Sharma
Creative Producer : Raj Verma
Videos : Shakshi, Gaurav Khera