Transcript: Fighting Disinformation With AI | Lyric Jain @ Logically
London-based startup Logically uses a mix of AI and human intelligence to detect fake news.
Over 120 data scientists, engineers, analysts, developers, and investigators have joined hands to provide organizations with tools that identify and disarm misleading information.
UK-based entrepreneur Lyric Jain shares his fascinating journey of building Logically!
Read the text version of the episode below:
[00:00:00] Lyric Jain: Hi everyone, I'm Lyric, the founder and CEO of Logically. It's a pleasure to be with you today.
[00:00:04] I was born to North Indian parents in South India, so we were one of those North Indian families in the middle of Karnataka that didn't really speak the local language. My dad is a bit of an entrepreneur, but of a very different variety than an innovation-driven enterprise. His life story is almost more interesting than mine in some ways. He was the son of a headmaster in a village in Haryana. The family was reasonably well off, they weren't the typical middle-class family, but unfortunately they were struck down during, I believe, one of the wars and pretty much lost everything that they had.
[00:02:03] And dad rebuilt himself from scratch. After marrying mom, he did a job as basically a textile worker in a factory for a few years. He has stories of working some ridiculous hours, which I can't even compete with: multiple weeks at a time without coming home from the factory. Eventually he was promoted to manager, and then he was able to raise money to build his own textile plant. Interestingly, I didn't give him credit for being innovation-driven, but he actually ended up coming to the UK to buy old textile mill equipment, factory machines, et cetera, because a lot of factories in the UK were closing, and moved them to India. That's where his entrepreneurship journey really began. He set up a couple of factories, then got into a little bit of real estate, et cetera. Eventually, when my sister moved to the UK, we all moved to the UK. I was 12, maybe 12-13, halfway through eighth standard. I remember life being pretty easy, hey, maths and science are much harder in India, but I really struggled with languages. French and Spanish, I'd never done them, and I really struggled with them over the first couple of years. I made my way through school, and then came to a crossroads moment where I was quite intrigued by the world of finance but also by the world of engineering. Pre-university, the challenge was, hey, which direction do I go in? Good advice from multiple people ended up pointing me towards the engineering route. I went to Cambridge, had a great time with some great people. But even then, my head was still entranced by finance, and investment banking in particular.
[00:03:31] Akshay Datt: In Cambridge, you did, like, a Bachelor of Science with a computer science specialization or something like that?
[00:03:36] Lyric Jain: No, it was general engineering in Cambridge. When I went to MIT, I saw it as my one opportunity to say, hey, if I'm ever getting into computer science, it's here, otherwise I'm never really getting into it. I'd done some computer science at Cambridge, but, no discredit to Cambridge, they didn't really teach it very well. I think Cambridge did give me the solid foundation in computer science to be able to pursue that at MIT, and at MIT I really got into artificial intelligence.
[00:04:01] Akshay Datt: So from Cambridge to MIT, how did that happen?
[00:04:05] Lyric Jain: So this was part of a program that Cambridge and MIT had, because it was a joint master's and bachelor's.
[00:04:10] Akshay Datt: Okay. Like an exchange.
[00:04:12] Lyric Jain: Yeah, because it was a joint master's. Depending on your first-year results, your supervisor, et cetera, could nominate you to go to MIT, and the other way around: people at MIT could be nominated to go to Cambridge. It ended up being a pretty varied MIT experience, and that was fun. But that's really where the origin story of Logically begins.
[00:04:28] Akshay Datt: Okay. I can see on your LinkedIn you started up while you were still studying. So like how did that happen?
[00:04:33] Lyric Jain: It's unfortunately, like, a series of really strange events, a bit of family tragedy in 2014-2015. My grandma was 86 at the time, but she still used WhatsApp, and she got this tirade of messages saying, hey, drink the special green juice, give up your cancer meds and you'll live longer. And unfortunately, we lost her a lot earlier than we ought to have. At that time no one, or very few people, really thought of misinformation or disinformation as a problem, and those concepts were pretty poorly defined at that stage. I just thought it was fraud. But I really started to develop an interest in social media information dynamics in 2016, particularly in the run-up to the European referendum. My experience there was quite novel, because home for me in the UK is a little town called Stone, in the middle of nowhere.
[00:05:17] And it happens to be the highest Brexit-voting constituency in all of the UK. And where I was at the time, Cambridge, happens to be the highest Remain-voting constituency in all the UK. So it was this perfect storm of having one leg in both poles, almost. I vividly remember one of these moments where a friend from Stone came over to Cambridge, and I compared feeds with a friend from Cambridge: completely different information, rife with misinformation. And obviously they made very different decisions. But really it was the misinformation aspect, how much of that was creeping into their feeds, and how much social engineering, almost, was creeping into their feeds, that was quite interesting for me. Because at that time, I think very few had made those kinds of observations.
[00:05:57] But still, it was like, hey, big problem, it's like world hunger, someone will solve it, I don't know if it's for me to pursue. It was really at MIT where that problem met potential solutions, during my time at CSAIL and the Media Lab there, seeing how AI was evolving, particularly around content understanding. NLP, et cetera, had started to be reasonably well evolved at that point, and this area of NLU, natural language understanding, had started to become quite interesting. There were some promising breakthroughs that year and the year before, so I really wanted to start applying some of those to this problem context.
[00:06:29] Akshay Datt: What is CSAIL?
[00:06:30] Lyric Jain: CSAIL is the AI lab at MIT. So there's the MIT Media Lab and there's CSAIL, which is the AI lab, and I had one foot in both those camps, doing a lot of research, basically for my coursework, pretty much. That's where I took a focus, particularly on content assessment and content risk assessment, et cetera.
[00:06:48] Akshay Datt: So these labs are doing fundamental research, and they would be, I think, global leaders in fundamental research. What are these labs like, for people who don't know?
[00:06:57] Lyric Jain: Absolutely. CSAIL is one of the top labs for AI research out there, along with probably Stanford, and these days Toronto is doing some ridiculously good work. Those two or three labs are probably right up there when it comes to global state-of-the-art AI research. These days there's a bunch of private-sector or semi-private-sector and non-profit labs that have entered the fray as well, OpenAI and various private-sector companies. But in terms of pure academia, I'd say CSAIL and Stanford are way up there. It gave me the opportunity to better interface with researchers, but also to really focus some time and energy on this problem around misinformation and disinformation. These were still early days, late 2016, early 2017. For me, the technical proof point to reach was: could we quickly hack something together that identified misinformation on a social media platform? It was Facebook. How do we really quickly build something that can identify misinformation on Facebook? That would be the test of whatever we built: is it good enough, is it doing a better job than whatever the existing platforms and their measures are doing? And yeah, we were able to build that within a few months.
[00:08:00] And that really was the milestone for us of saying, hey, there's really a "there there" here that very few other people in the world are going after. But it's clearly going to be a substantial challenge moving forward, given how democracies are being moved because of stuff like this, but also these individual high-risk events, such as what my grandma experienced, happening because of it. That's when the Logically journey really began. It was a solo-founder journey. I tried spending some time looking to find a few people to build Logically together with me, but couldn't really find anyone with the perfect complementary set of skills, et cetera.
[00:08:32] But yeah, I ended up going solo and building a team, an early team, when I returned to the UK. During the early days, our focus was actually: let's build on this technology and get the efficacy of our methodology up, so that it works more robustly, but at the same time, let's position it where the product is. And during the early days, our focus was very much consumer. It was, hey, if we're able to build a better news experience than pretty much every one of these social network companies or news aggregator companies that are out there, we're probably going to get a lot of traction.
[00:09:03] And after a couple of attempts, and gradually improving execution, we managed to find some degree of product-market fit, particularly when it came to big crisis events, be it elections or even the early days of Covid. When we launched the app, it had hundreds of thousands of daily active users during those days and weeks. But then retention was horrible.
[00:09:22] Akshay Datt: So you were a solo founder, and you hacked together that MVP of a tool which screens for misinformation and tags it. What was that MVP which you initially built? Obviously later on, you told me, you turned it into a consumer product for news, let's say like a Google News kind of product, I'm guessing. But what was the original MVP you built?
[00:09:40] Lyric Jain: Oh, the original MVP was effectively: give me a social post, be it a long-form article or a single claim on social media, and can we check it? Can we check whether this contains any degree of misinformation risk? It was effectively just a script, something that we just had an API for. That was the original technical proof point, pre-company, just a technical milestone of saying, yeah, this is theoretically, scientifically possible. We've got to improve it in a million different ways, but the concept is viable.
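The "script with an API" MVP described here can be imagined as a single function that takes a post and returns a coarse risk estimate. The sketch below is purely illustrative: the marker list, thresholds, and field names are invented assumptions, not Logically's actual method.

```python
# Hypothetical sketch of a "give me a post, get back a risk score" script.
# All heuristics and thresholds here are invented for illustration.

SENSATIONAL_MARKERS = {"miracle", "cure", "shocking", "they don't want you to know"}

def assess_post(text: str) -> dict:
    """Return a crude misinformation-risk score in [0, 1] for a post."""
    lowered = text.lower()
    hits = sum(marker in lowered for marker in SENSATIONAL_MARKERS)
    # More sensational markers means higher risk, capped at 1.0.
    risk = min(1.0, hits / 3)
    return {"text": text, "risk": risk, "flagged": risk >= 0.34}

result = assess_post("Shocking miracle cure doctors hate!")
```

In practice this function would sit behind a small HTTP endpoint, which matches the "just a script with an API" description in the interview.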
[00:10:07] Akshay Datt: How did it give a score? Essentially it would give a score of how authentic a post or article is. How would it compute that score?
[00:10:15] Lyric Jain: Yeah, so the early days were fairly primitive. I'm glad that pretty much everything I built back in the day, every single thing, has been thrown out, so that's good. Some of the primitive methodology has been built on, but at that time it was, hey, let's just look at the source and let's just look at the content. In terms of source: let's have an index of how credible different organizations, et cetera, are.
[00:10:34] Akshay Datt: So like a New York Times or a Washington Post as the source would get a higher score versus some unknown.
[00:10:40] Lyric Jain: Yeah. And now those methods have become quite a lot more complex, going into domain expertise, funding sources of organizations, et cetera. But at that time it was fairly primitive. And the way we looked at misinformation risk within the content would be just comparing articles, et cetera, to each other. So it was almost a popularity-voting mechanism: hey, a lot of credible sources are saying the same thing, then it's probably true; a lot of non-credible sources are saying the same thing and not a lot of the credible ones, so it's probably false. It was a pretty naive methodology, but in terms of robustness and accuracy it ended up being pretty good. That alone is sufficient to get into the nineties when it comes to the efficacy of misinformation detection. But really moving the needle from something that works 80, 85, 90% of the time to 98, 99% of the time has been the challenge of the last few years. Gaining that next 10% in performance has been the challenge that's taken us another 180 people.
[00:11:32] Akshay Datt: How did you feed it data? Because what you're telling me sounds like it must consume a lot of news and posts and articles to really be accurate. So how were you feeding it data?
[00:11:44] Lyric Jain: Yeah, so this is also one of the costs of being an early mover: we had to build a lot of our own scrapers, et cetera. These days it's pretty easy to scrape stuff, and there are a hundred different scraping service providers, but back then there were only some frameworks, like (inaudible), et cetera. So we ended up cobbling those together to build our own scrapers for multiple different news websites. We ended up licensing some content as well, and that was a lot of our database of long-form content. And at that time we had the Twitter API, which we licensed from Twitter, to get some of that social context.
[00:12:19] And that was about it. That was all the data that we needed. But when we moved into MVP land, we knew that data, and data scarcity, was going to be the biggest challenge in this space. Even before really doubling down and investing in our engineering teams, we started building out our content assessment teams, because fundamentally that's the capacity that's been lacking globally in this space.
[00:12:39] There's a bunch of fact-checking organizations, there's a bunch of credibility assessment organizations, and they're doing some really important, powerful work. But about three, four years ago, capacity was pretty constrained. So we ended up building in-house capacity for that, bringing in about a dozen people, which is quite a lot for an early-stage startup, to focus purely on being that knowledge base for us and on building a methodology for how we assess the credibility of websites.
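The hand-built news scrapers mentioned above typically do little more than pull a headline and body paragraphs out of each site's HTML. A minimal standard-library sketch, parsing a supplied HTML string rather than fetching over HTTP (real scrapers would also handle each site's specific layout):

```python
from html.parser import HTMLParser

class ArticleParser(HTMLParser):
    """Extract the <h1> headline and <p> paragraphs from a news page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.paragraphs = []
        self._tag = None  # tag we are currently collecting text for

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "p"):
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag == "h1":
            self.title += data.strip()
        elif self._tag == "p":
            text = data.strip()
            if text:
                self.paragraphs.append(text)

html = "<html><h1>Election moved</h1><p>Officials deny the claim.</p></html>"
parser = ArticleParser()
parser.feed(html)
```

Per-site scrapers like this then feed a shared article store, which is the long-form content database described in the interview.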
[00:13:04] Akshay Datt: So you were talking about that in-house team of a dozen people. These dozen people are the ones who decide: this is an authoritative source of news, this is not. Is that what they were doing?
[00:13:15] Lyric Jain: Yes, partly. Even within them, it wasn't up to one person. We had this almost internal jury system, where people needed to come with different views, and then an assessment would be made collaboratively within that team.
[00:13:28] And there would be things like (inaudible) agreement, et cetera, that we'd take into account before coming up with the Logically score. Equally, at that time a lot of sentiment analysis capabilities in the field were pretty basic, so figuring out how we model stance and entity sentiment, et cetera, was also something that team helped us build out.
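A jury setup like the one described usually measures how well reviewers agree before committing a collaborative score. One standard measure (not necessarily the one used here) is Cohen's kappa, which corrects raw agreement for chance. A minimal two-annotator sketch with invented labels:

```python
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa for two annotators' label lists of equal length."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both independently pick the same label.
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[label] / n) * (pb[label] / n) for label in set(a) | set(b))
    return (observed - expected) / (1 - expected)

ann1 = ["credible", "credible", "not", "not"]
ann2 = ["credible", "not", "not", "not"]
```

Here the two annotators agree on 3 of 4 items (0.75 raw agreement), but chance alone would produce 0.5, giving a kappa of 0.5, moderate rather than strong agreement.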
[00:13:45] And also just our libraries of misinformation and disinformation. Really, that's what gives us a lot of the robustness we have today, because that's the data we've been building for three, four years now. And that team has scaled; we have some external partners that help us, and we have platform partnerships that help contribute to that data pool now. But fundamentally this is a data scarcity challenge. One of the challenges of misinformation detection is how proportionately little misinformation actually exists relative to all information on the internet.
[00:14:16] It's in single-digit percentages, it's not like 50%, which makes it a tricky classification problem, or strictly a tricky detection problem. That means we need good representative coverage of the different styles and modalities within this information space, and that's what the team really started mapping out. I wouldn't say we've mapped out every single type. Today we cover the geopolitical context pretty well, and we cover some of health really well. But when it comes to, say, financial misinformation and disinformation, that's pretty much untouched by Logically, so there's still a long way to go. But yeah, that's kind of the roadmap there, and a lot of the efforts those teams are still making.
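The class-imbalance point above is why plain accuracy is the wrong yardstick in this space: with misinformation at single-digit percentages, a classifier that flags nothing is "right" most of the time. Precision and recall on the rare positive class are what matter. The counts below are toy numbers, not Logically's:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision and recall from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 1000 posts, of which 30 (3%) are truly misinformation.
# A do-nothing classifier that flags nothing is still 97% "accurate":
accuracy_do_nothing = 970 / 1000

# A detector catching 27 of the 30, at the cost of 9 false alarms:
p, r = precision_recall(tp=27, fp=9, fn=3)
```

The do-nothing baseline scores 97% accuracy with zero recall, while the useful detector shows up only through precision (0.75) and recall (0.9), which is why imbalanced detection problems are evaluated this way.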
[00:14:46] Akshay Datt: How did you fund this? You started this when you hadn't even graduated yet, and you hired people. How did you make that happen?
[00:14:54] Lyric Jain: Yeah, that was thankfully a lot to do with family support. It was a leap of faith for the first few years.
[00:15:00] We were able to run a pretty bootstrapped operation for the first year or so, before we got our first round of seed funding in from a UK-based VC. But that first year was about getting up to the point of initial traction.
[00:15:12] That was thanks to a lot of family support, but also a huge leap of faith that our initial team took on us. A lot of those folks are still with us today; they're our day-oners. And I always admire anyone across our journey who's taken that leap of faith, because pretty much every year we de-risked our journey.
[00:15:28] Individuals who joined us four years ago, three years ago, two years ago certainly took on a lot of risk in joining us; we were a lot scrappier then than we are today. So yeah, thanks to their investment, as well as family, we were able to build something in the form of that initial consumer application, which got a little bit of traction that warranted a seed investment. But more than the consumer app, what the investment was really in was the underlying concept and the underlying detection methodology, because that was really our core IP.
[00:15:55] Akshay Datt: Okay. What was the consumer app called? How did you position it?
[00:15:58] Lyric Jain: Oh, it was called Logically, very imaginatively named. It was supposed to be this destination, this one-stop shop for news consumption, where people would have automated feeds with these big stories. Each story would be a collection of multiple articles on the same underlying event or issue. It would present an objective summary and a number of bullet points of, hey, here's what's happened, plus multiple viewpoints from across the political spectrum and a timeline of events. All of this was automatically generated and contextualized through our platform. In V1, post-MVP, we also augmented that with a semi-automated fact-checking service: automated fact checking and image verification, supported by our AI as a first pass. And if consumers didn't get an answer through the automation, they were able to ask our fact-checking teams for an answer. That was an amazing experience, particularly during election events. And really, the first test case for us was the Indian elections.
[00:16:58] So during the 2019 Lok Sabha elections, I think, we launched the app and got a pretty good amount of traction. But as soon as May subsided, we just saw the retention numbers completely die. We thought that was a lot to do with our execution, and we knew we had UX/UI issues, so we reinvested in that side of what we were doing, and in time for the Maharashtra elections we relaunched the app that year. But a funny old thing happened then: in that election cycle we actually partnered with the election commission and local law enforcement to identify misinformation during the election. It was really supposed to be branding and marketing for us and our consumer app, but that relationship and that engagement ended up becoming commercial. That was the first bit of revenue we got, and that's where life became a bit interesting for us, because we always knew there was a value proposition for Logically and our technology in working directly with social platforms and with various public-sector agencies, but we hadn't seriously explored it just yet. We were pretty laser-focused until that point on the consumer app story. But on the consumer side of the business, we were still in this place where we saw that retention dynamic again, and we were like, okay, great: there are the US elections coming up next year, but we need to have a hedge strategy.
[00:18:13] We needed to go in for almost a big final roll of the dice for our consumer proposition, in its formation and conceptualization at the time. We reworked a lot of things with the app and launched it ahead of the American elections. We had an interesting feature for live fact checking of the presidential debates, which got, I think, 150,000 viewers, which is pretty big given that the big news channels
[00:18:33] got like single-digit millions; for us to get 150,000, that was pretty big. We got featured in a bunch of places. But at the same time, building on our Maharashtra experience, we started working with battleground states in America in a commercial capacity, to build a software platform POC, like a SaaS POC, for how organizations could detect misinformation that could threaten election integrity, and other risks more generally. So that October we rolled that out with a couple of partners, and that's really been the traction story of Logically for the last couple of years.
[00:19:03] Akshay Datt: I wanna go back to when you raised the seed round through the consumer app. What was your pitch then? Was it advertising? What was the monetization pitch?
[00:19:11] Lyric Jain: No, it was subscription. And at that time we were running a few subscription experiments, and they were going slightly better in the UK than they were in India. The conversion rates in India were pretty poor, like less than a percent.
[00:19:24] Akshay Datt: But subscription would just give them access to, you know, content which is not behind a paywall? It's not like the subscription would also give them paywalled content, that wasn't part of the offering? It was content with fact checking, that was the pitch to a consumer?
[00:19:40] Lyric Jain: Somewhat. There were a couple of tiers involved. There was free, freely accessible content, contextualized in that story concept I shared earlier, with on-demand fact checking, and then some premium content as well. We'd struck up partnerships with some of the premium publishers out there, your FTs, WSJs, et cetera, and their content was going to be part of the Logically experience as well. So we were a bit further along in terms of the commercial aspects. But really the big challenge there for us was retention, and for a long time we put that down to our execution and not the underlying market dynamics. But it feels like, for that kind of value proposition, there is serious and urgent demand during crisis events.
[00:20:20] Because even though we don't actively support the consumer app right now, it's still live, and in different markets, when there are surge events, we see little peaks and troughs in the utility of the app. But there are few ways in which we can commercially monetize that directly. Long term, we still want a way in which we can be in the hands of end users and deliver impact. I think there's still a role for Logically and consumers to work together to deliver impact, but it's just probably not at the top of our priority list, because we're focusing on these high-leverage markets.
[00:20:51] We can certainly amplify our impact by working directly with platforms and governments, and that's the priority for us at the moment.
[00:20:58] Akshay Datt: The Maharashtra pilot, where they paid you for fact checking and flagging fake news: what was the Maharashtra election commission getting? Was it a website where people could see, this is fake news? What was the product?
[00:21:14] Lyric Jain: No. Because this was supposed to mainly be a marketing initiative for us, at this point we didn't think of it commercially. It was this physical war room that we set up in their office, with Logically branding everywhere; we'd taken out space physically in their office, with loads of press and all that good stuff. But the main value prop there was identifying Model Code of Conduct violations, specifically things like, hey, your polling location has been moved from here to there.
[00:21:40] So really not political stuff, but just stuff every single person can agree is wrong, fraudulent, and shouldn't be happening. That's the kind of stuff we focused on with them, and it's their responsibility, during that Model Code of Conduct period, to identify these kinds of things. And the remarkable thing for us was: in the Lok Sabha cycle, the ECI across all of India had found 900 violations. In Maharashtra alone, I think the number we got to was 20,000. So in one state, we found 20 times more than what the status quo found across the whole nation just three months earlier. I think that really speaks to the scale of the challenge that exists in India. And this, again, is away from political misinformation and disinformation, which I agree needs to be handled sensitively. This is just stuff everyone can agree is wrong: hey, your ballot's been moved; hey, don't come to the election because the election's been cancelled; super obvious stuff that no well-intentioned person can say isn't misinformation or disinformation. Those are the kinds of things we focused on. We also focused on foreign interference, and started seeing some degree of foreign activity from the PRC and Pakistan during that campaign, even during Maharashtra, which was quite interesting to us.
[00:22:48] And that got us into some interesting conversations with various stakeholders in India, but that's really where the Logically Intelligence story began, as it were.
[00:22:55] Akshay Datt: So you were monitoring Facebook and Twitter feeds? Like, Facebook and Twitter posts were what you were monitoring and flagging?
[00:23:02] Lyric Jain: That's right. The way we delivered this had little to do with our consumer app; it was a lot of the backend APIs, et cetera, that we had.
[00:23:09] We just used to run them on these batches of content and feeds that either the ECI had access to or we had access to, download the results as a TSV, put them together in a document, and give it to them. We would triage it, maybe prioritize it a little bit, and that's about it. So it was pretty low-fidelity; in some ways it was very hacked together, because this wasn't supposed to be a product for us. But we really learned from that, and 12 months later, when it came to the American battleground states, we had a POC for an entire workflow for how the equivalents of the ECI in the US could define their information environment, define what they would consider a threat within that, and come up with a prioritization framework.
[00:23:49] We'd fine-tuned our models for very high precision and recall, and we had a remediation process built in as well, so customers could identify and respond to misinformation and disinformation through our platform. And that was the first big moment of fit for us.
[00:24:04] And in March last year, we ended up launching that as a commercially available product. But it's not been smooth sailing, still, because even though conceptually there's a degree of product-market fit, there are still very few people in the world who understand how to deal with misinformation and disinformation. And that's where a lot of the team we've been building over the last three or four years comes in: initially a team of assessors, fact checkers and, later, open-source intelligence analysts, who ended up effectively using our platform to deliver various reporting products to various customers. Currently, what we do is a blend of our customers using our platform directly, and us providing our partners with capacity as well as our platform.
[00:24:45] So it's almost that Palantir-esque business model of platform only, or platform plus delivery. And yeah, that's been the scaling story over the last 15 months.
[00:24:55] Akshay Datt: So for the US election, when you had the product ready: I just want to understand that product a little better. This product would monitor social media activity about the elections. Maybe there would be some tags or prominent accounts and stuff which you would identify as related to the election, and it would then create a repository of inauthentic posts, or posts which have a low score. And then, you said, there was a mechanism where consumers could see those posts, like a redressal mechanism? I didn't understand.
[00:25:27] Lyric Jain: Not consumers, but the departments of state, or whoever is responsible for acting on those risks. They can make multiple decisions. Again, it monitors cross-platform, both articles as well as short-form posts.
[00:25:39] We didn't have a lot of multimedia back then, so it was text only and English only. We assessed what had a degree of misinformation or disinformation risk: either because of the content, or because bots were involved, or because a nation-state actor was involved, or because it looked like someone was impersonating an election official, or someone was calling for violence against an election official.
[00:25:58] Again, those aren't all of them, but those were emblematic of the kinds of risks we were identifying on the platform. And each one of those risks, be it an individual account or a piece of content or a piece of activity, could be investigated further through the platform. So we would contextualize it. Again, probably not in October, but in the more recent version of the platform, we contextualize it with: who's it reaching, how many people is it reaching, in which locations is it reaching people, what demographics is it reaching, is it hyper-targeting people or is it pretty general, et cetera.
[00:26:27] And then it gives users options of, hey, what do you want to do about this? Do you want to actively do nothing? Because sometimes that is the best option: don't give them more oxygen. Sometimes it's, hey, this isn't just harmful activity, this is illegal activity, and we need to escalate it to law enforcement or another agency. In other cases it's, hey, this is clearly a platform terms-of-service violation, let's flag it to the platform and see if they agree. And finally, it's coming up with some kind of fact checking or communications in response to a piece of misinformation. Those are examples of some of the remediations that were baked into the platform.
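The four remediation options just listed (do nothing, escalate to law enforcement, flag to the platform, publish a fact check) can be sketched as a simple routing rule. The field names, thresholds, and ordering below are hypothetical assumptions for illustration, not the platform's actual logic:

```python
def route_remediation(item: dict) -> str:
    """Map a flagged risk item to one of four illustrative remediation actions."""
    if item.get("illegal"):           # e.g. threats of violence against officials
        return "escalate_law_enforcement"
    if item.get("tos_violation"):     # platform terms-of-service breach
        return "flag_to_platform"
    if item.get("reach", 0) < 1000:   # low reach: don't give it more oxygen
        return "do_nothing"
    return "publish_fact_check"       # high-reach misinformation: respond

action = route_remediation({"reach": 50000})
```

In a real system each branch would open a human-review workflow rather than fire automatically, matching the "gives users options" framing in the interview.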
[00:26:58] Akshay Datt: The reporting was baked in, or quoting that tweet or post and saying that this is fake news. All of this was baked in, like it could all be done through the Logically product. Yeah. Okay, so I understood this part of it. Now tell me the next part, which you described as a platform, or a platform plus service. Help me understand that a bit better.
[00:27:18] Lyric Jain: Sure. It really stems from the challenge of there still not being enough capacity in the counter-mis/dis space. A lot of the organizations that need to tackle misinformation and disinformation don't have specific analysts or dedicated resources to go out and use platforms such as Logically to identify misinformation or disinformation. Because at the end of the day, as a platform, there needs to be a user at the end of it to really deliver value. And most organizations don't have that.
[00:27:46] Akshay Datt: So you're saying that this act of reporting, versus posting that this is fake news, versus escalating to law enforcement, is something which you also provide as a service, because often...
[00:27:58] Lyric Jain: That's right. But it's not just that act, it's setting up the information environment. So say you're an election commission in India and there are elections coming up in two years' time. You need to figure out: what's your monitoring scope, what's your information environment?
[00:28:13] You're clearly not gonna monitor 10 billion pieces of content every day, because most of what's on the internet is irrelevant to you. That's unfeasible for most organizations, so you're probably not gonna do that. So what's your scope? How do you define what is election and election-adjacent? What's an MCC violation and MCC-violation-adjacent? The platform can help someone do that, but it needs a degree of subject matter expertise and a user who's trained to do it. Then you need some kind of framework for what you prioritize, because the number of threats here will be in the thousands, or even the tens or hundreds of thousands. So what's your prioritization framework? What do you as an organization care most about, based on your policies? Logically can't really decide that. We can advise, yes. But do you care more about, and it's as explicit as this, an election worker getting killed by a conspiracy theorist, or a million people believing that the election was hacked?
[00:29:08] Which one is a bigger risk to you, and which one do you care more about? It's making those kinds of policy decisions. Again, it won't be as blunt as that, but effectively: what's your prioritization framework? That's for a customer to build. Logically can advise; our services team can obviously consult. And then it's about, okay, what does proportionate and effective response look like? A lot of inexperienced people in the space will say, Hey, it's misinformation, it's disinformation, let's just take it all down. No, you'll make the problem a lot worse by just doing that. So proportionate, effective response means looking at it and then making a calculated decision based on what the potential impact would be of taking something down, or reporting something to a platform, or escalating it, or actively doing nothing, or putting out a fact check. Again, we're baking all of those things into the platform as part of the roadmap to make it easier for future users. But for today, that's a degree of specialism and expertise that users need to have. Users need to be trained and certified to be able to use the platform. Some organizations, including in India, have those super-expert users. But a lot of organizations don't.
[00:30:09] And that's where our accredited teams can step in as well, be it our fact checking teams or our open source intelligence teams, and provide that capacity should it be needed.
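The prioritization framework described above, where the customer sets the policy weights and the platform only scores and ranks, might look roughly like this. Every category name and weight here is invented for illustration:

```python
import math

# Hypothetical prioritization config: the customer assigns weights to
# harm categories based on their own policies; the vendor only advises.
PRIORITY_WEIGHTS = {                    # illustrative numbers
    "violence_against_officials": 10.0,
    "election_hacked_narrative": 6.0,
    "generic_misinformation": 1.0,
}

def priority_score(category: str, reach: int) -> float:
    """Rank a threat by policy weight times log-scaled audience reach."""
    weight = PRIORITY_WEIGHTS.get(category, 0.5)   # unknown categories rank low
    return weight * math.log10(max(reach, 10))
```

The log scaling reflects the trade-off in the interview: a low-reach but high-severity threat (a violent threat against an election worker) can outrank a widely shared but lower-severity narrative, depending entirely on the weights the customer chooses.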
[00:30:18] Akshay Datt: What do you mean by open source intelligence teams?
[00:30:21] Lyric Jain: It sounds like a really fancy term, but one way to think about it is that they're just expert Googlers, expert researchers in the online landscape. I do them a disservice by calling them that; it's a bit more complex than that. Effectively, it's a discipline within intelligence gathering and intelligence analysis. So you might have heard of signals intelligence or human intelligence; the James Bond-y stuff is human intelligence.
[00:30:43] Or at least some part of it is. And open source intelligence is really anything that's in the open source domain, the publicly available, publicly accessible domain. How do you build intelligence and a common operating picture as a result of what exists in the open source domain?
[00:30:56] How do you detect threats as a result of what exists in the open source domain? That's really that open source intelligence discipline.
[00:31:02] Akshay Datt: The engagement with a body, like a government body, like an election commission, starts first with scoping. So what does scoping mean? Does it mean that they give you keywords, or geo-tags, like a location, as in, I wanna monitor this location? Do they give you the accounts of the people who are standing for election, so that those accounts can be monitored? What all comes into scoping?
[00:31:27] Lyric Jain: All of the above. And it has to do a little bit with who the organization is. So when it comes to, say, accounts: if you wanna monitor accounts, you have to be a very specific type of organization with very specific authorizations for Logically to ever monitor accounts. Because that's almost surveillance; again, depending on the regulation of a particular country, that's pretty much on the surveillance side of things. And if you have the authorization, we'll obviously do it and the platform will do it. But usually it's defined on the basis of content, or location, or what audiences it's potentially reaching. So relevance in a given location, or for a given community, or things such as that. It's a combination of all those factors. We have a love-hate relationship with things like keywords. They're a very blunt instrument. Think of it as, hey, someone wants to look for the word bomb because they're looking for, I don't know, threats of car bombs. But someone saying, Hey, this car is the bomb, or, Ah, that song was the bomb, is going to make that a pretty poor signal.
[00:32:24] So that's the whole point of a lot of the intelligence systems we have supporting our ingestion as well as our threat detection systems: to filter out a lot of those false positives. It's scoping and figuring out what the information environment looks like on the basis of content, accounts, and activity, as well as on the basis of location and demography.
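The "bomb" example shows why bare keyword matching is a blunt instrument. A toy version of the idea, a keyword hit filtered against known idiomatic uses, could look like this. Logically's real filtering uses trained models, not hand-written rules; this is only a sketch of the false-positive problem:

```python
import re

THREAT_TERM = "bomb"

# Idiomatic uses that a bare keyword match would wrongly flag.
SLANG_PATTERNS = [
    re.compile(r"\b(is|was)\s+the\s+bomb\b", re.IGNORECASE),   # "this car is the bomb"
    re.compile(r"\bbomb(ed)?\s+(the\s+)?(exam|test|interview)\b", re.IGNORECASE),
]

def naive_match(text: str) -> bool:
    """Blunt keyword search: flags every occurrence of the term."""
    return THREAT_TERM in text.lower()

def filtered_match(text: str) -> bool:
    """Keyword hit minus known idiomatic false positives (toy heuristic)."""
    if not naive_match(text):
        return False
    return not any(p.search(text) for p in SLANG_PATTERNS)
```

Even this tiny rule list halves the noise on slang-heavy social text, which is the gap the interview says machine-learned ingestion filters are there to close.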
[00:32:43] Akshay Datt: And so this is one part of your business, let's say the B2G business, where you're selling to government organizations. What about platforms? Do you also sell something to platforms directly? Do Facebook or Twitter use your product or service?
[00:32:59] Lyric Jain: Yeah, I think it's publicly known that we work with Facebook, Instagram, and TikTok. We work with them partly through our platform and partly through our fact checking. Again, the modality there is pretty similar. The only exception in the platform case is that sometimes they want us to provide the information, very much how we do in the government context, but a lot of the time they already have feeds.
[00:33:22] They have feeds because their users have flagged various things on the platform as being potential misinformation. And that enters into our queues, either our automated queues or our teams. It then goes through the assessment stage, again either through our services team or through our platform, and comes up with an assessment. That assessment then goes to the platform, and the platform's then responsible for what it does on the basis of that assessment. They have their own policies. Facebook's policy is slightly different, TikTok's policy is slightly different from Twitter's policy, which is slightly different from Google's policy. So based on these assessments, they enforce their policies.
[00:33:55] Akshay Datt: So essentially, when someone hits "report this post" for inappropriate content, that post will come to you for their, like...
[00:34:03] Lyric Jain: Only if it's misinformation.
[00:34:04] Akshay Datt: Oh, okay. When someone is reporting, they're asked to give a reason, like there's a dropdown or something for...
[00:34:09] Lyric Jain: That's right. So if it's hate, or if it's child-safety related, et cetera, that's not our bread and butter. Many other organizations in the world do powerful work on that. For us, the domain where we specialize is misinformation and disinformation and associated harms, things that occur as a result of that underlying misinformation and disinformation. That's our core specialism.
[00:34:30] Akshay Datt: Okay. So when someone is tagging it for misinformation, like "report inappropriate" with the reason being misinformation, then that post comes to you for giving Facebook back a decision on it: yes, this is misinformation, or to what degree it is misinformation. So you'll give Facebook back some information on which they will then take further action. And your decision-making can either be purely machine-driven, or at times the machine may not be able to give a clear decision, so it'll go to a human.
[00:34:58] Lyric Jain: That's right. Again, there are little nuances for each platform. Certain platforms have triggers of, Hey, a thousand users need to report it before we send it to someone. For some platforms it's more to do with the concentration of reports: if 10 users have reported it in 10 seconds, then we'll share it. Again, we're not involved in a lot of the policy decisions and the reasons why something enters our feeds; the platforms decide that based on their policies. It enters our feeds, and we're responsible for the assessment, for the robustness and expediency of those assessments, and the platforms are then responsible for what they wanna do on the basis of that assessment.
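The two trigger styles mentioned, an absolute report count versus a burst of reports in a short window, can be sketched in a few lines. The thresholds (1,000 total, 10 reports in 10 seconds) come from the interview; everything else is an illustrative guess at how a platform might implement them:

```python
from collections import deque

def should_forward(report_times: list, total_threshold: int = 1000,
                   burst_count: int = 10, burst_window: float = 10.0) -> bool:
    """Hypothetical platform trigger: forward a post for external assessment
    if it has accumulated many reports overall, OR received a burst of
    reports inside a short window (e.g. 10 reports within 10 seconds)."""
    if len(report_times) >= total_threshold:
        return True
    window = deque()                       # sliding window of recent reports
    for t in sorted(report_times):
        window.append(t)
        while t - window[0] > burst_window:
            window.popleft()               # drop reports older than the window
        if len(window) >= burst_count:
            return True
    return False
```

The burst rule catches coordinated flagging (or genuinely viral harm) long before the absolute count is reached, which is presumably why some platforms prefer concentration over volume.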
[00:35:32] Akshay Datt: How do you decide when you will give a machine generated assessment and when a human will look at it?
[00:35:38] Lyric Jain: Confidence. So every assessment that we have is confidence-based, both our automated assessments as well as our people's assessments. Wherever confidence levels are below a certain level that's commercially agreed, that's when we'll refer it to our team.
[00:35:52] In some cases for some platforms, everything has to go through a manual review regardless of the automated review. Someone will have to check whatever has been assessed by our Veracity Stack.
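The routing rule just described, low-confidence automated verdicts go to humans, and some contracts require manual review of everything, is simple enough to sketch directly. The threshold value and function names are assumptions for illustration:

```python
def route_assessment(automated_verdict: str, confidence: float,
                     threshold: float = 0.85,
                     always_manual: bool = False) -> tuple:
    """Hypothetical routing: below the commercially agreed confidence
    threshold the item goes to human experts; some platforms require
    manual review of everything regardless of the automated result."""
    if always_manual or confidence < threshold:
        return ("human_review", automated_verdict)
    return ("auto_publish", automated_verdict)
```

Note that the automated verdict still travels with the item either way; the human reviewer starts from the machine's assessment rather than from scratch.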
[00:36:01] Akshay Datt: And probably something critical, say an election, might be the criterion for those kinds of posts?
[00:36:07] Lyric Jain: Yeah. Anything that might be high-sensitivity or high-impact would usually go through manual review. Also anything where, in the assessment itself, we're uncertain, because either an absence of information or too much contradictory information or evidence would give us a low confidence level. Or anything with data that's not very timely: the claim might be one day old, but our most recent evidence for its context may be one week old. Again, I'm peeling the onion a little too much there, but those are the kinds of signals that go into confidence.
[00:36:34] Akshay Datt: And I guess with time, the human moderation which is happening today would be training the machine learning algorithms to increase the confidence level and reduce the percentage of content which goes to human moderators.
[00:36:49] Lyric Jain: That's right, within a domain, yes. We've certainly seen that in the geopolitical domain, or when it's come to issues around Covid, and you can very clearly see that S-curve of improving and then plateauing performance based on how much training we're getting from expert input. But there are so many different domains for us to go into, so we still need to scale those subject matter expert teams for the foreseeable future. But yes, eventually there will be an automation payback, and there'll be that S-curve of how many people we're gonna need to support the overall level of outputs we're able to deliver.
[00:37:21] Akshay Datt: Yeah. Give me some scope of the numbers. What is the number of posts that Logically is assessing daily, weekly, monthly, whatever? Some idea of that. Or what are the metrics that you look at?
[00:37:32] Lyric Jain: Every day we pull through about 15 million pieces of content. I think 15 million, yeah. But that's not enough; we need 100x that, because there's well over a billion pieces of content posted every day. It's closer to 10 billion every day; Twitter alone, I believe, is just under a billion a day. There's a long way for us to go in terms of scaling. This is very much the tip of the iceberg; a lot of the undercurrents of misinformation and disinformation are where it's exposed to enterprises, brands, and even individuals. And that's a follow-on market for us to start working with some organizations.
[00:38:05] Akshay Datt: What are the other metrics you track? What are the numbers you look at on a regular basis? One would be how many pieces of content you're reviewing. What else?
[00:38:12] Lyric Jain: A few. For us, it's the efficacy of our automation.
[00:38:16] That's a pretty big one for us to continuously see trending up as a result of investments in our roadmap. We have this interesting framework, our capability completeness framework, and it's almost this big jigsaw puzzle that we wanna build out of what our roadmap looks like.
[00:38:29] Where is that from a completeness level, and where is it from a performance level? Both of those are things that we measure and track quite closely. Then there's obviously the financial side in terms of...
[00:38:39] Akshay Datt: How do you measure efficacy? I see, you're saying it would mean how good a job you're doing at flagging, or how accurate your score is. How do you measure it? How would you know? Is it based on whether a human moderator disagrees with the machine score? Is that how you do it?
[00:38:58] Lyric Jain: That's right, both, even within our expert operations. We don't just put things through one person; things have to go through three people before we come up with assessments, et cetera. So we have efficacy scores in both places: what's our (inaudible) agreement, or inter-assessor agreement, when it comes to people, and what is it between the overall people-led outcome and the overall machine-led outcome. These are all scores that are pretty closely tracked by our teams.
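The two efficacy scores mentioned, agreement among the three human assessors and agreement between the consolidated human outcome and the machine outcome, can be computed very simply. This is a toy stand-in for the production metrics (which would more likely be something like Cohen's kappa or Krippendorff's alpha):

```python
from itertools import combinations

def inter_assessor_agreement(labels: list) -> float:
    """Fraction of assessor pairs that gave the same label to one item."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def human_machine_agreement(human_final: list, machine: list) -> float:
    """Share of items where the consolidated human outcome matches
    the machine-led outcome."""
    assert len(human_final) == len(machine)
    return sum(h == m for h, m in zip(human_final, machine)) / len(machine)
```

With three assessors per item, three pairwise comparisons exist, so a two-versus-one split scores 1/3 rather than 2/3, which is one reason real systems prefer chance-corrected statistics.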
[00:39:20] Akshay Datt: And you said you also track revenue. I wanna understand the way in which you monetize. Is it per post? I'm sure the model will be different for a platform versus a government agency. What is the commercial arrangement like?
[00:39:34] Lyric Jain: Sure, it really varies. We have this interesting construct we call the situation room, and this goes back to the Maharashtra days, because Maharashtra was a war room that we set up. It's effectively a product concept called the situation room, which defines the information environment that someone needs to monitor, detect threats in, triage them, and respond to them. So it's priced based on the size and complexity of that information environment.
[00:39:59] The size would be the number of posts, the number of accounts, and the number of interactions. And the complexity would be things like: how many languages are there, is it just text or is it multimodal? Stuff like that goes into what the overall subscription value for a customer would be. And then it's effectively recurring; the business model is very much a recurring-revenue model. That's the way we price, and obviously, as with most technology-driven businesses, the ARR number is what we tend to keep a pretty close eye on. Those numbers are usually pretty top of mind.
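The size-times-complexity pricing idea could be sketched like this. Every coefficient here is invented; only the inputs (post volume, accounts, languages, modality) come from the interview:

```python
import math

def situation_room_price(posts_per_day: int, accounts: int,
                         languages: int, multimodal: bool,
                         base: float = 50_000.0) -> float:
    """Illustrative annual subscription: scales with size (log-scaled
    volume of posts and accounts) and complexity (languages, modalities).
    All coefficients are assumptions for the sketch."""
    size = (math.log10(max(posts_per_day, 1) + 1)
            + math.log10(max(accounts, 1) + 1))
    complexity = 1.0 + 0.15 * (languages - 1) + (0.5 if multimodal else 0.0)
    return round(base * size * complexity, 2)
```

Log-scaling the size term means a customer monitoring ten times more content pays more, but not ten times more, which is a common shape for data-volume pricing.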
[00:40:27] Akshay Datt: Say the Maharashtra election commission. Would they subscribe to it throughout the year, or just for that one couple-of-months period?
[00:40:35] Lyric Jain: It really varies. We definitely share the story of this being an always-on risk.
[00:40:41] This is a risk that is heightened during these critical events, but there's a lot that an organization can be doing to mitigate it by being ever-present, even in less intensive months. So for some of these types of organizations, we do have surge pricing to accommodate those high-impact windows, and a more accommodating pricing level when it comes to business as usual. But yeah, a lot of organizations use us all the time. I think 85 to 90% of the work that we do is on an always-on basis, and 10 to 15% is certainly on a very much event-driven basis. But again, that's work that we do to demonstrate the value of what we can bring to these organizations.
[00:41:20] And really if it's an organization that's facing one kind of risk, it's pretty likely that they're gonna have a new crisis pretty soon.
[00:41:27] Akshay Datt: In the government space, beyond the election use case, are there other use cases also, like other types of government organizations that you work with?
[00:41:35] Lyric Jain: Absolutely. For us, there are four core use cases within the public sector: public health, public safety, election integrity, and national security. Within public health, it'd be public health organizations, hospital networks. Again, some countries have...
[00:41:50] Akshay Datt: Like Covid misinformation and...
[00:41:53] Lyric Jain: Yeah, or even in general, there's a lot of anti-vaccine misinformation out there. In India in particular, there are certainly online frauds, like around kidney transplants and all of that, that are driven through misinformation and propaganda-driven scams. There's a lot more, particularly in the space of alternative health. It's a touchy space given the Indian cultural context, but there are certainly some clear-cut areas where there are disinformation campaigns.
[00:42:17] In the public safety space, that's very much to do with communal violence as well as potentially nation state activity; a lot of the time, nation state actors step in to stoke some of these internal fires. Then there's obviously elections. And national security would always be around both fronts: protecting a country's interests domestically, but also protecting a country's interests overseas. That one's quite interesting, because when Logically started, 16 or 18 countries had information operations capabilities that allowed them to run some kind of information operations, either within their own borders or outside their countries.
[00:42:51] Today that number is 90. So basically half the countries in the world can run information operations right now. It's a pretty polluted landscape. But yeah, that again is a pretty critical use case for us.
[00:43:02] Akshay Datt: And in India this would be, like, the Ministry of Home Affairs for that kind of thing? They would be your clients? Or the Ministry of Health and Family Welfare for health-related stuff?
[00:43:12] Lyric Jain: Yeah, across both the central level and the state level, it'll be those kinds of organizations. But beyond ministries, each of these has affiliated agencies. So our work is mainly with the various branches of the civil service, as opposed to with political executives, et cetera.
[00:43:29] So it's really working with those organizations: health, home affairs. Some of that will be focused on law enforcement, and there'll be some focused on national security in some agencies. The upcoming interesting landscape is also the regulatory dimension.
[00:43:43] A lot of countries are looking to regulate platforms and regulate how a lot of these trust and safety operations are run. So in the UK we have this online harms bill that's coming to parliament pretty soon. India has had, I think, one attempt already at regulating this last year, as the amendment to the rules governing the IT Act, but there's also, I think, some further redrafting that's currently ongoing. So that, if it happens, is an emergent catalyst for us as a space. Because what's become clear is that platforms, although they are trying, and they're trying hard, some harder than others, they're certainly trying to tackle this problem.
[00:44:17] They've proven that they can't do it alone. And there are obvious risks to governments doing this by themselves, around freedom of expression and all the politics around it. So I think, regardless of me wearing my Logically hat, from an independent perspective I still hold the view that it needs to be something that's run by an independent organization, or a free market of independent organizations. That's the real place we wanna get to.
[00:44:41] Akshay Datt: So you would typically work with a consulting agency that would in turn have the government as its client? Is that what you're saying? Or is the government directly your client?
[00:44:50] Lyric Jain: Sometimes, yes.
[00:44:51] Akshay Datt: Okay. Sometimes. Okay. It's a mix of both. Okay.
[00:44:54] Lyric Jain: Yeah, sometimes it'll be through what we call channel partners, and in some cases it'll be direct. In India we've done both, in the UK we've usually gone direct, and in the US we've done both.
[00:45:04] Akshay Datt: Yeah. How did you navigate this selling to government? You typically need white-haired folks driving something like that. How did you navigate the sales to government?
[00:45:15] Lyric Jain: I have no idea. I'll let you know when I figure it out. In the startup world, it definitely gets a bad rep.
[00:45:20] Also in the venture world, it's always seen as a bit of an ugly market in some ways, because there are horror stories of how long sales cycles can be. But for me, the biggest reward in the government sector, thinking commercially, is again the huge amount of impact, because of the big leverage that governments and platforms have, but also stickiness: once you're in, you're pretty much in, unless we screw up in some horrible way.
[00:45:43] It's an incredibly sticky customer. And I think that, for us, is the biggest value of over-investing during those early days to go and acquire these customers. That's what we've been biased towards. And we do have a couple of more gray-haired individuals on the team than myself. They obviously help.
[00:46:00] Akshay Datt: And what do you charge the platforms? Is it per piece of content, per post that you review, something like that?
[00:46:05] Lyric Jain: It varies, but it's close enough to per post. There are some compliance rules around it, in terms of, hey, it can't be a duplicate of a post, it can't be a highly similar post, et cetera.
[00:46:14] Those are priced slightly differently. But yeah, broadly it's on a per-post basis.
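The "can't be a duplicate" billing rule implies some deduplication before counting. A minimal sketch, catching only exact duplicates after normalization, might look like this; real near-duplicate detection would use something like shingling/MinHash or embeddings, and all names here are invented:

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivial variants hash identically."""
    return re.sub(r"\s+", " ", text.strip().lower())

def billable_posts(posts: list) -> int:
    """Count posts for per-post billing, skipping exact duplicates
    (a toy stand-in for the duplicate / highly-similar-post rules)."""
    seen = set()
    count = 0
    for p in posts:
        digest = hashlib.sha256(normalize(p).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            count += 1
    return count
```

Hashing the normalized text keeps the seen-set small even at millions of posts per day, since only fixed-size digests are retained.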
[00:46:19] Akshay Datt: What is your ARR right now? Are you at liberty to share that?
[00:46:22] Lyric Jain: Not quite publicly,
[00:46:23] Akshay Datt: 10 million plus?
[00:46:24] Lyric Jain: Just south. Just south. So we raised a $25 million Series A a couple of months ago.
[00:46:28] Akshay Datt: Do you have the ability to fact-check text, video, audio, everything? What is the current capability in terms of the modes?
[00:46:37] Lyric Jain: Yeah, we're in a really interesting place. I think by the time this podcast is out, we will have released the fully multimodal version of the platform. At the moment, in the current architecture, text is very much the bread and butter, and there are some image bits and some video bits that are bolted on. But the version of our platform that's being released in the first week of August will be multimodal by default. So not just text-only, image-only, audio-only, or video-only, but also when they're blended together. So memes will be covered, or an image with some text within a WhatsApp message; all of those form factors will be covered. We're quite excited about that update in three months' time.
[00:47:17] Akshay Datt: How did you solve that? I'm just thinking of the kind of WhatsApp stuff we get, where there's somebody, let's say, talking in a regional language who's giving some misinformation. Are you able to detect regional languages also and all of that? That sounds really challenging to do for video.
[00:47:35] Lyric Jain: In a limited way. I think we do a pretty good job in 12 Indian languages, but beyond those, it's a big challenge. In addition to those Indian languages, we work in some European languages as well. We really wanna expand our roadmap to cover all major languages; I think we have a hit list of 110 languages that we wanna cover before the end of the year.
[00:47:54] We're always gonna be in this place where our level of efficacy and performance in English will naturally be higher, just because the state of natural language understanding is a lot more advanced in English than in any other language. I think Mandarin's pretty close. But we can't work with the PRC, although we can monitor them if someone's interested. We certainly can't work with them.
[00:48:14] Akshay Datt: Are you using an existing voice recognition engine? Say, Google has this voice-to-text engine and all. Or are you building your own engine?
[00:48:22] Lyric Jain: It's a little bit of both. It depends on whether it's formal speech or informal speech. If it's formal speech, a lot of the things that are available off the shelf are just way better, so we use those. But when it comes to informal speech, and social-media speak in particular, we haven't found, with respect, that whatever exists on the market, be it Azure or AWS or Google, has that higher level of efficacy. So we're fine-tuning what we're building internally for the social context, and using what's available commercially for formal speech.
[00:48:49] Akshay Datt: Fascinating. Okay. You told me you're doing some corporate pilots. What would that be? Say, McDonald's would wanna make sure that there's no fake news going around about it? From that perspective?
[00:48:59] Lyric Jain: Yeah, I think there are two or three main verticals for us. There's the security side, which is about conspiracy-driven threats. Particularly in the States, there are organizations right now whose, I don't know, officers, warehouses, and executives are being targeted because they're believed to be part of some big global conspiracy, or...
[00:49:22] Akshay Datt: Like the Pizzagate thing. There was some pizza parlor.
[00:49:25] Lyric Jain: Yeah. Or vaccine scientists and manufacturers of vaccines and stuff like that. But it's really broadening out. Last year it was Wayfair, or was it the year before? It was Wayfair, a furniture company that became the epicenter of a human-trafficking conspiracy, because they had cupboards that had women's names, and they were quite expensive.
[00:49:46] They were like $5,000 or $10,000 cupboards. And these conspiracists thought, Hey, they're trafficking women with those names inside these cupboards. It's like, come on. But this was a serious conspiracy, and this organization was being targeted. Some people got radicalized to the point where they wanted to go after executives. They started finding out who these executives are, who their children are; really nasty stuff, unfortunately. So these kinds of threats are present today. That's a lot of the security dimension. There's also a financial dimension here, and I think there are some pretty interesting examples in India of a handful of banks in particular that have been targeted by various amplification and pump-and-dump schemes.
[00:50:22] Akshay Datt: I remember there, there was a rumor about ICICI bank shutting down, and there were lines outside of ICICI ATMs of people trying to withdraw their money.
[00:50:30] Lyric Jain: Yeah, that's right. So those kinds of events come both from financial disinformation and from market manipulation.
[00:50:35] You can think of crypto even as a segment. We have something quite interesting we're working on at the moment for crypto: there are so many inauthentic accounts pushing various coins, and there's clear trading activity that's linked to their posts, et cetera. That's an interesting problem for us. And so is the pure reputational side, even. I think that one's slightly more challenging for us, because there are plenty of organizations out there that do a good job at reputation management, and we don't necessarily wanna go into that. But for us, if there is an active disinformation threat that's focused on an organization, that would be interesting. In some countries that market exists; in others it doesn't. Because historically, when people have thought of disinformation campaigns, they've thought of countries. What's happened over the last three or four years is that there are now these agents of disinformation available for hire in various countries around the world.
[00:51:22] It's very much equivalent to ransomware. Ransomware was this big cyber threat that's pertinent today; people have known about it probably for the last five or six years. Our position is similar to where ransomware was maybe in 2016: the threat vector exists, but it's not ever-present. One or two organizations are being targeted by it every few weeks and months, not hundreds every day. But it'll be there three or four years from now, given what's happening in the adversarial space.
[00:51:49] Akshay Datt: And you're located in India also? Where's your headcount? What's your headcount split like?
[00:51:53] Lyric Jain: Yeah, so we're about 170 to 180 people at the moment. About half of those are based in the UK, just under half are based in India, and about half a dozen are based in the US. So roughly an 80/70/10 or 90/80/10 split.
[00:52:09] Akshay Datt: And what is the team in India doing? Are these the tech guys or
[00:52:12] Lyric Jain: So tech as well, and that's split across the UK and India.
[00:52:15] Most of our engineering teams sit out of Bangalore. Most of our AI teams sit out of London, and most of our product teams also sit out of the UK. And we also have some of our subject matter expert teams, when it comes to fact checking and open source intelligence, that sit out of India as well.
[00:52:30] But the majority of people in India are in engineering roles.
[00:52:34] Akshay Datt: Got it. Okay. You raised this pretty massive 24 million round. What do you want to use these funds for?
[00:52:40] Lyric Jain: It's pretty much half and half. We know there's a long way for us to go in terms of furthering our platform itself; think of the multi-modality aspect that I mentioned as one of the milestones we're gearing up to.
[00:52:51] But equally, yeah, we have a pretty aggressive roadmap to better support some of our high-leverage customers in particular, as well as to differentiate our product offering, potentially for enterprise. It's then also investing in new threat vectors. Again, there's a lot of buzz around deep fakes, but it turns out that's probably not the biggest disinformation threat. There are a few other interesting things happening in the world of synthetic text in particular that are probably a bigger threat vector, where we're keeping on top of and red-teaming all of the new emergent disinformation vectors. And the other half is really going into building our go-to-market teams across these three countries.
[00:53:22] Akshay Datt: Okay. What are the new vectors of misinformation? What is synthetic text like?
[00:53:26] Lyric Jain: I mean, people have heard of deep fakes in the video form, and there's a lot of buzz around them, but in terms of how much you see them in the wild, it's mainly just porn.
[00:53:35] Like 99% of deep fakes out there are porn. Again, it's a risk that exists, but it's not really mis- or disinformation; it's a very small percentage. Maybe every couple of weeks you might get one that's high profile in nature. Synthetic text is really the text equivalent of that. So imagine you can create a disinformation campaign that's posting a thousand very different posts from a thousand different accounts. There's been a lot of progress in that direction by a lot of organizations working in the adversarial space, building on top of recent breakthroughs in natural language generation. So that's a pretty significant risk at the moment. We've seen one of these campaigns actually being targeted at Wikipedia. There was a recent attempt to edit something like 10,000 Wikipedia pages concurrently. These edit wars are always going on, but what was interesting about the most recent one is that all 10,000 edits were being made by a synthetic agent.
[00:54:30] Bot-based edit wars are also common, but the new dimension was that they aren't just spam-posting the same thing. What they're writing is human-like, and in some cases easy to detect, but in other cases probably challenging to detect. We see that as an interesting dimension. The other dimension we also see is really this "two truths" kind of social engineering framework. It's about getting people down a rabbit hole of radicalization: giving them two truths first and then giving them the third, a lie, has been a repeated tactic we've seen from various adversaries. So I think tactically a lot of knowledge sharing might be happening within the adversarial space right now, and they're converging towards some best practices. Yeah, they're developing their playbooks. We need to just stay ahead of them.