What is Generative UI? Interview with Sannuta Raghu, AI lab lead at Scroll.in

This article was originally published on February 27, 2025 in the AI in News Media Newsletter. The next issue will be sent on March 31, 2025. Get AI in News Media first and direct to your inbox by signing up here.

Hello and welcome back,

In this issue, we interview Sannuta Raghu, who leads the AI lab at Scroll.in. I first came across Sannuta on LinkedIn, where she is extremely generous in sharing her work at Scroll (including prototypes). Go to her profile to see some of the prototypes she mentions in action. We’re always looking for ways to improve, so if there’s something you want to see here you can email me or get in touch on LinkedIn.

Thanks for reading,

Beth Ashton

Chief Growth Officer, Bright Sites

Interview with Sannuta Raghu, AI lab lead at Scroll.in

How did you end up working in AI?

I come from a television background. I spent a significant part of my early career as a producer and field reporter. Later, I started the digital video newsroom at Scroll, building out the video team when Facebook was experiencing a surge in video content around 2016. We grew a sizable team, but unfortunately, it wasn’t sustainable in the long run. It’s interesting how these cycles keep repeating—today, we see similar investments in TikTok teams.

By 2019, we had to scale down, which meant making tough decisions about our video operations. However, given that India has 1.4 billion people and 600 million smartphones—most of them used for video consumption—we couldn’t afford to stop making short-shelf-life news videos altogether. We needed a way to continue producing high-quality, engaging content while being mindful of resources.

This led us to explore AI. The challenge was balancing in-depth, human-driven video production with scalable, short-form content that could reach massive audiences. AI became the bridge.

In 2022, we won a Google grant to develop a text-to-video tool. Over 12 to 15 months, we built and refined it. One of the most interesting insights we gathered was that audiences weren’t resistant to AI-generated content. We made it clear that videos were created with AI and verified by Scroll’s editors.

The results were promising. We went from producing no short-shelf-life or vertical videos to creating 10-15 per day. These videos attracted views, increased engagement and boosted our subscriber base. We secured another Google grant to develop version 2.0 of our tool, expanding beyond text-to-video into text-to-interface applications.

How do you go from an idea to an AI use-case in a newsroom?

Newsroom tasks generally fall into two categories: language tasks and knowledge tasks. Language tasks include writing better headlines, summarising content, and reformatting stories. Knowledge tasks involve searching archives, retrieving information and structuring insights.

Many aspects of news production can be streamlined using large language models (LLMs). For instance, with a $20 ChatGPT subscription, you can build customised GPTs tailored to specific newsroom needs. I started experimenting with custom GPTs as soon as they were available—they serve as excellent prototyping tools. More importantly, they help demonstrate AI’s potential to decision-makers. Having a tangible example makes it easier to secure buy-in.

For anyone looking to bring AI into their newsroom, starting with a simple show-and-tell tool can be a game-changer. It demystifies AI and makes its value immediately clear to stakeholders.

How did you learn to create AI prototypes, and what challenges did you face?

The learning process itself wasn’t too difficult—it involved watching a lot of YouTube videos, experimenting with tools, and hands-on practice. I was fortunate to have both the time and management support to explore AI applications.

The bigger challenge was communication. Coming from an editorial background, I had experience bridging the gap between business and content teams, but working closely with engineers was a new experience. The real hurdle was developing a shared language with the tech team—finding a way to translate editorial needs into technical solutions. That took time and collaboration.

What can we expect from Scroll in 2025?

We’re building out our text-to-interface tool, or a Generative UI (generative user interface) tool, if you like. A great example is our tax calculator: instead of just reading a news article about tax implications, users can input their income and receive a personalised tax breakdown based on the news.
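The tax-calculator idea can be sketched in miniature: a model turns an article into a structured UI spec, and the front end renders that spec and runs the calculation on the reader's input. The JSON below stands in for the model's output; the field names and slab figures are illustrative placeholders, not Scroll's actual tool or real tax rules.

```python
import json

# Illustrative "UI spec" of the kind a text-to-interface tool might
# generate from a news article. All names and numbers are hypothetical.
ui_spec = json.loads("""
{
  "title": "Estimate your tax under the new rules",
  "inputs": [{"name": "income", "label": "Annual income", "type": "number"}],
  "slabs": [
    {"up_to": 300000, "rate": 0.0},
    {"up_to": 700000, "rate": 0.05},
    {"up_to": null, "rate": 0.10}
  ]
}
""")

def estimate_tax(income: float, slabs: list) -> float:
    """Apply marginal rates from the spec's slabs to an income figure."""
    tax, lower = 0.0, 0.0
    for slab in slabs:
        # A null "up_to" means the slab is open-ended.
        upper = slab["up_to"] if slab["up_to"] is not None else income
        if income > lower:
            tax += (min(income, upper) - lower) * slab["rate"]
        lower = upper
    return tax

print(ui_spec["title"])
print("Estimated tax on 1,000,000:", estimate_tax(1_000_000, ui_spec["slabs"]))
```

The point of the pattern is that the article, not a developer, drives what the interface contains: the model emits the spec, and a single generic renderer turns any such spec into an interactive widget.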

Beyond that, we’re exploring how to personalise news consumption by creating different entry points. Not everyone needs the same depth of information—some readers want a quick summary, while others prefer a deep dive. We’ve developed a prototype that provides different entry points based on a reader’s familiarity with a topic.

I think we’ll build out about five or six more of these tools this year, to make sure we can deliver a news article in the way the user really wants to consume the information at that particular moment.