The Design Systems Podcast – Exploring the world of design systems, UX, and digital product development.

DSP Mashup | The Future of AI and Digital Production

In this episode of The Design Systems Podcast, guest host Richard Banfield takes us on a journey through the intersection of artificial intelligence and design systems. We revisit some of the most compelling discussions featuring experts David Calleja, Nick Hahn, Ranjeet Tayi, and Kate Moran. Together, they unravel how AI is transforming the way digital products are designed and developed. From AI's potential as a creative partner to its role in automating repetitive tasks, this episode dives into the profound impact AI is having on design workflows. We explore how AI can serve as a copilot, adapting in real time to deliver personalized user experiences, and discuss the importance of maintaining human oversight to ensure quality and integrity. Whether it's hyper-personalizing user interfaces or enhancing the efficiency of design processes, AI is not set to replace humans but to augment our creativity and productivity. Join us as we explore the exciting possibilities AI brings to the world of design systems.

Transcript

Richard Banfield [00:00:00]:

Well, hello, everybody. It's Richard Banfield here. I'm heading up design leadership at Knapsack, and I'm taking the reins for the next few episodes to revisit some of the most thought-provoking moments we've had on this podcast. This episode is all about the subject on everybody's lips: artificial intelligence. AI seems to be in every conversation that I'm involved in, and I'm sure it's the same with you, but the question is, will it actually matter to people who build digital products? Well, I believe it will. The big questions that remain are: will it be a partner in our creative process, or is it actually going to take over the work that we do? So, in today's episode, we've brought back four of our outstanding discussions around AI, and we're going to explore what it means for the digital production world, how it's shaping workflows, automating tasks, and even influencing how we create user experiences.

Richard Banfield [00:00:58]:

First up is David Calleja, who really gets into how design systems are the bridge between human creativity and the power of AI. It's certainly been my experience that AI is here to help with the mundane and the repetitive tasks and even some of the heavy lifting. AI can be used for tasks like accessibility checks all the way through to automating the repetitive tasks and freeing up time for more creative work. David, too, believes that AI won't replace human creativity, but instead it becomes a tool to make our design processes more efficient. Well, we'll see if that actually matters over the long term, but I'm with David on this one. I think that it's going to be all about freeing us up so that we can spend more time being creative. Along with David, we also talked to Nick Hahn, who takes us deeper into the transformational potential of AI. So imagine AI as the ultimate assistant on the assembly line, helping generate design variations on demand really, really fast.

Richard Banfield [00:02:00]:

Nick explains how design systems can keep everything aligned and consistent, all while leaving room for that personal touch from your designers and your developers. AI, Nick believes, produces the first draft of a design, just as it can when you start writing, and it's up to us to refine it and add taste and discernment. So in Nick's view, it's not just about speed and efficiency; it's about working smarter. After that, Ranjeet Tayi gives us a look at how AI will evolve from being a simple assistant to a copilot. And what he means by that is that eventually we will have some activities and tasks on autopilot, taking care of some of those more complex tasks that we deal with every single day. He talks about how AI is adapting in real time and creating personalized experiences for a lot of products on the fly. It's a future where clean data is important, and AI, together with these smarter, more intuitive designs, will ultimately deliver a better experience. The way that designers, developers, and product managers collaborate with AI is going to change the game, according to Ranjeet.

Richard Banfield [00:03:16]:

It's a fascinating vision, and I hope you'll stick around for that part of the conversation as well. Finally, we've got Kate Moran. Kate digs into the role AI is playing in shaping the user experience. AI, she says, is already changing how we approach UI design, of course, but it's now also helping to create hyper-personalized experiences. Just as Ranjeet explains, these experiences can really adjust to the individual user, using data that is relevant to that person; it can change the experience on the fly. And she's also going to bring a thoughtful perspective on how AI can be handled in such a way that it doesn't have to be scary, it doesn't have to sacrifice the integrity of the user experience. It can actually add a lot of value.

Richard Banfield [00:04:04]:

So for a lot of these really smart people who are dealing with user experiences every day, it seems that AI is here to stay. And it's really here to ensure that we can continue to work better, more efficiently, and in some cases even more creatively. So stick around, because there's lots to explore about the interaction of AI and design systems.

Chris Strahl [00:04:27]:

Hi, welcome to the Design Systems Podcast. This is the place where we explore where design, development, and product overlap, hear from experts about their experiences and the lessons they've learned along the way, and get insights into the latest trends impacting product development and design systems from the people who pioneered the industry. As always, this podcast is brought to you by Knapsack. Check us out at knapsack.cloud. If you want to get in touch with the show, ask some questions, or tell us what you think, send us a message over on LinkedIn. You can find a direct link to our page in the show notes. We'd love to hear from you.

Dave Calleja [00:04:53]:

I think there's a lot of false angst put up between different skill sets and capabilities, that they don't know how to work together, and about what the role of the design system is. The role of the design system is to make that relationship as easy as possible and to facilitate it, because there are different needs; we understand that. So the real value, at the end of the day, is that the system must enable that rendering of intent, that shared rendering of intent that we talk about. Design is the rendering of intent; experience design is a rendering of intent, all of those things. None of that happens without the actual code that gets it into browsers and apps. So if the system can help render that intent, which is a great experience, a more seamless experience, a more accessible experience, a more personalized experience, then that's going to result in more people having a better time, both in the creation and then the delivery and the experience of those things.

Chris Strahl [00:05:48]:

Wow, well said. I think that's probably the best five-sentence description of exactly what this is supposed to be about that I've ever heard.

Dave Calleja [00:05:56]:

I think it gets overlooked a lot. It feels like we need to come back to that more, to say: this is not about tokens. Tokens are great. I can geek out about tokens all day. We could spend the whole podcast talking about tokens.

Dave Calleja [00:06:07]:

And I'm sure, you know, that's a valuable conversation that is evolving. But for me personally, the part that's most exciting is: what can we do as a community to help get better, more exciting, more accessible, more meaningful experiences into the hands of people? That's the role of the design system, and why I've been so obsessed with them for the last 10 years.

Chris Strahl [00:06:26]:

Wonderful. So when we think about that, and then we think about how this all functions now, with design tools and code repositories and IDEs, and with all these different stakeholders that wrap together and produce NPM packages that ultimately go into products, something that ultimately ships to an end user, there's complexity there, of course. And now what we're doing is throwing AI on top of the pile and basically asking: does AI create more complexity and enhance our capabilities, or does it reduce the complexity via automation? I think there's not a really clear answer here, but I'd love for you to put your stake in the ground about where you think the value of AI in design systems really lives.

Dave Calleja [00:07:11]:

As you said, it's such a fast-evolving space, and so I'm sure that what is possible will have changed, probably by the end of the podcast. But genuinely, month on month, week on week, those things are evolving, which is great. Where I'm excited, and kind of where I'm seeing a lot of value right now, is in two places. One is in that automation layer: the ability to say AI is going to take on those tasks which are critical, which you can't avoid doing, like checking for accessibility, checking for compliance with whatever checklist you've got as part of your build and contribution process, all of those things that take time, and put that time back in the hands of the developers, the designers, the people; give that more creative, innovative time back.

Dave Calleja [00:07:53]:

So that's critical. But I think there is also a bit of an existential crisis that I'm seeing across the creative and design experience industry generally, which is that AI is going to steal that creative output. It's going to start to become responsible for the creative output, for the innovation output, instead of it being in the hands of the people. And I think that we're very much in control of making sure that that's not the case, by focusing AI on those tasks and using it as the tool that it is genuinely intended to be.

Chris Strahl [00:08:21]:

Well, this is amazing, right? Because I view this in a very similar manner. There are 30 startups that Knapsack competes with that have all come out in the past 18 to 24 months, and they're all about the idea of: let me generate something, let me take that creative capability and put it in the hands of an LLM that can take the thing that I made, or the thing that I wrote down as a prompt, or the thing that I sketched out on the back of a cocktail napkin and held up to a webcam, and now make me a website out of that. And I think there are some interesting ideas happening in that space. But there's also this genuine trepidation, which I think is often warranted, about whether or not we want AI doing that, or if we want AI to just make us faster at doing the things we're already doing. And I think there's a fundamental question here about the role of systems in AI, and I've actually posed this a couple of times now: is a design system just kibble for an LLM, that we feed it and the LLM gets smarter and bigger because of the context data provided by the design system? Or is it something that ultimately represents a value asset that can be tied somehow to an automation process that uses all these different data sources to help us make new components that are better or faster?

Dave Calleja [00:09:40]:

That idea, that the people who are doing design system work today are going to be empowered through the use of LLMs, and through AI generally, and through generative AI, is the part that I think is really exciting. But people view it very much as binary. The idea that you sort of press a button and a website is generated by scanning everything that's available on the Internet and looking at the aggregate, that's a dark future. But there are many possible, very bright futures where the designers, the developers, the copywriters, the creatives that are involved today in design systems, in the way that we're working today, are now standing at the levers of this incredibly powerful machine and controlling how it consumes that kibble, and saying: now that you've got all of this energy, I'm going to guide you on what to do. This is what an experience for Knapsack looks like. This is what an experience for Coca-Cola looks like. This is what an experience for health, for the US government, for the Australian government should be. We're going to provide the guide rails.

Dave Calleja [00:10:36]:

And so now, as we tweak and turn the knobs and the dials, that's an element of the creative input that I think is often overlooked. So it's not just about letting the AI loose and having it create whatever it wants with no control over that. It's about empowering those same people, those same minds, to tweak the model, tweak the input, tweak the structured data that it's consuming, what that kibble is, to create something that is meaningful and creative on the other side, and then still being able to manipulate it at faster-than-light speed compared to today.

Chris Strahl [00:11:03]:

This brings to light one of the key things about a design system: a design system, depending on how mature it is, depending on how far you've gone, is a library of your design choices. And that library is structured innately, just by its very nature and definition. There are tokens, there are schemas, there are different props and data types and all this other stuff that exists as the foundational structures of what a well-structured or well-built design system looks like. And so if you have all these different things that represent that structured data, and you can do something like a multimodal experience that takes a bunch of different inputs, like a PRD or a requirements document, or a set of user stories, or a Jira board, and then references a Figma file, references your design system, and makes a website, that's one potential way of leveraging this, right? Because that's automating a bunch of the steps of how I build something with components. But is that actually how product gets built? That's the big question I have. I see value in that for the creation of componentry. I want to basically say: here's a card, it's got a button inside of it, go make this as an example

Chris Strahl [00:12:12]:

in my design system. But when I think about that at the level of product, I oftentimes wonder if that's really the experience we're all going for. Because these brands that you mentioned are so guarded about that brand value and that brand equity. There's some CFO out there who has said that the value of Coca-Cola as a brand is X number of dozens of billions of dollars. Are we at a point, in your opinion, where people want to go take a PRD, sketch out on a cocktail napkin what the Coca-Cola Australia experience should look like, and then click a button and make it?
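The structured token data Chris describes, named values with schemas and cross-references, can be sketched in a few lines. This is a hypothetical shape, not Knapsack's actual schema; the token names are invented, and the `{path.to.token}` alias convention loosely follows the W3C Design Tokens Community Group draft:

```typescript
// Hypothetical token tree: names and values are invented for illustration.
type TokenTree = { [key: string]: string | TokenTree };

const tokens: TokenTree = {
  color: {
    brand: { primary: "#0B5FFF", surface: "#FFFFFF" },
    action: { primary: "{color.brand.primary}" }, // alias to the brand color
  },
  space: { sm: "8px", md: "16px" },
};

// Walk a dot-separated path through the tree, following alias references.
function resolveToken(path: string, tree: TokenTree = tokens): string {
  let node: string | TokenTree | undefined = tree;
  for (const key of path.split(".")) {
    if (node === undefined || typeof node === "string") {
      throw new Error(`No such token: ${path}`);
    }
    node = node[key];
  }
  if (typeof node !== "string") throw new Error(`Not a leaf token: ${path}`);
  const alias = node.match(/^\{(.+)\}$/);
  return alias ? resolveToken(alias[1], tree) : node;
}
```

Because every choice is a named, machine-readable value rather than a pixel in a mockup, this is exactly the kind of context data an LLM or automation pipeline can consume.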

Dave Calleja [00:12:45]:

Yeah, I think that is a much more meaningful experience than a lot of what generative AI does today, which is so much more aggregated, often because it's not using internal models or that internally structured data. I think the teams that embrace documentation, and as a caveat, when I say documentation, I don't just mean written documentation, although that's an important part; how you structure your tokens, how you structure your naming conventions, how you structure the data and its association with different templates, with personas or scenarios, there's a whole complex web of documentation when I talk about that. Those teams that embrace that, that start to understand how a template or an experience works in different scenarios or for different outputs and outcomes, those are the teams that are going to be ready for this next wave in the most meaningful way. Because of those benefits you talked about, where you can say: hey, we're able to generate really meaningfully personalized experiences at the one-to-one level, dynamically. So you're a new user today for brand X, and the patterns that would be most beneficial to you as a brand-new user of this app, of this product, of this website, that's going to change by the end of your journey. We can do that to a degree through MVT and A/B testing and personalized journeys, but not dynamically. They're still preset; we're still incredibly rigid compared to the scenario you're describing.

Chris Strahl [00:14:06]:

And this is where I think the really cool work can happen. Because while the multimodal experience is important to this realization, the more important thing is that we're able to actually make a determination about what works. And what works is innately something that's in user land. And this has been sort of "there be dragons" territory in the design systems landscape, right?

Dave Calleja [00:14:28]:

Absolutely.

Chris Strahl [00:14:28]:

That's the domain of the product people who consume the design system. But I think the design system can start to care about how users actually use the products built with it. And the really interesting concept that I've been toying around with a lot is this idea: if I'm able to take data from Optimizely or Google Analytics or something else like that, that's over there in user land, and pair it with the data structures and the design choices in the design system, then for that user that's building that product with the design system, instead of giving them a giant tub of Legos, I can actually give them, here's the Lego builder set for a castle or a pirate ship or the Millennium Falcon. And that kit is a lot more valuable because it has a lot of those things that are understood to be valuable pre-baked into it. That, to me, is really exciting.
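A minimal sketch of that "curated kit" idea: take the components tagged for a scenario and rank them by a usage signal pulled from analytics. The component names, the `conversionLift` field, and the scoring formula are all invented for illustration; a real integration with Optimizely or Google Analytics would be far richer.

```typescript
// Design-system metadata on one side, analytics signals on the other.
interface ComponentMeta { name: string; tags: string[] }
interface UsageSignal { component: string; conversionLift: number }

// Assemble a small "kit" for a scenario instead of handing over the
// whole catalog: filter by scenario tag, rank by observed lift.
function buildKit(
  catalog: ComponentMeta[],
  signals: UsageSignal[],
  scenarioTag: string,
  size: number,
): string[] {
  const lift = new Map(signals.map((s) => [s.component, s.conversionLift]));
  return catalog
    .filter((c) => c.tags.includes(scenarioTag))
    .sort((a, b) => (lift.get(b.name) ?? 0) - (lift.get(a.name) ?? 0))
    .slice(0, size)
    .map((c) => c.name);
}
```

The interesting design choice is that the analytics data never leaves user land; only an aggregate signal crosses over to rank what the design system already describes.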

Dave Calleja [00:15:17]:

The idea is that we can start to, as you said, really speed up the things that we're doing today by having AI be that sidekick, to say: here are the templates, here are the components, here are the journeys that are really working or not working for users onboarding to a product. So now, what do those look like? Show me the ones that are working, and I can start to generate a much more optimized, much better, more meaningful experience. But I also imagine a future arriving very quickly as we connect those worlds. In the same way that we have CI/CD pipelines, automation pipelines, to change things in Figma and go all the way to code, whether that be tokens or component updates, and that's a world we're starting to live in increasingly now, we're not that far from a possible future, and I think it's possible now, and I'm sure someone is already doing it, though it's obviously at the top of the relative maturity curve, where you can write into documentation, human-friendly documentation, and say: this is now a better component or a better pattern for onboarding new users. And the system then prioritizes giving that component to a new user when they land, instead of something else. So those levers aren't just, hey, designer, design this new thing that gets handed off through all of the regular channels. Because everything's connected, you can write something in one place and actually affect the customer experience in functionally real time today.

Dave Calleja [00:16:42]:

There should, of course, be checks and balances and all of that, but functionally, that's a possible future.

Chris Strahl [00:16:51]:

There is this idea, and I talked about this a little bit with Dave Calleja a couple of weeks ago, where when you think about the role of AI, and if we're talking about this again in our metaphor of an assembly-line factory, what you're looking at is: I'm fabbing new parts, and then all those parts go into a warehouse, and those warehouses feed a conveyor belt, and there's a bunch of people on that conveyor belt who are assembling cars out of it. What we should be able to do is say: I, Chris, or I, Nick, want this very specific car with this very specific package, with this very specific color and all these other things, and that car should be able to roll off that assembly line built exactly the way we want it.

Nick Hahn [00:17:30]:

The nuance of that is, I don't think we need to ask, I want the car with these specs. We're asking AI now to say: I want to go up that mountain; build me something that enables me to do that on a three-day trip, and it can shoot that out. And that's similar to these experiences. I want a customer, I want a user, to be able to achieve this process on our platform; build me seven variations of that, and I can now look at those and make some choices. And then, guess what, it's already built in code. I don't have to go work with an engineer for six weeks to get it done.

Chris Strahl [00:18:01]:

So I've been having this conversation over and over again lately about design systems as a control tower. There's all this talk about how design has this big uncertainty in front of it: what is AI going to do to the way that we think, fundamentally, about design? And when we answer that question, one of the core questions in it is, how is AI going to change all this? There are a lot of people who think about all these design decisions as just kibble: AI is going to eat all the design decisions out there that are good decisions, and then it's going to be able to spit out the thing that we all request of it. But that cedes a ton of control to a robot that ostensibly doesn't have any taste and isn't a part of the Bill brand or the Zero brand or any of these other brands. And there's so much value tied up in that brand, and in the control of that brand, that, yes, AI should be able to make a bunch of decisions for us, but within a set of constraints. And that set of constraints is governed by a design system that ultimately represents the leash on the dog that's eating all the kibble. And I think there's this interesting place for design systems that's really emergent right now. It's kind of like what you all are doing with the systems-of-systems model, where you're basically saying: we want to give it freedom at the last mile, so it can change really quickly and do kind of anything it wants to do.

Chris Strahl [00:19:20]:

But we also want to be able to have these methods of inheritance that maintain the control over the system, that make sure that we're safeguarding the value of that brand.

Nick Hahn [00:19:28]:

Yeah. There's an interesting analogy that comes up. My amazing partner was a high school English teacher for 10 years, and we've talked about AI in that space. There's a lot of hubbub in the education space, especially about writing: AI can write stuff, so why even teach writing, right? She's shown me articles, and she's done it herself now, where she gives AI a prompt, gets a draft, and then starts from there. And there are new teaching models coming out for schools, and for English teachers specifically, that say: start with AI, give it the prompt, give it the constraints, have it write some things out. And then you doing the work to add in your flavor, your perspective, your humanity, that's the actual work. As opposed to, I just filled ten double-spaced pages, right? The metric of success was sort of output.

Nick Hahn [00:20:15]:

It's just like how much have I done versus am I adding value?

Chris Strahl [00:20:18]:

You remember that from high school and college and everything like that. Okay, it's gotta be like 60 pages, single-spaced, and, you know, one-inch margins and all that other stuff. Man, what a…

Nick Hahn [00:20:26]:

Useless waste of time. Yeah, because we were doing more work to try to meet the requirement of how much space to fill versus what are we learning? What's our comprehension? Are we out there reading stuff? Can I write a 10 page that delivers a more impactful, powerful thing? That's what I think we can do with AI and design systems. Right? Give it the prompt. It's got the constraints, the control tower from the design system so it's not going insane. Right. It doesn't just do everything. And then now, I mean I've been a design manager for so many years now. How many times have we had engineers, teams going, we're just waiting on design.

Nick Hahn [00:20:59]:

Why isn't this done yet?

Chris Strahl [00:21:00]:

Blocked on design.

Nick Hahn [00:21:00]:

Blocked on design. Because we're doing concepting, right? If we could take that concepting phase and turn it into a couple of hours or a day or two where we're just knocking out a bunch of concepts with AI and then we're reviewing it, tweaking it, and going, this is the best possible flow. And now we can hand that off or kill the handoff, but, you know, work more closely.

Chris Strahl [00:21:17]:

You can click a button and all of a sudden it's mostly built for you.

Nick Hahn [00:21:20]:

Mostly built for you, right? And it's not taking the humanity out of it. It's letting us use our superpower, which is taking in all the other factors that you can't type into a computer, at least.

Ranjeet Tayi [00:21:35]:

What's happening in this whole autonomous world, right? There are three steps to it. One is manual. The second is copilot, which is assisting you while you're doing things; that's copilot. And the third one is autopilot, which is complete autonomous mode, right? Take the example of a Tesla car. You have manual driving, where you drive in a fully manual way. You have suggestive driving, where you put it into this mode and you get recommendations, a human-and-system loop. And the third one is autopilot, fully autonomous; the car completely drives on its own. The same thing will happen to the software industry.

Ranjeet Tayi [00:22:14]:

The applications will run on their own. So if you think about that as a metaphor, it can be applied to any particular kind of business, wherever there is a possibility of interaction. A certain percentage of things can be run in this particular mode.

Chris Strahl [00:22:33]:

Gotcha. And that's different than, say, invisible AI, or AI that exists in the background. Because what you're actually talking about is these experiences that are relatively autonomous, that are windows into data or content that we want to see. And that is very different than the way we think about things now, right? We think about things now as these static products or web properties that have some predefined set of content. Some of that content can be dynamic based on what a user is looking at or what they need, but there's very little of it that is not predefined in terms of an experience. And I think one of the things that's interesting about this is the idea of just-in-time user interfaces, or the ability to say we're going to have some level of hyper-personalization. So maybe the content is still static, but the way that it's displayed varies based on what a user's preferences are.

Ranjeet Tayi [00:23:31]:

Hyper-personalization is happening big time now. AI is also making a lot of it possible because of the sample size: in terms of behaviors, in terms of tracking frequent tasks, or learning from groups of people across various domains and patterns, what their behaviors are, what their needs are, what their actions are. And hyper-personalization is going to be a big thing now because you can learn not just from your own behaviors; you can learn from similar profiles, many, many profiles. Age, location, differences in time, a lot of patterns can be combined, and the accuracy of the predictions those models make will be much, much better.

Chris Strahl [00:24:17]:

So are you envisioning a world, at some point in the future, where you're not going to go to one place for your banking, another place for your healthcare, maybe a third place for social networking, and a fourth place for productivity? You're thinking about this as: there's going to be an AI model, and that AI model is going to have interfaces that are pre-built and delivered in a response from, picking a really simple example, a chatbot?

Ranjeet Tayi [00:24:43]:

There can be multiple; they don't need to be on a single model. There can be multiple models connected to a single thing. It's kind of a model of models, like a bot of bots. It's a hierarchy of bots which can actually help you to navigate, and help you in terms of predicting what your next step is. And if it's past a certain threshold, if it knows that this is what the user is trying to do, it might even execute automatically.
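The "bot of bots" hierarchy, with a confidence threshold deciding between copilot-style suggestion and autopilot-style execution, might be sketched like this. All names are hypothetical, and the simple substring match stands in for a real intent classifier.

```typescript
// A specialist bot handles intents for one domain.
interface Bot { domain: string; handle(intent: string): string }

// The parent router delegates to a specialist, and only executes
// automatically above a confidence threshold; below it, the human
// stays in the loop (copilot mode).
class RouterBot {
  constructor(private bots: Bot[], private threshold = 0.8) {}

  route(intent: string, confidence: number): string {
    const bot = this.bots.find((b) => intent.includes(b.domain));
    if (!bot) return "escalate: no specialist bot for this intent";
    return confidence >= this.threshold
      ? bot.handle(intent)        // autopilot: execute on the user's behalf
      : `suggest: ${bot.domain}`; // copilot: recommend, let the human decide
  }
}
```

The same three modes Ranjeet describes, manual, copilot, autopilot, collapse here into one threshold comparison per request.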

Chris Strahl [00:25:09]:

Yeah, I think that's really fascinating, this idea that there's going to be some sort of single entry point that is highly personalized for all of our experiences, as we have a bunch of robots that go do our bidding around the Internet to gather the information that we want or desire into one place. And it has fantastic implications for design. I was actually reminded of a tweet exchange between Scott Belsky and another person, whose name I'll have to remember after the show. They were talking about how, in an AI-enabled future, interfaces are the differentiator in AI. And if interfaces are the differentiator, designers are the winners here, because designers get the opportunity to be the ones who design these interfaces that are going to be for broad-scale consumption of all this work that AI is doing for us.

Ranjeet Tayi [00:25:57]:

That's correct. In fact, I keep telling a lot of people, this is the best time ever to be a designer. Because designers are blessed with one great skill called empathy. That's one, right?

Chris Strahl [00:26:13]:

Right.

Ranjeet Tayi [00:26:14]:

The second thing is that a designer's greatest asset is that they are able to visualize things, right? And all of this is going to be available in multimodality. There are multiple different kinds of modalities applicable: you can do it on multimodal devices, you can do it from a watch, from a phone, from your iPad, from a kiosk, or from your car. And the second thing is the input mechanism: voice or text is becoming one of the major alternative modes of interface, even a primary one. The input we give, what we also call prompt engineering, is becoming the most important skill in today's era, right? Whether you're an engineer or a designer, that's the skill everyone has to learn.

Chris Strahl [00:27:11]:

Yeah, there are a lot of people talking about this as the next coding language, right? This idea that at some point natural language is effectively writing code. And the reason there's very little distinction between the two is that if AI is good enough to understand intention, and to understand your native language, culture, context, and all these other confounding factors, then ultimately the code running behind the scenes is of little relevance, as long as it's able to interpret the user input and resolve the right responses.

Ranjeet Tayi [00:27:44]:

That's true. One of the biggest advantages of these advances is in code generation. If you look at GitHub Copilot or any of the other code-gen tools, whether it's writing SQL or working with some other technology, it's becoming much easier. What that means is that everyone can become a coder, right? If you have minimum skills, you can become a developer. And the efficiency of the code is also getting better, so you have fewer errors and so on. It's an enabler; you don't need to rely on it completely, but almost 80% of your work is done. And the most powerful thing, which amazes me, is that Microsoft recently launched its Copilot with support for many languages.

Ranjeet Tayi [00:28:35]:

Whether you talk in Telugu, Gujarati, Japanese, or whatever, it understands and writes the code. So imagine the language barrier, or the whole cultural context: the whole world is getting enabled by these things.

Chris Strahl [00:28:50]:

Yeah, I watched an absolutely fantastic demonstration that Richard Banfield, who works at Knapsack, did, where he was speaking on Zoom in English and it was live-translating, with inflection and intonation, into Korean. It was absolutely fascinating. There was a couple of seconds of delay, but the idea that it could translate not into a robotic voice, but into something that actually sounded like him if he were a native Korean speaker, was absolutely fascinating.

Ranjeet Tayi [00:29:19]:

That's fantastic, right? Because if you think about all of this conceptually, at a high level, it's translating, learning, and then responding back; it's decoding and acting. You can apply that same principle to any use case, and that's where the wonders are happening. Take moviemaking, with LTX Studio. LTX Studio is a tool for next-gen filmmaking; it's virtual storyboarding, or basically storytelling. You just write your prompt and it creates your whole video.

Ranjeet Tayi [00:30:03]:

And a lot of people are actually creating movies and ads just with prompts. People are also creating music, creating storyboards, creating entire scenes and the interactions between those characters.

Chris Strahl [00:30:20]:

So there are all these creative applications happening for AI, but one thing this is fundamentally rooted in is structured data. So I want to bring it back to the design systems angle for a second. One of the brilliant things about a design system is that it innately has a lot of structure. And the structure and constraints that exist inside a design system seem very ripe to power AI tooling. A big part of this is that clean data creates clean experiences within AI, and design systems are all about that clarity. So when you look at a design system as a copilot for AI, what do you see as the value there?

Ranjeet Tayi [00:31:01]:

This is one of the next big things. Design systems, design tools, and AI are already being impacted in a big way, and a lot of innovation is happening in this particular space. The design system will become the lifeblood of any application. A design system has all the relevant patterns and components, and also the standards. There's already a lot you can do if you have a good design system: you can generate missing components for an application, and you can learn from collective knowledge. For example, take banking applications. If you train the model on various banking applications and you have your design system, it can predict, recommend, or generate patterns that are missing. It can take live feedback on which patterns work and which don't. And it can auto-suggest layouts, suggest what a good layout might be. Some tools already do this; tools like Uizard do generative applications, where you write your prompt and it creates the output. But think about your own design system.

Ranjeet Tayi [00:32:31]:

You have your own branding standards, your own company specifics, and so on. If you train the model on that particular design system, train it on all your use cases, and put it in your ecosystem, then the system can tell you: in this domain, in this business context, for our customer-facing applications, here's how people are actually using things. Connect it to the telemetry: how people are using it, how people are clicking, the behavior patterns and analytics. That improves the productivity and efficiency of the tool, number one. And number two, you can run a lot of simulations. It can simulate, which is a huge productivity gain for designers.
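The structure Ranjeet describes can be made concrete: a design system manifest is just structured data, which is exactly what makes it a good guardrail for generative tooling. Here is a minimal, hypothetical sketch (all names and shapes are invented for illustration, not from any real tool) of checking an AI-generated layout against the components a system actually allows:

```typescript
// Hypothetical sketch: a design system manifest as structured data,
// and a check that AI-generated output only uses approved components.

interface ComponentSpec {
  name: string;
  allowedProps: string[];
}

interface GeneratedNode {
  component: string;
  props: Record<string, unknown>;
}

const manifest: ComponentSpec[] = [
  { name: "Button", allowedProps: ["label", "variant"] },
  { name: "Card", allowedProps: ["title", "body"] },
];

// Returns a list of violations; an empty list means the layout conforms.
function validateLayout(nodes: GeneratedNode[], specs: ComponentSpec[]): string[] {
  const byName = new Map(specs.map((s) => [s.name, s]));
  const violations: string[] = [];
  for (const node of nodes) {
    const spec = byName.get(node.component);
    if (!spec) {
      violations.push(`Unknown component: ${node.component}`);
      continue;
    }
    for (const prop of Object.keys(node.props)) {
      if (!spec.allowedProps.includes(prop)) {
        violations.push(`${node.component}: disallowed prop "${prop}"`);
      }
    }
  }
  return violations;
}
```

The point of the sketch is the constraint, not the validator itself: because the manifest is machine-readable, a model can be asked to generate only within it, and anything outside it can be rejected before a user ever sees it.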

Chris Stroll [00:33:25]:

Yeah, I mean, the ability to rapidly iterate through thousands of potential variations of a design is really powerful.

Ranjeet Tayi [00:33:31]:

And not only that, you can run analytics; you can make it completely, 100% measurable.

Chris Strahl [00:33:38]:

Design at the interface level, not just at the page level. And that, to me, is a really interesting distinction, right? Because we have lots of tools that will analyze a page, A/B testing tools, lots of analytics data about a page. But the problem is that the telemetry is limited in its scope to a page or to a single URL.

Ranjeet Tayi [00:33:55]:

So currently, even though there's a lot of telemetry, it lives in different tools, and what happens is that nothing feeds back to the design system. If you say the design system is your source of truth, then all of these AI-powered interactions should feed back into it. Design systems are the source of truth for developers too, right? They use all these tokens and so on for development, and the product managers, the entire company, depend on the components built with that unified design system. The comments, the feedback, the user feedback: if we integrate all of this and keep that continuous learning cycle going, that's going to be really amazing. That's the next thing, I think.
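The feedback loop described here, telemetry keyed to design system components rather than to page URLs, can be illustrated with a toy sketch. All names here are hypothetical; the idea is simply that every interaction event carries the component and variant that produced it, so usage data aggregates at the design system level:

```typescript
// Toy sketch: interaction telemetry keyed by design system component,
// so usage data can feed back into the system rather than a page URL.

interface UsageEvent {
  component: string;                         // design system component name
  variant: string;                           // which pattern/variant was rendered
  interaction: "click" | "view" | "dismiss"; // what the user did
}

// Aggregate raw events into per-component, per-variant counts that a
// design system team (or a model trained on them) could consume.
function aggregate(events: UsageEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    const key = `${e.component}/${e.variant}/${e.interaction}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

With data shaped like this, "which patterns work and which don't" becomes a query over component-level counts instead of a page-by-page analytics exercise.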

Kate Moran [00:34:50]:

I really see AI changing what we consider the designer's job or role to be in pretty monumental ways. And there's a lot of speculation out there about how this will all shake out. And we at Nielsen Norman Group have been making our best educated guesses doing research on these topics, both with users and practitioners, speaking to a lot of people in the industry. So we're making our best guesses. But I just want to preface this by saying this is very much up in the air. It depends on a lot of factors, including how the technology continues to evolve and how available and expensive the resources are that are required to run these systems. So I think we have to take all of these predictions with a grain of salt, considering those limitations.

Chris Strahl [00:35:39]:

So before we dive too deeply into the crystal ball, why do you think those two things are so important here? The cost one is sort of self-evident, right? If this is all of a sudden way too expensive, nobody's going to use it because it's prohibitive. But when you think about those other constraints on the technology, what in particular are you talking about?

Kate Moran [00:36:00]:

Yeah. So anybody in your audience who's familiar with generative AI as it's emerged over the last two years into public consciousness and public awareness is probably familiar with some of the downsides of those systems, like hallucinations: the tendency these large language models have to generate falsehoods, false information. In an information-seeking context, that's what we're worried about. But when we're thinking about producing something, to me it's more about the unpredictability, a little bit of unreliability. Do you want generative AI designing your interfaces for you on the fly? It depends a lot on the context, but also on how much risk is involved. With the technology we have right now, I think we're nowhere near close to handing all of the design decisions over to a ChatGPT equivalent, even if it's integrated into Figma. So that's one side of it: the reliability.

Chris Strahl [00:37:02]:

I think it's also funny that Figma literally just debuted this, like a week ago, right? This idea of, hey, here's a chat interface where you can tell Figma what to design for you. And the interesting part is that somebody who spends a ton of time in AI research is saying that might not be it.

Kate Moran [00:37:17]:

Figma is a case where it makes sense. And I think it works fine if you're ideating, if you're coming up with different ideas, whether for an entire product, at the feature level, or even something as small as UI copy. The tools we have now are actually great for those kinds of things: early-stage ideation and exploration. They can even be really helpful for initiating desk research. If you're sitting down to design, say, a specific component within your design system, you can kickstart your desk research into the best practices around that component really quickly and easily. So Figma integrating these tools, I'm actually really excited about, because it's still within the design process itself and there's still human oversight. What Sarah Gibbons and I, also of Nielsen Norman Group, wrote about in our article on what we're calling GenUI, which is generative UI, is different. It's the concept that you have real-time AI systems generating, probably at first, little components of the UI, but eventually content as well. And in a possible long-term future, I think we're looking at years here, maybe entire interfaces being dynamically created in real time based on the individual person using the system, the context the system has about them, and what their needs are in that moment.

Kate Moran [00:38:46]:

So we're thinking about hyper-personalized interfaces. That, to me, is very different from asking Figma to help accelerate your design process and then using your human insight and contextual awareness to guide that process.
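One way to picture the human oversight Kate emphasizes is that in a GenUI setup, the model would emit a declarative spec rather than raw markup, and the client would render it only through pre-approved paths, with a human-designed fallback for anything unexpected. This is a speculative sketch, not a real product's API; every name and shape here is invented:

```typescript
// Hypothetical GenUI sketch: the model returns a declarative spec, and the
// client renders it only through vetted renderers, falling back to a
// human-designed UI when the spec is unknown or malformed.

type UISpec =
  | { kind: "text"; value: string }
  | { kind: "chart"; series: number[] };

// Validate untrusted model output; null means "use the static fallback UI".
function parseSpec(json: string): UISpec | null {
  try {
    const obj = JSON.parse(json);
    if (obj && obj.kind === "text" && typeof obj.value === "string") return obj;
    if (obj && obj.kind === "chart" && Array.isArray(obj.series)) return obj;
  } catch {
    // malformed JSON falls through to the fallback below
  }
  return null;
}

// Render only through known, pre-approved branches.
function render(spec: UISpec): string {
  switch (spec.kind) {
    case "text":
      return `<p>${spec.value}</p>`;
    case "chart":
      // A real renderer would draw the series; here we just summarize it.
      return `<figure>chart with ${spec.series.length} points</figure>`;
  }
}
```

The design choice doing the work is the narrow `UISpec` type: the model gets "some degree of freedom," but only within shapes that designers have already vetted, which is what keeps unreliability from reaching the end user.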

Chris Strahl [00:39:03]:

Yeah, what I love about design tools generally, and I think this largely drives at the purpose of why these tools exist, is the ability to iterate away from user land for a little while. You need that exploration, that time when you can work on the thing that isn't ultimately going to show up in front of you and me as consumers of an experience. And I think what Figma showed was really cool as an interesting stepping stone into this world of generative interfaces. I agree with you that there's a lot at play here. There's the content piece, the interface piece, the brand piece that we haven't even really touched on yet. But all of this comes together in this unique moment that is this experience. And that experience has a lot of the things you're talking about.

Chris Strahl [00:39:43]:

It has a context, and that context could be all kinds of stuff. I could be fighting my two children for survival while on the couch, trying to order dinner that night, or I could be sitting at my desk, the world peaceful and calm, headphones on, listening to chillhop, with plenty of time to do the thing I'm trying to do. One of those is the time I buy airplane tickets, and the other is the time I order whatever the easiest pizza is. And that in particular is the interesting part about what AI is giving us: the idea that the experience can be adaptive, that it can meet people in that moment, whatever is going on in their life, as the context for how we provide them the easiest pathway to solve a problem.

Kate Moran [00:40:29]:

I think what's exciting about that is this ability, or potential ability at some point in the future, to create these hyper-personalized experiences that meet each person's individual, specific needs in that moment. And we can start speculating about what that's going to look like. It doesn't necessarily have to be HTML or visual interfaces; it can be audio content, it can be data visualizations dynamically generated in the moment. That's really exciting for designers, and it's something we should be thinking about, including what our role becomes in that environment. But as we speculate about what that future might look like, I think it's really important to remember the limitations of these systems. One of the other potential obstacles is resources.

Kate Moran [00:41:23]:

So in the article that Sarah and I wrote about generative UI, we walk through a hypothetical example of someone who is searching for and booking a flight for a trip. That's a pretty specific example, and we tried to keep it concrete, because this is kind of an esoteric, abstract idea to a lot of people. But if you think about a world in which all of the digital devices we interact with are dynamically generating the experiences we're going through in real time, that would be extremely resource-intensive. And hardware is not my area of expertise at all, but I have been following this because it's interesting. For most of my career I got to just focus on software; suddenly, like a lot of people, I'm having to pay more attention to the hardware side than I have for a long time.

Chris Strahl [00:42:17]:

It's going to take a lot more than a bunch of Bitcoin miners transitioning over to AI number crunching to get where we want to be on this front.

Kate Moran [00:42:24]:

Absolutely, yeah. So that's a big limitation right now. I do think that there's a lot of money to be made in designing these systems or building chips that require fewer resources, that can be done more cheaply, more quickly. So we'll just see where that goes. That kind of market pressure might accelerate the pace of innovation there.

Chris Strahl [00:42:42]:

To take that somewhat abstract concept we've been talking about, these adaptive interfaces that exist everywhere, in real time, on the fly, where we don't have a predefined notion of what an experience is and the experience is just constructed from data based on our individual preferences: that seems pretty far-flung even to me, and I spend a lot of time thinking about this stuff. When I think about that as the future, as where this is all headed, you're right, that is pretty speculative. We don't know, but we have some pretty good indicators that things are headed in that direction. What I think is a lot more grounded for people is the question of how the ways we create interfaces are changing now. Because I think the vast majority of designers and engineers out there are going, holy cow, I can make a work product with just a few phrases or sentences and some context.

Chris Strahl [00:43:34]:

And that's crazy. That's really, really revolutionary for me, because now in Figma I can create a designed experience that has some brand adherence and all this other stuff; in code, I can generate a bunch of stuff in React or whatever framework of choice; and I can even do a lot of things in Knapsack to link those things together. But beyond that, what is it all towards? Because there's this idea, pretty dystopian if you're a designer, that AI is coming for my job. But then there's this other idea: what if we redefine what that job is, and start to think about more than just that single experience, that singleton, as the thing we're creating for our users, and instead think about how we create this stratification, this multimodal concept of all the different possible great experiences we could create for users? I think that is absolutely core to what we were just talking about. How do you see that coming online?

Kate Moran [00:44:29]:

Yeah, so I think first we have to make a distinction here between generative UI, or GenUI as I've been calling it, where we're giving an AI system some degree of freedom to generate pieces of an interface, pieces of content, or, in the most forward-looking version of that, the entire interface itself. A scaled-down, more concrete example would be a website where some of the content is dynamically generated or altered in real time for different users, based on analytics data and what we know about that user's profile from past behavior. I think about that as personalization on a lot of steroids, extreme personalization, essentially. Swole personalization, yes, very swole personalization. So that's on one side, the GenUI side. The other side, and this is what's happening right now, this is what we see with AI coming to Figma, is this huge rush, such an influx of new tools flooding the market, targeted at either UX people or their bosses who are looking to replace UX people. And that's the part that's really scary right now.

Kate Moran [00:45:44]:

That's kind of a different application of AI: not generating the interface, but generating the design, or accelerating the design and research process. So that's kind of a different question. It's our job at Nielsen Norman Group to keep an eye on the industry, on where we're heading, on the new trends, the new tools, the new processes, testing them out, doing research with practitioners. And we really try not to be the people who rush after the flashy buzzword kind of thing. Sometimes that creates a perception of us as being a little more, I don't know, old-fashioned. But my perspective is that we try to provide research-based guidance for people who are in UX roles or adjacent to UX roles. With that context, I have to say that a lot of the AI tools for design that we've tested so far, and the tools for research, are nowhere close to replacing a human. Human beings are still very much essential in making sure that the output of these systems makes any sense at all, or has any level of consistency or quality.

Richard Banfield [00:47:03]:

Well, that brings us to the end of our AI deep dive. In the episode you just heard, we explored how AI is reshaping digital product design. We heard from David Calleja on AI as a creative partner in design systems, and from Nick Hahn on using AI for rapid iteration while keeping that absolutely essential human oversight. Ranjeet Tayi gave us a glimpse into AI's evolution from assistant to autopilot. And Kate Moran reminded us that while AI unlocks hyper-personalization, it's also our responsibility to keep the user experience front and center. So what's the big takeaway? AI isn't replacing us; it's helping us work smarter, automate the tedious, and focus on what really matters. If you liked this conversation, don't forget to subscribe and check out more at knapsack.cloud. And thanks for listening.

Richard Banfield [00:48:03]:

We'll see you next time.

Chris Strahl [00:48:05]:

Hey everyone, thanks for listening to another episode of the Design Systems Podcast. If you have any questions, topic suggestions, or want to share feedback, go ahead and reach out to us on LinkedIn; our profile is linked in the show notes. As always, the podcast is brought to you by Knapsack. Check us out at knapsack.cloud. Have a great day, everyone.
