
Why Kate Moran Thinks Humans Are Essential to the Future of Generative UI and AI.

What does the future of UX look like? In today's episode, our host, Chris Strahl, sits down with Kate Moran, the VP of Research and Content at Nielsen Norman Group. They explore the promise of hyper-personalized experiences, the challenges of resource limitations, and the innovative potential of AI in accelerating design processes. Kate touches on the balance between speed and risk when integrating AI into products, highlighting the significance of human input in maintaining brand consistency and preventing unpredictability. Their conversation addresses ethical considerations, data privacy, and the societal impact of AI, emphasizing transparent communication from organizations.

Guest

Kate Moran is an expert in UX research, writing, and strategy. She has made substantial contributions to the field through her research and thought leadership. Kate is Vice President of Research and Content at Nielsen Norman Group.

Transcript

Chris Strahl [00:00:00]:

Hi and welcome to the Design Systems Podcast. This podcast is about the place where design and development overlap. We talk with experts to get their point of view about trends in design and code, and how it relates to the world around us. As always, this podcast is brought to you by Knapsack. Check us out at knapsack.cloud. If you want to get in touch with the show, ask some questions, or generally tell us what you think, go ahead and tweet us @TheDSPod. We'd love to hear from you. 

Hey everyone, welcome to the Design Systems Podcast. I'm your host, Chris Strahl. Today I'm here with Kate Moran. Kate works at Nielsen Norman Group. Kate, why don't you talk a little bit about what your role is there?

Kate Moran [00:00:31]:

Yeah, sure. So I am VP of Research and Content at Nielsen Norman Group. So I'm just a big old research nerd and get to spend a lot of my time talking to people in the industry, conducting research, overseeing the production of our articles. If anyone in your audience isn't familiar with us, we're Nielsen Norman Group, also known as NNG, and we produce a lot of training material. We train UX professionals around the world on various topics, from research to design to leadership. We also produce free articles. We have a massive library of over 2,000 free articles. Those articles document things like different design processes, different research methods, and design patterns, and we also have quite a bit of content around design systems.

Chris Strahl [00:01:22]:

Awesome. Well, I always love when there's more design system content, because one of the things that's still hard in this industry, given it's something that kind of took off when the whole world was shut down, is that there tend to not be a lot of people out there being really loud about where they've seen a lot of success. As a community, we tend to be really focused on the things that we have to get done, which makes sense. But I think that it's wonderful to take a moment and sort of blast out to the world all the cool stuff that we're able to do with design systems. 

And in our pre-meet, that's kind of what we wanted to chat about today: this idea of, in particular, how AI and design systems are starting to change the way we think about the breadth of experience that we all have. Maybe to set some context and some background: you've written a lot about AI, in particular a lot about generative AI, and about these ideas of how AI is changing the user experiences that we all have. Can you give me a sense, at a fairly high level, of where you see AI's impact on user experience driving us?

Kate Moran [00:02:23]:

Yeah, I mean, we could probably talk about that for the rest of the time that we have for this episode, but I would say at a high level, I really see AI changing what we consider the designer's job or role to be in pretty monumental ways. And there's a lot of speculation out there about how this will all shake out. And we at Nielsen Norman Group have been making our best educated guesses, doing research on these topics, both with users and practitioners, speaking to a lot of people in the industry. So we're making our best guesses. But I just want to preface this by saying this is very much up in the air. It depends on a lot of factors, including how the technology continues to evolve and how available and expensive the resources are that are required to run these systems. So I think we have to take all of these predictions with a grain of salt, considering those limitations.

Chris Strahl [00:03:19]:

So before we dive too deeply into the crystal ball, why do you think those two things are paramount here? Like, I think the cost one is sort of self-evident, right? Like, if this is all of a sudden way too expensive, nobody's gonna use it because it's gonna be prohibitive. But when you try to think about those other constraints on the technology, what in particular are you talking about?

Kate Moran [00:03:40]:

Yeah, so anybody in your audience who's familiar with generative AI as it's emerged over the last two years into public consciousness and public awareness is probably familiar with some of the downsides of those systems, like hallucinations. So this tendency that these large language models have to kind of generate falsehoods, false information; in an information-seeking context, that's what we're worried about. But when we're thinking about producing something, to me it's more about the unpredictability or a little bit of unreliability. So do you want generative AI to be designing your interfaces for you on the fly? It depends a lot on the context, but also how much risk you have involved. With the technology that we have right now, I think we're nowhere near close to saying just hand a ChatGPT equivalent, you know, even if it's integrated in Figma, hand over all of the design decisions to those systems. So that's one side of it: the reliability.

Chris Strahl [00:04:41]:

I think it's also funny that Figma literally just debuted this like a week ago, right? It was this idea of, like, hey, here's this chat interface where you can tell Figma what to design for you. And I think the interesting part is that you, somebody who spends a ton of time in AI research, are saying, like, that might not be it.

Kate Moran [00:04:57]:

Figma is a case where it makes sense. And also I think it works fine if you're ideating, if you're coming up with different ideas for whether it's an entire product or at the feature level, or even something as small as UI copy. The tools that we have now are actually great for those kinds of things, like early stage ideation, exploration. It can even be really helpful for initiating desk research. If you're sitting down to design, you know, how are we going to design a specific component within our design system? You know, you can kickstart your desk research in terms of the best practices around those design components really quickly and easily. 

So Figma, I think Figma integrating these tools, I'm actually really excited about that because it's still in the design process itself and there's still human oversight. What Sarah Gibbons and I, also of Nielsen Norman Group, wrote about in our article on what we're calling GenUI, which is generative UI, is different. It's the concept that you have real-time AI systems generating, probably at first, little components of the UI, but also eventually content and, in a possible long-term future (I think we're looking at years here), maybe even entire interfaces being dynamically created in real time based on the individual person who is using the system, the context that system has about them, and what their needs are in that moment.

So we're thinking about hyper-personalized interfaces. So that to me is very different from asking Figma to help accelerate your design process and then using your human insight and contextual awareness to guide that process.

Chris Strahl [00:06:42]:

Yeah, I mean, what I love about design tools generally, and I think this largely drives the purpose of why these tools exist, right, is the ability to iterate away from user land for a little while. Like, you need to have that exploration, that time when you can do that thing that isn't ultimately going to be the thing that shows up in front of you and me as consumers of an experience. And I think that what Figma showed was really cool as this interesting stepping stone into this world of generative interfaces. I agree with you that there's a lot at play here. There's the content piece, the interface piece, the brand piece that we haven't even really touched on yet. But all of this comes together in this unique moment that is this experience.

And that experience has a lot of things you're talking about it has a context. That context could be all kinds of stuff. I could be like, fighting my two children for survival while on the couch trying to order like dinner that night. Or I could be in a situation where I'm like sitting in front of my desk, the world is peaceful and calm. I have my headphones on and I'm listening to chill hop. And I have plenty of time to do the thing that I'm trying to do. And one of those is the time that I buy airplane tickets, and the other is the time that I'm like, whatever, the easiest pizza to order is. And I think that in particular is the interesting part about what AI is giving us is this idea that that experience can be adaptive and it can meet people in that moment, whatever is going on in their life, as the context for how we want to provide them the easiest pathway to solve a problem.

Kate Moran [00:08:09]:

I think what's exciting about that is this ability to create, or potential ability at some point in the future to create these super hyper-personalized experiences that meet each person and their individual specific needs in that moment. And we can start speculating about what that's going to look like. And it doesn't necessarily have to be HTML or visual interfaces. It can be audio content, it can be dynamically generated data visualizations in the moment. And that's really exciting for designers. And it's something that I think we should be thinking about and thinking about what our role becomes in that environment. But as we're sort of speculating about what that future might look like, I think it's really important to again remember those limitations of the systems. One of the other potential obstacles there is resources.

In the article that Sarah and I wrote about generative UI, we walk through a hypothetical example of someone who is searching for a flight, booking a flight for a trip. Now, that's a pretty specific example, and we tried to keep it pretty concrete just to get the idea across, because this is kind of an esoteric, abstract idea to a lot of people. But if you think about a world in which all of the digital devices that we interact with are dynamically generating the experiences that we're going through in real time, that would be extremely resource intensive. And, you know, hardware is not my area of expertise at all. But I have been following this because it's interesting. As somebody who, for most of my career, got to just focus on software, suddenly, like a lot of people, I'm having to pay more attention to the hardware side than I have for a long time.

Chris Strahl [00:09:57]:

It's going to take a lot more than a bunch of bitcoin miners transitioning over to like AI number crunching to get where we want to be on this front.

Kate Moran [00:10:04]:

Absolutely. Yeah. So that's a big limitation right now. I do think that there's a lot of money to be made in designing these systems, or building chips that require fewer resources, so that it can be done more cheaply, more quickly. So we'll just see where that goes. That kind of market pressure might accelerate the pace of innovation.

Chris Strahl [00:10:21]:

To take that somewhat abstract concept that we've been talking about, these adaptive interfaces that exist everywhere: they're real-time, they're on the fly, and we don't have a predefined notion of what an experience is. That experience is just constructed from data based on our individual preferences. That seems pretty far-flung even to me, and I spend a lot of time thinking about this stuff. When I think about this, is that future where this is all headed? And you're right, that is pretty speculative. We don't know, but we have some pretty good indicators that things are headed in that direction. What I think is a lot more grounded for people is the idea of how the ways that we create interfaces are changing now. Because I think that the vast majority of designers and engineers that are out there are like, holy cow, I can make a work product with just a few phrases or sentences and some context information.

And that's crazy. That's really, really revolutionary for me, because now in Figma I can create a designed experience that has some brand adherence and all this other stuff like that. In code, I can generate a bunch of stuff in React or whatever framework of choice. I can even do a lot of things in Knapsack to link those things together. But beyond that, what is that all towards? Because there's this idea that's pretty dystopian if you're a designer, that AI is coming for my job. But then there's this other idea: well, what if we redefine what that job is and we start to think about more than just that single experience, that singleton being the thing that we're creating for our users, and start to think about how we create this stratification and this multimodal concept of all of the different possible great experiences we could create for users. I think that is absolutely core to what we were just talking about. How do you see that coming online?

Kate Moran [00:12:09]:

Yeah, so I think first we have to make a distinction here between generative UI, or GenUI, as I've been calling it, which is where we're having some degree of freedom for an AI system to generate either pieces of an interface, pieces of content, or, in the most future-looking version of that, the entire interface itself. 

So maybe a scaled-down, more concrete example of that would be a website where some of the content is dynamically generated or altered in real time for different users based on analytics data and what we know about that user's profile from past behavior. I think about that more as personalization on a lot of steroids. Extreme personalization, essentially swole personalization. Yes, very swole personalization. So that's on one side, that's the GenUI side. The other side, and this is what's happening right here, this is what we see with, you know, AI coming to Figma, and also this huge rush.

There's such an influx of new tools flooding the market right now that are being targeted at either UX people or their bosses who are looking to replace UX people. And so that's the part that's really scary right now. That's kind of a different application of AI: not generating the interface, but generating the design, or accelerating the design and research process. So that's kind of a different question, I have to say. So it's our job at Nielsen Norman Group to keep an eye on the industry, keep an eye on where we're heading, what the new trends are, what the new tools are, what the new processes are, testing them out, doing research with practitioners. And we really try not to be the people who rush after the flashy buzzword kind of thing. Sometimes that creates a little bit of a perception of us as being a little bit more, I don't know, old-fashioned. But my perspective on it is we really try to provide research-based guidance for people who are in UX roles or in roles adjacent to UX.

So with that context, I have to say that a lot of the AI tools for design that we've tested out so far, and the tools for research, they are nowhere close to replacing a human. Human beings are still very much essential in making sure that the output of these systems makes any sense at all, has any level of consistency or quality. As we start to see more tools like Figma integrating AI features into the places where UX professionals are already working, we're going to probably see better outputs. Like, that's what you're talking about with, like, it's pulling from these brand elements, which is amazing. So that's going to be a higher level of quality.

Chris Strahl [00:14:56]:

Let's dive into the brand side of things just really quickly, because I think this is actually a big part of why there's still a really serious human need here, right? And when I think about the brand value that an organization goes and represents, there's a ton of value unlock in AI tools being able to create this multiplicity of experiences that takes a bunch of user information and ties it to the interfaces and the content that are shown to a user. But if that divorces itself from brand, there's also a lot of brand value that's at play here. And I think that there's a lot of risk that many people, especially at that executive and brand level, think about. That is like: all right, look, it's great that we can go write a prompt and have it make a thing for us, and that thing is touchable, usable, tangible. But if that isn't adhering to the consistency of my brand, I've lost something in that as well. And that loss could potentially be really, really important. And we see examples of this all the time, where, like, you know, AI bots suddenly become racist, right, or...

Kate Moran [00:15:59]:

Giving misinformation to a customer in a customer service context.

Chris Strahl [00:16:02]:

Totally, yeah.

Kate Moran [00:16:03]:

I think risk is the right way, for anybody out there who's having this conversation with somebody in leadership. I think risk is definitely the right way to stress that human involvement is still essential with all of these tools. And I would say that also applies to GenUI. When Sarah and I envision a future where we have more and more pieces of interfaces that are dynamically generated by AI systems, humans setting the parameters or the constraints around what they should be designing and what content they should be generating is still going to be essential, so that you don't end up with those situations where they go off the rails. And that, again, is a part of that hallucination, that unpredictability.

Chris Strahl [00:16:45]:

Right. So then accounting for that risk is really interesting, because what you're able to do is say: okay, I have some experiences that are really low risk, like some generated content that is minor copy or something like that. And then I have things that are really, really high risk, like core brand representations, like my logo or the tagline for my company. But then you also have all this stratification in between. And what we're trying to do is strike this balance between the speed of creativity, the speed and efficiency with which you're able to create a product, and the risk that you're taking on by having a lot of that product not necessarily be this deeply trusted thing that you have a human building. And so unequivocally we have some efficiency advantage at play here. What we do with that efficiency advantage becomes a really interesting question. Before the show, you had this really interesting tidbit you threw out.

Is AI a feature or is it a product? I think this drives really heavily at that particular question. Let's go ahead and ask that. Is this a product or is this a feature?

Kate Moran [00:17:49]:

Well, right now it's both, because we definitely have products. And this, by the way, wasn't my idea. This is a conversation that's happening actively in the AI design community. We've seen some failed AI-based products at this point. I actually signed up for the Rabbit R1 because I wanted to test it out, this little standalone device that's supposed to run some of your apps for you. So that was, I would say, a case of them getting over their skis, racing towards the generative UI kind of threshold. So we've seen some products that have failed. We've seen some products that have been really successful, like ChatGPT, Claude, Midjourney; still extremely successful standalone AI products.

But we are starting to see more integration of AI within existing products. And similarly, we've seen examples of that. I've seen a lot of bad examples of that, but I've also started to see some really good examples. I have not had my hands on the new AI tools in Figma yet, but I have pretty high hopes for that. Similarly, Amazon just added a little AI sparkles icon to their app toolbar.

Chris Strahl [00:18:57]:

And you're like, why the depressing iconography of AI? I'm sorry if the person that actually came up with the AI icon is listening to this, this is not your fault, because the industry decided to grab sparkles and then apparently sparkles is the thing.

Kate Moran [00:19:13]:

And you know what it is? It's this idea that it's magic and you can just sprinkle some AI on and it'll suddenly become magical. I think what we're seeing right now in the feature versus product distinction, it's funny, it's something that's like a tale as old as time in UX, which is the fact that you can build something, but if it doesn't answer a need, if it doesn't address a problem, if people don't have any reason to use it, then it fails. It doesn't matter what the underlying technology is or how popular it is. So just as we're seeing people rushing into the market with AI-native products or AI-centric products, we're also seeing companies rushing into the market with AI features, probably because they're getting pressure from leadership or shareholders. Like, cram some AI in there.

Chris Strahl [00:19:59]:

Well, I mean, like, what's getting the valuation right now is AI companies, right?

Kate Moran [00:20:02]:

Exactly.

Chris Strahl [00:20:03]:

I mean, I was just in San Francisco, right? Like, walking around, there's a billboard for AI every, like, six feet.

Kate Moran [00:20:08]:

Yes, it's a gold rush for sure right now. I think it all comes down to, it's funny, this technology is brand new and we're talking about all of these ways that it might shake up, like, what even is an interface, what even is a designer. But because we are all humans, and that's what this all comes down to in my mind, we still have these foibles, these tendencies that we all have as human beings to fall into these traps. And so we're rushing at cramming these new shiny things into these products. So I am excited about this new wave. Like, last year, I think we saw a lot of those failures.

I'm excited this year about this new wave of more mature, sophisticated products integrating AI as features, or sometimes not even as features but as supplements, or you can think about it as acceleration that's happening behind the scenes. We don't need the sparkle icon to tell us that it's happening. We need the output. We need the result. So Figma is a great example of that, pulling in these AI features. We also, at NNG, use Dovetail as our research repository and qualitative analysis tool, and they've started slowly implementing more and more AI features, and we're finding them really useful because the team at Dovetail is putting a lot of thought into what problem they can address. They're also, for the most part, not pushing the technology beyond what it's actually capable of doing. So, for example, great transcription, great summaries of some of the key things that happened in user interviews. You still can't trust AI systems to do all of your qualitative analysis for you.

It's just not going to turn out well. So these more sophisticated integrations, the reason why I think these are going to be more successful in the long term than even well-designed AI products, is because they are going to have more context. So Figma's AI features are going to perform better because they have access to your design system and all of your branded elements. And Dovetail is going to hopefully perform better because it has access to all of Nielsen Norman Group's data and all of the past analyses that we've done.

Chris Strahl [00:22:26]:

The way I think about design systems in the context of AI is that, per what another guest, Dave, mentioned in a prior episode, design systems give you this control tower from which you can set the context of pretty much any experience you're creating. And I think that's a really interesting role to play, because there is this dichotomy that is present right now, which is: is the information your app is gathering kibble for AI, or is it the leash with which you control that thing? And I think that design systems have this really interesting insight and opportunity, because there's a lot of structure and context and codified information inside of a design system about all the things that you want to hold close because of risk. And those are brand decisions and interface decisions and allowable variations and all this other stuff like that, which provides a tremendous amount of richness of context. 

What AI tends to be really good at, in my opinion, is moving really, really quickly through things that are just tedious or awful for humans to do. That's where there's almost unlimited strength in AI. Where there's still a little bit of softness is in: let's go wholly make something net new that the world has never seen before. In that idea of the former, where you want to automate a lot of things, imagine being able to automate and test every variation of every component ever across your entire user base, and be able to do that at scale for hundreds or thousands of products, and then take those products and say: what if we did some limited generative stuff to basically ask, how would we tweak this, or how would we make subtle changes to this that still represent our brand but can create wholly new experiences? 

And there's obvious ones, right? Like, dark mode is something that this entire industry has been trying to implement for ten years, and it's still really hard. Accessibility tools, too: there is a golden age of accessibility around the corner for us, because we can automate a bunch of this variation testing to see what the most accessible experience is. And then I think that's where I get this springboard into: well, if you can create all these accessibility considerations, and light mode and dark mode, and low animation and normal animation, why couldn't you just test for a wholly different experience that is still on brand for a different set of users? Because, Kate, you experience the web in a different way than I do.

And so that's where hyper-personalization enters this really interesting part of the equation. If I can build a mean experience really, really quickly, why can't I build dozens of those experiences and have them serve different users?

Kate Moran [00:24:57]:

That's really the thing that gets Sarah and me excited about generative UI: the idea that, instead of designing for the average... I mean, this has been, again, a tale as old as time in UX: we have struggled with all of these trade-offs. Well, this group of users wants this thing, and this group needs another thing. Well, between these two groups, we've got to think about what percentage of our user base they make up. Who's the majority? Who contributes the most to our organizational goals? What happens if we disappoint this group to make this other group happy? I think it's a fun logic puzzle, kind of a challenge; that's a part of design that I enjoy, but it's very hard. And what ends up happening is you cannot make everyone equally happy. So it's exciting to think about a reality where we have our design skills and our human understanding of context and behavior and priority and style and branding, and we can amplify that and produce something that's so much more tailored to the individual across these different experiences.

Accessibility is one in particular that I am extra excited about. For as long as we've had these digital products, for people who have accessibility needs, it's generally been a pretty unpleasant experience. So the potential is huge. An example that we included in our article was someone who has dyslexia: just having the content presented in a dyslexia-friendly format automatically. But there's so much more to it than that. For example, having smart screen readers that can actually understand the context and know when it's worth describing in detail what's happening in a specific image, and knowing when, for example, it's a decorative element and it's not really worth explaining. So this can go a lot further than just basic alt text, the tools that we already have. One idea that I heard recently that I think is amazing and I'm excited about is: imagine a world where you have a complex data chart in an article and somebody's using a screen reader, so they can't see that data chart. I have written alt text for data charts. They are extremely hard to write. You have to be very careful, thinking about what is the main point that this person needs to understand, and, not being able to see these differences, how do I make this clear to someone? Imagine having an AI system that you could just ask questions. You could ask about different segments of the data or ask it to even filter the data. And that goes beyond somebody who needs a screen reader. That would be an amazing experience for anyone.

Chris Strahl [00:27:38]:

Well, and then you think about that assistive experience, which is interesting as well, right? Where if you have an AI that's riding alongside your browsing, and you could ask it ways of remixing or changing the interface as you go. You think about that as a training mechanism, you think about that as people advance in their understanding of a particular need or a particular aspect; there's a lot of cool stuff there, too. I think that before we get to wholly real-time, on-the-fly generation of interfaces, we'll probably have something ahead of that. Like, say it's my first time ever going to Nielsen Norman Group's website. I'm a pretty uneducated consumer. I'm there to understand broadly what it is that you do and that sort of thing. But if I'm returning there for my 50th time, it's probably because I have a line of research I really care about.

Chris Strahl [00:28:28]:

It's probably because I want to actually go find more about a particular subject that I've already probably looked at once or twice. And so, like, can that interface adapt to my experience and my habits?

Kate Moran [00:28:39]:

Yeah, and again, that's not like a revolutionary concept. Actually, it's, again, it's swole personalization. I think we've just coined that, Chris.

Chris Strahl [00:28:47]:

Yeah, I don't know. I don't know if I love that one. But, yeah, we'll run with it for this episode. Why not?

Kate Moran [00:28:54]:

You know what this level of personalization is going to require, though, is an immense amount of data about individuals and companies. So, I mean, we'll see how it plays out, but I think we're really approaching kind of a crisis point as a society where we're going to have to decide: do we let these systems have visibility into every aspect of our lives? Because in order for these systems to be truly useful, they do need immense amounts of context. Whether you're talking about, you know, Figma understanding the context of your design system and your organization and your branding, or you're talking about this generative interface that knows how many times you've come to Nielsen Norman Group and can generate not just a better experience for you ("here's some recommended content that you might like") but "here, we wrote an article for you, Chris, specifically, that you're going to find interesting": whether you're talking about either of those approaches, the systems are going to have to know a lot about individuals and organizations.

Chris Strahl [00:30:01]:

Yeah, so it was interesting. I was having this conversation with a big retailer the other day, and one of the things they said was, look, we have a massive data lake that has 25 years of consumer behavior history inside of it, and what we're really looking for is for AI to allow us to leverage that data in some meaningful way in our digital products. How are things in the offline world affecting things in the online world? How do things that happen in store alter the things that happen in our digital experiences? Likewise, they have a bunch of ancillary data about their consumers that isn't related to their in-store behavior, that's related to other web experiences that they curate and operate. There are all of these different islands of data out there that they're having a hard time bringing into a central context. Now, there are people actively trying to solve this problem there. But once you have that data and that information, how do you make use of it in the interfaces and the experiences that we create? Is it just a matter of saying experience A is better than experience B for this particular user type or segment? Or is it really that we should wholesale create something new for all these different audiences or segments that we uncover?

Kate Moran [00:31:16]:

Yeah, I mean, I think that will be a question for designers at that time. But yeah, it's totally true that these companies, and governments as well, have already collected an immense amount of data about us, either with our informed consent or not. In many cases, not. They haven't all been able to make use of all of that data because, as you said, they have this huge lake of it and no idea how to parse it and make sense of it. AI is going to help with that problem. It's already helping with data analysis at that massive scale. So I think what's going to happen is that we're going to be using these specialized AI tools to help pull apart that data and make sense of it. But this is where I see, again, the need for human beings, specifically people with design skills, to come in and look at what the AI has surfaced out of this data.

And ideally, it would be great if those systems could also make recommendations. But I think that's one of those critical decision points where a human being is going to be involved. So that's important. But then, beyond all of that data that's already been collected, there's this concept of having an AI system that follows you around on the Internet, and consumers are going to have to decide if that's something they're going to allow. So, way back in 2017, my colleague Kim Salazar and I wrote an article that we called the creepiness-convenience trade-off. At the time, we were thinking about, like, IoT devices, which, you remember, back then it was a big deal to have this device in your home listening to you twenty-four seven. And lots of people were creeped out by it. And this does vary by individual, but basically what we learned through a very extensive multi-year study was that people go through these phases where they start off and they're like, ooh, that's kind of creepy.

I don't know. I don't know if I want that in my house. I don't know if I want it to have that access to my life and my data. And then at some point, not everybody reaches this point, but some people reach a point where they say, all right, I'll try it. I'll just try it and see. And then the thing is, they start to realize, oh, it's pretty convenient. It can set some timers for me in the kitchen, and I can ask it about the weather. And they don't feel the invasion of privacy. You're not like, ooh, my data feels like it was collected today. You just don't notice it.

Chris Strahl [00:33:42]:

Please, not my data.

Kate Moran [00:33:43]:

Yeah. So over time, you get more comfortable with it and you just accept it and move on. And so it's a phenomenon that happens within individual psychology, but it's also happening at the society level. And I am really interested to see, when we drop AI into this, how it's going to impact people's perceptions.

Chris Strahl [00:34:03]:

Absolutely. It's funny, you're talking to someone who wrote GE a strongly worded letter because their data collection policy was only something I got to review after I connected my smart appliances to my network. And I was like, so you already have all my data about everything else that's on my network.

Kate Moran [00:34:18]:

And so we already did this. Is that okay? Yeah.

Chris Strahl [00:34:21]:

Is that cool? And I was like, oh, man, that's a bummer. Likewise, what did it for me was my six-year-old experimenting with drop-in on Alexa, and all of a sudden my son's voice is coming through the kitchen speaker. I was like, oh, no way. That is not happening. Anyway, regardless, I generally have, from a security-minded standpoint, a suspicion of these sorts of things. I think it's less of a privacy thing for me and more that I don't like that there are all these devices, that I don't really control any of the security aspects of, that are part of my surface area.

But thinking about that in a very practical sense, I still use them. Like, I've definitely gone full network engineer in my house and made sure that there's isolation and stuff like that. But at the same time, I'm totally willing to give up my buying habits to Alexa or my music listening habits to Google, because it's really nice to be able to say, hey, play this song.

Kate Moran [00:35:08]:

I don't think I'm alone in saying that I don't really trust Meta with my personal data, based on its track record and history of how it uses it. However, I do still use Instagram, and its brand recommendations for me are spot on, because it has so much of my data. So yeah, there's this tension between accepting the convenience and kind of letting yourself forget about the privacy that I think is going to be really interesting. Oh, actually, this comes back to Figma again. Just today I saw in my inbox an email that Figma sent with updates to their terms based on these new AI features. And I thought it was excellent. It was extremely clear and straightforward, in plain language, well-formatted, very scannable. It had an image of the setting where you could go to turn off the data collection, had details about what the data was going to be used for, had information about whose plans had already been opted in.

So you knew, if that was your plan level and you didn't want it, you should go in and change it. Now, would it be better from the user's perspective if it was an opt-in and not an opt-out? Absolutely. But I understand from the organization's perspective why that's a choice a lot of companies are not making. But I have seen, for example, Slack. If you want to opt out, if you want your organization to not be...

Chris Strahl [00:36:30]:

That's immediately where my head went, by the way.

Kate Moran [00:36:32]:

Right. You have to email them. So I also send strongly worded emails to companies, because I felt like I was not notified of this. It was not made clear to me how this is going to be used or why, like, what is the benefit to my organization for letting you do this? And then to make me jump through hoops and email a customer support address, in 2024? That's wild. The impact on the consumer and the user is that it erodes trust. It's already a situation where people and organizations are going to be maybe a little bit predisposed to be distrustful. But by not being transparent and not putting power and choice in the hands of the people using these products, you've just completely erased whatever trust was there in the first place.

Chris Strahl [00:37:17]:

Yeah, undoubtedly. And I think there's a lot to be said about the anti-patterns that exist here around trust in particular, and that you have something that already induces a lot of anxiety in people. Why would you further that anxiety by thrusting upon them something that maybe they never even really asked for? And even though that may have benefit, I think, to drive back into the personalization side of things, what we are definitely talking about is user preference behavior on a scale that we've never been able to aggregate before. A lot of this is stuff that we give up willingly to individual organizations or individual companies. There's this idea of suddenly having a personalization token that has dozens to thousands of bits of information about my behavior. Who curates that? Who owns it? Who controls it? That raises a lot of questions for me, because presumably this is something that could exist with an individual, but that data, at some level, especially in marginalized parts of our society, is highly sensitive. And so there is this really brilliant future, but there's also this little bit of caution that I have about these experiences.

Kate Moran [00:38:25]:

I would say this is maybe wearing off a little bit now, but I have felt that for the last six to twelve months, in our industry at least, there have been these really extremely polarized views of AI. There are people who are in the camp of, yes, it's amazing and it's going to save us all, and, you know, don't worry about your jobs, because we won't need jobs. We'll live in this paradise where we'll all get to sit outside and read books all day. And then there's the other side, which is like, it's the devil. It's going to be like Terminator 2, right? Skynet. And I really feel like I have, from the beginning, fallen somewhere in the middle. I think the opportunities are huge, and we can even just take, let's not talk about the world or society, just our industry. I think there's going to be some amazing benefits.

I think there are also going to be a lot of side effects that are going to be pretty horrible, both for the people like us working in the industry and the people that we're creating products for. We talked a little bit about whether this is going to replace me as a design professional or a research professional. Right now? No. And I don't think it ever should, especially because when you look around, there is no shortage of horrible design. It would be wonderful if we could just have more design time and energy and people available to spread out across more products, especially situations like government products, where there's not a lot of money to be spent on them, but they're incredibly important in terms of how easy and pleasant they are to use. That's the rose-colored glasses vision of the future, and I hope that we get there eventually. But in the short term, we do have quite a few business leaders who are under immense pressure to cut costs, and they are being marketed and sold these products. There are literally products out there saying this is going to replace user research, right? You're not going to have to do user research anymore, because you can just ask AI.

Chris Strahl [00:40:24]:

That seems pretty naive in today's world, but I don't know, maybe I've become the curmudgeonly, stodgy tech person.

Kate Moran [00:40:31]:

It is to us, because we work in the industry and we understand the nuance of it. But for somebody who, you know, leads a medium-sized organization with extremely significant resource challenges, it might seem better than nothing, or it might seem better than paying to have someone on staff. So it's going to be a transition. I actually think some of the obstacles that we talked about, like the lack of reliability, the hallucinations, the misinformation, the resource constraints, those could be good stumbling blocks, or like speed bumps, to kind of slow this down, so that society and users and organizations have time to understand and adapt. Because for the last few years, it's felt, especially to consumers, like the pace of technology advancement has just been through the roof, and that has positive and negative impacts.

Chris Strahl [00:41:24]:

So I love this conversation, because we've figured out a way to have a nuanced take, maybe not the deepest take that either of us would have really hoped for, but at the same time a fairly nuanced take on where the challenges and the pitfalls are, along with this really amazing vision for where this could all go. So I just wanted to thank you for your time, your candor, and your knowledge on this. This has been an awesome chat.

Kate Moran [00:41:48]:

Well, thanks, Chris. I always love chatting with you. I feel like we could do this for another 6 hours.

Chris Strahl [00:41:52]:

Totally, totally. There's no bottom to this.

Kate Moran [00:41:54]:

Well, one plug I'll make: anybody listening, if you are looking for really high-quality, very well-researched advice, guidance, resources, and templates for UX work, including design systems, definitely check out nngroup.com. That's where we publish all of our articles. We publish new content multiple times per month, so check us out there.

Chris Strahl [00:42:40]:

Awesome. Well, hey, thanks for being on, Kate. You rock. This has been the Design Systems Podcast. I'm your host, Chris Strahl. Have a great day, everyone. That's all for today. This has been another episode of the Design Systems Podcast.

Thanks for listening. If you have any questions or a topic you'd like to know more about, find us on Twitter at thedspod. We'd love to hear from you with show ideas, recommendations, questions, or comments. As always, this pod is brought to you by Knapsack. You can check us out at knapsack.cloud. Have a great day.
